In this sketch I have tried to implement a draft of how I might code my idea from my mockups. Using the initial question as the example, I split the canvas into three roughly equal parts: section one runs from 0px to 425px, section two from 425px to 855px, and section three from 855px to 1280px, together covering the full dimensions of my project: 1280×720 (anything higher would unfortunately decrease frame rate performance drastically).
The face detection box’s ‘x’ coordinate would be recorded in real time and compared to the three pixel ranges mentioned above (i.e. left ‘Male’, middle ‘Neutral’ and right ‘Female’).
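As a rough sketch of that comparison, the check could look like the snippet below. The class and method names are my own, and the zone boundaries simply mirror the pixel ranges described above; this is illustrative, not the project’s actual code.

```java
// Map the face box's x coordinate to one of the three answer zones.
// Boundaries (425 / 855) follow the ranges described in the write-up.
public class ZoneMapper {
    static String zoneFor(int faceX) {
        if (faceX < 425) return "Male";     // left third:   0-425px
        if (faceX < 855) return "Neutral";  // middle third: 425-855px
        return "Female";                    // right third:  855-1280px
    }

    public static void main(String[] args) {
        System.out.println(zoneFor(100));   // left of 425  -> Male
        System.out.println(zoneFor(640));   // canvas centre -> Neutral
        System.out.println(zoneFor(1200));  // right of 855 -> Female
    }
}
```

Running this check once per frame against the detection box’s x position is enough to know which overlay should currently be shown.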
Once the face detection box entered the left or right pixel range, the alpha image overlaid on the live camera footage would change to match that side. For example, if the user moved to the left, the overlay would change from ‘neutral.png’ to ‘left.png’, where left.png is the same image, only with an underscore added (as shown in my mockups).
This idea of using ranges based on the number of possible answers, for example 8 ranges for 8 answers (and 8 separate .pngs whose only difference is an underscore under each answer), will be used for each of the 3 main questions.
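The generalised version of the idea can be sketched the same way: divide the canvas width by the number of answers and bucket the face box’s x coordinate into one of the ranges. The helper names and the `answer_N.png` filename pattern are assumptions for illustration; only the range-per-answer idea comes from the write-up.

```java
// Generalise the zones: for n possible answers across a canvas of the
// given width, return the index of the range the face box's x falls into.
public class AnswerZones {
    static int answerIndex(int faceX, int canvasWidth, int answers) {
        int idx = faceX * answers / canvasWidth;        // integer division buckets x into equal ranges
        return Math.min(Math.max(idx, 0), answers - 1); // clamp in case the box strays off-canvas
    }

    // Each answer has its own .png, identical artwork except for the
    // underscore under that answer (filename pattern is illustrative).
    static String overlayFor(int faceX, int canvasWidth, int answers) {
        return "answer_" + answerIndex(faceX, canvasWidth, answers) + ".png";
    }

    public static void main(String[] args) {
        // 8 answers over the 1280px canvas gives 160px per range;
        // x = 500 lands in the fourth range (index 3).
        System.out.println(overlayFor(500, 1280, 8)); // answer_3.png
    }
}
```

With this in place, each of the 3 main questions only needs its own answer count and set of .pngs; the bucketing logic stays the same.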
With my basic knowledge of face detection and Java, I tried what I thought was the most basic way to do this in code, but it threw a lot of errors.