The SDKs are trained automatically on labelled internet imagery using machine-learning algorithms. This training often requires days of processing, during which the computer learns a mapping between the pixel patterns in an image and the specific face information you are interested in.
This means that, in case of a misclassification (e.g. wrong age, gender or expression), our team cannot know the exact reason behind it and cannot “fix” it directly. Although our software improves constantly, we do not have fine-grained control over what the computer learned during its training.
However, most cases can be solved by changing the illumination or the head pose: the SDKs are trained to work optimally on frontal faces under uniform illumination. Although the SDKs compensate for slightly turned faces and non-uniform illumination, the results can still be significantly affected by the direction of the light source, the distance to the camera and the physiognomy of the subjects. Since changing the latter is (usually) not possible, repositioning the camera often solves the problem.
Also remember: the camera should have automatic white balancing, focus, zoom and face tracking disabled, and, most importantly, anti-flicker enabled and set to your local mains frequency (50 Hz or 60 Hz, depending on your location). Flickering is seen by the SDK as movement and, as such, it significantly affects the results of the SDK.
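On Linux, these camera settings can typically be adjusted with the `v4l2-ctl` utility (not part of our SDK; on Windows, use your camera vendor's configuration tool instead). The control names below are examples only and vary by camera and driver, so always check the list your device actually exposes first:

```shell
# List the controls your camera driver exposes (names differ per device)
v4l2-ctl -d /dev/video0 --list-ctrls

# Example control names; verify them against the list printed above
v4l2-ctl -d /dev/video0 --set-ctrl=white_balance_automatic=0      # disable auto white balance
v4l2-ctl -d /dev/video0 --set-ctrl=focus_automatic_continuous=0   # disable autofocus
v4l2-ctl -d /dev/video0 --set-ctrl=power_line_frequency=1         # anti-flicker: 1 = 50 Hz, 2 = 60 Hz
```

Set `power_line_frequency` to match your local mains frequency; this is the anti-flicker control on most UVC webcams.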
Refer to https://sightcorp.zendesk.com/hc/en-us/articles/201050007-Camera-Settings-and-Positioning for hints on how to set up your camera.