
AI/ML


Visual Pathologies Detection
The human visual system is not fully developed at birth; it matures gradually over the first few years of life. The World Health Organization estimates that about 19 million children worldwide live with visual impairment, yet in 70–80% of cases the condition could be prevented or treated.
With an eye tracker we can record which point on the screen a patient is looking at, but analyzing that gaze data by hand is complex, so we use machine learning to help ophthalmologists diagnose visual pathologies. We collect the data by running a series of on-screen tests with the patient, with an ophthalmologist supervising the procedure.
We feed that gaze data into an ML model; for some tests we first apply the ASA (Accelerated Stochastic Approximation) algorithm to derive meaningful constants before they are used as model inputs. The model outputs the probability of abnormality for each eye and each pathology.
Reference project: Track AI
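As a rough illustration of that output contract, here is a minimal Python sketch: one probabilistic classifier per (eye, pathology) pair over summarized gaze features. The feature count, the label set, and the logistic-regression choice are illustrative assumptions, not details of the production system.

# A minimal sketch, assuming gaze recordings are already summarized into
# fixed-length feature vectors (e.g. fixation stability, saccade latency,
# or ASA-derived test constants). All names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

PATHOLOGIES = ["strabismus", "nystagmus"]   # hypothetical label set
EYES = ["left", "right"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))         # 200 sessions, 8 gaze features
models = {}
for eye in EYES:
    for p in PATHOLOGIES:
        y = rng.integers(0, 2, size=200)    # placeholder labels for the sketch
        models[(eye, p)] = LogisticRegression().fit(X_train, y)

def diagnose(features):
    """Return P(abnormal) for each eye and each pathology."""
    x = np.asarray(features).reshape(1, -1)
    return {(eye, p): float(m.predict_proba(x)[0, 1])
            for (eye, p), m in models.items()}

print(diagnose(rng.normal(size=8)))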


Hand Keypoints
A lightweight hand keypoint detector aimed primarily at mobile web browsers. Hand keypoint detection has numerous applications in human-interactive tasks, e.g. augmented and virtual reality. Our model takes a 2D RGB image as input and predicts the positions of 21 human hand joints as output.
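A minimal sketch of that interface in PyTorch: RGB image in, 21 (x, y) joint coordinates out. The tiny backbone below is an illustrative stand-in, not the production network.

import torch
import torch.nn as nn

class HandKeypointNet(nn.Module):
    def __init__(self, num_joints=21):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regress normalized (x, y) in [0, 1] for each of the 21 joints.
        self.head = nn.Sequential(nn.Linear(32, num_joints * 2), nn.Sigmoid())
        self.num_joints = num_joints

    def forward(self, image):                # image: (N, 3, H, W), RGB
        coords = self.head(self.backbone(image))
        return coords.view(-1, self.num_joints, 2)

model = HandKeypointNet()
keypoints = model(torch.rand(1, 3, 224, 224))
print(keypoints.shape)                       # torch.Size([1, 21, 2])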




Guitar & Microphone Orientation
A lightweight model that predicts guitar and microphone rotation within an image (or video frame).
Normally, estimating object orientation from a 2D image is category-specific: each object class needs its own method, with its own difficulties and limitations, and everything has to be redone from scratch for each new object.
Our model predicts 2D object rotation from an input image of a guitar or microphone with a single model, in real time, and can still work with other object types by learning their features.
Reference project: StageLab
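As a sketch of how one model can regress rotation across categories, the snippet below uses the common (sin θ, cos θ) parameterization, which avoids the discontinuity at 0°/360°. The backbone and the class-embedding conditioning are our illustrative assumptions, not the project's actual design.

import torch
import torch.nn as nn

class RotationNet(nn.Module):
    def __init__(self, num_classes=2):       # 0 = guitar, 1 = microphone
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # A shared head conditioned on a class embedding, so a new object
        # type only needs its features learned, not a whole new model.
        self.embed = nn.Embedding(num_classes, 8)
        self.head = nn.Linear(32 + 8, 2)      # outputs (sin θ, cos θ)

    def forward(self, image, obj_class):
        feats = torch.cat([self.backbone(image), self.embed(obj_class)], dim=1)
        sin_cos = torch.tanh(self.head(feats))
        return torch.atan2(sin_cos[:, 0], sin_cos[:, 1])  # angle in radians

model = RotationNet()
angle = model(torch.rand(1, 3, 128, 128), torch.tensor([0]))  # a guitar crop
print(angle)                                  # rotation estimate in [-π, π]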


Hand Orientation
A lightweight model that predicts hand rotation and gesture in a single stage, covering both hand orientation and gesture classification. It serves as the final stage of hand-gesture systems, taking cropped hand images as input. Because the model is so minimal, it can predict for up to 32 hands in a scene simultaneously and still run in real time alongside the hand segmentation model.
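A minimal sketch of the one-stage idea: a shared backbone with a rotation head and a gesture head, run over a batch of up to 32 cropped hands in one forward pass. The backbone, the gesture count, and the output format are illustrative assumptions.

import torch
import torch.nn as nn

class HandOrientationNet(nn.Module):
    def __init__(self, num_gestures=10):     # gesture count is hypothetical
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rotation_head = nn.Linear(32, 2)            # (sin θ, cos θ)
        self.gesture_head = nn.Linear(32, num_gestures)  # class logits

    def forward(self, crops):                # crops: (N <= 32, 3, H, W)
        feats = self.backbone(crops)
        sin_cos = torch.tanh(self.rotation_head(feats))
        angles = torch.atan2(sin_cos[:, 0], sin_cos[:, 1])
        return angles, self.gesture_head(feats).argmax(dim=1)

model = HandOrientationNet()
crops = torch.rand(32, 3, 96, 96)            # 32 cropped hands in one batch
angles, gestures = model(crops)
print(angles.shape, gestures.shape)          # torch.Size([32]) each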