To achieve this, we had to detect and track an AR anchor in the camera’s field of view. A 2D image can serve as an AR anchor, but it must have distinguishable features that can be extracted to detect its position and calculate its orientation.
Thanks to their rich features and predefined shape and color, QR codes can be detected without consuming many resources. With this in mind, we decided to use QR codes and their four corner points to determine the anchor’s orientation. By doing so, we were able to display our AR content as long as the camera could see the QR code. The resulting SDK detected AR anchors efficiently, which increased the frame rate (FPS) and improved the voting experience for users. Additionally, we promoted the event through Out-of-Home banners.
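To make the corner-to-orientation step concrete, here is a minimal TypeScript sketch of recovering a homography from the four detected corner points. The function names and the idea of working in "marker space" coordinates are illustrative, not our SDK’s actual API, and a full 3D pose would additionally require decomposing the homography with the camera intrinsics.

```typescript
type Point = { x: number; y: number };

/**
 * Estimate the 3x3 homography H (row-major, last entry fixed to 1) that maps
 * each src point to the corresponding dst point. Exactly four correspondences
 * are used, e.g. the QR code's corners in marker space (src) and the corners
 * detected in the camera image (dst).
 */
function homographyFromCorners(src: Point[], dst: Point[]): number[] {
  // Build the 8x8 system A * h = b from the projection equations
  //   x' = (h0 x + h1 y + h2) / (h6 x + h7 y + 1)
  //   y' = (h3 x + h4 y + h5) / (h6 x + h7 y + 1)
  const A: number[][] = [];
  const b: number[] = [];
  for (let i = 0; i < 4; i++) {
    const { x, y } = src[i];
    const { x: xp, y: yp } = dst[i];
    A.push([x, y, 1, 0, 0, 0, -x * xp, -y * xp]);
    b.push(xp);
    A.push([0, 0, 0, x, y, 1, -x * yp, -y * yp]);
    b.push(yp);
  }
  const h = solveLinearSystem(A, b);
  return [...h, 1]; // append h8 = 1
}

/** Gaussian elimination with partial pivoting for a small dense system. */
function solveLinearSystem(A: number[][], b: number[]): number[] {
  const n = b.length;
  const M = A.map((row, i) => [...row, b[i]]); // augmented matrix
  for (let col = 0; col < n; col++) {
    // Pivot: swap in the row with the largest value in this column.
    let pivot = col;
    for (let r = col + 1; r < n; r++) {
      if (Math.abs(M[r][col]) > Math.abs(M[pivot][col])) pivot = r;
    }
    [M[col], M[pivot]] = [M[pivot], M[col]];
    // Eliminate the entries below the pivot.
    for (let r = col + 1; r < n; r++) {
      const f = M[r][col] / M[col][col];
      for (let c = col; c <= n; c++) M[r][c] -= f * M[col][c];
    }
  }
  // Back-substitution.
  const x = new Array(n).fill(0);
  for (let r = n - 1; r >= 0; r--) {
    let sum = M[r][n];
    for (let c = r + 1; c < n; c++) sum -= M[r][c] * x[c];
    x[r] = sum / M[r][r];
  }
  return x;
}
```

With a homography in hand, any point on the QR plane can be projected into the image to anchor overlay content, which is the essence of keeping the AR content attached while the camera can see the code.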
In this project, we also implemented a Kalman filter, an optimal estimation algorithm, to combine the various measurements from the AR SDK into the best possible estimate. This resulted in smoother, higher-quality AR content that was easier to apply filters to and that supported animated 3D models.
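For illustration, the sketch below shows a constant-velocity Kalman filter applied to a single coordinate of a tracked point. The class name and the `processNoise`/`measurementNoise` values are assumptions for the example; in practice the filter would run over richer state built from the SDK’s measurements.

```typescript
/**
 * Constant-velocity Kalman filter for one coordinate of a tracked point.
 * State: [position, velocity]; measurement: position only.
 */
class ScalarKalmanFilter {
  private x = [0, 0];                    // state estimate [p, v]
  private P = [[1, 0], [0, 1]];          // state covariance
  constructor(
    private processNoise = 0.01,         // assumed tuning value
    private measurementNoise = 4,        // assumed tuning value (pixels^2)
  ) {}

  /** Feed one new measurement (e.g. a corner's x coordinate), get the smoothed value. */
  update(measured: number, dt: number): number {
    // --- Predict: advance the state with the constant-velocity model F = [[1, dt], [0, 1]].
    const [p, v] = this.x;
    const predP = p + v * dt;
    // Predicted covariance P' = F P F^T + Q (Q simplified to a diagonal term).
    const [[p00, p01], [p10, p11]] = this.P;
    const q00 = p00 + dt * (p10 + p01) + dt * dt * p11 + this.processNoise;
    const q01 = p01 + dt * p11;
    const q10 = p10 + dt * p11;
    const q11 = p11 + this.processNoise;

    // --- Update: blend the prediction with the new measurement.
    const innovation = measured - predP;
    const s = q00 + this.measurementNoise; // innovation covariance
    const k0 = q00 / s;                    // Kalman gain for position
    const k1 = q10 / s;                    // Kalman gain for velocity

    this.x = [predP + k0 * innovation, v + k1 * innovation];
    this.P = [
      [(1 - k0) * q00, (1 - k0) * q01],
      [q10 - k1 * q00, q11 - k1 * q01],
    ];
    return this.x[0];
  }
}
```

Running one such filter per corner coordinate (or over the pose itself) is what smooths out the frame-to-frame jitter in the detected anchor.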
WHAT’S NEXT?
Looking ahead, what we have built for this project provides a solid foundation for creating AR experiences on the web. The Detector and Renderer have been designed to operate independently. Therefore, when we build a new detector in the future, such as a face keypoint detector using a FaceMesh model, we can use its keypoints as an AR anchor to display AR content on the user’s face.
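As a rough sketch of that separation (the interface and type names here are hypothetical, not the SDK’s real API), a new detector only needs to produce the same kind of anchor the Renderer already consumes:

```typescript
/** A detected anchor: a set of 2D keypoints the Renderer can attach content to. */
interface ARAnchor {
  keypoints: { x: number; y: number }[];
}

/** Anything that can find anchors in a camera frame: QR codes today, faces tomorrow. */
interface Detector {
  detect(frame: ImageData): ARAnchor | null;
}

/** The Renderer only depends on the anchor, not on how it was detected. */
interface Renderer {
  render(anchor: ARAnchor): void;
}

// A hypothetical future detector backed by a FaceMesh model could satisfy
// the same interface, so the existing Renderer would work unchanged.
class FaceMeshDetector implements Detector {
  detect(frame: ImageData): ARAnchor | null {
    // Run the face landmark model here and map its landmarks to keypoints.
    // Returning null means no face (and hence no anchor) was found.
    return null; // placeholder
  }
}
```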