We collaborated closely with Google to prototype the Androidify experience and define how the app integrates Gemini, Imagen, and ML Kit. This collaboration helped shape an open-source demo that showcases how structured AI outputs, multimodal prompts, and on-device machine learning can be combined to create avatars that feel personal, expressive, and fun.
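To make the structured-output idea concrete, here is a rough sketch of the pattern using the Google Gen AI Python SDK. The model name, schema fields, and file handling are illustrative assumptions, not Androidify's actual pipeline:

```python
# A minimal sketch of structured output over a multimodal prompt, using the
# Google Gen AI Python SDK. Model name and schema fields are illustrative.
from pydantic import BaseModel

from google import genai
from google.genai import types


class BotTraits(BaseModel):
    """Hypothetical schema: traits to extract from a photo for an avatar."""
    hair_style: str
    accessories: list[str]
    outfit_description: str


client = genai.Client()  # reads GEMINI_API_KEY / GOOGLE_API_KEY from the env

with open("selfie.jpg", "rb") as f:
    photo = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[photo, "Describe this person as traits for an Android bot avatar."],
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=BotTraits,  # SDK validates and parses against this schema
    ),
)

traits = response.parsed  # a BotTraits instance, ready for a prompt builder
print(traits.accessories)
```

Constraining the response to a schema means the app can consume the model's answer as a typed object instead of scraping free-form text.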
A key part of our contribution was fine-tuning the Imagen model specifically for Androidify. We led the effort to turn a general-purpose text-to-image model into one that produces playful, characterful Android bots matching the project's visual identity.

Using LoRA (Low-Rank Adaptation), we trained the model on a diverse collection of Android bot assets spanning different colors, accessories, clothing, and stylistic variations. This work included generating custom image-text pairs to build a rich training dataset and running a complete fine-tuning pipeline that balanced training efficiency with expressive output quality. The resulting model has a deeper understanding of the Androidify style, from hairstyles and sunglasses to the distinctive proportions that make each bot feel unique.
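Imagen's weights and training stack are not public, so the actual pipeline can't be reproduced here; the sketch below illustrates the same LoRA idea on an open latent diffusion model using Hugging Face diffusers and peft. The base checkpoint, rank, hyperparameters, and placeholder dataset are assumptions for illustration only:

```python
# Illustrative LoRA fine-tuning loop for a latent diffusion model, standing in
# for the (non-public) Imagen pipeline. Checkpoint, rank, learning rate, and
# the placeholder dataset are assumptions for the sketch.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from peft import LoraConfig
from transformers import CLIPTextModel, CLIPTokenizer

base = "runwayml/stable-diffusion-v1-5"
tokenizer = CLIPTokenizer.from_pretrained(base, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(base, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(base, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet")
scheduler = DDPMScheduler.from_pretrained(base, subfolder="scheduler")

# Freeze the base model; only the injected adapters will receive gradients.
for module in (vae, text_encoder, unet):
    module.requires_grad_(False)

# Inject rank-8 low-rank adapters into the UNet's attention projections.
unet.add_adapter(LoraConfig(
    r=8,
    lora_alpha=16,
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
))
lora_params = [p for p in unet.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(lora_params, lr=1e-4)

# Placeholder batch; the real dataset pairs curated bot renders with captions.
dataloader = [
    (torch.randn(1, 3, 512, 512), ["green Android bot wearing sunglasses"]),
]

for pixel_values, captions in dataloader:
    # Encode images to latents and add noise at a random timestep.
    latents = vae.encode(pixel_values).latent_dist.sample()
    latents = latents * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, scheduler.config.num_train_timesteps, (latents.shape[0],)
    )
    noisy_latents = scheduler.add_noise(latents, noise, timesteps)

    # Encode the paired captions for cross-attention conditioning.
    ids = tokenizer(
        captions, padding="max_length",
        max_length=tokenizer.model_max_length,
        truncation=True, return_tensors="pt",
    ).input_ids
    text_embeddings = text_encoder(ids)[0]

    # Standard denoising objective: predict the noise that was added.
    noise_pred = unet(noisy_latents, timesteps, text_embeddings).sample
    loss = F.mse_loss(noise_pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Because only the small adapter matrices are trained, a pipeline like this stays far cheaper than full fine-tuning while the frozen base model keeps its general image knowledge, which is the efficiency-versus-quality balance described above.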
Both the open-source demo and the fine-tuned model are already available to the public, along with a custom Firebase AI Logic SDK. You can try the experience directly at https://androidify.com.
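Firebase AI Logic is a client-side SDK (Kotlin, Swift, Web, Flutter), so to keep these sketches in one language, here is the equivalent server-side image-generation call with the Google Gen AI Python SDK. The model id is a stock public Imagen release rather than the Androidify fine-tune, and the prompt is invented for illustration:

```python
# Illustrative Imagen call via the Google Gen AI Python SDK. The model id is
# a stock public one; the Androidify fine-tune is served through the app's
# Firebase AI Logic integration rather than this endpoint.
from google import genai
from google.genai import types

client = genai.Client()

response = client.models.generate_images(
    model="imagen-3.0-generate-002",
    prompt=(
        "A cheerful Android bot with a curly green hairstyle, round "
        "sunglasses, and an orange hoodie, full body, flat sticker style"
    ),
    config=types.GenerateImagesConfig(number_of_images=1),
)

# Persist the first candidate to disk.
image_bytes = response.generated_images[0].image.image_bytes
with open("bot.png", "wb") as f:
    f.write(image_bytes)
```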
The impact goes beyond visual quality: the project demonstrates how generative AI can feel intentional, delightful, and aligned with a clear creative direction.