VMA uses voice instruction technologies to assist the user in makeup application. Users receive audio feedback and tips on whether their lipstick, eyeshadow or foundation is evenly applied, per the company.
VMA’s smart mirror technology is driven by the company's augmented reality and artificial intelligence capabilities and was developed using machine learning, per the company.
Using AI, VMA identifies makeup applied on a user’s face and assesses the uniformity and boundaries of application and coverage. It then flags any areas on the face that may require more accurate application and audibly describes where touch-ups may be needed, per the company.
The app’s design and development were informed by user research with people from the blind and low vision community to gain a deeper understanding of their unique needs, pain points, desires and preferences, per the company.
Michael Smith, chief information officer, ELC, commented: “This is one exciting step in an ongoing journey toward ever-greater inclusivity; future versions of our VMA app will offer expanded services that leverage AI to bring the experience of independently using beauty products to millions of people that are low vision and blind.”