Procter & Gamble partnered with science and research lab PARC to develop the Olay Skin Advisor (OSA) app, a skin analysis tool that offers users a personalized skin care advisory service.
Consumers are driving the need for computer vision technology in the beauty industry, in part because of the uncertainty that accompanies shopping for skin care products. Research conducted by Procter & Gamble shows that 14% of women do not know what their skin care needs are, while 33% are unable to find what they need in large beauty retail stores.
Advances in artificial intelligence are expanding the power and value of computer vision. Chief among these is deep learning, a branch of machine learning that uses multi-layer neural networks to infer information from images. The network is shown example images with their associated patterns, learns from them, and can then identify and react to what it sees in new images.
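The building block of such networks is the convolution: a small learned filter slides over the image and responds to local patterns such as edges or texture. As a minimal illustration (not the app's actual code), here is a plain NumPy convolution with a hand-crafted vertical-edge filter standing in for a learned one:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and sum."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hand-crafted vertical-edge detector; in a trained network these
# weights would be learned from example images instead.
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

# Synthetic 6x6 "image": bright left half, dark right half.
img = np.zeros((6, 6))
img[:, :3] = 1.0

features = conv2d(img, edge_kernel)  # strong response along the edge
```

A deep network stacks many such layers, with later layers combining edge responses into progressively more abstract features.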
The Olay Skin Advisor app combines computer vision technology with deep learning: the user takes a selfie, and the tool uses artificial intelligence to produce an automated analysis of their individual skin type and skin age.
The image is first checked for quality, taking into consideration the distance from the camera, lighting, facial expression, image sharpness and obstructions. If a new photo is needed, the tool advises the user on how to improve the image.
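A quality gate like this can be approximated with simple image statistics. The sketch below is a hypothetical stand-in, not the app's implementation: it flags under- or over-exposure from mean brightness and possible blur from the variance of a Laplacian filter response, returning advisory messages much as the app does:

```python
import numpy as np

# Laplacian kernel: responds to rapid local intensity changes (fine detail).
LAPLACIAN = np.array([[0.,  1., 0.],
                      [1., -4., 1.],
                      [0.,  1., 0.]])

def laplacian_variance(gray):
    """Variance of the Laplacian response; low values suggest blur."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(gray[i:i + 3, j:j + 3] * LAPLACIAN)
    return out.var()

def check_quality(gray, min_brightness=0.2, max_brightness=0.9,
                  min_sharpness=0.01):
    """Return advisory messages for a grayscale image in [0, 1].

    Thresholds here are illustrative, not Olay's actual criteria.
    An empty list means the photo passes these checks.
    """
    advice = []
    mean = gray.mean()
    if mean < min_brightness:
        advice.append("Too dark - move to better lighting.")
    elif mean > max_brightness:
        advice.append("Overexposed - avoid direct light.")
    if laplacian_variance(gray) < min_sharpness:
        advice.append("Image looks blurry - hold the camera steady.")
    return advice
```

A real pipeline would add face detection to verify distance, pose and obstructions, but the pass/advise structure is the same.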
Deep learning is then used to analyze skin characteristics across five facial regions: the forehead, under the eyes, the cheeks, the nose and the chin. The system extracts skin image patches and trains a separate convolutional neural network on each region; each model is then fine-tuned using data augmentation.
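The per-region approach can be sketched as follows. The region boxes and the augmentation choices below are illustrative assumptions (the article does not specify coordinates or which augmentations are used); the point is the structure: crop one patch per region, then multiply the training data for each region's network with cheap transformations:

```python
import numpy as np

# Hypothetical face regions as (row, col, height, width) boxes in a
# 128x128 aligned selfie; real systems locate these with face landmarks.
REGIONS = {
    "forehead":   (10, 30, 25, 68),
    "under_eyes": (50, 20, 15, 88),
    "cheeks":     (65, 15, 25, 98),
    "nose":       (55, 55, 30, 18),
    "chin":       (100, 45, 20, 38),
}

def extract_patches(face):
    """Crop one patch per facial region, one per region-specific model."""
    return {name: face[r:r + h, c:c + w]
            for name, (r, c, h, w) in REGIONS.items()}

def augment(patch, rng):
    """Illustrative augmentation: horizontal flip plus brightness jitter."""
    flipped = patch[:, ::-1]
    jitter = np.clip(patch + rng.uniform(-0.05, 0.05), 0.0, 1.0)
    return [patch, flipped, jitter]
```

Each augmented patch set would then feed the fine-tuning of that region's convolutional network, so the forehead model never has to generalize to chin texture and vice versa.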
Artificial intelligence and deep learning technologies can analyze and evaluate big data to train classifiers that effectively distinguish between objects. These identification capabilities have gone from 50% to 99% accuracy in less than a decade.