US10818007B2 - Systems and methods for determining apparent skin age

Info

Publication number
US10818007B2
US10818007B2
Authority
US
United States
Prior art keywords: image, skin age, age, neural network, CNN
Prior art date
2017-05-31
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires 2038-11-12
Application number
US15/993,950
Other versions
US20180350071A1 (en)
Inventor
Ankur Purwar
Paul Jonathan Matts
Matthew Adam Shreve
Wencheng Wu
Beilei Xu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Procter and Gamble Co
Xerox Corp
Original Assignee
Procter and Gamble Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2017-05-31
Filing date
2018-05-31
Publication date
2020-10-27
2018-05-31 Application filed by Procter and Gamble Co
2018-05-31 Priority to US15/993,950
2018-12-06 Publication of US20180350071A1
2019-11-27 Assigned to THE PROCTER & GAMBLE COMPANY. Assignors: PURWAR, ANKUR; MATTS, PAUL JONATHAN
2019-11-27 Assigned to PALO ALTO RESEARCH CENTER INCORPORATED. Assignors: XU, BEILEI; SHREVE, MATTHEW ADAM; WU, WENCHENG
2020-10-27 Application granted
2020-10-27 Publication of US10818007B2
2023-06-20 Assigned to XEROX CORPORATION. Assignor: PALO ALTO RESEARCH CENTER INCORPORATED
2023-06-22 Security interest granted to CITIBANK, N.A., as collateral agent. Assignor: XEROX CORPORATION
2023-06-28 Corrective assignment to XEROX CORPORATION, correcting the removal of US patents 9356603, 10026651, and 10626048 and the inclusion of US patent 7167871 previously recorded on reel 064038, frame 0001. Assignor: PALO ALTO RESEARCH CENTER INCORPORATED
2023-11-20 Security interest granted to JEFFERIES FINANCE LLC, as collateral agent. Assignor: XEROX CORPORATION
2024-02-13 Security interest granted to CITIBANK, N.A., as collateral agent. Assignor: XEROX CORPORATION
2024-02-13 Termination and release of security interest in patents recorded at reel/frame 064760/0389. Assignor: CITIBANK, N.A., as collateral agent
Status: Active
2038-11-12 Adjusted expiration

Classifications

    • G06V40/161 - Human faces: detection, localisation, normalisation
    • G06V10/26, G06V10/273 - Segmentation of patterns in the image field; removing elements interfering with the pattern to be recognised
    • G06F18/2163 - Partitioning the feature space
    • G06F18/2413, G06F18/24133, G06F18/24143 - Classification based on distances to training or reference patterns, e.g. distances to neighbourhood prototypes (restricted Coulomb energy networks [RCEN])
    • G06K9/00228, G06K9/00281, G06K9/00362, G06K9/2054, G06K9/3233, G06K9/346, G06K9/4628, G06K9/6261, G06K9/6274, G06K2009/00322 (legacy G06K codes)
    • G06N3/02, G06N3/08 - Neural networks; learning methods
    • G06T7/0002, G06T7/0012 - Image analysis; inspection of images; biomedical image inspection
    • G06V10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern
    • G06V10/25 - Determination of region of interest [ROI] or volume of interest [VOI]
    • G06V10/44, G06V10/443, G06V10/449, G06V10/451, G06V10/454 - Local feature extraction with biologically inspired filters integrated into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/764 - Recognition using classification, e.g. of video objects
    • G06V10/82 - Recognition using neural networks
    • G06V40/10 - Human or animal bodies; body parts
    • G06V40/168, G06V40/171 - Feature extraction; face representation; local features and components; facial parts
    • G06V40/178 - Estimating age from a face image; using age information to improve recognition
    • G06T2207/20081 - Training; learning
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G06T2207/30088 - Skin; dermal
    • G06T2207/30196, G06T2207/30201 - Human being; person; face

Definitions

  • The present application relates generally to systems and methods for determining the apparent age of a person's skin. More specifically, the present application relates to the use of image processing techniques and one or more convolutional neural networks to more accurately determine the age of a consumer's skin.
  • Skin is the first line of defense against environmental insults that would otherwise damage sensitive underlying tissue and organs. Additionally, skin plays a key role in the physical appearance of a person. Generally, most people desire younger, healthier-looking skin. To some, the tell-tale signs of skin aging, such as thinning skin, wrinkles, and age spots, are an undesirable reminder of the disappearance of youth. As a result, treating the signs of skin aging has become a booming business in youth-conscious societies. Treatments range from cosmetic creams and moisturizers to various forms of cosmetic surgery.
  • The systems and methods utilize a computing device to process an image of a person, which depicts the person's face, and then analyze the processed image. During processing, the face of the person is identified in the image and facial macro features are masked. Determining the apparent skin age may include identifying at least one pixel that is indicative of skin age and utilizing that pixel to provide the apparent skin age. Based on the analysis by the CNN and, optionally, other data provided by a user, the system can determine an apparent skin age of a person and/or provide a skin care product or skin care regimen for the person.
  • FIG. 1 depicts an example of the present system.
  • FIG. 2 depicts macro features identified in an image of a person.
  • FIG. 3A depicts a segmented image.
  • FIG. 3B depicts a bounded image.
  • FIGS. 4A to 4G depict masked macro features.
  • FIGS. 5A to 5G depict masked macro features.
  • FIGS. 6A to 6G depict regions of interest.
  • FIG. 7 is a flow diagram of a method of processing an image.
  • FIGS. 8 and 9 depict a convolutional neural network for determining apparent skin age.
  • FIGS. 10 to 21 depict exemplary user interfaces.
  • FIG. 22 illustrates a remote computing device for providing skin care product and/or regimen recommendations.
  • FIG. 23 is a flow diagram of a method of providing a product recommendation to a user.
  • A variety of systems and methods have been used in the cosmetics industry to provide customized product recommendations to consumers.
  • Some well-known systems use a macro feature-based analysis, in which one or more macro features commonly visible in a photograph of a person's face (e.g., eyes, ears, nose, mouth, and/or hair) are detected in a captured image, such as a digital photograph or “selfie,” and compared to a predefined definition.
  • Macro feature-based analysis systems, however, may not provide a suitably accurate indication of apparent skin age.
  • Conventional micro feature-based systems can employ cumbersome equipment or techniques, which may not be suitable for use by the average consumer.
  • “About,” as used herein, modifies a particular value by referring to a range equal to the particular value plus or minus twenty percent (±20%) or less (e.g., less than 15%, 10%, or even less than 5%).
  • “Apparent skin age” means the age of a person's skin calculated by the system herein, based on a captured image.
  • “Convolutional neural network” or “CNN” means a type of feed-forward artificial neural network in which the individual neurons are tiled in such a way that they respond to overlapping regions in the visual field.
  • “Coupled,” when referring to various components of the system herein, means that the components are in electrical, electronic, and/or mechanical communication with one another.
  • “Disposed” means an element is positioned in a particular place relative to another element.
  • “Image capture device” means a device, such as a digital camera, capable of capturing an image of a person.
  • “Joined” means configurations whereby an element is directly secured to another element by affixing the element directly to the other element, and configurations whereby an element is indirectly secured to another element by affixing the element to intermediate member(s) that in turn are affixed to the other element.
  • “Macro features” are relatively large bodily features found on or near the face of a human. Macro features include, without limitation, face shape, ears, eyes, mouth, nose, hair, and eyebrows.
  • “Masking” refers to the process of digitally replacing at least some of the pixels disposed in and/or proximate to a macro feature in an image with pixels that have an RGB value closer to or the same as pixels disposed in a region of interest.
  • “Micro features” are relatively small features commonly associated with aging skin and/or skin disorders found on the face of a human. Micro features include, without limitation, fine lines, wrinkles, dry skin features (e.g., skin flakes), and pigmentation disorders (e.g., hyperpigmentation conditions). Micro features do not include macro features.
  • “Person” means a human being.
  • “Region of interest” or “RoI” means a specifically bounded portion of skin in an image or image segment where analysis by a CNN is desired to provide an apparent skin age.
  • Some nonlimiting examples of a region of interest include a portion of an image depicting the forehead, cheek, nasolabial fold, under-eye area, or chin in which the macro features have been masked.
  • “Segmenting” refers to dividing an image into two or more discrete zones for analysis.
  • “Target skin age” means a skin age that is a predetermined number of years different from the apparent skin age.
  • “User” herein refers to a person who uses at least the features provided herein, including, for example, a device user, a product user, a system user, and the like.
  • The systems and methods herein utilize a multi-step (e.g., 2, 3, 4, or more steps) approach to determine the apparent skin age of a person from an image of that person.
  • By using a multi-step process instead of a single-step process in which the CNN processes and analyzes the full-face image, the CNN can focus on the important features that drive age perception (e.g., micro features), reduce the computing power needed to analyze the image, and reduce the bias that macro features may introduce to the system.
  • Processing logic stored in a memory component of the system causes the system to perform one or more (e.g., all) of the following: identify a face in the image for analysis, normalize the image, mask one or more (e.g., all) facial macro features on the identified face, and segment the image for analysis.
  • The processing steps may be performed in any order, as desired.
  • The processed image is provided to a convolutional neural network as one or more input variants for analysis.
  • The results of the CNN analysis are used to provide an apparent skin age for each segment and/or an overall skin age for the entire face.
  • FIG. 1 depicts an exemplary system 10 for capturing an image of a person, analyzing the image, determining the skin age of the person, and, optionally, providing a customized skin care regimen and/or product recommendation to a user.
  • The system 10 may include a network 100 (e.g., a wide area network such as a mobile telephone network, a public switched telephone network, a satellite network, and/or the internet; a local area network such as Wi-Fi, WiMax, ZigBee™, and/or Bluetooth™; and/or other suitable forms of networking). Coupled to the network 100 are a mobile computing device 102, a remote computing device 104, and a training computing device 108.
  • The mobile computing device 102 may be a mobile telephone, a tablet, a laptop, a personal digital assistant, and/or another computing device configured for capturing, storing, and/or transferring an image such as a digital photograph. Accordingly, the mobile computing device 102 may include an image capture device 103, such as a digital camera, and/or may be configured to receive images from other devices.
  • The mobile computing device 102 may include a memory component 140a, which stores image capture logic 144a and interface logic 144b.
  • The memory component 140a may include random access memory (such as SRAM or DRAM), read-only memory (ROM), registers, and/or other forms of computing storage hardware.
  • The image capture logic 144a and the interface logic 144b may include software components, hardware circuitry, firmware, and/or other computing infrastructure.
  • The image capture logic 144a may facilitate capturing, storing, preprocessing, analyzing, transferring, and/or performing other functions on a digital image of a user.
  • The interface logic 144b may be configured to provide one or more user interfaces to the user, which may include questions, options, and the like.
  • The mobile computing device 102 may also be configured for communicating with other computing devices via the network 100.
  • The remote computing device 104 may also be coupled to the network 100 and may be configured as a server (or plurality of servers), personal computer, mobile computer, and/or other computing device configured for creating, storing, and/or training a convolutional neural network capable of determining the skin age of a user by locating and analyzing skin features that contribute to skin age in a captured image of the user's face.
  • The CNN may be stored as logic 144c and 144d in the memory component 140b of the remote computing device 104.
  • The remote computing device 104 may include a memory component 140b that stores training logic 144c, analyzing logic 144d, and/or processing logic 144e.
  • The memory component 140b may include random access memory (such as SRAM or DRAM), read-only memory (ROM), registers, and/or other forms of computing storage hardware.
  • The training logic 144c, analyzing logic 144d, and/or processing logic 144e may include software components, hardware circuitry, firmware, and/or other computing infrastructure. Training logic 144c facilitates the creation and/or training of the CNN, and thus its operation.
  • Processing logic 144e causes an image received from the mobile computing device 102 (or other computing device) to be processed for analysis by the analyzing logic 144d.
  • Image processing may include macro feature identification, masking, segmentation, and/or other image alteration processes, which are described in more detail below.
  • Analyzing logic 144d causes the remote computing device 104 to analyze the processed image to provide an apparent skin age, product recommendation, etc.
  • A training computing device 108 may be coupled to the network 100 to facilitate training of the CNN.
  • For example, a trainer may provide one or more digital images of a face or skin to the CNN via the training computing device 108.
  • The trainer may also provide information and other instructions (e.g., actual age) to inform the CNN which assessments are correct and which are not. Based on the input from the trainer, the CNN may automatically adapt, as described in more detail below.
  • The system 10 may also include a kiosk computing device 106, which may operate similarly to the mobile computing device 102 but may also be able to dispense one or more products and/or receive payment in the form of cash or electronic transactions.
  • A mobile computing device 102 that also provides payment and/or product dispensing is contemplated herein.
  • The kiosk computing device 106 and/or mobile computing device 102 may also be configured to facilitate training of the CNN.
  • The hardware and software depicted and/or described for the mobile computing device 102 and the remote computing device 104 may be included in the kiosk computing device 106, the training computing device 108, and/or other devices.
  • The hardware and software depicted and/or described for the remote computing device 2204 in FIG. 22 may be included in one or more of the mobile computing device 102, the remote computing device 104, the kiosk computing device 106, and the training computing device 108.
  • While the remote computing device 104 is depicted in FIG. 1 as performing the image processing and image analysis, this is merely an example.
  • The image processing and/or image analysis may be performed by any suitable computing device, as desired.
  • The present system receives an image containing at least one person's face and prepares the image for analysis by the CNN.
  • The image may be received from any suitable source, such as, for example, a smartphone comprising a digital camera. It may be desirable to use a camera capable of producing at least a one-megapixel image and electronically transferring the image to a computing device that can access suitable image processing logic and/or image analyzing logic.
  • The processing logic identifies the portion(s) of the image that contain a human face.
  • The processing logic can be configured to detect the human face(s) present in the image using any suitable technique known in the art, such as, for example, color and/or color contrast techniques, removal of monochrome background features, edge-based techniques that use geometric models or Hausdorff distance, weak cascade techniques, or a combination of these.
  • It may be particularly desirable to use a Viola-Jones-type weak cascade technique, which was described by Paul Viola and Michael Jones in the International Journal of Computer Vision, 57(2), 137-154, 2004.
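For illustration, a minimal sketch of this kind of face detection using OpenCV's Haar cascade implementation of the Viola-Jones detector; the cascade file and detection parameters are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: Viola-Jones-style face detection via OpenCV Haar cascades.
import cv2

def detect_faces(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Each detection is an (x, y, width, height) rectangle.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```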
  • An image received by the present system may contain more than one face, but a user may not want to analyze all of the faces in the image.
  • For example, the user may only want to analyze the face of the person seeking advice related to a skin care treatment and/or product.
  • Thus, the present system may be configured to select only the desired face(s) for analysis.
  • For example, the processing logic may select the dominant face for analysis based on the relative position of the face in the image (e.g., centered), the relative size of the face (e.g., the largest “rectangle”), or a combination of these. A sketch of one such selection heuristic follows.
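A minimal sketch, assuming face detections as (x, y, w, h) rectangles; the largest-area-then-most-central scoring is an illustrative heuristic, not selection logic specified by the patent.

```python
import numpy as np

def dominant_face(faces, image_shape):
    """Pick the largest detected face, breaking ties by centrality."""
    img_h, img_w = image_shape[:2]
    center = np.array([img_w / 2.0, img_h / 2.0])

    def score(rect):
        x, y, w, h = rect
        face_center = np.array([x + w / 2.0, y + h / 2.0])
        # Larger faces rank higher; among equal areas, more central wins.
        return (w * h, -float(np.linalg.norm(face_center - center)))

    return max(faces, key=score)
```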
  • The present system may query the user to confirm that the face selected by the processing logic is correct and/or ask the user to select one or more faces for analysis. Any suitable user interface technique known in the art may be used to query a user and/or enable the user to select one or more faces present in the image.
  • The processing logic detects one or more facial landmarks (e.g., eyes, nose, mouth, or portions thereof), which may be used as anchor features (i.e., reference points that the processing logic can use to normalize and/or segment the image).
  • The processing logic may create a bounding box that isolates the face from the rest of the image. In this way, background objects, undesirable macro features, and/or other body parts visible in the image can be removed.
  • The facial landmarks of interest may be detected using a known landmark detection technique (e.g., Viola-Jones or a facial shape/size recognition algorithm).
  • FIG. 2 illustrates an example of a landmark detection technique in which the eyes 202, nose 204, and corners of the mouth 206 are identified by the processing logic for use as anchor features.
  • Normalizing the image may include rotating the image and/or scaling the size of the image to reduce variability between images.
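One common normalization, sketched below under the assumption that eye-center landmarks are available from the previous step, is to rotate the image so the line between the eyes is horizontal.

```python
import cv2
import numpy as np

def align_face(image, left_eye, right_eye):
    """Rotate so the line between the eye centers is horizontal."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = float(np.degrees(np.arctan2(ry - ly, rx - lx)))
    eyes_center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    rotation = cv2.getRotationMatrix2D(eyes_center, angle, 1.0)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, rotation, (w, h))
```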
  • FIG. 3A illustrates an example of segmenting an image 300 into discrete zones for subsequent processing and/or analysis.
  • The segmented image 300 may be presented to a user via the mobile computing device, as illustrated in FIG. 3A.
  • Alternatively, the segmented image may just be part of the image processing and not displayed to a user.
  • In this example, the image is separated into six segments: a forehead segment 301, left and right eye segments 302 and 303, left and right cheek/nasolabial fold segments 304 and 305, and a chin segment 306.
  • The image may be segmented, and/or two or more segments combined, to reflect zones that are commonly used to analyze skin in the cosmetics industry, such as, for example, the so-called T-zone or U-zone.
  • The T-zone is generally recognized in the cosmetics industry as the portion of the face that extends laterally across the forehead and longitudinally from about the middle of the forehead to the end of the nose or to the bottom of the chin.
  • The T-zone is so named because it resembles an upper-case letter T.
  • The U-zone is generally recognized as the portion of the face that extends longitudinally down one cheek, laterally across the chin, and then back up (longitudinally) to the other cheek.
  • The U-zone is so named because it resembles the letter U.
  • Facial segmentation may be performed, for example, by a tasks-constrained deep convolutional network (TCDCN) or other suitable technique known to those skilled in the art.
  • Segmenting the facial image allows the analyzing logic to provide an apparent age for each segment, which can be important because some segments are known to impact overall skin age perception more than others. Thus, each segment may be weighted to reflect the influence that segment has on the perception of skin age, as sketched below.
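A minimal sketch of such a weighted combination; the segment names and weights are illustrative placeholders, since the patent does not publish specific values.

```python
# Hedged sketch: combining per-segment apparent ages into an overall age.
# Segment names and weights are illustrative placeholders.
SEGMENT_WEIGHTS = {
    "forehead": 0.20, "left_eye": 0.20, "right_eye": 0.20,
    "left_cheek": 0.15, "right_cheek": 0.15, "chin": 0.10,
}

def overall_skin_age(segment_ages):
    """Weighted average of per-segment apparent ages."""
    total = sum(SEGMENT_WEIGHTS[s] for s in segment_ages)
    return sum(SEGMENT_WEIGHTS[s] * age for s, age in segment_ages.items()) / total
```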
  • The processing logic may cause the system to scale the segmented image such that the full height of the facial image (i.e., the distance from the bottom of the chin to the top of the forehead) does not exceed a particular value (e.g., between 700 and 800 pixels, between 700 and 750 pixels, or even about 716 pixels).
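A sketch of that scaling step, assuming the roughly 716-pixel example target; the choice to only downscale is an assumption, not stated in the patent.

```python
import cv2

def scale_face_height(image, face_height_px, target_px=716):
    """Scale so the chin-to-forehead distance is at most target_px pixels."""
    factor = target_px / float(face_height_px)
    if factor >= 1.0:
        return image  # assumption: smaller faces are left as-is
    h, w = image.shape[:2]
    new_size = (int(round(w * factor)), int(round(h * factor)))
    return cv2.resize(image, new_size, interpolation=cv2.INTER_AREA)
```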
  • FIG. 3B illustrates an example of bounding an image 310 in a bounding box 320.
  • The bounding box 320 may extend longitudinally from the bottom of the chin to the top of the forehead, and laterally from one temple to the other.
  • The bounding box 320 may be sized to remove background objects, macro features or portions thereof (e.g., hair, ears), and/or all or a portion of other bodily objects that may be present in the image (e.g., neck, chest, shoulders, or arms). Of course, bounding boxes of all sizes are contemplated herein. Bounding may occur before, after, or at the same time as image segmentation. In some instances, the bounding box 320 and/or bounded image 310 may be presented to a user via the mobile computing device, as illustrated in FIG. 3B, but need not necessarily be.
  • Without masking, the CNN may learn to predict the skin age of a person from macro feature cues rather than micro feature cues, such as fine lines and wrinkles, which are known to be much more influential on how people perceive skin age. This can be demonstrated by digitally altering an image to remove facial micro features such as fine lines, wrinkles, and pigmentation disorders, and observing that the apparent age provided by the system does not change. Masking may occur before and/or after the image is segmented and/or bounded.
  • Masking may be accomplished by replacing the pixels in a facial macro feature with pixels that have a uniform, non-zero (i.e., not black), non-255 (i.e., not white) RGB value. For example, it may be desirable to replace the pixels in the macro feature with pixels that have the median RGB value of the skin in the region of interest. It is believed, without being limited by theory, that by masking the facial macro features with uniformly colored or otherwise nondescript pixels, the CNN will learn to predict age using features other than the macro features (e.g., facial micro features such as fine lines and wrinkles). Masking herein may be accomplished using any suitable masking means known in the art, such as, for example, Matlab® brand computer software.
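A minimal sketch of that median-RGB masking, assuming boolean masks that mark the macro feature pixels and the region-of-interest skin pixels; Matlab is named in the text, but this sketch uses Python/NumPy for consistency with the other examples.

```python
import numpy as np

def mask_macro_feature(image, feature_mask, roi_mask):
    """Replace macro feature pixels with the median RGB value of the RoI skin."""
    out = image.copy()
    # Per-channel median over region-of-interest pixels only.
    median_rgb = np.median(image[roi_mask], axis=0).astype(image.dtype)
    out[feature_mask] = median_rgb
    return out
```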
  • Even with masking, a sophisticated convolutional neural network may still learn to predict skin age based on “phantom” macro features.
  • That is, the neural network may still learn to recognize differences in the patterns of median-RGB pixels, because those patterns generally correspond to the size and/or position of the masked facial macro feature.
  • The CNN may then apply these pattern differences to its age prediction analysis. To avoid this problem, it is important to provide more than one input variant (e.g., 2, 3, 4, 5, 6, or more) of the processed image to the CNN.
  • FIGS. 4A to 4G illustrate an example of a first input variant, in which an image of the face is segmented into six discrete zones and then the macro features in each segment are masked to provide the desired region of interest.
  • Here, the processing logic causes all pixels associated with a macro feature (e.g., eye, nose, mouth, ear, eyebrow, hair) in the image segment to be filled in with the median RGB color space value of all pixels located in the relevant region of interest (e.g., a portion of the image that does not include a macro feature).
  • FIG. 4B illustrates how the eyes and/or eyebrows are masked in a forehead segment to provide a forehead RoI.
  • FIGS. 4C and 4D illustrate how the eyes, eyebrows, nose, cheeks, and hair are masked in an under-eye segment to provide an under-eye RoI.
  • FIGS. 4E and 4F illustrate how the nose, mouth, and hair are masked in a cheek/nasolabial fold segment to provide a cheek/nasolabial fold RoI.
  • FIG. 4G illustrates how the mouth is masked in a chin segment to provide a chin RoI.
  • FIG. 4A illustrates how the masked features appear on an image of the entire face when the individually masked segments are combined and the background features are removed by a bounding box.
  • FIGS. 5A to 5G illustrate an example of a second input variant.
  • In this variant, the processing logic causes all pixels associated with a macro feature in the unsegmented (“full-face”) image to be filled in with the median RGB value of all pixels disposed in a region of interest.
  • The processing logic may then cause the masked, full-face image to be segmented, e.g., as described above.
  • FIG. 5A illustrates a full-face image in which certain macro features are masked and the background features are removed by a bounding box.
  • FIGS. 5B to 5G illustrate how each region of interest appears when the masked, full-face image of FIG. 5A is segmented.
  • When FIGS. 4A to 4G are compared to their counterparts in FIGS. 5A to 5G, both the full-face images and the individual regions of interest differ somewhat from one another.
  • FIGS. 6A to 6G illustrate an example of a third input variant.
  • In this variant, the processing logic causes the system to identify regions of interest in the full-face image and then segment the image into six discrete zones comprising the regions of interest.
  • FIG. 6A illustrates a full-face image in which the nose is used as an anchor feature and the six image segments are identified.
  • FIGS. 6B to 6G illustrate a region of interest extracted from each image segment: FIG. 6B depicts a forehead RoI, FIGS. 6C and 6D each depict an under-eye RoI, FIGS. 6E and 6F each depict a cheek/nasolabial fold RoI, and FIG. 6G depicts a chin RoI.
  • FIG. 7 illustrates the image processing flow path 700 for the methods and systems herein.
  • First, an image is received by the system.
  • Processing logic causes one or more faces in the received image to be detected or selected for further processing.
  • Processing logic then causes landmark features to be detected in the detected or selected face.
  • Processing logic causes the image to be normalized.
  • At blocks 710A and 710B, processing logic causes the image to be segmented and the macro features in each segment to be masked as part of a first input variant 710.
  • At blocks 711A and 711B, processing logic causes the macro features in the normalized image to be masked and the masked image to be segmented as part of a second input variant 711.
  • Processing logic causes the system to identify regions of interest for analysis by the CNN and then segment the image as part of a third input variant 712.
  • Finally, processing logic causes a portion of each region of interest to be extracted and scaled to a suitable size.
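A minimal sketch of that final extract-and-scale step; the 224 × 224 output size is an assumed CNN input size chosen for illustration, not a value from the patent.

```python
import cv2

def extract_and_scale(image, roi_box, size=(224, 224)):
    """Crop a region of interest and scale it to the CNN's input size."""
    x, y, w, h = roi_box
    crop = image[y:y + h, x:x + w]
    return cv2.resize(crop, size, interpolation=cv2.INTER_AREA)
```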
  • The systems and methods herein use a trained convolutional neural network, which functions as an in silico skin model, to provide an apparent skin age to a user by analyzing an image of the skin of a person (e.g., facial skin).
  • The CNN comprises multiple layers of neuron collections that use the same filters for each pixel in a layer. Using the same filters for each pixel in the various combinations of partially and fully connected layers reduces the memory and processing requirements of the system.
  • The CNN may comprise multiple deep networks, which are trained and function as discrete convolutional neural networks for a particular image segment and/or region of interest.
  • FIG. 8 illustrates an example of a CNN 800 configuration for use herein.
  • The CNN 800 includes four individual deep networks for analyzing individual regions of interest or portions thereof, which in this example are portions of the forehead, under-eye area, cheeks/nasolabial folds, and chin regions of interest.
  • The CNN may include fewer or more deep networks, as desired.
  • The image analysis results from each deep network may be used to provide an apparent skin age for its respective region of interest and/or may be concatenated to provide an overall apparent skin age.
  • The CNN herein may be trained using a deep learning technique that allows the CNN to learn what portions of an image contribute to skin age, much in the same way that a mammalian visual cortex learns to recognize important features in an image.
  • The CNN may be trained to determine locations, colors, and/or shades (e.g., lightness or darkness) of pixels that contribute to the skin age of a person.
  • The CNN training may involve using mini-batch stochastic gradient descent (SGD) with Nesterov momentum (and/or other algorithms).
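As a hedged illustration in PyTorch, a mini-batch SGD optimizer with Nesterov momentum might be set up as below; the network, loss, and hyperparameters are placeholders, not the patent's.

```python
import torch
import torch.nn as nn

# Placeholder regression network; a fuller architecture is sketched later.
model = nn.Sequential(nn.Conv2d(3, 6, 5), nn.ReLU(), nn.Flatten(),
                      nn.LazyLinear(1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, nesterov=True)
loss_fn = nn.L1Loss()  # mean absolute error between predicted and known age

def train_step(images, true_ages):
    """One mini-batch SGD update with Nesterov momentum."""
    optimizer.zero_grad()
    predicted = model(images).squeeze(1)
    loss = loss_fn(predicted, true_ages)
    loss.backward()    # backpropagate the age-prediction error
    optimizer.step()   # Nesterov-momentum SGD parameter update
    return loss.item()
```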
  • The CNN may be trained by providing an untrained CNN with a multitude of captured images to learn from.
  • The CNN can learn to identify portions of skin in an image that contribute to skin age through a process called supervised learning.
  • “Supervised learning” generally means that the CNN is trained by analyzing images in which the age of the person in the image is predetermined.
  • The number of training images may vary from a few images to a multitude of images (e.g., hundreds or even thousands) to a continuous input of images (i.e., to provide continuous training).
  • The systems and methods herein utilize a trained CNN that is capable of accurately predicting the apparent age of a user across a wide range of skin types.
  • To generate a prediction, an image of a region of interest (e.g., obtained from an image of a person's face) is provided to the trained CNN.
  • The CNN analyzes the image or image portion and identifies skin micro features in the image that contribute to the predicted age of the user (“trouble spots”).
  • The CNN then uses the trouble spots to provide an apparent skin age for the region of interest and/or an overall apparent skin age.
  • In some instances, an image inputted to the CNN may not be suitable for analysis, for example, due to occlusion (e.g., hair covering a portion of the image or shadowing of a region of interest).
  • In such cases, the CNN or other logic may discard the image prior to analysis by the CNN or discard the results of the CNN analysis prior to generation of an apparent age.
  • FIG. 9 depicts an example of a convolutional neural network 900 for use in the present system.
  • The CNN 900 may include an inputted image 905 (e.g., a region of interest or portion thereof), one or more convolution layers C1 and C2, one or more subsampling layers S1 and S2, one or more partially connected layers, one or more fully connected layers, and an output.
  • First, an image 905 (e.g., the image of a user) is inputted into the CNN 900.
  • The CNN may sample one or more portions of the image to create one or more feature maps in a first convolution layer C1. For example, as illustrated in FIG. 9, the CNN may sample six portions of the image 905 to create six feature maps in the first convolution layer C1.
  • The CNN may then subsample one or more portions of the feature map(s) in the first convolution layer C1 to create a first subsampling layer S1.
  • The subsampled portion of a feature map may be half the area of the feature map. For example, if a feature map comprises a sample area of 29 × 29 pixels from the image 905, the subsampled area may be 14 × 14 pixels.
  • The CNN 900 may perform one or more additional levels of sampling and subsampling to provide a second convolution layer C2 and a second subsampling layer S2.
  • The CNN 900 may include any number of convolution and subsampling layers, as desired.
  • Upon completion of the final subsampling layer (e.g., layer S2 in FIG. 9), the CNN 900 generates a fully connected layer F1, in which every neuron is connected to every other neuron. From the fully connected layer F1, the CNN can generate an output such as a predicted age or a heat map.
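A hedged PyTorch sketch of this topology (convolution C1, subsampling S1, convolution C2, subsampling S2, then a fully connected layer F1); apart from the six feature maps in C1 named in the text, the kernel sizes, channel counts, and input size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SkinAgeCNN(nn.Module):
    """Sketch of the FIG. 9 topology; layer sizes are assumptions."""
    def __init__(self):
        super().__init__()
        self.c1 = nn.Conv2d(3, 6, kernel_size=5)  # six feature maps, per FIG. 9
        self.s1 = nn.MaxPool2d(2)                 # subsampling halves each side
        self.c2 = nn.Conv2d(6, 16, kernel_size=5)
        self.s2 = nn.MaxPool2d(2)
        self.f1 = nn.LazyLinear(1)                # fully connected output layer

    def forward(self, x):
        x = self.s1(torch.relu(self.c1(x)))
        x = self.s2(torch.relu(self.c2(x)))
        return self.f1(torch.flatten(x, start_dim=1))  # predicted age

age = SkinAgeCNN()(torch.randn(1, 3, 128, 128))  # e.g., one 128x128 RoI crop
```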
  • The present system may determine a target skin age (e.g., the apparent age of the person minus a predetermined number of years (e.g., 10, 9, 8, 7, 6, 5, 4, 3, 2, or 1 year)) or the actual age of the person.
  • The system may cause the target age to be propagated back to the original image as a gradient.
  • The absolute values of a plurality of channels of the gradient may then be summed for at least one pixel and scaled from 0 to 1 for visualization purposes.
  • The scaled values indicate which pixels contribute most (and least) to the determination of the skin age of the user.
  • Each scaling value (or range of values) may be assigned a color or shade, such that a virtual mask can be generated to graphically represent the scaled values of the pixels.
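A hedged PyTorch sketch of that gradient visualization: backpropagate a loss against the target age to the input pixels, sum the absolute per-channel gradients, and rescale to the 0-1 range. The squared-error loss is an assumption.

```python
import torch

def saliency_map(model, image, target_age):
    """Per-pixel contribution map, scaled to [0, 1] for visualization."""
    image = image.clone().requires_grad_(True)     # shape (1, 3, H, W)
    predicted = model(image).squeeze()
    loss = (predicted - target_age) ** 2           # assumed squared-error loss
    loss.backward()                                # propagate target back as a gradient
    grad = image.grad.abs().sum(dim=1).squeeze(0)  # sum channel magnitudes -> (H, W)
    grad = grad - grad.min()
    return grad / grad.max().clamp(min=1e-12)
```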
  • The CNN analysis, optionally in conjunction with habits-and-practices input provided by a user, can be used to help provide a skin care product and/or regimen recommendation.
  • FIG. 10 depicts an exemplary user interface 1030 for capturing an image of a user and for providing customized product recommendations.
  • The mobile computing device 1002 may provide an application for capturing an image of a user.
  • FIG. 10 depicts an introductory page on the mobile computing device 1002 for beginning the process of capturing an image and providing customized product recommendations.
  • The user interface 1030 also includes a start option 1032 for beginning the process.
  • FIG. 11 depicts an exemplary user interface 1130 illustrating an image that is analyzed for providing an apparent skin age and/or customized product recommendations to a user of the present system.
  • The user interface 1130 may then be provided.
  • The image capture device 103 may be utilized to capture an image of the user.
  • Alternatively, the user may utilize a previously captured image. Regardless, upon capturing the image, the image may be provided in the user interface 1130. If the user does not wish the image to be utilized, the user may retake it. If the user approves the image, the user may select the next option 1132 to begin analyzing the image and proceed to the next user interface.
  • FIG. 12 depicts an exemplary user interface 1230 for providing a questionnaire to a user to help customize product recommendations.
  • The user interface 1230 may provide one or more questions for determining additional details regarding the user, including product preferences, current regimens, etc.
  • For example, the questions may include whether the user utilizes a moisturizer with sunscreen.
  • One or more predefined answers 1232 may be provided for the user to select from.
  • FIG. 13 depicts an exemplary user interface 1330 for providing additional prompts for a questionnaire.
  • The user interface 1330 may then be provided.
  • The user interface 1330 provides another question (such as whether the user prefers scented skin care products), along with three predefined answers 1332 for the user to select from.
  • A submit option 1334 may also be provided for submitting the selected answer(s).
  • While FIGS. 12 and 13 provide two questions, any number of questions may be provided to the user, depending on the particular embodiment. The questions and their number may depend on the user's actual age, which may be inputted in one or more of the steps exemplified herein, on the user's skin age, and/or on other factors.
  • FIG. 14 depicts an exemplary user interface 1430 for providing a skin age of a user, based on a captured image.
  • The user interface 1430 may then be provided.
  • The user interface 1430 may provide the user's skin age and the captured image, with at least one identifier 1432 to indicate which region(s) of interest contribute to the apparent skin age provided by the CNN.
  • The system may also provide a list 1434 of the regions of interest that contribute to the apparent skin age provided by the CNN.
  • A description 1436 may also be provided, as well as a product-recommendation option 1438 for viewing customized product recommendations.
  • FIG. 15 illustrates another exemplary user interface 1530 for displaying the results of the image analysis.
  • The user interface 1530 may include a results section 1532 that indicates whether the analysis was successful or whether a problem was encountered during the process (e.g., poor image quality).
  • The user interface 1530 may include a product-recommendation option (not shown). Additionally or alternatively, the results section 1532 may display an overall apparent skin age to the user and/or an apparent skin age for each region of interest.
  • The user interface 1530 may also present the user with an age-input option 1536.
  • An additional-predictions option 1538 may also be provided.
  • FIG. 16 depicts an exemplary user interface 1630 for providing product recommendations.
  • The user interface 1630 may then be provided.
  • The user interface 1630 may provide one or more recommended products determined based on the user's age, the regions of interest contributing to the user's apparent skin age, and/or the target age (e.g., the apparent skin age and/or the user's actual age minus a predetermined number of years).
  • At least one product may be determined as being applicable to skin disposed in the region of interest that contributes most to the apparent skin age of the user.
  • For example, creams, moisturizers, lotions, sunscreens, cleansers, and the like may be recommended.
  • A regimen option 1632 may also be provided for viewing a recommended regimen.
  • A purchase option 1634 may also be provided.
  • FIG. 17 depicts an exemplary user interface 1730 for providing details of product recommendations.
  • The user interface 1730 may then be provided.
  • The user interface 1730 may provide a products option 1732 and a schedule option 1734 for using the recommended product in the user's beauty regimen. Additional information related to the first stage of the beauty regimen may be provided in section 1736. Similarly, data related to a second and/or subsequent stage of the regimen may be provided in section 1738.
  • FIG. 18 depicts an exemplary user interface 1830 that provides a recommended beauty regimen.
  • The user interface 1830 may then be provided.
  • The user interface 1830 may provide a listing of recommended products, as well as a schedule, including schedule details, for the regimen. Specifically, the user interface 1830 may provide a time of day at which each product may be applied.
  • A details option 1834 may provide the user with additional details regarding the products and the regimen.
  • FIG. 19 depicts an exemplary user interface 1930 for providing additional details associated with a beauty regimen and the products used therein.
  • The user interface 1930 may be provided in response to selection of the details option 1834 from FIG. 18.
  • The user interface 1930 may provide details regarding products, application tips, etc.
  • A “science-behind” option 1932, 1936 and a “how-to-demo” option 1934, 1938 may be provided.
  • In response to selection of the science-behind option, details regarding the recommended product and the application regimen may be provided.
  • In response to selection of the how-to-demo option, audio and/or video may be provided for instructing the user on a strategy for applying the product.
  • Subsequent portions of the regimen (such as step 2 depicted in FIG. 19) may also include a science-behind option 1932, 1936 and a how-to-demo option 1934, 1938.
  • FIG. 20 depicts an exemplary user interface 2030 for providing recommendations related to a determined regimen.
  • The user interface 2030 may then be provided.
  • The user interface 2030 includes purchasing options 2032, 2034, 2036 for purchasing one or more recommended products.
  • The user interface 2030 may also provide an add-to-cart option 2038 and a shop-more option 2040.
  • FIG. 21 depicts an exemplary user interface 2130 for providing product recommendations to a user timeline.
  • The user interface 2130 may provide a notification that one or more of the recommended products have been added to the user's timeline.
  • For example, the purchased products may be added to the recommended regimen for the user.
  • The notification may include an acceptance option 2132 and a view-timeline option 2134.
  • FIG. 22 depicts components of a remote computing device 2204 for providing customized skin care product and/or regimen recommendations, according to embodiments described herein.
  • The remote computing device 2204 includes a processor 2230, input/output hardware 2232, network interface hardware 2234, a data storage component 2236 (which stores image data 2238a, product data 2238b, and/or other data), and a memory component 2240b.
  • The memory component 2240b may be configured as volatile and/or nonvolatile memory and, as such, may include random access memory (including SRAM, DRAM, and/or other types of RAM), flash memory, secure digital (SD) memory, registers, compact discs (CD), digital versatile discs (DVD), and/or other types of non-transitory computer-readable mediums. Depending on the particular embodiment, these non-transitory computer-readable mediums may reside within the remote computing device 2204 and/or external to it.
  • The memory component 2240b may store operating logic 2242, processing logic 2244b, training logic 2244c, and analyzing logic 2244d.
  • The training logic 2244c, processing logic 2244b, and analyzing logic 2244d may each include a plurality of different pieces of logic, each of which may be embodied as a computer program, firmware, and/or hardware, as an example.
  • A local communications interface 2246 is also included in FIG. 22 and may be implemented as a bus or other communication interface to facilitate communication among the components of the remote computing device 2204.
  • The processor 2230 may include any processing component operable to receive and execute instructions (such as from the data storage component 2236 and/or the memory component 2240b). As described above, the input/output hardware 2232 may include and/or be configured to interface with the components of FIG. 22.
  • The network interface hardware 2234 may include and/or be configured for communicating with any wired or wireless networking hardware, including an antenna, a modem, a LAN port, a wireless fidelity (Wi-Fi) card, a WiMax card, a Bluetooth™ module, mobile communications hardware, and/or other hardware for communicating with other networks and/or devices. From this connection, communication may be facilitated between the remote computing device 2204 and other computing devices, such as those depicted in FIG. 1.
  • The operating logic 2242 may include an operating system and/or other software for managing components of the remote computing device 2204.
  • The training logic 2244c may reside in the memory component 2240b and may be configured to cause the processor 2230 to train the convolutional neural network.
  • The processing logic 2244b may also reside in the memory component 2240b and be configured to process images prior to analysis by the analyzing logic 2244d.
  • The analyzing logic 2244d may be utilized to analyze images for skin age prediction.
  • FIG. 22 It should be understood that while the components in FIG. 22 are illustrated as residing within the remote computing device 2204 , this is merely an example. In some embodiments, one or more of the components may reside external to the remote computing device 2204 and/or the remote computing device 2204 may be configured as a mobile device. It should also be understood that, while the remote computing device 2204 is illustrated as a single device, this is also merely an example. In some embodiments, the training logic 2244 c , the processing logic 2244 b , and/or the analyzing logic 2244 d may reside on different computing devices.
  • one or more of the functionalities and/or components described herein may be provided by the mobile computing device 102 and/or other devices, which may be communicatively coupled to the remote computing device 104 .
  • These computing devices may also include hardware and/or software for performing the functionality described herein.
  • remote computing device 2204 is illustrated with the training logic 2244 c , processing logic 2244 b , and analyzing logic 2244 d as separate logical components, this is also an example. In some embodiments, a single piece of logic may cause the remote computing device 2204 to provide the described functionality.
  • FIG. 23 depicts a flowchart for providing customized product recommendations.
  • an image of a user is captured.
  • the captured image is processed for analysis.
  • questions are provided to the user.
  • answers to the questions are received from the user.
  • the image is analyzed by the CNN.
  • an apparent skin age is provided to the user.
  • an optional skin profile may be generated. The optional skin profile may include, for example, the age of one or more of the regions of interest, a skin condition, or the influence a particular region of interest has on overall skin age.
  • a customized product recommendation is provided to the user.
  • At least some of the images and other data described herein may be stored as historical data for later use.
  • tracking of user progress may be determined based on this historical data.
  • Other analyses may also be performed on this historical data, as desired.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

Systems and methods for determining an apparent skin age of a person and providing customized skin care product recommendations. The system utilizes a computing device to mask facial macro features in an image of a person and then analyze the image with a convolutional neural network to determine an apparent skin age of the person. Determining the apparent skin age may include identifying at least one pixel that is indicative of skin age and utilizing the at least one pixel to provide the apparent skin age. The system may be used to determine a target skin age of a person, determine a skin care product or skin care regimen for achieving the target skin age, and provide an option for a user to purchase the product.

Description

FIELD

The present application relates generally to systems and methods for determining the apparent age of a person's skin. More specifically, the present application relates to the use of image processing techniques and one or more convolutional neural networks to more accurately determine the age of a consumer's skin.

BACKGROUND

Skin is the first line of defense against environmental insults that would otherwise damage sensitive underlying tissue and organs. Additionally, skin plays a key role in the physical appearance of a person. Generally, most people desire younger, healthy looking skin. And to some, the tell-tale signs of skin aging such as thinning skin, wrinkles, and age spots are an undesirable reminder of the disappearance of youth. As a result, treating the signs of skin aging has become a booming business in youth-conscious societies. Treatments range from cosmetic creams and moisturizers to various forms of cosmetic surgery.

While a wide variety of cosmetic skin care products are marketed for treating skin conditions, it is not uncommon for a consumer to have difficulty determining which skin care product they should use. For example, someone with skin that appears older than their chronological age may require a different product or regimen compared to someone with more youthful looking skin. Thus, it would be desirable to accurately determine the apparent age of a person's skin.

Numerous attempts have been made to determine a person's apparent skin age by analyzing an image of the person (e.g., a “selfie”) using a computer model/algorithm. The results provided by the computer model can then be used to provide a consumer with a skin profile (e.g., skin age, moisture level, or oiliness) and/or a product recommendation. Past attempts at modeling skin age have relied on facial macro features (eyes, ears, nose, mouth, etc.) as a primary factor driving the computer model/prediction. However, macro-feature based systems may not adequately utilize other skin appearance cues (e.g., micro features such as fine lines, wrinkles, and pigmentation conditions) that drive age perception for a consumer, which can lead to a poor prediction of apparent skin age.

Other past attempts to model skin age and/or skin conditions utilized cumbersome equipment or techniques (e.g., stationary cameras, microscopes, cross-polarized light, specular reflectance, and/or spatial frequency analysis). Thus, it would be desirable to provide consumers with a convenient to use and/or mobile system that analyzes skin such that the consumer can receive product and/or skin care regimen recommendations.

Accordingly, there is still a need for an improved method of conveniently determining the apparent age of a person's skin, which can then be used to help provide a customized skin care product or regimen recommendation.

SUMMARY

Disclosed herein are systems and methods for determining an apparent skin age of a person and providing customized skin care product recommendations to a user. The systems and methods utilize a computing device to process an image of a person, which depicts the person's face, and then analyze the processed image. During processing, the face of the person is identified in the image and facial macro features are masked. The processed image is then analyzed by a convolutional neural network (CNN) to determine an apparent skin age. Determining the apparent skin age may include identifying at least one pixel that is indicative of skin age and utilizing the at least one pixel to provide the apparent skin age. Based on the analysis by the CNN and, optionally, other data provided by a user, the system can determine an apparent skin age of a person and/or provide a skin care product or skin care regimen recommendation for the person.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 depicts an example of the present system.

FIG. 2 depicts macro features identified in an image of a person.

FIG. 3A depicts a segmented image.

FIG. 3B depicts a bounded image.

FIGS. 4A to 4G depict masked macro features.

FIGS. 5A to 5G depict masked macro features.

FIGS. 6A to 6G depict regions of interest.

FIG. 7 is a flow diagram of a method of processing an image.

FIGS. 8 and 9 depict a convolutional neural network for determining apparent skin age.

FIGS. 10 to 21 depict exemplary user interfaces.

FIG. 22 illustrates a remote computing device for providing skin care product and/or regimen recommendations.

FIG. 23 is a flow diagram of a method of providing a product recommendation to a user.

DETAILED DESCRIPTION

A variety of systems and methods have been used in the cosmetics industry to provide customized product recommendations to consumers. For example, some well-known systems use a macro feature-based analysis in which one or more macro features commonly visible in a photograph of a person's face (e.g., eyes, ears, nose, mouth, and/or hair) are detected in a captured image such as a digital photograph or “selfie” and compared to a predefined definition. However, macro-feature based analysis systems may not provide a suitably accurate indication of apparent skin age. Conventional micro feature based systems can employ cumbersome equipment or techniques, which may not be suitable for use by the average consumer.

It has now been discovered that masking facial macro-features and analyzing facial micro-features with a convolutional neural network (“CNN”) can provide a suitably accurate determination of a person's apparent skin age. The CNN based image analysis system can be configured to use relatively little image pre-processing, which reduces the dependence of the system on prior knowledge and predetermined definitions and reduces the computer memory and/or processing power needed to analyze an image. Consequently, the present system demonstrates improved generalization compared to conventional macro-feature-based image analysis systems, which may lead to better skin care product or regimen recommendations for a consumer who uses the system.

Definitions

“About,” as used herein, modifies a particular value by referring to a range equal to the particular value plus or minus twenty percent (+/−20%) or less (e.g., less than 15%, 10%, or even less than 5%).

“Apparent skin age” means the age of a person's skin calculated by the system herein, based on a captured image.

“Convolutional neural network” is a type of feed-forward artificial neural network where the individual neurons are tiled in such a way that they respond to overlapping regions in the visual field.

“Coupled,” when referring to various components of the system herein, means that the components are in electrical, electronic, and/or mechanical communication with one another.

“Disposed” means an element is positioned in a particular place relative to another element.

“Image capture device” means a device such as a digital camera capable of capturing an image of a person.

“Joined” means configurations whereby an element is directly secured to another element by affixing the element directly to the other element, and configurations whereby an element is indirectly secured to another element by affixing the element to intermediate member(s) that in turn are affixed to the other element.

“Macro features” are relatively large bodily features found on or near the face of a human. Macro features include, without limitation, face shape, ears, eyes, mouth, nose, hair, and eyebrows.

“Masking” refers to the process of digitally replacing at least some of the pixels disposed in and/or proximate to a macro feature in an image with pixels that have an RGB value closer to or the same as pixels disposed in a region of interest.

“Micro features” are relatively small features commonly associated with aging skin and/or skin disorders found on the face of a human. Micro features include, without limitation, fine lines, wrinkles, dry skin features (e.g., skin flakes), and pigmentation disorders (e.g., hyperpigmentation conditions). Micro features do not include macro features.

“Person” means a human being.

“Region of interest” or “RoI” means a specifically bounded portion of skin in an image or image segment where analysis by a CNN is desired to provide an apparent skin age. Some nonlimiting examples of a region of interest include a portion of an image depicting the forehead, cheek, nasolabial fold, under-eye area, or chin in which the macro features have been masked.

“Segmenting” refers to dividing an image into two or more discrete zones for analysis.

“Target skin age” means a skin age that is a predetermined number of years different from the apparent skin age.

“User” herein refers to a person who uses at least the features provided herein, including, for example, a device user, a product user, a system user, and the like.

The systems and methods herein utilize a multi-step (e.g., 2, 3, 4, or more steps) approach to determine the apparent skin age of a person from an image of that person. By using a multi-step process, rather than a single-step process in which the CNN both processes and analyzes a full-face image, the CNN can focus on the important features that drive age perception (e.g., micro features), which reduces the computing power needed to analyze the image and reduces the bias that macro features may otherwise introduce into the system.

In a first step, processing logic stored in a memory component of the system causes the system to perform one or more (e.g., all) of the following: identify a face in the image for analysis, normalize the image, mask one or more (e.g., all) facial macro-features on the identified face, and segment the image for analysis. The processing steps may be performed in any order, as desired. The processed image is provided to a convolutional neural network as one or more input variants for analysis. The results of the CNN analysis are used to provide an apparent skin age of each segment and/or an overall skin age for the entire face.

FIG. 1 depicts an exemplary system 10 for capturing an image of a person, analyzing the image, determining the skin age of the person, and, optionally, providing a customized skin care regimen and/or product recommendation to a user. The system 10 may include a network 100 (e.g., a wide area network such as a mobile telephone network, a public switched telephone network, a satellite network, and/or the internet; a local area network such as wireless-fidelity, Wi-Max, ZigBee™, and/or Bluetooth™; and/or other suitable forms of networking capabilities). Coupled to the network 100 are a mobile computing device 102, a remote computing device 104, and a training computing device 108.

The mobile computing device 102 may be a mobile telephone, a tablet, a laptop, a personal digital assistant, and/or other computing device configured for capturing, storing, and/or transferring an image such as a digital photograph. Accordingly, the mobile computing device 102 may include an image capture device 103 such as a digital camera and/or may be configured to receive images from other devices. The mobile computing device 102 may include a memory component 140 a, which stores image capture logic 144 a and interface logic 144 b. The memory component 140 a may include random access memory (such as SRAM, DRAM, etc.), read only memory (ROM), registers, and/or other forms of computing storage hardware. The image capture logic 144 a and the interface logic 144 b may include software components, hardware circuitry, firmware, and/or other computing infrastructure. The image capture logic 144 a may facilitate capturing, storing, preprocessing, analyzing, transferring, and/or performing other functions on a digital image of a user. The interface logic 144 b may be configured for providing one or more user interfaces to the user, which may include questions, options, and the like. The mobile computing device 102 may also be configured for communicating with other computing devices via the network 100.

The remote computing device 104 may also be coupled to the network 100 and may be configured as a server (or plurality of servers), personal computer, mobile computer, and/or other computing device configured for creating, storing, and/or training a convolutional neural network capable of determining the skin age of a user by locating and analyzing skin features that contribute to skin age in a captured image of the user's face. For example, the CNN may be stored as logic 144 c and 144 d in the memory component 140 b of a remote computing device 104. Commonly perceived skin flaws such as fine lines, wrinkles, dark (age) spots, uneven skin tone, blotchiness, enlarged pores, redness, yellowness, combinations of these, and the like may all be identified by the trained CNN as contributing to the skin age of the user.

The remote computing device 104 may include a memory component 140 b that stores training logic 144 c, analyzing logic 144 d, and/or processing logic 144 e. The memory component 140 b may include random access memory (such as SRAM, DRAM, etc.), read only memory (ROM), registers, and/or other forms of computing storage hardware. The training logic 144 c, analyzing logic 144 d, and/or processing logic 144 e may include software components, hardware circuitry, firmware, and/or other computing infrastructure. Training logic 144 c facilitates creation and/or training of the CNN, and thus may facilitate creation of and/or operation of the CNN. Processing logic 144 e causes the image received from the mobile computing device 102 (or other computing device) to be processed for analysis by the analyzing logic 144 d. Image processing may include macro feature identification, masking, segmentation, and/or other image alteration processes, which are described in more detail below. Analyzing logic 144 d causes the remote computing device 104 to analyze the processed image to provide an apparent skin age, product recommendation, etc.

In some instances, a training computing device 108 may be coupled to the network 100 to facilitate training of the CNN. For example, a trainer may provide one or more digital images of a face or skin to the CNN via the training computing device 108. The trainer may also provide information and other instructions (e.g., actual age) to inform the CNN which assessments are correct and which assessments are not correct. Based on the input from the trainer, the CNN may automatically adapt, as described in more detail below.

The system 10 may also include a kiosk computing device 106, which may operate similar to the mobile computing device 102, but may also be able to dispense one or more products and/or receive payment in the form of cash or electronic transactions. Of course, it is to be appreciated that a mobile computing device 102, which also provides payment and/or product dispensing, is contemplated herein. In some instances, the kiosk computing device 106 and/or mobile computing device 102 may also be configured to facilitate training of the CNN. Thus, the hardware and software depicted and/or described for the mobile computing device 102 and the remote computing device 104 may be included in the kiosk computing device 106, the training computing device 108, and/or other devices. Similarly, the hardware and software depicted and/or described for the remote computing device 2204 in FIG. 22 may be included in one or more of the mobile computing device 102, the remote computing device 104, the kiosk computing device 106, and the training computing device 108.

It should also be understood that while the remote computing device 104 is depicted in FIG. 1 as performing the image processing and image analysis, this is merely an example. The image processing and/or image analysis may be performed by any suitable computing device, as desired.

Image Processing

In a first step of the image analysis process herein, the present system receives an image containing at least one face of a person and prepares the image for analysis by the CNN. The image may be received from any suitable source, such as, for example, a smartphone comprising a digital camera. It may be desirable to use a camera capable of producing at least a one megapixel image and electronically transferring the image to a computing device(s) that can access suitable image processing logic and/or image analyzing logic.

Once the image is received, the processing logic identifies the portion(s) of the image that contain a human face. The processing logic can be configured to detect the human face(s) present in the image using any suitable technique known in the art, such as, for example, color and/or color contrast techniques, removal of monochrome background features, edge-based techniques that use geometric models or Hausdorff distance, weak cascade techniques, or a combination of these. In some instances, it may be particularly desirable to use a Viola-Jones type of weak cascade technique, which was described by Paul Viola and Michael Jones in “International Journal of Computer Vision” 57(2), 137-154, 2004.
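
For illustration only, the following is a minimal sketch of Viola-Jones-style face detection using OpenCV's stock Haar cascade; the file name and detection thresholds here are common defaults, not parameters taken from the patented system:

```python
import cv2

def detect_faces(image_path):
    """Detect faces with OpenCV's Viola-Jones (Haar cascade) detector."""
    # Pre-trained weak-cascade classifier shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Returns a list of (x, y, w, h) rectangles, one per detected face.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```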

In some instances, an image received by the present system may contain more than one face, but a user may not want to analyze all of the faces in the image. For example, the user may only want to analyze the face of the person seeking advice related to a skin care treatment and/or product. Thus, the present system may be configured to select only the desired image(s) for analysis. For example, the processing logic may select the dominant face for analysis based on the relative position of the face in the image (e.g., center), the relative size of face (e.g., largest “rectangle”), or a combination of these. Alternatively or additionally, the present system may query the user to confirm that the face selected by the processing logic is correct and/or ask the user to select one or more faces for analysis. Any suitable user interface technique known in the art may be used to query a user and/or enable the user to select one or more faces present in the image.
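
A sketch of one way the dominant face might be selected from several detections, scoring each rectangle by its size and its distance from the image center; the particular size-minus-offset weighting is an assumption chosen only to illustrate the combination of the two criteria described above:

```python
def select_dominant_face(faces, image_width, image_height):
    """Pick the detection that is largest and closest to the image center."""
    cx, cy = image_width / 2, image_height / 2

    def score(face):
        x, y, w, h = face
        area = w * h                    # larger faces score higher
        fx, fy = x + w / 2, y + h / 2   # center of the face rectangle
        dist = ((fx - cx) ** 2 + (fy - cy) ** 2) ** 0.5
        return area - dist              # simple size-minus-offset heuristic

    return max(faces, key=score)
```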

Once the appropriate face(s) is selected for further processing, the processing logic detects one or more facial landmarks (e.g., eyes, nose, mouth, or portions thereof), which may be used as anchor features (i.e., reference points that the processing logic can use to normalize and/or segment the image). In some instances, the processing logic may create a bounding box that isolates the face from the rest of the image. In this way, background objects, undesirable macro features, and/or other body parts that are visible in the image can be removed. The facial landmarks of interest may be detected using a known landmark detection technique (e.g., Viola-Jones or a facial shape/size recognition algorithm).

FIG. 2 illustrates an example of a landmark detection technique in which the eyes 202, nose 204, and corners of the mouth 206 are identified by the processing logic for use as anchor features. In this example, normalizing the image may include rotating the image and/or scaling the size of the image to reduce variability between images.
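
A minimal sketch of one common normalization step, rotating the image so the detected eye centers are level; the eye coordinates are assumed to come from a landmark detector such as the one sketched above:

```python
import math
import cv2

def normalize_face(image, left_eye, right_eye):
    """Rotate the image so the line between the eye centers is horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))   # tilt of the face
    center = ((left_eye[0] + right_eye[0]) / 2,
              (left_eye[1] + right_eye[1]) / 2)
    rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, rotation, (w, h))
```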

FIG. 3A illustrates an example of segmenting an image 300 into discrete zones for subsequent processing and/or analysis. In some instances, the segmented image 300 may be presented to a user via the mobile computing device, as illustrated in FIG. 3A. However, in some instances, the segmented image may just be part of the image processing and not displayed to a user. As illustrated in FIG. 3A, the image is separated into 6 segments that include a forehead segment 301, a left and a right eye segment 302 and 303, a left and a right cheek/nasolabial fold segment 304 and 305, and a chin segment 306. In some instances, the image may be segmented and/or two or more segments combined to reflect zones that are commonly used to analyze skin in the cosmetics industry, such as, for example, the so-called T-zone or U-zone. The T-zone is generally recognized in the cosmetics industry as the portion of the face that extends laterally across the forehead and longitudinally from about the middle of the forehead to the end of the nose or to the bottom of the chin. The T-zone is so named because it resembles an upper-case letter T. The U-zone is generally recognized as the portion of the face that extends longitudinally down one cheek, laterally across the chin, and then back up (longitudinally) to the other cheek. The U-zone is so named because it resembles the letter U.

Facial segmentation may be performed, for example, by a tasks constrained deep convolutional network (TCDCN) or other suitable technique, as known to those skilled in the art. Segmenting the facial image allows the analyzing logic to provide an apparent age for each segment, which can be important because some segments are known to impact overall skin age perception more than other segments. Thus, each segment may be weighted to reflect the influence that segment has on the perception of skin age. In some instances, the processing logic may cause the system to scale the segmented image such that the full height of the facial image (i.e., distance from the bottom of the chin to the top of the forehead) does not exceed a particular value (e.g., between 700 and 800 pixels, between 700 and 750 pixels, or even about 716 pixels).
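
A sketch of the scaling step just described, resizing the image so the facial height does not exceed roughly 716 pixels; the chin and forehead coordinates are assumed to come from the landmark step:

```python
import cv2

def scale_to_max_face_height(image, chin_y, forehead_y, max_height=716):
    """Rescale so the chin-to-forehead distance is at most max_height pixels."""
    face_height = abs(chin_y - forehead_y)
    if face_height <= max_height:
        return image
    factor = max_height / face_height
    h, w = image.shape[:2]
    return cv2.resize(image, (int(w * factor), int(h * factor)),
                      interpolation=cv2.INTER_AREA)
```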

FIG. 3B illustrates an example of bounding an image 310 in a bounding box 320. The bounding box 320 may extend longitudinally from the bottom of the chin to the top of the forehead, and laterally from one temple to the other temple. The bounding box 320 may be sized to remove background objects, macro features or a portion thereof (e.g., hair, ears), and/or all or a portion of other bodily objects that may be present in the image (e.g., neck, chest, shoulders, arms, or forehead). Of course, bounding boxes of all sizes are contemplated herein. Bounding may occur before, after, or at the same time as image segmentation. In some instances, the bounding box 320 and/or bounded image 310 may be presented to a user via the mobile computing device, as illustrated in FIG. 3B, but need not necessarily be so.

It is important to prevent facial macro features from contaminating the skin age analysis by the CNN. If the facial macro features are not masked, the CNN may learn to predict the skin age of a person from macro feature cues rather than from micro feature cues such as fine lines and wrinkles, which are known to be much more influential on how people perceive skin age. This can be demonstrated by digitally altering an image to remove facial micro features such as fine lines, wrinkles, and pigmentation disorders, and observing that the apparent age provided by the system does not change. Masking may occur before and/or after the image is segmented and/or bounded. In the present system, masking may be accomplished by replacing the pixels in a facial macro feature with pixels that have a uniform RGB value that is neither zero (i.e., black) nor 255 (i.e., white). For example, it may be desirable to replace the pixels in the macro feature with pixels that have the median RGB value of the skin in the region of interest. It is believed, without being limited by theory, that by masking the facial macro features with uniformly colored pixels or otherwise nondescript pixels, the CNN will learn to predict age using features other than the macro features (e.g., facial micro features such as fine lines and wrinkles). Masking herein may be accomplished using any suitable masking means known in the art, such as, for example, Matlab® brand computer software.
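
A minimal numpy sketch of the masking step described above, filling the macro-feature pixels with the median RGB value of the region-of-interest skin; the boolean masks are assumed to come from the landmark and segmentation steps:

```python
import numpy as np

def mask_macro_feature(image, feature_mask, roi_mask):
    """Replace macro-feature pixels with the median RGB value of RoI skin.

    image:        HxWx3 uint8 array
    feature_mask: HxW boolean array, True where the macro feature lies
    roi_mask:     HxW boolean array, True on region-of-interest skin
    """
    masked = image.copy()
    # Median R, G, and B values of the skin pixels in the region of interest.
    median_rgb = np.median(image[roi_mask], axis=0).astype(image.dtype)
    masked[feature_mask] = median_rgb  # uniform, nondescript fill
    return masked
```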

Even when masking the facial macro features as described above, a sophisticated convolutional neural network may still learn to predict skin age based on “phantom” macro features. In other words, the neural network may still learn to recognize differences in the patterns of median RGB pixels because the patterns generally correspond to the size and/or position of the masked facial macro feature. The CNN may then apply the pattern differences to its age prediction analysis. To avoid this problem, it is important to provide more than one input variant (e.g., 2, 3, 4, 5, 6, or more) of the processed image to the CNN. By varying how the masked macro features are presented to the CNN, it is believed, without being limited by theory, that the CNN is less likely to learn to use differences in the median RGB pixel patterns to predict skin age.

FIGS. 4A to 4G illustrate an example of a first input variant in which an image of the face is segmented into six discrete zones and then the macro features in each segment are masked to provide the desired region of interest. In this example, the processing logic causes all pixels associated with a macro feature (e.g., eye, nose, mouth, ear, eyebrow, hair) in the image segment to be filled in with the median RGB color space value of all pixels located in the relevant region of interest (e.g., a portion of the image that does not include a macro feature). FIG. 4B illustrates how the eyes and/or eyebrows are masked in a forehead segment to provide a forehead RoI. FIGS. 4C and 4D illustrate how the eyes, eyebrows, nose, cheeks, and hair are masked in an under-eye segment to provide an under-eye RoI. FIGS. 4E and 4F illustrate how the nose, mouth, and hair are masked in a cheek/nasolabial fold segment to provide a cheek/nasolabial fold RoI. FIG. 4G illustrates how the mouth is masked in a chin segment to provide a chin RoI. FIG. 4A illustrates how the masked features would appear on an image of the entire face when the individually masked segments are combined and the background features are removed by a bounding box.

FIGS. 5A to 5G illustrate an example of a second input variant. In this example, the processing logic causes all pixels associated with a macro feature in the unsegmented image of the face (“full-face”) to be filled in with the median RGB value of all pixels disposed in a region of interest. The processing logic may then cause the masked, full-face image to be segmented, e.g., as described above. FIG. 5A illustrates a full-face image in which certain macro features are masked and the background features are removed by a bounding box. FIGS. 5B to 5G illustrate how each region of interest appears when the masked, full-face image of FIG. 5A is segmented. When FIGS. 4A to 4G are compared to their counterparts in FIGS. 5A to 5G, both the full-face images and the individual regions of interest differ somewhat from one another.

FIGS. 6A to 6G illustrate an example of a third input variant. In this example, the processing logic causes the system to identify regions of interest in the full-face image, and then segment the image into six discrete zones comprising the regions of interest. FIG. 6A illustrates a full-face image in which the nose is used as an anchor feature and the six image segments are identified. FIGS. 6B to 6G illustrate a region of interest extracted from each image segment. FIG. 6B depicts a forehead RoI; FIGS. 6C and 6D each depict an under-eye RoI; FIGS. 6E and 6F each depict a cheek/nasolabial fold RoI; and FIG. 6G depicts a chin RoI.

In some instances, it may be desirable to select only a portion of a particular region of interest for analysis by the CNN. For example, it may be desirable to select a patch of skin disposed in and/or around the center of the region of interest, and scale the selected skin patch to a uniform size. Continuing with this example, the largest rectangle of skin-only area may be extracted from the center of each region of interest and rescaled to a 256 pixel×256 pixel skin patch.
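
A sketch of this patch-extraction step, under the assumption that the skin-only rectangle centered in the region of interest has already been identified:

```python
import cv2

def extract_skin_patch(roi_image, rect, size=256):
    """Crop the largest skin-only rectangle and rescale it to size x size."""
    x, y, w, h = rect                  # skin-only rectangle within the RoI
    patch = roi_image[y:y + h, x:x + w]
    # Rescale to a uniform 256 x 256 pixel skin patch for the CNN.
    return cv2.resize(patch, (size, size), interpolation=cv2.INTER_AREA)
```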

FIG. 7 illustrates the image processing flow path 700 for the methods and systems herein. In block 702, an image is received by the system. In block 704, processing logic causes one or more faces in the received image to be detected or selected for further processing. In block 706, processing logic causes landmark features to be detected in the detected or selected face. In block 708, processing logic causes the image to be normalized. In blocks 710A and 710B, processing logic causes the image to be segmented and the macro features in each segment to be masked as part of a first input variant 710. In blocks 711A and 711B, processing logic causes the macro features in the normalized image to be masked and then segmented as part of a second input variant 711. In blocks 712A and 712B, processing logic causes the system to identify regions of interest for analysis by the CNN, and then segment the image as part of a third input variant 712. In block 714, processing logic causes a portion of each region of interest to be extracted and scaled to a suitable size.

Convolutional Neural Network

The systems and methods herein use a trained convolutional neural network, which functions as an in silico skin model, to provide an apparent skin age to a user by analyzing an image of the skin of a person (e.g., facial skin). The CNN comprises multiple layers of neuron collections that use the same filters for each pixel in a layer. Using the same filters for each pixel in the various combinations of partially and fully connected layers reduces memory and processing requirements of the system. In some instances, the CNN comprises multiple deep networks, which are trained and function as discrete convolutional neural networks for a particular image segment and/or region of interest.

FIG. 8 illustrates an example of a CNN 800 configuration for use herein. As illustrated in FIG. 8, the CNN 800 includes four individual deep networks for analyzing individual regions of interest or portions thereof, which in this example are portions of the forehead, under-eye area, cheeks/nasolabial folds, and chin regions of interest. Of course, it is to be appreciated that the CNN may include fewer deep networks or more deep networks, as desired. The image analysis results from each deep network may be used to provide an apparent skin age for its respective region of interest and/or may be concatenated to provide an overall apparent skin age.
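
A minimal PyTorch sketch of this multi-network arrangement, with one small convolutional branch per region of interest and a final layer that combines the concatenated per-region outputs; the layer sizes and class names here are illustrative assumptions, not the architecture of FIG. 8:

```python
import torch
import torch.nn as nn

class RegionNet(nn.Module):
    """One deep network for a single 256x256 region-of-interest patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.age = nn.Linear(32, 1)   # apparent age for this region

    def forward(self, x):
        return self.age(self.features(x))

class MultiRegionCNN(nn.Module):
    """Four discrete deep networks whose outputs are concatenated."""
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList(RegionNet() for _ in range(4))
        self.overall = nn.Linear(4, 1)  # combine per-region ages

    def forward(self, forehead, under_eye, cheek, chin):
        regions = [forehead, under_eye, cheek, chin]
        per_region = torch.cat(
            [net(x) for net, x in zip(self.branches, regions)], dim=1)
        return self.overall(per_region), per_region
```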

The CNN herein may be trained using a deep learning technique that allows the CNN to learn what portions of an image contribute to skin age, much in the same way as a mammalian visual cortex learns to recognize important features in an image. For example, the CNN may be trained to determine locations, colors, and/or shade (e.g., lightness or darkness) of pixels that contribute to the skin age of a person. In some instances, the CNN training may involve using mini-batch stochastic gradient descent (SGD) with Nesterov momentum (and/or other algorithms). An example of utilizing a stochastic gradient descent is disclosed in U.S. Pat. No. 8,582,807.
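
A sketch of mini-batch SGD with Nesterov momentum as mentioned above, reusing the hypothetical MultiRegionCNN from the previous sketch; the learning rate, momentum, and loss are assumed hyperparameters:

```python
import torch

model = MultiRegionCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, nesterov=True)
loss_fn = torch.nn.MSELoss()

def train_step(batch, known_ages):
    """One mini-batch update; batch is a tuple of four RoI tensors."""
    optimizer.zero_grad()
    predicted_age, _ = model(*batch)
    # Supervised target: the predetermined age of the person in the image.
    loss = loss_fn(predicted_age.squeeze(1), known_ages)
    loss.backward()
    optimizer.step()
    return loss.item()
```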

In some instances, the CNN may be trained by providing an untrained CNN with a multitude of captured images to learn from. In some instances, the CNN can learn to identify portions of skin in an image that contribute to skin age through a process called supervised learning. “Supervised learning” generally means that the CNN is trained by analyzing images in which the age of the person in the image is predetermined. Depending on the accuracy desired, the number of training images may vary from a few images to a multitude of images (e.g., hundreds or even thousands) to a continuous input of images (i.e., to provide continuous training).

The systems and methods herein utilize a trained CNN that is capable of accurately predicting the apparent age of a user for a wide range of skin types. To generate an apparent age, an image of a region of interest (e.g., obtained from an image of a person's face) or a portion thereof is forward-propagated through the trained CNN. The CNN analyzes the image or image portion and identifies skin micro features in the image that contribute to the predicted age of the user (“trouble spots”). The CNN then uses the trouble spots to provide an apparent skin age for the region of interest and/or an overall apparent skin age.

In some instances, an image inputted to the CNN may not be suitable for analysis, for example, due to occlusion (e.g., hair covering a portion of the image, shadowing of a region of interest). In these instances, the CNN or other logic may discard the image prior to analysis by the CNN or discard the results of the CNN analysis prior to generation of an apparent age.

FIG. 9 depicts an example of a convolutional neural network 900 for use in the present system. The CNN 900 may include an inputted image 905 (e.g., a region of interest or portion thereof), one or more convolution layers C1, C2, one or more subsampling layers S1 and S2, one or more partially connected layers, one or more fully connected layers, and an output. To begin an analysis or to train the CNN, an image 905 is inputted into the CNN 900 (e.g., the image of a user). The CNN may sample one or more portions of the image to create one or more feature maps in a first convolution layer C1. For example, as illustrated in FIG. 9, the CNN may sample six portions of the image 905 to create six feature maps in the first convolution layer C1. Next, the CNN may subsample one or more portions of the feature map(s) in the first convolution layer C1 to create a first subsampling layer S1. In some instances, the subsampled portion of the feature map may be half the area of the feature map. For example, if a feature map comprises a sample area of 29×29 pixels from the image 905, the subsampled area may be 14×14 pixels. The CNN 900 may perform one or more additional levels of sampling and subsampling to provide a second convolution layer C2 and a second subsampling layer S2. It is to be appreciated that the CNN 900 may include any number of convolution layers and subsampling layers, as desired. Upon completion of the final subsampling layer (e.g., layer S2 in FIG. 9), the CNN 900 generates a fully connected layer F1, in which every neuron is connected to every other neuron. From the fully connected layer F1, the CNN can generate an output such as a predicted age or a heat map.
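
A sketch of the C1-S1-C2-S2-F1 stack described above; the channel count of six follows the feature maps in the figure, while the remaining sizes are assumptions for illustration:

```python
import torch.nn as nn

# Convolution (C1, C2) and subsampling (S1, S2) layers feeding a fully
# connected layer F1, in the spirit of FIG. 9.
conv_subsample_stack = nn.Sequential(
    nn.Conv2d(1, 6, 5),    # C1: six feature maps
    nn.ReLU(),
    nn.AvgPool2d(2),       # S1: subsample to half width and height
    nn.Conv2d(6, 16, 5),   # C2
    nn.ReLU(),
    nn.AvgPool2d(2),       # S2
    nn.Flatten(),
    nn.LazyLinear(120),    # F1: fully connected layer
    nn.ReLU(),
    nn.Linear(120, 1),     # output, e.g., a predicted age
)
```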

In some instances, the present system may determine a target skin age (e.g., the apparent age of the person minus a predetermined number of years (e.g., 10, 9, 8, 7, 6, 5, 4, 3, 2, or 1 year)) or the actual age of the person. The system may cause the target age to be propagated back to the original image as a gradient. The absolute value of a plurality of channels of the gradient may then be summed for at least one pixel and scaled from 0-1 for visualization purposes. The value of the scaled pixels may represent pixels that contribute most (and least) to the determination of the skin age of the user. Each scaling value (or range of values) may be assigned a color or shade, such that a virtual mask can be generated to graphically represent the scaled values of the pixels. In some instances, the CNN analysis, optionally in conjunction with habits and practices input provided by a user, can be used to help provide a skin care product and/or regimen recommendation.
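
A sketch of the gradient visualization just described: the target age is propagated back to the input image, the absolute values of the gradient channels are summed per pixel, and the result is scaled to 0-1; the model is assumed to be a single-output network such as the hypothetical RegionNet above:

```python
import torch

def skin_age_saliency(model, image, target_age):
    """Per-pixel contribution map via the gradient of the target age."""
    image = image.clone().requires_grad_(True)   # 1x3xHxW input
    predicted = model(image)
    # Propagate the target age back to the original image as a gradient.
    loss = (predicted - target_age).abs().sum()
    loss.backward()
    # Sum the absolute value of the gradient channels for each pixel...
    saliency = image.grad.abs().sum(dim=1).squeeze(0)
    # ...and scale to 0-1 for visualization.
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
```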

FIG. 10 depicts an exemplary user interface 1030 for capturing an image of a user and for providing customized product recommendations. As illustrated, the mobile computing device 1002 may provide an application for capturing an image of a user. Accordingly, FIG. 10 depicts an introductory page on the mobile computing device 1002 for beginning the process of capturing an image and providing customized product recommendations. The user interface 1030 also includes a start option 1032 for beginning the process.

FIG. 11 depicts an exemplary user interface 1130 illustrating an image that is analyzed for providing an apparent skin age and/or customized product recommendations to a user of the present system. In response to selection of the start option 1032 from FIG. 10, the user interface 1130 may be provided. As illustrated, the image capture device 103 may be utilized for capturing an image of the user. In some embodiments, the user may utilize a previously captured image. Regardless, upon capturing the image, the image may be provided in the user interface 1130. If the user does not wish the image be utilized, the user may retake the image. If the user approves the image, the user may select the next option 1132 to begin analyzing the image and proceeding to the next user interface.

FIG. 12 depicts an exemplary user interface 1230 for providing a questionnaire to a user to help customize product recommendations. As illustrated, the user interface 1230 may provide one or more questions for determining additional details regarding the user, including product preferences, current regimens, etc. As an example, the questions may include whether the user utilizes a moisturizer with sunscreen. One or more predefined answers 1232 may be provided for the user to select from.

FIG. 13 depicts an exemplary user interface 1330 for providing additional prompts for a questionnaire. In response to entering the requested data from the user interface 1230 of FIG. 12, the user interface 1330 may be provided. As illustrated, the user interface 1330 provides another question (such as whether the user prefers scented skin care products) along with three predefined answers 1332 for the user to select from. A submit option 1334 may also be provided for submitting the selected answer(s). It should be understood that while FIGS. 12 and 13 provide two questions, any number of questions may be provided to the user, depending on the particular embodiment. The questions and number of questions may depend on the user's actual age, which may be inputted in one or more of the steps exemplified herein, on the user's skin age, and/or other factors.

FIG. 14 depicts an exemplary user interface 1430 for providing a skin age of a user, based on a captured image. In response to completing the questionnaire of FIGS. 12 and 13, the user interface 1430 may be provided. As illustrated, the user interface 1430 may provide the user's skin age and the captured image with at least one identifier 1432 to indicate which region(s) of interest is/are contributing to the apparent skin age provided by the CNN. In some instances, the system may also provide a list 1434 of the regions of interest that contribute to the apparent skin age provided by the CNN. A description 1436 may also be provided, as well as a product-recommendation option 1438 for viewing customized product recommendations.

FIG. 15 illustrates another exemplary user interface 1530 for displaying the results of the image analysis. The user interface 1530 may include a results section 1532 that indicates whether the analysis was successful or if there was a problem encountered during the process (e.g., the image quality was poor). The user interface 1530 may include a product-recommendation option (not shown). Additionally or alternatively, the results section 1532 may display an overall apparent skin age to the user and/or an apparent skin age for each region of interest. The user interface 1530 may present the user with an age-input option 1536. An additional-predictions option 1538 may also be provided.

FIG. 16 depicts an exemplary user interface 1630 for providing product recommendations. In response to selection of a product-recommendation option by a user, the user interface 1630 may be provided. As illustrated, the user interface 1630 may provide one or more recommended products that were determined based on the user's age, the regions of interest contributing to the user's apparent skin age, and/or the target age (e.g., the apparent skin age and/or the user's actual age minus a predetermined number of years). Specifically, the at least one product may be determined as being applicable to skin disposed in the region of interest that contributes most to the apparent skin age of the user. As an example, creams, moisturizers, lotions, sunscreens, cleansers, and the like may be recommended. Also provided is a regimen option 1632 for providing a recommended regimen. A purchase option 1634 may also be provided.

FIG. 17 depicts an exemplary user interface 1730 for providing details of product recommendations. In response to selection of the regimen option 1632 from FIG. 16, the user interface 1730 may be provided. As illustrated, the user interface 1730 may provide a products option 1732 and a schedule option 1734 for using the recommended product in the user's beauty regimen. Additional information related to the first stage of the beauty regimen may be provided in section 1736. Similarly, data related to a second and/or subsequent stage of the regimen may be provided in section 1738.

FIG. 18 depicts an exemplary user interface 1830 that provides a recommended beauty regimen. In response to selection of the schedule option 1734 from FIG. 17, the user interface 1830 may be provided. The user interface 1830 may provide a listing of recommended products, as well as a schedule, including schedule details for the regimen. Specifically, the user interface 1830 may provide a time of day at which each product may be used. A details option 1834 may provide the user with additional details regarding the products and the regimen.

FIG. 19 depicts an exemplary user interface 1930 for providing additional details associated with a beauty regimen and the products used therein. The user interface 1930 may be provided in response to selection of the details option 1834 from FIG. 18. As illustrated, the user interface 1930 may provide details regarding products, application tips, etc. In some instances, a “science-behind” option 1932, 1936 and a “how-to-demo” option 1934, 1938 may be provided. In response to selection of the science-behind option 1932, 1936, details regarding the recommended product and the application regimen may be provided. In response to selection of the how-to-demo option 1934, 1938, audio and/or video may be provided for instructing the user on a strategy for applying the product. Similarly, the subsequent portions of the regimen (such as step 2 depicted in FIG. 19) may also include a science-behind option 1932, 1936 and a how-to-demo option 1934, 1938.

FIG. 20 depicts an exemplary user interface 2030 for providing recommendations related to a determined regimen. In response to selection of the purchase option 1634 (FIG. 16), the user interface 2030 may be provided. As illustrated, the user interface 2030 includes purchasing options 2032, 2034, 2036 for purchasing one or more recommended products. The user interface 2030 may also provide an add-to-cart option 2038 and a shop-more option 2040.

FIG. 21 depicts an exemplary user interface 2130 for providing product recommendations to a user timeline. As illustrated, the user interface 2130 may provide a notification that one or more of the recommended products have been added to the user's timeline. Upon purchasing a product (e.g., via the user interface 2030 from FIG. 20), the purchased products may be added to the recommended regimen for the user. As such, the notification may include an acceptance option 2132 and a view timeline option 2134.

FIG. 22 depicts components of a remote computing device 2204 for providing customized skin care product and/or regimen recommendations, according to embodiments described herein. The remote computing device 2204 includes a processor 2230, input/output hardware 2232, network interface hardware 2234, a data storage component 2236 (which stores image data 2238 a, product data 2238 b, and/or other data), and the memory component 2240 b. The memory component 2240 b may be configured as volatile and/or nonvolatile memory and, as such, may include random access memory (including SRAM, DRAM, and/or other types of RAM), flash memory, secure digital (SD) memory, registers, compact discs (CD), digital versatile discs (DVD), and/or other types of non-transitory computer-readable mediums. Depending on the particular embodiment, these non-transitory computer-readable mediums may reside within the remote computing device 2204 and/or external to the remote computing device 2204.

The memory component 2240 b may store operating logic 2242, processing logic 2244 b, training logic 2244 c, and analyzing logic 2244 d. The training logic 2244 c, processing logic 2244 b, and analyzing logic 2244 d may each include a plurality of different pieces of logic, each of which may be embodied as a computer program, firmware, and/or hardware, as an example. A local communications interface 2246 is also included in FIG. 22 and may be implemented as a bus or other communication interface to facilitate communication among the components of the remote computing device 2204.

The processor 2230 may include any processing component operable to receive and execute instructions (such as from the data storage component 2236 and/or the memory component 2240 b). As described above, the input/output hardware 2232 may include and/or be configured to interface with the components of FIG. 22.

The network interface hardware 2234 may include and/or be configured for communicating with any wired or wireless networking hardware, including an antenna, a modem, a LAN port, a wireless fidelity (Wi-Fi) card, a WiMax card, a Bluetooth™ module, mobile communications hardware, and/or other hardware for communicating with other networks and/or devices. From this connection, communication may be facilitated between the remote computing device 2204 and other computing devices, such as those depicted in FIG. 1.

The operating logic 2242 may include an operating system and/or other software for managing components of the remote computing device 2204. As discussed above, the training logic 2244 c may reside in the memory component 2240 b and may be configured to cause the processor 2230 to train the convolutional neural network. The processing logic 2244 b may also reside in the memory component 2240 b and be configured to process images prior to analysis by the analyzing logic 2244 d. Similarly, the analyzing logic 2244 d may be utilized to analyze images for skin age prediction.

It should be understood that while the components in FIG. 22 are illustrated as residing within the remote computing device 2204, this is merely an example. In some embodiments, one or more of the components may reside external to the remote computing device 2204 and/or the remote computing device 2204 may be configured as a mobile device. It should also be understood that, while the remote computing device 2204 is illustrated as a single device, this is also merely an example. In some embodiments, the training logic 2244 c, the processing logic 2244 b, and/or the analyzing logic 2244 d may reside on different computing devices. As an example, one or more of the functionalities and/or components described herein may be provided by the mobile computing device 102 and/or other devices, which may be communicatively coupled to the remote computing device 104. These computing devices may also include hardware and/or software for performing the functionality described herein.

Additionally, while the remote computing device 2204 is illustrated with the training logic 2244 c, processing logic 2244 b, and analyzing logic 2244 d as separate logical components, this is also an example. In some embodiments, a single piece of logic may cause the remote computing device 2204 to provide the described functionality.

FIG. 23 depicts a flowchart for providing customized product recommendations. In block 2350, an image of a user is captured. In block 2352, the captured image is processed for analysis. In block 2354, questions are provided to the user. In block 2356, answers to the questions are received from the user. In block 2358, the image is analyzed by the CNN. In block 2360, an apparent skin age is provided to the user. In block 2361, an optional skin profile may be generated. The optional skin profile may include, for example, the age of one or more of the regions of interest, a skin condition, or the influence a particular region of interest has on overall skin age. In block 2362, a customized product recommendation is provided to the user.

In some instances, at least some of the images and other data described herein may be stored as historical data for later use. As an example, tracking of user progress may be determined based on this historical data. Other analyses may also be performed on this historical data, as desired.

The dimensions and values disclosed herein are not to be understood as being strictly limited to the exact numerical values recited. Instead, unless otherwise specified, each such dimension is intended to mean both the recited value and a functionally equivalent range surrounding that value. For example, a dimension disclosed as “40 mm” is intended to mean “about 40 mm.”

Every document cited herein, including any cross referenced or related patent or application and any patent application or patent to which this application claims priority or benefit thereof, is hereby incorporated herein by reference in its entirety unless expressly excluded or otherwise limited. The citation of any document is not an admission that it is prior art with respect to any invention disclosed or claimed herein or that it alone, or in any combination with any other reference or references, teaches, suggests or discloses any such invention. Further, to the extent that any meaning or definition of a term in this document conflicts with any meaning or definition of the same term in a document incorporated by reference, the meaning or definition assigned to that term in this document shall govern.

While particular embodiments of the present invention have been illustrated and described, it would be obvious to those skilled in the art that various other changes and modifications can be made without departing from the spirit and scope of the invention. It is therefore intended to cover in the appended claims all such changes and modifications that are within the scope of this invention.

Claims (18)

What is claimed is:

1. A system for determining an apparent skin age of a person, comprising: a non-transitory computer readable storage medium with logic stored thereon, wherein the logic causes the system to

a) receive a digital image comprising a human face;

b) process the digital image for analysis, wherein processing comprises locating the human face in the digital image, segmenting the digital image into two or more image segments, and masking at least one macro feature present on the face;

c) analyze the processed image using a convolutional neural network (CNN) comprising a discrete deep neural network for each image segment, wherein the analysis includes identifying in a region of interest in each image segment at least one pixel disposed in a facial micro feature that is indicative of the person's skin age;

d) determine with the CNN an overall apparent skin age of the person based on the analysis of each deep neural network; and

e) display the overall apparent skin age on a display device visible to a user.

2. The system of claim 1, further comprising an image capture device coupled to a computer, wherein the digital image is captured by the image capture device and received by the computer.

3. The system of claim 1, wherein masking the macro feature comprises replacing pixels in the facial macro feature with pixels that have a median RGB value of skin disposed in a region of interest.
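
The masking step of claim 3 translates almost directly into array operations. The following is one plausible reading, a sketch only: the feature-mask geometry and the skin region of interest used here are made-up placeholder values.

```python
import numpy as np

def mask_macro_feature(img: np.ndarray, feature_mask: np.ndarray,
                       roi: tuple) -> np.ndarray:
    """img: HxWx3 uint8; feature_mask: HxW bool; roi: (row slice, col slice)
    covering skin only. Replaces masked pixels with the ROI's median RGB."""
    out = img.copy()
    skin = img[roi].reshape(-1, 3)
    median_rgb = np.median(skin, axis=0).astype(img.dtype)  # per-channel median
    out[feature_mask] = median_rgb
    return out

# Usage with placeholder geometry: mask an eye band, sample skin from a cheek.
face = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
eyes = np.zeros((256, 256), dtype=bool)
eyes[80:110, 60:200] = True
masked = mask_macro_feature(face, eyes, (slice(150, 200), slice(60, 120)))
```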

4. The system of claim 1, wherein a result of each analysis from each deep neural network is used to determine an apparent skin age for that region of interest.

5. The system of claim 4, wherein results from all the deep neural network analyses are concatenated to provide an overall apparent skin age.
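
Claims 1, 4, and 5 together describe one discrete deep network per image segment, each yielding a regional apparent age, with the results concatenated to give the overall value. A minimal PyTorch sketch of that shape, with assumed segment names and layer sizes rather than the disclosed network, might look like:

```python
import torch
import torch.nn as nn

class SegmentNet(nn.Module):
    """Small stand-in CNN for one image segment (one discrete deep network)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)          # apparent age for this region

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class ApparentSkinAgeCNN(nn.Module):
    def __init__(self, segments=("forehead", "cheeks", "eyes", "chin")):
        super().__init__()
        self.nets = nn.ModuleDict({s: SegmentNet() for s in segments})
        self.fuse = nn.Linear(len(segments), 1)   # combines regional ages

    def forward(self, batches):               # batches: segment name -> tensor
        regional = torch.cat([self.nets[s](batches[s]) for s in self.nets], dim=1)
        return self.fuse(regional)            # overall apparent skin age

# Usage: one batch of crops per segment.
# net = ApparentSkinAgeCNN()
# crops = {s: torch.randn(2, 3, 64, 64) for s in ("forehead", "cheeks", "eyes", "chin")}
# overall = net(crops)                        # shape (2, 1)
```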

6. The system of claim 1, wherein processing the image further includes providing two or more input variations to the CNN.

7. The system of claim 6, wherein the image is segmented and then the macro feature is masked to provide a first input variation.

8. The system of claim 7, wherein the macro feature is masked and then the image is segmented to provide a second input variation.
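
Claims 6 through 8 feed the CNN the same face prepared in two orders: segment-then-mask and mask-then-segment. A toy sketch, with deliberately crude stand-ins for the real segmentation and masking steps:

```python
import numpy as np

def segment(img):          # stub: split the face into top and bottom halves
    h = img.shape[0] // 2
    return [img[:h], img[h:]]

def mask(img):             # stub: blank out a fixed band as the "macro feature"
    out = img.copy()
    out[10:20] = 0
    return out

def input_variations(img):
    first = [mask(s) for s in segment(img)]   # claim 7: segment, then mask
    second = segment(mask(img))               # claim 8: mask, then segment
    return first, second

face = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
variation_1, variation_2 = input_variations(face)  # both go to the CNN
```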

9. A method of determining the apparent skin age of a person, comprising:

a) receiving an image of the person, wherein the image includes at least a portion of the person's face;

b) processing the image with a computer, wherein processing the image includes identifying the portion of the image comprising the face, segmenting the digital image into two or more image segments, and masking a macro feature of the face;

c) analyzing the image using a convolutional neural network comprising a discrete deep neural network for each image segment stored on a memory component of the computer to provide an apparent skin age, wherein analyzing the image includes identifying in a region of interest in each image segment at least one pixel that is indicative of skin age and utilizing the at least one pixel from each image segment to provide an overall apparent skin age; and

d) displaying the overall apparent skin age on a display device visible to a user.

10. The method of claim 9, further comprising an image capture device coupled to a computer, wherein the digital image is captured by the image capture device and received by the computer.

11. The method of claim 9, wherein masking the macro feature comprises replacing pixels in the facial macro feature with pixels that have a median RGB value of skin disposed in a region of interest.

12. The method of claim 9, wherein a result of each analysis from each deep neural network is used to determine an apparent skin age for that region of interest.

13. The method of claim 12, further comprising concatenating results from all the deep neural network analyses to provide the overall apparent skin age.

14. The method of claim 9, wherein processing the image further includes providing two or more input variations to the CNN.

15. The method of claim 14, wherein the image is segmented and then the macro feature is masked to provide a first input variation.

16. The method of claim 15, wherein a first input variation is provided by segmenting the image and then masking the macro feature, and a second input variation is provided by masking the macro feature and then segmenting the image.

17. A system for determining an apparent skin age of a person, comprising: a non-transitory computer readable storage medium with logic stored thereon, wherein the logic causes the system to:

a) receive a digital image comprising a human face;

b) process the digital image for analysis, wherein processing comprises locating the human face in the digital image and masking at least one macro feature present on the face;

c) segment the digital image into two or more image segments;

d) scale the segmented digital image such that the full height of the facial image does not exceed 800 pixels;

e) bound the digital image in a bounding box to remove at least one of a background feature and a macro feature;

f) analyze the processed image using a convolutional neural network (CNN) comprising a discrete deep neural network for each image segment, wherein each deep neural network is trained to identify at least one pixel disposed in a facial micro feature that is indicative of the person's skin age, and wherein each discrete neural network generates an apparent skin age for a region of interest in its respective image segment;

g) determine with the CNN an overall apparent skin age of the person based on the apparent skin age from each deep neural network; and

h) display the overall apparent skin age on a display device visible to a user.
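
Steps d) and e) of claim 17 are ordinary image plumbing. A sketch using Pillow, where the 800-pixel ceiling comes from the claim but the crop box is a placeholder assumption:

```python
from PIL import Image

MAX_FACE_HEIGHT = 800                     # limit recited in claim 17, step d)

def scale_and_bound(img: Image.Image, box: tuple) -> Image.Image:
    """Downscale so height <= 800 px, then crop to the face bounding box."""
    if img.height > MAX_FACE_HEIGHT:
        scale = MAX_FACE_HEIGHT / img.height
        img = img.resize((round(img.width * scale), MAX_FACE_HEIGHT))
    return img.crop(box)                  # box = (left, upper, right, lower)

# Usage with a hypothetical box around the located face:
# bounded = scale_and_bound(Image.open("selfie.jpg"), (120, 80, 620, 780))
```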

18. The system of claim 17, wherein the human face is located in the digital image using a Viola-Jones weak cascade technique.
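
The Viola-Jones detector named in claim 18 is available off the shelf: OpenCV ships pretrained Haar cascades for frontal faces. A minimal sketch (the parameter values are common defaults, not values from the patent):

```python
import cv2

# Pretrained Viola-Jones (Haar) frontal-face cascade bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def locate_face(bgr_image):
    """Return (x, y, w, h) of the first detected face, or None."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return tuple(faces[0]) if len(faces) else None
```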

US15/993,950 2017-05-31 2018-05-31 Systems and methods for determining apparent skin age Active 2038-11-12 US10818007B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/993,950 US10818007B2 (en) 2017-05-31 2018-05-31 Systems and methods for determining apparent skin age

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762513186P 2017-05-31 2017-05-31
US15/993,950 US10818007B2 (en) 2017-05-31 2018-05-31 Systems and methods for determining apparent skin age

Publications (2)

Publication Number Publication Date
US20180350071A1 US20180350071A1 (en) 2018-12-06
US10818007B2 true US10818007B2 (en) 2020-10-27

Family

ID=62713101

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/993,950 Active 2038-11-12 US10818007B2 (en) 2017-05-31 2018-05-31 Systems and methods for determining apparent skin age

Country Status (6)

Country Link
US (1) US10818007B2 (en)
EP (1) EP3631679B1 (en)
JP (1) JP6849825B2 (en)
KR (1) KR102297301B1 (en)
CN (1) CN110709856B (en)
WO (1) WO2018222808A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11553872B2 (en) * 2018-12-04 2023-01-17 L'oreal Automatic image-based skin diagnostics using deep learning
US20230196835A1 (en) * 2021-12-16 2023-06-22 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a skin area of a user for determining dark eye circles
WO2023114594A1 (en) * 2021-12-16 2023-06-22 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a skin area of a user for determining skin oiliness
US11810277B2 (en) * 2018-07-20 2023-11-07 Huawei Technologies Co., Ltd. Image acquisition method, apparatus, and terminal
US12230062B2 (en) * 2021-12-16 2025-02-18 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a skin area of a user for determining dark eye circles

Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11055762B2 (en) 2016-03-21 2021-07-06 The Procter & Gamble Company Systems and methods for providing customized product recommendations
US20180190377A1 (en) * 2016-12-30 2018-07-05 Dirk Schneemann, LLC Modeling and learning character traits and medical condition based on 3d facial features
CN110678875B (en) 2017-05-31 2023-07-11 宝洁公司 System and method for guiding a user to take a self-photograph
US10990858B2 (en) * 2018-01-05 2021-04-27 L'oreal Machine-implemented facial health and beauty assistant
US11961224B2 (en) * 2019-01-04 2024-04-16 Stella Surgical Device for the qualitative evaluation of human organs
US12016696B2 (en) * 2019-01-04 2024-06-25 Stella Surgical Device for the qualitative evaluation of human organs
US10891331B1 (en) 2019-02-07 2021-01-12 Pinterest, Inc. Skin tone filter
EP3928327A1 (en) 2019-02-19 2021-12-29 Johnson & Johnson Consumer Inc. Use of artificial intelligence to identify novel targets and methodologies for skin care treatment
KR102653079B1 (en) 2019-04-23 2024-04-01 더 프록터 앤드 갬블 캄파니 Apparatus and method for measuring cosmetic skin properties
EP3959724A1 (en) 2019-04-23 2022-03-02 The Procter & Gamble Company Apparatus and method for visualizing cosmetic skin attributes
WO2021003574A1 (en) * 2019-07-10 2021-01-14 Jiang Ruowei Systems and methods to process images for skin analysis and to visualize skin analysis
KR102746770B1 (en) * 2019-07-10 2024-12-30 루오웨이 지앙 Systems and methods for processing images for skin analysis and visualizing skin analysis
US20210056357A1 (en) * 2019-08-19 2021-02-25 Board Of Trustees Of Michigan State University Systems and methods for implementing flexible, input-adaptive deep learning neural networks
US20240050025A1 (en) * 2019-10-03 2024-02-15 Pablo Prieto Detection and treatment of dermatological conditions
US11657481B2 (en) * 2019-11-18 2023-05-23 Shinyfields Limited Systems and methods for selective enhancement of skin features in images
US11687778B2 (en) 2020-01-06 2023-06-27 The Research Foundation For The State University Of New York Fakecatcher: detection of synthetic portrait videos using biological signals
KR102264315B1 (en) 2020-01-09 2021-06-14 한국에스케이에프씰 주식회사 Multi-turn Absolute Torque Angle Sensor Module
US10811138B1 (en) * 2020-03-11 2020-10-20 Memorial Sloan Kettering Cancer Center Parameter selection model using image analysis
KR102167185B1 (en) * 2020-03-18 2020-10-19 이승락 Skin test method and cosmetic composition manufacturing method using the same
FR3108250A1 (en) * 2020-03-23 2021-09-24 Ieva PROCESS FOR DETERMINING A BEAUTY, DERMATOLOGICAL OR HAIR RITUAL ASSOCIATED WITH A SPECIFIC USER
CN113536871A (en) * 2020-04-17 2021-10-22 纽顿医学美容集团有限公司 Image preprocessing method, facial skin age estimation method and electronic device
CN111582057B (en) * 2020-04-20 2022-02-15 东南大学 Face verification method based on local receptive field
CN113761985A (en) * 2020-06-05 2021-12-07 中国科学院上海营养与健康研究所 Method and apparatus for determining localized areas affecting the degree of facial aging
US20220122354A1 (en) * 2020-06-19 2022-04-21 Pinterest, Inc. Skin tone determination and filtering
CN111767858B (en) * 2020-06-30 2024-03-22 北京百度网讯科技有限公司 Image recognition method, device, equipment and computer storage medium
WO2022002964A1 (en) * 2020-06-30 2022-01-06 L'oréal High-resolution controllable face aging with spatially-aware conditional gans
KR20220055018A (en) 2020-10-26 2022-05-03 딤포 몰라테리 Method and apparatus for recommending skin products using facial images
FR3118241B1 (en) * 2020-12-23 2024-08-09 Oreal APPLICATION OF A CONTINUOUS EFFECT VIA CLASS INCORPORATIONS ESTIMATED BY THE MODEL
EP4256535A1 (en) 2020-12-23 2023-10-11 L'oreal Applying a continuous effect via model-estimated class embeddings
WO2022150449A1 (en) 2021-01-11 2022-07-14 The Procter & Gamble Company Dermatological imaging systems and methods for generating three-dimensional (3d) image models
FR3125407A1 (en) * 2021-07-23 2023-01-27 L'oreal PREDICTION OF AGING TREATMENT RESULTS BASED ON AGEOTYPE
WO2022232627A1 (en) * 2021-04-30 2022-11-03 L'oreal Predicting aging treatment outcomes based on a skin ageotype
CN117355875A (en) * 2021-05-20 2024-01-05 伊卡美学导航股份有限公司 Computer-based body part analysis method and system
KR102465456B1 (en) * 2021-08-13 2022-11-11 주식회사 에이아이네이션 Personalized makeup recommendation method and device through artificial intelligence-based facial age and wrinkle analysis
KR102406377B1 (en) * 2021-08-13 2022-06-09 주식회사 에이아이네이션 Artificial intelligence-based virtual makeup method and device that can control the degree of makeup transfer for each face part
KR102436127B1 (en) 2021-09-03 2022-08-26 주식회사 룰루랩 Method and apparatus for detecting wrinkles based on artificial neural network
KR20230046210A (en) * 2021-09-29 2023-04-05 주식회사 엘지생활건강 Age estimation device
US20230274830A1 (en) * 2022-02-25 2023-08-31 Realfacevalue B.V. Method for perceptive traits based semantic face image manipulation and aesthetic treatment recommendation
US11816144B2 (en) 2022-03-31 2023-11-14 Pinterest, Inc. Hair pattern determination and filtering
KR102530149B1 (en) * 2022-07-01 2023-05-08 김수동 Diagnostic method of face skin
JP7605883B2 (en) 2022-08-04 2024-12-24 花王株式会社 Skin condition estimation method
WO2024029600A1 (en) * 2022-08-04 2024-02-08 花王株式会社 Skin condition estimating method
GB2632164A (en) * 2023-07-27 2025-01-29 Boots Co Plc Radiance measurement method and system
CN117746479A (en) * 2023-12-20 2024-03-22 相玉科技(北京)有限公司 Visualization method and device for image recognition, electronic equipment and medium
CN117765597A (en) * 2023-12-28 2024-03-26 相玉科技(北京)有限公司 Face difference visualization method, device, electronic equipment and readable medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4303092B2 (en) * 2003-11-12 2009-07-29 株式会社国際電気通信基礎技術研究所 Age estimation apparatus, age estimation method, and age estimation program
JP5067224B2 (en) * 2008-03-24 2012-11-07 セイコーエプソン株式会社 Object detection apparatus, object detection method, object detection program, and printing apparatus
JP2012053813A (en) * 2010-09-03 2012-03-15 Dainippon Printing Co Ltd Person attribute estimation device, person attribute estimation method and program
US20120288168A1 (en) * 2011-05-09 2012-11-15 Telibrahma Convergent Communications Pvt. Ltd. System and a method for enhancing appeareance of a face
WO2014144275A2 (en) * 2013-03-15 2014-09-18 Skin Republic, Inc. Systems and methods for specifying and formulating customized topical agents
KR20160061856A (en) * 2014-11-24 2016-06-01 삼성전자주식회사 Method and apparatus for recognizing object, and method and apparatus for learning recognizer
CN104573673A (en) * 2015-01-28 2015-04-29 广州远信网络科技发展有限公司 Face image age recognition method
JP6058722B2 (en) * 2015-03-17 2017-01-11 株式会社ジェイメック Skin image analysis apparatus, image processing apparatus, and computer program
CN105760850B (en) * 2016-03-11 2019-02-15 重庆医科大学 A non-invasive age estimation method based on skin texture information
CN105844236B (en) * 2016-03-22 2019-09-06 重庆医科大学 Age testing method based on skin image information processing
CN106203306A (en) * 2016-06-30 2016-12-07 北京小米移动软件有限公司 The Forecasting Methodology at age, device and terminal
CN106469298A (en) * 2016-08-31 2017-03-01 乐视控股(北京)有限公司 Age recognition methodss based on facial image and device
CN106529402B (en) * 2016-09-27 2019-05-28 中国科学院自动化研究所 The face character analysis method of convolutional neural networks based on multi-task learning

Patent Citations (145)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4276570A (en) 1979-05-08 1981-06-30 Nancy Burson Method and apparatus for producing an image of a person's face at a different age
US5850463A (en) 1995-06-16 1998-12-15 Seiko Epson Corporation Facial image processing method and facial image processing apparatus
US5983120A (en) 1995-10-23 1999-11-09 Cytometrics, Inc. Method and apparatus for reflected imaging analysis
EP1030267B1 (en) 1997-03-06 2010-01-27 DRDC limited Method of correcting face image, makeup simulation method, makeup method, makeup supporting device and foundation transfer film
US6619860B1 (en) 1997-11-14 2003-09-16 Eastman Kodak Company Photobooth for producing digitally processed images
US6556196B1 (en) 1999-03-19 2003-04-29 Max-Planck-Gesellschaft Zur Forderung Der Wissenschaften E.V. Method and apparatus for the processing of images
US6571003B1 (en) 1999-06-14 2003-05-27 The Procter & Gamble Company Skin imaging and analysis systems and methods
WO2000076398A1 (en) 1999-06-14 2000-12-21 The Procter & Gamble Company Skin imaging and analysis systems and methods
EP1189536B1 (en) 1999-06-14 2011-03-23 The Procter & Gamble Company Skin imaging and analysis methods
US20100189342A1 (en) 2000-03-08 2010-07-29 Cyberextruder.Com, Inc. System, method, and apparatus for generating a three-dimensional representation from one or more two-dimensional images
US20010037191A1 (en) 2000-03-15 2001-11-01 Infiniteface Inc. Three-dimensional beauty simulation client-server system
US6959119B2 (en) 2000-11-03 2005-10-25 Unilever Home & Personal Care Usa Method of evaluating cosmetic products on a consumer with future predictive transformation
US6734858B2 (en) 2000-12-27 2004-05-11 Avon Products, Inc. Method and apparatus for use of computer aging to demonstrate a product benefit
US20030065255A1 (en) 2001-10-01 2003-04-03 Daniela Giacchetti Simulation of an aesthetic feature on a facial image
US6761697B2 (en) 2001-10-01 2004-07-13 L'oreal Sa Methods and systems for predicting and/or tracking changes in external body conditions
US7634103B2 (en) 2001-10-01 2009-12-15 L'oreal S.A. Analysis using a three-dimensional facial image
US20030065589A1 (en) 2001-10-01 2003-04-03 Daniella Giacchetti Body image templates with pre-applied beauty products
EP1297781A1 (en) 2001-10-01 2003-04-02 L'oreal Early detection of beauty treatment progress
WO2003049039A1 (en) 2001-12-01 2003-06-12 University College London Performance-driven facial animation techniques
US20030198402A1 (en) 2002-04-18 2003-10-23 Zhengyou Zhang System and method for image-based surface detail transfer
US7200281B2 (en) 2002-04-18 2007-04-03 Microsoft Corp. System and method for image-based surface detail transfer
US20040122299A1 (en) 2002-12-24 2004-06-24 Yasutaka Nakata Method for the skin analysis
US20040170337A1 (en) 2003-02-28 2004-09-02 Eastman Kodak Company Method and system for enhancing portrait images that are processed in a batch mode
US20070053940A1 (en) 2003-04-23 2007-03-08 Kelly Huang Method of Measuring the Efficacy of a Skin Treatment Program
US20040213454A1 (en) 2003-04-28 2004-10-28 Industrial Technology Research Institute Statistical facial feature extraction method
US20040223631A1 (en) 2003-05-07 2004-11-11 Roman Waupotitsch Face recognition based on obtaining two dimensional information from three-dimensional face shapes
US7362886B2 (en) * 2003-06-05 2008-04-22 Canon Kabushiki Kaisha Age-based face recognition
US20060023923A1 (en) 2004-06-10 2006-02-02 Geng Zheng J Method and system for a three dimensional facial recognition system
WO2006005917A1 (en) 2004-07-08 2006-01-19 University Of Kent Plausible ageing of the human face
US20060274071A1 (en) 2004-09-29 2006-12-07 L'oreal Method of predicting the appearance of at least a portion of an individual's body
EP1813189B1 (en) 2004-10-22 2010-03-03 Shiseido Company, Ltd. Skin condition diagnostic system and beauty counseling system
US8094186B2 (en) 2004-10-22 2012-01-10 Shiseido Company, Ltd. Skin condition diagnosis system and beauty counseling system
GB2424332A (en) 2004-12-13 2006-09-20 Mark Charles Spittle An image processing system
US20060257041A1 (en) 2005-05-16 2006-11-16 Fuji Photo Film Co., Ltd. Apparatus, method, and program for image processing
JP2007050158A (en) 2005-08-19 2007-03-01 Utsunomiya Univ Skin image processing method and processing apparatus, and skin age estimation method using the same
US20070052726A1 (en) 2005-09-08 2007-03-08 David Wright Method and system for likeness reconstruction
US20070071314A1 (en) 2005-09-09 2007-03-29 Nina Bhatti Capture and systematic use of expert color analysis
US20070070440A1 (en) 2005-09-27 2007-03-29 Fuji Photo Film Co., Ltd. Image processing method, image processing apparatus, and computer-readable recording medium storing image processing program
WO2007044815A2 (en) 2005-10-11 2007-04-19 Animetrics Inc. Generation of normalized 2d imagery and id systems via 2d to 3d lifting of multifeatured objects
WO2007051299A1 (en) 2005-11-04 2007-05-10 Cryos Technology, Inc. Surface analysis method and system
US20070104472A1 (en) 2005-11-08 2007-05-10 Shuxue Quan Skin color prioritized automatic focus control via sensor-dependent skin color detection
US20070229498A1 (en) 2006-03-29 2007-10-04 Wojciech Matusik Statistical modeling for synthesis of detailed facial geometry
CN1870047A (en) 2006-06-15 2006-11-29 西安交通大学 Human face image age changing method based on average face and senile proportional image
WO2008003146A2 (en) 2006-07-03 2008-01-10 Ilan Karavani Method for determination of the type of skin of the face of a person and method for the determination of the aging of the skin of person's face.
US8077931B1 (en) 2006-07-14 2011-12-13 Chatman Andrew S Method and apparatus for determining facial characteristics
US20080080746A1 (en) 2006-10-02 2008-04-03 Gregory Payonk Method and Apparatus for Identifying Facial Regions
US20080089561A1 (en) 2006-10-11 2008-04-17 Tong Zhang Face-based image clustering
US20110016001A1 (en) 2006-11-08 2011-01-20 24/8 Llc Method and apparatus for recommending beauty-related products
US20090245603A1 (en) 2007-01-05 2009-10-01 Djuro Koruga System and method for analysis of light-matter interaction based on spectral convolution
US20080194928A1 (en) 2007-01-05 2008-08-14 Jadran Bandic System, device, and method for dermal imaging
WO2008086311A2 (en) 2007-01-05 2008-07-17 Myskin, Inc. System, device and method for dermal imaging
US20100185064A1 (en) 2007-01-05 2010-07-22 Jadran Bandic Skin analysis methods
US8014589B2 (en) 2007-02-06 2011-09-06 Accenture Global Services Limited Transforming a submitted image of a person based on a condition of the person
US20110064331A1 (en) 2007-02-06 2011-03-17 Accenture Global Services Gmbh Transforming A Submitted Image Of A Person Based On A Condition Of The Person
US20080212894A1 (en) 2007-03-02 2008-09-04 Ramazan Demirli Method and apparatus for simulation of facial skin aging and de-aging
US20100172567A1 (en) 2007-04-17 2010-07-08 Prokoski Francine J System and method for using three dimensional infrared imaging to provide detailed anatomical structure maps
US20080316227A1 (en) 2007-06-11 2008-12-25 Darwin Dimensions Inc. User defined characteristics for inheritance based avatar generation
US20090003709A1 (en) 2007-06-29 2009-01-01 Canon Kabushiki Kaisha Image processing apparatus and method, and storage medium
US8391639B2 (en) 2007-07-23 2013-03-05 The Procter & Gamble Company Method and apparatus for realistic simulation of wrinkle aging and de-aging
US20090028380A1 (en) 2007-07-23 2009-01-29 Hillebrand Greg Method and apparatus for realistic simulation of wrinkle aging and de-aging
US8520906B1 (en) 2007-09-24 2013-08-27 Videomining Corporation Method and system for age estimation based on relative ages of pairwise facial images of people
US20100329525A1 (en) 2008-02-14 2010-12-30 Gregory Goodman System and method of cosmetic analysis and treatment diagnosis
US8625864B2 (en) 2008-02-14 2014-01-07 Gregory Goodman System and method of cosmetic analysis and treatment diagnosis
WO2009100494A1 (en) 2008-02-14 2009-08-20 Gregory Goodman System and method of cosmetic analysis and treatment diagnosis
US8666770B2 (en) 2008-09-04 2014-03-04 Elc Management, Llc Objective model of apparent age, methods and use
US20110300196A1 (en) 2008-09-16 2011-12-08 Fatemeh Mohammadi Method And System For Automatic Or Manual Evaluation To Provide Targeted And Individualized Delivery Of Cosmetic Actives In A Mask Or Patch Form
US8491926B2 (en) 2008-09-16 2013-07-23 Elc Management Llc Method and system for automatic or manual evaluation to provide targeted and individualized delivery of cosmetic actives in a mask or patch form
US20100068247A1 (en) 2008-09-16 2010-03-18 Tsung-Wei Robert Mou Method And System For Providing Targeted And Individualized Delivery Of Cosmetic Actives
US20120325141A1 (en) 2008-09-16 2012-12-27 Fatemah Mohammadi Method and System for Providing Targeted and Individualized Delivery of Cosmetic Actives
US8425477B2 (en) 2008-09-16 2013-04-23 Elc Management Llc Method and system for providing targeted and individualized delivery of cosmetic actives
CN101556699A (en) 2008-11-07 2009-10-14 浙江大学 Face-based facial aging image synthesis method
US8725560B2 (en) 2009-02-02 2014-05-13 Modiface Inc. Method and system for simulated product evaluation via personalizing advertisements based on portrait images
US20120300049A1 (en) 2009-06-22 2012-11-29 Courage + Khazaka Electronic Gmbh Method for determining age, and age-dependent selection of cosmetic products
US9013567B2 (en) 2009-06-22 2015-04-21 Courage + Khazaka Electronic Gmbh Method for determining age, and age-dependent selection of cosmetic products
US20110116691A1 (en) * 2009-11-13 2011-05-19 Chung Pao-Choo Facial skin defect resolution system, method and computer program product
US20110196616A1 (en) 2009-12-02 2011-08-11 Conopco, Inc., D/B/A Unilever Apparatus for and method of measuring perceived age
US8401300B2 (en) 2009-12-14 2013-03-19 Conopco, Inc. Targeted image transformation of skin attribute
US20110158540A1 (en) 2009-12-24 2011-06-30 Canon Kabushiki Kaisha Pattern recognition method and pattern recognition apparatus
US20130013330A1 (en) 2009-12-30 2013-01-10 Natura Cosmeticos S.A. Method for assessment of aesthetic and morphological conditions of the skin and prescription of cosmetic and/or dermatological treatment
US20130079620A1 (en) 2010-02-03 2013-03-28 Siemens Ag Method for processing an endoscopy image
WO2011109168A1 (en) 2010-03-03 2011-09-09 Eastman Kodak Company Imaging device for capturing self-portrait images
US8582807B2 (en) 2010-03-15 2013-11-12 Nec Laboratories America, Inc. Systems and methods for determining personal characteristics
US20110222724A1 (en) 2010-03-15 2011-09-15 Nec Laboratories America, Inc. Systems and methods for determining personal characteristics
US20110249891A1 (en) 2010-04-07 2011-10-13 Jia Li Ethnicity Classification Using Multiple Features
WO2011146321A2 (en) 2010-05-18 2011-11-24 Elc Management Llc Method and system for automatic or manual evaluation to provide targeted and individualized delivery of cosmetic actives in a mask or patch form
US8550818B2 (en) 2010-05-21 2013-10-08 Photometria, Inc. System and method for providing and modifying a personalized face chart
US20130094780A1 (en) 2010-06-01 2013-04-18 Hewlett-Packard Development Company, L.P. Replacement of a Person or Object in an Image
US20130089245A1 (en) * 2010-06-21 2013-04-11 Pola Chemical Industries, Inc. Age estimation method and sex determination method
US9189679B2 (en) 2010-06-21 2015-11-17 Pola Chemical Industries, Inc. Age estimation method and sex determination method
US20120223131A1 (en) 2011-03-03 2012-09-06 Lim John W Method and apparatus for dynamically presenting content in response to successive scans of a static code
US20120253755A1 (en) 2011-03-30 2012-10-04 Gobel David P Method Of Obtaining The Age Quotient Of A Person
US20140226896A1 (en) 2011-07-07 2014-08-14 Kao Corporation Face impression analyzing method, aesthetic counseling method, and face image generating method
US20130029723A1 (en) 2011-07-28 2013-01-31 Qualcomm Innovation Center, Inc. User distance detection for enhanced interaction with a mobile device
US20130271451A1 (en) 2011-08-09 2013-10-17 Xiaofeng Tong Parameterized 3d face generation
US20130041733A1 (en) 2011-08-11 2013-02-14 Reise Officer System, method, and computer program product for tip sharing using social networking
US20130158968A1 (en) 2011-12-16 2013-06-20 Cerner Innovation, Inc. Graphic representations of health-related status
US20130169621A1 (en) 2011-12-28 2013-07-04 Li Mei Method of creating and transforming a face model and related system
WO2013104015A1 (en) 2012-01-11 2013-07-18 Steven Liew A method and apparatus for facial aging assessment and treatment management
US20140334723A1 (en) 2012-01-31 2014-11-13 Ehud Chatow Identification Mark with a Predetermined Color Difference
US8254647B1 (en) 2012-04-16 2012-08-28 Google Inc. Facial image quality assessment
US20130325493A1 (en) 2012-05-29 2013-12-05 Medical Avatar Llc System and method for managing past, present, and future states of health using personalized 3-d anatomical models
US20140201126A1 (en) 2012-09-15 2014-07-17 Lotfi A. Zadeh Methods and Systems for Applications for Z-numbers
US20140089017A1 (en) 2012-09-27 2014-03-27 United Video Properties, Inc. Systems and methods for operating an entertainment control system
US20140099029A1 (en) 2012-10-05 2014-04-10 Carnegie Mellon University Face Age-Estimation and Methods, Systems, and Software Therefor
EP2728511A1 (en) 2012-11-01 2014-05-07 Samsung Electronics Co., Ltd Apparatus and method for face recognition
KR20140078459A (en) 2012-12-17 2014-06-25 최봉우 None
US20140209682A1 (en) 2013-01-25 2014-07-31 Hewlett-Packard Development Company, L.P. Characterization of color charts
US20140211022A1 (en) 2013-01-30 2014-07-31 Hewlett-Packard Development Company, L.P. Acquisition of color calibration charts
US20140219526A1 (en) 2013-02-05 2014-08-07 Children's National Medical Center Device and method for classifying a condition based on image analysis
WO2014122253A2 (en) 2013-02-07 2014-08-14 Crisalix Sa 3d platform for aesthetic simulation
US20140270490A1 (en) 2013-03-13 2014-09-18 Futurewei Technologies, Inc. Real-Time Face Detection Using Combinations of Local and Global Features
US20150045631A1 (en) 2013-03-15 2015-02-12 Lee Pederson Skin health system
US20140323873A1 (en) 2013-04-09 2014-10-30 Elc Management Llc Skin diagnostic and image processing methods
US20140304629A1 (en) 2013-04-09 2014-10-09 Elc Management Llc Skin diagnostic and image processing systems, apparatus and articles
US20160062456A1 (en) 2013-05-17 2016-03-03 Nokia Technologies Oy Method and apparatus for live user recognition
WO2015017687A2 (en) 2013-07-31 2015-02-05 Cosmesys Inc. Systems and methods for producing predictive images
US20160162728A1 (en) 2013-07-31 2016-06-09 Panasonic Intellectual Property Corporation Of America Skin analysis method, skin analysis device, and method for controlling skin analysis device
US20160255303A1 (en) 2013-09-24 2016-09-01 Sharp Kabushiki Kaisha Image display apparatus and image processing device
US20150099947A1 (en) 2013-10-04 2015-04-09 Access Business Group International Llc Skin youthfulness index, methods and applications thereof
US20160292380A1 (en) 2013-11-22 2016-10-06 Amorepacific Corporation Device and method for predicting skin age by using quantifying means
WO2015088079A1 (en) 2013-12-12 2015-06-18 주식회사 바이오코즈글로벌코리아 System for producing customized individual cosmetics
US20150178554A1 (en) 2013-12-19 2015-06-25 Objectvideo, Inc. System and method for identifying faces in unconstrained media
US20150310040A1 (en) 2014-04-29 2015-10-29 Microsoft Corporation Grouping and ranking images based on facial recognition data
US20150339757A1 (en) 2014-05-20 2015-11-26 Parham Aarabi Method, system and computer program product for generating recommendations for products and treatments
US20170308738A1 (en) 2014-09-19 2017-10-26 Zte Corporation Face recognition method, device and computer readable storage medium
US20160330370A1 (en) 2014-11-13 2016-11-10 Intel Corporation Image quality compensation system and method
CN104504376A (en) 2014-12-22 2015-04-08 厦门美图之家科技有限公司 Age classification method and system for face images
US20160219217A1 (en) 2015-01-22 2016-07-28 Apple Inc. Camera Field Of View Effects Based On Device Orientation And Scene Content
US20160314616A1 (en) 2015-04-23 2016-10-27 Sungwook Su 3d identification system with facial forecast
US20170032178A1 (en) 2015-07-30 2017-02-02 Google Inc. Personalizing image capture
US20170039357A1 (en) 2015-08-03 2017-02-09 Samsung Electronics Co., Ltd. Multi-modal fusion method for user authentication and user authentication method
WO2017029488A2 (en) 2015-08-14 2017-02-23 Metail Limited Methods of generating personalized 3d head models or 3d body models
US20190035149A1 (en) * 2015-08-14 2019-01-31 Metail Limited Methods of generating personalized 3d head models or 3d body models
US20170178058A1 (en) 2015-12-18 2017-06-22 Ricoh Co., Ltd. Index Image Quality Metric
US20170246473A1 (en) 2016-02-25 2017-08-31 Sava Marinkovich Method and system for managing treatments
US20170270349A1 (en) 2016-03-21 2017-09-21 Xerox Corporation Method and apparatus for generating graphical chromophore maps
US20170270348A1 (en) 2016-03-21 2017-09-21 Xerox Corporation Interactive display for facial skin monitoring
US20170270691A1 (en) 2016-03-21 2017-09-21 Xerox Corporation Method and system for generating accurate graphical chromophore maps
US20170270350A1 (en) 2016-03-21 2017-09-21 Xerox Corporation Method and system for assessing facial skin health from a mobile selfie image
US20170270593A1 (en) * 2016-03-21 2017-09-21 The Procter & Gamble Company Systems and Methods For Providing Customized Product Recommendations
US20170272741A1 (en) 2016-03-21 2017-09-21 Xerox Corporation Method and apparatus for determining spectral characteristics of an image captured by a camera on a mobile endpoint device
US20170294010A1 (en) 2016-04-12 2017-10-12 Adobe Systems Incorporated Utilizing deep learning for rating aesthetics of digital images
US20180276869A1 (en) 2017-03-21 2018-09-27 The Procter & Gamble Company Methods For Age Appearance Simulation
US20180276883A1 (en) 2017-03-21 2018-09-27 Canfield Scientific, Incorporated Methods and apparatuses for age appearance simulation
US20180352150A1 (en) 2017-05-31 2018-12-06 The Procter & Gamble Company System And Method For Guiding A User To Take A Selfie

Non-Patent Citations (95)

* Cited by examiner, † Cited by third party
Title
A. Lanitis, C. Draganova, and C. Christodoulou, "Comparing different classifiers for automatic age estimation," Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 34, No. 1, pp. 621-628, 2004.
A. Lanitis, C. J. Taylor, and T. F. Cootes, "Toward automatic simulation of aging effects on face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, No. 4, pp. 442-455, Apr. 2002.
A. M. Albert, K. Ricanek Jr, and E. Patterson, "A review of the literature on the aging adult skull and face: Implications for forensic science research and applications," Forensic Science International, vol. 172, No. 1, pp. 1-9, 2007.
All Office Actions, U.S. Appl. No. 15/414,002.
All Office Actions, U.S. Appl. No. 15/414,095.
All Office Actions, U.S. Appl. No. 15/414,147.
All Office Actions, U.S. Appl. No. 15/414,189.
All Office Actions, U.S. Appl. No. 15/414,305.
All Office Actions, U.S. Appl. No. 15/465,166.
All Office Actions, U.S. Appl. No. 15/993,950.
All Office Actions, U.S. Appl. No. 15/993,973.
Andreas Lanitis, Comparative Evaluation of Automatic Age-Progression Methodologies, EURASIP Journal on Advances in Signal Processing, vol. 2008, No. 1, Jan. 1, 2008, 10 pages.
B. Guyuron, D. J. Rowe, A. B. Weinfeld, Y. Eshraghi, A. Fathi, and S. Iamphongsai, "Factors contributing to the facial aging of identical twins," Plastic and reconstructive surgery, vol. 123, No. 4, pp. 1321-1331, 2009.
B. Tiddeman, M. Burt, and D. Perrett, "Prototyping and transforming facial textures for perception research," Computer Graphics and Applications, IEEE, vol. 21, No. 5, pp. 42-50, 2001.
Beauty.AI Press Release, PRWeb Online Visibility from Vocus, Nov. 19, 2015, 3 pages.
C. J. Solomon, S. J. Gibson, and others, "A person-specific, rigorous aging model of the human face," Pattern Recognition Letters, vol. 27, No. 15, pp. 1776-1787, 2006.
Chen et al., Face Image Quality Assessment Based on Learning to Rank, IEEE Signal Processing Letters, vol. 22, No. 1 (2015), pp. 90-94.
Crete et al., The blur effect: perception and estimation with a new no-reference perceptual blur metric, Proc. SPIE 6492, Human Vision and Electronic Imaging XII, 2007, 12 pages.
D. Dean, M. G. Hans, F. L. Bookstein, and K. Subramanyan, "Three-dimensional Bolton-Brush Growth Study landmark data: ontogeny and sexual dimorphism of the Bolton standards cohort," 2009.
D. M. Burt and D. I. Perrett, "Perception of age in adult Caucasian male faces: Computer graphic manipulation of shape and colour information," Proceedings of the Royal Society of London. Series B: Biological Sciences, vol. 259, No. 1355, pp. 137-143, 1995.
Dong et al., Automatic age estimation based on deep learning algorithm, Neurocomputing 187 (2016), pp. 4-10.
E. Patterson, K. Ricanek, M. Albert, and E. Boone, "Automatic representation of adult aging in facial images," in Proc. IASTED Int'l Conf. Visualization, Imaging, and Image Processing, 2006, pp. 171-176.
F. Jiang and Y. Wang, "Facial aging simulation based on super-resolution in tensor space," in Image Processing, 2008. ICIP 2008. 15th IEEE International Conference on, 2008, pp. 1648-1651.
Finlayson et al., Color by Correlation: A Simple, Unifying Framework for Color Constancy, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, No. 11, Nov. 2001, pp. 1209-1221.
Fu et al., Learning Race from Face: A Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, No. 12, Dec. 1, 2014, pp. 2483-2509.
G. Guo, Y. Fu, C. R. Dyer, and T. S. Huang, "Image-based human age estimation by manifold learning and locally adjusted robust regression," Image Processing, IEEE Transactions on, vol. 17, No. 7, pp. 1178-1188, 2008.
G. Mu, G. Guo, Y. Fu, and T. S. Huang, "Human age estimation using bio-inspired features," in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, 2009, pp. 112-119.
Gong et al., Quantification of Pigmentation in Human Skin Images, IEEE, 2012, pp. 2853-2856.
Gray et al., Predicting Facial Beauty without Landmarks, European Conference on Computer Vision, Computer Vision-ECCV 2010, 14 pages.
Guodong Guo et al., A framework for joint estimation of age, gender and ethnicity on a large database, Image and Vision Computing, vol. 32, No. 10, May 10, 2014, pp. 761-770.
Huerta et al., A deep analysis on age estimation, Pattern Recognition Letters 68 (2015), pp. 239-249.
Hyvarinen et al., A Fast Fixed-Point Algorithm for Independent Component Analysis of Complex Valued Signals, Neural Networks Research Centre, Helsinki University of Technology, Jan. 2000, 15 pages.
Hyvarinen et al., A Fast Fixed-Point Algorithm for Independent Component Analysis, Neural Computation, 9:1483-1492, 1997.
I. Pitanguy, D. Pamplona, H. I. Weber, F. Leta, F. Salgado, and H. N. Radwanski, "Numerical modeling of facial aging," Plastic and reconstructive surgery, vol. 102, No. 1, pp. 200-204, 1998.
I. Pitanguy, F. Leta, D. Pamplona, and H. I. Weber, "Defining and measuring aging parameters," Applied Mathematics and Computation, vol. 78, No. 2-3, pp. 217-227, Sep. 1996.
International Search Report and Written Opinion of the International Searching Authority, PCT/US2017/023334, dated May 15, 2017, 12 pages.
International Search Report and Written Opinion of the International Searching Authority, PCT/US2018/023042, dated Jun. 6, 2018.
International Search Report and Written Opinion of the International Searching Authority, PCT/US2018/023219, dated Jun. 1, 2018, 13 pages.
International Search Report and Written Opinion of the International Searching Authority, PCT/US2018/035291, dated Aug. 30, 2018, 11 pages.
International Search Report and Written Opinion of the International Searching Authority, PCT/US2018/035296, dated Oct. 17, 2018, 17 pages.
J. Gatherwright, M. T. Liu, B. Amirlak, C. Gliniak, A. Totonchi, and B. Guyuron, "The Contribution of Endogenous and Exogenous Factors to Male Alopecia: A Study of Identical Twins," Plastic and reconstructive surgery, vol. 131, No. 5, p. 794e-801e, 2013.
J. H. Langlois and L. A. Roggman, "Attractive faces are only average," Psychological science, vol. 1, No. 2, pp. 115-121, 1990.
J. P. Farkas, J. E. Pessa, B. Hubbard, and R. J. Rohrich, "The science and theory behind facial aging," Plastic and Reconstructive Surgery-Global Open, vol. 1, No. 1, pp. e8-e15, 2013.
J. Suo, F. Min, S. Zhu, S. Shan, and X. Chen, "A multi-resolution dynamic model for face aging simulation," in Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, 2007, pp. 1-8.
J. Suo, S.-C. Zhu, S. Shan, and X. Chen, "A compositional and dynamic model for face aging," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 32, No. 3, pp. 385-401, 2010.
Jagtap et al., Human Age Classification Using Facial Skin Aging Features and Artificial Neural Network, Cognitive Systems Research vol. 40 (2016), pp. 116-128. *
K. Scherbaum, M. Sunkel, H.-P. Seidel, and V. Blanz, "Prediction of Individual Non-Linear Aging Trajectories of Faces," in Computer Graphics Forum, 2007, vol. 26, pp. 285-294.
K. Sveikata, I. Balciuniene, and J. Tutkuviene, "Factors influencing face aging. Literature review," Stomatologija, vol. 13, No. 4, pp. 113-115, 2011.
K. Ueki, T. Hayashida, and T. Kobayashi, "Subspace-based age-group classification using facial images under various lighting conditions," in 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), 2006, 6 pages.
Konig et al., A New Context: Screen to Face Distance, 8th International Symposium on Medical Information and Communication Technology (ISMICT), IEEE, Apr. 2, 2014, pp. 1-5.
Krizhevsky et al., ImageNet Classification with Deep Convolutional Neural Networks, in Advances in Neural Information Processing Systems 25 (NIPS 2012), 9 pages.
L. Boissieux, G. Kiss, N. M. Thalmann, and P. Kalra, Simulation of skin aging and wrinkles with cosmetics insight. Springer, 2000.
Levi et al., Age and Gender Classification Using Convolutional Neural Networks, IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2015, pp. 34-42.
M. J. Jones and T. Poggio, "Multidimensional morphable models," in Sixth International Conference on Computer Vision (ICCV 1998), 1998, pp. 683-688.
M. R. Gandhi, "A method for automatic synthesis of aged human facial images," McGill University, 2004.
Mathias et al., Face Detection Without Bells and Whistles, European Conference on Computer Vision, 2014, pp. 720-735.
N. Ramanathan and R. Chellappa, "Modeling age progression in young faces," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), 2006, vol. 1, pp. 387-394.
N. Ramanathan and R. Chellappa, "Modeling shape and textural variations in aging faces," in 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2008), 2008, pp. 1-8.
N. Ramanathan, R. Chellappa, and S. Biswas, "Computational methods for modeling facial aging: A survey," Journal of Visual Languages & Computing, vol. 20, No. 3, pp. 131-144, 2009.
Ojima et al., Application of Image-Based Skin Chromophore Analysis to Cosmetics, Journal of Imaging Science and Technology, vol. 48, No. 3, May 2004, pp. 222-226.
P. A. George and G. J. Hole, "Factors influencing the accuracy of age estimates of unfamiliar faces," Perception, vol. 24, pp. 1059-1059, 1995.
P. N. Belhumeur, J. P. Hespanha, and D. Kriegman, "Eigenfaces vs. fisherfaces: Recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, No. 7, pp. 711-720, 1997.
S. R. Coleman and R. Grover, "The anatomy of the aging face: volume loss and changes in 3-dimensional topography," Aesthetic surgery journal, vol. 26, No. 1 suppl, pp. S4-S9, 2006.
Sun et al., Statistical Characterization of Face Spectral Reflectances and Its Application to Human Portraiture Spectral Estimation, Journal of Imaging Science and Technology, vol. 46, No. 6, 2002, pp. 498-506.
Sung Eun Choi et al., Age face simulation using aging functions on global and local features with residual images, Expert Systems with Applications, vol. 80, Mar. 7, 2017, pp. 107-125.
T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, No. 6, pp. 681-685, 2001.
T. J. Hutton, B. F. Buxton, P. Hammond, and H. W. Potts, "Estimating average growth trajectories in shape-space using kernel smoothing," IEEE Transactions on Medical Imaging, vol. 22, No. 6, pp. 747-753, 2003.
Tsumura et al., Image-based skin color and texture analysis/synthesis by extracting hemoglobin and melanin information in the skin, ACM Transactions on Graphics (TOG), vol. 22, Issue 3, Jul. 2003, pp. 770-779.
U. Park, Y. Tong, and A. K. Jain, "Age-invariant face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, No. 5, pp. 947-954, 2010.
U. Park, Y. Tong, and A. K. Jain, "Face recognition with temporal invariance: A 3d aging model," in 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2008), 2008, pp. 1-7.
U.S. Appl. No. 62/547,196, filed Aug. 18, 2017, Ankur (NMN) Purwar.
V. Blanz and T. Vetter, "A morphable model for the synthesis of 3D faces," in Proceedings of the 26th annual conference on Computer graphics and interactive techniques, 1999, pp. 187-194.
Viola et al., Robust Real-Time Face Detection, International Journal of Computer Vision 57(2), 2004, pp. 137-154.
Wang et al., Combining Tensor Space Analysis and Active Appearance Models for Aging Effect Simulation on Face Images, IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, vol. 42, No. 4, Aug. 1, 2012, pp. 1107-1118.
Wang et al., Deeply-Learned Feature for Age Estimation, 2015 IEEE Winter Conference on Applications of Computer Vision, pp. 534-541.
Wu et al., Funnel-Structured Cascade for Multi-View Face Detection with Alignment-Awareness, Neurocomputing 221 (2017), pp. 138-145.
X. Geng, Z.-H. Zhou, and K. Smith-Miles, "Automatic age estimation based on facial aging patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, No. 12, pp. 2234-2240, 2007.
X. Geng, Z.-H. Zhou, Y. Zhang, G. Li, and H. Dai, "Learning from facial aging patterns for automatic age estimation," in Proceedings of the 14th annual ACM international conference on Multimedia, 2006, pp. 307-316.
Xiangbo Shu et al., Age progression: Current technologies and applications, Neurocomputing, vol. 208, Oct. 1, 2016, pp. 249-261.
Y. Bando, T. Kuratate, and T. Nishita, "A simple method for modeling wrinkles on human skin," in 10th Pacific Conference on Computer Graphics and Applications, 2002, pp. 166-175.
Y. Fu and N. Zheng, "M-face: An appearance-based photorealistic model for multiple facial attributes rendering," IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, No. 7, pp. 830-842, 2006.
Y. Fu and T. S. Huang, "Human age estimation with regression on discriminative aging manifold," IEEE Transactions on Multimedia, vol. 10, No. 4, pp. 578-584, 2008.
Y. Fu, G. Guo, and T. S. Huang, "Age synthesis and estimation via faces: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, No. 11, pp. 1955-1976, 2010.
Y. H. Kwon and N. da Vitoria Lobo, "Age classification from facial images," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '94), 1994, pp. 762-767.
Y. Wu, P. Kalra, and N. M. Thalmann, "Simulation of static and dynamic wrinkles of skin," in Computer Animation '96 Proceedings, 1996, pp. 90-97.
Yi et al., Age Estimation by Multi-scale Convolutional Network, Computer Vision-ACCV 2014, Nov. 1, 2014, pp. 144-158, 2015.
Yun Fu et al., Age Synthesis and Estimation via Faces: A Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, No. 11, Nov. 1, 2010, pp. 1955-1976.
Z. Liu, Z. Zhang, and Y. Shan, "Image-based surface detail transfer," IEEE Computer Graphics and Applications, vol. 24, No. 3, pp. 30-35, 2004.

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11810277B2 (en) * 2018-07-20 2023-11-07 Huawei Technologies Co., Ltd. Image acquisition method, apparatus, and terminal
US11553872B2 (en) * 2018-12-04 2023-01-17 L'oreal Automatic image-based skin diagnostics using deep learning
US11832958B2 (en) 2018-12-04 2023-12-05 L'oreal Automatic image-based skin diagnostics using deep learning
US20230196835A1 (en) * 2021-12-16 2023-06-22 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a skin area of a user for determining dark eye circles
WO2023114594A1 (en) * 2021-12-16 2023-06-22 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a skin area of a user for determining skin oiliness
US12230062B2 (en) * 2021-12-16 2025-02-18 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a skin area of a user for determining dark eye circles

Also Published As

Publication number Publication date
JP6849825B2 (en) 2021-03-31
EP3631679A1 (en) 2020-04-08
JP2020522810A (en) 2020-07-30
KR20200003402A (en) 2020-01-09
CN110709856A (en) 2020-01-17
US20180350071A1 (en) 2018-12-06
KR102297301B1 (en) 2021-09-06
WO2018222808A1 (en) 2018-12-06
EP3631679B1 (en) 2023-09-13
CN110709856B (en) 2023-11-28

Similar Documents

Publication Publication Date Title
US10818007B2 (en) 2020-10-27 Systems and methods for determining apparent skin age
US11055762B2 (en) 2021-07-06 Systems and methods for providing customized product recommendations
US10574883B2 (en) 2020-02-25 System and method for guiding a user to take a selfie
US11832958B2 (en) 2023-12-05 Automatic image-based skin diagnostics using deep learning
US11416988B2 (en) 2022-08-16 Apparatus and method for visualizing visually imperceivable cosmetic skin attributes
EP2174296B1 (en) 2019-07-24 Method and apparatus for realistic simulation of wrinkle aging and de-aging
CN101652784B (en) 2013-05-22 Methods for Simulating Facial Skin Aging and Deaging
US20180276869A1 (en) 2018-09-27 Methods For Age Appearance Simulation
KR102495889B1 (en) 2023-02-06 Method for detecting facial wrinkles using deep learning-based wrinkle detection model trained according to semi-automatic labeling and apparatus for the same
EP3794541A1 (en) 2021-03-24 Systems and methods for hair analysis
US20240265533A1 (en) 2024-08-08 Computer-based body part analysis methods and systems

Legal Events

Date Code Title Description
2018-05-31 FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

2018-07-09 STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

2019-11-27 AS Assignment

Owner name: PALO ALTO RESEARCH CENTER INCORPORATED, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHREVE, MATTHEW ADAM;WU, WENCHENG NMN;XU, BEILEI NMN;SIGNING DATES FROM 20180601 TO 20180626;REEL/FRAME:051126/0928

Owner name: THE PROCTER & GAMBLE COMPANY, OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PURWAR, ANKUR NMN;MATTS, PAUL JONATHAN;SIGNING DATES FROM 20180629 TO 20190717;REEL/FRAME:051126/0835

2020-01-21 STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

2020-04-26 STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

2020-09-08 STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

2020-10-07 STCF Information on status: patent grant

Free format text: PATENTED CASE

2023-06-20 AS Assignment

Owner name: XEROX CORPORATION, CONNECTICUT

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PALO ALTO RESEARCH CENTER INCORPORATED;REEL/FRAME:064038/0001

Effective date: 20230416

2023-06-22 AS Assignment

Owner name: CITIBANK, N.A., AS COLLATERAL AGENT, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:064760/0389

Effective date: 20230621

2023-06-28 AS Assignment

Owner name: XEROX CORPORATION, CONNECTICUT

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE REMOVAL OF US PATENTS 9356603, 10026651, 10626048 AND INCLUSION OF US PATENT 7167871 PREVIOUSLY RECORDED ON REEL 064038 FRAME 0001. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:PALO ALTO RESEARCH CENTER INCORPORATED;REEL/FRAME:064161/0001

Effective date: 20230416

2023-11-20 AS Assignment

Owner name: JEFFERIES FINANCE LLC, AS COLLATERAL AGENT, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:065628/0019

Effective date: 20231117

2024-02-13 AS Assignment

Owner name: XEROX CORPORATION, CONNECTICUT

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS RECORDED AT RF 064760/0389;ASSIGNOR:CITIBANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:068261/0001

Effective date: 20240206

Owner name: CITIBANK, N.A., AS COLLATERAL AGENT, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:066741/0001

Effective date: 20240206

2024-04-10 MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4