Systems and methods of robotic application of cosmetics
09811717 · 2017-11-07
CPC classification
B25J9/1664
PERFORMING OPERATIONS; TRANSPORTING
G06V10/145
PHYSICS
Y10S901/09
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
A45D2044/007
HUMAN NECESSITIES
G01B11/2545
PHYSICS
B25J9/1684
PERFORMING OPERATIONS; TRANSPORTING
B25J15/0019
PERFORMING OPERATIONS; TRANSPORTING
G06T7/521
PHYSICS
B25J9/0003
PERFORMING OPERATIONS; TRANSPORTING
B25J11/008
PERFORMING OPERATIONS; TRANSPORTING
A45D40/26
HUMAN NECESSITIES
G01B11/2513
PHYSICS
G06T1/0014
PHYSICS
A45D44/005
HUMAN NECESSITIES
B25J13/089
PERFORMING OPERATIONS; TRANSPORTING
Y10S901/47
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
A45D44/00
HUMAN NECESSITIES
H04N13/271
ELECTRICITY
International classification
B25J15/00
PERFORMING OPERATIONS; TRANSPORTING
B25J11/00
PERFORMING OPERATIONS; TRANSPORTING
B25J13/08
PERFORMING OPERATIONS; TRANSPORTING
A45D40/26
HUMAN NECESSITIES
B25J9/00
PERFORMING OPERATIONS; TRANSPORTING
H04N7/18
ELECTRICITY
G06T7/521
PHYSICS
A45D44/00
HUMAN NECESSITIES
A45D34/04
HUMAN NECESSITIES
Abstract
Systems and methods for applying cosmetics are provided using an area light projector that shines light on the face, a camera that captures the light reflected from the face, and a depth processor that communicates with the camera(s) and the projector(s) to generate a depth image output. A control device communicates with the depth processor to receive the output, receives the face profiles, and generates motion trajectory commands, and a robot communicates with the control device to receive the commands and apply the cosmetics to the face in accordance with the face profiles. Methods for applying the cosmetics include receiving a face profile, receiving a depth processor input representing a face, extracting face features, receiving an initial robot position or extracting the robot position from the input, matching the face profile to the face features, and generating and outputting a robot trajectory to the robot to apply the cosmetics.
Claims
1. A system for applying cosmetics to a face in accordance with face profiles, comprising: an area light projector configured to shine light over the entire face; a camera spaced from the face and the area light projector configured to capture reflected light from the face; a depth processor configured to communicate with the camera and the projector and to generate a depth image output; a control device configured to communicate with the depth processor to receive the depth image output, to receive a face profile, and to generate a plurality of motion trajectory commands; and a robot configured to communicate with the control device to receive the motion trajectory commands to apply the cosmetics to the face in accordance with the face profile.
2. The system of claim 1, further comprising a storage device configured to communicate with the control device to store the face profiles and a user interface configured to display face profile(s) that can be selected by the user as the face profile.
3. The system of claim 1, further comprising a network storage device configured to permit the face profiles to be accessed and/or edited by third parties and users over the Internet.
4. The system of claim 1, further comprising a head rest for keeping the face stationary during the time the robot is applying the cosmetics to the face.
5. The system of claim 1, wherein each of the face profiles includes a 3D model that represents the face and a cosmetics design that assigns at least one color value to the 3D model.
6. The system of claim 1, further comprising enclosures and lighting that control the ambient illumination on the face to limit saturation of the camera.
7. The system of claim 1, wherein the robot includes an applicator comprising a nozzle, a brush, or a pen.
8. The system of claim 1, wherein the robot is configured to change the type of applicator to meet the requirements of the face profile.
9. The system of claim 1, further comprising displaying the cosmetics designs and selecting one of the cosmetics designs to apply the cosmetics.
10. The system of claim 1, wherein the control device is configured to construct a 3D model of the face from the depth image output, extract features of the face, match the features to the face profile, generate robot motion planning from the face profile, and output the motion planning to the robot to apply cosmetics on the face.
11. The system of claim 10, wherein the control device is further configured to operate at time intervals that will permit motion of the face without misapplication of the cosmetics on the face.
12. The system of claim 10, wherein the control device is further configured to extract the robot position at time intervals that will permit motion of the face without misapplication of the cosmetics on the face.
13. A method performed in a control device of controlling a robot that applies cosmetics to a face, comprising the steps of: (a) receiving a face profile; (b) receiving a depth sensor input representing area light reflected from the entire face; (c) extracting a plurality of features of the face; (d) matching the face profile of step (a) to the features of step (c); (e) generating robot trajectory based on step (d); and (f) outputting the robot trajectory to the robot to apply the cosmetics.
14. The method of claim 13, wherein each of the face profiles includes a 3D model that represents the face and a cosmetics design that assigns at least one color value to the 3D model.
15. The method of claim 13, further comprising displaying user selectable cosmetics designs to apply the cosmetics.
16. The method of claim 13, further comprising extracting a robot position with respect to the face from the depth sensor input.
17. The method of claim 16, further comprising repeating steps (b) to (f).
18. A method performed in a control device of displaying guides for applying cosmetics to a face by hand, comprising the steps of: (a) receiving a face profile; (b) receiving a depth sensor input representing area light reflected from the entire face; (c) extracting a plurality of features of the face; (d) matching the face profile of step (a) to the features of step (c); (e) generating guides to apply cosmetics based on step (d); and (f) outputting the guides on the face to a display.
19. The method of claim 18, wherein each of the face profiles includes a 3D model that represents the face and a cosmetics design that assigns at least one color value to the 3D model.
20. The method of claim 18, further comprising displaying user selectable cosmetics applicators.
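The control loop recited in claims 10 and 13 can be sketched in a few lines. All function names, the single-landmark matching, and the profile format below are hypothetical stand-ins chosen for illustration; the claims do not prescribe any particular implementation.

```python
# Sketch of the claim-13 control loop, steps (b)-(f).
# Names and data layouts are illustrative only, not from the patent.

def extract_features(depth_image):
    """Step (c): reduce a depth image to named facial landmarks.

    Stand-in heuristic: treat the minimum-depth pixel (closest point)
    as the nose tip.
    """
    flat = [(z, (r, c)) for r, row in enumerate(depth_image)
                        for c, z in enumerate(row)]
    z, (r, c) = min(flat)
    return {"nose_tip": (r, c, z)}

def match_profile(profile, features):
    """Step (d): align the stored 3D model to the live features.

    Here the alignment is a pure translation between nose tips.
    """
    pr, pc, pz = profile["nose_tip"]
    lr, lc, lz = features["nose_tip"]
    return (lr - pr, lc - pc, lz - pz)

def generate_trajectory(profile, offset):
    """Step (e): shift each profile waypoint by the alignment offset."""
    dr, dc, dz = offset
    return [(r + dr, c + dc, z + dz) for r, c, z in profile["waypoints"]]

def control_step(profile, depth_image, send_to_robot):
    """Run steps (b)-(f) for one depth frame."""
    features = extract_features(depth_image)           # step (c)
    offset = match_profile(profile, features)          # step (d)
    trajectory = generate_trajectory(profile, offset)  # step (e)
    send_to_robot(trajectory)                          # step (f)
    return trajectory
```

Repeating `control_step` at short intervals, as claims 11, 12, and 17 describe, lets the trajectory track face motion between frames.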
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(10) The following description includes the best mode of carrying out the invention. The detailed description illustrates the principles of the invention and should not be taken in a limiting sense. The scope of the invention is determined by reference to the claims. Each part (or step) is assigned its own part (or step) number throughout the specification and drawings. The method drawings illustrate a specific sequence of steps, but the steps can be performed in parallel and/or in a different sequence to achieve the same result.
(12) It is not essential to the invention what type of physical structure is used to support the cameras and incoherent light projectors. In the illustrated embodiment, a frame 18 with three panels is used. As shown, the frame 18 has a middle panel between left and right side panels. Each side panel forms an obtuse angle with the middle panel. The middle panel of the frame 18 supports cameras 16, 26 and a user interface 28. The left panel supports the cameras 12 and 32, and the projectors 14 and 30. The right panel supports another set of projector and camera 76 and 78 (See
(13) A robotic arm 20 communicating with a control device 42 (
(15) The processors used in the control device 42 are not essential to the invention and could be any general-purpose processor (e.g., Intel Xeon) or graphics processor (GPU). Each processor can read and write data to memory and/or, through a link, e.g., Fibre Channel (FC), Serial ATA (SATA), Serial Attached SCSI (SAS), Ethernet, or Wi-Fi, to a local storage device 48 or to a network accessible storage device 50. The storage devices can be a hard disk drive, a disk array, and/or a solid-state disk. In an embodiment, the network storage device 50 permits the face profiles to be accessed and/or edited by the face profile editor 51 over the Internet. The control device 42 controls and communicates with the robot 20, the user interface 28, and a structured light depth sensor 41 through USB, Wi-Fi, or Ethernet. The control device 42 executes an operating system such as Linux or Windows.
(16) In an embodiment, a suitable part for the control device 42 and the storage device 48 is Centaurus Rogue2 Gaming Computer manufactured by Centaurus Computer at 6450 Matlea Court, Charlotte, N.C. 28215. For additional details see www.centauruscomputers.com.
(17) The structured light depth sensor 41 includes a number of light projectors 36, a number of cameras 38 and a structured light depth processor 40.
(19) In an embodiment, a suitable part for the structured light depth processor 40 is the Zynq-7000 FPGA manufactured by Xilinx, Inc. at 2100 Logic Drive, San Jose, Calif. 95124.
(20) In an embodiment, a suitable part for the incoherent light projector 36 is the HW8G3 Pico Engine, manufactured by Imagine Optix at 10030 Green Level Church Road, Suite 802-1260, Cary, N.C. 27519. For additional details see www.imagineoptix.com and http://www.picoprojector-info.com.
(21) In an embodiment, a suitable part for the camera 38 is the LI-VM01CM manufactured by Leopard Imaging Inc. at 1130 Cadillac Court, Milpitas, Calif. 95035.
(22) In an embodiment, a suitable part for the robot 20 is the Lynxmotion model AL5D 4DOF robotic arm manufactured by Lynxmotion, a RobotShop Inc. company, at 555 VT Route 78 Suite 367, Swanton, Vt. 05488. RobotShop Inc. is in Mirabel, Quebec, Canada. For additional details see www.robotshop.com.
(23) In an embodiment, a suitable part for the user interface 28 is a touch display such as ME176CX manufactured by ASUS at 800 Corporate Way, Fremont, Calif. 94539. Additional details about implementation of touch screens are described in Wikipedia Touchscreen (2015), incorporated by reference herein.
(31) In another embodiment, the depth information can be obtained using the stereo imaging principle. The distance between the face of the head 10 and the cameras 12, 16, and 78 is determined by triangulation between any two of the cameras 12, 16, and 78. This type of depth sensing is disclosed by U.S. Pat. No. 6,915,008 B2 to Barman et al., which is incorporated by reference herein.
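For two rectified cameras, the triangulation above reduces to the standard disparity relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the horizontal disparity of the matched point. The sketch below assumes known focal length and baseline values, which the patent does not specify.

```python
def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a point seen by two rectified cameras.

    Illustrative values only: focal length in pixels, baseline in
    meters, and the point's horizontal image coordinate in each camera.
    """
    disparity = x_left_px - x_right_px  # d = x_left - x_right, in pixels
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity  # Z = f * B / d, in meters

# Example: f = 800 px, B = 0.1 m, disparity = 20 px gives Z ≈ 4.0 m.
```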
(32) In another embodiment, the cameras 12, 16, and 78 employ the time-of-flight (ToF) principle, where the distance between the face of the head 10 and the cameras 12, 16, and 78 is measured by how long the light takes to travel between the face and the cameras. The distance can be computed in the cameras 12, 16, and 78 from the measured time of flight in combination with the speed of light. This type of depth sensing using time of flight is disclosed by U.S. Patent Application No. US 2006/0000967 A1 to Kuijk et al., which is incorporated by reference herein. A suitable part containing both the area light projector and the ToF camera is the TARO 3DRanger from Heptagon Micro Optics at 26 Woodlands Loop, Singapore 738317.
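The time-of-flight computation described above is a one-line relation: the light traverses the camera-to-face distance twice, so the distance is half the round-trip path length. The sketch below assumes a directly measured round-trip time; practical ToF sensors typically infer that time indirectly, for example from the phase shift of modulated light.

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds):
    """Camera-to-face distance from a measured round-trip time.

    The light travels to the face and back, so the one-way distance
    is half the total path: distance = c * t / 2.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a round trip of about 6.67 nanoseconds corresponds to a
# face roughly 1 meter from the camera.
```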
(33) In summary, the following table lists various projector and camera combinations in the embodiments illustrated in
(34) TABLE-US-00001

    Embodiment  Area Light Projector             Camera
    1           Incoherent light area projector  Structured light camera
    2           Coherent light area projector    Structured light camera
    3           Incoherent light area projector  Stereo camera
    4           Coherent light area projector    Stereo camera
    5           Incoherent light area projector  Time of flight camera
    6           Coherent light area projector    Time of flight camera