Patent classifications
H04N13/271
Depth of field adjustment in images based on time of flight depth maps
An image capturing apparatus and a method for depth of field (DOF) adjustment in images based on time of flight (TOF) depth maps are provided. The image capturing apparatus includes an image sensor and circuitry. The circuitry generates a TOF depth map of a scene that includes a plurality of objects. The TOF depth map includes information associated with distances between the image sensor and surfaces of the plurality of objects. The circuitry divides the TOF depth map into a plurality of regions that correspond to at least one object of the plurality of objects. The circuitry determines a region of interest from the plurality of regions and adjusts the DOF of the at least one object associated with the determined region of interest. The circuitry further controls the image sensor to capture an image of the scene based on the adjusted DOF.
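The region-division and focus steps described above can be sketched as follows. This is a minimal illustration, not the patented method: the binning scheme, the function names, and the use of the median depth as the focus distance are all assumptions for the example.

```python
import numpy as np

def segment_depth_map(depth_map, n_regions=3):
    """Divide a TOF depth map into regions by depth range (hypothetical scheme)."""
    edges = np.linspace(depth_map.min(), depth_map.max(), n_regions + 1)
    # Label each pixel with the index of the depth bin it falls into.
    return np.clip(np.digitize(depth_map, edges[1:-1]), 0, n_regions - 1)

def focus_distance_for_region(depth_map, labels, region):
    """Use the median depth of the chosen region of interest as the focus distance."""
    return float(np.median(depth_map[labels == region]))

# Synthetic 4x4 depth map: a near object (1 m) against a far background (5 m).
depth = np.full((4, 4), 5.0)
depth[1:3, 1:3] = 1.0
labels = segment_depth_map(depth, n_regions=2)
print(focus_distance_for_region(depth, labels, region=0))  # → 1.0
```

A real implementation would segment by connected components rather than raw depth bins, and would map the focus distance to lens and aperture settings.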
Device case including a projector
One disclosed example provides a method for displaying a hologram via a head-mounted display (HMD) device. The method comprises, via a camera system on the HMD device, acquiring image data capturing a surrounding environment by detecting illumination light output by a projector located on a case for the HMD device. A distance is determined from the HMD device to an object in the surrounding environment based upon the image data. The method further comprises displaying via the HMD device a hologram, the hologram comprising a left-eye image and a right-eye image each having a perspective based upon the distance determined.
SYSTEM AND METHOD FOR 3D SCENE RECONSTRUCTION WITH DUAL COMPLEMENTARY PATTERN ILLUMINATION
An apparatus, system and process for utilizing dual complementary pattern illumination of a scene when performing depth reconstruction of the scene are described. The method may include projecting a first reference image and a complementary second reference image on a scene, and capturing first image data and second image data including the first reference image and the complementary second reference image on the scene. The method may also include identifying features of the first reference image from features of the complementary second reference image. Furthermore, the method may include performing three dimensional (3D) scene reconstruction for image data captured by the imaging device based on the identified features in the first reference image.
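One reason to project a pattern and its complement is that differencing the two captures cancels ambient illumination, making pattern features easier to identify. The sketch below shows only that cancellation idea under a simplified additive imaging model; the variable names and intensities are illustrative assumptions, not the patent's algorithm.

```python
import numpy as np

def isolate_pattern(img_pattern, img_complement):
    """Differencing the two captures cancels the shared ambient term,
    leaving a signed pattern signal (simplified additive model)."""
    return img_pattern.astype(float) - img_complement.astype(float)

# Toy 1-D scene: ambient light of 10 units, stripe pattern of amplitude 4.
ambient = np.full(8, 10.0)
pattern = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=float) * 4
capture_a = ambient + pattern           # scene lit by the first reference image
capture_b = ambient + (4 - pattern)     # scene lit by the complementary image
signal = isolate_pattern(capture_a, capture_b)
print(signal)  # alternating +4 / -4: the ambient term has cancelled
```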
AUTOMATIC BODY MOVEMENT RECOGNITION AND ASSOCIATION SYSTEM
An automatic body movement recognition and association system that includes a preprocessing component and a “live testing” engine component. The system further includes a transition posture detector module and a recording module. The system uses three dimensional (3D) skeletal joint information from a stand-alone depth-sensing capture device that detects the body movements of a user. The transition posture detector module detects the occurrence of a transition posture and the recording module stores a segment of body movement data between occurrences of the transition posture. The preprocessing component processes the segments into a preprocessed movement that is used by a classifier component in the engine component to produce text or speech associated with the preprocessed movement. An “off-line” training system that includes a preprocessing component, a training data set, and a learning system also processes 3D information, off-line from the training data set or from the depth-sensing camera, to continually update the training data set and improve a learning system that sends updated information to the classifier component in the engine component when the updated information is shown to improve accuracy.
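The transition-posture segmentation described above (store the body movement data between occurrences of the transition posture) can be sketched as a simple stream splitter. The predicate `is_transition` and the frame encoding are hypothetical stand-ins for the detector module's output.

```python
def segment_by_transition(frames, is_transition):
    """Split a stream of skeletal frames into movement segments delimited
    by a transition posture (hypothetical predicate is_transition)."""
    segments, current = [], []
    for frame in frames:
        if is_transition(frame):
            # The transition posture closes the current segment.
            if current:
                segments.append(current)
                current = []
        else:
            current.append(frame)
    if current:
        segments.append(current)
    return segments

# Frames encoded as symbols; 'T' marks the transition posture.
stream = ['T', 'a', 'b', 'T', 'c', 'T']
print(segment_by_transition(stream, lambda f: f == 'T'))  # → [['a', 'b'], ['c']]
```

Each resulting segment would then go to the preprocessing component before classification.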
PHOTOGRAPHING DEVICE AND VEHICLE
A photographing device includes a first image sensor, a first filter area, a second image sensor, a first distance calculating unit, and a second distance calculating unit. The first image sensor includes a first sensor receiving light of a first wavelength band and outputting a target image, and a second sensor receiving light of a second wavelength band and outputting a reference image. The first filter area transmits a first light of a third wavelength band, which includes at least part of the first wavelength band, the first light being a part of light incident on the first image sensor. The second image sensor outputs a first image. The first distance calculating unit calculates a first distance to an object captured in the target image and the reference image. The second distance calculating unit calculates a second distance to an object captured in the reference image and the first image.
MOBILE TERMINAL AND CONTROL METHOD THEREFOR
The present invention relates to a mobile terminal including a lighting unit, and a control method therefor. The mobile terminal according to one embodiment of the present invention comprises: a lighting unit having a plurality of light sources; a sensor unit for receiving light outputted from the lighting unit and reflected off a subject; and a control unit for synchronizing, through the sensor unit, areas at which lights outputted from the plurality of light sources are emitted, and controlling, on the basis of preset conditions, the lighting unit so that at least one of the plurality of light sources emits light.
ACTIVE DUAL PIXEL STEREO SYSTEM FOR DEPTH EXTRACTION
A miniaturized active dual pixel stereo system and method for close range depth extraction includes a projector adapted to project a locally distinct projected pattern onto an image of a scene and a dual pixel sensor including a dual pixel sensor array that generates respective displaced images of the scene. A three-dimensional image is generated from the displaced images of the scene by projecting the locally distinct projected pattern onto the image of the scene, capturing the respective displaced images of the scene using the dual pixel sensor, generating disparity images from the respective displaced images of the scene, determining depth to each pixel of the disparity images, and generating the three-dimensional image from the determined depth to each pixel. A three-dimensional image of a user's hands generated by the active dual pixel stereo system may be processed by gesture recognition software to provide an input to an electronic eyewear device.
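The step "determining depth to each pixel of the disparity images" typically reduces to stereo triangulation, depth = f * B / d. The sketch below assumes that standard model; the focal length and baseline values are illustrative (a real dual pixel baseline is far smaller, which is why the system targets close range).

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Standard stereo triangulation: depth = focal length * baseline / disparity.
    Assumed model; not taken from the patent text."""
    return focal_px * baseline_mm / disparity_px

# Hypothetical numbers: 1400 px focal length, 1 mm effective baseline.
print(depth_from_disparity(disparity_px=2.0, focal_px=1400.0, baseline_mm=1.0))  # → 700.0 mm
```

Applying this per pixel over the disparity image yields the depth map from which the three-dimensional image is generated.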
Capturing and aligning panoramic image and depth data
This application generally relates to capturing and aligning panoramic image and depth data. In one embodiment, a device is provided that comprises a housing and a plurality of cameras configured to capture two-dimensional images, wherein the cameras are arranged at different positions on the housing and have different azimuth orientations relative to a center point such that the cameras have a collective field-of-view spanning up to 360° horizontally. The device further comprises a plurality of depth detection components configured to capture depth data, wherein the depth detection components are arranged at different positions on the housing and have different azimuth orientations relative to the center point such that the depth detection components have the collective field-of-view spanning up to 360° horizontally.
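Whether cameras at different azimuth orientations achieve a collective field-of-view spanning 360° horizontally is a simple coverage question. The brute-force check below ignores parallax between camera positions (a geometric simplification); the spacing and field-of-view numbers are illustrative assumptions.

```python
def covers_full_circle(azimuths_deg, fov_deg, step=1.0):
    """Check that every heading falls inside at least one camera's horizontal
    field of view (parallax ignored; a geometric simplification)."""
    def covered(theta):
        # Angular distance from heading theta to each camera's optical axis.
        return any(min((theta - a) % 360, (a - theta) % 360) <= fov_deg / 2
                   for a in azimuths_deg)
    angle = 0.0
    while angle < 360.0:
        if not covered(angle):
            return False
        angle += step
    return True

# Four cameras at 90-degree spacing, each with a 100-degree horizontal FOV.
print(covers_full_circle([0, 90, 180, 270], fov_deg=100))  # → True
print(covers_full_circle([0, 90, 180, 270], fov_deg=80))   # → False
```

With 90° spacing, each camera needs at least a 90° horizontal field of view for the collective coverage to close the circle.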