H04N13/271

METHOD AND APPARATUS FOR DEPTH-MAP ESTIMATION OF A SCENE

The method for determining a depth map of a scene comprises generation of a distance map of the scene obtained by time-of-flight measurements, acquisition of two images of the scene from two different viewpoints, and stereoscopic processing of the two images taking the distance map into account. The generation of the distance map includes generation of distance histograms acquisition zone by acquisition zone of the scene, and the stereoscopic processing includes, for each region of the depth map corresponding to an acquisition zone, elementary processing taking the corresponding histogram into account.
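One way such a per-zone histogram could guide stereo processing is by bounding the disparity search for the corresponding region, using the standard pinhole relation d = f·B/z. The sketch below is illustrative only; the function and parameter names (`zone_disparity_range`, `focal_px`, `baseline_m`) are hypothetical and not taken from the patent.

```python
import numpy as np

def zone_disparity_range(histogram, bin_edges, focal_px, baseline_m, margin=2):
    """Convert a zone's ToF distance histogram into a disparity search range.

    The histogram peak gives the dominant distance in the zone; stereo
    matching for that region then only searches disparities near the
    distance the ToF measurement already suggests.
    """
    peak = np.argmax(histogram)
    z = 0.5 * (bin_edges[peak] + bin_edges[peak + 1])   # bin centre, metres
    d = focal_px * baseline_m / z                       # disparity in pixels
    return max(0, int(d) - margin), int(d) + margin

# Example: a zone whose ToF returns cluster around 1.75 m
hist = np.array([0, 1, 8, 2, 0])
edges = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])        # metre bin edges
lo, hi = zone_disparity_range(hist, edges, focal_px=800, baseline_m=0.1)
```

Restricting each region's disparity search this way both speeds up matching and rejects matches inconsistent with the ToF evidence.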

Projection Apparatus, Collection Apparatus, and Three-dimensional Scanning System with Same
20220003541 · 2022-01-06

The present application discloses a projection apparatus, a collection apparatus, and a three-dimensional scanning system with the same. The projection apparatus includes: a light emission portion, configured to emit multiple rays of preset light, wherein the multiple rays of preset light correspond to multiple preset wavebands, and the multiple preset wavebands are different from each other; and a light transmission portion, disposed on a transmission path of the rays of preset light, wherein the rays of preset light are transmitted via a preset pattern on the light transmission portion to generate target light projected onto a target object in the form of color-coded fringes, and the light transmission portion transmits the rays of preset light corresponding to at least two different preset wavebands.
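As a rough illustration of color-coded fringes, the snippet below builds a striped pattern in which each stripe is assigned one of several distinct wavebands, encoded here as RGB primaries for simplicity. This is a minimal sketch, not the patent's actual coding scheme; the names `color_coded_fringes` and `WAVEBAND_COLORS` are hypothetical.

```python
import numpy as np

# Illustrative stand-ins for distinct preset wavebands
WAVEBAND_COLORS = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]  # red / green / blue

def color_coded_fringes(width, height, stripe_px, code):
    """Build a fringe image; `code` lists a waveband index per stripe.

    The repeating color sequence lets a decoder identify which stripe
    it is looking at, disambiguating fringe order on the target object.
    """
    img = np.zeros((height, width, 3), dtype=np.uint8)
    for x in range(width):
        stripe = (x // stripe_px) % len(code)
        img[:, x] = WAVEBAND_COLORS[code[stripe]]
    return img

pattern = color_coded_fringes(640, 480, stripe_px=8, code=[0, 1, 2, 1])
```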

NON-RIGID STEREO VISION CAMERA SYSTEM

A long-baseline and long depth-range stereo vision system is provided that is suitable for use in non-rigid assemblies where relative motion between two or more cameras of the system does not degrade estimates of a depth map. The stereo vision system may include a processor that tracks camera parameters as a function of time to rectify images from the cameras even during fast and slow perturbations to camera positions. Factory calibration of the system is not needed, and manual calibration during regular operation is not needed, thus simplifying manufacturing of the system.
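Tracking camera parameters as a function of time could, for instance, mean smoothing per-frame estimates of the relative camera pose before using them for rectification, so both slow drift and fast perturbations are followed. The sketch below uses simple exponential smoothing; the class name and estimator are assumptions for illustration, not the patent's method.

```python
class OnlineExtrinsics:
    """Smooth noisy per-frame estimates of relative camera parameters
    (e.g. rotation angles and translation components) over time, so
    rectification can track a non-rigid camera mount without factory
    or manual calibration. Purely illustrative."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha    # higher alpha reacts faster to perturbations
        self.state = None

    def update(self, measurement):
        if self.state is None:
            self.state = list(measurement)
        else:
            self.state = [(1 - self.alpha) * s + self.alpha * m
                          for s, m in zip(self.state, measurement)]
        return self.state

tracker = OnlineExtrinsics(alpha=0.5)
pose = tracker.update([0.0, 0.0, 0.0])   # first sample initialises the state
pose = tracker.update([2.0, 2.0, 2.0])   # smoothed toward the new estimate
```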

RENDERING AUGMENTED REALITY WITH OCCLUSION
20210352262 · 2021-11-11

AR elements are occluded in video image frames. A depth map is determined for an image frame of a video received from a video capture device. An AR graphical element for overlaying over the image frame is received. An element distance for the AR graphical element relative to a position of a user of the video capture device (e.g., the geographic position of the video capture device) is also received. Based on the depth map for the image frame, a pixel distance is determined for each pixel in the image frame. The pixel distances of the pixels in the image frame are compared to the element distance, and in response to a pixel distance for a given pixel being less than the element distance, the pixel of the image frame is displayed rather than the corresponding pixel of the AR graphical element.
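The per-pixel comparison described above amounts to a simple compositing rule: wherever the scene is nearer than the AR element, show the camera pixel. A minimal sketch, with hypothetical names and ignoring AR-layer transparency:

```python
import numpy as np

def composite_with_occlusion(frame, ar_layer, depth_map, element_distance):
    """Overlay an AR element but keep real pixels that are closer.

    frame, ar_layer: (H, W, 3) uint8 images; depth_map: (H, W) in metres.
    Where a scene pixel is nearer than the AR element's distance, the
    camera pixel is displayed instead of the AR pixel (occlusion).
    """
    occluded = depth_map < element_distance      # real object is in front
    out = ar_layer.copy()
    out[occluded] = frame[occluded]
    return out

frame = np.zeros((2, 2, 3), np.uint8)            # camera image (black)
ar = np.full((2, 2, 3), 255, np.uint8)           # AR layer (white)
depth = np.array([[1.0, 5.0], [1.0, 5.0]])       # metres per pixel
out = composite_with_occlusion(frame, ar, depth, element_distance=3.0)
```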

CAMERA DEVICE AND DEPTH INFORMATION EXTRACTION METHOD OF SAME
20220003873 · 2022-01-06

A camera device according to an embodiment of the present invention includes: a light output unit which outputs output light signals to be emitted to an object and includes a plurality of light sources arrayed according to predetermined rules; a lens unit which includes an infrared (IR) filter and at least one lens disposed on the IR filter, and focuses input light signals reflected from the object; an image sensor which generates electrical signals from the input light signals focused by the lens unit; an image processing unit which acquires depth information about the object by using phase differences or time differences between the output light signals and the input light signals received by the image sensor; and a control unit which controls the light output unit, the lens unit, the image sensor, and the image processing unit, wherein the plurality of light sources are divided into at least two light source groups, the control unit controls the output light signals to be output sequentially for each light source group, the image sensor includes at least two pixel groups divided for each of the light source groups, and the control unit controls the input light signals to be focused sequentially for each pixel group.
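For the phase-difference case, the textbook indirect-ToF relation converts the measured phase shift between emitted and received modulated light into distance. This is the standard principle, shown here as a sketch rather than the patent's specific implementation:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(delta_phi_rad, mod_freq_hz):
    """Indirect ToF: d = c * delta_phi / (4 * pi * f_mod).

    The factor 4*pi (rather than 2*pi) accounts for the round trip to
    the object and back. Distances are unambiguous only up to
    c / (2 * f_mod), after which the phase wraps around.
    """
    return C * delta_phi_rad / (4.0 * math.pi * mod_freq_hz)

# A phase shift of pi at 20 MHz modulation corresponds to about 3.75 m
d = depth_from_phase(math.pi, 20e6)
```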

APPARATUS AND METHOD FOR GENERATING THREE-DIMENSIONAL IMAGE

The present invention can provide a depth image generation apparatus that solves the SNR problem caused by resolution degradation and an insufficient amount of received light when photographing a remote object, without increasing the light-emitting amount. The apparatus comprises: a light source for generating light to be emitted toward an object; a first optical system for emitting the light generated by the light source at the object as a dot pattern; an image sensor for receiving light reflected from the object and converting same into an electrical signal; an image processor for acquiring depth data through the electrical signal; and a control unit connected to the light source, the first optical system, the image sensor and the image processor, wherein the control unit controls the first optical system so as to scan the object by moving the dot pattern in a preset pattern.
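Scanning a sparse dot pattern through a preset sequence of sub-grid shifts lets successive frames fill in the gaps between dots, densifying depth coverage without raising per-dot emission power. The patent does not specify the scan pattern; the raster of offsets below is an illustrative assumption, as are the function names.

```python
import numpy as np

def scan_offsets(step_px, n_steps):
    """Preset scan pattern: an n x n raster of sub-grid offsets."""
    return [(dx * step_px, dy * step_px)
            for dy in range(n_steps) for dx in range(n_steps)]

def shifted_dot_mask(h, w, pitch, offset):
    """Boolean mask of dot positions on a regular grid, shifted by offset."""
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = offset
    return ((xs - dx) % pitch == 0) & ((ys - dy) % pitch == 0)

# With a dot pitch of 2 px and a 2x2 raster of 1 px steps, the union of
# the shifted masks covers every pixel, though no single frame does.
covered = np.zeros((4, 4), bool)
for off in scan_offsets(step_px=1, n_steps=2):
    covered |= shifted_dot_mask(4, 4, pitch=2, offset=off)
```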

PASSIVE THREE-DIMENSIONAL IMAGE SENSING BASED ON REFERENTIAL IMAGE BLURRING
20210352263 · 2021-11-11

Techniques are described for passive three-dimensional image sensing based on referential image blurring. For example, a filter mask is integrated with a lens assembly to provide one or more normal imaging bandpass (NIB) regions and one or more reference imaging bandpass (RIB) regions, the regions being optically distinguishable and corresponding to different focal lengths and/or different focal paths. As light rays from a scene object pass through the different regions of the filter mask, a sensor can detect first and second images responsive to those light rays focused through the NIB region and the RIB region, respectively (according to their respective focal lengths and/or respective focal paths). An amount of blurring between the images can be measured and correlated to an object distance for the scene object. Some embodiments project additional reference illumination, in the form of reference illumination flooding and/or spotted illumination, to enhance blurring detection.
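Measuring "an amount of blurring between the images" requires some per-patch sharpness metric; a common choice (assumed here, not specified by the patent) is the variance of a discrete Laplacian. The ratio of sharpness between the NIB and RIB images would then be mapped to object distance via a per-device calibration curve. All names below are hypothetical.

```python
import numpy as np

def blur_metric(patch):
    """Sharpness proxy: variance of a 4-neighbour discrete Laplacian.

    Lower variance means a smoother (more blurred) patch.
    """
    lap = (-4 * patch[1:-1, 1:-1] + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(np.var(lap))

def relative_blur(normal_patch, reference_patch):
    """Sharpness ratio between the NIB and RIB images of the same patch;
    a real system would map this ratio to distance via calibration."""
    return blur_metric(normal_patch) / (blur_metric(reference_patch) + 1e-12)

sharp = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # checkerboard
blurred = np.full((8, 8), 0.5)                              # fully smoothed
ratio = relative_blur(sharp, blurred)                       # >> 1: NIB sharper
```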
