H04N13/271

Dynamic vision sensor and projector for depth imaging
11330247 · 2022-05-10

Systems, devices, and techniques related to matching features between a dynamic vision sensor and one or both of a dynamic projector or another dynamic vision sensor are discussed. Such techniques include casting a light pattern with projected features having differing temporal characteristics onto a scene and determining the correspondence(s) based on matching changes in detected luminance and temporal characteristics of the projected features.
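
The correspondence step described above can be sketched as matching detected luminance-change rates against the known temporal signatures of the projected features. A minimal, hypothetical sketch (the feature IDs, blink frequencies, and nearest-frequency rule are illustrative assumptions, not the patent's actual algorithm):

```python
def match_by_frequency(projected, detected, tol=1.0):
    """Pair each detected blink frequency with the closest projected feature.

    projected: dict feature_id -> blink frequency of the projected feature (Hz)
    detected:  dict pixel -> measured luminance-change frequency at that pixel (Hz)
    Returns dict pixel -> feature_id, or None when no frequency is within tol.
    """
    matches = {}
    for pixel, f_obs in detected.items():
        best_id, best_err = None, tol
        for fid, f_proj in projected.items():
            err = abs(f_obs - f_proj)
            if err <= best_err:
                best_id, best_err = fid, err
        matches[pixel] = best_id
    return matches
```

Once a pixel is matched to a projected feature, depth follows from standard triangulation between the sensor and the projector (or the second sensor).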

Imaging system configured to use time-of-flight imaging and stereo imaging
11330246 · 2022-05-10

An imaging system is configured to use an array of time-of-flight (ToF) pixels to determine depth information using the ToF imaging method and/or the stereo imaging method. A light emitting component emits light to illuminate a scene and a light detecting component detects reflected light via the array of ToF pixels. A ToF pixel is configured to determine phase shift data based on a phase shift between the emitted light and the reflected light, as well as intensity data based on an amplitude of the reflected light. Multiple ToF pixels are shared by a single micro-lens. This enables multiple offset images to be generated using the intensity data measured by each ToF pixel. Accordingly, via a configuration in which multiple ToF pixels share a single micro-lens, depth information can be determined using both the ToF imaging method and the stereo imaging method.
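
The ToF part of the abstract relies on the standard phase-shift depth relation. A minimal sketch of that formula (the function name is ours; the factor of 2 accounts for the round trip of the modulated light):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad, modulation_hz):
    """Depth from the measured phase shift between emitted and reflected
    amplitude-modulated light:  d = c * phi / (4 * pi * f_mod)."""
    return C * phase_shift_rad / (4 * math.pi * modulation_hz)
```

For example, a phase shift of pi radians at a 30 MHz modulation frequency corresponds to a depth of roughly 2.5 m. The stereo path in the abstract instead compares the offset intensity images formed under the shared micro-lens.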

Structured light projection optical system for obtaining 3D data of object surface
11326874 · 2022-05-10

A structured light projection optical system for obtaining 3D data of an object surface includes a structured light projection optical part configured to project a plurality of patterns onto an object or a screen, and an imaging optical part configured to obtain 3D data by photographing the patterns projected from the structured light projection optical part. The structured light projection optical part includes a plurality of light sources and a plurality of pattern masks. As the light sources are turned on and off, each pattern mask corresponds to whichever light source is currently illuminated, and the plurality of patterns are projected onto the object or the screen through the pattern masks. Accordingly, various patterns can be projected effectively, real-time measurement can be performed easily through quick pattern changes, and accurate 3D data can be obtained.
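
When the patterns projected by the successive source/mask pairs are binary stripe patterns, each camera pixel observes an on/off sequence across the pattern set that identifies the stripe illuminating it. A sketch of that decoding step, assuming plain binary-coded stripes (a common structured-light scheme, not necessarily the one this patent uses):

```python
def decode_stripe_index(on_off_sequence):
    """Interpret a pixel's on/off observations across N projected binary
    patterns as an N-bit code identifying the illuminating stripe."""
    index = 0
    for lit in on_off_sequence:
        index = (index << 1) | (1 if lit else 0)
    return index
```

The recovered stripe index fixes the projector-side ray for that pixel, from which depth follows by triangulation.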

NON-MECHANICAL BEAM STEERING ASSEMBLY
20220141447 · 2022-05-05

A depth camera assembly (DCA) for depth sensing of a local area. The DCA includes a transmitter, a receiver, and a controller. The transmitter illuminates a local area with outgoing light in accordance with emission instructions. The transmitter includes a fine steering element and a coarse steering element. The fine steering element deflects one or more optical beams at a first deflection angle to generate one or more first order deflected scanning beams. The coarse steering element deflects the one or more first order deflected scanning beams at a second deflection angle to generate the outgoing light projected into the local area. The receiver captures one or more images of the local area including portions of the outgoing light reflected from the local area. The controller determines depth information for one or more objects in the local area based in part on the captured one or more images.
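
The two-stage steering can be pictured as each coarse position being refined by a sweep of fine deflections. A toy enumeration under a small-angle additive model (an assumption for illustration; the abstract does not state how the two deflection angles compose):

```python
def scan_angles(coarse_steps_deg, fine_steps_deg):
    """Enumerate outgoing beam angles for a two-stage scanner: the fine
    element adds a small first deflection on top of each coarse position."""
    return [c + f for c in coarse_steps_deg for f in fine_steps_deg]
```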

Systems and methods for encoding image files containing depth maps stored as metadata

Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
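
The post-processing step, modifying pixels based on the depths in the metadata depth map, can be illustrated with a toy depth-based "refocus": pixels near a chosen focal depth are kept sharp, others are attenuated as a stand-in for a real blur kernel. Entirely illustrative; the patent's rendering application is not specified at this level:

```python
def refocus(pixels, depths, focal_depth, tolerance=0.5):
    """Toy depth-guided post-process over a flat pixel list: keep pixels
    within `tolerance` of the focal depth, dim the rest (crude "blur")."""
    out = []
    for p, d in zip(pixels, depths):
        if abs(d - focal_depth) <= tolerance:
            out.append(p)
        else:
            out.append(p // 2)  # stand-in for out-of-focus blur
    return out
```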

Method for Operating a 3D Body Scanner

A method for operating a 3D body scanner, which generates a 3D body model, includes the steps of measuring an installation geometry of the 3D body scanner; calculating, by a control unit, an optimal scanning distance between a person to be scanned and the 3D body scanner on the basis of at least the installation geometry; and projecting, by a position indicator projector, an optical indicator for the person to be scanned onto a floor in front of the 3D body scanner at the optimal scanning distance. A 3D body scanner configured for performing this method is also described.
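
One plausible geometric rule for the optimal-distance calculation is the distance at which the subject just fills the scanner's vertical field of view. The formula below is a hypothetical pinhole-model sketch (camera at the subject's mid-height); the patent does not specify the actual computation:

```python
import math

def optimal_scan_distance(subject_height_m, vertical_fov_deg):
    """Distance at which a subject of the given height spans the scanner's
    vertical field of view: d = (h / 2) / tan(fov / 2)."""
    half_fov = math.radians(vertical_fov_deg) / 2.0
    return (subject_height_m / 2.0) / math.tan(half_fov)
```

A 2 m tall subject and a 90 degree vertical field of view give a distance of 1 m, which would then be marked on the floor by the position indicator projector.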

Method and apparatus for processing three-dimensional (3D) image

A method for processing a three-dimensional (3D) image includes acquiring a frame of a color image and a frame of a depth image, and generating a frame by combining the acquired frame of the color image with the acquired frame of the depth image. The generating of the frame includes combining a line of the color image with a corresponding line of the depth image.
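
The line-by-line combination described above can be sketched as interleaving each color line with its corresponding depth line into one packed frame (representation of a "line" is left abstract; the pairing logic is the point):

```python
def combine_frames(color_lines, depth_lines):
    """Build a combined frame by interleaving each color-image line with
    the corresponding line of the depth image."""
    assert len(color_lines) == len(depth_lines)
    combined = []
    for c, d in zip(color_lines, depth_lines):
        combined.append(c)
        combined.append(d)
    return combined
```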