H04N13/271

Non-mechanical beam steering for depth sensing

A depth camera assembly (DCA) for depth sensing of a local area. The DCA includes a transmitter, a receiver, and a controller. The transmitter illuminates a local area with outgoing light in accordance with emission instructions. The transmitter includes a fine steering element and a coarse steering element. The fine steering element deflects one or more optical beams at a first deflection angle to generate one or more first order deflected scanning beams. The coarse steering element deflects the one or more first order deflected scanning beams at a second deflection angle to generate the outgoing light projected into the local area. The receiver captures one or more images of the local area including portions of the outgoing light reflected from the local area. The controller determines depth information for one or more objects in the local area based in part on the captured one or more images.
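The cascaded fine/coarse arrangement can be sketched numerically. Under a small-angle approximation the two deflection angles simply add, so a coarse step selects a sector and the fine element sweeps densely within it. The function name, step sizes, and angular ranges below are illustrative assumptions, not values from the abstract.

```python
import numpy as np

def steer(fine_deg, coarse_deg):
    # Cascaded stages: under a small-angle approximation the fine and
    # coarse deflections add to give the outgoing beam angle.
    return fine_deg + coarse_deg

# Hypothetical scan pattern: 7 coarse sectors, each swept finely.
coarse_steps = np.arange(-30, 31, 10)     # +/-30 deg in 10-deg sectors
fine_steps = np.linspace(-5.0, 5.0, 21)   # +/-5 deg at 0.5-deg pitch
angles = np.array([steer(f, c) for c in coarse_steps for f in fine_steps])
```

The product of a wide coarse range and a fine pitch yields dense angular coverage (here ±35°) without mechanical parts, which is the point of the two-stage design.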

Three-dimensional image sensor based on time of flight and electronic apparatus including the image sensor

A Time-of-Flight (ToF)-based three-dimensional (3D) image sensor includes at least two first photogates symmetrically arranged in a central portion of a pixel, at least two first gates configured to remove an overflow charge generated in the at least two first photogates, and a first gate group. The at least two first gates are arranged symmetrically in an outer portion of the pixel. The first gate group includes a plurality of gates configured to store and transmit charges generated in the at least two first photogates. The first gate group is arranged in the outer portion of the pixel.
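The charges these photogates collect feed the standard ToF depth computation, which the abstract does not spell out. As a sketch under that assumption, the usual 4-phase scheme recovers the modulation phase from four demodulated charge samples and converts it to depth; the function and parameter names here are hypothetical.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(q0, q90, q180, q270, f_mod_hz):
    # Phase of the returned modulation, recovered from four charges
    # integrated at 0/90/180/270-degree demodulation phases; depth is
    # the distance that round-trip phase delay corresponds to.
    phase = math.atan2(q270 - q90, q0 - q180) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * f_mod_hz)
```

At a 20 MHz modulation frequency the unambiguous range is C/(2·f) ≈ 7.5 m, which is why the pixel must integrate and transfer charges with low noise over the full range.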

Three-dimensional hand tracking using depth sequences
09811721 · 2017-11-07

In the field of human-computer interaction (HCI), the study of the interfaces between people (i.e., users) and computers, understanding how the user wishes to interact with the computer is an important problem. The ability to understand human gestures, and hand gestures in particular, is a key aspect of understanding the user's intentions and desires in a wide variety of applications. In this disclosure, a novel system and method for three-dimensional hand tracking using depth sequences is described. The major contributions of the hand tracking system described herein include: 1.) a robust hand detector that is invariant to scene background changes; 2.) a bi-directional tracking algorithm that prevents detected hands from always drifting closer to the front of the scene (i.e., forward along the z-axis of the scene); and 3.) various hand verification heuristics.

Time-of-flight image sensor and light source driver having simulated distance capability
09812486 · 2017-11-07

An apparatus is described that includes an image sensor and a light source driver circuit having configuration register space to receive information pertaining to a command to simulate a distance between a light source and an object that is different than an actual distance between the light source and the object.
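One way to realize the simulated distance is to shift the light-source trigger relative to the sensor's demodulation clock by the round-trip time difference between the simulated and actual distances. This is a sketch of that idea only; the function name, units, and sign convention are assumptions, and the actual register interface is not described in the abstract.

```python
C = 299_792_458.0  # speed of light, m/s

def simulated_delay_ns(actual_m, simulated_m):
    # Extra round-trip delay the driver must add (positive) or remove
    # (negative) so the sensor measures `simulated_m` instead of the
    # true distance `actual_m`.
    return 2.0 * (simulated_m - actual_m) / C * 1e9
```

Simulating 2.5 m from an actual 1.0 m separation requires only about 10 ns of added delay, which is why this is practical to implement in the driver's timing circuitry.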

Accounting for perspective effects in images
09813693 · 2017-11-07

Images captured at short distances, such as “selfies,” can be improved by addressing magnification and perspective effects present in the images. Distance information, such as a three-dimensional depth map, can be determined for an object using stereoscopic imaging or another distance-measuring approach. Based on a magnification function determined for a camera lens, magnification levels for different regions of the captured images can be determined. At least some of these regions can then be adjusted or transformed in order to provide more consistent magnification levels across those regions, thereby reducing anamorphic effects. Where appropriate, gaps in the image can also be filled to enhance the image. At least some control over the amount of adjustment may be provided to users for aesthetic control.
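A minimal sketch of the per-region adjustment, assuming a simple thin-lens model m = f/(d − f) in place of the measured magnification function the abstract describes: each region's scale factor is referenced to the farthest region, so nearer (more magnified) regions are shrunk toward consistency.

```python
import numpy as np

def correction_scale(depth_m, f=0.004):
    # Thin-lens magnification at each depth, referenced to the
    # farthest region: nearer regions get a scale factor below 1.
    m = f / (depth_m - f)
    m_ref = f / (depth_m.max() - f)
    return m_ref / m

depths = np.array([[0.30, 0.30],
                   [0.45, 0.60]])   # e.g. nose vs. ears in a selfie, metres
scales = correction_scale(depths)
```

Shrinking the near regions rather than enlarging the far ones keeps the transformed image within the original frame, at the cost of the gaps the abstract says may then be filled.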

METHOD FOR ALIGNMENT OF LOW-QUALITY NOISY DEPTH MAP TO THE HIGH-RESOLUTION COLOUR IMAGE
20170316602 · 2017-11-02

Various embodiments are provided which relate to the field of image signal processing, specifically to the generation of a depth-view image of a scene from a set of input images taken by different cameras of a multi-view imaging system. A method comprises obtaining a frame of an image of a scene and a frame of a depth map regarding the frame of the image. A minimum depth and a maximum depth of the scene and a number of depth layers for the depth map are determined. Pixels of the image are projected to the depth layers to obtain projected pixels on the depth layers, and cost values are determined for the projected pixels. The cost values are filtered, and a filtered cost value is selected from a layer to obtain a depth value of a pixel of an estimated depth map.
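The filter-and-select step at the end is the classic cost-volume winner-take-all. As a sketch: given a per-layer cost volume (the projection and cost computation happen upstream), filter each layer and pick the lowest-cost layer per pixel. The 3×3 box filter stands in for whatever filter the method actually uses, and uniform layer spacing is an assumption.

```python
import numpy as np

def box3(c):
    # 3x3 box filter with edge padding -- a stand-in for the
    # unspecified cost-filtering step.
    p = np.pad(c, 1, mode='edge')
    h, w = c.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def depth_from_cost_volume(cost, depth_min, depth_max):
    # cost has shape (layers, H, W); layers assumed uniformly spaced
    # between the determined minimum and maximum scene depths.
    layers = np.linspace(depth_min, depth_max, cost.shape[0])
    filtered = np.stack([box3(c) for c in cost])
    return layers[filtered.argmin(axis=0)]   # winner-take-all per pixel
```

Filtering before the argmin is what lets a noisy low-quality depth map borrow spatial support from the high-resolution colour image, which is the alignment idea in the title.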

DRILLING RIG

A system comprising a drilling rig having a rig floor, a derrick, a master control computer system, and at least one camera. The at least one camera captures a master image of at least a portion of the rig floor and sends the master image to the master control computer system, which maps said master image into a model to facilitate control of items on said drilling rig.

IMAGE PROJECTION
20170316594 · 2017-11-02

According to one example for outputting image data, an image comprising a surface and an object is captured by a sensor. An object mask based on the captured image is created on a processor. A first composite image based on the object mask and a source content file is created. In an example, the first composite image is projected onto the surface.
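One plausible reading of the compositing step, sketched here as an assumption rather than the patent's actual method: the source content is blanked wherever the object mask is set, so projected light does not land on the detected object (e.g. a hand over the projection surface). The mask semantics and function name are hypothetical.

```python
import numpy as np

def first_composite(source_rgb, object_mask):
    # Zero out source content under the object mask so the projector
    # illuminates only the unoccluded surface.
    out = source_rgb.copy()
    out[object_mask] = 0
    return out

source = np.ones((2, 2, 3))            # toy 2x2 RGB source content
mask = np.array([[True, False],
                 [False, False]])      # object covers the top-left pixel
composite = first_composite(source, mask)
```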

DEPTH MAP GENERATION BASED ON CLUSTER HIERARCHY AND MULTIPLE MULTIRESOLUTION CAMERA CLUSTERS
20170318280 · 2017-11-02

Techniques for depth map generation using cluster hierarchy and multiple multiresolution camera clusters are described. In one example embodiment, the method includes capturing images using multiple multiresolution camera clusters. Multiple low-resolution depth maps are then generated by down-scaling the captured high-resolution and mid-resolution images to lower-resolution images. A low-resolution central camera depth map is generated using the refined multiple low-resolution depth maps. Captured lower-resolution images are then up-scaled to mid-resolution images. Mid-resolution depth maps are then generated for each cluster using multiple viewpoints and the up-scaled mid-resolution images. A high-resolution depth map is then generated using the refined initial mid-resolution depth map, the low-resolution central camera depth map, and the up-scaled central cluster images. A 3D image of the captured scene is then generated using the generated high-resolution depth map and the captured low-, mid-, and high-resolution images.
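The coarse-to-fine hand-off between resolutions can be sketched in its simplest form: a low-resolution depth map is up-scaled to seed the next level, where a finer pass would refine each pixel by searching only near the seeded value. Nearest-neighbour up-scaling is an assumption; the abstract does not specify the interpolation.

```python
import numpy as np

def upsample_depth(d_low, scale=2):
    # Nearest-neighbour up-scaling of a coarse depth map; each output
    # pixel inherits the depth of its parent low-resolution pixel.
    return np.repeat(np.repeat(d_low, scale, axis=0), scale, axis=1)

d_low = np.array([[1.0, 2.0],
                  [3.0, 4.0]])    # toy 2x2 low-resolution depth map
d_mid = upsample_depth(d_low)     # 4x4 seed for the mid-resolution pass
```

Seeding each level this way is what keeps the per-level search range, and hence the cost of the high-resolution pass, small.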