Patent classifications
H04N13/271
METHOD FOR PREDICTING DEPTH MAP CODING DISTORTION OF TWO-DIMENSIONAL FREE VIEWPOINT VIDEO
Disclosed is a method for predicting depth map coding distortion of a two-dimensional free viewpoint video, including: inputting sequences of texture maps and depth maps of stereoscopic videos of two or more viewpoints; synthesizing, by using a view synthesis algorithm, a texture map of a first intermediate viewpoint between a current to-be-coded viewpoint and a first adjacent viewpoint, and a texture map of a second intermediate viewpoint between the current to-be-coded viewpoint and a second adjacent viewpoint; recording a synthesis characteristic of each pixel according to the texture maps and generating a distortion prediction weight; and calculating a total distortion from the synthesis characteristic and the distortion prediction weight.
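As a sketch of the final step, the total distortion can be computed as a weighted sum of per-pixel coding errors, where the weight encodes each pixel's synthesis characteristic. The squared-error form and the specific weights below are illustrative assumptions, not the patent's exact formula:

```python
import numpy as np

def predict_total_distortion(depth_orig, depth_coded, weights):
    """Weighted sum-of-squared-differences distortion estimate.

    `weights` is a per-pixel distortion-prediction weight derived from
    the synthesis characteristic of each pixel (e.g. whether the pixel
    is occluded, interpolated, or directly warped during view synthesis).
    The squared-error weighting is an assumption for illustration.
    """
    err = depth_orig.astype(np.float64) - depth_coded.astype(np.float64)
    return float(np.sum(weights * err ** 2))

# Toy example: 4x4 depth maps with a single coding error of 2 levels.
depth_orig = np.full((4, 4), 100, dtype=np.uint8)
depth_coded = depth_orig.copy()
depth_coded[0, 0] = 98          # coding error at one pixel
weights = np.ones((4, 4))
weights[0, 0] = 0.5             # synthesis characteristic downweights it
```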
OPTICAL PATTERN PROJECTION
This disclosure describes optical systems for projecting an irregular or complex pattern onto a region of space (e.g., a two-dimensional or three-dimensional object or scene). A respective light beam is emitted from each of a plurality of light sources. The emitted light beams are collectively diffracted in accordance with a plurality of different first grating parameters to produce a plurality of first diffracted light beams. The first diffracted light beams are then collectively diffracted in accordance with one or more second grating parameters.
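The two-stage diffraction cascade can be illustrated with the grating equation under a paraxial (small-angle) assumption, in which the sine of each output angle is the sum of the per-stage contributions. The wavelength, grating pitches, and diffraction orders below are illustrative choices, not values from the disclosure:

```python
import math

def cascade_orders(wavelength, pitch1_list, pitch2, orders=(-1, 0, 1)):
    """Paraxial sketch of a two-stage diffraction cascade.

    Each first-stage grating pitch in `pitch1_list` fans a beam into
    several orders; a second grating (pitch `pitch2`) fans each of those
    again. In the paraxial approximation the transverse components add:
        sin(theta) ~= m1 * lam / d1 + m2 * lam / d2
    Returns the sorted set of distinct output angles in degrees.
    """
    angles = set()
    for d1 in pitch1_list:
        for m1 in orders:
            for m2 in orders:
                s = m1 * wavelength / d1 + m2 * wavelength / pitch2
                if abs(s) <= 1.0:           # keep propagating orders only
                    angles.add(round(math.degrees(math.asin(s)), 3))
    return sorted(angles)

# 850 nm source, two first-stage pitches, one coarse second-stage pitch
spots = cascade_orders(850e-9, [4e-6, 5e-6], 20e-6)
```

Using several different first-stage pitches is what makes the combined far-field spot pattern irregular rather than a simple periodic grid.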
CAMERA
A camera and associated method of operation, the camera comprising a plurality of sensor systems, each sensor system comprising at least one spatial sensor and at least one image sensor, wherein at least part of the field of view of each sensor system differs from at least part of the field of view of at least one other sensor system.
MEASUREMENT TOOL, CALIBRATION METHOD, CALIBRATION APPARATUS, AND COMPUTER-READABLE RECORDING MEDIUM
A measurement tool includes a first member and a measurement member. The first member has a surface including a chart to be used for calibration of a stereo camera. The measurement member is arranged on the surface of the first member and includes a light source and a second member. The light source is configured to emit light with a uniform intensity regardless of position on the surface. The second member is configured to cover the light source and radiate the light from a plurality of first holes and a plurality of second holes that are larger than the first holes.
DISPARITY SEARCH RANGE COMPRESSION
Techniques related to disparity search range compression are discussed. Such techniques may include determining a combination of disparity search values that do not coexist in any one of multiple search ranges, each search range associated with pixels of an initial disparity map; compressing that combination of disparity values to a single disparity label; and performing disparity estimation based on a disparity search label set that includes the single disparity label.
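A minimal sketch of the compression idea, assuming a greedy grouping rule (the disclosure does not specify one): disparity values that never appear together in any pixel's search range cannot be confused with each other, so they can safely share a single label, shrinking the label set the estimator has to search:

```python
def compress_disparity_labels(search_ranges):
    """Merge disparity values that never coexist in any search range.

    `search_ranges` is a list of per-pixel candidate-disparity sets.
    Greedy grouping sketch: two disparities may share a label only if
    no single search range contains both of them.
    """
    all_vals = sorted(set().union(*search_ranges))
    conflicts = {v: set() for v in all_vals}
    for rng in search_ranges:
        for a in rng:
            conflicts[a].update(rng - {a})   # values that co-occur with a
    labels = []                              # each label is a set of values
    for v in all_vals:
        for grp in labels:
            if conflicts[v].isdisjoint(grp): # no conflict with this group
                grp.add(v)
                break
        else:
            labels.append({v})
    return labels

# 7 distinct disparity values, but near and far ranges never overlap
ranges = [{0, 1, 2}, {1, 2, 3}, {8, 9, 10}]
labels = compress_disparity_labels(ranges)
```

Here the seven raw disparity values compress to three labels, because e.g. disparity 0 and disparity 8 never occur in the same pixel's range.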
METHOD AND APPARATUS FOR AUTOMATIC CALIBRATION OF RGBZ SENSORS UTILIZING EPIPOLAR GEOMETRY AND SCANNING BEAM PROJECTOR
One or more patterned markers inside the projector module of a three-dimensional (3D) camera are used to facilitate automatic calibration of the camera's depth sensing operation. The 3D camera utilizes epipolar geometry-based imaging in conjunction with laser beam point-scans in a triangulation-based approach to depth measurements. A light-sensing element and one or more reflective markers inside the projector module facilitate periodic self-calibration of the camera's depth sensing operation. To calibrate the camera, the markers are point-scanned using the laser beam and the reflected light is sensed using the light-sensing element. Based on the output of the light-sensing element, the laser's turn-on delay is adjusted to precisely align a laser light spot with the corresponding reflective marker. Using the reflective markers, the exact direction and speed of the scanning beam over time can also be determined. The marker-based automatic calibration can run periodically in the background without interfering with normal camera operation.
SYSTEM AND METHOD OF THREE-DIMENSIONAL SCANNING FOR CUSTOMIZING FOOTWEAR
A method for generating shoe recommendations includes: capturing, by a scanning system, a plurality of depth maps of a foot, the depth maps corresponding to different views of the foot; generating, by a processor, a 3D model of the foot from the plurality of depth maps; computing, by the processor, one or more measurements from the 3D model of the foot; computing, by the processor, one or more shoe parameters based on the one or more measurements; computing, by the processor, a shoe recommendation based on the one or more shoe parameters; and outputting, by the processor, the shoe recommendation.
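The measurements-to-recommendation steps can be sketched as follows. The Paris-point size formula, the width-class threshold, and the catalog schema are all assumptions for illustration, not the patent's method:

```python
from dataclasses import dataclass

@dataclass
class FootMeasurements:
    """Measurements computed from the 3D foot model (illustrative subset)."""
    length_mm: float
    width_mm: float

def shoe_size_eu(length_mm):
    """Foot length to EU size via the Paris-point rule (size = 1.5 x last
    length in cm, with ~2 cm toe allowance). A common approximation, not
    the patent's formula."""
    return round(1.5 * (length_mm + 20) / 10)

def recommend(meas, catalog):
    """Pick catalog entries matching the computed size and width class.
    `catalog` entries are hypothetical (size, width_class, name) tuples."""
    size = shoe_size_eu(meas.length_mm)
    width = "wide" if meas.width_mm / meas.length_mm > 0.40 else "regular"
    return [name for s, w, name in catalog if s == size and w == width]

catalog = [(42, "regular", "Runner A"),
           (42, "wide", "Trail B"),
           (43, "regular", "Runner C")]
meas = FootMeasurements(length_mm=260, width_mm=98)
```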
Resolving Three Dimensional Spatial Information using Time-shared Structured Lighting that Embeds Digital Communication
Systems, methods, and computer readable media to resolve three-dimensional spatial information of cameras used to construct 3D images. Various embodiments perform communication synchronization between a first image capture system and one or more other image capture systems and generate a first flash pulse that projects a light pattern into an environment. An image is captured that includes the light pattern and a modulated optical signal encoded with an identifier of the first image capture system and related camera information. A second image capture system may flash at a second time based on the communication synchronization. During the second flash, the first image capture system captures a second image of the environment. Based on the first and second images, the first image capture system determines the orientation of the second image capture system relative to the first image capture system.
REDUCING POWER CONSUMPTION FOR TIME-OF-FLIGHT DEPTH IMAGING
Aspects of the embodiments are directed to passive depth determination. Initially, a high power depth map of a scene can be created. An object in the scene can be identified, such as a rigid body or other object or portion of an object. A series of lower power or RGB images can be captured. The object can be located in one or more of the lower power or RGB images. A change in the position of an object, represented by a set of pixels, can be determined. From the change in position of the object, a new depth of the object can be extrapolated. The extrapolated depth of the object can be used to update the high power depth map.
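A minimal sketch of the extrapolation step, assuming purely in-plane translation of a rigid object (so its depth values move with it unchanged) and that the shifted object stays inside the frame; how vacated pixels are filled is an illustrative choice:

```python
import numpy as np

def update_depth_from_motion(depth_map, obj_mask, shift, background=0.0):
    """Extrapolate an object's depth after a 2D translation.

    `shift` is the (dy, dx) displacement of the object estimated from the
    low-power/RGB frames. The object's depth pixels are moved by that
    displacement with their values unchanged (valid for in-plane motion of
    a rigid body); motion toward/away from the camera is ignored here.
    Vacated pixels are filled with `background` as a placeholder.
    """
    dy, dx = shift
    new_depth = depth_map.copy()
    ys, xs = np.nonzero(obj_mask)
    vals = depth_map[ys, xs]
    new_depth[ys, xs] = background        # vacated pixels: unknown/background
    new_depth[ys + dy, xs + dx] = vals    # object carries its depth along
    return new_depth

# Toy example: one object pixel at (1, 1) with depth 2.0 m moves by (1, 2)
depth = np.zeros((5, 5))
depth[1, 1] = 2.0
mask = np.zeros((5, 5), dtype=bool)
mask[1, 1] = True
updated = update_depth_from_motion(depth, mask, (1, 2))
```

This is how the occasional high-power time-of-flight depth map can be kept current between captures without re-illuminating the scene.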
Adaptive structured light patterns
A method of depth map optimization using an adaptive structured light pattern is provided that includes capturing, by a camera in a structured light imaging device, a first image of a scene into which a pre-determined structured light pattern is projected by a projector in the structured light imaging device, generating a first disparity map based on the captured first image and the structured light pattern, adapting the structured light pattern based on the first disparity map to generate an adaptive pattern, wherein at least one region of the structured light pattern is replaced by a different pattern, capturing, by the camera, a second image of the scene into which the adaptive pattern is projected by the projector, generating a second disparity map based on the captured second image and the adaptive pattern, and generating a depth image using the second disparity map.
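The pattern-adaptation step can be sketched as a block-wise replacement: blocks where the first disparity map is mostly invalid get a different code. The block size, the `-1` invalid marker, and the checkerboard replacement tile are illustrative assumptions (the patent only says at least one region is replaced by a different pattern), and the sketch assumes image dimensions are multiples of the block size:

```python
import numpy as np

def adapt_pattern(pattern, disparity, block=4):
    """Replace pattern blocks where first-pass matching mostly failed.

    `pattern` is a binary projector pattern; `disparity` is the first
    disparity map with failed pixels marked -1. Blocks that are more than
    half invalid are re-coded with a denser checkerboard tile.
    """
    out = pattern.copy()
    yy, xx = np.indices((block, block))
    dense_code = (yy + xx) % 2                      # replacement tile
    h, w = pattern.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = disparity[y:y + block, x:x + block]
            if np.mean(tile == -1) > 0.5:           # mostly invalid block
                out[y:y + block, x:x + block] = dense_code
    return out

# Toy example: matching succeeded on the left half, failed on the right
pattern = np.zeros((8, 8), dtype=int)
disparity = np.full((8, 8), -1)
disparity[:, :4] = 3                                # valid disparities
adapted = adapt_pattern(pattern, disparity)
```

A second capture with the adapted pattern would then feed the second disparity map and the final depth image.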