H04N13/271

Active stereo depth prediction based on coarse matching

An electronic device estimates a depth map of an environment by matching reduced-resolution stereo images captured by depth cameras. The electronic device downsamples the captured images and matches sections of the reduced-resolution images to each other to generate a coarse disparity (depth) map. The electronic device then upsamples the coarse depth map to a higher resolution and refines the upsampled map to generate a high-resolution depth map that supports location-based functionality.
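The coarse-to-fine pipeline described above can be sketched in NumPy. This is a minimal illustration under assumed choices (box-average downsampling, sum-of-absolute-differences block matching, nearest-neighbour upsampling); the patent does not specify these particulars, and a real system would add subpixel matching and a refinement stage.

```python
import numpy as np

def downsample(img, factor=2):
    # Box-average downsampling to the reduced matching resolution.
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def coarse_disparity(left, right, block=4, max_disp=8):
    # Exhaustive block matching at low resolution using a SAD cost.
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = left[y:y + block, x:x + block]
            best_d, best_cost = 0, np.inf
            for d in range(min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_d, best_cost = d, cost
            disp[y:y + block, x:x + block] = best_d
    return disp

def upsample_disparity(disp, factor=2):
    # Nearest-neighbour upsampling; disparity values scale with resolution.
    return np.kron(disp, np.ones((factor, factor), dtype=disp.dtype)) * factor
```

For a rectified pair where the right view is the left view shifted by two pixels, `coarse_disparity` recovers a constant disparity of 2 away from the image border, and `upsample_disparity` doubles both the map's resolution and its disparity values.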

Caching and updating of dense 3D reconstruction data

A method to efficiently update and manage the outputs of real-time or offline 3D reconstruction and scanning on a mobile device with limited resources and limited Internet connectivity is provided. The method makes fresh, accurate, and comprehensive 3D reconstruction data available to a wide variety of mobile XR applications, whether single-user applications or multi-user applications sharing and updating the same 3D reconstruction data. The method includes a block-based 3D data representation that allows local updates while maintaining neighbor consistency, and a multi-layer caching mechanism that retrieves, prefetches, and stores 3D data efficiently for XR applications. Between sessions of an XR device, blocks may be persisted on the device or in remote storage in one or more cache layers. Upon starting a new session, the device may selectively use blocks from one or more layers of the cache.
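A minimal sketch of the multi-layer block cache idea, assuming dictionary-backed layers ordered fastest-first (e.g., memory, on-device storage, remote storage) and a simple promote-on-hit policy; the class and its policy are illustrative, not the patent's implementation.

```python
class BlockCache:
    """Multi-layer cache for 3D reconstruction blocks (hypothetical sketch).

    Layers are ordered fastest-first; a hit in a slower layer is
    promoted to the fastest layer for subsequent lookups.
    """

    def __init__(self, n_layers=3):
        self.layers = [dict() for _ in range(n_layers)]

    def put(self, block_id, data, layer=0):
        # Store a block in the given cache layer (e.g., layer 2 = remote).
        self.layers[layer][block_id] = data

    def get(self, block_id):
        # Search layers fastest-first; promote hits from slower layers.
        for i, layer in enumerate(self.layers):
            if block_id in layer:
                data = layer[block_id]
                if i > 0:
                    self.layers[0][block_id] = data
                return data
        return None
```

A new session can then `get` blocks persisted in any layer, and frequently used blocks naturally migrate to the fastest layer.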

Multi-dimensional rendering
11589024 · 2023-02-21

A photo filter (e.g., multi-dimensional) light field effect system includes an eyewear device having a frame, a temple connected to a lateral side of the frame, and a depth-capturing camera. Execution of programming by a processor configures the system to create an image in each of at least two dimensions and to create a multi-dimensional light field effect image, with an appearance of spatial rotation or movement and transitional change, by blending together a left photo filter image and a right photo filter image in each dimension and then blending the blended images from all dimensions.
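The two-stage blending can be sketched as follows; the abstract does not specify the blend functions, so a fixed alpha blend within each dimension and a uniform average across dimensions are assumed here purely for illustration.

```python
import numpy as np

def light_field_blend(left_imgs, right_imgs, alpha=0.5):
    # Stage 1: blend the left and right photo filter images within each
    # dimension (hypothetical fixed-alpha blend).
    per_dim = [alpha * l + (1.0 - alpha) * r
               for l, r in zip(left_imgs, right_imgs)]
    # Stage 2: blend the per-dimension results into the final
    # multi-dimensional light field effect image (uniform average assumed).
    return np.mean(per_dim, axis=0)
```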

Synchronous event driven readout of pixels in a detector for direct time-of-flight depth sensing

A depth camera assembly (DCA) includes a direct time of flight system for determining depth information for a local area. The DCA includes an illumination source, a camera, and a controller. The illumination source projects light (e.g., a pulse of light) into the local area. The camera detects reflections of the projected light from objects in the local area. The camera includes a detector where pixels are grouped into multiple macropixels that are coupled to an output bus. Information describing light detected by pixels in specific macropixels is obtained from those macropixels. In some configurations, each macropixel includes a counter that is incremented based on detection of light by pixels in the macropixel. The counter may be used to select the macropixels from which data is obtained.
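The counter-driven selection can be sketched as below, assuming a simple threshold criterion (the abstract does not say how the counter values select macropixels, so the threshold is an illustrative choice).

```python
import numpy as np

def accumulate_counts(detections, macro_of_pixel, n_macropixels):
    # Each pixel detection event increments its macropixel's counter.
    counts = np.zeros(n_macropixels, dtype=int)
    for pixel in detections:
        counts[macro_of_pixel[pixel]] += 1
    return counts

def macropixels_to_read(counts, threshold):
    # Only macropixels whose counter reached the threshold are placed
    # on the shared output bus for readout (hypothetical criterion).
    return [m for m, c in enumerate(counts) if c >= threshold]
```

This keeps bus traffic proportional to where light was actually detected rather than reading out every macropixel each cycle.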

Method for sensing depth of object by considering external light and device implementing same

The present disclosure relates to a method for sensing the depth of an object while accounting for external light, and to a device implementing the same. According to an embodiment, the method comprises the steps of: storing, in a storage unit, first depth information of an object sensed at a first time point by a depth camera unit of a depth sensing module; storing, in the storage unit, second depth information of the object sensed at a second time point by the depth camera unit; comparing, by a sensing data filtering unit of the depth sensing module, the stored first and second depth information to identify a filtering target region in the second depth information; and adjusting, by a control unit of the depth sensing module, the depth values of the filtered region in the second depth information.
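The compare-and-adjust steps can be sketched as follows. The abstract does not specify the comparison criterion or the adjustment, so a per-pixel depth-jump threshold and a reset to the first frame's value are assumed here for illustration.

```python
import numpy as np

def filter_external_light(depth_t1, depth_t2, max_jump=0.5):
    # Regions whose depth changed implausibly between the two time points
    # are treated as corrupted by external light (hypothetical criterion).
    target = np.abs(depth_t2 - depth_t1) > max_jump
    # Adjust the filtered region: here, reset it to the first frame's
    # depth values (one possible adjustment, assumed for illustration).
    adjusted = depth_t2.copy()
    adjusted[target] = depth_t1[target]
    return adjusted, target
```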

Image processing device, content processing device, content processing system, and image processing method

In a depth image compressing section of an image processing device, a depth image operation section generates a depth image by operation using photographed stereo images. A difference image obtaining section generates a difference image between an actually measured depth image and the computed depth image. In a depth image decompressing section of a content processing device, a depth image operation section generates a depth image by operation using the transmitted stereo images. A difference image adding section restores a depth image by adding the computed depth image to the transmitted difference image.
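Because both ends run the same deterministic depth computation on the same stereo images, only the residual between the measured depth and the computed depth needs to be transmitted. A sketch of that round trip, with a placeholder standing in for the stereo depth operation:

```python
import numpy as np

def depth_from_stereo(left, right):
    # Stand-in for the depth image operation section; both the compressing
    # and decompressing ends run the same deterministic computation.
    # (Placeholder arithmetic, not a real stereo matcher.)
    return (left + right) / 2.0

def compress_depth(measured_depth, left, right):
    # Difference image: measured depth minus the computed depth.
    return measured_depth - depth_from_stereo(left, right)

def decompress_depth(diff, left, right):
    # Recompute depth from the transmitted stereo images and add the
    # transmitted difference image to restore the measured depth.
    return depth_from_stereo(left, right) + diff
```

The residual is typically small and smooth where the computed depth is accurate, which is what makes it cheaper to transmit than the measured depth itself.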

Surface characterisation apparatus and system

A system for characterising surfaces in a real-world scene, the system comprising an object identification unit operable to identify one or more objects within one or more captured images of the real-world scene, a characteristic identification unit operable to identify one or more characteristics of one or more surfaces of the identified objects, and an information generation unit operable to generate information linking an object and one or more surface characteristics associated with that object.
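The information generation step, linking each identified object to its surface characteristics, can be sketched with assumed data shapes (plain strings and lists; the patent does not specify the record format).

```python
def link_surface_info(identified_objects, surface_characteristics):
    # Generate records linking each identified object to the
    # characteristics found for its surfaces; objects with no identified
    # characteristics get an empty list. Data shapes are hypothetical.
    return {obj: surface_characteristics.get(obj, [])
            for obj in identified_objects}
```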

Hybrid imaging system for underwater robotic applications

Hybrid imaging system for 3D imaging of an underwater target, comprising: two optical image sensors for stereoscopic imaging; a switchable structured-light emitter having different wavelengths; a switchable spatially non-coherent light source; a data processor configured for alternating between operating modes which comprise: a first mode wherein the structured-light emitter is activated, the light source is deactivated and the image sensors are activated to capture reflected light from the structured-light emitter, and a second mode wherein the structured-light emitter is deactivated, the light source is activated and the image sensors are activated to capture reflected light from the light source; wherein the data processor is configured for delaying image sensor capture, on the activation of the structured-light emitter and on the activation of the light source, for a predetermined time such that light reflected from any point closer than the target is not captured.
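The capture delay is a form of range gating: the sensor opens only after light could have made the round trip to the target, so backscatter from anything closer arrives too early to be recorded. A sketch of the delay computation, assuming the speed of light in water (roughly 2.25e8 m/s; the exact delay policy is the patent's "predetermined time" and is not specified):

```python
def gate_delay_seconds(target_distance_m, c_water_m_s=2.25e8):
    # Round-trip travel time to the target distance; opening the sensor
    # after this delay rejects reflections from any closer point.
    return 2.0 * target_distance_m / c_water_m_s
```

For a target 2.25 m away, the round-trip time is 20 nanoseconds, so light scattered from suspended particles nearer than the target returns before the gate opens.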

Mobile terminal

A mobile terminal including a display configured to display a rotatable graphic interface; a Time of Flight (TOF) camera configured to obtain a depth image of an object; and a controller configured to control the TOF camera to enter a rotation detection mode based on the object included in the depth image, obtain a relative rotation amount of a plurality of specific points of the object included in the depth image, and rotate the graphic interface on the display based on the obtained relative rotation amount.
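One way to obtain a relative rotation amount from two tracked points is to measure how the segment between them rotates from one depth frame to the next. A 2D-projection sketch (the patent does not specify the computation; this is an illustrative geometry):

```python
import math

def relative_rotation_deg(p1_a, p2_a, p1_b, p2_b):
    # Angle by which the segment from point 1 to point 2 rotated between
    # frame A and frame B, wrapped to (-180, 180] degrees.
    angle_a = math.atan2(p2_a[1] - p1_a[1], p2_a[0] - p1_a[0])
    angle_b = math.atan2(p2_b[1] - p1_b[1], p2_b[0] - p1_b[0])
    delta = math.degrees(angle_b - angle_a)
    return (delta + 180.0) % 360.0 - 180.0
```

The controller could then rotate the on-screen graphic interface by the returned amount each frame.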