H04N13/271

Method and apparatus for processing three-dimensional (3D) image

A method for processing a three-dimensional (3D) image includes acquiring a frame of a color image and a frame of a depth image, and generating a frame by combining the acquired frame of the color image with the acquired frame of the depth image. The generating of the frame includes combining a line of the color image with a corresponding line of the depth image.
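The abstract does not specify the combination format; a minimal sketch, assuming the line-wise combination is a simple row interleave of the two frames (NumPy arrays stand in for the acquired frames):

```python
import numpy as np

def combine_frames(color: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Combine a color frame with a depth frame line by line:
    each color row is followed by the corresponding depth row.
    (One possible reading of the claimed line-wise combination.)"""
    assert color.shape[0] == depth.shape[0], "frames must have equal line counts"
    rows = []
    for color_line, depth_line in zip(color, depth):
        rows.append(color_line)
        rows.append(depth_line)
    return np.stack(rows)

color = np.zeros((480, 640), dtype=np.uint16)   # hypothetical color frame
depth = np.ones((480, 640), dtype=np.uint16)    # hypothetical depth frame
print(combine_frames(color, depth).shape)       # (960, 640)
```

The interleaved layout doubles the line count; a receiver can split the combined frame back into its color and depth halves by taking alternating rows.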

CMOS image sensor for RGB imaging and depth measurement with laser sheet scan

An imaging unit includes a light source and a pixel array. The light source projects a line of light that is scanned in a first direction across a field of view of the light source. The line of light is oriented in a second direction that is substantially perpendicular to the first direction. The pixel array is arranged in at least one row of pixels that extends in a direction that is substantially parallel to the second direction. At least one pixel in a row is capable of generating two-dimensional color information of an object in the field of view based on a first light reflected from the object and is capable of generating three-dimensional (3D) depth information of the object based on the line of light reflecting from the object. The 3D depth information includes time-of-flight information.
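The underlying time-of-flight relation is standard: depth is half the round-trip distance traveled by the reflected light. A one-line sketch (the 10 ns figure is illustrative, not from the abstract):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(round_trip_s: float) -> float:
    """Depth from a measured round-trip time: d = c * t / 2."""
    return C * round_trip_s / 2.0

# a 10 ns round trip corresponds to roughly 1.5 m of depth
print(tof_depth(10e-9))
```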

Handheld three-dimensional coordinate measuring device operatively coupled to a mobile computing device

A handheld device has a projector that projects a pattern of light onto an object, a first camera that captures the projected pattern of light in first images, a second camera that captures the projected pattern of light in second images, and a registration camera that captures a succession of third images. One or more processors determine three-dimensional (3D) coordinates of points on the object based at least in part on the projected pattern, the first images, and the second images, and are further operable to register the determined 3D coordinates based at least in part on common features extracted from the succession of third images. A mobile computing device is operably connected to the handheld device, cooperates with the one or more processors, and is operable to display the registered 3D coordinates of points.
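Registration here means placing each frame's 3D points into one common coordinate frame. A minimal sketch, assuming the per-frame 4x4 poses have already been estimated from the common features in the registration camera's images (the pose-estimation step itself is not shown):

```python
import numpy as np

def register(points_per_frame, poses):
    """Transform each frame's Nx3 point set into a common frame
    using its 4x4 rigid pose, then merge all frames.
    (Poses are assumed given; estimating them from the third
    images is the harder part the abstract alludes to.)"""
    merged = []
    for pts, T in zip(points_per_frame, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
        merged.append((homo @ T.T)[:, :3])               # apply pose, drop w
    return np.vstack(merged)
```

Composing successive relative poses this way lets a handheld scan sweep around an object while all measured points accumulate in a single model.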

Damage detection from multi-view visual data

A plurality of images may be analyzed to determine an object model. The object model may have a plurality of components, and each of the images may correspond with one or more of the components. Component condition information may be determined for one or more of the components based on the images. The component condition information may indicate damage incurred by the object portion corresponding with the component.
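The mapping described, many images each covering one or more object-model components, amounts to aggregating per-image evidence onto components. A sketch under the assumption that each image carries a scalar damage score and that a component's condition is the mean over the images covering it (both assumptions are illustrative, not from the abstract):

```python
from collections import defaultdict

def component_condition(image_to_components, image_damage_scores):
    """Aggregate per-image damage scores onto object-model components.
    Each image may correspond with several components; a component's
    condition is summarized here as the mean score over covering images."""
    scores = defaultdict(list)
    for image, components in image_to_components.items():
        for component in components:
            scores[component].append(image_damage_scores[image])
    return {c: sum(v) / len(v) for c, v in scores.items()}
```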

Multi-perspective display driver

Described examples include an integrated circuit having depth fusion engine circuitry configured to receive stereoscopic image data and, in response to the received stereoscopic image data, generate at least: first and second focal perspective images for viewing by a first eye at multiple focal distances; and third and fourth focal perspective images for viewing by a second eye at multiple focal distances. The integrated circuit further includes display driver circuitry coupled to the depth fusion engine circuitry and configured to drive a display device for displaying at least the first, second, third and fourth focal perspective images.
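One simple way to decompose a single eye's image into two focal-plane images is linear depth blending: each pixel's intensity is split between the near and far planes according to its depth. This is a generic multifocal-display sketch, not the circuit's actual fusion algorithm, which the abstract does not disclose:

```python
import numpy as np

def focal_planes(image, depth, focal_dists=(0.5, 2.0)):
    """Split one eye's image into near- and far-plane images by
    blending each pixel between the two focal distances (meters)
    according to its depth. Applied per eye, this yields the four
    focal perspective images the abstract mentions."""
    near, far = focal_dists
    w = np.clip((depth - near) / (far - near), 0.0, 1.0)  # 0 = near, 1 = far
    return image * (1.0 - w), image * w
```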

CALIBRATION OF DEPTH-SENSING COMPUTER VISION SYSTEMS
20220141445 · 2022-05-05 ·

Systems and methods utilize one or more 3D cameras (e.g., ToF cameras) in industrial safety applications. The 3D camera generates a depth map that may be used by external hardware and software to classify objects in a workcell and generate control signals for machinery. To facilitate sensor-specific calibration and coordination among sensors in a workcell, the sensors may store calibration data in a boot file that is loaded upon start-up. During initialization, the calibration data is loaded and, as the sensor operates, corrections are made to sensed data (e.g., pixel depth values) using the calibration data.
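The boot-file mechanism can be sketched as loading calibration data at start-up and applying it to each sensed depth value. The JSON format and additive per-pixel offsets here are assumptions for illustration; the abstract does not specify the file format or correction model:

```python
import json
import os
import tempfile

def load_calibration(path):
    """Load per-sensor calibration from a boot-time file
    (format assumed: JSON with a per-pixel offset grid)."""
    with open(path) as f:
        return json.load(f)

def correct_depth(raw, offsets):
    """Apply per-pixel offsets to sensed depth values."""
    return [[d + o for d, o in zip(row, offset_row)]
            for row, offset_row in zip(raw, offsets)]

# demo with a tiny synthetic boot file
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"offsets": [[1, -1]]}, f)
    boot_path = f.name
cal = load_calibration(boot_path)
print(correct_depth([[100, 200]], cal["offsets"]))  # [[101, 199]]
os.remove(boot_path)
```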

Wide viewing angle stereo camera apparatus and depth image processing method using the same
11729367 · 2023-08-15 ·

Disclosed are a wide viewing angle stereo camera apparatus and a depth image processing method using the same. The stereo camera apparatus includes a receiver configured to receive a first image and a second image of a subject captured through a first lens and a second lens that are arranged in a vertical direction; a converter configured to convert the received first image and second image using a map projection scheme; and a processor configured to extract a depth of the subject by performing stereo matching, in a height direction, on the first image and the second image converted using the map projection scheme.
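After stereo matching, depth follows the standard triangulation relation Z = f·B/d; with vertically stacked lenses the disparity d is measured in the height direction rather than horizontally. The numeric values below are illustrative:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth from matched stereo points: Z = f * B / d, where f is
    the focal length in pixels, B the lens baseline in meters, and
    d the (vertical, for stacked lenses) disparity in pixels."""
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(800.0, 0.1, 16.0))  # 5.0 (meters)
```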
