H04N13/271

METHOD FOR ACQUIRING DISTANCE FROM MOVING BODY TO AT LEAST ONE OBJECT LOCATED IN ANY DIRECTION OF MOVING BODY BY UTILIZING CAMERA-VIEW DEPTH MAP AND IMAGE PROCESSING DEVICE USING THE SAME
20230086983 · 2023-03-23

A method for acquiring a distance from a moving body to an object located in any direction of the moving body includes steps of: an image processing device (a) instructing a sweep network to project pixels of images, generated by cameras covering all directions of the moving body, onto main virtual geometries and apply a 3D concatenation operation thereon to generate an initial 4D cost volume, (b) generating a final main 3D cost volume therefrom through a cost volume computation network, and (c) generating sub inverse distance indices corresponding to inverse values of sub separation distances between a sub reference point and sub virtual geometries, and main inverse distance indices corresponding to inverse values of main separation distances between a main reference point and the main virtual geometries, by using a sub cost volume and the final main 3D cost volume, to thereby acquire the distance to the object.
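The inverse-distance indexing of step (c) can be sketched as follows: the virtual geometries are sampled uniformly in inverse distance, and a per-pixel soft-argmin over the cost volume's geometry axis yields an expected inverse distance index whose reciprocal is the distance estimate. This is a minimal NumPy sketch under those assumptions; the function name and the soft-argmin readout are illustrative, not the patent's exact formulation.

```python
import numpy as np

def distances_from_cost_volume(cost, d_min, d_max):
    """Read out per-pixel distances from a matching-cost volume.

    cost  : (N, H, W) matching cost for N virtual geometries per pixel
    d_min : distance of the nearest virtual geometry
    d_max : distance of the farthest virtual geometry
    """
    n = cost.shape[0]
    # Virtual geometries sampled uniformly in *inverse* distance,
    # so space near the reference point is covered more densely.
    inv = np.linspace(1.0 / d_max, 1.0 / d_min, n)          # (N,)
    # Numerically stable soft-argmin over the geometry axis.
    logits = -cost
    logits -= logits.max(axis=0, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=0, keepdims=True)                        # (N, H, W)
    # Expected inverse distance index -> distance estimate.
    expected_inv = (p * inv[:, None, None]).sum(axis=0)      # (H, W)
    return 1.0 / expected_inv
```

A sharply peaked cost volume makes the soft-argmin behave like a hard argmin, so the returned distance approaches the distance of the best-matching virtual geometry.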

SYSTEMS AND METHODS FOR GROUND TRUTH GENERATION USING SINGLE PHOTON AVALANCHE DIODES

A system for single photon avalanche diode image capture is configurable to, over a frame capture time period, selectively activate an illuminator to alternately emit light from the illuminator and refrain from emitting light from the illuminator. The system is configurable to, over the frame capture time period, perform a plurality of sequential shutter operations to configure each SPAD pixel of the SPAD array to enable photon detection. The plurality of sequential shutter operations generates, for each SPAD pixel of the SPAD array, a plurality of binary counts indicating whether a photon was detected during each of the plurality of sequential shutter operations. The system is configurable to, based on a first set of binary counts of the plurality of binary counts, generate an ambient light image. The system is configurable to, based on a second set of binary counts of the plurality of binary counts, generate an illuminated image.
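The alternating capture scheme can be sketched as splitting the per-shutter binary counts by illuminator state and averaging each subset into an image. The function name and the rate normalization below are illustrative assumptions, not the patent's exact pipeline.

```python
import numpy as np

def split_spad_frames(binary_counts, illum_on):
    """Form ambient and illuminated images from SPAD binary counts.

    binary_counts : (T, H, W) array of 0/1 photon detections, one slice
                    per sequential shutter operation
    illum_on      : (T,) boolean, True where the illuminator emitted light
    """
    illum_on = np.asarray(illum_on, dtype=bool)
    n_on = max(int(illum_on.sum()), 1)
    n_off = max(int((~illum_on).sum()), 1)
    # Averaging the binary counts gives a per-pixel detection rate.
    ambient = binary_counts[~illum_on].sum(axis=0) / n_off
    illuminated = binary_counts[illum_on].sum(axis=0) / n_on
    return ambient, illuminated
```

Because both images come from shutter operations interleaved within one frame capture time period, the ambient image can serve as a reference against which the illuminated image is compared.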

METHOD AND APPARATUS FOR PROCESSING MULTI-VIEW VIDEO, DEVICE AND STORAGE MEDIUM
20230086988 · 2023-03-23

A computer device acquires multi-view video data that includes video data of multiple views. The computer device performs view group division on the multi-view video data based on the multiple views to obtain at least one view group. The computer device determines first spatial region information of the at least one view group. The first spatial region information includes information of a three-dimensional spatial region where the at least one view group is located. The computer device encapsulates the multi-view video data and the first spatial region information.
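The view-group division and spatial-region encapsulation can be sketched as chunking the views into groups and attaching each group's axis-aligned bounding box of camera positions as its 3D spatial region. This is a hedged sketch; the grouping rule, field names, and region representation are assumptions for illustration.

```python
def encapsulate_view_groups(views, group_size):
    """Divide multi-view video views into groups and attach each group's
    first spatial region information (bounding box of view positions).

    views : list of {'id': ..., 'position': (x, y, z)}
    """
    groups = [views[i:i + group_size] for i in range(0, len(views), group_size)]
    encapsulated = []
    for gid, group in enumerate(groups):
        xs, ys, zs = zip(*(v['position'] for v in group))
        # Spatial region information: anchor point plus extents.
        region = {'anchor': (min(xs), min(ys), min(zs)),
                  'size': (max(xs) - min(xs),
                           max(ys) - min(ys),
                           max(zs) - min(zs))}
        encapsulated.append({'group_id': gid,
                             'view_ids': [v['id'] for v in group],
                             'spatial_region': region})
    return encapsulated
```

In a real container format the encapsulated records would be written alongside the coded video data so a client can select only the view groups whose spatial region it needs.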

SPATIAL AUDIO CAPTURE AND ANALYSIS WITH DEPTH
20220345813 · 2022-10-27

Spatial audio signals can include audio objects that can be respectively encoded and rendered at each of multiple different depths. In an example, a method for encoding a spatial audio signal can include receiving audio scene information from an audio capture source in an environment, and receiving a depth characteristic of a first object in the environment. The depth characteristic can be determined using information from a depth sensor. A correlation can be identified between at least a portion of the audio scene information and the first object. The spatial audio signal can be encoded using the portion of the audio scene and the depth characteristic of the first object.
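The correlation step can be sketched as matching each audio object's direction to the nearest depth-sensor detection and attaching that detection's depth before encoding. The matching rule (nearest azimuth within a tolerance) and all field names are illustrative assumptions, not the patent's method.

```python
def encode_with_depth(audio_objects, depth_objects, max_angle_deg=10.0):
    """Attach a depth-sensor depth to each audio object whose direction
    correlates (here: nearest azimuth within max_angle_deg) with a
    detected object, then emit the encoded object list.

    audio_objects : [{'name', 'azimuth_deg', 'signal'}]
    depth_objects : [{'azimuth_deg', 'depth_m'}]  (from the depth sensor)
    """
    encoded = []
    for obj in audio_objects:
        best = min(depth_objects,
                   key=lambda d: abs(d['azimuth_deg'] - obj['azimuth_deg']),
                   default=None)
        depth = None
        if best is not None and \
                abs(best['azimuth_deg'] - obj['azimuth_deg']) <= max_angle_deg:
            depth = best['depth_m']
        encoded.append({'name': obj['name'],
                        'azimuth_deg': obj['azimuth_deg'],
                        'depth_m': depth,
                        'signal': obj['signal']})
    return encoded
```

A renderer receiving such objects can then place each one at its encoded depth rather than on a fixed-radius sphere.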

Image Interpolation Method and Device Based on RGB-D Image and Multi-Camera System

The present invention discloses an image interpolation method and device based on RGB-D images and a multi-camera system. The method comprises: performing camera calibration on each camera in the multi-camera system; determining a position of a new camera for interpolation according to position information of each camera in the multi-camera system, and calculating a camera pose of the new camera from the camera calibration data; calculating a plurality of initial interpolated images in one-to-one correspondence with designated images captured by each camera of the multi-camera system, according to the projection relationship and pose information of each camera; performing image fusion on the initial interpolated images to obtain a fused interpolated image; and performing pixel completion on the fused interpolated image to obtain an interpolated image for the new camera.
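The per-camera step of producing an initial interpolated image can be sketched as a forward warp: each RGB-D pixel is back-projected with the intrinsics, transformed into the new camera's pose, and re-projected. This minimal NumPy sketch assumes a pinhole model and does no z-buffering; fusion and pixel completion would then combine the warped images from all cameras and fill the pixels left unmasked.

```python
import numpy as np

def warp_to_new_view(rgb, depth, K, T_src_to_new):
    """Forward-warp an RGB-D image into a new camera pose.

    rgb          : (H, W, 3) source image
    depth        : (H, W) source depth map
    K            : (3, 3) pinhole intrinsics (shared by both cameras)
    T_src_to_new : (4, 4) rigid transform from source to new camera
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, N)
    # Back-project pixels to 3D points in the source camera frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    # Move the points into the new camera frame and re-project.
    pts_new = (T_src_to_new @ pts_h)[:3]
    proj = K @ pts_new
    z = proj[2]
    valid = z > 1e-6
    uu = np.round(proj[0, valid] / z[valid]).astype(int)
    vv = np.round(proj[1, valid] / z[valid]).astype(int)
    inb = (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H)
    out = np.zeros_like(rgb)
    mask = np.zeros((H, W), dtype=bool)
    src_cols = rgb.reshape(-1, 3)[valid][inb]
    out[vv[inb], uu[inb]] = src_cols
    mask[vv[inb], uu[inb]] = True   # unmasked pixels are holes to complete
    return out, mask
```

With the identity transform the warp reproduces the source image exactly, which is a convenient sanity check before using real poses.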

Image processing apparatus and method extracting second RGB and ToF feature points having a correlation with first RGB and ToF feature points

An image processing apparatus and method that: extract a second RGB feature point and a second ToF feature point such that a correlation between a first RGB feature point and a first ToF feature point is equal to or greater than a predetermined value; calculate an error value between the second RGB feature point and the second ToF feature point; update pre-stored calibration data when the error value is greater than a threshold value, and calibrate the RGB image and the ToF image by using the updated calibration data; and synthesize the calibrated RGB and ToF images.
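The error check and calibration update can be sketched by modeling the pre-stored calibration as an affine map from ToF to RGB coordinates, measuring the mean point-to-point error, and re-fitting the map by least squares when that error exceeds the threshold. The affine model and function name are assumptions for illustration, not the patent's calibration representation.

```python
import numpy as np

def check_and_update_calibration(rgb_pts, tof_pts, calib, threshold):
    """Compare matched RGB/ToF feature points under the current calibration
    and re-fit it when the mean error exceeds the threshold.

    rgb_pts, tof_pts : (N, 2) matched feature point coordinates
    calib            : (2, 3) affine map from ToF to RGB coordinates
    """
    tof_h = np.hstack([tof_pts, np.ones((len(tof_pts), 1))])   # (N, 3)
    mapped = tof_h @ calib.T                                   # (N, 2)
    error = float(np.mean(np.linalg.norm(mapped - rgb_pts, axis=1)))
    if error > threshold:
        # Update the pre-stored calibration data by least squares.
        new_calib, *_ = np.linalg.lstsq(tof_h, rgb_pts, rcond=None)
        calib = new_calib.T
    return calib, error
```

The returned calibration can then be applied to re-align the ToF image to the RGB image before the two are synthesized.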