H04N13/271

Application processor for disparity compensation between images of two cameras in digital photographing apparatus

A digital photographing device may include a plurality of cameras on a common side of the device, an application processor for switching image capture between the cameras, and a display. The application processor may switch the images output on the display when the cameras are switched. During the image transition, one or more virtual viewpoint images are output between a pre-transition image and a post-transition image. The virtual viewpoint images interpolate the disparity between the pre-transition and post-transition images caused by the cameras being located at different positions, resulting in a smooth visual transition. When a camera switching input includes a zoom factor signal, the virtual viewpoint images may be compensated according to the input zoom factor and the disparity.
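The transition described above can be sketched as a simple disparity interpolation. This is an illustrative sketch, not the patent's actual implementation: the function names are assumed, the interpolation schedule is taken to be linear, and each virtual viewpoint is approximated by a plain horizontal shift of the pre-transition image (a real device would warp using calibrated camera geometry).

```python
import numpy as np

def virtual_viewpoint_offsets(disparity_px, zoom_factor=1.0, num_frames=5):
    """Horizontal pixel offsets for the intermediate frames that interpolate
    the disparity between the pre- and post-transition images.
    Offsets scale with the zoom factor, since a zoomed view magnifies the
    apparent baseline shift between the two cameras.  (Assumed linear schedule.)"""
    # Sample num_frames points strictly between 0 and 1, excluding endpoints,
    # so the virtual frames lie between the two real camera views.
    t = np.linspace(0.0, 1.0, num_frames + 2)[1:-1]
    return t * disparity_px * zoom_factor

def render_virtual_frame(pre_image, offset_px):
    """Shift the pre-transition image horizontally by offset_px pixels:
    a crude stand-in for full virtual-view synthesis."""
    return np.roll(pre_image, int(round(offset_px)), axis=1)
```

With a measured disparity of 10 px and a 2x zoom, three virtual frames would be shifted by 5, 10, and 15 px, stepping the view smoothly from one camera's perspective toward the other's.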

Determining positions and orientations of objects

Methods and apparatus for determining poses of objects acquire plural images of the objects from different points of view. The images may be obtained by plural cameras arranged in a planar array. Each image may be processed to identify features such as contours of objects. The images may be projected onto different depth planes to yield depth plane images. The depth plane images for each depth plane may be compared to identify features lying in the depth plane. A pattern matching algorithm may be performed on the features lying in the depth plane to determine the poses of one or more of the objects. The described apparatus and methods may be applied in bin-picking and other applications.
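The depth-plane comparison step can be sketched as a plane sweep: for each candidate depth, each camera's feature map is shifted by its baseline-dependent disparity and the shifted maps are accumulated, so features that line up across views at that depth collect votes there. This is a simplified 1-D-shift illustration under assumed names; the actual method projects full images through homographies onto each depth plane.

```python
import numpy as np

def plane_sweep_votes(feature_maps, baselines_px, depths):
    """For each candidate depth plane, shift every camera's binary feature
    map by its disparity (proportional to baseline / depth) and accumulate.
    votes[k][y, x] approximates the number of views agreeing that a feature
    lies at pixel (x, y) on depth plane k."""
    votes = []
    for z in depths:
        acc = np.zeros_like(feature_maps[0], dtype=float)
        for fmap, b in zip(feature_maps, baselines_px):
            shift = int(round(b / z))  # disparity ~ baseline / depth
            acc += np.roll(fmap, shift, axis=1)
        votes.append(acc)
    return votes
```

Pixels with high vote counts at a given depth would then be handed to the pattern-matching stage to recover object poses.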

Modification of a live-action video recording using volumetric scene reconstruction to replace a designated region

A main video sequence of a live action scene is captured along with ancillary device data to provide corresponding volumetric information about the scene. The volumetric data can then be used to visually remove or replace objects in the main video sequence. A removed object is replaced by the view that would have been captured by the main video sequence had the removed object not been present in the live action scene at the time of capturing.
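Once the volumetric reconstruction has rendered the background view that the object occluded, the per-frame replacement reduces to a masked composite. The sketch below assumes the object mask and the volumetrically rendered background are already available; all names are illustrative.

```python
import numpy as np

def remove_object(frame, object_mask, background_render):
    """Replace the masked object region with the view the camera would have
    captured had the object been absent (background_render is assumed to come
    from the volumetric scene reconstruction)."""
    out = frame.copy()  # leave the captured frame untouched
    out[object_mask] = background_render[object_mask]
    return out
```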

THREE-DIMENSIONAL DATA MULTIPLEXING METHOD, THREE-DIMENSIONAL DATA DEMULTIPLEXING METHOD, THREE-DIMENSIONAL DATA MULTIPLEXING DEVICE, AND THREE-DIMENSIONAL DATA DEMULTIPLEXING DEVICE
20210358175 · 2021-11-18 ·

A three-dimensional data multiplexing method includes: multiplexing pieces of data of a plurality of types including point cloud data to generate an output signal having a file configuration that is predetermined; and storing, in metadata in the file configuration, information indicating a type of each of the pieces of data included in the output signal.
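The multiplexing scheme can be sketched as a container whose metadata header records the type of each payload. The sketch below uses a JSON header as a stand-in for the predetermined file configuration (the real method would use a standardized container format); all function names and the header layout are assumptions.

```python
import json
import struct

def multiplex(streams):
    """Pack typed payloads (e.g. point-cloud, video, sensor bytes) into one
    output signal.  The metadata header records the type and length of each
    payload, as the abstract requires."""
    meta = [{"type": t, "length": len(b)} for t, b in streams]
    header = json.dumps(meta).encode()
    # 4-byte big-endian header length, then the metadata, then the payloads.
    return struct.pack(">I", len(header)) + header + b"".join(b for _, b in streams)

def demultiplex(signal):
    """Recover (type, payload) pairs by reading the metadata header first."""
    hlen = struct.unpack(">I", signal[:4])[0]
    meta = json.loads(signal[4:4 + hlen])
    out, off = [], 4 + hlen
    for m in meta:
        out.append((m["type"], signal[off:off + m["length"]]))
        off += m["length"]
    return out
```

Because the type of every payload lives in the metadata, a demultiplexer can route each stream to the right decoder without inspecting the payload bytes themselves.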

THREE-DIMENSIONAL LOCALIZATION METHOD, SYSTEM AND COMPUTER-READABLE STORAGE MEDIUM
20210358150 · 2021-11-18 ·

Systems and methods are described for three-dimensional localization using light-depth images. For example, some of the methods include accessing a light-depth image, wherein the light-depth image includes a non-visible light depth channel representing distances of objects in a scene viewed from an image capture device, and the light-depth image includes one or more visible light channels that are temporally and spatially synchronized with the depth channel; determining a set of features of the scene in a space based on the light-depth image; accessing a map data structure that includes features based on light data and position data for the objects in the space; accessing matching data derived by matching the set of features of the scene to features of the map data structure; determining a location of the image capture device relative to objects in the space based on the matching data.

UPSAMPLING LOW TEMPORAL RESOLUTION DEPTH MAPS

Systems and methods are provided for upsampling low temporal resolution depth maps. This upsampling is performed by obtaining a stereo pair of images of a scene captured at a first timepoint, generating a first depth map of the scene for the first timepoint by performing stereo matching on the stereo pair of images, obtaining a subsequent stereo pair of images captured at a subsequent timepoint to the first timepoint, and generating a subsequent depth map that corresponds to the subsequent timepoint by applying an edge-preserving filter using the first depth map without performing stereo matching on the subsequent stereo pair of images.
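The edge-preserving step can be sketched as a joint-bilateral-style filter: the previous frame's depth is smoothed with weights taken from the current image's intensities, so depth propagates within similar-intensity regions but not across edges, and no stereo matching is run on the new pair. Function names and the single intensity-range term are assumptions; a production filter would also use a spatial term and vectorized or guided-filter implementations.

```python
import numpy as np

def upsample_depth(prev_depth, cur_image, sigma_i=10.0, radius=1):
    """Produce a depth map for the current frame by filtering the previous
    frame's depth with weights derived from the current image's intensities
    (edge-preserving; no stereo matching on the new stereo pair)."""
    h, w = prev_depth.shape
    out = np.zeros_like(prev_depth, dtype=float)
    for y in range(h):
        for x in range(w):
            ys = slice(max(0, y - radius), min(h, y + radius + 1))
            xs = slice(max(0, x - radius), min(w, x + radius + 1))
            # Neighbors with similar intensity in the CURRENT image get
            # high weight; pixels across an intensity edge get ~0 weight.
            wgt = np.exp(-((cur_image[ys, xs] - cur_image[y, x]) ** 2)
                         / (2.0 * sigma_i ** 2))
            out[y, x] = (wgt * prev_depth[ys, xs]).sum() / wgt.sum()
    return out
```

In flat regions the filter averages the prior depth; at a strong intensity edge in the current image the weights collapse to the pixel's own side, so depth discontinuities stay sharp.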

SYSTEMS AND METHODS FOR TEMPORALLY CONSISTENT DEPTH MAP GENERATION

Systems and methods are provided for performing temporally consistent depth map generation by implementing acts of obtaining a first stereo pair of images of a scene associated with a first timepoint and a first pose, generating a first depth map of the scene based on the first stereo pair of images, obtaining a second stereo pair of images of the scene associated with a second timepoint and a second pose, generating a reprojected first depth map by reprojecting the first depth map to align the first depth map with the second stereo pair of images, and generating a second depth map that corresponds to the second stereo pair of images using the reprojected first depth map.
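The reproject-then-reuse idea can be sketched with a deliberately crude motion model: forward camera motion subtracts from each depth value, a lateral pose change shifts the map in image space, and the reprojected prior is blended with the current estimate. All names, the motion model, and the blending weight are assumptions; a real pipeline would unproject to 3-D, apply the full pose transform, and reproject per pixel.

```python
import numpy as np

def reproject_depth(prev_depth, dz, shift_px):
    """Reproject the first depth map into the second pose: subtract the
    forward camera motion dz from every depth value and shift the map by the
    image-space offset the pose change induces (crude approximation of a
    full unproject/transform/reproject pipeline)."""
    return np.roll(prev_depth - dz, shift_px, axis=1)

def fuse(reprojected, current, alpha=0.5):
    """Blend the reprojected prior with the current-frame estimate so the
    output depth map stays temporally consistent."""
    return alpha * reprojected + (1.0 - alpha) * current
```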