H04N13/271

Field calibration of stereo cameras with a projector

Calibration in the field is described for stereo and other depth camera configurations using a projector. One example includes imaging a first and a second feature in a first camera of the camera system, wherein the distance from the first camera to the projector is known; imaging the first and the second feature in a second camera of the camera system, wherein the distance from the second camera to the projector is known; determining a first disparity between the first camera and the second camera for the first feature; determining a second disparity between the first camera and the second camera for the second feature; and determining an epipolar alignment error of the first camera using the first and the second disparities.
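The disparity-based check described above can be sketched numerically. This is a minimal illustration, not the patented method: it assumes a rectified stereo pair, where matched points should lie on the same image row, so any residual row offset between the two cameras' observations of a projected feature indicates an epipolar alignment error. All coordinates and function names are hypothetical.

```python
# Illustrative sketch: two projected features imaged in both cameras of a
# rectified stereo pair. Horizontal offsets give disparities; residual
# vertical offsets give an epipolar alignment error estimate.

def disparity(pt_cam1, pt_cam2):
    """Horizontal disparity between matching image points (x1 - x2)."""
    return pt_cam1[0] - pt_cam2[0]

def epipolar_error(f1_cam1, f1_cam2, f2_cam1, f2_cam2):
    """Average vertical (row) offset between matched features."""
    e1 = f1_cam1[1] - f1_cam2[1]
    e2 = f2_cam1[1] - f2_cam2[1]
    return (e1 + e2) / 2.0

# Two features, each seen by both cameras; points are (x, y) in pixels.
f1_a, f1_b = (320.0, 241.5), (300.0, 240.0)   # feature 1 in cam 1 / cam 2
f2_a, f2_b = (400.0, 121.5), (370.0, 120.0)   # feature 2 in cam 1 / cam 2

d1 = disparity(f1_a, f1_b)                    # first disparity
d2 = disparity(f2_a, f2_b)                    # second disparity
err = epipolar_error(f1_a, f1_b, f2_a, f2_b)  # vertical misalignment in px
```

In a perfectly aligned rig `err` would be zero; a consistent non-zero value across features suggests a calibration drift rather than matching noise.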

Apparatus and method for representing a spatial image of an object in a virtual environment

Shown is an apparatus for representing a spatial image of an object in a virtual environment. The apparatus includes a first image capturing element configured to generate a first stereoscopic image data stream of an environment. A second image capturing element may generate a second stereoscopic image data stream of the object. A computing unit is configured to receive the first and the second stereoscopic image data streams, proceeding from a reference point, to generate a spatial image of the virtual environment based on the first stereoscopic image data stream and to insert, proceeding from the reference point, the object from the second stereoscopic image data stream into the virtual environment. A display element may, proceeding from the reference point, display the spatial image of the object in the virtual environment so that a viewer of the display element is given the impression of a 3D object in a 3D environment.
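The insertion step described above can be sketched as a per-eye depth composite. This is an illustrative sketch only, assuming both streams are already registered to the common reference point and that each carries a per-pixel depth; the function and variable names are hypothetical.

```python
# Minimal sketch: composite the object stream into the virtual environment.
# Per eye, object pixels that are closer than the environment (smaller depth)
# replace the environment pixels, as seen from the shared reference point.
import numpy as np

def insert_object(env_img, env_depth, obj_img, obj_depth):
    """Depth-composite the object's image into the environment's image."""
    closer = obj_depth < env_depth      # where the object occludes the scene
    out = env_img.copy()
    out[closer] = obj_img[closer]
    return out

# Toy 2x2 single-channel frames for one eye of the stereoscopic pair.
env = np.array([[10, 10], [10, 10]], dtype=np.uint8)
env_d = np.array([[5.0, 5.0], [5.0, 5.0]])
obj = np.array([[200, 200], [200, 200]], dtype=np.uint8)
obj_d = np.array([[2.0, 9.0], [2.0, 9.0]])   # left column closer than env

left_eye = insert_object(env, env_d, obj, obj_d)
```

Running the same composite for the right-eye images of both streams yields the stereoscopic pair the display element presents.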

Stereoscopic imaging device and method for image processing

A stereoscopic imaging device includes at least a first and a second image recording unit configured to record a first and a second original image of an object from different perspectives, wherein the original images differ with regard to at least one item of image information; an image display unit for displaying images; and an image processing unit for further processing the original images, the image processing unit being configured to supplement at least one of the two original images with at least one item of image information from the other original image in order to generate a displayed image. In addition, a method for generating at least one displayed image that can be displayed on an image display unit is provided.
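One plausible instance of the supplementing step is filling regions that are missing in one original image with pixels from the other. This is a hedged sketch, not the claimed processing: it assumes missing pixels are marked with a sentinel value, and all names are illustrative.

```python
# Sketch: supplement one original image with image information from the
# other recording unit, e.g. regions visible only from the second
# perspective. Missing pixels are assumed to be marked with value 0.
import numpy as np

def supplement(primary, secondary, missing_value=0):
    """Copy pixels from the other original image into missing regions."""
    out = primary.copy()
    holes = primary == missing_value
    out[holes] = secondary[holes]
    return out

a = np.array([[0, 50], [60, 0]], dtype=np.uint8)     # first original image
b = np.array([[70, 80], [90, 99]], dtype=np.uint8)   # second original image
displayed = supplement(a, b)                          # displayed image
```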

Multiple camera system with flash for depth map generation
11657529 · 2023-05-23

An example operation of depth map generation includes one or more of, simultaneously capturing a main-off camera image and an auxiliary-off camera image with an unpowered flash, sparse depth mapping an object based on the main-off camera image and the auxiliary-off camera image, capturing a main-on camera image with a powered flash, foreground probability mapping the object based on the main-off camera image and the main-on camera image and dense depth mapping the object based on the sparse depth map and the foreground probability map.
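The last two stages of this pipeline can be sketched numerically. This is an illustrative stand-in, not the patented algorithm: it exploits the fact that a flash brightens nearby (foreground) surfaces far more than the distant background, so the main-on vs. main-off intensity gain gives a foreground probability, which then guides filling of the sparse stereo depth map. All thresholds and names are hypothetical.

```python
# Sketch: foreground probability from flash-on/flash-off frames, then
# dense depth by filling unknown pixels from their class's known depths.
import numpy as np

def foreground_probability(main_off, main_on, eps=1e-6):
    """Relative brightening under flash, normalized to [0, 1]."""
    gain = (main_on.astype(float) - main_off) / (main_off + eps)
    return np.clip(gain / gain.max(), 0.0, 1.0)

def dense_depth(sparse, fg_prob):
    """Fill unknown pixels (NaN) with the mean known depth of their class."""
    out = sparse.copy()
    fg = fg_prob >= 0.5
    known = ~np.isnan(sparse)
    fg_fill = np.nanmean(sparse[fg & known])    # assumes each class has
    bg_fill = np.nanmean(sparse[~fg & known])   # at least one known pixel
    out[np.isnan(out) & fg] = fg_fill
    out[np.isnan(out) & ~fg] = bg_fill
    return out

main_off = np.array([[10.0, 10.0], [100.0, 100.0]])   # flash unpowered
main_on = np.array([[100.0, 100.0], [110.0, 110.0]])  # flash powered
sparse = np.array([[1.0, np.nan], [5.0, np.nan]])     # from stereo matching

prob = foreground_probability(main_off, main_on)
dense = dense_depth(sparse, prob)
```

The top row brightens strongly under flash (foreground) and inherits the known foreground depth; the bottom row barely brightens and inherits the background depth.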

VOLTAGE REGULATION FOR INCREASED ROBUSTNESS IN INDIRECT TIME-OF-FLIGHT SENSORS
20230147085 · 2023-05-11

A time-of-flight sensor includes an integrated circuit chip in which a voltage regulator and a load are disposed. The load includes a grouping of pixel circuits and a modulation driver that is supplied power from the voltage regulator. The grouping of pixel circuits is included in a pixel array disposed in the integrated circuit chip. Each one of the pixel circuits includes a photodiode configured to photogenerate charge in response to reflected modulated light, a floating diffusion configured to store a portion of the charge photogenerated in the photodiode, and a transfer transistor to transfer the portion of charge from the photodiode to the floating diffusion in response to a phase modulation signal generated by the modulation driver. A feedback circuit is coupled between the load and the voltage regulator, and the voltage regulator is coupled to receive a feedback signal from the feedback circuit in response to the load.
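The regulation loop can be illustrated with a toy discrete-time model. This is not circuit-accurate and all constants are hypothetical; it only shows how a feedback signal lets the regulator hold the pixel-array/driver supply near its reference despite a load step such as the modulation driver switching on.

```python
# Toy model: output droops under load current; the feedback error nudges
# the regulator's control voltage until the droop is compensated.
def regulate(v_ref, load_current, steps=200, r_out=0.5, gain=0.2):
    """Simple integrating regulator; load_current(i) gives amps at step i."""
    v_ctl = v_ref
    v_out = v_ref
    for i in range(steps):
        i_load = load_current(i)
        v_out = v_ctl - i_load * r_out   # droop across output resistance
        error = v_ref - v_out            # feedback signal from the load
        v_ctl += gain * error            # regulator correction
    return v_out

# Load current steps up halfway through (e.g. modulation driver turns on).
v_final = regulate(1.8, lambda i: 0.05 if i < 100 else 0.25)
```

Despite a fivefold load-current step, the loop settles back to the 1.8 V reference, which is the robustness property the title refers to.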

SAMPLING ACROSS MULTIPLE VIEWS IN SUPERSAMPLING OPERATION

Sampling across multiple views in supersampling operation is described. An example of an apparatus includes one or more processing resources configured to perform a supersampling operation for image data generated for multiple views utilizing one or more neural networks, the processing resources including at least a first circuitry to process a first current frame including first image data for a first view, and a second circuitry to process a second current frame including second image data for a second view, the first view and second view being displaced from each other, the processing resources to receive for processing the first current frame and the second current frame, and perform supersampling processing utilizing the one or more neural networks based on at least the first current frame and the second current frame and one or more prior generated frames for each of the views.
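The data flow can be sketched without the learned component. In this hedged stand-in, a plain weighted accumulation replaces the neural networks: each view's current frame blends with its own prior generated frame, plus a cross-view term from the displaced sibling view. The weights and names are hypothetical.

```python
# Sketch: per-view temporal accumulation plus a cross-view contribution,
# standing in for the neural supersampling of two displaced views.
import numpy as np

def supersample(cur_a, cur_b, prev_a, prev_b, w_hist=0.5, w_cross=0.1):
    """Blend current frame, prior generated frame, and the other view."""
    out_a = (1 - w_hist - w_cross) * cur_a + w_hist * prev_a + w_cross * cur_b
    out_b = (1 - w_hist - w_cross) * cur_b + w_hist * prev_b + w_cross * cur_a
    return out_a, out_b

cur_a = np.full((2, 2), 1.0)   # current frame, first view
cur_b = np.full((2, 2), 3.0)   # current frame, second (displaced) view
prev = np.full((2, 2), 2.0)    # prior generated frame for each view
out_a, out_b = supersample(cur_a, cur_b, prev, prev)
```

In the described apparatus the two views would be processed by separate circuitry and the blend would be learned; the sketch only shows which inputs reach the operation.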

Systems and methods for self-supervised depth estimation according to an arbitrary camera

Systems, methods, and other embodiments described herein relate to improving depth estimates for monocular images using a neural camera model that is independent of the camera type. In one embodiment, a method includes receiving a monocular image from a pair of training images derived from a monocular video. The method includes generating, using a ray surface network, a ray surface that approximates an image character of the monocular image as produced by a camera having the camera type. The method includes creating a synthesized image according to at least the ray surface and a depth map associated with the monocular image.
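The core geometric step behind such a neural camera model can be sketched directly. This is an illustration under stated assumptions, not the described network: a fixed array stands in for the learned ray surface, and each pixel's 3D point is its ray scaled by the predicted depth; the lifted points would then be re-projected after a camera motion to synthesize the paired training image for a photometric loss.

```python
# Sketch: lift pixels to 3D via a per-pixel ray surface and a depth map.
# The ray surface replaces a fixed pinhole model, so any camera type
# (fisheye, catadioptric, pinhole) fits the same formulation.
import numpy as np

def lift_to_3d(rays, depth):
    """rays: (H, W, 3) per-pixel rays; depth: (H, W) -> (H, W, 3) points."""
    return rays * depth[..., None]

# Toy 1x2 ray surface (unit-length rays) and matching depth map.
rays = np.array([[[0.0, 0.0, 1.0], [0.6, 0.0, 0.8]]])
depth = np.array([[2.0, 5.0]])
points = lift_to_3d(rays, depth)
```

Because the rays themselves are predicted per pixel, no camera intrinsics are needed, which is what makes the model independent of the camera type.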