H04N13/271

3D imaging system and method

A 3D imaging system includes an optical modulator for modulating a returned portion of a light pulse as a function of time. The returned light pulse portion is reflected or scattered from a scene for which a 3D image or video is desired. The 3D imaging system also includes an element array receiving the modulated light pulse portion and a sensor array of pixels, corresponding to the element array. The pixel array is positioned to receive light output from the element array. The element array may include an array of polarizing elements, each corresponding to one or more pixels. The polarization states of the polarizing elements can be configured so that time-of-flight information of the returned light pulse can be measured from signals produced by the pixel array, in response to the returned modulated portion of the light pulse.
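The abstract describes recovering time-of-flight from the ratio of signals behind differently configured polarizing elements. As a rough illustration only (the function name, the linear polarization-rotation ramp, and the two-channel layout are assumptions, not the patent's actual design), a sketch of how such a ratio could be turned into range:

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def depth_from_polarization(i_parallel, i_perp, ramp_duration):
    """Hypothetical sketch: assume the modulator rotates polarization
    linearly over ramp_duration, so the normalized ratio of the two
    orthogonal polarizer channels encodes when the pulse arrived."""
    total = i_parallel + i_perp
    frac = i_parallel / np.maximum(total, 1e-12)   # elapsed fraction of ramp
    tof = frac * ramp_duration                     # round-trip time of flight
    return C * tof / 2.0                           # one-way range in metres
```

With equal energy in both channels the pulse arrived halfway through the ramp, so a 100 ns ramp yields a 50 ns round trip, i.e. a range of 7.5 m.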

Time-of-flight depth image processing systems and methods

Time-of-flight (ToF) depth image processing methods are disclosed. Depth-edge-preserving filters are presented with superior performance to standard edge-preserving filters applied to depth maps. In particular, depth variance is estimated and used to filter while preserving depth edges. In the process, a filter strength is calculated that can also serve as an edge detector. A confidence map is generated that has low confidence at pixels straddling a depth edge and reflects the reliability of the depth measurement at each pixel.
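A minimal sketch of the idea of variance-gated depth filtering with a confidence map (the window size, threshold, and confidence formula here are illustrative assumptions, not the disclosed filter):

```python
import numpy as np

def box_mean(img, r=1):
    """Mean over a (2r+1) x (2r+1) window, with edge padding."""
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy : r + dy + img.shape[0],
                     r + dx : r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def filter_depth(depth, var_threshold=0.01):
    """Hypothetical sketch: estimate local depth variance, smooth only
    where variance is low (flat regions), keep the raw value at likely
    depth edges, and emit a confidence map that is low at pixels
    straddling an edge."""
    mean = box_mean(depth)
    var = box_mean(depth ** 2) - mean ** 2
    edge = var > var_threshold                 # crude edge detector
    filtered = np.where(edge, depth, mean)     # preserve edges, smooth elsewhere
    confidence = 1.0 / (1.0 + var / var_threshold)
    return filtered, confidence
```

Flat regions get averaged at full confidence; pixels whose window straddles a depth discontinuity pass through unfiltered and are flagged as unreliable.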

Method and apparatus for processing three-dimensional images
11212507 · 2021-12-28

A method for transmitting an image according to the present disclosure comprises the steps of: acquiring three-dimensional stereoscopic images; creating a color equirectangular projection (ERP) image and a depth ERP image from the three-dimensional stereoscopic images; and transmitting the color ERP image and the depth ERP image. Pixels of the color ERP image correspond to pixels of the depth ERP image, the pixels of the color ERP image comprise color information, and the pixels of the depth ERP image comprise depth information of corresponding pixels of the color ERP image. The step of creating the color ERP image and the depth ERP image from the three-dimensional stereoscopic images may comprise the steps of: forming a concentric sphere having a specific point on three-dimensional coordinates as a center point; mapping points of the three-dimensional stereoscopic images to the surface of the concentric sphere; generating the color ERP image on the basis of color information of the mapped points; and generating the depth ERP image on the basis of depth information of the mapped points. The three-dimensional stereoscopic images may include three-dimensional position information of the points and color information of the points.
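The mapping step above (points of the stereoscopic image projected onto a concentric sphere and written into an ERP grid) can be sketched as follows; the function name and the particular axis/orientation conventions are assumptions for illustration:

```python
import numpy as np

def point_to_erp(p, center, width, height):
    """Hypothetical sketch: project a 3D point onto an ERP image whose
    rows span latitude [-pi/2, pi/2] and columns span longitude
    [-pi, pi), centred on `center`; returns (u, v, depth).  The color
    ERP image would store the point's color at (v, u) and the depth
    ERP image would store `depth` at the same pixel."""
    d = np.asarray(p, dtype=float) - np.asarray(center, dtype=float)
    depth = float(np.linalg.norm(d))
    lon = np.arctan2(d[0], d[2])           # azimuth around the vertical axis
    lat = np.arcsin(d[1] / depth)          # elevation above the equator
    u = int((lon + np.pi) / (2 * np.pi) * width) % width
    v = min(int((np.pi / 2 - lat) / np.pi * height), height - 1)
    return u, v, depth
```

Because color and depth share the same (u, v) grid, each depth pixel directly annotates the corresponding color pixel, as the abstract requires.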

System and method of imaging using multiple illumination pulses

Imaging systems and methods are disclosed that use multiple illumination pulses to irradiate a scene of interest. An example system includes optics to receive a sequence of returned light pulse portions scattered or reflected from the scene. A modulator is configured to modulate the intensity of the returned light pulse portions to form a sequence of modulated light pulse portions. A means of selectively exposing a sensor during a sequence of exposure periods is also included, so that each modulated light pulse portion is received by the sensor during one of the exposure periods, respectively. The sensor is configured to generate a sequence of sensor signals as a result of receiving the modulated light pulse portions and to accumulate the sensor signals to form a sensor output signal. A processor may read the sensor output signal and process the image based on that signal.
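A compact sketch of the gated accumulation described above, where each modulated pulse portion is admitted during its own exposure period and summed into one sensor output signal (names and the simple per-pulse gain model are assumptions, not the disclosed hardware):

```python
import numpy as np

def accumulate_pulses(returned_pulses, modulation, exposure_mask):
    """Hypothetical sketch: apply a per-pulse modulation gain to each
    returned pulse frame, expose the sensor only during that pulse's
    exposure period, and accumulate the per-pulse signals into a
    single output frame (raising SNR over a single-pulse capture)."""
    acc = np.zeros_like(returned_pulses[0], dtype=float)
    for pulse, gain, exposed in zip(returned_pulses, modulation, exposure_mask):
        if exposed:                      # sensor gated on for this pulse
            acc += gain * pulse          # modulated light pulse portion
    return acc
```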

Residual error mitigation in multiview calibration

Multiview calibration is essential for accurate three-dimensional computation. However, calibration may not be accurate enough because of the tolerances in some of the intrinsic and extrinsic parameters estimated during the calibration process, along with fundamental imperfections in the manufacturing and assembly process itself. As a result, residual error remains after calibration, with no known methods to mitigate it. Residual error mitigation is presented in this work to address these shortcomings of multiview camera system calibration. It may be performed inline with a given calibration approach, or as a secondary, more application-specific processing step. Residual error mitigation modifies the original parameters estimated during the initial calibration process; the new, modified parameters are then used for triangulation and depth estimation of scene information. This approach also accounts for parameter tolerances that are too cumbersome, or impossible in practice, to measure in stereo and multiview camera production and calibration.
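As one simplified illustration of modifying a calibration parameter to absorb residual error (the constant-disparity-offset model, the grid search, and all names here are assumptions for the sketch, not the disclosed method):

```python
import numpy as np

def refine_disparity_offset(disparities, ref_depths, fx, baseline):
    """Hypothetical sketch: grid-search a constant disparity correction
    `o` so that depth = fx * baseline / (disparity + o) best matches
    the known depths of a few reference points; the corrected
    parameter is then used when triangulating the rest of the scene."""
    offsets = np.linspace(-5.0, 5.0, 2001)      # candidate corrections, pixels
    errs = [np.mean((fx * baseline / (disparities + o) - ref_depths) ** 2)
            for o in offsets]
    return float(offsets[int(np.argmin(errs))])
```

For example, with fx = 1000 px and a 0.1 m baseline, reference points at 2 m and 1 m should produce disparities of 50 px and 100 px; if the measured disparities are uniformly 1.5 px low, the search recovers a +1.5 px correction.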

Method and apparatus for immersive video formatting

Disclosed herein is an immersive video formatting method and apparatus for supporting motion parallax. The immersive video formatting method includes acquiring a basic video at a basic position, acquiring a multiple-view video at at least one position different from the basic position, acquiring at least one residual video plus depth (RVD) video using the basic video and the multiple-view video, and generating at least one of a packed video plus depth (PVD) video or predetermined metadata using the acquired basic video and the at least one RVD video.
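One common way a residual view can be formed is by keeping only what the basic view fails to predict; the sketch below illustrates that general idea (the thresholded-difference rule, the function name, and the assumption that a view synthesized from the basic video is available are all hypothetical, not the disclosed RVD construction):

```python
import numpy as np

def residual_view(extra_view, synthesized_from_basic, threshold=0.02):
    """Hypothetical sketch: keep only the pixels of an additional view
    that the basic view cannot predict (e.g. disocclusions); all other
    pixels are zeroed, so a packed (PVD) stream need only carry the
    residual plus its depth."""
    diff = np.abs(extra_view - synthesized_from_basic)
    mask = diff > threshold                     # pixels the basic view misses
    return np.where(mask, extra_view, 0.0), mask
```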

Method and System for Creating an Out-of-Body Experience

A computer-implemented method for inducing an Out of Body Experience (OBE) in a user through an augmented/virtual reality (AR/VR) system, the OBE including an exit state and a disembodiment state, the method comprising the steps of (a) changing the user viewpoint from a body-centered viewpoint to distanced viewpoints, thereby inducing an OBE exit state, and (b) showing the user his/her own body from the distanced viewpoints, thereby inducing an OBE disembodiment state.