Patent classifications
H04N13/271
CAPTURING AND ALIGNING PANORAMIC IMAGE AND DEPTH DATA
This application generally relates to capturing and aligning panoramic image and depth data. In one embodiment, a device is provided that comprises a housing and a plurality of cameras configured to capture two-dimensional images, wherein the cameras are arranged at different positions on the housing and have different azimuth orientations relative to a center point such that the cameras have a collective field-of-view spanning up to 360° horizontally. The device further comprises a plurality of depth detection components configured to capture depth data, wherein the depth detection components are arranged at different positions on the housing and have different azimuth orientations relative to the center point such that the depth detection components have the collective field-of-view spanning up to 360° horizontally.
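The geometric idea — spacing cameras at equal azimuth increments so their individual fields of view overlap into full horizontal coverage — can be sketched as follows. This is a minimal illustration; the function name, camera count, and per-camera field of view are assumptions, not taken from the claim:

```python
def camera_azimuths(num_cameras: int, fov_deg: float) -> list[float]:
    """Evenly space camera azimuths around a center point and check that
    the individual fields of view leave no horizontal gap."""
    spacing = 360.0 / num_cameras
    if fov_deg < spacing:
        raise ValueError("per-camera FOV leaves horizontal gaps")
    return [i * spacing for i in range(num_cameras)]

# Four cameras, each with a 100-degree horizontal FOV, collectively
# span 360 degrees with 10 degrees of overlap on each side.
print(camera_azimuths(4, 100.0))  # [0.0, 90.0, 180.0, 270.0]
```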
INPUT PARAMETER BASED IMAGE WAVES
A virtual wave creation system comprises an eyewear device that includes a frame, a temple connected to a lateral side of the frame, and a depth-capturing camera. Execution of programming by a processor configures the virtual wave creation system to generate, for each of multiple initial depth images, a respective warped wave image by applying, to the initial three-dimensional coordinates, a transformation function that is responsive to a selected input parameter. The virtual wave creation system creates a warped wave video including a sequence of the generated warped wave images. The virtual wave creation system presents, via an image display, the warped wave video.
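One way to read the transformation step: each point's initial 3D coordinates are displaced by a function of position whose amplitude acts as the selected input parameter. The sinusoid below is an illustrative assumption — the abstract does not specify the transformation function:

```python
import numpy as np

def warp_wave(coords: np.ndarray, amplitude: float,
              wavelength: float, phase: float) -> np.ndarray:
    """Displace z by a sinusoid of x; `amplitude` stands in for the
    selected input parameter driving the transformation function."""
    out = coords.copy()
    out[:, 2] += amplitude * np.sin(2 * np.pi * coords[:, 0] / wavelength + phase)
    return out

# Two sample points from an initial depth image (x, y, z in meters).
pts = np.array([[0.0, 0.0, 1.0], [0.25, 0.0, 1.0]])
warped = warp_wave(pts, amplitude=0.1, wavelength=1.0, phase=0.0)
# Stepping `phase` per frame yields the sequence of warped wave images.
```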
Method and system for correcting temperature error of depth camera
A method for correcting errors of a depth camera caused by temperature change includes: obtaining, by at least two depth cameras, a depth image of a current target, wherein two adjacent depth cameras of the at least two depth cameras have a common field of view; modeling a measurement error of the two adjacent depth cameras caused by a temperature change; and correcting the depth image using the modeled measurement error, wherein the corrected depth image has a minimum depth difference in the common field of view.
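The abstract does not give the error model. As a sketch, assume drift is linear in the deviation from a calibration temperature; the coefficient and reference temperature below are invented for illustration — in the method they would be fit from the two cameras' common field of view:

```python
def correct_depth(measured_mm: float, temp_c: float,
                  ref_temp_c: float = 25.0,
                  drift_mm_per_c: float = 0.4) -> float:
    """Subtract a linear temperature-drift term from the raw measurement."""
    error_mm = drift_mm_per_c * (temp_c - ref_temp_c)
    return measured_mm - error_mm

# A camera that warmed 10 C past calibration reads 4 mm long.
print(correct_depth(1004.0, 35.0))  # 1000.0
```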
System and method for concurrent odometry and mapping
An electronic device tracks its motion in an environment while building a three-dimensional visual representation of the environment that is used to correct drift in the tracked motion. A motion tracking module estimates poses of the electronic device based on feature descriptors corresponding to the visual appearance of spatial features of objects in the environment. A mapping module builds a three-dimensional visual representation of the environment based on a stored plurality of maps, and feature descriptors and estimated device poses received from the motion tracking module. The mapping module provides the three-dimensional visual representation of the environment to a localization module, which identifies correspondences between stored and observed feature descriptors. The localization module performs a loop closure by minimizing the discrepancies between matching feature descriptors to compute a localized pose. The localized pose corrects drift in the estimated pose generated by the motion tracking module.
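The correspondence step the localization module performs can be reduced to its simplest form: pair each observed feature descriptor with its nearest stored descriptor. Greedy nearest neighbour is a deliberate simplification here; practical systems add ratio tests and outlier rejection before the loop-closure optimization:

```python
import numpy as np

def match_descriptors(stored: np.ndarray,
                      observed: np.ndarray) -> list[tuple[int, int]]:
    """Pair each observed descriptor (row) with the index of the
    nearest stored-map descriptor by Euclidean distance."""
    pairs = []
    for i, obs in enumerate(observed):
        dists = np.linalg.norm(stored - obs, axis=1)
        pairs.append((i, int(np.argmin(dists))))
    return pairs

stored = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
observed = np.array([[0.9, 1.1], [2.1, -0.1]])
print(match_descriptors(stored, observed))  # [(0, 1), (1, 2)]
```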
Measurement device and measurement method
A measurement device includes: an operation unit that receives an operation of a user; an acquisition unit that acquires depth information indicating a depth image and color information indicating a color image of an object; a controller that calculates, based on at least one of the depth information and the color information, a first dimension of the object; and a display unit that displays a frame image showing a contour shape of the object superimposed on the color image, the contour shape being based on the first dimension. The operation unit receives a selection of an adjustment target plane and an input of a change amount by the user. The controller calculates a second dimension of the object when the adjustment target plane is moved in a normal direction based on the change amount, and changes the frame image to show a contour shape based on the second dimension.
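The adjustment step amounts to shifting one face of the measured bounding box along its normal, so only the dimension perpendicular to that face changes by the entered amount. The names and the axis-indexed box representation below are assumptions for illustration:

```python
def adjust_box(dims_mm: list[float], plane_axis: int,
               change_mm: float) -> list[float]:
    """Move the adjustment-target plane along its normal: the dimension
    perpendicular to that plane changes; the others are unaffected."""
    second = list(dims_mm)
    second[plane_axis] += change_mm
    return second

first = [300.0, 200.0, 150.0]          # first dimensions (W, D, H) in mm
print(adjust_box(first, 2, -12.5))     # [300.0, 200.0, 137.5]
```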
Method of generating volume hologram using point cloud and mesh
Disclosed is a method of generating a volume hologram using a point cloud and a mesh, in which a weight is given to a brightness of a light source according to a direction of a light in order to record a hologram of better quality. The method includes: (a) acquiring multi-view depth and color images; (b) generating point cloud data of a three-dimensional object from the acquired multi-view depth and color images; (c) generating mesh data of the three-dimensional object from the point cloud data of the three-dimensional object; (d) calculating a normal vector of each mesh from the mesh data of the three-dimensional object; (e) extracting three-dimensional data at a user viewpoint from the mesh data of the three-dimensional object by using the normal vector of the mesh; and (f) generating hologram data from three-dimensional data at the user viewpoint.
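Step (d) and the brightness weighting can be sketched directly: a face normal comes from the cross product of two triangle edges, and a Lambertian-style weight scales brightness by how squarely the face looks toward the light. The weighting function is an assumption — the abstract only says the weight depends on the light direction:

```python
import numpy as np

def face_normal(v0, v1, v2) -> np.ndarray:
    """Unit normal of a triangle mesh face via the edge cross product."""
    n = np.cross(np.subtract(v1, v0), np.subtract(v2, v0))
    return n / np.linalg.norm(n)

def brightness_weight(normal, light_dir) -> float:
    """Lambertian-style weight: faces turned toward the light source
    contribute more brightness to the recorded hologram."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    return max(0.0, float(np.dot(normal, l)))

n = face_normal([0, 0, 0], [1, 0, 0], [0, 1, 0])   # normal is +z
w = brightness_weight(n, [0, 0, 2])                # face-on: weight 1.0
```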
SYSTEM AND METHOD FOR REFLECTION REMOVAL USING DUAL-PIXEL SENSOR
A system and method for reflection removal of an image from a dual-pixel sensor. The image including a left view and a right view. The method including: determining a first gradient of the left view and a second gradient of the right view; determining disparity between the first gradient and the second gradient using a sum of squared differences (SSD); determining a confidence value at each pixel using the SSD; determining a weighted gradient map using the confidence values; minimizing a cost function to estimate the background layer, the cost function including the weighted gradient map, wherein the image includes the background layer added to a reflection layer; and outputting at least one of the background layer and the reflection layer.
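A one-dimensional sketch of the SSD disparity search and per-pixel confidence follows. How the confidence is derived from the SSD is an assumption here — the ratio of best to runner-up cost — chosen only to illustrate the idea of trusting sharply-peaked matches:

```python
import numpy as np

def ssd_disparity(left_grad: np.ndarray, right_grad: np.ndarray,
                  max_shift: int = 3) -> tuple[int, float]:
    """Try integer shifts of the right-view gradient against the left-view
    gradient; return the lowest-SSD shift and a confidence from how far
    the best cost undercuts the runner-up."""
    ssds = np.array([float(np.sum((left_grad - np.roll(right_grad, d)) ** 2))
                     for d in range(-max_shift, max_shift + 1)])
    best = int(np.argmin(ssds))
    runner_up = float(np.partition(ssds, 1)[1])
    confidence = 1.0 - ssds[best] / (runner_up + 1e-9)
    return best - max_shift, confidence

left = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
right = np.roll(left, -2)              # right view displaced two pixels
disparity, confidence = ssd_disparity(left, right)
print(disparity)  # 2
```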