Patent classifications
H04N13/271
STEREO IMAGE GENERATING METHOD AND ELECTRONIC APPARATUS UTILIZING THE METHOD
A stereo image generating method and an electronic apparatus utilizing the method are provided. The electronic apparatus includes a first camera and a second camera capable of capturing stereo images, and the resolution of the first camera is higher than that of the second camera. In the method, a first image is captured by the first camera, and a second image is captured by the second camera. The second image is upscaled to the resolution of the first camera, and a depth map is generated with use of the first image and the upscaled second image. With reference to the depth map, the first image is re-projected to reconstruct a reference image of the second image. An occlusion region in the reference image is detected and compensated by using the upscaled second image. A stereo image including the first image and the compensated reference image is generated.
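The following Python sketch illustrates one way the described steps could fit together, assuming rectified horizontal stereo; the resizing, block-matching, and hole-filling choices are illustrative assumptions, not the claimed implementation.

```python
import numpy as np
import cv2  # used only for resizing, color conversion, and block matching

def generate_stereo_pair(first_img, second_img):
    """Hypothetical sketch: first_img from the higher-resolution camera,
    second_img from the lower-resolution camera."""
    h, w = first_img.shape[:2]

    # 1. Upscale the second image to the first camera's resolution.
    second_up = cv2.resize(second_img, (w, h), interpolation=cv2.INTER_LINEAR)

    # 2. Estimate a depth/disparity map from the first image and the upscaled second image.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    gray1 = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(second_up, cv2.COLOR_BGR2GRAY)
    disparity = matcher.compute(gray1, gray2).astype(np.float32) / 16.0

    # 3. Re-project the first image along the disparity to reconstruct a
    #    reference image at the second camera's viewpoint.
    reference = np.zeros_like(first_img)
    filled = np.zeros((h, w), dtype=bool)
    xs = np.arange(w)
    for y in range(h):
        tx = np.clip((xs - disparity[y]).astype(int), 0, w - 1)
        valid = disparity[y] > 0
        reference[y, tx[valid]] = first_img[y, xs[valid]]
        filled[y, tx[valid]] = True

    # 4. Occlusion regions are pixels never written by the re-projection;
    #    compensate them with the upscaled second image.
    reference[~filled] = second_up[~filled]

    # 5. The stereo image consists of the first image and the compensated reference image.
    return first_img, reference
```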
COMBINING LIGHT-FIELD DATA WITH ACTIVE DEPTH DATA FOR DEPTH MAP GENERATION
Depths of one or more objects in a scene may be measured with enhanced accuracy through the use of a light-field camera and a depth sensor. The light-field camera may capture a light-field image of the scene. The depth sensor may capture depth sensor data of the scene. Light-field depth data may be extracted from the light-field image and used, in combination with the sensor depth data, to generate a depth map indicative of distance between the light-field camera and one or more objects in the scene. The depth sensor may be an active depth sensor that transmits electromagnetic energy toward the scene; the electromagnetic energy may be reflected off of the scene and detected by the active depth sensor. The active depth sensor may have a 360° field of view; accordingly, one or more mirrors may be used to direct the electromagnetic energy between the active depth sensor and the scene.
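A minimal sketch of the fusion idea, assuming per-pixel confidence maps for both sources (the names and the confidence-weighted blend are assumptions; the abstract does not fix the combination rule):

```python
import numpy as np

def fuse_depth(lightfield_depth, sensor_depth, sensor_conf, lf_conf):
    """Blend light-field depth with active-sensor depth per pixel, preferring
    whichever source is more confident; fall back to the light-field estimate
    where the active sensor returned nothing."""
    sensor_valid = np.isfinite(sensor_depth) & (sensor_conf > 0)
    sensor_weight = np.where(sensor_valid, sensor_conf, 0.0)
    total = np.maximum(sensor_weight + lf_conf, 1e-6)
    fused = (sensor_weight * np.nan_to_num(sensor_depth)
             + lf_conf * lightfield_depth) / total
    return fused
```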
Critical alignment of parallax images for autostereoscopic display
A method is provided for generating an autostereoscopic display. The method includes acquiring a first parallax image and at least one other parallax image. At least a portion of the first parallax image may be aligned with a corresponding portion of the at least one other parallax image. Alternating views of the first parallax image and the at least one other parallax image may be displayed.
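As an illustration of the alignment step, the sketch below finds the horizontal offset that best matches a portion of the first parallax image against the other parallax image; the exhaustive SSD search is an assumption, since the abstract does not specify the alignment metric.

```python
import numpy as np

def best_horizontal_shift(patch_a, img_b, y, x, max_shift=16):
    """Return the shift (in pixels) that best aligns a portion of the first
    parallax image, located at (y, x), with the other parallax image."""
    h, w = patch_a.shape[:2]
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if x + s < 0 or x + s + w > img_b.shape[1]:
            continue
        candidate = img_b[y:y + h, x + s:x + s + w]
        err = np.sum((patch_a.astype(np.float32) - candidate.astype(np.float32)) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best
```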
Hand-held electronic apparatus, image capturing apparatus and method for obtaining depth information
A hand-held electronic apparatus, an image capturing apparatus and a method for obtaining depth information are provided. The image capturing apparatus includes a time-of-flight (TOF) image capturer, a TOF controller, a main image capturer, a sub image capturer, and a controller. The TOF image capturer calculates a TOF depth map according to a TOF image, defines an effective region and a non-effective region according to the TOF depth map, and obtains a first depth information set of the effective region. The main and sub image capturers capture a first image and a second image, respectively. The controller obtains a second depth information set of the non-effective region by comparing the first and second images, and generates an overall depth map by combining the first depth information set and the second depth information set.
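A compact sketch of how the two depth-information sets could be combined, assuming the stereo depth in the non-effective region is derived from disparity via depth = f * B / d (the thresholding and the disparity-to-depth conversion are illustrative assumptions):

```python
import numpy as np

def overall_depth_map(tof_depth, disparity, focal_px, baseline_m, min_tof=0.0):
    """Combine the TOF depth (effective region) with stereo depth computed by
    comparing the main and sub images (non-effective region)."""
    stereo_depth = np.where(disparity > 0,
                            focal_px * baseline_m / np.maximum(disparity, 1e-6),
                            np.inf)
    effective = tof_depth > min_tof  # pixels with a usable TOF return
    # First depth-information set (TOF) where effective, second set (stereo) elsewhere.
    return np.where(effective, tof_depth, stereo_depth)
```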
METHOD AND SYSTEM FOR EXTRACTING DENSE DISPARITY MAP BASED ON MULTI-SENSOR FUSION, AND INTELLIGENT TERMINAL
A method and a system for extracting a dense disparity map based on multi-sensor fusion are provided. The method includes: obtaining a left-eye image and a right-eye image in a same road scenario, and point cloud information about the road scenario; generating an initial cost volume map set in accordance with the left-eye image, the right-eye image and the point cloud information; performing multidirectional cost aggregation in accordance with the point cloud information and the initial cost volume map set, and creating an energy function in accordance with the cost aggregation; and solving an optimum disparity for each pixel in the left-eye image in accordance with the energy function, so as to generate the dense disparity map.
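The sketch below walks through the described pipeline in miniature: an initial cost volume from the stereo pair, a point-cloud prior folded into the costs, a single direction of cost aggregation with smoothness penalties standing in for the multidirectional aggregation and energy function, and a winner-takes-all disparity per pixel. The penalty values and the absolute-difference matching cost are assumptions.

```python
import numpy as np

def dense_disparity(left, right, lidar_disp, max_disp=64, p1=8, p2=32):
    """left/right: grayscale images; lidar_disp: sparse disparities projected
    from the point cloud (0 where no point falls)."""
    h, w = left.shape
    # Initial cost volume: absolute intensity difference per candidate disparity.
    cost = np.full((h, w, max_disp), 255.0, dtype=np.float32)
    for d in range(max_disp):
        cost[:, d:, d] = np.abs(left[:, d:].astype(np.float32) -
                                right[:, :w - d].astype(np.float32))
    # Fuse the point-cloud information: lower the cost at the LiDAR disparity.
    ys, xs = np.nonzero(lidar_disp)
    cost[ys, xs, np.clip(lidar_disp[ys, xs], 0, max_disp - 1)] = 0.0

    # Left-to-right cost aggregation with smoothness penalties (one of the
    # multiple directions used by the method).
    agg = cost.copy()
    for x in range(1, w):
        prev = agg[:, x - 1, :]
        prev_min = prev.min(axis=1, keepdims=True)
        shift_m = np.pad(prev, ((0, 0), (1, 0)), constant_values=np.inf)[:, :-1] + p1
        shift_p = np.pad(prev, ((0, 0), (0, 1)), constant_values=np.inf)[:, 1:] + p1
        jump = prev_min + p2
        agg[:, x, :] += np.minimum(np.minimum(prev, shift_m),
                                   np.minimum(shift_p, jump)) - prev_min

    # Optimum disparity per pixel: the one minimizing the aggregated energy.
    return np.argmin(agg, axis=2)
```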
Three-Dimensional Image Device
A three-dimensional image device is provided. The three-dimensional image device includes a depth processor, a structured-light depth camera, and a TOF depth camera. The depth processor includes at least two input ports configured to receive first images, an input switch coupled to the at least two input ports, and a data processing engine coupled to the input switch. The at least two input ports include a first input port and a second input port, the first input port is coupled to the structured-light depth camera, and the second input port is coupled to the TOF depth camera.
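Purely as an illustration of the described architecture, the following sketch models a depth processor whose input switch routes frames from either input port to the data processing engine; all names and the callable-based wiring are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class DepthProcessor:
    engine: Callable[[bytes], bytes]        # data processing engine
    ports: Dict[str, Callable[[], bytes]]   # input ports keyed by camera type

    def process(self, selected_port: str) -> bytes:
        # The input switch forwards a frame from the selected port to the engine.
        frame = self.ports[selected_port]()
        return self.engine(frame)

# Hypothetical usage: the first port wired to the structured-light depth camera,
# the second port to the TOF depth camera.
# processor = DepthProcessor(engine=compute_depth,
#                            ports={"structured_light": sl_camera.read,
#                                   "tof": tof_camera.read})
# depth = processor.process("tof")
```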
ENCLOSED MULTI-VIEW VISUAL MEDIA REPRESENTATION
Images may be captured at an image capture device mounted on an image capture device gimbal capable of rotating the image capture device around a nodal point in one or more dimensions. Each of the plurality of images may be captured from a respective rotational position. The images may be captured by a designated camera that is not located at the nodal point in one or more of the respective rotational positions. A designated three-dimensional point cloud may be determined based on the plurality of images. The designated three-dimensional point cloud may include a plurality of points each having a respective position in a virtual three-dimensional space.
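A small sketch of the geometric point that makes the point cloud recoverable: because the camera is offset from the nodal point, each rotational position yields a distinct camera center. The yaw-only rotation and the offset value are illustrative assumptions.

```python
import numpy as np

def camera_centers(yaw_angles_deg, offset_from_nodal_point):
    """Camera centers for each rotational position of a gimbal that rotates
    about a nodal point while the camera sits at a fixed offset from it."""
    offset = np.asarray(offset_from_nodal_point, dtype=float)  # metres, gimbal frame
    centers = []
    for yaw in np.radians(yaw_angles_deg):
        c, s = np.cos(yaw), np.sin(yaw)
        rotation = np.array([[c, -s, 0.0],
                             [s,  c, 0.0],
                             [0.0, 0.0, 1.0]])
        centers.append(rotation @ offset)  # camera center for this position
    return np.stack(centers)

# e.g. camera_centers([0, 45, 90], offset_from_nodal_point=[0.05, 0.0, 0.0])
```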
IMAGE PICKUP DEVICE AND IMAGE PICKUP METHOD
There is provided an image pickup device and an image pickup method for estimating the depth of an image having a repetitive pattern with high accuracy. The peripheral cameras are arranged at baseline lengths based on reciprocals of different prime numbers, with the position of a reference camera, which serves as the reference when images are captured from different viewpoints, taken as the origin. The present disclosure can be applied, for example, to a light-field camera that includes the reference camera and the plurality of peripheral cameras, generates a parallax image from the images of the plural viewpoints, and generates a refocus image by using the images from the plural viewpoints and the parallax image.
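The prime-reciprocal layout is easy to illustrate; the unit baseline below is an assumed scale factor. Because the candidate matches produced by the different baselines only coincide at the true depth, repetitive patterns become easier to disambiguate.

```python
# Illustrative peripheral-camera layout: baselines proportional to reciprocals
# of distinct primes, measured from the reference camera.
PRIMES = [2, 3, 5, 7, 11]
UNIT_BASELINE_MM = 30.0  # assumed scale factor

baselines_mm = [UNIT_BASELINE_MM / p for p in PRIMES]
# -> [15.0, 10.0, 6.0, ~4.29, ~2.73] mm offsets from the reference camera
```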
DIRTY LENS IMAGE CORRECTION
Systems and methods for correcting images that include artifacts due to dirty camera lenses of an electronic device are disclosed. Correction of images by the systems and methods includes obtaining a first raw pixel image of a scene captured with a first camera, obtaining a second raw pixel image of the scene captured with a second camera separate from the first camera in a camera baseline direction, rectifying the first and second raw pixel images to create respective first and second rectified pixel images, determining disparity correspondence between corresponding image pixel pairs of the first and second rectified images in the camera baseline direction, mapping the first and second rectified images into the same domain using the determined disparity, detecting image artifact regions within each domain-mapped image by comparing corresponding regions of the domain-mapped images, determining correction factors for each detected image artifact region, and correcting the rectified first and second images by applying the determined correction factors.
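The sketch below follows the later stages of that pipeline for single-channel, already-rectified images with a precomputed disparity map; treating an artifact as a block-wise brightness mismatch and the correction factor as a per-block gain are illustrative assumptions.

```python
import numpy as np

def correct_dirty_lens(rect1, rect2, disparity, block=16, thresh=0.15):
    """Map rect2 into rect1's domain using the disparity, compare corresponding
    blocks to detect artifact regions, and apply per-block correction factors."""
    h, w = rect1.shape
    xs = np.arange(w)
    mapped = np.zeros_like(rect2, dtype=np.float32)
    for y in range(h):
        src_x = np.clip((xs - disparity[y]).astype(int), 0, w - 1)
        mapped[y] = rect2[y, src_x]  # second image in the first image's domain

    corrected = rect1.astype(np.float32).copy()
    for y0 in range(0, h, block):
        for x0 in range(0, w, block):
            a = corrected[y0:y0 + block, x0:x0 + block]
            b = mapped[y0:y0 + block, x0:x0 + block]
            ma, mb = a.mean() + 1e-6, b.mean() + 1e-6
            # Artifact region: corresponding blocks disagree strongly in brightness.
            if abs(ma - mb) / max(ma, mb) > thresh:
                a *= mb / ma  # correction factor applied in place
    return np.clip(corrected, 0, 255).astype(rect1.dtype)
```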