H04N13/271

Capturing and aligning panoramic image and depth data

This application generally relates to capturing and aligning panoramic image and depth data. In one embodiment, a device is provided that comprises a housing and a plurality of cameras configured to capture two-dimensional images, wherein the cameras are arranged at different positions on the housing and have different azimuth orientations relative to a center point such that the cameras have a collective field-of-view spanning up to 360° horizontally. The device further comprises a plurality of depth detection components configured to capture depth data, wherein the depth detection components are arranged at different positions on the housing and have different azimuth orientations relative to the center point such that the depth detection components have the collective field-of-view spanning up to 360° horizontally.
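The coverage claim in this abstract reduces to a circular interval-cover test over the cameras' azimuths. As a minimal sketch (not part of the patent; the function name and the per-camera horizontal FOV parameter are hypothetical), one can check whether cameras at given azimuth orientations collectively span 360° horizontally:

```python
def covers_full_circle(azimuths_deg, fov_deg):
    """Return True if cameras at the given azimuths (degrees), each with
    horizontal field-of-view fov_deg, collectively cover 360 degrees.

    Hypothetical sketch: treat each camera as an arc on the circle,
    sort the arcs by start angle, and walk them looking for gaps.
    """
    half = fov_deg / 2.0
    # Each camera covers [azimuth - half, azimuth + half]; normalise starts.
    arcs = sorted(((a - half) % 360.0, fov_deg) for a in azimuths_deg)
    start0, width0 = arcs[0]
    reach = start0 + width0
    for start, width in arcs[1:]:
        if start > reach:          # uncovered gap between adjacent arcs
            return False
        reach = max(reach, start + width)
    # Coverage must wrap all the way around back to the first arc's start.
    return reach >= start0 + 360.0
```

For example, four cameras at 0°, 90°, 180°, and 270° with 100° lenses cover the full circle, while 80° lenses leave gaps between neighbours.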

IMAGING SYSTEM INCLUDING LIGHT SOURCE, IMAGE SENSOR, AND DOUBLE-BAND PASS FILTER
20170347086 · 2017-11-30 ·

An imaging system includes a light source that, in operation, emits light containing near-infrared light in a first wavelength region; an image sensor; and a double-band pass filter that transmits visible light in at least part of the visible region together with the near-infrared light in the first wavelength region. The image sensor includes light detection cells, a first filter that selectively transmits the near-infrared light in the first wavelength region, second to fourth filters that selectively transmit light in second to fourth wavelength regions, respectively, which are contained in the visible light, and an infrared absorption filter. The infrared absorption filter faces the second to fourth filters and absorbs the near-infrared light in the first wavelength region.

METHOD FOR MEASURING DEPTH OF FIELD AND IMAGE PICKUP DEVICE USING SAME
20170347014 · 2017-11-30 ·

A method for measuring a depth of field is provided. Initial depth-of-field data is acquired through two optical lens modules, and additional depth-of-field data is obtained from the phase-detection pixel groups of the image captured by one of the optical lens modules. Consequently, even if objects in the scene are arranged along the same direction as the baseline of the two optical lens modules, the error in the initial depth-of-field data can be compensated. Moreover, an image pickup device using the method is provided.
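The compensation idea can be read as fusing two depth sources: stereo matching along the module baseline is unreliable for structures parallel to that baseline, so a phase-detection estimate is substituted there. A hypothetical sketch, assuming a horizontal baseline and a simple gradient-based reliability test (the function name, threshold, and reliability rule are illustrative, not the patent's method):

```python
import numpy as np

def fuse_depth(stereo_depth, pdaf_depth, image, grad_threshold=5.0):
    """Compensate stereo depth with phase-detection (PDAF) depth.

    Hypothetical sketch: stereo matching along a horizontal baseline is
    unreliable where the image lacks horizontal intensity gradients
    (structures parallel to the baseline), so the PDAF estimate is used
    for those pixels instead.
    """
    # Horizontal gradient magnitude; `prepend` keeps the output shape.
    gx = np.abs(np.diff(image.astype(float), axis=1, prepend=image[:, :1]))
    reliable = gx > grad_threshold
    return np.where(reliable, stereo_depth, pdaf_depth)
```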

MULTISENSORY DATA FUSION SYSTEM AND METHOD FOR AUTONOMOUS ROBOTIC OPERATION

A robotic system includes one or more optical sensors configured to separately obtain two-dimensional (2D) image data and three-dimensional (3D) image data of a brake lever of a vehicle, a manipulator arm configured to grasp the brake lever of the vehicle, and a controller configured to compare the 2D image data with the 3D image data to identify one or more of a location or a pose of the brake lever. The controller is configured to control the manipulator arm to move toward, grasp, and actuate the brake lever based on the one or more of the location or the pose of the brake lever.

Method for image processing of image data for varying image quality levels on a two-dimensional display wall

A scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall. Image quality levels for portions of the display wall in the image data are determined, and pixels associated with the display wall in the image data are adjusted to the image quality levels.
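One way to realize the final step, adjusting display-wall pixels to a lower quality level while leaving actor pixels untouched, is a matte-masked quality reduction. A minimal sketch (the block-averaging mosaic used here as the "quality level" stand-in is an assumption, not the patent's method):

```python
import numpy as np

def adjust_wall_quality(image, wall_matte, block=4):
    """Reduce image quality only where the matte marks display-wall pixels.

    Hypothetical sketch: emulate a lower quality level by block-averaging
    (mosaicking) the wall region, tile by tile, while leaving the
    live-actor region untouched. Works on grayscale (H, W) or color
    (H, W, C) arrays with a 2D boolean matte.
    """
    out = image.astype(float).copy()
    h, w = image.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = (slice(y, y + block), slice(x, x + block))
            mask = wall_matte[tile]
            if mask.any():
                # Average only the wall pixels within this tile.
                mean = out[tile][mask].mean(axis=0)
                out[tile][mask] = mean
    return out
```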

Method for determining a visual effect of an ophthalmic lens

A method implemented by computer means for determining a visual effect of an ophthalmic lens, the method comprising:
- an optical data receiving step (S1), during which optical data relating to the optical function of an ophthalmic lens is received,
- an acquisition step (S2), during which at least one image of the visual environment of a user is acquired,
- a depth map determining step (S3), during which a depth map of the acquired image of the visual environment of the user is determined,
- a visual effect determining step (S4), during which, based on the depth map and the optical data, a visual effect that would be introduced by the ophthalmic lens if the visual environment were seen through the ophthalmic lens is determined.
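Step S4 can be illustrated with a per-pixel defocus computed from the depth map and the lens power. The vergence model below is a deliberately simplified assumption for illustration, not the patent's method, and the function name and parameters are hypothetical:

```python
import numpy as np

def defocus_map(depth_m, lens_power_dpt, accommodation_dpt=0.0):
    """Per-pixel defocus, in dioptres, that a lens of the given power
    would introduce for a scene described by a depth map.

    Simplified vergence model (an assumption): residual defocus is
    |1/d + P - A|, where d is the pixel's depth in metres, P the lens
    power, and A the eye's accommodation, all in dioptres.
    """
    object_vergence = 1.0 / np.maximum(depth_m, 1e-6)  # dioptres
    return np.abs(object_vergence + lens_power_dpt - accommodation_dpt)
```

A blur-rendering step would then map each pixel's defocus value to a blur kernel size to visualize the effect.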

Stereo camera and automatic range finding method for measuring a distance between stereo camera and reference plane
09832455 · 2017-11-28 ·

An automatic range finding method is applied to measure a distance between a stereo camera and a reference plane. The automatic range finding method includes acquiring a disparity-map video with the stereo camera facing the reference plane, analyzing the disparity-map video to generate a depth histogram, selecting from the depth histogram a pixel group whose count exceeds a threshold, calculating the distance between the stereo camera and the reference plane by weight transformation of the pixel group, and applying a coarse-to-fine computation to the disparity-map video.
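The histogram-and-weighting steps can be sketched as below. This is a hypothetical reconstruction: the pinhole disparity-to-depth conversion, bin count, and threshold are illustrative parameters, and the patent's "weight transformation" is approximated here by a count-weighted average of the selected histogram bins:

```python
import numpy as np

def estimate_plane_distance(disparity_frames, num_bins=64, threshold=500,
                            focal_length=700.0, baseline=0.1):
    """Estimate camera-to-plane distance from a disparity-map video.

    Hypothetical sketch: accumulate a depth histogram over all frames,
    keep the bins whose pixel count exceeds `threshold` (the dominant
    pixel group, assumed to be the reference plane), and return the
    count-weighted average of those bins' depths.
    """
    all_depths = []
    for disp in disparity_frames:
        valid = disp > 0  # zero disparity has no depth
        # Pinhole stereo model: depth = f * b / disparity.
        all_depths.append(focal_length * baseline / disp[valid])
    depths = np.concatenate(all_depths)

    counts, edges = np.histogram(depths, bins=num_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])

    group = counts > threshold
    if not group.any():
        group = counts == counts.max()  # fall back to the tallest bin

    # Weight each selected bin's depth by its pixel count.
    return float(np.average(centers[group], weights=counts[group]))
```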

Multiscale depth estimation using depth from defocus
09832456 · 2017-11-28 ·

To extend the working range of depth from defocus (DFD), particularly on small depth-of-field (DoF) images, DFD is performed on an image pair at multiple spatial resolutions and the depth estimates are then combined. Specific implementations construct a Gaussian pyramid for each image of an image pair, perform DFD on the corresponding pair of images at each level of the two image pyramids, convert DFD depth scores to physical depth values using calibration curves generated for each level, and combine the depth values from all levels in a coarse-to-fine manner to obtain a final depth map that covers the entire depth range of the scene.
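The pyramid construction and coarse-to-fine combination can be sketched as below. This is an illustrative reconstruction, not the patented implementation: the DFD scoring and per-level calibration curves are omitted, and per-level depth maps with validity masks are assumed as inputs to the merge:

```python
import numpy as np

def gaussian_pyramid(img, levels):
    """Build a simple Gaussian pyramid: 3-tap binomial blur + 2x decimation."""
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0
    pyr = [img]
    for _ in range(levels - 1):
        cur = pyr[-1]
        # Separable blur along rows then columns, then keep every other pixel.
        blurred = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode="same"), 1, cur)
        blurred = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
        pyr.append(blurred[::2, ::2])
    return pyr

def combine_coarse_to_fine(depth_per_level, valid_per_level):
    """Merge per-level depth maps (index 0 = finest): start from the
    coarsest level, upsample its result, and overwrite with finer-level
    estimates wherever the finer level is valid (within working range)."""
    depth = depth_per_level[-1]
    for lvl in range(len(depth_per_level) - 2, -1, -1):
        fine, valid = depth_per_level[lvl], valid_per_level[lvl]
        # Nearest-neighbour upsample of the coarser result; crop odd sizes.
        up = np.kron(depth, np.ones((2, 2)))[:fine.shape[0], :fine.shape[1]]
        depth = np.where(valid, fine, up)
    return depth
```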

IMAGE CAPTURING APPARATUS, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE CAPTURING APPARATUS CALIBRATION METHOD, ROBOT APPARATUS, METHOD FOR MANUFACTURING ARTICLE USING ROBOT APPARATUS, AND RECORDING MEDIUM
20230179732 · 2023-06-08 ·

An image capturing apparatus including a lens and a processing unit, wherein the lens includes a first region through which a first light ray passes and a second region through which a second light ray passes, wherein the first region and the second region are arranged in a predetermined direction, and wherein the processing unit sets, as a degree of freedom, a component of the predetermined direction in a first relative positional relationship between a predetermined position in the first region and a predetermined position in the second region.