Patent classifications
G01B11/2545
Three-dimensional sensor system and three-dimensional data acquisition method
A three-dimensional sensor system includes three cameras, a projector, and a processor. The projector simultaneously projects at least two linear patterns on the surface of an object. The three cameras synchronously capture a first two-dimensional (2D) image, a second 2D image, and a third 2D image of the object, respectively. The processor extracts a first set and a second set of 2D lines from the at least two linear patterns on the first 2D image and the second 2D image, respectively; generates a candidate set of three-dimensional (3D) points from the first set and the second set of 2D lines; and selects, from the candidate set of 3D points, an authentic set of 3D points that matches a projection contour line of the object surface by: performing data verification on the candidate set of 3D points using the third 2D image, and filtering the candidate set of 3D points.
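The verification step in this abstract lends itself to a short illustration. The sketch below (Python with NumPy; the projection matrices P1, P2, P3 and helper names are hypothetical, not from the patent) triangulates candidate 3D points from matched 2D points on the extracted lines in the first two images, then keeps only those candidates whose reprojection lands near an observed pattern point in the third image:

import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one point seen in two cameras.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def reproject(P, X):
    # Project a 3D point with a 3x4 projection matrix.
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def filter_candidates(candidates, P3, points_third_image, tol=2.0):
    # Data verification: keep candidate 3D points whose reprojection
    # lies close to an observed pattern point in the third image.
    kept = []
    for X in candidates:
        x3 = reproject(P3, X)
        d = np.min(np.linalg.norm(points_third_image - x3, axis=1))
        if d < tol:
            kept.append(X)
    return np.array(kept)

In practice the distance test would be run against the extracted 2D lines rather than isolated points, but the reject-by-reprojection idea is the same.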
Systems, methods and devices for generating depth image
The present disclosure describes a system, a method, and a device for generating a depth image. The system includes an illumination source, an optical system, a control device, and at least one set of a dynamic aperture and an image sensor, wherein the dynamic aperture is configured to dynamically change a light transmittance, an exposure start time, and an exposure end time under the control of the control device. The control device is configured to acquire a first photo and a second photo, and to generate a depth image of the target scene according to the first photo, the first shooting configuration information, the second photo, and the second shooting configuration information.
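The abstract does not spell out how the two photos and their shooting configurations are combined. One plausible reading, given the controllable exposure start and end times, is a range-gated model in which the split of returned light between the two exposure windows encodes distance. The sketch below assumes exactly that simple linear gating model; it is not the patent's stated algorithm, and all names are illustrative:

import numpy as np

C = 3.0e8  # speed of light, m/s

def gated_depth(img_a, img_b, gate_start_s, gate_len_s):
    # Per-pixel depth from two range-gated exposures, assuming the
    # returning pulse is split between the two exposure windows in
    # proportion to its arrival time, so img_b / (img_a + img_b)
    # encodes the round-trip delay.
    total = img_a + img_b + 1e-12              # avoid division by zero
    frac = np.clip(img_b / total, 0.0, 1.0)
    delay = gate_start_s + frac * gate_len_s   # round-trip time per pixel
    return 0.5 * C * delay                     # one-way distance in metres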
FIELD CALIBRATION OF STEREO CAMERAS WITH A PROJECTOR
Calibration in the field is described for stereo and other depth camera configurations using a projector. One example includes imaging a first feature and a second feature in a first camera of the camera system, wherein the distance from the first camera to the projector is known; imaging the first and the second features in a second camera of the camera system, wherein the distance from the second camera to the projector is known; determining a first disparity between the first camera and the second camera for the first feature; determining a second disparity between the first camera and the second camera for the second feature; and determining an epipolar alignment error of the first camera using the first and the second disparities.
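One simple way to turn disparities measured at features of known distance into an alignment error figure is to model the error as a constant disparity bias added to the ideal triangulation relation. The sketch below assumes that model (the patent may parameterize the error differently); the focal length, baseline, and example numbers are hypothetical:

import numpy as np

def disparity_bias(focal_px, baseline_m, depths_m, measured_disp_px):
    # Estimate a constant disparity bias (one simple model of a
    # rectification / epipolar alignment error) from features whose
    # depth is known, e.g. from the known camera-to-projector geometry.
    # Model: measured = f * B / Z + bias, solved by least squares.
    depths = np.asarray(depths_m, dtype=float)
    measured = np.asarray(measured_disp_px, dtype=float)
    expected = focal_px * baseline_m / depths
    return float(np.mean(measured - expected))

# Two features at different known depths, as in the abstract:
bias = disparity_bias(600.0, 0.05, [0.8, 2.0], [38.1, 15.4])

Using two features at clearly different depths helps separate a constant bias from range-dependent errors; here only the bias is solved for.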
Distance measuring device, imaging apparatus, moving device, robot device, and recording medium
A distance measuring device includes an imaging optical system having an optical filter and an imaging element in which a plurality of pixel portions is arranged. The optical filter is divided into three regions: the first region has a first spectral transmittance characteristic, and the second region and the third region have a second spectral transmittance characteristic that transmits light of a longer wavelength than the first spectral transmittance characteristic. A first pixel portion of the imaging element includes a first photoelectric conversion unit and receives light that has passed through the first region. A second pixel portion of the imaging element includes second and third photoelectric conversion units, and receives light that has passed through the second region and the third region, respectively. A distance information acquiring unit acquires distance information corresponding to the parallax of image data based on the output signals from the second and third photoelectric conversion units.
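The parallax between the signals of the second and third photoelectric conversion units can be estimated with ordinary block matching and converted to distance by triangulation. A minimal sketch, assuming a rectified pair of sub-aperture images and an effective baseline supplied by the caller (the names and the simple SAD search are mine, not the patent's):

import numpy as np

def patch_disparity(img_a, img_b, row, col, half=8, search=16):
    # Horizontal shift of a small patch between the two sub-aperture
    # images, found by minimizing the sum of absolute differences.
    # (Caller keeps the patch away from image borders.)
    ref = img_a[row - half:row + half, col - half:col + half]
    best, best_cost = 0, np.inf
    for d in range(-search, search + 1):
        cand = img_b[row - half:row + half, col - half + d:col + half + d]
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best, best_cost = d, cost
    return best

def distance_from_disparity(disp_px, focal_px, baseline_m):
    # Triangulate distance from the sub-aperture parallax.
    if disp_px == 0:
        return np.inf
    return focal_px * baseline_m / abs(disp_px)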
Systems and methods for gathering data and information on surface characteristics of an object
A system for gathering information and data on the surface characteristics of an object includes a projector, a table, a first camera, and a second camera. The projector is suspended above the table and arranged to project a random pattern of optical indicators onto the table. The optical indicators can be dots, lines, or other such indicators. The table is arranged to hold the object to be inspected. The first camera is positioned above and to one side of the table and angled toward the table. The second camera is positioned above and to the opposite side of the table and angled toward the table. The first and second cameras are arranged to capture images of the optical indicators projected onto the object. The system is further arranged to gather information and data from the captured images and to determine the surface characteristics of the object from the gathered information and data.
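Once the projected indicators have been matched between the two cameras and triangulated, a simple surface characteristic such as flatness can be read off a best-fit plane. A small sketch of that last step, assuming the triangulated dot positions are already available as an N x 3 array (the function names are illustrative):

import numpy as np

def plane_fit(points):
    # Least-squares plane through 3D points; returns (centroid, normal).
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    return centroid, Vt[-1]          # normal = direction of least variance

def flatness(points):
    # Peak-to-valley deviation of the points from the best-fit plane.
    c, n = plane_fit(points)
    d = (points - c) @ n
    return d.max() - d.min()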
SYSTEM AND METHOD OF SCANNING AN ENVIRONMENT USING MULTIPLE SCANNERS CONCURRENTLY
A system for generating a three-dimensional (3D) scan of an environment includes multiple 3D scanners, including a first 3D scanner and a second 3D scanner at respective first and second positions. The system further includes a controller coupled to the 3D scanners. The first 3D scanner acquires a first set of 3D coordinates, the first set of 3D coordinates having a first portion. The second 3D scanner acquires a second set of 3D coordinates, the second set of 3D coordinates having a second portion. The first portion and the second portion are simultaneously transmitted to the controller by the first 3D scanner and the second 3D scanner, respectively, while the first set of 3D coordinates and the second set of 3D coordinates are being acquired. The controller registers the first portion and the second portion to each other while the first set of 3D coordinates and the second set of 3D coordinates are being acquired.
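Registering the partial sets as they stream in amounts to estimating a rigid transform between corresponding 3D points. A compact sketch of the standard Kabsch/Umeyama solution, offered only as an illustration of on-the-fly registration, with correspondences assumed known:

import numpy as np

def rigid_register(src, dst):
    # Best-fit rotation R and translation t mapping src onto dst
    # (Kabsch / Umeyama; src and dst are N x 3 corresponding points).
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:         # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t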
INTRAORAL 3D SCANNER EMPLOYING MULTIPLE MINIATURE CAMERAS AND MULTIPLE MINIATURE PATTERN PROJECTORS
A method for generating a 3D image includes driving structured light projector(s) to project a pattern of light on an intraoral 3D surface, and driving camera(s) to capture images, each image including at least a portion of the projected pattern, each one of the camera(s) comprising an array of pixels. A processor compares a series of images captured by each camera and determines which of the portions of the projected pattern can be tracked across the images. The processor constructs a three-dimensional model of the intraoral three-dimensional surface based at least in part on the comparison of the series of images. Other embodiments are also described.
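Deciding which portions of the projected pattern can be tracked across a series of images can be illustrated with greedy nearest-neighbour tracking of detected spot centroids. The sketch below assumes spots have already been detected in each frame and uses illustrative names and thresholds; it is not the patent's tracking method:

import numpy as np

def track_spots(frames_spots, max_move=5.0):
    # Greedy nearest-neighbour tracking of projected-pattern spots
    # across a series of frames; returns indices of spots in frame 0
    # that can be followed through every subsequent frame.
    tracked = list(range(len(frames_spots[0])))
    positions = np.asarray(frames_spots[0], dtype=float)
    for spots in frames_spots[1:]:
        spots = np.asarray(spots, dtype=float)
        keep, new_pos = [], []
        for i, p in zip(tracked, positions):
            d = np.linalg.norm(spots - p, axis=1)
            j = int(np.argmin(d))
            if d[j] <= max_move:
                keep.append(i)
                new_pos.append(spots[j])
        tracked, positions = keep, np.asarray(new_pos)
    return tracked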
OBJECT SHAPE MEASUREMENT APPARATUS AND METHOD, AND PROGRAM
Provided are an apparatus and a method for measuring the shape and thickness of a transparent object. The apparatus includes a light projecting section configured to output beams of light toward a transparent object, a light receiving sensor configured to receive the beams of light that have passed through the transparent object, and a data processing section configured to analyze the received light signal in each light receiving element of the light receiving sensor. The light projecting section outputs, in parallel, beams of light from a plurality of light sources, and the data processing section analyzes the received light signal in each light receiving element and identifies the light source of any beam of light incident on a given light receiving element by using light source combination information that is stored in a storage section and that corresponds to the value of the received light signal. Moreover, the shapes of both the front and back surfaces of the transparent object are calculated by computing a Mueller matrix representing the change in the polarization state of the beam of light output from each of the light sources of the light projecting section.
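Given the known polarization states of the probing beams and the measured states after passage through the object, a Mueller matrix can be recovered by least squares. A minimal sketch in that spirit, using Stokes-vector inputs and outputs (the exact measurement model in the patent may differ, and the function name is mine):

import numpy as np

def fit_mueller(stokes_in, stokes_out):
    # Least-squares Mueller matrix M such that s_out = M @ s_in.
    # stokes_in, stokes_out: arrays of shape (N, 4), one row per
    # polarization state of the probing beams (N >= 4 for a full fit).
    S_in = np.asarray(stokes_in, dtype=float).T    # 4 x N
    S_out = np.asarray(stokes_out, dtype=float).T  # 4 x N
    return S_out @ np.linalg.pinv(S_in)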
Two-camera triangulation scanner with detachable coupling mechanism
A three-dimensional (3D) scanner having two cameras and a projector is detachably coupled to a device selected from the group consisting of: an articulated arm coordinate measuring machine, a camera assembly, a six degree-of-freedom (six-DOF) tracker target assembly, and a six-DOF light point target assembly.
Estimation of spatial relationships between sensors of a multi-sensor device
In one implementation, a device has a processor, a projector, a first infrared (IR) sensor, a second IR sensor, and instructions stored on a computer-readable medium that are executed by the processor to estimate the sensor-to-sensor extrinsic parameters. The projector projects IR pattern elements onto an environment surface. The first IR sensor captures a first image including first IR pattern elements corresponding to the projected IR pattern elements, and the device estimates 3D positions for the first IR pattern elements. The second IR sensor captures a second image including second IR pattern elements corresponding to the projected IR pattern elements, and the device matches the first IR pattern elements and the second IR pattern elements. Based on this matching, the device estimates a second extrinsic parameter corresponding to a spatial relationship between the first IR sensor and the second IR sensor.
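With 3D positions of the pattern elements known in the first IR sensor's frame and their 2D observations in the second IR sensor's image, a sensor-to-sensor extrinsic estimate can be obtained with a standard PnP solve. A sketch using OpenCV, assuming an undistorted second sensor with intrinsic matrix K2 (the names are illustrative, and the patent may estimate the parameters differently):

import numpy as np
import cv2

def estimate_extrinsics(points_3d_sensor1, points_2d_sensor2, K2):
    # Pose of the first IR sensor's frame relative to the second,
    # from matched pattern elements (PnP, no lens distortion assumed).
    obj = np.asarray(points_3d_sensor1, dtype=np.float64)
    img = np.asarray(points_2d_sensor2, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K2, distCoeffs=None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec          # x_sensor2 = R @ x_sensor1 + tvec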