Patent classifications
G01B11/2545
Dynamic vision sensor and projector for depth imaging
Systems, devices, and techniques related to matching features between a dynamic vision sensor and one or both of a dynamic projector or another dynamic vision sensor are discussed. Such techniques include casting a light pattern with projected features having differing temporal characteristics onto a scene and determining the correspondence(s) based on matching changes in detected luminance and temporal characteristics of the projected features.
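The correspondence idea described above — distinguishing projected features by their temporal behavior rather than their appearance — can be sketched as matching binary blink codes by Hamming distance. This is a minimal illustration under assumed names and code lengths, not the patent's actual method.

```python
import numpy as np

# Hypothetical sketch: each projected feature blinks with a distinct binary
# temporal code; a dynamic vision sensor reports luminance-change events,
# yielding an observed code per detected feature. Correspondence is taken as
# the projected code with minimum Hamming distance to the observed one.

def match_temporal_codes(observed, projected):
    """observed: (M, T) binary array; projected: (N, T) binary array.
    Returns, for each observed feature, the index of the best-matching
    projected feature and the Hamming distance to it."""
    # Pairwise Hamming distances between observed and projected codes
    d = (observed[:, None, :] != projected[None, :, :]).sum(axis=2)
    best = d.argmin(axis=1)
    return best, d[np.arange(len(observed)), best]

projected = np.array([[1, 0, 1, 0, 1, 0, 1, 0],
                      [1, 1, 0, 0, 1, 1, 0, 0],
                      [1, 1, 1, 1, 0, 0, 0, 0]])
observed = np.array([[1, 1, 0, 0, 1, 1, 0, 1]])  # code 1 with one corrupted bit
idx, dist = match_temporal_codes(observed, projected)
```

Even with one bit corrupted by sensor noise, the nearest-code rule recovers the correct projected feature.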
THREE-DIMENSIONAL SCANNER HAVING SENSORS WITH OVERLAPPING FIELDS OF VIEW
A system includes a projector configured to project a plurality of non-coded elements onto an object, the projector having a first optical axis. The system includes a first camera having a first lens and a first sensor. The first lens defines a second optical axis. The system includes a second camera having a second lens and a second sensor. The second lens defines a third optical axis. The projector, the first camera, and the second camera are disposed on a substantially straight line in a first direction. The first optical axis is substantially parallel to the second optical axis, which is substantially parallel to the third optical axis. A center of the first sensor is displaced along the first direction away from the second optical axis, and a center of the second sensor is displaced along the first direction away from the third optical axis.
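The geometric effect of displacing a sensor center away from its lens's optical axis can be quantified with a one-line calculation: the field of view is steered by roughly atan(delta / f) toward the other camera, which is how two parallel-axis cameras obtain overlapping fields of view. The numbers below are illustrative assumptions, not values from the patent.

```python
import math

# Shifting a sensor's centre off the lens's optical axis by delta
# (with focal length f, same units) steers the camera's field of view
# by approximately atan(delta / f) toward the axis of displacement.

def fov_steer_deg(delta_mm, focal_mm):
    return math.degrees(math.atan2(delta_mm, focal_mm))

steer = fov_steer_deg(1.0, 8.0)  # 1 mm shift behind an 8 mm lens
```

A 1 mm sensor shift behind an 8 mm lens steers the view by about 7 degrees; applying equal and opposite shifts to the two cameras converges their otherwise-parallel fields of view.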
Systems and Methods for Estimating Depth from Projected Texture using Camera Arrays
Systems and methods in accordance with embodiments of the invention estimate depth from projected texture using camera arrays. One embodiment of the invention includes: at least one two-dimensional array of cameras comprising a plurality of cameras; an illumination system configured to illuminate a scene with a projected texture; a processor; and memory containing an image processing pipeline application and an illumination system controller application. The illumination system controller application directs the processor to control the illumination system to illuminate a scene with a projected texture. The image processing pipeline application directs the processor to: utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture; capture a set of images of the scene illuminated with the projected texture; and determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images. Generating a depth estimate for a given pixel location in the image from the reference viewpoint includes: identifying pixels in the at least a subset of the set of images that correspond to the given pixel location in the image from the reference viewpoint based upon expected disparity at a plurality of depths along a plurality of epipolar lines aligned at different angles; comparing the similarity of the corresponding pixels identified at each of the plurality of depths; and selecting the depth from the plurality of depths at which the identified corresponding pixels have the highest degree of similarity as the depth estimate for the given pixel location in the image from the reference viewpoint.
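The depth-estimation steps described above amount to a plane sweep: hypothesize depths, predict the disparity each depth implies along each camera's epipolar direction, sample the corresponding pixels, and keep the depth where they agree best. The sketch below is an illustrative toy version under simplified geometry (pinhole cameras, variance as the dissimilarity measure), not the patent's implementation.

```python
import numpy as np

def estimate_depth(ref_xy, ref_img, alt_imgs, baselines, focal, depths):
    """For one reference pixel, sweep candidate depths and return the one
    at which the sampled pixels across the array are most similar."""
    x, y = ref_xy
    best_depth, best_cost = None, np.inf
    for z in depths:
        samples = [ref_img[y, x]]
        for img, (bx, by) in zip(alt_imgs, baselines):
            # expected disparity at depth z along this camera's epipolar line
            dx, dy = focal * bx / z, focal * by / z
            u, v = int(round(x + dx)), int(round(y + dy))
            if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                samples.append(img[v, u])
        cost = np.var(samples)  # low variance = high similarity
        if cost < best_cost:
            best_cost, best_depth = cost, z
    return best_depth

# Synthetic check: a projected dot at (5, 5) in the reference view appears
# shifted by the true disparity (f*b/z = 10*1/2 = 5 pixels) in the alternate view.
ref = np.zeros((16, 16)); ref[5, 5] = 1.0
alt = np.zeros((16, 16)); alt[5, 10] = 1.0
z_hat = estimate_depth((5, 5), ref, [alt], [(1.0, 0.0)], 10.0, [1.0, 2.0, 5.0])
```

Only at the true depth does the predicted disparity land on the matching dot, so the sweep selects it; the projected texture exists precisely to guarantee such distinctive matches on otherwise textureless surfaces.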
Intensity-modulated light pattern for active stereo
The subject disclosure is directed towards projecting light in a pattern in which the pattern contains components (e.g., spots) having different intensities. The pattern may be based upon a grid of initial points associated with first intensities and points between the initial points with second intensities, and so on. The pattern may be rotated relative to the cameras that capture it, with the captured images used for active depth sensing based upon stereo matching of dots in stereo images.
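A pattern of the kind described — an initial dot grid at one intensity, midpoint dots at a second intensity, the whole rotated relative to the camera axes — can be rendered in a few lines. Spacings, intensity levels, and the rotation angle below are illustrative choices, not values from the disclosure.

```python
import numpy as np

def dot_pattern(size=64, step=8, i1=1.0, i2=0.5, angle_deg=15.0):
    """Render a rotated two-intensity dot grid into a size x size image."""
    pts = []
    for y in range(0, size, step):
        for x in range(0, size, step):
            pts.append((x, y, i1))                        # initial grid points
            pts.append((x + step / 2, y + step / 2, i2))  # midpoints, dimmer
    # rotate dot coordinates about the pattern centre
    th = np.deg2rad(angle_deg)
    c, s = np.cos(th), np.sin(th)
    img = np.zeros((size, size))
    cx = cy = size / 2
    for x, y, inten in pts:
        xr = c * (x - cx) - s * (y - cy) + cx
        yr = s * (x - cx) + c * (y - cy) + cy
        xi, yi = int(round(xr)), int(round(yr))
        if 0 <= xi < size and 0 <= yi < size:
            img[yi, xi] = inten
    return img

pattern = dot_pattern()
```

Rotating the grid off the camera axes keeps dots from different rows falling on the same scanline, which reduces ambiguous matches during stereo correspondence.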
Active stereo with adaptive support weights from a separate image
Systems and methods are provided for stereo matching based upon active illumination, in which a patch from a non-actively illuminated image is used to obtain weights for patch similarity determinations in actively illuminated stereo images. To correlate pixels in the actively illuminated stereo images, adaptive support weights computations are used to determine the similarity of the patches corresponding to the pixels. The adaptive support weights for these computations are obtained by processing a non-actively illuminated (clean) image.
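The core idea — weights from the clean image, dissimilarity from the actively illuminated pair — can be sketched as follows. A Gaussian weighting on intensity difference stands in for the patent's weighting scheme, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def asw_cost(clean_patch, left_patch, right_patch, sigma=10.0):
    """Weighted patch dissimilarity: support weights come from the clean
    (non-actively illuminated) image; the per-pixel differences come from
    the actively illuminated stereo patches."""
    center = clean_patch[clean_patch.shape[0] // 2, clean_patch.shape[1] // 2]
    # pixels resembling the patch centre in the clean image count more,
    # so the support region follows object boundaries, not projected dots
    w = np.exp(-((clean_patch - center) ** 2) / (2 * sigma ** 2))
    diff = np.abs(left_patch - right_patch)
    return float((w * diff).sum() / w.sum())

clean = np.ones((3, 3)) * 5.0
p = np.arange(9.0).reshape(3, 3)
c0 = asw_cost(clean, p, p)        # identical patches: zero cost
c1 = asw_cost(clean, p, p + 1.0)  # uniform unit offset: cost 1.0
```

Deriving the weights from the clean image avoids a pitfall of active stereo: the projected dots themselves would otherwise dominate the weighting and cut the support region along pattern edges rather than along true object boundaries.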
METHOD FOR THE THREE DIMENSIONAL MEASUREMENT OF MOVING OBJECTS DURING A KNOWN MOVEMENT
A 3D measurement method including: projecting a pattern sequence onto a moving object; capturing a first image sequence with a first camera and, synchronously with the first image sequence, a second image sequence with a second camera; determining corresponding image points in the two sequences; and computing a trajectory of a potential object point from imaging parameters and from known movement data for each pair of image points that is to be checked for correspondence. If the image points correspond, the potential object point is imaged by both of them. The object positions derived therefrom at each capture point in time are imaged into the respective image planes of the two cameras, yielding predetermined image point trajectories in both cameras; the image points are compared with each other along these trajectories and examined for correspondence. Lastly, 3D measurement of the moving object is performed by triangulation.
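The final triangulation step, for the special case of a rectified camera pair, reduces to similar triangles: a correspondence (xl, y) / (xr, y) with baseline b and focal length f (in pixels) gives depth z = f·b/(xl − xr). This is textbook stereo triangulation as a minimal sketch, not the patent's full moving-object formulation.

```python
def triangulate(xl, xr, y, f, b, cx, cy):
    """Back-project a rectified stereo correspondence to 3D coordinates.
    (cx, cy) is the principal point; f is the focal length in pixels."""
    d = xl - xr        # disparity in pixels
    z = f * b / d      # depth from similar triangles
    x = (xl - cx) * z / f
    yw = (y - cy) * z / f
    return x, yw, z

# Assumed example values: f = 100 px, b = 0.1 m, principal point (50, 50)
x, yw, z = triangulate(60.0, 40.0, 50.0, 100.0, 0.1, 50.0, 50.0)
```

With a 20-pixel disparity this yields a depth of 0.5 m; larger disparities map to nearer points, which is why correspondence errors of even one pixel matter most for distant surfaces.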
Method and apparatus for determining 3D coordinates of at least one predetermined point of an object
In a method for determining 3D coordinates of at least one predetermined point of an object, the object is arranged in a measurement region and a variable illumination source projects a variable pattern onto the object arranged in the measurement region. An image recording apparatus which is arranged in a previously known relationship with respect to the illumination source records an image of at least one section of the object illuminated by the variable illumination source. The at least one predetermined point is detected in the recorded image. The 3D coordinates of the at least one predetermined point are determined from the recorded image, taking into account the previously known relationship of the image recording apparatus with respect to the illumination source, if a check of the recorded image reveals that the at least one predetermined point is marked in the recorded image by a feature of the variable pattern.
Handheld large-scale three-dimensional measurement scanner system simultaneously having photogrammetric and three-dimensional scanning functions
A handheld large-scale three-dimensional measurement scanner system simultaneously having photogrammetric and three-dimensional scanning functions includes: two cameras at fixed positions; at least one pattern projector; a photogrammetric module; and a three-dimensional scanning module, wherein at least one of the two cameras is a multipurpose camera that performs photogrammetry and three-dimensional scanning, wherein the photogrammetric module is configured to perform, by operating the multipurpose camera, global photogrammetry on a measured object and obtain three-dimensional coordinates of markers on a surface of the object, and wherein the three-dimensional scanning module is configured to perform, by operating the two cameras and the one pattern projector, three-dimensional scanning on the measured object by using the obtained markers as global positioning information of the measured object, and obtain three-dimensional contour data of the surface of the object. The present invention has photogrammetric and three-dimensional scanning functions simultaneously, a high degree of hardware integration, simplicity in operation, and excellent cost-performance ratio.
VEHICLE SURFACE SCANNING SYSTEM
An improved vehicle surface scanning system for assessing the damaged surfaces of a vehicle and producing estimates of repair costs. A mobile scanning booth is assembled as an open-ended, tunnel-like rig having a plurality of reflective panels positioned along opposite sides and across the roof of the booth to serve as deflection screens. A plurality of scanner modules are mounted in fixed positions about opposite ends of the booth and positioned to face its interior. Wheels provide controlled movement of the scanning booth over the vehicle. The scanner modules use a hybrid methodology combining active stereo 3D reconstruction and deflectometry to acquire measurements along the surfaces of the vehicle incrementally as the booth is moved. The incremental measurement data acquired during the mobile scan is processed and combined to produce accurate reports of the damaged surfaces and estimates of the associated repair costs.
Method and apparatus for vehicle inspection and safety system calibration using projected images
A vehicle service system and method for determining spatial parameters of a vehicle. A display system under processor control displays or projects visible indicia onto surfaces in proximity to a vehicle undergoing a safety system service or inspection, identifying one or more locations, relative to the determined vehicle centerline or thrust line, at which a calibration fixture, optical target, or simulated test drive imagery is visible for observation by a sensor onboard the vehicle.