Patent classifications
G01B11/2545
PATH SENSING USING STRUCTURED LIGHTING
A structured light pattern is projected onto the path of a vehicle so as to generate a plurality of light spots, and an image thereof is captured from the vehicle. A world-space elevation of at least a portion of the light spots is determined responsive to a pitch angle of the vehicle, which is in turn determined responsive to image-space locations of down-range-separated light spots.
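A minimal sketch of the geometry this abstract implies: under a pinhole camera model, ground spots at known down-range distances project to image rows that depend on the vehicle's pitch, so the pitch can be recovered by fitting the observed rows to the model. All concrete values here (focal length in pixels, camera height, spot ranges, the search bracket of ±0.1 rad) are illustrative assumptions, not taken from the patent.

```python
import math

def estimate_pitch(rows, ranges, f_px, cam_height):
    """Brute-force search for the pitch angle (radians) that best explains
    the observed image rows of ground spots at known down-range distances.

    Assumed model: row_i = f_px * tan(pitch + atan(cam_height / Z_i)),
    i.e. a pinhole camera at height cam_height looking down-range.
    """
    best_theta, best_err = 0.0, float("inf")
    theta = -0.1
    while theta <= 0.1:  # coarse grid search over plausible pitch angles
        err = sum((r - f_px * math.tan(theta + math.atan(cam_height / z))) ** 2
                  for r, z in zip(rows, ranges))
        if err < best_err:
            best_theta, best_err = theta, err
        theta += 1e-4
    return best_theta
```

Because the spots are separated down-range, their rows respond differently to pitch, which is what makes the fit well-posed; a single spot would not disambiguate pitch from elevation.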
MEASUREMENT OF THICKNESS OF THERMAL BARRIER COATINGS USING 3D IMAGING AND SURFACE SUBTRACTION METHODS FOR OBJECTS WITH COMPLEX GEOMETRIES
Embodiments described herein relate to a non-destructive measurement device and a non-destructive measurement method for determining the coating thickness of a three-dimensional (3D) object. In one embodiment, at least one first 3D image of an uncoated surface of the object and at least one second 3D image of a coated surface of the object are collected and analyzed to determine the coating thickness of the object.
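In the simplest case, the surface-subtraction step reduces to a per-point difference of two registered height maps. The sketch below assumes the two 3D images have already been aligned onto a common grid in the same units; for the complex geometries the title mentions, a real system would first register the scans and measure along surface normals, which is omitted here.

```python
def coating_thickness(uncoated, coated):
    """Per-cell coating thickness from two pre-registered height maps
    (same grid, same units): thickness = coated height - uncoated height.

    Assumes registration between the two 3D images is already done;
    this is only the subtraction step of the method.
    """
    return [[c - u for u, c in zip(row_u, row_c)]
            for row_u, row_c in zip(uncoated, coated)]
```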
System and method for automatic alignment and projection mapping
A system and method for automatic alignment and projection mapping are provided. A projector and at least two cameras are mounted with fields of view that overlap a projection area on a three-dimensional environment. A computing device: controls the projector to project structured light patterns that uniquely illuminate portions of the environment; acquires images of the patterns from the cameras; generates a two-dimensional mapping of the portions between projector space and camera space by processing the images and the correlated patterns; generates a cloud of points representing the environment using the mapping and camera positions; determines a projector location, orientation and lens characteristics from the cloud; positions a virtual camera relative to a virtual three-dimensional environment corresponding to the environment, with parameters of the virtual camera respectively matching parameters of the projector; and controls the projector to project based on the virtual location, orientation and characteristics of the virtual camera.
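Structured light patterns that "uniquely illuminate portions of the environment" are commonly realized as Gray-code bit planes: each camera pixel accumulates one bit per projected pattern, and decoding the resulting code word yields the projector column that lit it, which is exactly the projector-to-camera mapping the abstract describes. A minimal decoder (the patent does not specify Gray codes; this is one standard choice):

```python
def decode_gray_patterns(bit_planes):
    """Decode per-pixel Gray-code bits (most significant plane first),
    captured as a sequence of thresholded binary structured-light images,
    into projector column indices. bit_planes[k][y][x] is 0 or 1."""
    h, w = len(bit_planes[0]), len(bit_planes[0][0])
    columns = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gray = 0
            for plane in bit_planes:           # assemble the Gray code word
                gray = (gray << 1) | plane[y][x]
            binary, mask = gray, gray >> 1     # Gray -> binary conversion
            while mask:
                binary ^= mask
                mask >>= 1
            columns[y][x] = binary
    return columns
```

Gray codes are preferred over plain binary because adjacent projector columns differ in exactly one bit, so a thresholding error at a stripe boundary displaces the decoded column by at most one.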
Broken Wheel Detection System
A broken wheel detection system for detecting broken wheels on rail vehicles even when such vehicles are moving at high speed.
STRUCTURED LIGHT 3D SENSORS WITH VARIABLE FOCAL LENGTH LENSES AND ILLUMINATORS
A method for three-dimensional imaging includes emitting an output light with a structured light illuminator in a structured light pattern, receiving a trigger command, changing a field of illumination of the illuminator, and changing a field of view of an imaging sensor. The field of view and the field of illumination are linked, such that the field of view of the imaging sensor is the same as the field of illumination of the illuminator at both a short-throw field of view and a long-throw field of view. The method further includes detecting a reflected light with the imaging sensor and measuring a depth value by calculating a distortion of the structured light pattern.
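For a projector-camera pair, "calculating a distortion of the structured light pattern" typically means measuring how far a pattern feature shifts from its reference position and triangulating. A sketch of that core relation (the baseline and focal length values are illustrative assumptions, not from the patent):

```python
def depth_from_shift(f_px, baseline_m, shift_px):
    """Triangulated depth for one structured-light feature.

    f_px       -- focal length in pixels (assumed known from calibration)
    baseline_m -- projector-to-sensor baseline in meters
    shift_px   -- observed lateral shift (disparity) of the pattern
                  feature relative to its reference position, in pixels

    Depth follows the standard triangulation relation Z = f * b / d.
    """
    if shift_px <= 0:
        raise ValueError("shift must be positive for a finite depth")
    return f_px * baseline_m / shift_px
```

Note how a variable focal length interacts with this: changing the field of view changes `f_px`, so the depth scale must be recomputed whenever the lens state changes, which is presumably why the method links the field of view and field of illumination.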
3D MACHINE-VISION SYSTEM
One embodiment can provide a machine-vision system. The machine-vision system can include a structured-light projector, a first camera positioned on a first side of the structured-light projector, and a second camera positioned on a second side of the structured-light projector. The first and second cameras are configured to capture images under illumination of the structured-light projector. The structured-light projector can include a laser-based light source.
Methods for Measuring Properties of Rock Pieces
Provided herein is a method for measuring the size distribution and/or hardness of free falling rock pieces. The method comprises projecting at least one laser line on the falling rock pieces by a laser device; capturing images of the falling rock pieces at an angle from the at least one laser line by at least one camera; and obtaining size distribution data of the falling rock pieces based on data obtained from a topographical map generated from the captured images. Certain embodiments further comprise: obtaining at least one of the volume and area of individual rock pieces from the topographical map; conducting a data analysis on at least one of the volume and area measurements of the rock pieces to reduce at least one of sampling and measurement errors; determining the size distribution of the falling rock pieces based on the data analysis and, optionally, evaluating a rock hardness index for the rock. Further provided is a method comprising: producing two topographical maps of the pieces from captured images; and obtaining the volume of pieces from
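The volume-from-topographical-map step the abstract describes can be sketched as integrating the height map over the pixel footprint, with an equivalent spherical diameter as one common size metric for building the distribution. Both helper names and the choice of metric are illustrative, not taken from the patent:

```python
import math

def piece_volume(height_map, cell_area):
    """Volume under a topographical height map of a single rock piece:
    the sum of per-cell heights times the ground area of one cell.
    Assumes heights are measured from the conveyor/reference plane."""
    return sum(h for row in height_map for h in row) * cell_area

def equivalent_diameter(volume):
    """Diameter of the sphere with the same volume -- one common way to
    turn per-piece volumes into a size distribution."""
    return (6.0 * volume / math.pi) ** (1.0 / 3.0)
```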
THREE-DIMENSIONAL COMPUTER VISION BASED ON PROJECTED PATTERN OF LASER DOTS AND GEOMETRIC PATTERN MATCHING
In one embodiment, a method comprises generating, by a computing device from first and second images of a projected pattern of laser dots detected by respective camera devices in a physical environment, a stereoscopic two-dimensional (2D) object pair based on determining 2D positions for each of the laser dots detected in the first and second images, creating a first mesh of geometric patterns from the 2D positions in the first image and a corresponding second mesh of the geometric patterns from the 2D positions in the second image, and creating the stereoscopic 2D object pair based on matching corresponding geometric patterns from the first and second meshes of the geometric patterns. A three-dimensional (3D) model of the physical environment is generated based on executing stereoscopic triangulation of the stereoscopic 2D object pair. The 3D model causes a controllable device to interact with the physical environment.
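Once a dot has been matched between the two images, the stereoscopic triangulation step has a closed form for a rectified camera pair. A minimal sketch, assuming rectified cameras with known focal length and baseline (the patent does not commit to this simplified geometry):

```python
def triangulate_dot(x_left, x_right, y, f_px, baseline_m):
    """Triangulate one matched laser dot from a rectified stereo pair.

    For rectified cameras the match lies on the same row y; with
    disparity d = x_left - x_right (pixels), depth is Z = f * b / d,
    and X, Y follow by back-projection through the left camera.
    Returns (X, Y, Z) in meters, in the left camera frame.
    """
    d = x_left - x_right
    z = f_px * baseline_m / d
    return (x_left * z / f_px, y * z / f_px, z)
```

The geometric-pattern matching in the abstract exists precisely to supply reliable `(x_left, x_right)` pairs: individual laser dots look identical, so local constellations of dots, rather than dot appearance, establish the correspondence.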
UNDERWATER LASER BASED MODELING DEVICE
An image acquisition unit for obtaining data to generate at least one three-dimensional representation of at least one underwater structure is disclosed. The image acquisition unit includes a unit body, a plurality of cameras, a first laser light device, and a second laser light device. The first laser light device can operate based on a first illumination setting. The second laser light device can operate based on a second illumination setting. The first and second cameras can be configured to capture light during the first illumination setting and generate a first set of data representative of the first laser projecting on the at least one underwater structure at a predetermined scan rate. The third and fourth cameras can be configured to capture light during the second illumination setting and generate a second set of data representative of the second laser projecting on the at least one underwater structure at the predetermined scan rate.
Imager for Detecting Visual Light and Projected Patterns
Methods and systems for depth sensing are provided. A system includes a first and second optical sensor each including a first plurality of photodetectors configured to capture visible light interspersed with a second plurality of photodetectors configured to capture infrared light within a particular infrared band. The system also includes a computing device configured to (i) identify first corresponding features of the environment between a first visible light image captured by the first optical sensor and a second visible light image captured by the second optical sensor; (ii) identify second corresponding features of the environment between a first infrared light image captured by the first optical sensor and a second infrared light image captured by the second optical sensor; and (iii) determine a depth estimate for at least one surface in the environment based on the first corresponding features and the second corresponding features.
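Step (iii), combining depth evidence from visible-light features and IR-pattern features, can be sketched as a confidence-weighted fusion of the two per-point estimates. The patent does not specify the fusion rule; inverse-variance weighting is one standard assumption:

```python
def fuse_depths(depth_vis, depth_ir, w_vis, w_ir):
    """Fuse two depth estimates for the same surface point: one from
    visible-light correspondences, one from IR-pattern correspondences.
    Weights are assumed confidences (e.g. inverse variances); the result
    leans toward whichever cue is more reliable for this surface."""
    return (w_vis * depth_vis + w_ir * depth_ir) / (w_vis + w_ir)
```

This reflects why the imager interleaves both photodetector types: textured, well-lit surfaces favor the visible-light cue, while textureless surfaces still return the projected IR pattern.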