G01B11/2545

Triangulation scanner having flat geometry and projecting uncoded spots

A projector projects an uncoded pattern of uncoded spots onto an object, which is imaged by a first camera and a second camera. A processor determines 3D coordinates of the spots on the object based on triangulation, and further determines correspondence among the projected and imaged spots based at least in part on the nearness of intersection of lines drawn from the projector and image spots through their respective perspective centers.
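The nearness-of-intersection test above can be sketched numerically: back-project rays through each camera's perspective center, find the closest approach between the two rays, and take the gap as the correspondence score (with the midpoint of closest approach as the 3D spot estimate). A minimal sketch, assuming non-parallel rays with unit direction vectors; the function name and parameterization are illustrative, not the patent's implementation:

```python
import numpy as np

def ray_closest_points(o1, d1, o2, d2):
    """Closest points between rays o1 + t1*d1 and o2 + t2*d2 (d1, d2 unit vectors).

    Returns the two closest points and their gap; a small gap suggests a
    correct spot correspondence, and the midpoint is the 3D spot estimate.
    """
    b = np.dot(d1, d2)
    w = o1 - o2
    denom = 1.0 - b * b          # assumes rays are not parallel (denom != 0)
    t1 = (b * np.dot(d2, w) - np.dot(d1, w)) / denom
    t2 = (np.dot(d2, w) - b * np.dot(d1, w)) / denom
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return p1, p2, np.linalg.norm(p1 - p2)
```

In a matching loop, each candidate pairing of first-image and second-image spots would be scored this way, and the pairing with the smallest gap accepted.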

Projection device and projection method

A projection device can include: an illuminating unit for emitting light; and a projection unit having a mirror surface, the projection unit being designed to project the light emitted by the illuminating unit by means of the mirror surface into an object space and to shape it into different spatially structured light patterns in the object space. The projection device is distinguished in that the mirror surface is deformable, at least in regions, and in that the projection unit, for forming the different spatially structured light patterns in the object space, has at least one actuator for deforming the mirror surface, at least in regions. The present subject matter furthermore relates to a projection method, to a device, and to a method for detecting a three-dimensional contour.

VOLUME MEASURING APPARATUS AND VOLUME MEASURING METHOD FOR BOX
20210166413 · 2021-06-03 ·

A volume measuring apparatus having a first camera, a second camera, an emitting unit, and a processing unit is disclosed. The processing unit controls the emitting unit to emit invisible structured light, and controls the first and second cameras to capture a left image and a right image, both containing a target-box. The processing unit generates a depth graph according to the left and right images, and scans the depth graph along multiple scanning lines to determine a middle line, a bottom line, a left-sideline, and a right-sideline of the target-box in the depth graph. The processing unit then performs scanning, within the range bounded by the middle line, the bottom line, the left-sideline, and the right-sideline, to obtain a plurality of width information, height information, and length information. The processing unit computes the volume-related data of the target-box according to this width, height, and length information.
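Because each scan line yields its own width, height, and length measurement, the per-line values can be combined robustly before multiplying. A minimal sketch, assuming (this is an illustrative choice, not stated in the abstract) that the median is used to reject outlier scan lines:

```python
import statistics

def box_volume(widths, heights, lengths):
    """Estimate box volume from per-scan-line measurements.

    The median of each dimension list discards outlier scan lines
    (e.g. lines clipped by the box edges) before the product is taken.
    """
    w = statistics.median(widths)
    h = statistics.median(heights)
    l = statistics.median(lengths)
    return w * h * l
```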

Multiple camera microscope imaging with patterned illumination

An array of more than one digital micro-camera, along with the use of patterned illumination and a digital post-processing operation, jointly create a multi-camera patterned illumination (MCPI) microscope. Each micro-camera includes its own unique lens system and detector. The field-of-view of each micro-camera unit at least partially overlaps with the field-of-view of one or more other micro-camera units within the array. The entire field-of-view of a sample of interest is imaged by the entire array of micro-cameras in a single snapshot. In addition, the MCPI system uses patterned optical illumination to improve its effective resolution. The MCPI system captures one or more images as the patterned optical illumination changes its distribution across space and/or angle at the sample. Then, the MCPI system digitally combines the acquired image sequence using a unique post-processing algorithm.

Enhanced stereoscopic imaging

An apparatus for enhancing stereoscopic imaging includes a light projector, a stereo camera, and a control circuit. The light projector may be configured to project a pattern of light in a direction. The pattern may include (i) a background pattern that illuminates an area along the direction and (ii) a textured pattern that varies an intensity of the light in the area. The stereo camera may be configured to generate two sequences of synchronized images by imaging the area. The control circuit may be configured to (i) control power to the light projector, (ii) receive the two sequences of synchronized images from the stereo camera and (iii) generate one or more output signals in response to the two sequences of synchronized images.

Image observing device, image observing method, image observing program, and computer-readable recording medium
11009344 · 2021-05-18 ·

A control section 200 executes: photographing processing for controlling the light projecting section and the light receiving section to photograph a measurement object placed on a stage; contour extracting processing for extracting a contour of the measurement object from an image of the measurement object; storing processing for determining whether the measurement object is present in rectangular regions adjacent to a photographing visual field and causing a storing section to store the coordinate positions of one or more of the rectangular regions where it is determined that the measurement object is present; driving processing for driving the stage-plane-direction driving section to move the photographing visual field to any one of the coordinate positions stored by the storing processing; and coupled-image generation processing for generating a coupled image by coupling images of the rectangular regions adjacent to one another, obtained by repeatedly executing the photographing processing.

Shape measuring device and shape measuring method
11002535 · 2021-05-11 ·

A shape measuring device 100 includes a control section 200 for executing: determination processing for determining whether an unmeasured region having height information is present outside a depth measurement range, which is the height range in which pattern light can be irradiated from a light projecting section; focal-position changing processing for controlling an optical-axis-direction driving section to change a focal position of a light receiving section when the determination processing determines that the unmeasured region is present; and synthesis processing for generating synthesized stereoscopic shape data by combining a plurality of stereoscopic shape data generated by automatically repeating the stereoscopic-shape-data acquisition processing, the determination processing, and the focal-position changing processing until the determination processing determines that the unmeasured region is absent or a predetermined end condition is satisfied.
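The acquire/determine/refocus cycle described above can be sketched as a simple loop. This is only an illustrative control skeleton under assumed callback names, with the pass limit standing in for the "predetermined end condition":

```python
def measure_full_height(acquire, has_unmeasured, shift_focus, max_passes=10):
    """Repeat acquisition, determination, and refocusing until the object
    is fully measured or the pass limit (the end condition) is reached."""
    clouds = []
    for _ in range(max_passes):
        cloud = acquire()                  # stereoscopic-shape-data acquisition
        clouds.append(cloud)
        if not has_unmeasured(cloud):      # determination processing
            break
        shift_focus()                      # focal-position changing processing
    return [p for c in clouds for p in c]  # synthesis: merge all passes
```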

THREE-DIMENSIONAL MEASUREMENT DEVICE

A system and method of determining three-dimensional coordinates is provided. The method includes, with a projector, projecting onto an object a projection pattern that includes a collection of object spots. With a first camera, a first image is captured that includes first-image spots. With a second camera, a second image is captured that includes second-image spots. Each first-image spot is divided into first-image spot rows, and each second-image spot is divided into second-image spot rows. Central values are determined for each first-image and second-image spot row. A correspondence is determined among first-image and second-image spot rows, the corresponding first-image and second-image spot rows forming a spot-row image pair; each spot-row image pair has a corresponding object spot row on the object. Three-dimensional (3D) coordinates of each object spot row are determined based on the central values of the corresponding spot-row image pairs. The 3D coordinates of the object spot rows are stored.
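One common way to compute a per-row central value for an imaged spot is an intensity-weighted centroid across each pixel row, which gives a subpixel column position. A minimal sketch, assuming (as an illustration only) that a spot arrives as a small grayscale patch:

```python
import numpy as np

def spot_row_centers(patch, row_offset=0):
    """Intensity-weighted centroid column for each pixel row of a spot patch.

    patch      : 2D array of intensities covering one imaged spot
    row_offset : image row index of the patch's first row
    Returns a list of (image_row, subpixel_column) central values.
    """
    cols = np.arange(patch.shape[1])
    centers = []
    for r, row in enumerate(patch):
        total = row.sum()
        if total > 0:  # skip empty rows at the spot's edges
            centers.append((r + row_offset, (row * cols).sum() / total))
    return centers
```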

Measurement of thickness of thermal barrier coatings using 3D imaging and surface subtraction methods for objects with complex geometries

Embodiments described herein relate to a non-destructive measurement device and a non-destructive measurement method for determining the coating thickness of a three-dimensional (3D) object. In one embodiment, at least one first 3D image of an uncoated surface of the object and at least one second 3D image of a coated surface of the object are collected and analyzed to determine the coating thickness of the object.
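The surface-subtraction idea can be sketched as a point-to-surface distance between the two 3D images. A minimal brute-force sketch, assuming both scans are already registered in a common coordinate frame as N x 3 point clouds (a k-d tree would replace the pairwise distance matrix for real data sizes):

```python
import numpy as np

def coating_thickness(coated, uncoated):
    """Per-point thickness: distance from each coated-surface point
    to its nearest uncoated-surface point. Both inputs are (N, 3)
    arrays assumed to be registered in the same coordinate frame."""
    d = np.linalg.norm(coated[:, None, :] - uncoated[None, :, :], axis=2)
    return d.min(axis=1)  # nearest-surface distance per coated point
```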

SYSTEMS AND METHODS FOR THREE DIMENSIONAL OBJECT SCANNING
20210118213 · 2021-04-22 ·

The embodiments described herein relate generally to capturing a plurality of frames (i.e., image frames) of an object and utilizing those frames to render a 3D image of the object. The process of rendering a 3D image consists of at least two phases: a capturing phase and a reconstruction phase. During the capturing phase, a plurality of frames is captured of an object, and based upon these frames a 3D model of the object may be rendered by a computationally inexpensive algorithm. By utilizing a computationally inexpensive algorithm, mobile devices may be able to successfully render 3D models of objects.