G01B11/2545

Method for assembling a projecting apparatus
10767984 · 2020-09-08

A method is provided for assembling a 3D sensing apparatus that comprises at least two projectors, wherein assembly of the apparatus is carried out by ensuring that a pattern formed from a combination of images projected by each of the at least two projectors is not formed along an epipolar line, or part thereof, more than once. The method comprises the steps of: placing the at least two projectors at initial approximate physical positions within the 3D sensing apparatus; and placing one or more projectors' protectors on top of the at least two projectors, thereby changing the projectors' initial positions and automatically positioning them accurately in their pre-defined positions and orientations by means of the one or more projectors' protectors.
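The uniqueness constraint described above can be sketched as a simple check: assuming a rectified geometry in which epipolar lines coincide with image rows, no pattern feature code may repeat on the same row. The function name and the `(row, col, code)` feature representation are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: verify that no pattern feature repeats along an
# epipolar line. Assumes rectified geometry, so epipolar lines are image rows.

def pattern_is_epipolar_unique(features):
    """features: iterable of (row, col, code) tuples from the combined
    projected pattern. Returns True if every code appears at most once
    per epipolar line (row)."""
    seen = set()
    for row, _col, code in features:
        key = (row, code)
        if key in seen:
            return False  # same code twice on one epipolar line -> ambiguous match
        seen.add(key)
    return True

# A pattern with distinct codes per row passes; a duplicate on row 3 fails.
ok = pattern_is_epipolar_unique([(1, 10, "A"), (1, 40, "B"), (3, 5, "C")])
bad = pattern_is_epipolar_unique([(3, 5, "C"), (3, 90, "C")])
```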

Hybrid depth detection and movement detection

A head-mounted device (HMD) is configured to perform depth detection in conjunction with movement tracking. The HMD includes a stereo camera pair comprising a first camera and a second camera, both of which are mounted on the HMD. The fields of view of the two cameras overlap to form an overlapping field of view. These cameras are configured to detect both visible light and infrared (IR) light. The HMD also includes an IR dot-pattern illuminator that is configured to emit an IR dot-pattern illumination. The HMD uses the IR dot-pattern illumination to determine an object's depth. The HMD also includes one or more flood IR light illuminators that emit a flood of IR light. The HMD uses the flood of IR light to track at least its own movements, and in some cases hand movements, in various environments, including low-light environments.
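Depth from the overlapping field of view of a stereo pair follows the standard pinhole relation, depth = focal length × baseline / disparity. A minimal sketch, with all numeric values illustrative rather than taken from the patent:

```python
# Minimal sketch of stereo depth recovery from a rectified camera pair:
# depth = focal_length (px) * baseline (m) / disparity (px).

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("point must be visible in both cameras with positive disparity")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 6 cm baseline, 35 px disparity.
z = depth_from_disparity(700.0, 0.06, 35.0)
```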

Three-dimensional computer vision based on projected pattern of laser dots and geometric pattern matching

In one embodiment, a method comprises generating, by a computing device, a stereoscopic two-dimensional (2D) object pair from first and second images of a projected pattern of laser dots detected by respective camera devices in a physical environment. The generating is based on determining 2D positions for each of the laser dots detected in the first and second images, creating a first mesh of geometric patterns from the 2D positions in the first image and a corresponding second mesh of geometric patterns from the 2D positions in the second image, and creating the stereoscopic 2D object pair by matching corresponding geometric patterns between the first and second meshes. A three-dimensional (3D) model of the physical environment is generated by executing stereoscopic triangulation of the stereoscopic 2D object pair. The 3D model causes a controllable device to interact with the physical environment.
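One plausible way to match geometric patterns between the two meshes is to summarize each triangle of neighboring dots by a scale-invariant side-ratio signature and pair triangles whose signatures agree. This is a hedged illustration of the general idea, not the patent's actual matching algorithm; the function names and tolerance are assumptions.

```python
import math

# Illustrative geometric pattern matching: triangles of dots are described by
# the ratios of their sorted side lengths (invariant to translation, rotation
# and scale), and triangles with near-identical signatures are paired.

def triangle_signature(p1, p2, p3):
    sides = sorted(math.dist(a, b) for a, b in ((p1, p2), (p2, p3), (p3, p1)))
    longest = sides[2]
    return (sides[0] / longest, sides[1] / longest)

def match_triangles(mesh_a, mesh_b, tol=0.02):
    """mesh_a, mesh_b: lists of point triples. Returns index pairs whose
    signatures agree within tol."""
    pairs = []
    for i, ta in enumerate(mesh_a):
        sig_a = triangle_signature(*ta)
        for j, tb in enumerate(mesh_b):
            sig_b = triangle_signature(*tb)
            if all(abs(a - b) < tol for a, b in zip(sig_a, sig_b)):
                pairs.append((i, j))
    return pairs

# A translated copy of a triangle matches; a stretched one does not.
tri = ((0, 0), (1, 0), (0, 1))
shifted = ((5, 2), (6, 2), (5, 3))
stretched = ((0, 0), (3, 0), (0, 1))
pairs = match_triangles([tri], [shifted, stretched])
```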

Shape measuring device and shape measuring method
10746539 · 2020-08-18

A shape measuring device includes an optical-axis-direction driving section configured to minutely displace a stage, relative to a light projecting section and a light receiving section, in an optical axis direction of the light receiving section, such that the phase of a projection pattern projected on the stage is shifted at a minute pitch finer than the width of the phase of the projection pattern, the minute pitch being defined by controlling light projecting elements of a pattern generating section.
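Fine phase shifting of this kind typically feeds an N-step phase-shifting computation: intensities sampled at N equally spaced phase offsets recover the wrapped phase of the pattern at each pixel. A minimal sketch under the standard sinusoidal-fringe model (not specific to this patent):

```python
import math

# N-step phase shifting sketch: samples I_k = A + B*cos(phi + 2*pi*k/N)
# yield the wrapped phase phi via the standard arctangent formula.

def wrapped_phase(intensities):
    """intensities: N >= 3 samples taken at equally spaced phase shifts."""
    n = len(intensities)
    num = sum(I * math.sin(2 * math.pi * k / n) for k, I in enumerate(intensities))
    den = sum(I * math.cos(2 * math.pi * k / n) for k, I in enumerate(intensities))
    return math.atan2(-num, den)  # wrapped to (-pi, pi]

# Example: recover phi = 0.7 rad from four simulated samples (A=2, B=1).
phi_true = 0.7
samples = [2.0 + math.cos(phi_true + 2 * math.pi * k / 4) for k in range(4)]
phi_est = wrapped_phase(samples)
```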

Image registration and augmented reality system and augmented reality method thereof

Disclosed are an image registration and augmented reality system, and an augmented reality method thereof, suitable for solving the problem of spatial localization of the temporomandibular joint (TMJ) in arthroscopic surgery. The system comprises a three-dimensional scanning model building device, a stereoscopic image photographing device, a projection device and an arithmetic unit. The three-dimensional scanning model is constructed from preoperative or intraoperative imaging of the patient, and the surface three-dimensional model constructed by the stereoscopic image photographing device is spatially aligned to it; the surface (skin layer) of the three-dimensional image is then removed to display the TMJ image. Through calibration of the stereoscopic image photographing device and the projection device, accurate three-dimensional TMJ image location information is projected onto the patient's body.

CONTOUR RECOGNITION DEVICE, CONTOUR RECOGNITION SYSTEM AND CONTOUR RECOGNITION METHOD
20200257923 · 2020-08-13

A contour recognition device includes: a projecting unit which projects pattern light; a light quantity adjustment unit which adjusts the light quantity of the pattern light; two photographing units which capture the target and the placement surface from different viewpoints; a distance calculation unit which calculates a distance at every two-dimensional position; an image generation unit which generates a distance image expressing, as a gradient, distances from a maximum distance to a minimum distance at every two-dimensional position; and a contour extraction unit which extracts a contour of the target. The light quantity adjustment unit adjusts the light quantity so that the distance of at least the contour of the target cannot be calculated while the distance of the placement surface can be calculated, and the image generation unit generates the image so that the maximum distance is greater than the distance of the placement surface.
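The gradient expression of distances can be sketched as a per-pixel mapping from a calculated distance to an 8-bit gray value between a chosen minimum and a maximum deliberately set beyond the placement surface, with uncalculable pixels (such as the target's contour) left unmapped. The value ranges and the `None` sentinel are illustrative assumptions.

```python
# Sketch of distance-image generation: calculable distances are linearly
# gradient-mapped into [0, 255]; pixels whose distance could not be
# calculated (e.g. the target contour) are passed through as a sentinel.

def distance_to_gray(dist, d_min, d_max, invalid=None):
    if dist is None:                      # distance not calculable at this pixel
        return invalid
    dist = min(max(dist, d_min), d_max)   # clamp into the mapped range
    return round(255 * (dist - d_min) / (d_max - d_min))

# Illustrative values: a point at 0.5 m in a 0-2 m range maps to gray 64.
g = distance_to_gray(0.5, 0.0, 2.0)
```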

System for capturing an image

A system includes: an illumination device; and an imaging device configured to capture an image of a target which is irradiated with light by the illumination device. The illumination device includes: a light emitting unit configured to emit first polarized light; a condensing unit configured to focus light emitted from the light emitting unit; a diffusion unit configured to diffuse the light focused by the condensing unit; and a uniformization optical system configured to receive the light diffused by the diffusion unit, uniformize an illuminance distribution of the light, and emit the light. The system further includes a selective transmission unit provided on an optical path from the target to an imaging element of the imaging device and configured to block the first polarized light at a predetermined blocking ratio.
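The effect of the selective transmission unit can be illustrated with a simple intensity model: the first polarized (e.g. specular) component is attenuated by the predetermined blocking ratio, while an unpolarized (diffuse) component passes an ideal polarization filter at roughly half intensity. The 50% diffuse transmission and the example ratio are textbook-polarizer assumptions, not values from the patent.

```python
# Illustrative intensity model for the selective transmission unit:
# the first polarized light is blocked at a predetermined ratio, while
# unpolarized light passes an ideal polarization filter at ~50%.

def transmitted(specular, diffuse, blocking_ratio=0.9):
    """Returns the intensity reaching the imaging element (arbitrary units)."""
    return specular * (1.0 - blocking_ratio) + diffuse * 0.5

# e.g. 10 units of specular light and 4 units of diffuse light.
out = transmitted(10.0, 4.0, 0.9)
```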

SURFACE RECONSTRUCTION OF AN ILLUMINATED OBJECT BY MEANS OF PHOTOMETRIC STEREO ANALYSIS

A method for surface reconstruction may include illuminating at least one object simultaneously by light emitted by a plurality of luminaires spaced apart from one another. The method may further include recording a photographic sequence comprising a plurality of individual images of the object(s). The method may further include reconstructing at least one visible object surface of the object by photometric stereo analysis.
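Classical photometric stereo under a Lambertian model inverts per-pixel intensities I_k = ρ(L_k · n) for known light directions L_k to recover albedo ρ and surface normal n. A minimal three-light sketch, solved with Cramer's rule so it needs no external libraries; this is the textbook formulation, not necessarily the patented method:

```python
# Lambertian photometric stereo with three known light directions:
# solve L g = I for g = albedo * normal, then split magnitude and direction.

def solve3(A, b):
    """Solve a 3x3 linear system A x = b via Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    x = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        x.append(det(Ai) / d)
    return x

def photometric_stereo(lights, intensities):
    """lights: three unit light-direction vectors (rows of L);
    intensities: the three pixel intensities. Returns (albedo, unit normal)."""
    g = solve3(lights, intensities)          # g = albedo * normal
    albedo = sum(c * c for c in g) ** 0.5
    normal = [c / albedo for c in g]
    return albedo, normal

# Toy example: axis-aligned lights, surface facing the third light, albedo 0.8.
albedo, normal = photometric_stereo(
    [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
    [0.0, 0.0, 0.8],
)
```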

THREE-DIMENSIONAL TRIANGULATIONAL SCANNER WITH BACKGROUND LIGHT CANCELLATION

A triangulation scanner system and method of operation are provided. The system includes a projector that alternately projects a pattern of light and no light during first and second time intervals, respectively. A camera includes a lens and a circuit with a photosensitive array, and captures an image of an object. The photosensitive array has a plurality of pixels including a first pixel, which includes an optical detector and first and second accumulators. The optical detector produces signals in response to light levels reflected from a point on the object. The first accumulator sums the signals during the first time intervals to obtain a first summed signal. The second accumulator sums the signals during the second time intervals to obtain a second summed signal. A processor determines 3D coordinates of the point based on the projected pattern of light and on the first and second summed signals.
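The two-accumulator scheme amounts to background subtraction: the pattern-on accumulator sums pattern plus ambient light, the pattern-off accumulator sums ambient light alone, and their difference isolates the projected pattern. A minimal sketch with illustrative sample values:

```python
# Sketch of two-accumulator background light cancellation: subtracting the
# "pattern off" sum from the "pattern on" sum removes the ambient component.

def pattern_signal(on_samples, off_samples):
    acc_on = sum(on_samples)    # first accumulator: pattern + background
    acc_off = sum(off_samples)  # second accumulator: background only
    return acc_on - acc_off     # background-cancelled pattern contribution

# Ambient light of 5 units per sample cancels; the pattern adds 3 per sample.
sig = pattern_signal([8, 8, 8, 8], [5, 5, 5, 5])
```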

Information processing device and information processing method

The present disclosure relates to an information processing device and an information processing method that are capable of estimating a self-position by accurately and continuously estimating self-movement. The information processing device according to an aspect of the present disclosure includes a downward imaging section and a movement estimation section. The downward imaging section is disposed on the bottom of a moving object traveling on a road surface and captures images of the road surface. The movement estimation section estimates the movement of the moving object from a plurality of images of the road surface captured at different time points by the downward imaging section. The present disclosure can be applied, for example, to a position sensor mounted in an automobile.
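One simple way such movement estimation can work is block matching between consecutive downward images: the pixel displacement that minimizes the sum of absolute differences over the overlap gives the surface shift, which the known camera scale then converts to vehicle motion. A brute-force sketch with an assumed search range, not the patent's specific estimator:

```python
# Illustrative movement estimation between two consecutive road-surface
# images: exhaustively test small (dy, dx) displacements and keep the one
# minimizing the mean absolute difference over the overlapping region.

def estimate_shift(img_a, img_b, max_shift=2):
    """img_a, img_b: equally sized 2D lists. Returns (dy, dx) such that
    img_b is approximately img_a displaced by (dy, dx)."""
    h, w = len(img_a), len(img_a[0])
    best_score, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad, count = 0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        sad += abs(img_a[y][x] - img_b[yy][xx])
                        count += 1
            score = sad / count
            if best_score is None or score < best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

# Synthetic test: img_b is img_a shifted one pixel to the right.
a = [[5 * y + x for x in range(5)] for y in range(5)]
b = [[a[y][x - 1] if x > 0 else 99 for x in range(5)] for y in range(5)]
shift = estimate_shift(a, b)
```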