G01B11/2545

Context-based depth sensor control
10038893 · 2018-07-31

An electronic device (100) includes a depth sensor (120), a first imaging camera (114, 116), and a controller (802). The depth sensor (120) includes a modulated light projector (119) to project a modulated light pattern (500). The first imaging camera (114, 116) is to capture at least a reflection of the modulated light pattern (500). The controller (802) is to selectively modify (1004) at least one of a frequency, an intensity, and a duration of projections of the modulated light pattern by the modulated light projector responsive to at least one trigger event (1002). The trigger event can include, for example, a change (1092) in ambient light incident on the electronic device, detection (1094) of motion of the electronic device, or a determination (1096) that the electronic device has encountered a previously-unencountered environment.
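A minimal sketch of the trigger-driven control described above, in Python. The names (`Trigger`, `ProjectorConfig`, `apply_trigger`) and the specific adjustment factors are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Trigger(Enum):
    AMBIENT_LIGHT_CHANGE = auto()
    DEVICE_MOTION = auto()
    NEW_ENVIRONMENT = auto()

@dataclass
class ProjectorConfig:
    frequency_hz: float = 30.0   # projection repetition rate
    intensity: float = 0.5       # normalized output power
    duration_ms: float = 10.0    # length of each projection burst

def apply_trigger(cfg: ProjectorConfig, trigger: Trigger) -> ProjectorConfig:
    """Selectively modify frequency, intensity, or duration per trigger.
    The factors below are hypothetical policy choices."""
    if trigger is Trigger.AMBIENT_LIGHT_CHANGE:
        # Brighter ambient light: raise intensity so the reflected
        # pattern stays detectable.
        return ProjectorConfig(cfg.frequency_hz, min(1.0, cfg.intensity * 2),
                               cfg.duration_ms)
    if trigger is Trigger.DEVICE_MOTION:
        # Device in motion: project more often to keep depth data fresh.
        return ProjectorConfig(cfg.frequency_hz * 2, cfg.intensity,
                               cfg.duration_ms)
    # Previously-unencountered environment: longer bursts for a denser scan.
    return ProjectorConfig(cfg.frequency_hz, cfg.intensity,
                           cfg.duration_ms * 4)
```

The point of the structure is that each trigger event maps to a modification of exactly one projection parameter, matching the "selectively modify at least one of" language of the claim.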

System and method for measuring three-dimensional surface features
10036631 · 2018-07-31

In some embodiments, a system for measuring surface features may include a pattern projector, at least one digital imaging device, and an image processing device. The pattern projector may project, during use, a pattern of light on a surface of an object. In some embodiments, the pattern projector moves, during use, the pattern of light along the surface of the object. In some embodiments, the pattern projector moves the pattern of light in response to electronic control signals. At least one of the digital imaging devices may record, during use, at least one image of the projected pattern of light. The image processing device, during use, converts projected patterns of light recorded in at least one of the images to three-dimensional data points representing a surface geometry of the object, using the relative positions and relative angles between the at least one imaging device and the pattern projector.
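The conversion of a recorded pattern to 3-D points relies on classic active triangulation: the projector and camera sit at a known baseline, and each observes the surface point at a known angle. A simplified 2-D sketch (not the patent's exact mathematics; function name and coordinate convention are assumptions):

```python
import math

def triangulate_point(baseline_m, cam_angle_rad, proj_angle_rad):
    """Intersect the camera ray (from the origin) with the projector ray
    (from (baseline_m, 0)), both measured as elevation angles from the
    baseline. Returns (x, z) of the illuminated surface point."""
    # Camera ray: z = x * tan(cam_angle); projector ray: z = (b - x) * tan(proj_angle).
    denom = math.tan(cam_angle_rad) + math.tan(proj_angle_rad)
    z = baseline_m * math.tan(cam_angle_rad) * math.tan(proj_angle_rad) / denom
    x = z / math.tan(cam_angle_rad)
    return x, z
```

With both rays at 45 degrees over a 1 m baseline, the rays meet midway at a depth of half the baseline, which is a quick sanity check on the geometry.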

IMAGE DEMOSAICING FOR HYBRID OPTICAL SENSOR ARRAYS

An imaging device comprises a hybrid optical sensor array including first and second sets of pixels that comprise different numbers of pixels. The first set of pixels is sensitive to infrared light, while the second set of pixels comprises three subsets of pixels sensitive to RGB light. A first set of data for a scene is captured by the first set of pixels and a second set of data for the scene is captured by the second set of pixels. The first and second sets of data are jointly demosaiced, such that the higher-resolution data set is utilized to increase the resolution of the lower-resolution data set. This allows high-resolution infrared and RGB images to be produced for a scene without the perspective or timing discrepancies inherent in multi-camera machine vision systems.
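To illustrate the idea of lifting a sparse sensor channel to the full grid, here is a toy demosaic step for the lower-resolution (IR) set. It is a deliberately simple neighbour-average fill, not the joint guided demosaicing of the patent; the layout (IR samples on a sparse subset of sites, `None` elsewhere) is an assumption:

```python
def demosaic_ir(mosaic):
    """mosaic: 2D list where measured IR samples are numbers and
    unmeasured sites are None. Fill each missing site with the mean of
    the measured IR samples in its 3x3 neighbourhood."""
    h, w = len(mosaic), len(mosaic[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if mosaic[r][c] is not None:
                out[r][c] = mosaic[r][c]
            else:
                # Average every measured IR site in the 3x3 window.
                vals = [mosaic[rr][cc]
                        for rr in range(max(0, r - 1), min(h, r + 2))
                        for cc in range(max(0, c - 1), min(w, c + 2))
                        if mosaic[rr][cc] is not None]
                out[r][c] = sum(vals) / len(vals)
    return out
```

A joint scheme as described above would additionally use edges from the denser RGB channels to steer this interpolation, rather than averaging blindly.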

MULTI-LINE ARRAY LASER THREE-DIMENSIONAL SCANNING SYSTEM, AND MULTI-LINE ARRAY LASER THREE-DIMENSIONAL SCANNING METHOD
20180180408 · 2018-06-28

The present invention provides a multi-line array laser three-dimensional scanning system and a multi-line array laser three-dimensional scanning method. The system performs precise synchronization and logic control of the multi-line array laser three-dimensional scanning system with a field-programmable gate array (FPGA), employs a line laser array as the projection pattern light source, and uses the FPGA to send trigger signals to a stereoscopic image sensor, an inertial sensor, and the line laser array. An upper computer receives image pairs taken by the stereoscopic image sensor; codes, decodes, and performs a three-dimensional reconstruction of the laser line-array patterns in the image pairs; performs a three-dimensional reconstruction of the feature points on the surface of the measured object; and matches and aligns the three-dimensional feature points at different times. The system predicts and corrects the matching calculation by employing a hybrid sensing technology, which registers and stitches the time-domain laser three-dimensional scanning data while evaluating the error level in real time and feeding it back to an error feedback controller to obtain an adjustment instruction. The system thereby performs laser three-dimensional scanning with low cost, high efficiency, high reliability, and high accuracy.
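The "match and align feature points at different times" step can be sketched with the simplest possible registration: a least-squares translation between matched 3-D feature points, whose residual doubles as the real-time error level fed back to the controller. This is an illustrative stand-in; the patent's hybrid-sensing prediction and correction, and any rotational alignment, are omitted:

```python
def align_translation(src, dst):
    """Best-fit translation mapping matched 3-D points src onto dst
    (least squares solution = shift of centroids). Returns the
    translation and the RMS residual, the latter serving as the
    error level to feed back."""
    n = len(src)
    t = tuple(sum(d[i] - s[i] for s, d in zip(src, dst)) / n
              for i in range(3))
    rms = (sum(sum((s[i] + t[i] - d[i]) ** 2 for i in range(3))
               for s, d in zip(src, dst)) / n) ** 0.5
    return t, rms
```

In a full pipeline the translation would stitch one scan frame onto the previous one, and a rising residual would trigger the adjustment instruction described above.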

Flexible reference system

A method of optical data acquisition includes shaping a flexible reference system to be at least partially within a limited line of sight volume with respect to a workpiece.

Measurement system and method for measuring multi-dimensions

A measurement system and method for measuring multiple dimensions of an object are provided. A two-dimensional (2D) image capturing device captures at least one macro-2D image of the object. A three-dimensional (3D) information acquisition device acquires micro-3D measured data of the object. An integration and estimation device performs 2D and 3D image correction on the macro-2D image and the micro-3D measured data to map the micro-3D measured data into the macro-2D image and output 3D-topography data corresponding to the macro-2D image of the object; then, based on a machine-learning mechanism, it performs a matching procedure on at least one connection feature between any two positions in the 3D-topography data against a database to select an adapted model. Based on at least one fitting function corresponding to the adapted model, the integration and estimation device estimates the connection features of the 3D-topography data to output at least one estimated feature amount, thereby obtaining measurement results corresponding to the object.

Depth Data Detection and Monitoring Apparatus
20180176544 · 2018-06-21

A depth data detection apparatus and a monitoring apparatus are disclosed. The depth data detection apparatus has at least two infrared light generators (11, 12) operating alternately, ensuring that each infrared light generator has sufficient power-off time while the system operates continuously, so that each infrared light generator can reach as much of its service life as possible. Different infrared light generators can project infrared beams at different angles and/or from different positions, and the resulting depth information can be fused to acquire the depth information of the object to be measured more completely. In addition, different infrared light generators can also project infrared beams to different areas, or to the same area, of the space to be measured for their respective purposes.
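The alternation itself reduces to a round-robin duty schedule: exactly one generator is active per time slot, so each generator is off for (n − 1)/n of the time while the system as a whole never goes dark. A minimal sketch (function name and slot model are assumptions):

```python
from itertools import cycle

def duty_schedule(num_generators, slots):
    """Return which IR generator is active in each of `slots` equal time
    slots, cycling round-robin so every generator gets power-off time
    while coverage stays continuous."""
    active = cycle(range(num_generators))
    return [next(active) for _ in range(slots)]
```

With two generators the schedule simply ping-pongs, giving each unit a 50% duty cycle.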

SUPER-RESOLVING DEPTH MAP BY MOVING PATTERN PROJECTOR
20180173947 · 2018-06-21

The subject disclosure is directed towards active depth sensing based upon moving a projector or projector component to project a moving light pattern into a scene. By capturing the moving light pattern over a set of frames, e.g., with a stereo camera system, and estimating light intensity at sub-pixel locations in each stereo frame, depth information may be computed at a sub-pixel level, i.e., at higher resolution than the native camera resolution.
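The core trick, stripped to one dimension, is that several low-resolution frames of the same pattern at known sub-pixel shifts can be binned onto a finer grid. The sketch below assumes the shifts are already known and are integer multiples of the fine-grid spacing; a real system would estimate the shifts and interpolate:

```python
def superresolve_1d(frames, shifts, factor):
    """frames: low-resolution intensity profiles of the same moving
    pattern; shifts: per-frame shift in fine-grid units; factor: grid
    refinement. Accumulate each sample at its shifted fine-grid bin and
    average, yielding a profile `factor` times denser than any frame."""
    n = len(frames[0]) * factor
    acc = [0.0] * n
    cnt = [0] * n
    for frame, shift in zip(frames, shifts):
        for i, v in enumerate(frame):
            j = i * factor + shift
            if 0 <= j < n:
                acc[j] += v
                cnt[j] += 1
    return [a / c if c else 0.0 for a, c in zip(acc, cnt)]
```

Two 2-sample frames offset by half a (coarse) pixel interleave into a 4-sample profile, which is the sub-pixel densification the abstract describes, applied to intensity rather than depth.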

Object-point three-dimensional measuring system using multi-camera array, and measuring method

A system for measuring object points on a three-dimensional object using a planar array of a multi-camera group, and a measuring method, are provided. The system is useful in the field of optical measuring technologies. The method includes establishing a measuring system of at least one four-camera group in which the digital cameras form a 2×2 array; matching an image object point acquired by the camera group; based on the matched object-point image coordinates, calculating coordinates of the spatial locations of the respective object points; and, based on those spatial coordinates, calculating other three-dimensional dimensions of the measured object to form three-dimensional point clouds and establish a three-dimensional point-cloud graph for three-dimensional stereoscopic reproduction. Here, full matching is performed for all measured points of the measured object by directly translating, superimposing, and comparing, point by point, the pixel points of the measured images in the X- and Y-axis directions. In this way a three-dimensional object may be reproduced.
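The translate-superimpose-compare matching can be illustrated in one dimension as a disparity search: slide one scanline over the other along the X axis and score each shift by the sum of absolute pixel differences. This is a generic stand-in for the patent's full 2-D procedure; the function name and the SAD cost are assumptions:

```python
def match_disparity(row_left, row_right, max_d):
    """Slide row_right under row_left by d = 0..max_d pixels, score each
    shift by mean absolute difference over the overlap, and return the
    best-scoring shift (the disparity)."""
    best_d, best_cost = 0, float("inf")
    n = len(row_left)
    for d in range(max_d + 1):
        cost = sum(abs(row_left[i] - row_right[i - d]) for i in range(d, n))
        cost /= (n - d)  # normalise by overlap length
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

With a 2×2 array, the same comparison runs along both X (horizontal camera pairs) and Y (vertical pairs), which is what lets every measured point be matched rather than only points on strong horizontal gradients.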

Locating a feature for robotic guidance
10002431 · 2018-06-19

Aspects herein use a feature detection system to visually identify a feature on a component. The feature detection system includes at least two cameras that capture images of the feature from different angles or perspectives. From these images, the system generates a 3D point cloud of the components in the images. Instead of projecting the boundaries of features onto the point cloud directly, the aspects herein identify predefined geometric shapes in the 3D point cloud. The system then projects pixel locations of the feature's boundaries onto the identified geometric shapes in the point cloud. Doing so yields the 3D coordinates of the feature which then can be used by a robot to perform a manufacturing process.
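Projecting a feature-boundary pixel onto an identified geometric shape amounts to casting the camera ray through that pixel and intersecting it with the fitted shape. A minimal sketch for the simplest such shape, a plane (names and the plane-only restriction are illustrative assumptions):

```python
def ray_plane_intersect(origin, direction, plane_point, plane_normal):
    """Intersect a camera ray (origin + t * direction) with a plane given
    by a point on it and its normal. Returns the 3-D hit point, or None
    if the ray is parallel to the plane."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-12:
        return None  # ray parallel to plane: no stable intersection
    t = dot([p - o for p, o in zip(plane_point, origin)], plane_normal) / denom
    return tuple(o + t * d for o, d in zip(origin, direction))
```

Intersecting with a fitted plane, cylinder, or sphere is far more robust than projecting onto raw point-cloud samples, since the fit smooths over sensor noise before the 3-D coordinate is handed to the robot.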