Non-uniform spatial resource allocation for depth mapping
11310479 · 2022-04-19
CPC classification
G06F3/011 (PHYSICS)
G06T7/521 (PHYSICS)
H04N13/254 (ELECTRICITY)
H04N13/271 (ELECTRICITY)
International classification
H04N13/254 (ELECTRICITY)
G06F3/00 (PHYSICS)
H04N13/271 (ELECTRICITY)
Abstract
A method for depth mapping includes projecting a pattern of optical radiation into a volume of interest containing an object while varying an aspect of the pattern over the volume of interest. The optical radiation reflected from the object responsively to the pattern is sensed, and a depth map of the object is generated based on the sensed optical radiation.
Claims
1. A method for depth mapping, comprising: projecting a pattern of optical radiation into a volume of interest containing an object while modulating the pattern as a function of angle over the volume of interest; sensing a time of flight of the optical radiation reflected from the object responsively to the pattern; and generating a depth map of the object based on the sensed time of flight.
2. The method according to claim 1, wherein projecting the pattern comprises applying a varying spatial pattern to the projected optical radiation.
3. The method according to claim 1, wherein projecting the pattern comprises applying a varying temporal pattern to the projected optical radiation.
4. The method according to claim 1, wherein projecting the pattern comprises applying a combination of spatial and temporal patterns to the projected optical radiation.
5. The method according to claim 1, wherein modulating the pattern comprises varying an intensity of at least a component of the pattern as a function of the angle.
6. The method according to claim 1, wherein the pattern is modulated so as to optimize a three-dimensional (3D) resolution of the depth map.
7. Apparatus for depth mapping, comprising: a radiation source, which is configured to project a pattern of optical radiation into a volume of interest containing an object while modulating the pattern as a function of angle over the volume of interest; a sensor, which is configured to output a signal that is indicative, responsively to the pattern, of a time of flight of the optical radiation reflected from the object; and a processor, which is configured to process the signal in order to generate a depth map of the object.
8. The apparatus according to claim 7, wherein the radiation source is configured to apply a varying spatial pattern to the projected optical radiation.
9. The apparatus according to claim 7, wherein the radiation source is configured to apply a varying temporal pattern to the projected optical radiation.
10. The apparatus according to claim 7, wherein the radiation source is configured to apply a combination of spatial and temporal patterns to the projected optical radiation.
11. The apparatus according to claim 7, wherein the pattern is modulated by varying an intensity of at least a component of the pattern as a function of the angle.
12. The apparatus according to claim 7, wherein the pattern is modulated so as to optimize a three-dimensional (3D) resolution of the depth map.
Description
DETAILED DESCRIPTION OF EMBODIMENTS
(5) In creating a depth mapping system, the designer typically attempts to optimize the 3D resolution of the system, including both the effective number of pixels in the map and the number of depth gradations. The resolution is limited, however, by the available resources, including the resolution and signal/noise ratio of the image capture module and, in active depth mapping systems, the power and pattern definition of the illumination module. (The term “active” is used in the context of the present patent application to refer to depth mapping techniques in which a pattern of optical radiation is projected onto an object and an image of the patterned radiation reflected from the object is captured by an imaging device. The pattern may be a spatial pattern, as in patterned illumination imaging systems, or a temporal pattern, as in time-of-flight imaging systems, or it may contain a combination of spatial and temporal patterns.)
(6) Embodiments of the present invention that are described hereinbelow provide methods that may be used to optimize the performance of a depth mapping system by applying the resources of the system non-uniformly over the volume of interest that is mapped by the system. Some of these embodiments are based on the realization that the depth of the volume of interest varies with angle relative to the illumination and image capture modules. Thus, system performance may be optimized, relative to the available resources, by varying aspects of the illumination pattern or the optical resolution, or both, as a function of the angle, responsively to the varying depth.
(8) Computer 24 processes data generated by assembly 22 in order to reconstruct a depth map of the VOI containing users 28. In one embodiment, assembly 22 projects a pattern of spots onto the scene and captures an image of the projected pattern. Assembly 22 or computer 24 then computes the 3D coordinates of points in the scene (including points on the surface of the users' bodies) by triangulation, based on transverse shifts of the spots in the captured image relative to a reference image. This approach is advantageous in that it does not require the user to hold or wear any sort of beacon, sensor, or other marker. It gives the depth coordinates of points in the scene relative to a predetermined reference plane, at a certain distance from assembly 22. Methods and devices for this sort of triangulation-based 3D mapping using a projected pattern are described, for example, in PCT International Publications WO 2007/043036, WO 2007/105205 and WO 2008/120217, whose disclosures are incorporated herein by reference. Alternatively, system 20 may use other methods of 3D mapping, based on single or multiple cameras or other types of sensors, such as time-of-flight cameras, as are known in the art.
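By way of illustration, the triangulation step reduces to a short calculation. The following minimal sketch assumes a pinhole camera model with focal length f (in pixels), a projector-to-camera baseline b, and a reference plane at distance z_ref; the function and the numbers in the example are hypothetical, not taken from the patent:

```python
def depth_from_shift(shift_px, f_px, baseline_m, z_ref_m):
    """Convert a transverse pattern shift (in pixels) to depth by
    triangulation against a reference plane at z_ref: a spot at depth z
    shifts by s = f * b * (1/z - 1/z_ref) relative to the reference
    image, so z = 1 / (s / (f * b) + 1 / z_ref).
    """
    return 1.0 / (shift_px / (f_px * baseline_m) + 1.0 / z_ref_m)

# Example: a 2-pixel shift with f = 580 px, b = 7.5 cm, reference at 2 m
print(depth_from_shift(2.0, 580.0, 0.075, 2.0))  # ~1.83 m
```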
(9) Although computer 24 is shown here as a separate unit from imaging assembly 22, some or all of the processing functions of the computer may alternatively be carried out by a suitable processor in assembly 22 itself or otherwise associated with the assembly.
(10) For simplicity and clarity in the description that follows, a set of Cartesian axes is marked in the figures.
(12) A depth imaging module 34 in assembly 22 captures images of the pattern reflected from the objects in VOI 46. Typically, the imaging module comprises objective optics 42, which focus light from a field of view (FOV) onto an image sensor 44. The image sensor may comprise, for example, a CMOS or CCD sensor, comprising an array of detector elements (not shown) that are sensitive to the optical radiation emitted by illumination module 30. Each of the detector elements outputs a signal corresponding to a pixel in the images of VOI 46.
(13) A processor, such as a microprocessor in assembly 22 (not shown) or in computer 24, receives the images from module 34 and compares the pattern in each image to a reference pattern stored in memory. The processor computes local shifts of parts of the pattern in the images captured by module 34 relative to the reference pattern and translates these shifts into depth coordinates. Details of this process are described, for example, in PCT International Publication WO 2010/004542, whose disclosure is incorporated herein by reference. Alternatively, as noted earlier, assembly 22 may be configured to generate depth maps by other means that are known in the art, such as stereoscopic imaging or time-of-flight measurements.
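The shift computation itself can be sketched as a brute-force block match. This is an illustrative stand-in for the correlation-based methods cited above, not the patented procedure; the window size and search range are arbitrary assumptions:

```python
import numpy as np

def local_shift(image, reference, y, x, half_win=8, max_shift=32):
    """Estimate the horizontal shift of the patch centered at (y, x) in
    `image` relative to `reference` by sum-of-absolute-differences
    matching; (y, x) is assumed to lie at least half_win from the border.
    """
    patch = image[y - half_win:y + half_win + 1,
                  x - half_win:x + half_win + 1].astype(np.float32)
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        lo, hi = x + s - half_win, x + s + half_win + 1
        if lo < 0 or hi > reference.shape[1]:
            continue  # shifted window would fall outside the reference
        ref = reference[y - half_win:y + half_win + 1, lo:hi].astype(np.float32)
        cost = np.abs(patch - ref).sum()
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift  # feed into depth_from_shift() above
```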
(15) The characteristic shape and dimensions of VOI 46 may be applied in optimizing the allocation of the resources of imaging assembly 22 over the VOI. Specifically, resources such as the available optical power of radiation source 38 and/or the available resolution of image sensor 44 may be allocated non-uniformly over the VOI. A number of examples of such allocations are described below.
(17) In an active depth mapping system, the brightness of radiation that reaches an object at distance d from illumination module 30 and is then reflected back and received by image capture module 34 drops in proportion to d⁴. Because the image area captured by each detector element in image sensor 44 grows as d², however, the optical power received by the detector elements from an object at distance d is inversely proportional to d². At any given angle, the required illumination intensity of the pattern projected by module 30 to provide a given minimum optical signal level at image sensor 44 is therefore determined by the maximum depth of the volume of interest at that angle, as illustrated by the solid curve in the figure.
(18) For this reason, the intensity distribution of illumination module 30 may be modulated so as to concentrate more optical radiation energy in the center of the volume of interest, at low angles, and less at higher angles, as illustrated by the dashed curve in the figure.
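Concretely, if the weakest return at each angle comes from the farthest point of the VOI at that angle, holding that worst-case return constant at the sensor calls for a projected intensity proportional to the square of the maximal depth. A minimal numeric sketch, in which the depth profile of the volume of interest is made up for the example:

```python
import numpy as np

def intensity_profile(d_max_m, total_power_w):
    """Allocate projector power over a set of angular samples so that the
    worst-case return, from the farthest point d_max at each angle, is
    constant: received power scales as I / d**2, so I must scale as d**2.
    """
    weights = np.asarray(d_max_m, dtype=float) ** 2
    return total_power_w * weights / weights.sum()

# Example: a VOI that is 5 m deep on-axis and 2 m deep at +/-30 degrees
theta_deg = np.linspace(-30.0, 30.0, 7)
d_max = 5.0 - 3.0 * np.abs(theta_deg) / 30.0
print(intensity_profile(d_max, total_power_w=1.0))
```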
(20) Although image 50 shows a characteristic fish-eye type of distortion, optics 42 may alternatively be designed, using methods of optical design that are known in the art, to give a distortion that is more precisely tailored to the maximum distance as a function of angle, as shown in the figures.
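One way to express such tailored distortion is to choose the lens mapping r(θ) so that the local magnification dr/dθ, and hence the number of pixels spent per unit angle, is proportional to the maximal depth at each angle. The sketch below assumes exactly that proportionality; the integration and normalization are straightforward, and the depth profile is again hypothetical:

```python
import numpy as np

def distortion_curve(theta_rad, d_max_m, sensor_half_width_mm):
    """Given angles ascending from 0 and the maximal VOI depth at each
    angle, build a lens mapping r(theta) whose slope dr/dtheta is
    proportional to d_max(theta), then scale it to fit the sensor."""
    density = np.asarray(d_max_m, dtype=float)  # desired dr/dtheta, up to scale
    steps = 0.5 * (density[1:] + density[:-1]) * np.diff(theta_rad)
    r = np.concatenate(([0.0], np.cumsum(steps)))  # trapezoidal integration
    return r * sensor_half_width_mm / r[-1]

# Example: depth falls from 5 m on-axis to 2 m at 30 degrees off-axis
theta = np.radians(np.linspace(0.0, 30.0, 31))
print(distortion_curve(theta, 5.0 - 3.0 * theta / theta[-1], 2.0))
```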
(21) The sort of optical distortion that is introduced by objective optics 42 in the embodiment described above may alternatively be implemented on the projection side, so that illumination module 30 itself casts the pattern with the desired angular density. Pattern projectors of this sort, based on diffractive optical elements (DOEs), are described in previously published patent applications.
(22) In some of the embodiments that are described in these publications, a compound DOE comprises one DOE that applies a pattern to the input beam and another DOE that splits the input beam into a matrix of output beams so as to tile a region in space with multiple adjacent instances of the pattern. Such DOEs typically create pincushion distortion in the projected pattern, because the diffraction orders are evenly spaced in terms of the sine of their diffraction angles, so that the angular distance between the projected orders grows with the order number. Furthermore, given the form of VOI 46, it is desirable that illumination module 30 vary the density of the pattern such that the density at a given projection angle is positively correlated with the farthest distance in the VOI at that angle. This criterion for optical resource allocation applies not only to DOE-based projectors, but also to other pattern projectors.
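The growing spacing between orders follows directly from the grating equation, sin θ_m = mλ/Λ: the orders are evenly spaced in the sine of the angle, not in the angle itself. A small numeric check, with an illustrative near-infrared wavelength and grating period:

```python
import numpy as np

wavelength_um = 0.83   # near-IR source, illustrative value
period_um = 10.0       # DOE grating period, illustrative value
m = np.arange(1, 12)   # diffraction order numbers

theta_deg = np.degrees(np.arcsin(m * wavelength_um / period_um))
print(np.diff(theta_deg))  # angular gaps between adjacent orders grow with m
```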
(24) Although the above embodiments present a number of specific ways in which the shape of the volume of interest can be used in enhancing the design and operation of system 20, other techniques for non-uniform resource allocation based on the shape of the volume of interest will be apparent to persons of skill in the art after reading the above description and are considered to be within the scope of the present invention. For example, pattern-based depth mapping systems sometimes suffer from problems of “phase wrapping,” as pattern shifts repeat themselves periodically with increasing depth, and computational resources must be invested in “unwrapping” the phase in order to disambiguate depth values. The known maximal depth of the volume of interest as a function of angle can be used to eliminate depth values that are out of range and thus simplify the task of disambiguation.
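To illustrate the disambiguation, a wrapped measurement implies a ladder of candidate depths spaced one wrapping period apart, and candidates beyond the known maximal depth of the VOI at the given angle can be discarded immediately. A hypothetical sketch of that filter (the wrapping model and all numbers are assumptions):

```python
def unwrap_depth(wrapped_m, period_m, d_min_m, d_max_at_angle_m):
    """Enumerate the periodic depth candidates implied by a wrapped
    measurement and keep only those inside the known depth range of the
    volume of interest at this viewing angle."""
    candidates = []
    depth = wrapped_m
    while depth <= d_max_at_angle_m:
        if depth >= d_min_m:
            candidates.append(depth)
        depth += period_m
    return candidates  # ideally a single value when the VOI bound is tight

# Example: 0.4 m wrapped reading, 1.5 m period, VOI reaching only 2.5 m
print(unwrap_depth(0.4, 1.5, 0.3, 2.5))  # [0.4, 1.9] -- 3.4 m is ruled out
```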
(25) It will thus be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.