H04N13/271

Image sensor

An image sensor includes: first lines transferring a first clock having the same phase as that of modulated light in a first phase and a third clock having a phase difference of a ½ cycle from the phase of the modulated light in a second phase; second lines transferring the third clock in the first phase and the first clock in the second phase; third lines transferring a second clock having a phase difference of a ¼ cycle from the phase of the modulated light in the first phase and a fourth clock having a phase difference of a ¾ cycle from the phase of the modulated light in the second phase; fourth lines transferring the fourth clock in the first phase and the second clock in the second phase; and a pixel array including first pixels and second pixels that are alternately arranged in row and column directions.
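The four clocks at 0, ¼, ½, and ¾ of the modulation cycle are the classic four-phase demodulation scheme used in indirect time-of-flight sensing. As an illustrative sketch (not the patent's circuitry or pixel layout), four correlation samples taken with those clock offsets recover the reflected light's phase, and hence depth:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_depth(q0, q90, q180, q270, f_mod):
    """Four-phase indirect time-of-flight demodulation: q0..q270 are
    correlation samples taken with clocks offset by 0, 1/4, 1/2 and 3/4
    of the modulation cycle; f_mod is the modulation frequency in Hz."""
    phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)
```

At a 20 MHz modulation frequency the unambiguous range of this scheme is C / (2 · f_mod), roughly 7.5 m.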

Depth sensing method, 3D image generation method, 3D image sensor, and apparatus including the same

A three-dimensional (3D) image sensor module including: an oscillator configured to output a distortion-compensated oscillation frequency as a driving voltage of a sine wave biased with a bias voltage; an optical shutter configured to vary transmittance of reflective light reflected from a subject, according to the driving voltage, and to modulate the reflective light into at least two optical modulation signals having different phases; and an image generator configured to generate image data about the subject, the image data including depth information that is calculated based on a difference between the phases of the at least two optical modulation signals.
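A minimal sketch of two pieces the abstract names, assuming an ideal sine drive (the distortion compensation itself is omitted): the bias-voltage-offset sine driving voltage, and the conversion of a measured phase difference into depth:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def driving_voltage(t, f_mod, v_bias, v_amp, phase=0.0):
    """Sine-wave shutter driving voltage biased with a DC bias voltage."""
    return v_bias + v_amp * math.sin(2 * math.pi * f_mod * t + phase)

def phase_to_depth(delta_phi, f_mod):
    """Depth implied by the phase difference between modulation signals."""
    return C * delta_phi / (4 * math.pi * f_mod)
```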

Generating enhanced three-dimensional object reconstruction models from sparse set of object images

Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (together, an object reconstruction model) for a physical object, based on a sparse set of images of the object captured from a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is a spatially-varying bidirectional reflectance distribution function (SVBRDF) parameterized via multiple channels (e.g., diffuse albedo, surface roughness, specular albedo, and surface normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
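As a sketch of how such a per-vertex reflectance record could be consumed, the snippet below stores the four channels named in the abstract and shades a vertex with a stand-in Lambert-plus-Blinn-Phong model; the data-class fields mirror the abstract, but the shading math is an illustrative assumption, not the patent's BRDF evaluation:

```python
import math
from dataclasses import dataclass
from typing import Tuple

@dataclass
class VertexReflectance:
    # One SVBRDF sample per mesh vertex (channel names from the abstract).
    diffuse_albedo: Tuple[float, float, float]
    roughness: float
    specular_albedo: float
    normal: Tuple[float, float, float]  # unit surface normal

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(v, light_dir, view_dir):
    """Toy shading: Lambertian diffuse plus a Blinn-Phong lobe whose
    exponent shrinks as roughness grows. Directions must be unit vectors."""
    ndotl = max(0.0, _dot(v.normal, light_dir))
    h = [l + w for l, w in zip(light_dir, view_dir)]
    norm = math.sqrt(_dot(h, h)) or 1.0
    h = [c / norm for c in h]
    shininess = max(1.0, 2.0 / max(v.roughness, 1e-3) ** 2)
    spec = v.specular_albedo * max(0.0, _dot(v.normal, h)) ** shininess
    return tuple(c * ndotl + spec for c in v.diffuse_albedo)
```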

REAL-TIME MAPPING OF PROJECTIONS ONTO MOVING 3D OBJECTS

A “Concurrent Projector-Camera” uses an image projection device in combination with one or more cameras to enable various techniques that provide visually flicker-free projection of images or video while real-time image or video capture is occurring in that same space. The Concurrent Projector-Camera provides this projection in a manner that eliminates video feedback into the real-time image or video capture. More specifically, the Concurrent Projector-Camera dynamically combines temporal compression of the projector lighting's (or light-control points') on-state with temporal shifting of that on-state during each image frame projection to open a “capture time slot” for image capture during which no image is being projected. This capture time slot represents a tradeoff between image capture time and decreased brightness of the projected image. Examples of image projection devices include LED-LCD based projection devices, DLP-based projection devices using LED or laser illumination in combination with micromirror arrays, etc.
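The timing tradeoff can be sketched in a few lines: compressing the projector's on-state into a fraction of each frame and shifting it to the start of the frame opens an unlit capture slot at the end, at the cost of brightness proportional to the duty cycle. This is an illustrative model, not the patent's actual scheduler:

```python
def capture_slot(frame_ms, on_fraction):
    """Compress the on-state to on_fraction of the frame and shift it to
    the frame start; return (slot_start_ms, slot_length_ms, brightness)
    where brightness is relative to an always-on projector."""
    on_ms = frame_ms * on_fraction
    return on_ms, frame_ms - on_ms, on_fraction
```

For example, at 60 Hz (about 16.7 ms frames) a 75% duty cycle opens an unlit slot of roughly 4.2 ms while dimming the projected image to 75% brightness.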

HIGH DYNAMIC RANGE DEPTH GENERATION FOR 3D IMAGING SYSTEMS

High dynamic range depth generation is described for 3D imaging systems. One example includes receiving a first exposure of a scene having a first exposure level, determining a first depth map for the first exposure, receiving a second exposure of the scene having a second exposure level, determining a second depth map for the second exposure, and combining the first and second depth maps to generate a combined depth map of the scene.
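A minimal sketch of the combining step, assuming each depth map marks saturated or underexposed pixels with a sentinel value of 0; the abstract does not give the merge rule, so averaging where both readings are valid is a stand-in heuristic:

```python
def combine_depth(d_low, d_high, invalid=0.0):
    """Per-pixel merge of depth maps from two exposure levels: average
    where both are valid, otherwise keep whichever exposure produced a
    valid reading (the sentinel `invalid` marks missing depth)."""
    out = []
    for a, b in zip(d_low, d_high):
        if a != invalid and b != invalid:
            out.append(0.5 * (a + b))
        else:
            out.append(a if a != invalid else b)
    return out
```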

ELECTRO-OPTIC MODULATOR AND METHODS OF USING AND MANUFACTURING SAME FOR THREE-DIMENSIONAL IMAGING

Apparatuses, systems and methods for modulating returned light for acquisition of 3D data from a scene are described. A 3D imaging system includes a Fabry-Perot cavity having a first partially-reflective surface for receiving incident light and a second partially-reflective surface from which light exits. An electro-optic material is located within the Fabry-Perot cavity between the first and second partially-reflective surfaces. Transparent longitudinal electrodes or transverse electrodes produce an electric field within the electro-optic material. A voltage driver is configured to modulate, as a function of time, the electric field within the electro-optic material so that the incident light passing through the electro-optic material is modulated according to a modulation waveform. A light sensor receives modulated light that exits the second partially-reflective surface of the Fabry-Perot cavity and converts the light into electronic signals. Three-dimensional (3D) information regarding a scene-of-interest may be obtained from the electronic signals.
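The modulation mechanism can be illustrated with the Airy transmittance of an ideal lossless Fabry-Perot cavity: the applied field changes the electro-optic material's refractive index and hence the round-trip phase, sweeping the cavity on and off resonance. This is a textbook model, not the patent's specific design:

```python
import math

def fp_transmittance(round_trip_phase, finesse_coeff):
    """Airy transmittance T = 1 / (1 + F * sin^2(delta / 2)) of an ideal
    Fabry-Perot cavity; the voltage driver modulates delta via the
    electro-optic material, thereby modulating the transmitted light."""
    return 1.0 / (1.0 + finesse_coeff * math.sin(round_trip_phase / 2.0) ** 2)
```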

System for 3D Clothing Model Creation

A system for creating a model of an article of clothing or other wearable. The system includes: a mannequin or other model of at least a portion of a human form; a sensing device configured to scan the mannequin without the wearable to generate first scan information and to scan the surface of the wearable on the mannequin to generate second scan information; and a processor communicatively coupled to the sensing device to receive the first and second scan information, the processor configured to: generate point clouds from the scan information; align the point clouds; generate a plurality of slices along at least one longitudinal axis through the point clouds, each slice having a centroid along a corresponding longitudinal axis; and generate a table having a plurality of entries, each representing a distance between corresponding vertices of the pair of point clouds, the table representing the wearable.
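The slicing-and-centroid step can be sketched as follows, assuming points are (x, y, z) tuples and the longitudinal axis is z (index 2); the helper is illustrative, not the patent's implementation:

```python
def slice_centroids(points, axis=2, num_slices=4):
    """Partition a point cloud into equal-thickness slices along one
    longitudinal axis and return the centroid of each non-empty slice."""
    lo = min(p[axis] for p in points)
    hi = max(p[axis] for p in points)
    thickness = (hi - lo) / num_slices or 1.0
    buckets = [[] for _ in range(num_slices)]
    for p in points:
        i = min(int((p[axis] - lo) / thickness), num_slices - 1)
        buckets[i].append(p)
    return [tuple(sum(c) / len(b) for c in zip(*b)) for b in buckets if b]
```

Running the same slicing over the mannequin scan and the mannequin-plus-wearable scan yields paired centroids from which per-slice distances can be tabulated.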

VIRTUAL CUES FOR AUGMENTED-REALITY POSE ALIGNMENT

A method includes determining a current pose of an augmented reality device in a physical space, and visually presenting, via a display of the augmented reality device, an augmented-reality view of the physical space including a predetermined pose cue indicating a predetermined pose in the physical space and a current pose cue indicating the current pose in the physical space.
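As a sketch of how an application might decide when the current-pose cue coincides with the predetermined-pose cue, the check below compares position and yaw against tolerances; the pose representation and threshold values are assumptions for illustration, not part of the claim:

```python
import math

def pose_aligned(cur_pos, cur_yaw, target_pos, target_yaw,
                 pos_tol=0.05, yaw_tol_deg=5.0):
    """Return True when the device's current pose matches the
    predetermined pose within tolerance (positions in metres as
    (x, y, z) tuples, yaw in degrees with wrap-around handling)."""
    dist = math.dist(cur_pos, target_pos)
    yaw_err = abs((cur_yaw - target_yaw + 180.0) % 360.0 - 180.0)
    return dist <= pos_tol and yaw_err <= yaw_tol_deg
```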