Patent classifications
H04N13/271
Determining a predicted head pose time
Techniques for calculating a predicted head pose time for a display device are described herein. A request to start a frame is generated. A target finish time stamp associated with rendering the frame is calculated. A number of VSync periods from a last VSync to a target VSync is determined based on the target finish time stamp. A target VSync time stamp is calculated based on the number of VSync periods from the last VSync to the target VSync. The target VSync time stamp is compared to the target finish time stamp. The predicted head pose time is calculated based on the target VSync time stamp and a fixed platform offset.
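The sequence of steps in the abstract could be sketched as follows; the function, the 90 Hz refresh assumption, and the platform offset value are illustrative assumptions, not values from the patent.

```python
import math

VSYNC_PERIOD_MS = 11.1      # assumed 90 Hz display refresh period
PLATFORM_OFFSET_MS = 2.0    # assumed fixed platform offset

def predicted_head_pose_time(last_vsync_ms: float, target_finish_ms: float) -> float:
    """Predicted head pose time stamp (ms) for a frame expected to finish rendering
    at target_finish_ms."""
    # Number of VSync periods from the last VSync to the target VSync.
    periods = math.floor((target_finish_ms - last_vsync_ms) / VSYNC_PERIOD_MS)
    target_vsync_ms = last_vsync_ms + periods * VSYNC_PERIOD_MS
    # Compare the target VSync time stamp to the target finish time stamp;
    # if rendering would finish after the target VSync, slip one period.
    if target_finish_ms > target_vsync_ms:
        target_vsync_ms += VSYNC_PERIOD_MS
    # Predicted head pose time = target VSync + fixed platform offset.
    return target_vsync_ms + PLATFORM_OFFSET_MS
```

A frame finishing mid-period is thus aligned to the next VSync boundary before the offset is added.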
3D display system for camera monitoring system
A system in a vehicle for generating and displaying three-dimensional images may comprise a first imager having a first field of view; a second imager having a second field of view at least partially overlapping the first field of view, the second imager disposed a distance from the first imager; and an image signal processor in communication with the first and second imagers; wherein the image signal processor is configured to generate an image having a three-dimensional appearance from the data from the first and second imagers. The first and second imagers may be disposed on a vehicle. The first and second imagers may be configured to capture a scene; and the scene may be exterior to the vehicle.
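The abstract does not specify how the image signal processor fuses the two overlapping views; a red-cyan anaglyph is one common way to give a stereo pair a three-dimensional appearance on a conventional display, sketched below as an assumption.

```python
import numpy as np

def anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Fuse left/right RGB frames (H, W, 3 uint8) into a red-cyan anaglyph."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]     # red channel from the first (left) imager
    out[..., 1:] = right[..., 1:]  # green/blue channels from the second imager
    return out
```

Viewed through red-cyan glasses, each eye sees only its own imager's frame, producing the depth effect.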
Input parameter based image waves
A virtual wave creation system comprises an eyewear device that includes a frame, a temple connected to a lateral side of the frame, and a depth-capturing camera. Execution of programming by a processor configures the virtual wave creation system to generate, for each of multiple initial depth images, a respective warped wave image by applying, to the initial three-dimensional coordinates, a transformation function that is responsive to a selected input parameter. The virtual wave creation system creates a warped wave video including a sequence of the generated warped wave images. The virtual wave creation system presents, via an image display, the warped wave video.
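The abstract leaves the transformation function unspecified; a sinusoidal depth displacement scaled by the selected input parameter is one plausible sketch (the function name, waveform, and default wavelength are all assumptions).

```python
import numpy as np

def wave_transform(coords: np.ndarray, amplitude: float,
                   wavelength: float = 0.5) -> np.ndarray:
    """Apply a sinusoidal 'wave' displacement along z to (N, 3) xyz coordinates.

    amplitude plays the role of the selected input parameter.
    """
    out = coords.astype(float).copy()
    out[:, 2] += amplitude * np.sin(2 * np.pi * out[:, 0] / wavelength)
    return out
```

Applying this per depth image, then rendering the displaced coordinates, would yield the sequence of warped wave images described above.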
System and methods for augmenting surfaces within spaces with projected light
One variation of a method for augmenting surfaces within spaces with projected light includes: at a projector system during a first time period, projecting visual content onto nearby surfaces via a light projector integrated into the projector system and capturing a first scan of nearby surfaces, illuminated by the light projector, via an optical sensor integrated into the projector system; identifying a first space occupied by the projector system during the first time period based on features detected in the first scan; selecting a first augmented content source, from a first set of augmented content sources affiliated with the first space, associated with a first surface in the first space; articulating the light projector to locate the first surface in a field of view of the light projector; accessing a frame from the first augmented content source; and projecting the frame onto the first surface via the light projector.
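The content-selection step of the method above — picking, from the sources affiliated with the identified space, one associated with a given surface — might look like the following sketch; the data types and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ContentSource:
    name: str        # hypothetical identifier for a content feed
    surface_id: str  # surface in the space this source is associated with

def select_source(sources: List[ContentSource],
                  surface_id: str) -> Optional[ContentSource]:
    """Return the first augmented content source affiliated with the surface."""
    for src in sources:
        if src.surface_id == surface_id:
            return src
    return None
```

The projector system would then articulate to bring the selected surface into the light projector's field of view and project frames from the returned source.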
Method And Apparatus Of Adaptive Infrared Projection Control
Various examples with respect to adaptive infrared (IR) projection control for depth estimation in computer vision are described. A processor or control circuit of an apparatus receives data of an image based on sensing by one or more image sensors. The processor or control circuit also detects a region of interest (ROI) in the image. The processor or control circuit then adaptively controls a light projector with respect to projecting light toward the ROI.
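The text does not say how the projection is adapted once the ROI is detected; one plausible control policy, sketched below as an assumption, scales projector power with the fraction of the frame the ROI covers, clamped to a safe range.

```python
def ir_power_for_roi(roi_area_px: int, frame_area_px: int,
                     min_power: float = 0.1, max_power: float = 1.0) -> float:
    """Map the detected ROI's share of the frame to an IR projector power level.

    The linear policy and the clamp limits are illustrative assumptions.
    """
    fraction = roi_area_px / frame_area_px
    return min(max_power, max(min_power, fraction))
```

Concentrating emitted light on the ROI this way would trade illumination elsewhere for denser depth samples where they matter.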
MEDICAL OBSERVATION SYSTEM, MEDICAL OBSERVATION APPARATUS, AND MEDICAL OBSERVATION METHOD
A three-dimensional information generation unit generates a three-dimensional map (D(X, Y, Z)) of a surgical field from a surgical field image (K(x, y)) captured by an imaging apparatus. A region-of-interest setting unit (setting unit) sets at least one region of interest in a surgical field image that is captured at a predetermined timing. A detection region estimation unit estimates a relative position corresponding to the region of interest in a surgical field image that is captured at a certain timing different from the predetermined timing, on the basis of the three-dimensional map information on the region of interest. A parameter control unit adjusts imaging parameters of the imaging apparatus and a light source apparatus on the basis of the three-dimensional information and the pixel values of the surgical field image corresponding to the relative position of the region of interest estimated by the detection region estimation unit, and then captures a surgical field image.
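As an illustration of the parameter-control step, an exposure adjustment that moves the region of interest's mean pixel value toward a target brightness might look like this; the proportional control law and the target value are assumptions, not details from the abstract.

```python
def adjust_exposure(current_exposure_ms: float, roi_mean: float,
                    target_mean: float = 128.0) -> float:
    """Scale exposure time so the ROI's mean 8-bit brightness approaches the target."""
    # Guard against a fully dark ROI to avoid division by zero.
    return current_exposure_ms * (target_mean / max(roi_mean, 1.0))
```

Because the ROI's position is re-estimated from the three-dimensional map at each timing, the exposure keeps tracking the anatomy of interest even as the camera or field moves.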
METHOD AND APPARATUS FOR DEPTH-MAP ESTIMATION OF A SCENE
The method of determination of a depth map of a scene comprises generation of a distance map of the scene obtained by time of flight measurements, acquisition of two images of the scene from two different viewpoints, and stereoscopic processing of the two images taking into account the distance map. The generation of the distance map includes generation of distance histograms acquisition zone by acquisition zone of the scene, and the stereoscopic processing includes, for each region of the depth map corresponding to an acquisition zone, elementary processing taking into account the corresponding histogram.
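One way the per-zone histogram can guide the stereoscopic processing is by converting the dominant time-of-flight distance bin into a narrow disparity search window via the stereo relation disparity = f·B/Z; the sketch below (function name, parameters, and margin policy assumed) shows this idea.

```python
import numpy as np

def disparity_window(hist, bin_edges_m, focal_px: float, baseline_m: float,
                     margin_bins: int = 1):
    """Map a zone's ToF distance histogram to a (d_min, d_max) disparity window.

    hist has one count per distance bin; bin_edges_m has len(hist) + 1 edges.
    """
    peak = int(np.argmax(hist))
    lo = max(peak - margin_bins, 0)
    hi = min(peak + margin_bins + 1, len(hist))
    z_near = max(bin_edges_m[lo], 1e-3)   # guard against a zero near edge
    z_far = bin_edges_m[hi]
    # Nearer depths correspond to larger disparities: d = f * B / Z.
    return focal_px * baseline_m / z_far, focal_px * baseline_m / z_near
```

Restricting each region's correspondence search to this window reduces both matching cost and the chance of gross stereo errors.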