H04N13/271

Apparatus and method for generating a representation of a scene

An apparatus comprises a receiver (401) receiving a first image and associated first depth data captured by a first depth-sensing camera. A detector (405) detects an image position property for a fiducial marker in the first image, the fiducial marker representing a placement of a second depth-sensing image camera. A placement processor (407) determines a relative placement vector indicative of a placement of the second depth-sensing image camera relative to the first depth-sensing camera in response to the image position property and depth data of the first depth data for an image position of the fiducial marker. A second receiver (403) receives a second image and second depth data captured by the second depth-sensing image camera. A generator (409) generates the representation of at least part of the scene in response to a combination of at least the first image and the second image based on the relative placement vector.
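
A minimal sketch (not the patented method itself) of how the placement processor's geometry might work: the detected marker's pixel position and its depth sample are back-projected through a pinhole camera model into a 3D offset vector in the first camera's frame. The intrinsics `fx`, `fy`, `cx`, `cy` are hypothetical values, not from the abstract.

```python
def relative_placement_vector(u, v, z, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Back-project the fiducial marker's pixel (u, v) with depth z into a
    3D relative placement vector (X, Y, Z) in the first camera's frame.
    Intrinsics are illustrative assumptions (pinhole model)."""
    x = (u - cx) * z / fx   # horizontal offset
    y = (v - cy) * z / fy   # vertical offset
    return (x, y, z)        # depth sample gives the forward component
```

A marker seen at the image centre yields a purely forward vector; off-centre detections add lateral components scaled by depth.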

Systems and methods for determining three dimensional measurements in telemedicine application
11310480 · 2022-04-19

A system and method for measuring a depth or length of an area of interest of a telemedicine patient, comprising: a first image capturing device that captures a two-dimensional (2D) image or video of a region of interest of a patient; a second image capturing device that generates a three-dimensional (3D) point cloud of the region of interest of the patient; a rendering system that processes a unified view for both the first and second image capturing devices where the 2D image and 3D point cloud are generated and registered; and a remote measurement processing system that determines a depth or length between two points selected from the 2D image of the region of interest by identifying associated points in the 3D point cloud and performing a measurement using the identified associated points in the 3D point cloud.
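
A hypothetical sketch of the remote measurement step: the 2D-to-3D registration is modelled here as a plain lookup table from pixel coordinates to point-cloud points, and the measurement is the Euclidean distance between the two associated 3D points. The mapping structure is an assumption for illustration only.

```python
import math

def measure(p1_2d, p2_2d, registration):
    """Distance between the 3D points associated with two 2D selections.
    `registration` maps (u, v) pixel tuples to (x, y, z) cloud points;
    this flat-dict representation is an illustrative assumption."""
    a = registration[p1_2d]   # associated 3D point for first selection
    b = registration[p2_2d]   # associated 3D point for second selection
    return math.dist(a, b)    # Euclidean length in point-cloud units
```

Because the distance is taken in the 3D cloud rather than in the 2D image, the result is independent of perspective foreshortening in the 2D view.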

METHODS FOR REDUCING POWER CONSUMPTION OF A 3D IMAGE CAPTURE SYSTEM
20220030175 · 2022-01-27

A method for reducing power consumption of a 3D image capture system includes capturing 3D image data with the 3D image capture system while the 3D image capture system is in a first power state, detecting a power state change trigger, and switching from the first power state to a second power state based on the power state change trigger, wherein the 3D image capture system consumes less power in the second power state than in the first power state.
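A minimal sketch of the claimed state switch, written as a two-state machine. The state names, trigger values, and wattage figures are assumptions for illustration; the abstract only requires that the second state consume less power than the first.

```python
POWER_DRAW = {"full": 5.0, "low": 1.2}  # watts; illustrative figures only

def next_state(current, trigger):
    """Switch power state on a detected trigger (hypothetical trigger names)."""
    if current == "full" and trigger == "idle":
        return "low"            # second state: reduced power consumption
    if current == "low" and trigger == "activity":
        return "full"           # resume full-rate 3D capture
    return current              # unrecognized trigger: stay put
```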

Real-time mapping of projections onto moving 3D objects

A “Concurrent Projector-Camera” uses an image projection device in combination with one or more cameras to enable various techniques that provide visually flicker-free projection of images or video, while real-time image or video capture is occurring in that same space. The Concurrent Projector-Camera provides this projection in a manner that eliminates video feedback into the real-time image or video capture. More specifically, the Concurrent Projector-Camera dynamically combines projector lighting (or light-control point) on-state temporal compression with on-state temporal shifting during each image frame projection to open a “capture time slot” for image capture during which no image is being projected. This capture time slot represents a tradeoff between image capture time and decreased brightness of the projected image. Examples of image projection devices include LED-LCD based projection devices, DLP-based projection devices using LED or laser illumination in combination with micromirror arrays, etc.
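
The capture-slot tradeoff described above can be sketched numerically: compressing the projector's on-state within each frame opens a dark slot for the camera, and perceived brightness falls in proportion to the lost on-time. The millisecond values below are illustrative assumptions, not figures from the abstract.

```python
def capture_slot(frame_ms, slot_ms):
    """Return (on_time_ms, relative_brightness) after opening a capture
    slot of slot_ms within a frame of frame_ms (hypothetical timings)."""
    on_time = frame_ms - slot_ms          # projector on-state, compressed
    brightness = on_time / frame_ms       # fraction of original brightness
    return on_time, brightness
```

For example, a 4 ms slot inside a ~60 Hz (16 ms) frame leaves 12 ms of projection and 75% of the original brightness, which is the tradeoff the abstract describes.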

Detecting driving-relevant situations at a larger distance
11216672 · 2022-01-04

A method for detecting a relevant region in the surroundings of an ego vehicle, in which a situation exists which is relevant to the driving and/or safety of the ego vehicle, from measurement data of a sensor which observes at least a portion of the surroundings, the measurement data being discretized into pixels or voxels and/or suitably represented in some other way, the existence of the relevant situation being dependent on the presence of at least one characteristic object in the surroundings, and the resolution of the pixels, voxels and/or the other representation being insufficient for directly detecting the characteristic object, the measurement data being analyzed for the presence of a grouping of objects which contains the characteristic object, the resolution of the pixels, voxels and/or the other representation being sufficient for detecting the grouping. A region in which the grouping is detected is classified as a relevant region.
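
A hypothetical sketch of the grouping test: individual characteristic objects (say, traffic cones) are below sensor resolution, but a cluster of them is not. Here a region is classified relevant when enough detected objects fall within a small spatial spread; the count and spread thresholds are illustrative assumptions.

```python
def is_relevant_region(object_positions, min_count=3, max_spread=10.0):
    """Classify a region as relevant when a sufficiently tight grouping
    of detected objects is present (thresholds are illustrative)."""
    if len(object_positions) < min_count:
        return False                        # too few objects to form a grouping
    xs = [p[0] for p in object_positions]
    ys = [p[1] for p in object_positions]
    spread = max(max(xs) - min(xs), max(ys) - min(ys))
    return spread <= max_spread             # tight cluster => relevant region
```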

INFORMATION PROCESSING APPARATUS, IMAGE GENERATION METHOD, CONTROL METHOD, AND STORAGE MEDIUM

An information processing apparatus for a system generates a virtual viewpoint image based on image data obtained by performing imaging from a plurality of directions using a plurality of cameras. The information processing apparatus includes an obtaining unit configured to obtain a foreground image based on an object region including a predetermined object in a captured image for generating a virtual viewpoint image and a background image based on a region different from the object region in the captured image, wherein the obtained foreground image and the obtained background image have different frame rates, and an output unit configured to output the foreground image and the background image which are obtained by the obtaining unit and which are associated with each other.
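
A hypothetical sketch of the association step: since foreground images arrive at a higher frame rate than background images, each foreground frame can be paired with the most recent background frame at or before its timestamp. The timestamp representation is an assumption for illustration.

```python
def associate(fg_times, bg_times):
    """Pair each foreground timestamp with the latest background
    timestamp at or before it (None if no background frame yet).
    Timestamps as plain numbers are an illustrative assumption."""
    pairs = {}
    for t in fg_times:
        candidates = [b for b in bg_times if b <= t]
        pairs[t] = max(candidates) if candidates else None
    return pairs
```

With foreground frames at every tick and background frames at every other tick, consecutive foreground frames reuse the same background frame, which is the frame-rate mismatch the abstract describes.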