H04N13/271

3D sensor and method of monitoring a monitored zone
11512940 · 2022-11-29

A 3D sensor for monitoring a monitored zone is provided, wherein the 3D sensor has at least one light receiver for generating a received signal from received light from the monitored zone and has a control and evaluation unit that is configured to detect objects in the monitored zone by evaluating the received signal, to determine the shortest distance of the detected objects from at least one reference volume, and, for the determination of the respective shortest distance of a detected object, to read at least one distance calculated in advance from the reference volume out of a memory.
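A minimal sketch of the precomputed-distance idea the abstract describes: distances from every cell of a coarse voxel grid covering the monitored zone to the reference volume (assumed here to be an axis-aligned box) are computed once and stored, so the run-time shortest distance of a detected object is a table lookup. All names, the grid resolution, and the box shape are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

GRID = 16                             # cells per axis of the monitored zone (assumed)
ZONE = 4.0                            # monitored zone spans [0, 4) m per axis (assumed)
BOX_MIN = np.array([1.0, 1.0, 0.0])   # reference volume: box min corner (assumed)
BOX_MAX = np.array([3.0, 3.0, 2.0])   # reference volume: box max corner (assumed)

def _dist_to_box(p):
    """Euclidean distance from a point to the axis-aligned reference box."""
    d = np.maximum(BOX_MIN - p, 0.0) + np.maximum(p - BOX_MAX, 0.0)
    return float(np.linalg.norm(d))

# Offline step: fill the lookup table (the "memory" of the abstract) with one
# precomputed distance per voxel cell center.
cell = ZONE / GRID
table = np.empty((GRID, GRID, GRID))
for i in range(GRID):
    for j in range(GRID):
        for k in range(GRID):
            center = (np.array([i, j, k]) + 0.5) * cell
            table[i, j, k] = _dist_to_box(center)

def shortest_distance(points):
    """Run-time step: shortest distance of a detected object (a point cloud)
    obtained by table lookup instead of per-point geometry."""
    idx = np.clip((np.asarray(points) / cell).astype(int), 0, GRID - 1)
    return float(table[idx[:, 0], idx[:, 1], idx[:, 2]].min())
```

The trade-off is the usual one: the offline pass costs memory proportional to the grid size, while each run-time query drops to a constant-time lookup, quantized to the cell resolution.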
Under-display image sensor
11516374 · 2022-11-29

A device includes a display and a first light source configured to emit light, wherein the first light source is proximate to the display. The device further includes a first camera disposed behind the display, wherein the first camera is configured to detect reflections of the light emitted by the first light source. The first camera is further configured to capture a first image based at least in part on the reflections, wherein the reflections are partially occluded by the display. The device also includes a second camera proximate to the display, wherein the second camera is configured to capture a second image. In addition, the device includes a depth map generator configured to generate depth information about one or more objects in a field-of-view (FOV) of the first and second cameras based at least in part on the first and second images.
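A minimal sketch of the depth-map-generator idea: with two calibrated cameras, a pixel's depth follows from its disparity between the two images via depth = focal_length × baseline / disparity. The focal length and baseline figures are illustrative assumptions; the patent's generator may use any two-view depth method.

```python
FOCAL_PX = 800.0   # focal length in pixels (assumed)
BASELINE_M = 0.05  # distance between the two cameras in metres (assumed)

def depth_from_disparity(disparity_px):
    """Convert a per-pixel disparity (in pixels) between the two camera
    images into a depth in metres, using the standard stereo relation."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE_M / disparity_px
```

Note the inverse relation: nearby objects produce large disparities and small depths, so depth resolution degrades with distance.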

Composite imaging systems using a focal plane array with in-pixel analog storage elements
11514594 · 2022-11-29

Various embodiments of a 3D+ imaging system include a focal plane array with in-pixel analog storage elements. In embodiments, an analog pixel circuit is disclosed for use with an array of photodetectors in a sub-frame composite imaging system. In embodiments, a composite imaging system is capable of determining per-pixel depth, white point, and black point for a sensor and/or a scene that is stationary or in motion. Applications of the 3D+ imaging system include advanced imaging for vehicles as well as industrial and smartphone imaging. An extended dynamic range imaging technique is used to reproduce a greater dynamic range of luminosity.
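An illustrative sketch of the extended-dynamic-range idea the abstract mentions: in-pixel analog storage lets the sensor hold several sub-frame exposures, which can then be merged per pixel so that bright regions come from the short exposure and dark regions from the long one. The merge rule and the 8-bit saturation level are assumptions for illustration, not the patent's circuit.

```python
SATURATED = 255  # an 8-bit pixel at this value carries no usable information (assumed)

def merge_exposures(short_exp, long_exp, ratio):
    """Merge two sub-frame exposures into one extended-range value per pixel.

    short_exp, long_exp: lists of raw pixel values (0..255)
    ratio: exposure-time ratio long/short, used to rescale the short frame
    """
    merged = []
    for s, l in zip(short_exp, long_exp):
        if l < SATURATED:
            merged.append(float(l))          # long exposure still valid here
        else:
            merged.append(float(s) * ratio)  # saturated: rescale short exposure
    return merged
```

Because the short-exposure values are multiplied by the exposure ratio, the merged frame spans a range the sensor could not capture in a single readout.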

Head mounted display

A head mounted display (HMD) is provided. The HMD includes a housing and a view port of the housing. The view port has a screen for rendering an augmented reality scene. Included is a communications device for exchanging streaming data over a network. A depth camera integrated in the housing and oriented to capture depth data of an environment in front of the housing is included. A processor is configured to use the depth data captured by the depth camera to identify spatial positioning of real objects in the environment. A real object is rendered into the augmented reality scene, and the real object is tracked so that augmented reality objects are inserted in coordination with movements of the real object shown in the augmented reality scene. The real object captured by the depth camera is in the environment where a user wearing the HMD is located.
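A toy sketch of placing augmented reality objects "in coordination with movements of the real object": the AR object is anchored to the tracked real object at a constant offset, so each new depth-camera position of the real object yields the AR object's position. Plain 3-vectors stand in for full 6-DoF poses; this is an illustrative assumption.

```python
def anchor_ar_object(real_positions, offset):
    """Given successive (x, y, z) positions of a tracked real object, return
    the positions of an AR object kept at a constant offset from it."""
    return [tuple(r + o for r, o in zip(pos, offset)) for pos in real_positions]
```

For example, an AR label held one unit above a tracked object follows every movement of that object frame by frame.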

Head mounted display and method

A system for displaying a mobile device screen comprises a head mounted display for displaying a first content to a user, a video camera mounted on the head mounted display, the video camera operable to capture a video image of a scene in front of the user, a region detection processor operable to detect a region of the captured video image comprising a mobile device screen, and an image processor operable to replace a corresponding region of the displayed first content in the head mounted display with the detected region of the captured video image comprising the mobile device screen.
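A minimal sketch of the replacement step the abstract describes: once the region of the captured video frame containing the mobile device screen is known (here given directly as a bounding box rather than produced by the region detection processor), the corresponding region of the HMD's displayed content is overwritten with that captured region. Array shapes and the bounding-box form are assumptions.

```python
import numpy as np

def replace_region(displayed, captured, box):
    """Copy the bounding-box region of `captured` into `displayed`.

    displayed, captured: H x W (x C) frames of equal shape
    box: (top, left, bottom, right) with exclusive bottom/right
    """
    t, l, b, r = box
    out = displayed.copy()
    out[t:b, l:r] = captured[t:b, l:r]  # overwrite only the detected region
    return out
```

Copying the same pixel coordinates from both frames assumes the displayed content and the camera image are already registered; a real system would warp the captured region into display coordinates first.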

Methods and systems for creating virtual and augmented reality

Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The system may comprise an image capturing device to capture one or more images, the one or more images corresponding to a field of view of a user of a head-mounted augmented reality device, and a processor communicatively coupled to the image capturing device to extract a set of map points from the one or more images, to identify a set of sparse points and a set of dense points from the extracted set of map points, and to perform a normalization on the set of map points.
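An illustrative sketch of the two processor operations the abstract names: splitting extracted map points into "sparse" and "dense" sets (here by a simple neighbour-count threshold, an assumption) and normalizing the point set (here to zero mean and unit maximum radius, also an assumption).

```python
import numpy as np

def split_sparse_dense(points, radius=1.0, min_neighbours=2):
    """Points with at least `min_neighbours` other points within `radius`
    form the dense set; the remainder form the sparse set."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    counts = (d < radius).sum(axis=1) - 1   # exclude the point itself
    dense = counts >= min_neighbours
    return pts[~dense], pts[dense]

def normalize(points):
    """Shift points to zero mean and scale so the farthest point has radius 1."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)
    return pts / np.linalg.norm(pts, axis=1).max()
```

The pairwise-distance matrix makes this O(n²) in the number of map points; a real mapping pipeline would use a spatial index instead, but the split-then-normalize structure is the same.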