H04N13/271

Time-Of-Flight camera system

The invention relates to a TOF camera system comprising several cameras, at least one of the cameras being a TOF camera, wherein the cameras are assembled on a common substrate and are imaging the same scene simultaneously and wherein at least two cameras are driven by different driving parameters.

Apparatus and method for processing a depth map

An apparatus for processing a depth map comprises a receiver (203) receiving an input depth map. A first processor (205) generates a first processed depth map by processing pixels of the input depth map in a bottom to top direction. The processing of a first pixel comprises determining a depth value for the first pixel for the first processed depth map as the furthest backwards depth value of: a depth value for the first pixel in the input depth map, and a depth value determined in response to depth values in the first processed depth map for a first set of pixels being below the first pixel. The approach may improve the consistency of depth maps, and in particular for depth maps generated by combining different depth cues.
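The bottom-to-top pass described above can be sketched in a few lines. This is a minimal illustration only: it assumes a convention where larger values mean further back (the abstract does not fix the convention), and it takes the single pixel directly below as the "first set of pixels"; the patent leaves that set, and the combining function, open.

```python
import numpy as np

def process_depth_map(depth: np.ndarray) -> np.ndarray:
    """Bottom-to-top pass over a depth map (rows indexed top to bottom).

    Each output pixel takes the furthest-backwards of (a) its own input
    depth and (b) the already-processed depth of the pixel below it.
    Assumes larger values = further back, so 'furthest backwards' is max().
    """
    out = depth.astype(float).copy()
    h, w = depth.shape
    # The bottom row has no pixels below it and is copied unchanged;
    # rows above it are then processed moving upwards.
    for y in range(h - 2, -1, -1):
        for x in range(w):
            below = out[y + 1, x]  # processed depth of the pixel below
            out[y, x] = max(float(depth[y, x]), below)
    return out
```

Under this convention the pass pushes each pixel at least as far back as the processed pixels beneath it, which is one way to smooth out inconsistencies between depth cues column by column.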

Device for determining the attentiveness of a driver of a vehicle, on-board system comprising such a device, and associated method

A device (10) for determining a state of attentiveness of a driver of a vehicle (1) is disclosed. The device includes an image capture unit (11) onboard said vehicle (1), said image capture unit (11) being suitable for capturing at least one image of a detection area (D) located in said vehicle (1), and an image processing unit (15) suitable for receiving said captured image and programmed to determine the state of attentiveness of the driver (4) according to the detection of the presence of a distracting object in one of the hands of the driver (4), said hand being located in the detection area (D).

Information processing apparatus, image generation method, control method, and storage medium

An information processing apparatus for a system generates a virtual viewpoint image based on image data obtained by performing imaging from a plurality of directions using a plurality of cameras. The information processing apparatus includes an obtaining unit configured to obtain a foreground image based on an object region including a predetermined object in a captured image for generating a virtual viewpoint image and a background image based on a region different from the object region in the captured image, wherein the obtained foreground image and the obtained background image have different frame rates, and an output unit configured to output the foreground image and the background image which are obtained by the obtaining unit and which are associated with each other.
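The abstract says only that the two streams are "associated with each other"; it does not specify how. One plausible scheme, sketched here purely as an illustration, is to pair each (higher-rate) foreground frame with the most recent background frame captured at or before its timestamp:

```python
from bisect import bisect_right

def associate(fg_times, bg_times):
    """Pair each foreground timestamp with the latest background
    timestamp at or before it (hypothetical association scheme;
    bg_times must be sorted ascending).
    """
    pairs = []
    for t in fg_times:
        i = bisect_right(bg_times, t) - 1  # last bg frame not after t
        if i >= 0:
            pairs.append((t, bg_times[i]))
    return pairs
```

With foreground frames at, say, 60 fps and background frames at a few fps, every foreground frame reuses the nearest preceding background frame, which is consistent with outputting the two images in associated form.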

Apparatus and methods for three-dimensional sensing

A three-dimensional (3D) sensing apparatus together with a projector subassembly is provided. The 3D sensing apparatus includes two cameras, which may be configured to capture ultraviolet and/or near-infrared light. The 3D sensing apparatus may also contain an optical filter and one or more computing processors that signal simultaneous capture by the two cameras and process the captured images into depth data. The projector subassembly of the 3D sensing apparatus includes a laser diode, one or more optical elements, and a photodiode that are usable to enable 3D capture.

Image sensors
11171173 · 2021-11-09

Image sensors are provided. Image sensors may include unit pixels arranged in a first direction and a second direction crossing the first direction. Each of the unit pixels may include first and second floating diffusion regions and first and second photo gate electrodes between the first and second floating diffusion regions. The unit pixels may include a first unit pixel, a second unit pixel, and a third unit pixel sequentially arranged. The first floating diffusion region of the second unit pixel may be between the first photo gate electrode of the first unit pixel and the first photo gate electrode of the second unit pixel, and the second floating diffusion region of the second unit pixel may be between the second photo gate electrode of the second unit pixel and the second photo gate electrode of the third unit pixel.