H04N13/271

THREE-DIMENSIONAL CAMERA SYSTEM
20230005170 · 2023-01-05

A camera system. In some embodiments, the camera system includes a first laser, a camera, and a processing circuit connected to the first laser and to the camera. The first laser may be steerable, and the camera may include a pixel including a photodetector and a pixel circuit, the pixel circuit including a first time-measuring circuit.
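A pixel-level time-measuring circuit typically times the round trip of a laser pulse. As a rough illustration only (not the patented circuit), the measured round-trip time maps to distance as:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(t_seconds: float) -> float:
    """Convert a photon round-trip time, as measured by a per-pixel
    time-measuring circuit, into a distance in metres.

    The light travels to the target and back, so the one-way
    distance is half the round-trip path length.
    """
    return SPEED_OF_LIGHT * t_seconds / 2.0
```

For example, a 10 ns round trip corresponds to a target roughly 1.5 m away.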

Method and apparatus of adaptive infrared projection control

A processor or control circuit of an apparatus receives data of an image based on sensing by one or more image sensors. The processor or control circuit also detects a region of interest (ROI) in the image. The processor or control circuit then adaptively controls a light projector with respect to projecting light toward the ROI.
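One way to "adaptively control a light projector with respect to the ROI" is to steer the projector toward the ROI centre. The sketch below is a hypothetical geometric mapping (the function name, pinhole-style angular model, and field-of-view parameter are assumptions, not taken from the patent):

```python
def aim_projector(roi, image_w, image_h, fov_deg=60.0):
    """Map an ROI (x, y, w, h) in image coordinates to yaw/pitch
    steering angles for a projector whose field of view is centred
    on the image centre.

    Returns (yaw, pitch) in degrees, where (0, 0) points at the
    image centre.
    """
    x, y, w, h = roi
    cx = x + w / 2  # ROI centre, image coordinates
    cy = y + h / 2
    yaw = ((cx / image_w) - 0.5) * fov_deg
    pitch = ((cy / image_h) - 0.5) * fov_deg
    return yaw, pitch
```

An ROI centred in the frame yields zero steering; an ROI in the right half yields a positive yaw.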

Multichannel, multi-polarization imaging for improved perception

In one embodiment, a method includes accessing first image data generated by a first image sensor having a first filter array that has a first filter pattern. The first filter pattern includes a number of first filter types. The method also includes accessing second image data generated by a second image sensor having a second filter array that has a second filter pattern different from the first filter pattern. The second filter pattern includes a number of second filter types, and the second filter types and the first filter types have at least one filter type in common. The method also includes determining a correspondence between one or more first pixels of the first image data and one or more second pixels of the second image data based on a portion of the first image data associated with the filter type in common.
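The shared filter channel gives the two sensors directly comparable intensities, so pixel correspondence can be found by standard block matching on that channel. A minimal 1-D sketch, assuming rectified rows and a sum-of-absolute-differences cost (this is a generic stereo-matching illustration, not the patented method):

```python
import numpy as np

def best_disparity(left: np.ndarray, right: np.ndarray, x: int,
                   patch: int = 2, max_disp: int = 16) -> int:
    """Find the horizontal offset of pixel `x` in `left` (the common
    filter channel of sensor 1) within `right` (the same channel of
    sensor 2) by minimising the sum of absolute differences over a
    small window of half-width `patch`."""
    lo, hi = x - patch, x + patch + 1
    ref = left[lo:hi]
    best, best_cost = 0, float("inf")
    for d in range(max_disp):
        if lo - d < 0:
            break  # candidate window would fall off the row
        cost = np.abs(ref - right[lo - d:hi - d]).sum()
        if cost < best_cost:
            best, best_cost = d, cost
    return best
```

In practice the matched disparities on the common channel anchor correspondences for the remaining, non-shared filter channels.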
Time-of-flight image sensor resolution enhancement and increased data robustness using a binning module
11570424 · 2023-01-31

A time-of-flight (ToF) image sensor system includes a pixel array, where each pixel of the pixel array is configured to receive a reflected modulated light signal and to demodulate the reflected modulated light signal to generate an electrical signal; a plurality of analog-to-digital converters (ADCs), where each ADC is coupled to at least one assigned pixel of the pixel array and is configured to convert a corresponding electrical signal generated by the at least one assigned pixel into an actual pixel value; and a binning circuit coupled to the plurality of ADCs and configured to generate at least one interpolated pixel, where the binning circuit is configured to generate each of the at least one interpolated pixel based on actual pixel values corresponding to a different pair of adjacent pixels of the pixel array, each of the at least one interpolated pixel having a virtual pixel value.
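A virtual pixel derived from a pair of adjacent actual pixels is, in the simplest case, their mean. The sketch below illustrates that binning rule along one row; the averaging choice is an assumption for illustration, not necessarily the circuit's exact interpolation:

```python
def interpolate_row(actual: list[float]) -> list[float]:
    """Insert one virtual pixel between each pair of adjacent actual
    pixel values, doubling the effective resolution of the row.

    Each virtual pixel value is the mean of its two neighbours
    (one plausible binning rule).
    """
    out = []
    for a, b in zip(actual, actual[1:]):
        out.append(a)                # actual pixel value
        out.append((a + b) / 2.0)    # virtual pixel between a and b
    out.append(actual[-1])           # last actual pixel has no right neighbour
    return out
```

Because each virtual value blends two independent measurements, outliers in a single pixel are also damped, which is the "increased data robustness" the title refers to.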

Selective power efficient three-dimensional imaging

An imaging method includes acquiring one or more passive light images of a scene. A region of interest in the scene is identified based on the one or more passive light images. One or more illumination zones of a plurality of illumination zones that collectively cover the region of interest are determined. Each illumination zone is sized according to active illumination emitted from a steerable illumination source. For a determined illumination zone of the one or more illumination zones, the illumination zone is individually illuminated with the active illumination from the steerable illumination source. For a pixel of a sensor array that maps to the illumination zone, a depth value of an object locus in the scene reflecting the active illumination back to the pixel is determined.
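Selecting the zones that cover the ROI amounts to intersecting an axis-aligned rectangle with a regular grid. A minimal sketch, assuming fixed-size rectangular zones tiling the field of view (the grid layout is an assumption for illustration):

```python
def zones_covering_roi(roi, zone_w, zone_h):
    """Return (row, col) indices of the fixed-size illumination zones
    that overlap an ROI given as (x, y, w, h) in sensor coordinates.

    Only these zones need to be actively illuminated, which is the
    power saving the method targets.
    """
    x, y, w, h = roi
    col0, col1 = x // zone_w, (x + w - 1) // zone_w
    row0, row1 = y // zone_h, (y + h - 1) // zone_h
    return [(r, c)
            for r in range(row0, row1 + 1)
            for c in range(col0, col1 + 1)]
```

The steerable source then visits only the returned zones, one at a time, while the rest of the scene stays unilluminated.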

Encoding apparatus and encoding method, decoding apparatus and decoding method
11716487 · 2023-08-01

There is provided an encoding apparatus, an encoding method, a decoding apparatus, and a decoding method that make it possible to acquire two-dimensional image data of a viewpoint corresponding to a predetermined display image generation method and depth image data, without depending upon the viewpoint used at the time of image pickup. A conversion unit generates, from three-dimensional data of an image pickup object, two-dimensional image data of a plurality of viewpoints corresponding to a predetermined display image generation method and depth image data indicative of a position of each of pixels in a depthwise direction of the image pickup object. An encoding unit encodes the two-dimensional image data and the depth image data generated by the conversion unit. A transmission unit transmits the two-dimensional image data and the depth image data encoded by the encoding unit. The present disclosure can be applied, for example, to an encoding apparatus and so forth.
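The conversion unit's core step is rendering the 3-D data into a per-viewpoint depth image. As a rough sketch of that step only (a generic pinhole projection with a z-buffer, not the disclosed conversion unit), assuming points already expressed in the chosen viewpoint's camera coordinates:

```python
import numpy as np

def render_depth(points, width, height, f):
    """Project 3-D points (x, y, z) in camera coordinates, z > 0,
    into a depth image with a pinhole model of focal length `f`
    (pixels), keeping the nearest depth per pixel (z-buffer).

    Unhit pixels remain infinity.
    """
    depth = np.full((height, width), np.inf)
    for x, y, z in points:
        if z <= 0:
            continue  # behind the virtual camera
        u = int(f * x / z + width / 2)
        v = int(f * y / z + height / 2)
        if 0 <= u < width and 0 <= v < height:
            depth[v, u] = min(depth[v, u], z)
    return depth
```

Repeating this for each of the plurality of viewpoints yields the 2-D image and depth pairs that the encoding unit then compresses for transmission.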
