H04N13/271

Multi-baseline camera array system architectures for depth augmentation in VR/AR applications

Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, and uses a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, a baseline distance between cameras in the near-field sub-array is less than a baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination light source for use in computing depth maps.
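The baseline/accuracy trade-off the abstract relies on follows from the pinhole-stereo depth model, where depth error grows with the square of distance and shrinks with baseline. A minimal sketch, assuming an illustrative focal length and baselines (none are specified in the abstract):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from stereo disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px: float, baseline_m: float, depth_m: float,
                disparity_err_px: float = 1.0) -> float:
    """Depth uncertainty for a one-pixel disparity error: dZ = Z^2 * e / (f * B)."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

f = 1000.0  # focal length in pixels (assumed)
# At 10 m, a 0.30 m far-field baseline gives far less depth error
# than a 0.05 m near-field baseline, motivating the two sub-arrays.
near_err = depth_error(f, 0.05, 10.0)
far_err = depth_error(f, 0.30, 10.0)
assert far_err < near_err
```

This is why the far-field sub-array uses the wider baseline: disparity for distant objects is small, so a longer baseline keeps it measurable.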

Buried utility locator ground tracking apparatus, systems, and methods

Apparatus, systems, and methods are disclosed for utility locating with tracking of movement over the ground or other surfaces using a dodecahedral antenna array and a stereo-optical ground tracker having two or more spaced-apart cameras and an associated processing element to detect ground features in images from the cameras and determine tracking parameters based on the positions of the detected ground features.
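The "tracking parameters from detected ground features" step can be sketched as estimating planar motion from the displacement of matched features between successive frames. A hedged sketch; feature detection and matching are assumed to happen elsewhere, and the coordinates below are illustrative:

```python
def estimate_translation(prev_pts, curr_pts):
    """Mean displacement (pixels) of matched ground features
    between two frames; the locator moves opposite to the
    apparent feature motion."""
    n = len(prev_pts)
    dx = sum(c[0] - p[0] for p, c in zip(prev_pts, curr_pts)) / n
    dy = sum(c[1] - p[1] for p, c in zip(prev_pts, curr_pts)) / n
    return dx, dy

prev = [(10.0, 20.0), (30.0, 40.0)]
curr = [(12.0, 21.0), (32.0, 41.0)]
shift = estimate_translation(prev, curr)  # → (2.0, 1.0)
```

With stereo cameras, per-feature depth would additionally convert this pixel displacement into metric ground motion.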

Camera module and depth information extraction method therefor

A camera module according to one embodiment of the present invention comprises: a lighting unit for outputting an output light signal emitted toward an object; a lens unit including an infrared (IR) filter and at least one lens arranged on the IR filter, and condensing an input light signal reflected from the object; a tilting unit for shifting the optical path of the input light signal by controlling the tilt of the IR filter; an image sensor unit for generating an electric signal from the input light signal condensed by the lens unit and shifted by the tilting unit; an image control unit for extracting depth information of the object by using a phase difference between the output light signal and the input light signal received by the image sensor unit; and a detection unit for detecting tilt information of the IR filter and providing the tilt information of the IR filter to the image control unit.
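The phase-difference depth extraction described above is the standard indirect time-of-flight relation: depth is proportional to the measured phase shift between the emitted and received modulated light. A minimal sketch, with an assumed modulation frequency (the abstract does not give one):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_rad: float, f_mod_hz: float) -> float:
    """Depth from phase shift: d = c * phi / (4 * pi * f_mod).
    The factor of 4*pi (not 2*pi) halves the round-trip distance."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

# A pi/2 phase shift at 20 MHz modulation corresponds to roughly 1.87 m.
d = tof_depth(math.pi / 2, 20e6)
```

The unambiguous range at a given modulation frequency is c / (2 * f_mod), which is why practical ToF modules often combine multiple frequencies.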

Methods and Apparatus for Supporting Content Generation, Transmission and/or Playback
20220191452 · 2022-06-16 ·

Methods and apparatus for supporting the capture of images of surfaces of an environment visible from a default viewing position and capturing images of surfaces not visible from the default viewing position, e.g., occluded surfaces, are described. Occluded and non-occluded image portions are packed into one or more frames and communicated to a playback device for use as textures which can be applied to a model of the environment where the images were captured. An environmental model includes a model of surfaces which are occluded from view from a default viewing position but which may be viewed if the user shifts the user's viewing location. Occluded image content can be incorporated directly into a frame that also includes non-occluded image data, or sent in frames of a separate, e.g., auxiliary, content stream that is multiplexed with the main content stream which communicates image data corresponding to non-occluded environmental portions.
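The packing of occluded and non-occluded portions into one frame can be sketched as an atlas layout problem. The layout below (main image on the left, occluded patches stacked in a strip on the right) is an illustrative assumption, not the patented packing scheme:

```python
def pack_frame(main_w: int, main_h: int, occluded_patches):
    """Pack a main (non-occluded) image plus occluded patches
    (each given as (w, h)) into one frame. Returns the packed
    frame size and an offset for each patch."""
    strip_w = max((w for w, _ in occluded_patches), default=0)
    placements, y = {}, 0
    for i, (w, h) in enumerate(occluded_patches):
        placements[i] = (main_w, y)  # occluded content to the right of the main image
        y += h
    return main_w + strip_w, max(main_h, y), placements

w, h, offsets = pack_frame(1920, 1080, [(256, 256), (256, 128)])
```

A playback device receiving such a frame would cut out each patch at its recorded offset and apply it as a texture to the corresponding occluded surface of the environmental model.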

SYSTEM AND APPARATUS FOR CO-REGISTRATION AND CORRELATION BETWEEN MULTI-MODAL IMAGERY AND METHOD FOR SAME
20220191395 · 2022-06-16 ·

The present disclosure provides an image capturing device that captures images using a first sensor that includes a first imaging modality, a second sensor that includes the first imaging modality, and a third sensor that includes a second imaging modality. A controller is connected with the first sensor, the second sensor, and the third sensor, wherein the controller registers an image captured by the first sensor or the second sensor to an image captured by the third sensor.
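Registering one sensor's image to another's frame is commonly done by mapping coordinates through an estimated transform. A hedged sketch using a 2x3 affine model; the abstract does not specify the registration model, and the matrix values here are illustrative:

```python
def apply_affine(pt, m):
    """Map a point (x, y) through a 2x3 affine matrix
    [[a, b, tx], [c, d, ty]] into the target sensor's coordinates."""
    x, y = pt
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Hypothetical transform registering a second-modality (e.g. thermal)
# pixel into the first-modality sensor's coordinate frame:
shift_and_scale = [[2.0, 0.0, 10.0], [0.0, 2.0, -5.0]]
mapped = apply_affine((3.0, 4.0), shift_and_scale)  # → (16.0, 3.0)
```

In practice the transform would be estimated from corresponding points or calibration targets visible to both modalities.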

System for hand pose detection

A method for hand pose identification in an automated system includes providing depth map data of a hand of a user to a first neural network trained to classify features corresponding to a joint angle of a wrist in the hand to generate a first plurality of activation features and performing a first search in a predetermined plurality of activation features stored in a database in the memory to identify a first plurality of hand pose parameters for the wrist associated with predetermined activation features in the database that are nearest neighbors to the first plurality of activation features. The method further includes generating a hand pose model corresponding to the hand of the user based on the first plurality of hand pose parameters and performing an operation in the automated system in response to input from the user based on the hand pose model.
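The nearest-neighbor lookup at the core of the method can be sketched as a distance search over stored activation-feature vectors, returning the hand pose parameters of the closest database entries. The database contents and feature dimensionality below are illustrative assumptions:

```python
import math

def nearest_pose(query, database, k=1):
    """database: list of (activation_vector, pose_params) pairs.
    Returns the pose parameters of the k entries whose activation
    vectors are nearest (Euclidean distance) to the query."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(database, key=lambda entry: dist(entry[0], query))
    return [pose for _, pose in ranked[:k]]

db = [([0.0, 1.0], {"wrist_deg": 10}),
      ([1.0, 0.0], {"wrist_deg": 45})]
best = nearest_pose([0.9, 0.1], db)  # → [{"wrist_deg": 45}]
```

The retrieved pose parameters would then seed the hand pose model that drives the automated system's response, as the abstract describes.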