Patent classifications
H04N13/271
Image sensors and sensing methods to obtain time-of-flight and phase detection information
Indirect time-of-flight (i-ToF) image sensor pixels, i-ToF image sensors including such pixels, stereo cameras including such image sensors, and sensing methods to obtain i-ToF detection and phase detection information using such image sensors and stereo cameras. An i-ToF image sensor pixel may comprise a plurality of sub-pixels, each sub-pixel including a photodiode; a single microlens covering the plurality of sub-pixels; and a read-out circuit for extracting i-ToF phase signals of each sub-pixel individually.
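Because the phase signals are read out per sub-pixel, the same pixel can yield both a time-of-flight depth estimate and a phase-detection cue. The sketch below is a minimal illustration rather than the claimed circuit: it assumes a 2x2 sub-pixel layout under one microlens and the standard four-phase (0°, 90°, 180°, 270°) correlation samples; the function names and the left/right PDAF contrast measure are assumptions.

```python
# Minimal sketch (not the patented read-out): i-ToF depth from four-phase
# correlation samples, plus a phase-detection (PDAF) cue from sub-pixel imbalance.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def itof_depth(q0, q90, q180, q270, f_mod):
    """Standard 4-phase i-ToF depth estimate from correlation samples."""
    phase = np.arctan2(q270 - q90, q0 - q180)      # wrapped phase, -pi..pi
    phase = np.mod(phase, 2 * np.pi)               # 0..2*pi
    return C * phase / (4 * np.pi * f_mod)         # metres, within ambiguity range

def pdaf_signal(subpix):
    """Left/right sub-pixel imbalance used as a phase-detection cue.

    subpix: array of shape (H, W, 2, 2) holding total charge per sub-pixel.
    Returns a per-pixel left-minus-right contrast in [-1, 1].
    """
    left = subpix[..., :, 0].sum(axis=-1)
    right = subpix[..., :, 1].sum(axis=-1)
    return (left - right) / np.maximum(left + right, 1e-9)

# Example with synthetic data: a 4x4 sensor, 100 MHz modulation.
rng = np.random.default_rng(0)
q = rng.uniform(100, 200, size=(4, 4, 4))          # Q0, Q90, Q180, Q270 per pixel
depth = itof_depth(q[..., 0], q[..., 1], q[..., 2], q[..., 3], f_mod=100e6)
subpix = rng.uniform(0.0, 1.0, size=(4, 4, 2, 2))
print(depth.shape, pdaf_signal(subpix).shape)
```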
Methods, systems, and media for generating an immersive light field video with a layered mesh representation
Mechanisms for generating compressed images are provided. More particularly, methods, systems, and media for capturing, reconstructing, compressing, and rendering view-dependent immersive light field video with a layered mesh representation are provided.
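As a rough illustration only, a layered-mesh frame can be thought of as a small set of depth-ordered mesh layers, each carrying geometry and an RGBA texture atlas, composited back to front at render time. The container and compositing rule below are assumptions made for the sketch, not the patent's actual representation or compression scheme.

```python
# Minimal sketch of a layered-mesh container: depth-ordered layers with geometry,
# UVs, and an RGBA atlas, plus premultiplied-alpha compositing of per-layer renders.
from dataclasses import dataclass
import numpy as np

@dataclass
class MeshLayer:
    vertices: np.ndarray   # (V, 3) positions
    faces: np.ndarray      # (F, 3) vertex indices
    uvs: np.ndarray        # (V, 2) texture coordinates into the layer's atlas
    texture: np.ndarray    # (H, W, 4) RGBA atlas; alpha allows soft layer edges

@dataclass
class LayeredMeshFrame:
    layers: list           # MeshLayer objects ordered far-to-near

    def composite(self, rendered_layers):
        """Alpha-over compositing of far-to-near per-layer renders.

        rendered_layers: list of (H, W, 4) float arrays with premultiplied alpha.
        """
        out = np.zeros_like(rendered_layers[0], dtype=np.float32)
        for src in rendered_layers:
            a = src[..., 3:4]
            out = src + out * (1.0 - a)   # standard premultiplied "over"
        return out
```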
CAMERA SYSTEM IN SITUATION BUILT-IN-TEST
An autonomous or semi-autonomous vehicle camera system including a camera having a field of view, wherein the camera is operable to receive optical information in the field of view; a display located in the camera's field of view; and a controller in electrical connection with the camera, wherein the controller is operable to conduct a Built-in-Test. The Built-in-Test is configured to present one or more images in the camera's field of view via the display to determine functionality of the camera system.
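In outline, such a Built-in-Test can present known patterns on the in-view display, capture them with the camera, and compare the result against the expected image. The sketch below assumes hypothetical show_on_display and capture_frame hooks, same-size captured and expected frames, and a simple correlation threshold; the abstract does not specify the actual pass/fail criterion.

```python
# Minimal sketch of a display-in-field-of-view built-in test, under the
# assumptions stated above (hypothetical hooks, correlation-based check).
import numpy as np

def run_built_in_test(show_on_display, capture_frame, test_patterns, threshold=0.9):
    """Present known images on the in-FOV display and verify the camera sees them.

    Returns True only if every pattern is recognised above the threshold.
    """
    for pattern in test_patterns:
        show_on_display(pattern)                      # put the test image in the FOV
        captured = capture_frame().astype(np.float32)
        expected = pattern.astype(np.float32)
        # Normalised cross-correlation as a crude functionality check.
        c = np.corrcoef(captured.ravel(), expected.ravel())[0, 1]
        if not np.isfinite(c) or c < threshold:
            return False                              # camera, display or optics fault
    return True
```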
ACTIVE STEREO MATCHING FOR DEPTH APPLICATIONS
A head-mounted device (HMD) is configured to perform depth detection with a stereo camera pair comprising a first camera and a second camera, both of which are configured to detect/capture visible light and IR light. The fields of view for both of the cameras overlap to form an overlapping field of view. The HMD also includes an IR dot-pattern illuminator that is mounted on the HMD with the cameras and that is configured to emit an IR dot-pattern illumination. The IR dot-pattern illuminator emits a dot-pattern illumination that spans at least a part of the overlapping field of view. The IR dot-pattern illumination adds texture to objects in the environment and enables the HMD to determine depth for those objects, even if they have textureless/smooth surfaces.
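The dot pattern simply guarantees texture for conventional stereo matching, after which disparity converts to depth via Z = f·B/d. The sketch below uses OpenCV block matching on rectified images as a stand-in for whatever matcher the HMD actually uses; the matcher settings, calibration values, and function name are assumptions.

```python
# Minimal sketch: stereo matching on IR images that carry the projected dot
# pattern, then disparity-to-depth conversion. Parameters are placeholders.
import numpy as np
import cv2

def stereo_depth(left_ir, right_ir, focal_px, baseline_m, num_disp=64, block=9):
    """left_ir/right_ir: rectified single-channel 8-bit images incl. the dot pattern."""
    matcher = cv2.StereoBM_create(numDisparities=num_disp, blockSize=block)
    disp = matcher.compute(left_ir, right_ir).astype(np.float32) / 16.0  # fixed-point
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = focal_px * baseline_m / disp[valid]   # Z = f * B / d, in metres
    return depth
```

The projected texture matters most on smooth surfaces, where block matching would otherwise find no reliable correspondences.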
DETERMINING RELATIVE POSITION AND ORIENTATION OF CAMERAS USING HARDWARE
Techniques for performing a hardware-based image alignment process are disclosed. A relative position and orientation between a system camera and a detached external camera are determined. This process is performed using 6 degree of freedom (DOF) trackers on the system camera and on the external camera. A depth measurement, which indicates a distance between the external camera and a scene where the external camera is aimed, is obtained. The system camera generates a system camera image, and the external camera generates an external camera image. An overlaid image is generated by using the relative position and orientation and the depth measurement to reproject the external camera image onto the system camera image.
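Conceptually, the reprojection lifts an external-camera pixel to a 3D point using the depth measurement, transforms it with the 6 DOF relative pose, and projects it into the system camera. The sketch below is a generic pinhole-camera version with placeholder intrinsics and pose; the names and values are illustrative, not taken from the disclosure.

```python
# Minimal sketch of depth-assisted reprojection between two cameras with a
# known 6-DOF relative pose. Intrinsics and the pose are made-up placeholders.
import numpy as np

def reproject_pixel(u, v, depth_m, K_ext, K_sys, R_ext_to_sys, t_ext_to_sys):
    """Map pixel (u, v) of the external camera into system-camera pixel coordinates."""
    ray = np.linalg.inv(K_ext) @ np.array([u, v, 1.0])    # unproject to a ray
    p_ext = ray * depth_m                                  # 3D point in external frame
    p_sys = R_ext_to_sys @ p_ext + t_ext_to_sys            # apply relative 6-DOF pose
    uvw = K_sys @ p_sys                                    # project into system camera
    return uvw[:2] / uvw[2]

# Example with made-up intrinsics, identity rotation, and a 5 cm baseline.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
print(reproject_pixel(320, 240, 2.0, K, K,
                      np.eye(3), np.array([0.05, 0.0, 0.0])))
```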