H04N13/271

System and method for updating an autonomous vehicle driving model based on the vehicle driving model becoming statistically incorrect
11573569 · 2023-02-07

Systems and methods for implementing one or more autonomous features for autonomous and semi-autonomous control of one or more vehicles are provided. More specifically, image data may be obtained from an image acquisition device and processed utilizing one or more machine learning models to identify, track, and extract one or more features of the image that are utilized in decision-making processes for providing steering angle and/or acceleration/deceleration input to one or more vehicle controllers. In some instances, techniques may be employed such that the autonomous and semi-autonomous control of a vehicle may change between vehicle-follow and lane-follow modes. In some instances, at least a portion of the machine learning model may be updated based on one or more conditions.
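The mode change between vehicle-follow and lane-follow described above can be sketched as a simple selection rule: fall back to lane following when no lead vehicle is detected within a follow range. All names and the threshold below are illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass

FOLLOW_RANGE_M = 60.0  # assumed maximum gap for vehicle-follow mode

@dataclass
class Detection:
    """One object extracted from the image by the machine learning model."""
    is_vehicle: bool
    distance_m: float

def select_mode(detections: list) -> str:
    """Return 'vehicle_follow' if a lead vehicle is within range, else 'lane_follow'."""
    lead = [d for d in detections if d.is_vehicle and d.distance_m <= FOLLOW_RANGE_M]
    return "vehicle_follow" if lead else "lane_follow"
```

In a real controller this decision would feed the steering and acceleration/deceleration inputs mentioned in the abstract; here it only returns a mode label.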

Sensor misalignment compensation

Camera compensation methods and systems that compensate for misalignment of sensors/cameras in stereoscopic camera systems. The compensation includes identifying a pitch angle offset between a first camera and a second camera, determining misalignment of the first and second cameras from the identified pitch angle offset, determining a relative compensation delay responsive to the determined misalignment, introducing the relative compensation delay to image streams produced by the cameras, and producing a stereoscopic image on a display from the first and second image streams with the introduced delay.
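One way the pitch-offset-to-delay mapping could work is under a rolling-shutter model, where a vertical (pitch) misalignment shifts which rows of the two sensors are read out at the same instant. The sketch below assumes that model; the focal length and line-readout time are illustrative parameters, not values from the patent.

```python
import math

def pitch_offset_rows(pitch_deg: float, focal_px: float) -> float:
    """Vertical image offset (in pixel rows) produced by a pitch angle offset,
    using the pinhole relation offset = f * tan(pitch)."""
    return focal_px * math.tan(math.radians(pitch_deg))

def compensation_delay_s(pitch_deg: float, focal_px: float, line_time_s: float) -> float:
    """Relative delay to apply to one image stream so that matching scene rows
    are exposed at the same time in both cameras."""
    return pitch_offset_rows(pitch_deg, focal_px) * line_time_s
```

For example, a 1 degree pitch offset with a 1000 px focal length shifts the image by roughly 17 rows, which at a 10 microsecond line time corresponds to a delay on the order of 0.2 ms.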

Techniques for improving mesh accuracy using labeled inputs

A method and system for improving a three-dimensional (3D) representation of objects using semantic data. The method comprises receiving input data generated in response to captured video in a filming area; setting at least one parameter for each region in the input data; and generating a 3D representation based in part on the at least one parameter and semantic data associated with the input data.

High resolution infrared image generation using image data from an RGB-IR sensor and visible light interpolation
11574484 · 2023-02-07

An apparatus includes a memory and a processor circuit. The memory may be configured to store one or more frames of image pixel data. Each of the frames generally comprises red (R) samples, green (G) samples, blue (B) samples, and infrared (IR) samples. The processor circuit may be configured to generate an infrared image for each frame. The infrared image generally has a number of infrared (IR) pixels greater than the number of the infrared (IR) samples of each frame. The processor circuit generally performs interpolation utilizing the infrared (IR) samples and one or more of the red (R) samples, the green (G) samples, and the blue (B) samples of each frame in generating the infrared image for each frame.
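A minimal sketch of how sparse IR samples could be upsampled with guidance from the visible channels: low-pass fill the IR grid, then add back high-frequency detail estimated from the green channel. The mosaic layout, the kernel, and the guidance weight are assumptions for illustration, not the patented method.

```python
import numpy as np

def convolve2d(img, kernel):
    """Zero-padded 2-D convolution (small helper using only numpy)."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def interpolate_ir(ir_sparse, ir_mask, green):
    """ir_sparse: IR values at sampled pixels (0 elsewhere); ir_mask: 1 where sampled;
    green: full-resolution green channel used as detail guidance."""
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)
    # Normalized low-pass fill of the sparse IR grid.
    num = convolve2d(ir_sparse, kernel)
    den = convolve2d(ir_mask.astype(float), kernel)
    ir_base = np.where(den > 0, num / np.maximum(den, 1e-9), 0.0)
    # Add back high-frequency detail from the visible (green) channel.
    green_smooth = convolve2d(green, kernel) / kernel.sum()
    return ir_base + 0.5 * (green - green_smooth)  # 0.5 guidance weight is an assumption
```

The output has one IR value per pixel, i.e. more IR pixels than the IR samples in the mosaic, matching the claim's framing.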

Digital photographing apparatus including a plurality of optical systems for acquiring images under different conditions and method of operating the same
20230037167 · 2023-02-02

A digital photographing apparatus is provided. The digital photographing apparatus includes a first optical system configured to acquire a wide-angle image including a subject, a second optical system configured to acquire a telephoto image with the subject zoomed, and a processor configured to determine whether to generate a synthesized image of the wide-angle image and the telephoto image based on one or more of an illuminance of the subject and a distance between the digital photographing apparatus and the subject.
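The synthesis decision described above reduces to a predicate over illuminance and subject distance. A hedged sketch, assuming that synthesis is skipped in low light (telephoto noise) and at close range (parallax between the lenses); the thresholds are illustrative, not values from the application.

```python
LOW_LIGHT_LUX = 100.0      # assumed: below this, the telephoto image is too noisy
MIN_SUBJECT_DIST_M = 1.0   # assumed: closer than this, inter-lens parallax is too large

def should_synthesize(illuminance_lux: float, subject_distance_m: float) -> bool:
    """Decide whether to generate a synthesized wide-angle + telephoto image."""
    return (illuminance_lux >= LOW_LIGHT_LUX
            and subject_distance_m >= MIN_SUBJECT_DIST_M)
```

When the predicate is false, the apparatus would presumably fall back to one of the two source images.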

Method and System for Encoding a 3D Scene
20220353486 · 2022-11-03

A computer-implemented method for encoding a scene volume includes: (a) identifying features of a scene volume that are within a camera perspective range with respect to a default camera perspective; (b) converting the identified features into rendered features; and (c) sorting the rendered features into a plurality of scene layers, each including corresponding depth, color, and transparency maps for the respective rendered features. Further, (a), (b), and (c) may be repeated, operating on temporally ordered scene volumes, to produce and output a sequence encoding a video. Corresponding systems and non-transitory computer-readable media are disclosed for encoding a 3D scene and for decoding an encoded 3D scene. Efficient compression, transmission, and playback of video describing a 3D scene can be enabled, including for virtual reality displays with updates based on a changing perspective of a user viewer for variable-perspective playback.
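Step (c), sorting rendered features into layers that each carry depth, color, and transparency maps, can be sketched as follows. The per-pixel fragment representation, the field names, and the flattened 1-D "maps" are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SceneLayer:
    """One scene layer: depth, color, and transparency maps (flattened, one value per pixel)."""
    depth: list = field(default_factory=list)
    color: list = field(default_factory=list)
    transparency: list = field(default_factory=list)

def sort_into_layers(fragments, num_layers):
    """fragments: per-pixel lists of (depth, color, alpha) rendered features.
    The nearest fragment at each pixel goes to layer 0, the next to layer 1, etc."""
    layers = [SceneLayer() for _ in range(num_layers)]
    for pixel_frags in fragments:
        ordered = sorted(pixel_frags)[:num_layers]
        # Pad missing layers with a fully transparent background fragment.
        ordered += [(float("inf"), 0.0, 0.0)] * (num_layers - len(ordered))
        for layer, (d, c, a) in zip(layers, ordered):
            layer.depth.append(d)
            layer.color.append(c)
            layer.transparency.append(a)
    return layers
```

Repeating this over temporally ordered scene volumes, as in the abstract, would yield one such layer set per frame of the encoded video.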

Multi-dimensional data capture of an environment using plural devices
11490069 · 2022-11-01

Embodiments of the invention describe apparatuses, systems, and methods related to data capture of objects and/or an environment. In one embodiment, a user can capture time-indexed three-dimensional (3D) depth data using one or more portable data capture devices that can capture time-indexed color images of a scene with depth information and location and orientation data. In addition, the data capture devices may be configured to capture a spherical view of the environment around the data capture device.
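The time-indexed record implied by the abstract, a color+depth frame paired with the device's location and orientation, can be sketched as a small data structure, with a merge step for combining streams from plural devices. Field names and types are assumptions.

```python
from dataclasses import dataclass

@dataclass
class CaptureSample:
    """One time-indexed sample from a portable data capture device."""
    timestamp_s: float
    color: bytes            # encoded color image
    depth_mm: list          # per-pixel depth values
    location_xyz: tuple     # device position
    orientation_quat: tuple # device orientation as a quaternion

def merge_streams(streams):
    """Merge samples from plural devices into one time-ordered sequence."""
    return sorted((s for stream in streams for s in stream),
                  key=lambda s: s.timestamp_s)
```

A downstream reconstruction step could consume the merged, time-ordered samples to build the multi-dimensional environment model.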

Systems and methods for automatically calibrating multiscopic image capture systems
11496722 · 2022-11-08

A method includes receiving, from a multiscopic image capture system, a plurality of images depicting a scene. The method includes determining, by application of a neural network based on the plurality of images, a disparity map of the scene. The neural network includes a plurality of layers, and the layers include a rectification layer. The method includes determining a matching error of the disparity map based on differences between corresponding pixels of two or more images associated with the disparity map. The method includes back-propagating the matching error to the rectification layer of the neural network. Back-propagating the matching error includes updating one or more weights applied to the rectification layer.
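A toy version of the calibration idea: treat rectification as a single learnable vertical-shift parameter and reduce the stereo matching error with respect to it. The real system back-propagates through a neural network; this exhaustive-search stand-in over integer shifts is an illustrative assumption.

```python
import numpy as np

def matching_error(left, right, shift):
    """Sum of squared differences between corresponding pixels after
    vertically shifting the right image by `shift` rows (wrap-around)."""
    shifted = np.roll(right, shift, axis=0)
    return float(np.sum((left - shifted) ** 2))

def calibrate_shift(left, right, max_shift=5):
    """Back-propagation stand-in: pick the rectification shift that
    minimizes the matching error."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: matching_error(left, right, s))
```

If the right camera is vertically misaligned by two rows, the calibrated shift recovers that offset and the matching error at the optimum is zero.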