H04N13/271

Image processing apparatus and method for extracting second RGB and ToF feature points based on a correlation between first RGB and ToF feature points

An image processing apparatus and method that extract a second RGB feature point and a second ToF feature point such that a correlation between a first RGB feature point and a first ToF feature point is equal to or greater than a predetermined value; calculate an error value between the second RGB feature point and the second ToF feature point; update pre-stored calibration data when the error value is greater than a threshold value and calibrate the RGB image and the ToF image by using the updated calibration data; and synthesize the calibrated RGB and ToF images.
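The conditional calibration update described in this abstract can be sketched as follows. The 2x3 affine calibration model, the correlation threshold, and all function names are illustrative assumptions, not the patented method:

```python
import numpy as np

def correlated_pairs(rgb_pts, tof_pts, corr, min_corr=0.8):
    """Keep only the point pairs whose correlation meets the predetermined
    value -- these play the role of the 'second' RGB/ToF feature points."""
    keep = corr >= min_corr
    return rgb_pts[keep], tof_pts[keep]

def reprojection_error(rgb_pts, tof_pts, calib):
    """Mean Euclidean distance after mapping ToF points into the RGB frame
    with a 2x3 affine calibration (an assumed calibration model)."""
    ones = np.ones((len(tof_pts), 1))
    mapped = np.hstack([tof_pts, ones]) @ calib.T
    return float(np.linalg.norm(mapped - rgb_pts, axis=1).mean())

def maybe_update_calibration(rgb_pts, tof_pts, calib, threshold=1.0):
    """Re-fit the affine calibration by least squares only when the current
    error value exceeds the threshold, as the abstract describes."""
    if reprojection_error(rgb_pts, tof_pts, calib) <= threshold:
        return calib
    A = np.hstack([tof_pts, np.ones((len(tof_pts), 1))])
    new_calib, *_ = np.linalg.lstsq(A, rgb_pts, rcond=None)
    return new_calib.T
```

The key property is that a well-calibrated pair of images leaves the stored calibration untouched; only an error above the threshold triggers a re-fit.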

Multi-aperture imaging device with a wavelength-specific beam deflector and device having such a multi-aperture imaging device

A multi-aperture imaging device is provided that includes an image sensor and an array of adjacently arranged optical channels. Each optical channel includes an optic for imaging at least one partial field of view of a total field of view onto an image sensor area of the image sensor. The device has a beam-deflector for deflecting an optical path of the optical channels, and the beam-deflector includes a first beam-deflecting area operative for a first wavelength range of electromagnetic radiation passing through the optical channels, and a second beam-deflecting area operative for a second wavelength range of the electromagnetic radiation passing through the optical channels. The second wavelength range is different from the first wavelength range.

GENERATING ENHANCED THREE-DIMENSIONAL OBJECT RECONSTRUCTION MODELS FROM SPARSE SET OF OBJECT IMAGES

Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (together, an object reconstruction model) for a physical object, based on a sparse set of images of the object from a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is a spatially varying bidirectional reflectance distribution function (SVBRDF) parameterized via multiple channels (e.g., diffuse albedo, surface roughness, specular albedo, and surface normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
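The per-vertex channel layout described above might be organized as in the sketch below. The channel names come from the abstract; the channel widths, the array layout, and the function name are assumptions for illustration:

```python
import numpy as np

# Hypothetical SVBRDF channels and their per-vertex widths.
CHANNELS = {
    "diffuse_albedo": 3,     # RGB
    "surface_roughness": 1,  # scalar
    "specular_albedo": 3,    # RGB
    "surface_normal": 3,     # unit vector
}

def make_reflectance_model(num_vertices):
    """Allocate one value per channel per vertex, mirroring the
    per-vertex SVBRDF parameterization in the abstract."""
    return {name: np.zeros((num_vertices, width))
            for name, width in CHANNELS.items()}

model = make_reflectance_model(4)
model["surface_normal"][:] = [0.0, 0.0, 1.0]  # e.g., a flat patch facing +z
```

Storing each channel as a dense per-vertex array keeps the reflectance model aligned with the mesh vertices, so a renderer can look up all channels for a vertex by index.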

METHODS AND SYSTEMS FOR CREATING VIRTUAL AND AUGMENTED REALITY

Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The system may comprise an image capturing device to capture one or more images, the one or more images corresponding to a field of view of a user of a head-mounted augmented reality device, and a processor communicatively coupled to the image capturing device to extract a set of map points from the one or more images, to identify a set of sparse points and a set of dense points from the extracted set of map points, and to perform a normalization on the set of map points.
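A minimal sketch of that map-point pipeline (extract points, split them into sparse and dense sets, normalize) is below. The gradient-strength split and the zero-mean, unit-scale normalization are illustrative assumptions, not the disclosed method:

```python
import numpy as np

def extract_map_points(image, strong=0.5):
    """Hypothetical split: pixels with strong gradients become the sparse
    (feature-like) points; the remaining non-flat pixels are dense points."""
    gy, gx = np.gradient(image.astype(float))  # per-axis image gradients
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > 0)
    pts = np.stack([xs, ys], axis=1).astype(float)
    strength = mag[ys, xs]
    return pts[strength >= strong], pts[strength < strong]

def normalize_points(pts):
    """Zero-mean, unit average distance -- one common normalization."""
    centered = pts - pts.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).mean()
    return centered / scale if scale > 0 else centered
```

On a simple step-edge image the strong-gradient pixels line up along the edge, and normalization recentres and rescales them for downstream mapping.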

SYSTEM AND METHOD FOR DETECTING A CONDITION PROMPTING AN UPDATE TO AN AUTONOMOUS VEHICLE DRIVING MODEL
20230084316 · 2023-03-16

Systems and methods for implementing one or more autonomous features for autonomous and semi-autonomous control of one or more vehicles are provided. More specifically, image data may be obtained from an image acquisition device and processed utilizing one or more machine learning models to identify, track, and extract one or more features of the image utilized in decision making processes for providing steering angle and/or acceleration/deceleration input to one or more vehicle controllers. In some instances, techniques may be employed such that the autonomous and semi-autonomous control of a vehicle may change between vehicle follow and lane follow modes. In some instances, at least a portion of the machine learning model may be updated based on one or more conditions.
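The switch between the vehicle-follow and lane-follow modes mentioned above might look like the following toy sketch; the distance threshold, control gains, and data fields are all assumed for illustration, not taken from the disclosure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Perception:
    """Hypothetical per-frame outputs of the machine learning model."""
    lead_vehicle_dist: Optional[float]  # metres; None if no lead vehicle
    lane_offset: float                  # metres from the lane centre

def choose_mode(p, follow_range=60.0):
    """Switch between the two control modes named in the abstract:
    follow a lead vehicle when one is close enough, else hold the lane."""
    if p.lead_vehicle_dist is not None and p.lead_vehicle_dist < follow_range:
        return "vehicle_follow"
    return "lane_follow"

def control_inputs(p, mode):
    """Toy steering-angle and acceleration outputs (assumed gains)."""
    steer = -0.1 * p.lane_offset  # steer back toward the lane centre
    if mode == "vehicle_follow":
        accel = 0.05 * (p.lead_vehicle_dist - 30.0)  # hold a ~30 m gap
    else:
        accel = 0.0
    return steer, accel
```

Separating mode selection from control output keeps the mode transition (the part the abstract emphasizes) explicit and easy to test.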

Device for collecting photos of field surface feature and information measurement and calculation method

The present invention provides a device for collecting photos of field surface features and a method for measuring and calculating information from them. The photo collection device includes a motion camera, a pan-tilt head, and a movable carrier. The motion camera is fixed to the movable carrier by the pan-tilt head, and while the movable carrier is driving, the motion camera regularly takes clear, measurable surface-feature photos, obtaining a continuous set of surface-feature photos with geographical coordinates. The device is portable, is easily assembled, and can stably and continuously take clear, measurable photos, resolving the problem that photos taken at high speed are blurry. The device can be applied to remote sensing for large-scale field research.

Method, system and computer program product for emulating depth data of a three-dimensional camera device

A method, system and computer program product for emulating depth data of a three-dimensional (3D) camera device is disclosed. The method includes concurrently operating a radar device and the 3D camera device to generate training radar data and training depth data, respectively. Each of the radar device and the 3D camera device has a respective field of view, and the field of view of the radar device overlaps the field of view of the 3D camera device. The method also includes inputting the training radar data and training depth data to a neural network and employing them to train the neural network. Once trained, the neural network is configured to receive real radar data as input and to output substitute depth data.
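The train-then-emulate flow can be sketched with a linear least-squares model standing in for the neural network; the class name and the linear stand-in are assumptions, chosen only so the example stays self-contained:

```python
import numpy as np

class DepthEmulator:
    """Stand-in for the neural network in the abstract: a linear map
    from flattened radar returns to a depth map, fit by least squares."""

    def __init__(self):
        self.w = None

    def train(self, radar, depth):
        """radar: (N, R) training radar frames; depth: (N, D) aligned
        training depth frames from the overlapping fields of view."""
        A = np.hstack([radar, np.ones((len(radar), 1))])  # bias column
        self.w, *_ = np.linalg.lstsq(A, depth, rcond=None)

    def emulate(self, radar):
        """Real radar data in, substitute depth data out."""
        A = np.hstack([radar, np.ones((len(radar), 1))])
        return A @ self.w
```

Once trained on concurrently captured radar/depth pairs, the same object is reused at inference time with radar-only input, mirroring the emulation step in the abstract.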
