G06V10/803

Continuous convolution and fusion in neural networks

Systems and methods are provided for machine-learned models including convolutional neural networks that generate predictions using continuous convolution techniques. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can perform, with a machine-learned convolutional neural network, one or more convolutions over input data using a continuous filter relative to a support domain associated with the input data, and receive a prediction from the machine-learned convolutional neural network. In some examples, the machine-learned convolutional neural network includes at least one continuous convolution layer configured to perform convolutions over input data with a parametric continuous kernel.
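The parametric continuous kernel can be pictured as a small network that maps the continuous offset between a query point and each support point to a kernel weight, which then weights the support features. Below is a minimal NumPy sketch of one such convolution over a k-nearest-neighbor support domain; the `mlp_kernel` network, its layer sizes, and the random toy data are illustrative assumptions, not the patented architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_kernel(offsets, w1, b1, w2, b2):
    """Hypothetical parametric kernel: maps continuous offsets to weights."""
    h = np.tanh(offsets @ w1 + b1)          # (k, hidden)
    return h @ w2 + b2                      # (k, 1): one weight per support point

def continuous_conv(points, feats, query, k, params):
    """One continuous convolution at `query` over its k nearest support points."""
    d = np.linalg.norm(points - query, axis=1)
    idx = np.argsort(d)[:k]                 # support domain: k-NN of the query
    offsets = points[idx] - query           # continuous (non-grid) offsets
    w = mlp_kernel(offsets, *params)        # kernel weights from the offsets
    return (w * feats[idx]).sum(axis=0)     # weighted sum = convolution output

# Toy point cloud with scalar features and a random 3->8->1 kernel network.
points = rng.normal(size=(50, 3))
feats = rng.normal(size=(50, 1))
params = (rng.normal(size=(3, 8)), np.zeros(8), rng.normal(size=(8, 1)), np.zeros(1))
out = continuous_conv(points, feats, np.zeros(3), k=8, params=params)
```

Because the kernel is a function of the offset rather than a fixed grid of weights, the same layer applies to irregular supports such as LiDAR point clouds.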

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

An image processing device including an image acquisition unit configured to acquire an image containing a subject via a lens unit; a distance information acquisition unit configured to acquire distance information indicating a distance to the subject; an auxiliary data generation unit configured to generate auxiliary data related to the distance information; a data stream generation unit configured to generate a data stream in which the image, the distance information, and the auxiliary data are superimposed; and an output unit configured to output the data stream to the outside.
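One way to picture the claimed data stream is a length-prefixed framing in which the image, the distance information, and the auxiliary data ride in a single byte sequence. The framing below (`build_stream`/`parse_stream`, little-endian length prefixes, JSON auxiliary data) is a hypothetical sketch, not the format defined by the disclosure.

```python
import json
import struct

def build_stream(image_bytes, distances, aux):
    """Hypothetical framing: [len][image][len][distance blob][len][aux JSON]."""
    dist_blob = struct.pack(f"<{len(distances)}f", *distances)
    aux_blob = json.dumps(aux).encode()
    parts = [image_bytes, dist_blob, aux_blob]
    return b"".join(struct.pack("<I", len(p)) + p for p in parts)

def parse_stream(stream):
    """Walk the stream, reading each 4-byte length and the payload after it."""
    out, off = [], 0
    while off < len(stream):
        (n,) = struct.unpack_from("<I", stream, off)
        off += 4
        out.append(stream[off:off + n])
        off += n
    return out

# Superimpose a dummy 16-byte image, two distances, and auxiliary metadata.
stream = build_stream(b"\x00" * 16, [1.5, 2.0], {"units": "m"})
img, dist, aux = parse_stream(stream)
```

A receiver can then recover each component independently, which is what "superimposed" in a single output stream requires.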

VEHICLE IDENTIFICATION DEVICE

An object herein is to provide a vehicle identification device by which the relationship between a towing vehicle and a towed vehicle is accurately determined and stored. A vehicle identification device includes: a vehicle information acquisition unit for acquiring vehicle information indicative of positions, traveling directions and traveling speeds of multiple vehicles; a towing relation determination unit for extracting, from the vehicle information acquired by the vehicle information acquisition unit, a vehicle train that is a succession of vehicles, to thereby determine that a leading vehicle in the vehicle train is a towing vehicle and a portion of the vehicle train subsequent to the towing vehicle is at least one towed vehicle towed by the towing vehicle; and a towing relation storing unit for storing a towing relation represented by the towing vehicle and the at least one towed vehicle determined by the towing relation determination unit.
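The towing-relation determination can be illustrated by walking an ordered list of vehicles and keeping each successor whose gap, heading, and speed stay close to the vehicle ahead; the leader of the resulting train is the towing vehicle. The `find_train` helper and its thresholds are illustrative assumptions, not the claimed logic.

```python
import math

def find_train(vehicles, gap_max=15.0, heading_tol=10.0, speed_tol=1.0):
    """Hypothetical rule: consecutive vehicles with small gaps and nearly
    identical headings/speeds form a vehicle train (towing + towed)."""
    train = [vehicles[0]]
    for prev, cur in zip(vehicles, vehicles[1:]):
        gap = math.dist(prev["pos"], cur["pos"])
        if (gap <= gap_max
                and abs(prev["heading"] - cur["heading"]) <= heading_tol
                and abs(prev["speed"] - cur["speed"]) <= speed_tol):
            train.append(cur)
        else:
            break  # the succession is broken; later vehicles are unrelated
    return train[0], train[1:]   # (towing vehicle, towed vehicles)

# A trails nothing; B follows A closely; C is too far behind to be towed.
vehicles = [
    {"id": "A", "pos": (0.0, 0.0), "heading": 90.0, "speed": 20.0},
    {"id": "B", "pos": (0.0, -10.0), "heading": 90.0, "speed": 20.0},
    {"id": "C", "pos": (0.0, -60.0), "heading": 90.0, "speed": 22.0},
]
towing, towed = find_train(vehicles)
```

The stored relation would then pair `towing` with the vehicles in `towed`.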

Event-assisted autofocus methods and apparatus implementing the same
11558542 · 2023-01-17

A focus method and an image sensing apparatus are disclosed. The method includes capturing, by a plurality of event sensing pixels, event data of a targeted scene, wherein the event data indicates which pixels of the event sensing pixels have changes in light intensity, accumulating the event data for a predetermined time interval to obtain accumulated event data, determining whether a scene change occurs in the targeted scene according to the accumulated event data, obtaining one or more interest regions in the targeted scene according to the accumulated event data in response to the scene change, and providing at least one of the one or more interest regions for a focus operation. The image sensing apparatus comprises a plurality of image sensing pixels, a plurality of event sensing pixels, and a controller configured to perform said method.
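A toy version of the accumulation and interest-region steps: per-pixel event counts are summed over an interval, a simple total-activity threshold stands in for the scene-change test, and the bounding box of active pixels serves as the interest region. The thresholds and helper names are assumptions for illustration, not the claimed method.

```python
import numpy as np

def accumulate(events, shape):
    """Sum per-pixel event counts over the predetermined interval."""
    acc = np.zeros(shape, dtype=int)
    for x, y in events:
        acc[y, x] += 1
    return acc

def scene_changed(acc, min_events=5):
    """Hypothetical rule: enough total event activity implies a scene change."""
    return acc.sum() >= min_events

def interest_region(acc):
    """Bounding box (x0, y0, x1, y1) of active pixels, used as the focus ROI."""
    ys, xs = np.nonzero(acc)
    return (xs.min(), ys.min(), xs.max(), ys.max())

# Five intensity-change events clustered near (2..3, 3..4) on an 8x8 sensor.
events = [(2, 3), (3, 3), (2, 4), (3, 4), (2, 3)]
acc = accumulate(events, (8, 8))
roi = interest_region(acc) if scene_changed(acc) else None
```

The resulting `roi` would be handed to the focus operation; when no scene change is detected, no refocus is triggered.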

Method and Processing Unit for Processing Sensor Data of Several Different Sensors with an Artificial Neural Network in a Vehicle
20230009766 · 2023-01-12

A method for operating a processing unit of a vehicle for processing sensor data of several different sensors with an artificial neural network. A set of volume data cells is provided as a volumetric representation of different volume elements of an environment. When sensor data is generated by the sensors, the sensor data is transferred to the respective volume data cells using an inverse mapping function, wherein each inverse mapping function maps the respective sensor coordinate system of the sensor to an internal volumetric coordinate system corresponding to the world coordinate system. Through this transfer, each volume data cell receives, from each sensor, the sensor data associated with that volume data cell according to the inverse mapping function, and the received sensor data from each sensor are accumulated in the respective volume data cell as combined data.
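The inverse mapping and per-cell accumulation can be sketched as a voxel grid that converts world coordinates to integer cell indices and sums whatever each sensor contributes to a cell. The `VolumeGrid` class, the identity extrinsics, and the scalar payloads below are illustrative assumptions.

```python
import numpy as np

class VolumeGrid:
    """Hypothetical voxel store that accumulates per-cell sensor data."""
    def __init__(self, size, res):
        self.res = res
        self.cells = np.zeros((size, size, size))
        self.counts = np.zeros((size, size, size), dtype=int)

    def add(self, world_pts, values):
        # Inverse mapping: world coordinates -> integer volume-cell indices.
        idx = np.floor(world_pts / self.res).astype(int)
        for (i, j, k), v in zip(idx, values):
            self.cells[i, j, k] += v     # combined data from all sensors
            self.counts[i, j, k] += 1

def sensor_to_world(pts, R, t):
    """Each sensor's extrinsics map its own frame into the shared world frame."""
    return pts @ R.T + t

# Two sensors observe the same region; both measurements land in cell (2, 3, 0).
R = np.eye(3)
grid = VolumeGrid(size=10, res=1.0)
lidar_pts = np.array([[2.2, 3.1, 0.5]])
cam_pts = np.array([[2.4, 3.3, 0.9]])
grid.add(sensor_to_world(lidar_pts, R, np.zeros(3)), [1.0])
grid.add(sensor_to_world(cam_pts, R, np.zeros(3)), [3.0])
```

Downstream, the neural network would consume the combined per-cell data rather than each raw sensor stream.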

System and method for implementing reward based strategies for promoting exploration
11699062 · 2023-07-11

A system and method for implementing reward based strategies for promoting exploration that include receiving data associated with an agent environment of an ego agent and a target agent and receiving data associated with a dynamic operation of the ego agent and the target agent within the agent environment. The system and method also include implementing a reward function that is associated with exploration of at least one agent state within the agent environment. The system and method further include training a neural network with a novel unexplored agent state.
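One common family of exploration rewards, which the abstract's reward function might resemble, is a count-based bonus that pays more for rarely visited agent states. The decay rule below is a standard illustration, not the patented function.

```python
from collections import Counter

class ExplorationReward:
    """Hypothetical count-based bonus: rarely visited agent states pay more."""
    def __init__(self, bonus=1.0):
        self.visits = Counter()
        self.bonus = bonus

    def __call__(self, state):
        self.visits[state] += 1
        # Reward decays with visitation, steering the agent toward
        # novel, unexplored agent states during training.
        return self.bonus / self.visits[state] ** 0.5

reward = ExplorationReward()
r_first = reward(("lane", 2))    # novel state: full bonus
r_repeat = reward(("lane", 2))   # same state revisited: smaller bonus
```

A policy network trained against such a signal is pushed to visit states it has not yet explored.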

MULTI-CHANNEL OBJECT MATCHING
20230011829 · 2023-01-12

A method may include obtaining first sensor data captured by a first sensor system and second sensor data captured by a second sensor system of a different type from the first sensor system. The method may include detecting a first object included in the first sensor data and a second object included in the second sensor data. The method may include assigning a first label to the first object and a second label to the second object after comparing the first and the second sensor data. The first and second labels may indicate degrees to which the first and the second objects match. Responsive to the first and second labels indicating that the first and the second objects match, the method may include designating a matched object representative of the first object and the second object and sending the matched object to a downstream computing system of an autonomous vehicle.
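The labeling step can be illustrated by grading each detection from one sensor against its nearest detection from the other sensor by distance, with the labels indicating the degree of match. The `strong`/`weak` thresholds and label names below are hypothetical.

```python
import math

def match_objects(cam_objs, radar_objs, strong=2.0, weak=5.0):
    """Hypothetical grading: the distance between two detections
    determines the degree to which the objects match."""
    results = []
    for c in cam_objs:
        best = min(radar_objs, key=lambda r: math.dist(c["pos"], r["pos"]))
        d = math.dist(c["pos"], best["pos"])
        label = "match" if d <= strong else "partial" if d <= weak else "no_match"
        results.append((c["id"], best["id"], label))
    return results

# A camera detection and a nearby radar detection of the same object.
cam = [{"id": "c0", "pos": (10.0, 5.0)}]
radar = [{"id": "r0", "pos": (10.5, 5.2)}, {"id": "r1", "pos": (40.0, 0.0)}]
pairs = match_objects(cam, radar)
```

Pairs labeled `"match"` would then be merged into a single matched object and sent downstream.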

METHOD AND A SYSTEM OF DETERMINING LIDAR DATA DEGRADATION DEGREE

A system and method for determining a degree of point cloud data degradation of a LiDAR sensor of a Self-Driving Car (SDC) using a machine-learning algorithm (MLA) are provided. The method comprises: determining, based on a training point cloud generated by the LiDAR sensor representative of surroundings of the SDC, a plurality of LiDAR features; determining, for each training object in the surroundings, based on statistical data of coverage of training objects with LiDAR points, a plurality of enrichment features; receiving a respective label indicative of a degradation degree of the training point cloud; generating, based on the plurality of LiDAR features, the plurality of enrichment features, and the respective label, a given feature vector of a plurality of feature vectors; and training, based on the plurality of feature vectors, the MLA to determine an in-use degree of degradation of in-use sensed data further generated by the LiDAR sensor.
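A compressed sketch of the feature-vector pipeline: per-cloud LiDAR statistics and coverage-based enrichment features are concatenated into a vector, and a nearest-labeled-vector lookup stands in for the trained MLA. Every feature choice and the stand-in model are assumptions for illustration, not the claimed features or classifier.

```python
import numpy as np

def lidar_features(cloud):
    """Hypothetical per-cloud statistics: point count, mean range, range spread."""
    r = np.linalg.norm(cloud, axis=1)
    return [len(cloud), r.mean(), r.std()]

def enrichment_features(obj_coverages, expected):
    """Observed coverage of known objects relative to statistical expectation."""
    return [c / e for c, e in zip(obj_coverages, expected)]

def feature_vector(cloud, coverages, expected):
    return np.array(lidar_features(cloud) + enrichment_features(coverages, expected))

def predict_degradation(train_X, train_y, x):
    """Stand-in MLA: return the label of the nearest training vector."""
    d = np.linalg.norm(train_X - x, axis=1)
    return train_y[int(np.argmin(d))]

rng = np.random.default_rng(1)
# A clean cloud covers objects well; a degraded cloud is sparse and spotty.
clean = feature_vector(rng.normal(10, 1, (200, 3)), [95, 90], [100, 100])
degraded = feature_vector(rng.normal(10, 1, (40, 3)), [30, 25], [100, 100])
train_X = np.stack([clean, degraded])
train_y = ["low", "high"]

query = feature_vector(rng.normal(10, 1, (45, 3)), [28, 30], [100, 100])
label = predict_degradation(train_X, train_y, query)
```

An in-use cloud whose features resemble the degraded training vector is flagged with the higher degradation degree.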

Gate mate comprising detection passage system integrating temperature measurement and facial recognition

The present disclosure relates to thermal temperature measurement and facial recognition and discloses a gate mate comprising a detection passage system integrating temperature measurement and facial recognition. The gate mate comprises a face imaging camera lens, a thermopile sensor, a TOF (time-of-flight) optical ranging lens module, and an environmental temperature compensation module. The face imaging camera lens is used to detect and recognize a human face and to detect whether the front face of a measured person faces the face imaging camera lens. The thermopile sensor is used to detect a temperature of a forehead of the measured person and an environmental temperature. The TOF optical ranging lens module is used to detect a distance between the human face and the thermopile sensor. The environmental temperature compensation module is used to perform temperature compensation.
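The compensation step can be pictured as a linear correction of the raw thermopile reading for the measurement distance (from the TOF module) and for the deviation of the ambient temperature from a reference. The coefficients below are invented for illustration and are not calibration values from the disclosure.

```python
def compensated_temp(raw_forehead, ambient, distance_m,
                     k_distance=0.8, k_ambient=0.05, ref_ambient=25.0):
    """Hypothetical linear compensation: thermopile readings drop with
    distance and drift with ambient temperature, so both are corrected."""
    return (raw_forehead
            + k_distance * distance_m                 # distance attenuation
            + k_ambient * (ref_ambient - ambient))    # ambient correction

# A reading taken at 0.5 m in a cool 15 C room is corrected upward.
t = compensated_temp(raw_forehead=35.7, ambient=15.0, distance_m=0.5)
```

Here the corrected value is 35.7 + 0.8 x 0.5 + 0.05 x (25 - 15) = 36.6 C.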

CAMERA-RADAR SENSOR FUSION USING LOCAL ATTENTION MECHANISM
20230213643 · 2023-07-06

Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for processing sensor data. In one aspect, a method includes obtaining image data representing a camera sensor measurement of a scene; obtaining radar data representing a radar sensor measurement of the scene; generating a feature representation of the image data; generating a respective initial depth estimate for each pixel in a subset of a plurality of pixels of the image data; generating a feature representation of the radar data; for each pixel in the subset, generating a respective adjusted depth estimate using the initial depth estimate for the pixel and the radar feature vectors for a corresponding subset of a plurality of radar reflection points; generating a fused point cloud that includes a plurality of three-dimensional data points; and processing the fused point cloud to generate an output that characterizes the scene.
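The depth-adjustment step can be sketched as a local attention over nearby radar points: feature similarity between a pixel and each radar reflection weights a radar-derived depth, which is then blended with the initial camera depth. The softmax scoring and the fixed 50/50 blend below are illustrative assumptions, not the disclosed mechanism.

```python
import numpy as np

def local_attention_depth(init_depth, pix_feat, radar_feats, radar_depths):
    """Hypothetical local attention: radar points vote on a depth correction,
    weighted by softmax of their feature similarity to the pixel."""
    scores = radar_feats @ pix_feat
    w = np.exp(scores - scores.max())      # numerically stable softmax
    w /= w.sum()
    radar_depth = float(w @ radar_depths)  # attention-weighted radar depth
    return 0.5 * init_depth + 0.5 * radar_depth  # simple fixed blend

# The first radar point's features match the pixel, so it dominates the vote.
pix_feat = np.array([1.0, 0.0])
radar_feats = np.array([[1.0, 0.0], [-1.0, 0.0]])
radar_depths = np.array([20.0, 60.0])
adj = local_attention_depth(init_depth=24.0, pix_feat=pix_feat,
                            radar_feats=radar_feats, radar_depths=radar_depths)
```

Repeating this per pixel yields the adjusted depths that back-project into the fused three-dimensional point cloud.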