
SYSTEMS AND METHODS FOR PREDICTING BLIND SPOT INCURSIONS
20230005374 · 2023-01-05

Systems and methods are provided for predicting blind spot incursions for a host vehicle. In one implementation, a navigation system for a host vehicle may comprise a processor. The processor may be programmed to receive, from an image capture device located on a rear of the host vehicle, at least one image representative of an environment of the host vehicle. The processor may be programmed to analyze the at least one image to identify an object in the environment of the host vehicle and to determine kinematic information associated with the object. The processor may further be programmed to predict, based on the kinematic information, that the object will travel in a region outside of a field of view of the image capture device and perform a control action based on the prediction.
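The abstract's core step, extrapolating an object's motion and checking whether it will exit the rear camera's field of view, can be sketched with a constant-velocity model. The function name, frame convention, and thresholds below are illustrative assumptions, not the patent's implementation:

```python
import math

def predict_blind_spot_incursion(pos, vel, fov_half_angle_deg, horizon_s=2.0, dt=0.1):
    """Extrapolate an object's position with a constant-velocity model and
    report whether it leaves the rear camera's field-of-view wedge.
    `pos`/`vel` are (x, y) in a camera-centred frame: +x along the camera
    axis, bearings measured from that axis. All names are illustrative."""
    half = math.radians(fov_half_angle_deg)
    t = 0.0
    while t <= horizon_s:
        x = pos[0] + vel[0] * t
        y = pos[1] + vel[1] * t
        # Object is outside the FOV wedge once its bearing exceeds the half-angle.
        if x <= 0 or abs(math.atan2(y, x)) > half:
            return True, t  # predicted incursion, and when
        t += dt
    return False, None

# An object ahead of the camera drifting sideways eventually exits a 60° wedge.
hit, when = predict_blind_spot_incursion(pos=(10.0, 0.0), vel=(0.0, 4.0),
                                         fov_half_angle_deg=30.0)
```

A real system would use the object's tracked kinematics rather than a fixed constant-velocity assumption, and the "control action" (warning, braking) would be triggered on the returned prediction.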

LIDAR POINT SELECTION USING IMAGE SEGMENTATION
20230005169 · 2023-01-05

The subject disclosure relates to techniques for selecting points of an image for processing with LiDAR data. A process of the disclosed technology can include steps for receiving an image comprising a first image object and a second image object, processing the image to place a bounding box around the first image object and the second image object, and processing an image area within the bounding box to identify a first image mask corresponding with a first pixel region of the first image object and a second image mask corresponding with a second pixel region of the second image object. Systems and machine-readable media are also provided.
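The point-selection idea, keeping only LiDAR returns whose image projections fall inside a per-object pixel mask, can be sketched as follows; the function name and data layout are illustrative assumptions, not the disclosure's interface:

```python
def select_lidar_points(points_uv, mask):
    """Keep only LiDAR points whose image projection lands inside an object's
    pixel mask. `points_uv` is a list of (u, v) pixel coordinates already
    projected into the image; `mask` is a 2-D list of 0/1 values indexed as
    mask[v][u]. Points projecting outside the image are discarded."""
    h, w = len(mask), len(mask[0])
    selected = []
    for i, (u, v) in enumerate(points_uv):
        if 0 <= v < h and 0 <= u < w and mask[v][u]:
            selected.append(i)
    return selected

# A 4x4 mask with a 2x2 object region; two of four points land on the object.
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
points = [(1, 1), (3, 0), (2, 2), (5, 5)]
picked = select_lidar_points(points, mask)
```

Masking before fusion is what lets downstream processing touch only the LiDAR returns belonging to a detected object rather than the whole cloud inside the bounding box.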

CROSS-MODALITY ACTIVE LEARNING FOR OBJECT DETECTION
20230005173 · 2023-01-05

Among other things, techniques are described for cross-modality active learning for object detection. In an example, a first set of predicted bounding boxes and a second set of predicted bounding boxes are generated. The first set of predicted bounding boxes and the second set of predicted bounding boxes are projected into a same representation. The projections are filtered, wherein predicted bounding boxes satisfying a confidence-score threshold are selected for inconsistency calculations. Inconsistencies are calculated across the projected bounding boxes based on filtering the projections. Informative scenes are extracted based on the calculated inconsistencies. A first object detection neural network or a second object detection neural network is trained using the informative scenes.
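The filter-then-compare step can be sketched as follows: keep each modality's confident boxes, then score the scene by how poorly the two sets agree. An IoU-based disagreement score is one plausible choice here; the threshold and scoring rule are illustrative assumptions, not the patent's exact formulation:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def scene_inconsistency(boxes_a, boxes_b, min_conf=0.5):
    """Filter each modality's (box, confidence) predictions by confidence,
    then score the scene: each confident box in A contributes 1 minus its
    best IoU against confident boxes in B. High scores flag scenes where
    the modalities disagree, i.e. informative scenes for labelling."""
    a = [box for box, c in boxes_a if c >= min_conf]
    b = [box for box, c in boxes_b if c >= min_conf]
    if not a or not b:
        return 1.0  # one modality saw nothing confident: maximally informative
    return sum(1.0 - max(iou(x, y) for y in b) for x in a) / len(a)

# Agreeing detections score 0.0; disjoint detections score 1.0.
score_same = scene_inconsistency([((0, 0, 10, 10), 0.9)], [((0, 0, 10, 10), 0.8)])
score_diff = scene_inconsistency([((0, 0, 10, 10), 0.9)], [((20, 20, 30, 30), 0.9)])
```

Scenes with the highest scores would then be sent for annotation and used to retrain one of the detectors.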

MOBILE OBJECT CONTROL DEVICE, MOBILE OBJECT CONTROL METHOD, AND STORAGE MEDIUM
20230234578 · 2023-07-27

A mobile object control device according to an embodiment includes a recognizer configured to recognize a surroundings situation of a mobile object; a trajectory predictor configured to predict, when an object likely to come into contact with the mobile object is present around it, future trajectories of the mobile object and the object; and an unavoidable contact determiner configured to determine, on the basis of the trajectories predicted by the trajectory predictor, whether contact between the mobile object and the object is unavoidable. The trajectory predictor predicts the future trajectory of the object on the basis of the recognizer's recognition state of a travel wheel of the object.
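The predict-then-determine pipeline can be sketched with a toy constant-velocity predictor: contact is declared unavoidable only if every candidate ego maneuver still brings the two trajectories within a contact radius. The maneuver set, kinematics, and thresholds are illustrative assumptions, not the embodiment's actual logic:

```python
def contact_unavoidable(ego, obj, accels=(-3.0, 0.0, 3.0),
                        radius=1.5, horizon=3.0, dt=0.1):
    """Roll both trajectories forward and declare contact unavoidable only if
    every candidate ego longitudinal acceleration still leads to an approach
    below `radius`. States are (x, y, vx, vy) in metres and m/s; the object
    follows a constant-velocity model. A toy stand-in for the patent's
    trajectory predictor and unavoidable contact determiner."""
    def collides(a):
        t = 0.0
        while t <= horizon:
            ex = ego[0] + ego[2] * t + 0.5 * a * t * t
            ey = ego[1] + ego[3] * t
            ox = obj[0] + obj[2] * t
            oy = obj[1] + obj[3] * t
            if ((ex - ox) ** 2 + (ey - oy) ** 2) ** 0.5 < radius:
                return True
            t += dt
        return False
    return all(collides(a) for a in accels)

# At 10 m/s, an object 5 m ahead cannot be avoided by braking; one 20 m ahead can.
near = contact_unavoidable((0, 0, 10, 0), (5, 0, 0, 0))
far = contact_unavoidable((0, 0, 10, 0), (20, 0, 0, 0))
```

The abstract's wheel-orientation cue would refine the object-side prediction, e.g. by biasing the object's assumed heading toward its steered-wheel direction instead of pure constant velocity.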

Systems and methods for navigating a vehicle among encroaching vehicles

Systems and methods use cameras to provide autonomous navigation features. In one implementation, a method for navigating a user vehicle may include acquiring, using at least one image capture device, a plurality of images of an area in a vicinity of the user vehicle; determining from the plurality of images a first lane constraint on a first side of the user vehicle and a second lane constraint on a second side of the user vehicle opposite to the first side; enabling the user vehicle to pass a target vehicle if the target vehicle is determined to be in a lane different from the lane in which the user vehicle is traveling; and causing the user vehicle to abort the pass before its completion if the target vehicle is determined to be entering the lane in which the user vehicle is traveling.
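The enable/abort decision rule described above can be reduced to a small state check on the lane assignments. The state names and function below are illustrative, not from the patent:

```python
def pass_decision(pass_in_progress, ego_lane, target_lane):
    """Illustrative decision rule from the abstract: enable a pass when the
    target occupies a different lane; abort an in-progress pass when the
    target is detected entering the ego lane. Lanes are integer indices
    derived from the image-based lane constraints."""
    if target_lane != ego_lane:
        return "continue" if pass_in_progress else "pass_enabled"
    return "abort" if pass_in_progress else "hold"
```

In practice the lane indices would come from the first and second lane constraints estimated in the images, and "entering the lane" would be a prediction about the target's lateral motion rather than a settled lane assignment.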

System, devices and methods for tele-operated robotics

The systems, devices and methods herein enable autonomous operation and tele-operation of robots for maintenance of a property around known and unknown obstacles. A method may include using an unmanned aerial vehicle to obtain additional data relating to the property and to obstacles within it, and planning a path around the obstacles using the aerial imagery together with data from sensors on board the tele-operated robot. A method may also optimize the total time needed to perform the property maintenance, as well as the labor costs incurred in situations where manual intervention is needed to navigate the tele-operated robot around obstacles on the property or to remove them.

METHOD FOR DISPLAYING A SURROUNDINGS MODEL OF A VEHICLE, COMPUTER PROGRAM, ELECTRONIC CONTROL UNIT AND VEHICLE
20230025209 · 2023-01-26

A method for displaying a surroundings model of a vehicle. The method includes: capturing at least one sequence of camera images of at least one section of the surroundings of the vehicle with the aid of at least one camera; detecting a position of the vehicle; storing at least one camera image of the surroundings of the vehicle, each stored camera image being assigned the detected position of the vehicle at the moment the stored camera image was captured; determining distances between the vehicle and objects in the surroundings; generating at least one close-range projection surface which represents the close range around the vehicle, the close-range projection surface being deformed three-dimensionally depending on the determined distances; and displaying the surroundings model as a function of the generated close-range projection surface, at least one current camera image, a stored camera image and the present vehicle position.
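The deformation step, pulling the projection surface in toward measured obstacles so camera texture is rendered at roughly the right depth, can be sketched per azimuth sector. The bowl parameterization and units below are illustrative assumptions:

```python
def deform_projection_surface(base_radius, distances):
    """Deform a circular close-range projection surface: for each azimuth
    sector, clamp the surface radius to the measured obstacle distance so
    that camera texture is projected at approximately the obstacle's depth.
    `distances` holds metres per sector, or None where no obstacle was
    measured (the sector keeps the undeformed base radius)."""
    return [min(base_radius, d) if d is not None else base_radius
            for d in distances]

# A 5 m base bowl, deformed where obstacles are closer than the base radius.
radii = deform_projection_surface(5.0, [None, 7.0, 2.5, 4.0])
```

A full implementation would deform a 3-D mesh (radius varying with height as well as azimuth) and blend current and stored camera images onto it, but the clamping logic per direction is the same.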

IMAGE PROCESSING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM

An image processing device includes a reception interface and a processor. The reception interface receives image data corresponding to an image in which a person is captured as the subject. The processor detects, based on the image data, a left shoulder feature point, a right shoulder feature point, and a face feature point of the person. The processor acquires a first value corresponding to a distance between the left shoulder feature point and the face feature point. The processor acquires a second value corresponding to a distance between the right shoulder feature point and the face feature point. The processor estimates presence or absence of a body twist of the person based on a ratio between the first value and the second value.
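The ratio test is simple enough to sketch directly: when the torso faces the camera squarely, the two shoulder-to-face distances are roughly equal, so their ratio sits near 1.0. The tolerance below is an illustrative choice, not a value from the disclosure:

```python
def dist(p, q):
    """Euclidean distance between two 2-D image points."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def body_twist(left_shoulder, right_shoulder, face, tol=0.2):
    """Estimate presence of a body twist from the ratio of the
    left-shoulder-to-face and right-shoulder-to-face distances. A ratio
    near 1.0 suggests the torso squarely faces the camera; a ratio far
    from 1.0 suggests a twist. Returns (twisted?, ratio)."""
    ratio = dist(left_shoulder, face) / dist(right_shoulder, face)
    return abs(ratio - 1.0) > tol, ratio

# Symmetric shoulders about the face: no twist. Asymmetric: twist detected.
square, _ = body_twist((-10, 0), (10, 0), (0, -5))
twisted, _ = body_twist((-4, 0), (12, 0), (0, -5))
```

Using a ratio rather than raw distances makes the test insensitive to the person's distance from the camera, since both measurements scale together.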

SYSTEMS AND METHODS FOR EFFICIENTLY SENSING COLLISION THREATS

A system for efficiently sensing collision threats has an image sensor configured to capture an image of a scene external to a vehicle. The system is configured to then identify an area of the image that is associated with homogeneous sensor values and is thus likely devoid of collision threats. In order to reduce the computational processing required for detecting collision threats, the system culls the identified area from the image, thereby conserving the processing resources of the system.
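One way to identify areas with homogeneous sensor values is a per-block variance test: low-variance blocks (e.g. clear sky) are culled before threat detection runs. The block size and threshold below are illustrative assumptions, not the system's parameters:

```python
def cull_homogeneous_blocks(image, block=2, var_thresh=4.0):
    """Split a greyscale image (2-D list of pixel values) into block×block
    tiles, mark tiles whose pixel variance falls below `var_thresh` as
    homogeneous, and return the (row, col) indices of the remaining tiles.
    Only the returned tiles would be passed to collision-threat detection,
    conserving processing resources."""
    h, w = len(image), len(image[0])
    keep = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            vals = [image[r + i][c + j] for i in range(block) for j in range(block)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            if var >= var_thresh:
                keep.append((r // block, c // block))
    return keep

# Uniform top half is culled; the textured bottom half survives.
image = [[100, 100, 100, 100],
         [100, 100, 100, 100],
         [0, 255, 0, 255],
         [255, 0, 255, 0]]
kept = cull_homogeneous_blocks(image)
```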

VEHICLE USING FULL-VELOCITY DETERMINATION WITH RADAR

A computer includes a processor and a memory storing instructions executable by the processor to receive radar data including a radar pixel having a radial velocity from a radar; receive camera data including an image frame including camera pixels from a camera; map the radar pixel to the image frame; generate a region of the image frame surrounding the radar pixel; determine association scores for the respective camera pixels in the region; select a first camera pixel of the camera pixels from the region, the first camera pixel having a greatest association score of the association scores; and calculate a full velocity of the radar pixel using the radial velocity of the radar pixel and a first optical flow at the first camera pixel. The association scores indicate a likelihood that the respective camera pixels correspond to a same point in an environment as the radar pixel.
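The final fusion step, combining the radar's line-of-sight speed with the tangential motion implied by optical flow at the associated camera pixel, can be sketched in a simplified planar form. The decomposition below assumes the flow has already been converted to a bearing rate; it illustrates the geometric idea, not the patent's full 3-D formulation:

```python
import math

def full_velocity_2d(position, radial_speed, bearing_rate):
    """Recover a target's full 2-D velocity from a radar radial speed and
    the bearing rate implied by camera optical flow at the associated pixel.
    Decompose along the line of sight (constrained by radar) and
    perpendicular to it (constrained by flow):
        v = radial_speed * r_hat + (range * bearing_rate) * t_hat
    `position` is (x, y) in metres, `bearing_rate` in rad/s."""
    x, y = position
    rng = math.hypot(x, y)
    r_hat = (x / rng, y / rng)
    t_hat = (-r_hat[1], r_hat[0])   # unit vector 90° counter-clockwise of r_hat
    vt = rng * bearing_rate         # tangential speed from the flow
    return (radial_speed * r_hat[0] + vt * t_hat[0],
            radial_speed * r_hat[1] + vt * t_hat[1])

# Target 10 m straight ahead, receding at 3 m/s, drifting at 0.2 rad/s.
vx, vy = full_velocity_2d((10.0, 0.0), radial_speed=3.0, bearing_rate=0.2)
```

The association scores in the abstract decide *which* camera pixel's optical flow is paired with the radar pixel; once the best-scoring pixel is chosen, its flow supplies the tangential constraint used here.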