Patent classifications
G01S7/51
Laser scanner with real-time, online ego-motion estimation
A method comprises accessing a data set comprising a LIDAR-acquired point cloud comprising a plurality of points, each of which is attributed with at least a geospatial coordinate; sub-sampling at least a portion of the plurality of points to derive a representative sample of the plurality of points; and displaying the representative sample of the plurality of points.
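The sub-sampling step described above can be sketched as follows. This is a minimal illustration, not the patented method: the function name and the choice of uniform random sampling are assumptions, since the abstract does not specify a sampling strategy.

```python
import random

def subsample_point_cloud(points, fraction=0.1, seed=0):
    """Derive a representative sample of a LIDAR point cloud.

    points: list of (x, y, z) geospatial coordinates.
    fraction: share of points to keep for display.
    Uses seeded uniform random sampling (an assumed strategy).
    """
    rng = random.Random(seed)
    k = max(1, int(len(points) * fraction))
    return rng.sample(points, k)

# A synthetic 1000-point cloud; keep 5% for display.
cloud = [(float(i), float(i) * 2.0, 0.0) for i in range(1000)]
sample = subsample_point_cloud(cloud, fraction=0.05)
print(len(sample))  # 50
```

In practice a voxel-grid or farthest-point sub-sampler would preserve spatial structure better than uniform random sampling, but any of these fits the claim language of deriving a representative sample.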
VEHICLE DISPLAY DEVICE, VEHICLE DISPLAY SYSTEM, VEHICLE DISPLAY METHOD, AND NON-TRANSITORY STORAGE MEDIUM STORING A PROGRAM
A vehicle display device includes: a curve information acquisition section configured to acquire information relating to a degree of curvature of a travel lane; a deceleration determination section configured to determine whether or not deceleration of the vehicle is required based on the degree of curvature of the travel lane and the vehicle speed; and a marking display section configured to, in a case in which deceleration is required, cause a display device to display a predetermined first marking superimposed along the travel lane at a display region inside the vehicle cabin.
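The deceleration-determination section above can be illustrated with a common kinematic rule: predicted lateral acceleration in a curve is v² × curvature, and deceleration is flagged when it exceeds a comfort threshold. This is a hedged sketch; the abstract does not disclose the actual criterion, and the function name and 3.0 m/s² threshold are assumptions.

```python
def deceleration_required(curvature_per_m, speed_mps, max_lateral_accel=3.0):
    """Return True if the predicted lateral acceleration (v^2 * curvature)
    in the upcoming curve exceeds a comfort threshold (assumed 3.0 m/s^2)."""
    return speed_mps ** 2 * curvature_per_m > max_lateral_accel

# 100 m radius curve (curvature 0.01 1/m) at 25 m/s: 6.25 m/s^2 -> decelerate.
print(deceleration_required(0.01, 25.0))  # True
# Same curve at 10 m/s: 1.0 m/s^2 -> no deceleration needed.
print(deceleration_required(0.01, 10.0))  # False
```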
Laser safety system
A laser safety system adapted to prevent inadvertent illumination of people and assets. The laser safety system is configured to emit a laser beam with a laser and to determine a path of a target object relative to the laser safety system. The laser safety system is further configured to cause the laser beam to illuminate the target object while the target object moves along the path.
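Determining a target's path and steering the beam along it can be sketched with a simple linear motion model. This is an assumed illustration only; the patent does not specify the prediction model, and the function names are hypothetical.

```python
import math

def predict_path(start, velocity, dt, steps):
    """Linearly extrapolate target positions (x, y) in the system's frame,
    assuming constant velocity (an assumed motion model)."""
    x0, y0 = start
    vx, vy = velocity
    return [(x0 + vx * dt * i, y0 + vy * dt * i) for i in range(steps)]

def aim_azimuths(path):
    """Azimuth (degrees) the beam steering unit would command for
    each predicted target position."""
    return [math.degrees(math.atan2(y, x)) for x, y in path]

# Target 10 m ahead, drifting sideways at 1 m/s, sampled at 10 Hz.
path = predict_path((10.0, 0.0), (0.0, 1.0), dt=0.1, steps=3)
angles = aim_azimuths(path)
```

A real tracker would fuse range measurements with a filter (e.g., a Kalman filter) rather than pure extrapolation, but the steering geometry is the same.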
AUTOMATED DETECTION OF MISTRACK CONDITIONS FOR SELF-PROPELLED WORK VEHICLES
A system and method are provided for determining mistrack conditions in work vehicles such as excavators having first and second tracks. A controller uses data from onboard sensors (e.g., cameras, lidar) having an external field of view to detect a first position of at least a first component (e.g., a track) of the work vehicle relative to a first external point in a local reference system independent of a global reference system, and to detect, upon the work vehicle having advanced a predetermined distance from the detected first position, a second position of the at least first component of the work vehicle relative to a second external point in the local reference system. The controller further determines an amount of mistrack error corresponding to a difference between the detected second position and an expected second position, and generates an output signal based on the determined amount of mistrack error.
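The mistrack-error computation above can be sketched in a few lines: compare the detected second position against the position expected after straight travel. This is an assumed illustration; the local-frame convention (straight travel along the x-axis) and function name are not from the patent.

```python
import math

def mistrack_error(first_pos, second_pos, advance_distance):
    """Mistrack error in a local (x, y) reference frame.

    Assumes (hypothetically) that the vehicle was commanded to advance
    straight along the local x-axis, so the expected second position is
    the first position shifted by advance_distance in x.
    Returns the (dx, dy) deviation and its magnitude.
    """
    expected = (first_pos[0] + advance_distance, first_pos[1])
    dx = second_pos[0] - expected[0]
    dy = second_pos[1] - expected[1]
    return (dx, dy), math.hypot(dx, dy)

# Vehicle advanced 5 m but drifted 0.3 m laterally: mistrack detected.
(deviation, magnitude) = mistrack_error((0.0, 0.0), (5.0, 0.3), 5.0)
print(magnitude)  # 0.3
```

An output signal (e.g., an operator warning or a track-speed correction) would then be generated when the magnitude exceeds a tolerance.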
SPARSE UNDER DISPLAY LIDAR
A system to sample light, including an array of light-sensitive pixels and a content display. The content display includes an array of content-display pixels and an array of masking pixels individually selectable to switch between an opaque state and a transparent state, the array of masking pixels being aligned with the array of light-sensitive pixels so that light-sensitive pixels may be selected to receive light passing through the content display by selecting the states of the array of masking pixels.
User interface for displaying point clouds generated by a LiDAR device on a UAV
Techniques are disclosed for real-time mapping in a movable object environment. A system for real-time mapping in a movable object environment may include at least one movable object including a computing device, a scanning sensor electronically coupled to the computing device, and a positioning sensor electronically coupled to the computing device. The system may further include a client device in communication with the at least one movable object, the client device including a visualization application which is configured to receive point cloud data from the scanning sensor and position data from the positioning sensor, record the point cloud data and the position data to a storage location, generate a real-time visualization of the point cloud data and the position data as they are received, and display the real-time visualization using a user interface provided by the visualization application.
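The receive-record-visualize flow of the visualization application above can be sketched as a small frame handler. This is a hedged illustration; the class name, the in-memory log standing in for the storage location, and the translation-only pose transform are all assumptions.

```python
class PointCloudRecorder:
    """Buffers incoming (point cloud, position) frames and returns
    position-corrected points ready for real-time display."""

    def __init__(self):
        self.log = []  # stands in for the recorded storage location

    def on_frame(self, points, position):
        """Record one frame, then shift sensor-frame points by the
        vehicle position (translation only; a real pipeline would also
        apply orientation from the positioning sensor)."""
        self.log.append((points, position))
        px, py, pz = position
        return [(x + px, y + py, z + pz) for x, y, z in points]

# One point 1 m ahead of a UAV located at x = 10 m maps to x = 11 m.
recorder = PointCloudRecorder()
world_points = recorder.on_frame([(1.0, 0.0, 0.0)], (10.0, 0.0, 0.0))
print(world_points)  # [(11.0, 0.0, 0.0)]
```

Each returned frame would be appended to the on-screen point cloud by the UI, while `log` accumulates the raw data for later replay or export.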