Patent classifications
G01S7/51
Sliding window discrete Fourier transform (SWDFT) police signal warning receiver
In one embodiment, a police activity detector is provided. The detector includes a receiver section and a warning section. The receiver section is configured to receive signals generated in the context of law enforcement activity. The warning section is configured to respond to a pulsed signal received by the receiver section and provide an alert if a received signal correlates to a law enforcement signal. The warning section also includes a sliding window discrete Fourier transform (SWDFT) module. The SWDFT module is configured to receive a plurality of time series of data, corresponding to sampling a received pulsed signal at a set of sample rates matched to a plurality of target frequencies; to perform a SWDFT determination on each of the time series to determine a magnitude of the received signal at each of the target frequencies, an elevated magnitude revealing the presence of a received pulsed signal; and to issue an alert if the magnitude of the received signal at one or more of the target frequencies is greater than or equal to a predetermined threshold.
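The sliding-window DFT at the heart of this claim can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the patented implementation: the window size, bin index, and threshold are assumptions chosen for the example. It uses the standard SDFT recurrence, in which sliding the window one sample updates the bin as X ← (X − oldest + newest) · e^(j2πk/N).

```python
import cmath
import math

def swdft_magnitudes(samples, window_size, bin_k):
    """Sliding-window DFT magnitude at bin k, updated recursively.

    After the initial window is transformed directly, each slide of the
    window applies: X <- (X - oldest_sample + newest_sample) * twiddle,
    where twiddle = exp(j*2*pi*k/N).
    """
    twiddle = cmath.exp(2j * math.pi * bin_k / window_size)
    # Direct DFT of the first window to seed the recurrence.
    X = sum(samples[n] * cmath.exp(-2j * math.pi * bin_k * n / window_size)
            for n in range(window_size))
    mags = [abs(X)]
    for i in range(window_size, len(samples)):
        X = (X - samples[i - window_size] + samples[i]) * twiddle
        mags.append(abs(X))
    return mags

def detect_pulse(samples, window_size, bin_k, threshold):
    """Alert when any windowed magnitude meets or exceeds the threshold."""
    return any(m >= threshold for m in swdft_magnitudes(samples, window_size, bin_k))
```

For a real tone exactly on bin k, the windowed magnitude is N/2, so a 32-sample window over a cosine at bin 4 holds steady near 16, while silence stays at 0 and triggers no alert.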
Efficient algorithm for projecting world points to a rolling shutter image
An improved, efficient method for mapping world points from an environment (e.g., points generated by a LIDAR sensor of an autonomous vehicle) to locations (e.g., pixels) within rolling-shutter images taken of the environment is provided. This improved method allows for accurate localization of the world point in a rolling-shutter image via an iterative process that converges in very few iterations. The method poses localization as an iterative search for the time, within the rolling-shutter exposure period of the image, at which the world point was imaged by the camera. The method reduces the number of times the world point is projected into the normalized space of the camera image, often converging in three or fewer iterations.
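The fixed-point iteration described above can be sketched as follows. This is an assumed, simplified model rather than the patented method: it supposes a pinhole camera whose pose is a known function of time, a linear top-to-bottom readout, and a `cam_pose_at` callback that is hypothetical. Each iteration projects the point at the current time guess, reads off the image row, and converts that row back into a readout time.

```python
def project_rolling_shutter(world_pt, cam_pose_at, intrinsics, rows,
                            exposure_start, readout_time, iters=3):
    """Iteratively solve for the time t at which a rolling-shutter camera
    imaged world_pt: guess t -> project with pose(t) -> row -> new t."""
    fx, fy, cx, cy = intrinsics
    t = exposure_start + 0.5 * readout_time  # start at mid-exposure
    u = v = None
    for _ in range(iters):
        R, p = cam_pose_at(t)                 # rotation (3 rows) and position at t
        d = [world_pt[i] - p[i] for i in range(3)]
        xc = sum(R[0][i] * d[i] for i in range(3))   # world -> camera frame
        yc = sum(R[1][i] * d[i] for i in range(3))
        zc = sum(R[2][i] * d[i] for i in range(3))
        u = fx * xc / zc + cx                 # pinhole projection
        v = fy * yc / zc + cy
        # Row v was read out at this fraction of the exposure period.
        t = exposure_start + (v / rows) * readout_time
    return u, v, t
```

For a static camera the iteration is exact after one pass; for a moving camera, each pass refines the time estimate, matching the abstract's claim of convergence in three or fewer iterations for typical motion.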
Laser scanner with enhanced dynamic range imaging
A system and method for measuring three-dimensional (3D) coordinates is provided. The method includes rotating a 3D scanner about a first axis, the 3D scanner having a light source, a light receiver and a color camera. Light beams are emitted from the light source, and reflected light beams are received with the light receiver. A processor determines 3D coordinates of points on the object based on the emitted light beams and the reflected light beams. For each of the points, an intensity value is measured based on the reflected light beams. A color image of the object is acquired with the color camera. The intensity values are fused with the color image to generate an enhanced image that includes color data. Color data is merged with the 3D coordinates of the points. The 3D coordinates of the points are stored with the color data.
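One simple way to fuse per-point laser intensity with a color pixel, sketched below, is to blend the intensity into the value channel of the color in HSV space so hue and saturation survive. This is an assumed illustration of "fusing intensity with color," not the patented fusion; the equal-weight blend is an arbitrary choice.

```python
import colorsys

def fuse_intensity_with_color(rgb, intensity):
    """Blend a scanner intensity (0..1) into an RGB pixel (0..1 channels)
    by mixing it into the HSV value channel, preserving hue/saturation."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    v_fused = 0.5 * v + 0.5 * intensity   # simple equal-weight blend (assumption)
    return colorsys.hsv_to_rgb(h, s, v_fused)
```

For example, fusing a bright red pixel with zero intensity halves its brightness while keeping it red, which is the qualitative behavior an intensity-enhanced color image needs.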
INTENSITY DATA VISUALIZATION
Techniques for coloring a point cloud based on colors derived from LIDAR (light detection and ranging) intensity data are disclosed. In some embodiments, the coloring of the point cloud may employ an activation function that controls the colors assigned to different intensity values. Further, the activation function may be parameterized based on statistics computed for a distribution of intensities associated with a 3D scene and a user-selected sensitivity. Alternatively, a Fourier transform of the distribution of intensities or a clustering of the intensities may be used to estimate individual distributions associated with different materials, based on which the point cloud coloring may be determined from intensity data.
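A minimal version of the parameterized activation described above might use a sigmoid centered on the intensity distribution's mean, with its slope set by the standard deviation and a user-selected sensitivity. This is an assumed sketch consistent with the abstract, not the disclosed parameterization; the function names and the grayscale output are illustrative choices.

```python
import math

def intensity_to_gray(intensity, mean, std, sensitivity=1.0):
    """Sigmoid activation mapping a LIDAR intensity to [0, 1], centered
    on the distribution mean; sensitivity scales the slope."""
    z = sensitivity * (intensity - mean) / (std + 1e-9)
    return 1.0 / (1.0 + math.exp(-z))

def color_point_cloud(intensities, sensitivity=1.0):
    """Compute distribution statistics, then map every point's intensity
    through the activation function."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((x - mean) ** 2 for x in intensities) / n
    std = math.sqrt(var)
    return [intensity_to_gray(x, mean, std, sensitivity) for x in intensities]
```

Points at the mean intensity map to mid-gray (0.5), and raising the sensitivity steepens the curve so small intensity differences near the mean spread across more of the color range.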
Methods and Systems for LIDAR Optics Alignment
A method is provided that involves mounting a transmit block and a receive block in a LIDAR device to provide a relative position between the transmit block and the receive block. The method also involves locating a camera at a given position at which the camera can image light beams emitted by the transmit block and can image the receive block. The method also involves obtaining, using the camera, a first image indicative of light source positions of one or more light sources in the transmit block and a second image indicative of detector positions of one or more detectors in the receive block. The method also involves determining at least one offset based on the first image and the second image. The method also involves adjusting the relative position between the transmit block and the receive block based at least in part on the at least one offset.
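The offset determination in this method might, in the simplest case, compare the centroid of the imaged beam spots against the centroid of the imaged detector positions. The sketch below is a hypothetical reduction of that step, assuming both images are already in a common pixel frame; the patent's actual offset computation is not specified here.

```python
def centroid(points):
    """Centroid of a list of (x, y) pixel positions."""
    n = len(points)
    return (sum(p[0] for p in points) / n,
            sum(p[1] for p in points) / n)

def alignment_offset(beam_spots, detector_spots):
    """Pixel offset from the centroid of light-beam spots (first image)
    to the centroid of detector positions (second image); adjusting the
    relative transmit/receive position by this offset aligns the blocks."""
    sx, sy = centroid(beam_spots)
    dx, dy = centroid(detector_spots)
    return (dx - sx, dy - sy)
```

The returned offset is the correction to apply to the relative position between the transmit block and the receive block, driven toward (0, 0) as alignment improves.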
THREAT DETECTION AND NOTIFICATION SYSTEM FOR PUBLIC SAFETY VEHICLES
The present invention is directed to a threat detection and notification system which preferably comprises a LIDAR unit capable of detecting an object in its field of view during first and subsequent refresh scans, and outputting point cloud data representative of the object detected in a refresh frame corresponding to every refresh scan. A server is operatively connected to the LIDAR unit and is capable of receiving the point cloud data and determining, for the object detected within each refresh frame, a predetermined set of object-specific attributes. A user control interface is operatively connected to the server and is capable of creating a watch zone defining an area of interest less than the LIDAR unit's field of view, and is further capable of defining a plurality of watch mode parameters associated with the watch zone. A LIDAR alert engine is operatively connected to the server and is capable of determining whether the object detected in each refresh frame is located within the watch zone, calculating at least one indicant of motion of the object, and comparing at least one object-specific attribute and the at least one indicant of motion of the object to the defined watch mode parameters. The LIDAR alert engine is capable of alerting a predetermined user to the presence of the object in the event the object is within the watch zone and the aforementioned comparison indicates an alert condition.
ELECTRONIC DEVICE INCLUDING OPTICAL SENSOR MODULE
In embodiments, an electronic device may include a housing, a display panel, and an optical sensor module. The display panel is disposed in an inner space of the housing and is at least partially visible from an outside through the housing, the display panel including a display area, a first non-display area disposed adjacent to at least a peripheral portion of the display area, and a second non-display area disposed adjacent to at least a peripheral portion of the first non-display area. The optical sensor module is disposed in the inner space at least partially overlapping the display panel, and includes a flexible printed circuit board (FPCB), a light emitting structure disposed on the FPCB at least partially overlapping the first non-display area when the display panel is viewed from above, and a light receiving structure disposed on the FPCB at least partially overlapping the display area when the display panel is viewed from above. The display area has a first transmittance, and the first non-display area has a second transmittance greater than the first transmittance.