G01S3/8006

EMERGENCY VEHICLE DETECTION

Techniques for determining a direction of arrival of an emergency sound are discussed. A plurality of audio sensors of a vehicle can capture audio data representing sound in an environment of the vehicle. An audio sensor pair can be selected from the plurality of audio sensors to generate audio data representing the sound. An angular spectrum associated with the audio sensor pair can be determined based on the audio data. A feature associated with the audio data can be determined based on the angular spectrum and/or the audio data itself. A direction of arrival (DoA) value associated with the sound may be determined based on the feature using a machine-learned model. An emergency sound (e.g., a siren) can be detected in the audio data, and a direction associated with the emergency relative to the vehicle can be determined based on the feature and the DoA value.
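
The pairwise angular-spectrum step can be sketched with a generic GCC-PHAT steered-response computation for a single far-field microphone pair (the function name, geometry, and parameters here are illustrative assumptions, not the claimed implementation):

```python
import numpy as np

def pair_angular_spectrum(x1, x2, fs, mic_distance, n_angles=181, c=343.0):
    """Angular spectrum for one microphone pair via GCC-PHAT.

    Whitens the cross-spectrum of the two channels (PHAT weighting) and
    evaluates the steered response at the far-field TDOA implied by each
    candidate angle (0..180 degrees relative to the pair axis).
    """
    X1 = np.fft.rfft(x1)
    X2 = np.fft.rfft(x2)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12                          # PHAT whitening
    freqs = np.fft.rfftfreq(len(x1), d=1.0 / fs)
    angles = np.linspace(0.0, 180.0, n_angles)
    taus = mic_distance * np.cos(np.radians(angles)) / c    # candidate TDOAs
    spectrum = (np.exp(2j * np.pi * np.outer(taus, freqs)) * cross).sum(axis=1).real
    return angles, spectrum
```

The peak of `spectrum` gives a per-pair DoA estimate; in the abstract above, such a spectrum (and features derived from it) would feed the machine-learned model rather than be used directly.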

SINGLE ANTENNA DIRECTION FINDING AND LOCALIZATION
20210080533 · 2021-03-18 ·

Single antenna direction finding is performed by physically moving a device to different device positions. As the device is physically moved, signal processing hardware within the device is used to make a plurality of signal response measurements of a signal detected by a single antenna of the device. The signal emanates from an object. The plurality of signal response measurements are made by sampling signal response at a plurality of sample times. A means for 3-dimensional positioning makes a plurality of inertial measurements at the plurality of sample times. The plurality of signal response measurements and the plurality of inertial measurements are used to produce a virtual response array vector. The virtual response array vector is used to calculate a direction of arrival from the object to the device.
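
The virtual-array idea can be illustrated with a minimal narrowband beamforming sketch: complex signal measurements taken by one antenna at several known positions are treated as the outputs of a virtual array (the 2-D far-field geometry and function name are assumptions for illustration; the actual method derives the positions from the inertial measurements):

```python
import numpy as np

def virtual_array_doa(samples, positions, wavelength, n_angles=360):
    """DoA from a 'virtual array': one antenna's complex measurements taken
    at different positions, beamformed against candidate plane waves."""
    angles = np.radians(np.arange(n_angles))
    dirs = np.stack([np.cos(angles), np.sin(angles)])       # (2, n_angles)
    # phase a plane wave from each candidate direction would have at each position
    steering = np.exp(1j * 2 * np.pi / wavelength * positions @ dirs)
    power = np.abs(np.conj(steering).T @ samples) ** 2      # steered power
    return float(np.degrees(angles[int(np.argmax(power))]))
```

Because the positions are irregular, the virtual aperture can span several wavelengths without the grating-lobe ambiguities of a uniform array.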

SIMULTANEOUS ACOUSTIC EVENT DETECTION ACROSS MULTIPLE ASSISTANT DEVICES
20230419951 · 2023-12-28 ·

Implementations can detect respective audio data that captures an acoustic event at multiple assistant devices in an ecosystem that includes a plurality of assistant devices, process the respective audio data locally at each of the multiple assistant devices to generate respective measures associated with the acoustic event using respective event detection models, process the respective measures to determine whether the detected acoustic event is an actual acoustic event, and cause an action associated with the actual acoustic event to be performed in response to determining that the detected acoustic event is the actual acoustic event. In some implementations, the multiple assistant devices that detected the respective audio data are anticipated to detect the respective audio data that captures the actual acoustic event based on a plurality of historical acoustic events having been detected at each of the multiple assistant devices.
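
A minimal sketch of the fusion step (the function name and the threshold/quorum rule are illustrative assumptions; the patent's "respective measures" could be richer than scalar scores):

```python
def is_actual_event(device_scores, expected_devices, threshold=0.5, min_agree=2):
    """Declare a detected acoustic event 'actual' only when enough of the
    devices anticipated to hear it report a detection score above threshold.

    device_scores: mapping of device name -> locally computed event score.
    expected_devices: devices anticipated (from historical events) to hear it.
    """
    agreeing = [d for d in expected_devices
                if device_scores.get(d, 0.0) >= threshold]
    return len(agreeing) >= min_agree
```

For example, a glass-break score reported by the kitchen and hallway devices, but not the bedroom device, would still be confirmed as an actual event.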

Systems and methods for displaying a user interface

An electronic device includes a display, wherein the display is configured to present a user interface, wherein the user interface comprises a coordinate system. The coordinate system corresponds to physical coordinates. The display is configured to present a sector selection feature that allows selection of at least one sector of the coordinate system. The at least one sector corresponds to captured audio from multiple microphones. The sector selection may also include an audio signal indicator. The electronic device includes operation circuitry coupled to the display. The operation circuitry is configured to perform an audio operation on the captured audio corresponding to the audio signal indicator based on the sector selection.
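
The sector-gated audio operation can be sketched as a simple azimuth test (the wrap-around handling and the gain-style operation are assumptions for illustration):

```python
def in_sector(azimuth_deg, sector):
    """True if an azimuth (degrees) falls in a (start, end) sector of the
    coordinate system, including sectors that wrap through 0 degrees."""
    start, end = sector
    a = azimuth_deg % 360.0
    if start <= end:
        return start <= a <= end
    return a >= start or a <= end

def sector_gain(azimuth_deg, sector, pass_gain=1.0, reject_gain=0.0):
    """Gain applied to captured audio: pass sources whose estimated azimuth
    lies inside the selected sector, attenuate everything else."""
    return pass_gain if in_sector(azimuth_deg, sector) else reject_gain
```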

MULTIPLE-SOURCE TRACKING AND VOICE ACTIVITY DETECTIONS FOR PLANAR MICROPHONE ARRAYS
20210219053 · 2021-07-15 ·

Embodiments described herein provide a combined multi-source time difference of arrival (TDOA) tracking and voice activity detection (VAD) mechanism that is applicable to generic array geometries, e.g., a microphone array that lies on a plane. The combined multi-source TDOA tracking and VAD mechanism scans the azimuth and elevation angles of the microphone array in microphone pairs, based on which a planar locus of physically admissible TDOAs can be formed in the multi-dimensional TDOA space of multiple microphone pairs. In this way, the multi-dimensional TDOA tracking reduces the number of calculations usually involved in traditional TDOA estimation by performing the TDOA search for each dimension separately.
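
The admissible-TDOA locus can be sketched directly: scan azimuth and elevation and record the far-field TDOA of each microphone pair (function and grid names are illustrative assumptions, not the claimed mechanism):

```python
import numpy as np

def admissible_tdoas(mic_positions, pairs, az_grid, el_grid, c=343.0):
    """Scan azimuth/elevation and return, for each (az, el) direction, the
    far-field TDOA of every microphone pair -- the locus of physically
    admissible TDOA vectors for the given array geometry.

    mic_positions: (n_mics, 3) array of coordinates in metres.
    Returns an array of shape (n_az, n_el, n_pairs) in seconds.
    """
    az, el = np.meshgrid(np.radians(az_grid), np.radians(el_grid), indexing="ij")
    # unit direction vectors toward each (az, el)
    u = np.stack([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)], -1)
    return np.stack([(u @ (mic_positions[i] - mic_positions[j])) / c
                     for i, j in pairs], axis=-1)
```

An observed multi-pair TDOA vector can then be matched against the nearest point on this locus, which is what makes a per-dimension (pairwise) search possible.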

System for receiving communications
10877124 · 2020-12-29 ·

Methods and systems for spatial-filtering transmitters and receivers capable of simultaneous communication with one or more receivers and transmitters, respectively, the receivers capable of outputting source directions to humans or devices. The methods and systems use spherical wave field partial wave expansion (PWE) models for transmitted and received fields at antennas and for waves generated by contributing sources. The source PWE models have expansion coefficients expressed as functions of the directional coordinates of the sources. For spatial-filtering receivers, a processor uses the source PWE model together with output signals from at least one sensor, sampled consistently with the Nyquist criterion and representative of the wave field, to determine directional coordinates of sources (with a reduced number of floating-point operations) and outputs the directional coordinates and communications to a reporter configured to report information to humans. For spatial-filtering transmitters, a processor uses known receiver directions and source partial wave expansions to generate signals for transducers that produce a composite total wave field conveying communications to the specified receivers. The methods and systems reduce the processing required for transmitting and receiving spatially filtered communications.

AZIMUTH ESTIMATION METHOD, DEVICE, AND STORAGE MEDIUM
20200395005 · 2020-12-17 ·

Embodiments of this application disclose an azimuth estimation method performed at a computing device, the method including: obtaining, in real time, multi-channel sampling signals and buffering the multi-channel sampling signals; performing wakeup word detection on one or more sampling signals of the multi-channel sampling signals and determining a wakeup word detection score for each channel of the one or more sampling signals; performing spatial spectrum estimation on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result when the wakeup word detection scores indicate that a wakeup word exists in the one or more sampling signals; and determining an azimuth of a target voice associated with the multi-channel sampling signals according to the spatial spectrum estimation result and a highest wakeup word detection score, thereby improving the accuracy of azimuth estimation in a voice interaction process.
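
The wakeup-gated flow can be sketched with a simple steered-power (delay-and-sum) spatial spectrum; the gating threshold, 2-D geometry, and function name are illustrative assumptions, and a real system would also weight the result by the highest-scoring channel:

```python
import numpy as np

def srp_azimuth(buffered, fs, mic_positions, wake_scores, threshold=0.6, c=343.0):
    """Wakeup-gated azimuth estimate: when the best per-channel wakeup score
    clears the threshold, steer a delay-and-sum beamformer over the buffered
    multichannel signals and return the azimuth (degrees) of maximum power."""
    if max(wake_scores) < threshold:
        return None                       # no wakeup word: skip the estimation
    n = buffered.shape[1]
    X = np.fft.rfft(buffered, axis=1)                    # (n_mics, n_bins)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    best_az, best_p = 0, -np.inf
    for az in range(0, 360, 2):
        u = np.array([np.cos(np.radians(az)), np.sin(np.radians(az))])
        delays = mic_positions @ u / c                   # per-mic delay (s)
        aligned = X * np.exp(2j * np.pi * np.outer(delays, freqs))
        p = np.sum(np.abs(aligned.sum(axis=0)) ** 2)     # steered power
        if p > best_p:
            best_az, best_p = az, p
    return best_az
```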

DEVICE FOR LOCATION BY ULTRASOUND

The invention relates to a device for locating a target, comprising: a generator of ultrasonic waves that can be reflected by the target; pairs of first and second sensors repeated in a first direction, the first and second sensors of each pair being arranged in a second direction different from the first direction; and a processing unit suitable for: a) measuring, for each pair of sensors, the phase shift between the ultrasonic waves received by the first sensor and by the second sensor; and b) establishing, from the differences between the measured phase shifts, a surface on which the target lies.
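
Step (a) amounts to a single-frequency phase comparison between the two sensors of a pair; a minimal sketch, assuming a known emission frequency and synchronized sampling (the function name is an illustrative assumption):

```python
import numpy as np

def pair_phase_shift(s1, s2, fs, f0):
    """Phase shift (radians) at frequency f0 between two sensor signals,
    estimated from the cross product of single-bin complex demodulates."""
    t = np.arange(len(s1)) / fs
    ref = np.exp(-2j * np.pi * f0 * t)   # complex demodulation at f0
    a1 = np.sum(s1 * ref)                # complex amplitude at sensor 1
    a2 = np.sum(s2 * ref)                # complex amplitude at sensor 2
    return np.angle(a1 * np.conj(a2))
```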

Apparatus and a method for unwrapping phase differences

An apparatus is provided that is configured to obtain a plurality of phase differences, each representing a difference in phase between a spectral component in a first sound signal and a corresponding spectral component in a second sound signal; to estimate an aliasing frequency, with phase differences in frequency bands above the aliasing frequency expected to be wrapped; and to unwrap the plurality of phase differences in dependence on the estimated aliasing frequency. A phase difference obtained from spectral components comprised in one frequency band is unwrapped independently of any phase difference obtained from spectral components comprised in another frequency band.
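
A hedged sketch of the per-band unwrapping (the use of a coarse TDOA estimate to pick the 2π multiple is an illustrative assumption; the apparatus only requires that bands above the estimated aliasing frequency be unwrapped independently of one another):

```python
import numpy as np

def unwrap_phase_differences(wrapped, freqs, f_alias, tdoa_est):
    """Unwrap per-band phase differences: below the aliasing frequency the
    wrapped value is trusted; above it, the 2*pi multiple nearest to the
    phase predicted by a coarse TDOA estimate is added, independently per band."""
    predicted = 2 * np.pi * freqs * tdoa_est       # phase implied by the TDOA
    k = np.round((predicted - wrapped) / (2 * np.pi))
    k[freqs <= f_alias] = 0.0                      # no wrapping expected here
    return wrapped + 2 * np.pi * k
```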