G01S3/8006

SOUND SOURCE DIRECTION ESTIMATION DEVICE AND METHOD, AND PROGRAM

The present technology relates to a sound source direction estimation device and method, and a program that can reduce the amount of computation required to estimate the direction of a target sound source. A first estimation unit estimates a first horizontal angle, that is, a horizontal angle of a sound source direction, from an input acoustic signal. A second estimation unit estimates a second horizontal angle, that is, the horizontal angle of the sound source direction, and an elevation angle within a predetermined range near the first horizontal angle. The present technology can be applied to a device having a function of estimating the direction from which a voice is uttered by a surrounding sound source (for example, a person).
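The coarse-to-fine search described above can be sketched as follows. This is an illustrative reading of the abstract, not the patented method: `score_fn` is a hypothetical spatial-spectrum function taking an azimuth and elevation in degrees, and the step sizes and the ±10° refinement window are assumptions.

```python
import numpy as np

def coarse_to_fine_doa(score_fn, coarse_step=10.0, fine_step=2.0, window=10.0):
    """Two-stage direction search: first scan horizontal angles only, then
    search azimuth and elevation jointly in a narrow window around the
    coarse estimate, reducing the total number of evaluations."""
    # Stage 1: coarse horizontal (azimuth-only) scan at elevation 0.
    azimuths = np.arange(0.0, 360.0, coarse_step)
    coarse_az = azimuths[np.argmax([score_fn(a, 0.0) for a in azimuths])]

    # Stage 2: fine 2D search restricted to +/- window degrees of azimuth.
    best, best_score = (coarse_az, 0.0), -np.inf
    for az in np.arange(coarse_az - window, coarse_az + window + fine_step, fine_step):
        for el in np.arange(-40.0, 40.0 + fine_step, fine_step):
            s = score_fn(az % 360.0, el)
            if s > best_score:
                best_score, best = s, (az % 360.0, el)
    return best
```

Scanning only the azimuth first, then a small 2D neighborhood, evaluates far fewer points than a full 2D grid over all azimuths and elevations.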

Method of providing sound tracking information, sound tracking apparatus for vehicles, and vehicle having the same
10810881 · 2020-10-20 ·

Disclosed are a method of providing sound tracking information, a sound tracking apparatus for vehicles, and a vehicle having the sound tracking apparatus. The method of providing sound tracking information includes generating sound tracking results based on sound data generated by sensing sound generated around a vehicle, calculating 3D coordinates of a target sound source according to angle values of the target sound source recognized from the sound tracking results, and generating a notification of the target sound source based on the 3D coordinates. The sound tracking results include information on the probabilities that an object corresponding to the target sound source is present at respective angles in each of consecutive frames over time.
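As a minimal sketch of the coordinate step, assuming the tracked angles are an azimuth and elevation relative to the vehicle and that a range estimate is available (the abstract does not specify how range is obtained), the 3D coordinates follow from a standard spherical-to-Cartesian conversion:

```python
import math

def source_to_vehicle_xyz(azimuth_deg, elevation_deg, range_m):
    """Convert a tracked source's angles (relative to the vehicle) and an
    estimated range into 3D Cartesian coordinates: x forward, y left, z up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z
```

The axis convention here (x forward, y left, z up) is an assumption; any right-handed vehicle frame works the same way.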

VOICE INPUT DEVICE AND METHOD, AND PROGRAM

The present technology relates to a voice input device and method, and a program that facilitate estimation of an utterance direction The voice input device includes: a fixed part disposed at a predetermined position; a movable part movable with respect to the fixed part; a microphone array attached to the fixed part; an utterance direction estimation unit configured to estimate an utterance direction on the basis of a voice from an utterer that is input from the microphone array; and a driving unit configured to drive the movable part according to the estimated utterance direction. The voice input device can be used by installation in, for example, a smart speaker, a voice agent, a robot, and the like.

DUAL ACOUSTIC PRESSURE AND HYDROPHONE SENSOR ARRAY SYSTEM

An aspect of the invention is directed to a system of both atmospheric and underwater sensors for measuring pressure waves from a noise source. A system of pressure sensors can be formed to determine the location of an external noise source, whether in air or underwater. The system includes at least two arrays consisting of pressure sensors, including at least one atmospheric pressure sensor and at least one underwater pressure sensor, such as a hydrophone. Each sensor may be a seven-fiber intensity modulated fiber optic pressure sensor. The system includes an analog to digital converter for digitizing the pressure data received from each sensor and a processor which processes the received signals to calculate an approximate location of the noise source based upon the pressure signals received by the sensors at different times of arrival. The system can provide this capability in remote applications due to its low power requirements.
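A basic building block of the times-of-arrival processing described above is estimating the delay between a pair of sensor recordings. The sketch below uses plain cross-correlation, a common first step; the patent's actual location solver across the atmospheric and underwater arrays is not reproduced here:

```python
import numpy as np

def tdoa_samples(sig_a, sig_b):
    """Estimate the time difference of arrival (in samples) between two
    sensor recordings by locating the peak of their cross-correlation.
    A positive result means the wavefront reached sensor B first."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    # With mode="full", lag 0 sits at index len(sig_b) - 1.
    return int(np.argmax(corr)) - (len(sig_b) - 1)
```

In a real deployment the sample delay would be converted to seconds using the digitizer's sample rate, then combined across sensor pairs (with the appropriate sound speed for air or water) to solve for the source position.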

DEVICE AND METHOD FOR ESTIMATING DIRECTION OF ARRIVAL OF SOUND FROM A PLURALITY OF SOUND SOURCES
20200249308 · 2020-08-06 ·

A device estimates the direction of arrival (DOA) of sound from multiple sound sources received by P microphones, wherein P &gt; 1. The device is configured to transform the output signals of the microphones into the frequency domain and compute a covariance matrix for each of N frequency bins in a range of frequencies of the sound. Further, the device is configured to calculate an adapted covariance matrix from each of the covariance matrices for wide-band merging, calculate an accumulated covariance matrix from the N adapted covariance matrices, and estimate the DOA for each of the sound sources based on the accumulated covariance matrix. In order to calculate an adapted covariance matrix from a covariance matrix, the device is configured to spectrally decompose the covariance matrix to obtain a plurality of eigenvectors, rotate each obtained eigenvector, and reconstruct each rotated eigenvector back into the shape of the covariance matrix.
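The patented eigenvector-rotation merging is not reproduced below. As an illustration of only the final step, this sketch estimates a DOA from an (already accumulated) covariance matrix by scanning steering vectors of an assumed half-wavelength uniform linear array:

```python
import numpy as np

def steering(theta_deg, n_mics, spacing_ratio=0.5):
    """Steering vector of a uniform linear array whose element spacing is
    given as a fraction of the wavelength (0.5 = half wavelength)."""
    k = 2.0 * np.pi * spacing_ratio * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(n_mics))

def doa_from_covariance(R, grid=np.arange(-90.0, 90.5, 0.5)):
    """Scan candidate angles and return the one maximizing the conventional
    beamformer output power a(theta)^H R a(theta)."""
    n = R.shape[0]
    powers = [np.real(steering(t, n).conj() @ R @ steering(t, n)) for t in grid]
    return grid[int(np.argmax(powers))]
```

Subspace methods such as MUSIC would replace the beamformer scan with an eigendecomposition of the accumulated matrix, which is where the patent's adapted covariance matrices come into play.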

USING CLASSIFIED SOUNDS AND LOCALIZED SOUND SOURCES TO OPERATE AN AUTONOMOUS VEHICLE
20200241552 · 2020-07-30 ·

An ambient sound environment is captured by a microphone array of an autonomous vehicle traveling in the ambient sound environment. A perception module of the autonomous vehicle classifies sounds and localizes sound sources in the ambient sound environment. Classification is performed using spectrum analysis and/or machine learning. In an embodiment, sound sources within a field of view (FOV) of an image sensor of the autonomous vehicle are localized in a visual scene generated by the perception module. In an embodiment, one or more sound sources outside the FOV of the image sensors are localized in a static digital map. Localization is performed using parametric or non-parametric techniques and/or machine learning. The output of the perception module is input into a planning module of the autonomous vehicle to plan a route or trajectory for the autonomous vehicle in the ambient sound environment.

System and method for autonomous joint detection-classification and tracking of acoustic signals of interest

Systems and methods are disclosed for autonomous joint detection-classification of acoustic sources of interest. Localization and tracking from unmanned marine vehicles are also described. Based on receiving acoustic signals originating above or below the surface, a processor can process the acoustic signals to determine the target of interest associated with the acoustic signal. The methods and systems autonomously and jointly detect and classify a target of interest. A target track can be generated corresponding to the locations of the detected target of interest. A classifier can be used representing spectral characteristics of a target of interest.

System for receiving communications
10690744 · 2020-06-23 ·

Methods and systems are disclosed for spatial filtering transmitters and receivers capable of simultaneous communication with one or more receivers and transmitters, respectively, the receivers being capable of outputting source directions to humans or devices. The methods and systems use spherical wave field partial wave expansion (PWE) models for transmitted and received fields at antennas and for waves generated by contributing sources. The source PWE models have expansion coefficients expressed as functions of the directional coordinates of the sources. For spatial filtering receivers, a processor uses the source PWE model together with the output signals from at least one sensor outputting signals, consistent with Nyquist criteria, representative of the wave field, to determine the directional coordinates of sources (whereby the number of floating point operations is reduced) and outputs the directional coordinates and communications to a reporter configured for reporting information to humans. For spatial filtering transmitters, a processor uses known receiver directions and source partial wave expansions to generate signals for transducers producing a composite total wave field conveying communications to the specified receivers. The methods and systems reduce the processing required for transmitting and receiving spatially filtered communications.
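For reference, a spherical partial wave expansion of an interior wave field commonly takes the following form; this is the textbook expression, and the patent's normalization and coefficient conventions may differ:

```latex
p(r,\theta,\phi) \;=\; \sum_{n=0}^{N} \sum_{m=-n}^{n} a_{nm}\, j_n(kr)\, Y_n^m(\theta,\phi)
```

where $j_n$ is the spherical Bessel function, $Y_n^m$ are the spherical harmonics, and $k$ is the wavenumber; for a radiating (exterior) source field, $j_n$ is replaced by a spherical Hankel function, and the coefficients $a_{nm}$ carry the dependence on the source's directional coordinates that the abstract refers to.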

SYSTEM AND METHOD FOR FEATURE BASED BEAM STEERING
20200184954 · 2020-06-11 ·

A method, computer program product, and computer system for identifying, by a computing device, a plurality of sources. One or more feature values of a plurality of features may be assigned to a first source of the plurality of sources. One or more feature values of the plurality of features may be assigned to a second source of the plurality of sources. A first score for the first source and a second score for the second source may be determined based upon, at least in part, the one or more feature values assigned to the first source and the second source. One of the first source and the second source may be selected for spatial processing based upon, at least in part, the first score for the first source and the second score for the second source.
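The scoring and selection logic can be sketched as follows. The feature names and the weighted-sum scoring rule are assumptions for illustration, since the abstract does not fix a particular score function:

```python
def select_source_for_spatial_processing(sources, weights):
    """Score each candidate source as a weighted sum of its feature values
    and return the id of the highest-scoring one. `sources` maps a source
    id to a dict of feature values; `weights` maps feature name to weight."""
    def score(features):
        return sum(weights.get(name, 0.0) * value for name, value in features.items())
    return max(sources, key=lambda sid: score(sources[sid]))
```

The selected source would then be the one toward which the beam is steered for spatial processing.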

Audio processing apparatus and audio processing method

An audio processing apparatus includes a first-section detection unit configured to detect, on the basis of an audio signal of a plurality of channels, a first section in which the power of a spatial spectrum in a sound source direction is higher than a predetermined amount of power; a speech state determination unit configured to determine a speech state on the basis of the audio signal within the first section; a likelihood calculation unit configured to calculate a first likelihood that the type of sound source of the audio signal within the first section is voice and a second likelihood that the type of sound source is non-voice; and a second-section detection unit configured to determine, on the basis of the first likelihood and the second likelihood within a second section in which the power is higher than the average power of a speech section, whether or not the second section is a voice section.
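As a small illustration of the final decision, an assumed log-likelihood-ratio test (which the abstract does not spell out) can accumulate per-frame log-likelihoods under the voice and non-voice models over the section and compare them:

```python
def is_voice_section(frame_loglik_voice, frame_loglik_nonvoice, margin=0.0):
    """Decide whether a detected high-power section is voice by comparing
    the accumulated log-likelihoods of the voice and non-voice models
    over the frames of the section."""
    llr = sum(frame_loglik_voice) - sum(frame_loglik_nonvoice)
    return llr > margin
```

The `margin` parameter is a hypothetical decision threshold; raising it trades missed speech for fewer false voice detections.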