Patent classifications
G01S5/20
CONTROLLING A DEVICE BY TRACKING MOVEMENT OF HAND USING ACOUSTIC SIGNALS
A method, device and computer program product for controlling the device by tracking a movement of a hand or other object. The device receives acoustic signals. At least a portion of the received signals is transformed into two-dimensional sinusoids whose frequencies are proportional to an angle-of-arrival (AoA) and a propagation distance of the reflected signals. An AoA-distance profile is derived from the signals received from the object by evaluating the frequencies of the two-dimensional sinusoids. An AoA-distance pair is then estimated from the AoA-distance profile, and the current location of the object is determined from that pair. The device then performs a command in response to detecting, from the prior and current locations of the object, that the user moved to invoke the command.
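As an illustration (not part of the patent text), the 2D-sinusoid idea above can be sketched with synthetic data: after de-chirping an FMCW-style reflection at a microphone array, the temporal frequency of the resulting 2D sinusoid encodes propagation distance and the spatial frequency across microphones encodes sin(AoA), so a 2D FFT yields an AoA-distance profile whose peak is the AoA-distance pair. Every parameter below (sample rate, carrier, spacing, sweep rate) is an assumed demo value.

```python
import numpy as np

# Illustrative parameters -- assumptions for the demo, not from the patent.
fs = 48000.0        # sample rate (Hz)
c = 343.0           # speed of sound (m/s)
f0 = 20000.0        # ultrasonic carrier (Hz)
d = 0.004           # microphone spacing (m)
slope = 1e6         # FMCW sweep rate (Hz/s)
n_samples, n_mics = 256, 8

true_dist, true_aoa = 0.5, np.deg2rad(30.0)   # ground truth for the demo

# The de-chirped reflection is a 2D sinusoid over (time sample n, mic index m):
# its temporal frequency is proportional to round-trip distance, and its
# spatial frequency to sin(AoA).
f_beat = 2.0 * true_dist * slope / c          # distance -> beat frequency
n = np.arange(n_samples)[:, None]
m = np.arange(n_mics)[None, :]
x = np.exp(2j * np.pi * (f_beat * n / fs + f0 * d * np.sin(true_aoa) * m / c))

# A zero-padded 2D FFT serves as the AoA-distance profile; its peak
# location gives the estimated AoA-distance pair.
profile = np.abs(np.fft.fft2(x, s=(1024, 64)))
ki, kj = np.unravel_index(np.argmax(profile), profile.shape)
est_dist = (ki * fs / 1024) * c / (2.0 * slope)
est_aoa = np.degrees(np.arcsin((kj / 64) * c / (f0 * d)))
```

The estimates land within one FFT bin of the true distance (0.5 m) and AoA (30°); finer accuracy would come from interpolating around the peak.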
AI APPARATUS AND METHOD FOR DETERMINING LOCATION OF USER
Provided is an AI apparatus for determining a location of a user including: a communication unit configured to communicate with at least one external AI apparatus obtaining first image data and first sound data; a memory configured to store location information on the at least one external AI apparatus and the AI apparatus; a camera configured to obtain second image data; a microphone configured to obtain second sound data; and a processor configured to: generate first recognition information on the user based on the second image data; generate second recognition information on the user based on the second sound data; obtain, from the at least one external AI apparatus, third recognition information on the user generated based on the first image data and fourth recognition information on the user generated based on the first sound data; determine the user's location based on the location information, the first recognition information, and the third recognition information; and calibrate the determined user's location based on the second recognition information and the fourth recognition information.
Transcoder enabled cloud of remotely controlled devices
Various embodiments are directed to one or more transcoder devices in communication with an input device such as a remote control device and multiple destination devices in which the transcoder device(s) facilitate communication between the remote control and the various destination devices in the vicinity. The transcoder device(s) can also provide the user with an environmental awareness of conditions and events surrounding the user. Other embodiments are described and claimed.
Audio Distance Estimation for Spatial Audio Processing
A method for spatial audio signal processing including: determining at least one first direction parameter for at least one frequency band based on microphone signals received from a first microphone array; determining at least one second direction parameter for the at least one frequency band based on at least one microphone signal received from at least one second microphone, wherein microphones from the first microphone array and the at least one second microphone are spatially separated from each other; processing the determined at least one first direction parameter and the at least one second direction parameter to determine at least one distance parameter for the at least one frequency band; and enabling an output and/or store of the at least one distance parameter, at least one audio signal, and the at least one first direction parameter.
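Combining two direction parameters from spatially separated microphones into a distance parameter amounts to triangulation. A minimal sketch, assuming a known baseline between the two pickup points and far-from-degenerate geometry (the patent abstract does not specify the geometry):

```python
import math

def distance_from_bearings(baseline, alpha, beta):
    # Law of sines in the triangle (array 1, array 2, source):
    # r1 / sin(beta) = baseline / sin(pi - alpha - beta)
    return baseline * math.sin(beta) / math.sin(alpha + beta)

# Synthetic check: place a source at (x, y), derive the bearing each
# pickup point would measure, and recover the range from array 1.
src = (0.8, 1.2)
baseline = 1.0                                 # metres between pickup points
alpha = math.atan2(src[1], src[0])             # bearing at array 1 (origin)
beta = math.atan2(src[1], baseline - src[0])   # bearing at second microphone
r1 = distance_from_bearings(baseline, alpha, beta)
# r1 matches the true distance from array 1 to the source
```

In practice the direction parameters are per-frequency-band estimates, so this computation would run per band and be smoothed over time.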
METHOD AND SYSTEM FOR LOCATING THE ORIGIN OF AN AUDIO SIGNAL WITHIN A DEFINED SPACE
A method and system for identifying a sensor node located closest to the origin of an audio signal. There can be at least three sensor nodes connected to a computational node, and each sensor node includes an audio directional sensor and a device for providing a reference direction. The sensor nodes can receive the audio signal and each audio directional sensor can provide an angle of propagation of the audio signal relative to the reference direction. The angular mean of the measured angles of propagation from all sensor nodes is calculated and the sensor node providing the angle which is closest to the angular mean is defined as the sensor node being closest to the origin of the audio signal.
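The angular-mean step deserves care: angles wrap at 360°, so a plain arithmetic mean of, say, 350° and 10° gives 180° instead of 0°. The standard fix is the circular mean (average the unit vectors, take the angle of the sum). A small sketch with hypothetical sensor readings:

```python
import math

def angular_mean(angles_deg):
    # Circular mean: sum unit vectors, then take the resulting angle.
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c)) % 360

def ang_diff(a, b):
    # Smallest absolute difference between two angles, in degrees.
    return abs((a - b + 180) % 360 - 180)

# Hypothetical angles of propagation measured by three sensor nodes,
# relative to a shared reference direction.
readings = {"node_a": 350.0, "node_b": 10.0, "node_c": 90.0}
mean = angular_mean(readings.values())
closest = min(readings, key=lambda k: ang_diff(readings[k], mean))
# closest is the node whose angle lies nearest the angular mean
```

Here the circular mean is about 27°, so `node_b` (10°) is selected as the node nearest the origin of the audio signal.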
Frictionless access control system providing ultrasonic user location
A frictionless access control system and method providing ultrasonic user location are disclosed. The access control system authorizes users within proximity of an access point, such as a door, based upon user information (e.g., credentials) sent in RF wireless messages from user devices carried by the users. When the users are authorized, the system instructs the user devices to transmit coded acoustic signals. A positioning unit at the access point includes an ultrasonic microphone array, which is located above the access point and detects the acoustic signals. The positioning unit determines an angle of arrival (AoA) of the acoustic signals at the microphones of the array, and determines positions of the users relative to the access point from the AoA. In one implementation, pre-authorized users are granted access when their determined positions are within an inner zone of the access point.
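One common way to obtain an AoA from a microphone pair (a sketch of the general technique, not necessarily the patent's method) is to cross-correlate the two channels: under a far-field assumption, the inter-microphone delay is d·sin(θ)/c, so the correlation peak lag inverts to the angle. All values below are assumed for the demo.

```python
import numpy as np

# Assumed demo parameters, not from the patent.
fs = 192000.0   # sample rate high enough for ultrasound (Hz)
c = 343.0       # speed of sound (m/s)
d = 0.02        # spacing between two microphones of the array (m)

true_aoa = np.deg2rad(25.0)
lag_true = d * np.sin(true_aoa) / c * fs   # far-field TDOA, in samples

# Synthesize a coded wideband signal arriving at mic 0, delayed at mic 1.
rng = np.random.default_rng(0)
sig = rng.standard_normal(4096)
mic0 = sig
mic1 = np.roll(sig, int(round(lag_true)))

# Cross-correlate to find the integer-sample lag, then invert the geometry.
corr = np.correlate(mic1, mic0, mode="full")
lag = np.argmax(corr) - (len(mic0) - 1)
est_aoa = np.degrees(np.arcsin(np.clip(lag / fs * c / d, -1.0, 1.0)))
```

With two or more such angle estimates from different parts of the array, the user's position relative to the access point can be triangulated.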
METHOD OF WIRELESS GEOLOCATED INFORMATION COMMUNICATION IN SELF-VERIFYING ARRAYS
Methods and apparatus for transmitting information associated with verified positions of Nodes based upon wireless communications between Nodes included in an array. Values for variables derived from multiple wireless transmissions between Nodes are aggregated, and a position of a particular Node may be determined based upon multiple data sets generated by multiple communications between disparate Nodes. Information is geolocated based upon the respective positions of the disparate Nodes. A user interface may provide a pictorial view of positions of all or some Nodes in an array, as well as associated information.
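The abstract leaves the position-determination method open, but a representative technique for fixing a node from multiple inter-node measurements is least-squares trilateration over ranges to anchors with verified positions. A sketch with noiseless synthetic data (anchor layout and ranging method are assumptions, not from the patent):

```python
import numpy as np

# Hypothetical: three anchor nodes with verified positions, and ranges to
# an unknown node measured via wireless transmissions (e.g. RF timing).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)

# Linearize by subtracting the first anchor's range equation:
# ||p - a_i||^2 - ||p - a_0||^2 = r_i^2 - r_0^2  is linear in p.
a0, r0 = anchors[0], ranges[0]
A = 2.0 * (anchors[1:] - a0)
b = (r0**2 - ranges[1:]**2) + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
# pos recovers the unknown node's position, here (3, 4)
```

With noisy real-world ranges, more anchors and the same least-squares formulation give a position whose residual can serve as a self-verification check.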