G01S5/20

Transcoder enabled cloud of remotely controlled devices
09736546 · 2017-08-15

Various embodiments are directed to one or more transcoder devices in communication with an input device such as a remote control device and multiple destination devices in which the transcoder device(s) facilitate communication between the remote control and the various destination devices in the vicinity. The transcoder device(s) can also provide the user with an environmental awareness of conditions and events surrounding the user. Other embodiments are described and claimed.

Limited Access Community Surveillance System
20170223314 · 2017-08-03

A surveillance system includes at least one controllable first video camera (422) that has a field of vision (408). A geographically-aware event indicating device (426) has a known spatial relationship to the first video camera (422). A processor (110) is in communication with the first video camera (422) and the geographically-aware event indicating device (426). The processor (110) is programmed to aim the first video camera (422) in the direction indicated by the geographically-aware event indicating device (426) when that device (426) indicates an event consistent with predetermined criteria. A computer readable memory (112) is in communication with the processor (110) and stores video data from the first video camera (422) and sound data from the first acoustic sensor.
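
The aiming step described above reduces to computing pan and tilt angles from the known spatial relationship between the camera and the reported event location. A minimal sketch, with the coordinate convention and all names assumed rather than taken from the patent:

```python
import math

def pan_tilt(camera_pos, event_pos):
    """Pan/tilt angles (radians) to aim a camera at an event location,
    given both positions in a shared x/y/z coordinate frame."""
    dx = event_pos[0] - camera_pos[0]
    dy = event_pos[1] - camera_pos[1]
    dz = event_pos[2] - camera_pos[2]
    pan = math.atan2(dy, dx)                 # horizontal bearing
    tilt = math.atan2(dz, math.hypot(dx, dy))  # elevation above horizon
    return pan, tilt
```

For an event at (1, 1, 0) seen from a camera at the origin, this yields a 45° pan and zero tilt.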

METHODS OF COMMUNICATING GEOLOCATED DATA BASED UPON A SELF-VERIFYING ARRAY OF NODES

Methods and apparatus for verifying the respective positions of nodes based upon ultra-wideband wireless communications between nodes included in an array. Values for variables derived from multiple wireless transmissions between the nodes are aggregated, and the position of a particular node may be determined based upon multiple data sets generated by multiple communications of disparate nodes. Data is transmitted to a smart device based upon the position of a particular node and a direction of interest. In addition, the presence of an obstacle to wireless communication between some nodes may be derived from the data sets. A user interface may provide a pictorial view of the positions of all or some nodes in the array, as well as a perceived obstruction.
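
The abstract does not specify the position computation; one common way a node's position can be determined from range-like values to nodes at known positions is linearized multilateration. A minimal sketch under that assumption (exact for three anchors in 2-D; all names are illustrative, not from the patent):

```python
import math

def locate(anchors, ranges):
    """Estimate a 2-D position from measured ranges to anchors at known
    positions, by subtracting the first range equation from the others
    to obtain a linear 2x2 system."""
    (x0, y0), r0 = anchors[0], ranges[0]
    rows = []
    for (xi, yi), ri in zip(anchors[1:3], ranges[1:3]):
        a = 2 * (xi - x0)
        b = 2 * (yi - y0)
        c = r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2
        rows.append((a, b, c))
    (a1, b1, c1), (a2, b2, c2) = rows
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = (3.0, 4.0)
ranges = [math.dist(target, a) for a in anchors]
print(locate(anchors, ranges))  # recovers (3.0, 4.0)
```

With more than three anchors, the same linearized equations would typically be solved in a least-squares sense, which also supports the self-verification idea: a node whose reported ranges are inconsistent with the aggregate solution stands out.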

Method and apparatus for selecting voice-enabled device

A method and apparatus for selecting a voice-enabled device are disclosed. The voice-enabled device selecting apparatus can reduce the communication load between a home IoT server and home IoT devices, and minimize the server's computation, by obtaining from a plurality of voice recognition devices information on the direction from which each device received a wakeup word, determining the position where the wakeup word was spoken using that direction information, and selecting the voice recognition device closest to the speech position as the voice-enabled device. At least one of the voice-enabled device selecting apparatus, the IoT devices, and the server may be associated with an artificial intelligence (AI) module, an unmanned aerial vehicle (UAV, or drone), a robot, an augmented reality (AR) device, a virtual reality (VR) device, or a device related to a 5G service.
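
The described determination of the speech position from per-device direction information can be illustrated by intersecting two bearing rays and then picking the nearest device. A hedged 2-D sketch with two devices; angle convention and all names are assumptions, not the patented method:

```python
import math

def speech_position(p1, theta1, p2, theta2):
    """Intersect two bearing rays (angles in radians from the +x axis)
    cast from devices at p1 and p2 toward the speaker."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t*d1 = p2 + s*d2 for t (Cramer's rule on a 2x2 system).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) + d2[0] * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def select_device(devices, bearings):
    """Return (index of closest device, estimated speech position)."""
    pos = speech_position(devices[0], bearings[0], devices[1], bearings[1])
    idx = min(range(len(devices)), key=lambda i: math.dist(devices[i], pos))
    return idx, pos
```

Because only bearings travel to the selection logic, not raw audio, this kind of scheme keeps per-utterance traffic between devices and server small, matching the load-reduction claim.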

Method and system for locating the origin of an audio signal within a defined space
11350212 · 2022-05-31

A method and system for identifying a sensor node located closest to the origin of an audio signal. There can be at least three sensor nodes connected to a computational node, and each sensor node includes an audio directional sensor and a device for providing a reference direction. The sensor nodes can receive the audio signal and each audio directional sensor can provide an angle of propagation of the audio signal relative to the reference direction. The angular mean of the measured angles of propagation from all sensor nodes is calculated and the sensor node providing the angle which is closest to the angular mean is defined as the sensor node being closest to the origin of the audio signal.
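
The selection rule in this abstract is concrete enough to sketch: compute the angular (circular) mean of the measured propagation angles, then pick the sensor node whose angle lies closest to it. A minimal illustration (angle conventions assumed):

```python
import math

def circular_mean(angles):
    """Angular mean: average the unit vectors, then take the resulting
    angle, so that e.g. 350 deg and 10 deg average to 0 deg, not 180 deg."""
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.atan2(s, c)

def angular_diff(a, b):
    """Smallest absolute difference between two angles (radians)."""
    return abs(math.atan2(math.sin(a - b), math.cos(a - b)))

def closest_node(angles):
    """Index of the node whose measured angle of propagation is closest
    to the angular mean of all nodes' measurements."""
    mean = circular_mean(angles)
    return min(range(len(angles)), key=lambda i: angular_diff(angles[i], mean))
```

Using the circular mean rather than a plain arithmetic average matters here: angles wrap at 360°, so naive averaging of 350° and 10° would point in the opposite direction.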

SYSTEM FOR LOCALIZATION OF SOUND SOURCES

A sound or vibration source localization system with a master unit and a plurality of slave units. The master unit transmits a time synchronization signal via an RF link to the slave units. A microphone or vibration sensor in each of the slave units is used to record a short time sequence, e.g. 0.2-2 seconds, of sound or vibration, time-aligned with the time synchronization signal to ensure synchronous recording of the time sequences at all slave units. The slave units transmit the recorded time-aligned time sequences via an RF link, along with a time stamp and an identification code, to the master unit. The master unit has a processor system arranged to process the received time sequences from the slave units according to a lizard-ear-mimicking algorithm. This type of algorithm provides a good direction estimate from two input signals recorded at different positions, even with a short time sequence. As a result, and preferably along with information regarding the physical positions of the slave units, a sound or vibration source localization estimate can be generated.
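
The lizard-ear-mimicking algorithm itself is not described in the abstract. As a generic stand-in, the following sketch shows how a direction estimate can be derived from two synchronously recorded sequences via a cross-correlation time-delay estimate; all names and parameters are illustrative, not the patented method:

```python
import math

def xcorr_lag(a, b, max_lag):
    """Lag (in samples) of sequence b relative to a that maximizes the
    cross-correlation; positive means the signal reached b later."""
    best_lag, best = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for i, ai in enumerate(a):
            j = i + lag
            if 0 <= j < len(b):
                s += ai * b[j]
        if s > best:
            best, best_lag = s, lag
    return best_lag

def direction_of_arrival(a, b, fs, spacing, c=343.0):
    """Bearing (radians, 0 = broadside) of a far-field source from the
    inter-sensor time delay, for sensors `spacing` meters apart sampled
    at `fs` Hz, with sound speed `c` m/s."""
    max_lag = int(fs * spacing / c) + 1
    delay = xcorr_lag(a, b, max_lag) / fs
    return math.asin(max(-1.0, min(1.0, delay * c / spacing)))
```

This illustrates why the time synchronization signal is essential: the delay between the two recordings is only meaningful if both slave units start recording at the same instant.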

AI apparatus and method for determining location of user
11182922 · 2021-11-23

Provided is an AI apparatus for determining a location of a user including: a communication unit configured to communicate with at least one external AI apparatus obtaining first image data and first sound data; a memory configured to store location information on the at least one external AI apparatus and the AI apparatus; a camera configured to obtain second image data; a microphone configured to obtain second sound data; and a processor configured to: generate first recognition information on the user based on the second image data; generate second recognition information on the user based on the second sound data; obtain, from the at least one external AI apparatus, third recognition information on the user generated based on the first image data and fourth recognition information on the user generated based on the first sound data; determine the user's location based on the location information, the first recognition information, and the third recognition information; and calibrate the determined user's location based on the second recognition information and the fourth recognition information.