SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FACILITATING EFFICIENCY OF A GROUP WHOSE MEMBERS ARE ON THE MOVE
20240015432 · 2024-01-11
Inventors
- Erez SHARON (Karmei Yosef, IL)
- Slava KANDIBA (Ashdod, IL)
- Noam FRENKEL (Karmei Yosef, IL)
- Hen PINTO (Cesaria, IL)
CPC classification
G01S5/20
PHYSICS
G01C21/005
PHYSICS
International classification
G01C21/00
PHYSICS
Abstract
An acoustic many-to-many localization, communication and management system serving a group whose members are moving or maneuvering, the system comprising plural portable hardware devices which may be distributed to plural group members respectively, each device including at least one array of speakers and/or at least one array of microphones, and/or at least one hardware processor, some or all typically co-located. Typically, the hardware processor in at least one device d1 from among the devices controls d1's speaker to at least once broadcast a first signal (e.g. localization request signal) at a time t_zero. Typically the hardware processor in device d1 at least once computes at least one of angle and distance between d2 and d1, to monitor locations of other group members who may be on the move.
Claims
1-23. (canceled)
24. An acoustic many-to-many localization, communication and management system serving a group whose members are moving or maneuvering, the system comprising: plural portable hardware devices which may be distributed to plural group members respectively, each device including at least one array of speakers, at least one array of microphones, and at least one hardware processor, all co-located, wherein at least one device's hardware processor P1 is configured to convert speech, e.g. commands, captured by at least one microphone co-located with processor P1, into ultrasonic signals which travel to a device whose processor P2 is not co-located with processor P1, and wherein processor P2 is configured to convert the ultrasonic signals, when received, back into sonic signals which are provided to, and played by, the speaker co-located with processor P2, thereby to allow a group member co-located with processor P2 to hear speech uttered by a group member co-located with processor P1.
25. The system of claim 24 wherein the hardware processor in one device d2 from among the devices is configured to control d2's speaker to do the following each time d2's microphone receives a localization request signal: to broadcast a second signal (localization response signal) at a time t_b which is separated, by a value deltaT known to the hardware processor in device d1, from a time t_r at which d2's microphone receives the localization request signal, and wherein the same value deltaT is used by d2 each time d2's microphone receives a localization request signal.
26. The system of claim 24 wherein plural devices d2 broadcast localization response signals respectively assigned only to them and not to any other device from among the plural devices.
27. The system of claim 24 wherein the hardware processor in at least one device d1 from among the devices controls d1's speaker to at least once broadcast a first signal (localization request signal) at a time t_zero, wherein the hardware processor in device d1 at least once computes at least one of angle and distance between d2 and d1 thereby to monitor locations of other group members who may be on the move.
28. The system of claim 24 wherein d1's hardware processor is operative to control d1's speaker to send an alert to d2, to be played by d2's speaker, if the distance between d2 and d1 answers a criterion indicating that d2 is almost outside of d1's range.
29. The system of claim 24 wherein the system has location marking functionality including providing oral prompts aiding group members to navigate to a location that has been marked.
30. The system of claim 24 wherein the system has homing functionality including providing oral prompts aiding all group members to navigate toward a single group member.
31. The system of claim 24 wherein a group has a known total number of members and wherein the system has roll call or group member counting functionality which provides alerts to at least one group member when a depleted number of group members, less than the known total number of members, is recorded.
32. The system of claim 24 wherein the system has threat detection and localization functionality which provides alerts to at least one group member when a learned acoustic signature of a threat is sensed by at least one group member's microphone.
33. The system of claim 24 wherein said at least one microphone array includes at least 3 microphones, thereby to facilitate triangulation and wherein each device is configured to use triangulation to discern azimuthal orientation of at least one group member.
34. The system of claim 33 wherein the system provides at least one alert to at least one group member when at least one group member is azimuthally off course.
35. The system of claim 24 which has human-to-human communication functionality which provides group members with an ability to speak to each other in natural language.
36. The system of claim 24 which has device-to-human communication functionality which presents a command provided by an individual group member's hardware processor, to group members other than said individual group member.
37. The system of claim 24 which has device-to-device communication functionality which communicates data generated by an individual group member's hardware processor, to at least one hardware processor in a device distributed to at least one group member other than said individual group member.
38. The system of claim 24 wherein said at least one speaker comprises an array of speakers.
39. The system of claim 24 wherein said at least one microphone comprises an array of microphones.
40. The system of claim 24 wherein the device may be operated only after an authorization process.
41. The system of claim 24 wherein the value deltaT (ΔT) used by any given one of the plural devices d2 is different from the value deltaT (ΔT) used by any other of the plural devices d2, thereby to reduce interference between plural localization response signals being received by device d1.
42. An acoustic many-to-many localization, communication and management method serving a group whose members are moving or maneuvering, the method comprising: providing plural portable hardware devices for distribution to plural group members respectively, each device including at least one array of speakers, at least one array of microphones, and at least one hardware processor, all co-located, wherein the hardware processor in at least one device d1 from among the devices controls d1's speaker to at least once broadcast a first signal (localization request signal) at a time t_zero; and wherein the hardware processor in device d1 at least once computes at least one of angle and distance between d2 and d1 thereby to monitor locations of other group members who may be on the move.
43. A method according to claim 42 and wherein at least one device possesses independent location knowledge and wherein at least one group member's relative location monitored by the method is transformed into an absolute location using said independent location knowledge.
44. A method according to claim 43 wherein said independent location knowledge comprises GPS data.
45. A method according to claim 43 wherein localization of all group members is provided even while on the move, using only one reference device.
46. A method according to claim 42 and wherein all relative locations are transformed into absolute locations, thereby to facilitate localization of all group members even while on the move, using but a single reference device.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0118] Example embodiments are illustrated in the various drawings.
[0120] Arrows between modules may be implemented as APIs and any suitable technology may be used for interconnecting functional components or modules illustrated herein in a suitable sequence or order e.g. via a suitable API/Interface. For example, state of the art tools may be employed, such as but not limited to Apache Thrift and Avro which provide remote call support. Or, a standard communication protocol may be employed, such as but not limited to HTTP or MQTT, and may be combined with a standard data format, such as but not limited to JSON or XML.
[0121] Methods and systems included in the scope of the present invention may include any subset or all of the functional blocks shown in the specifically illustrated implementations by way of example, in any suitable order e.g. as shown. Flows may include all or any subset of the illustrated operations, suitably ordered e.g. as shown.
[0122] Computational, functional or logical components described and illustrated herein can be implemented in various forms, for example, as hardware circuits such as but not limited to custom VLSI circuits or gate arrays or programmable hardware devices such as but not limited to FPGAs, or as software program code stored on at least one tangible or intangible computer readable medium and executable by at least one processor, or any suitable combination thereof. A specific functional component may be formed by one particular sequence of software code, or by a plurality of such, which collectively act or behave as described herein with reference to the functional component in question. For example, the component may be distributed over several code sequences such as but not limited to objects, procedures, functions, routines and programs and may originate from several computer files which typically operate synergistically.
[0123] Each functionality or method herein may be implemented in software (e.g. for execution on suitable processing hardware such as a microprocessor or digital signal processor), firmware, hardware (using any conventional hardware technology such as Integrated Circuit technology) or any combination thereof.
[0124] Functionality or operations stipulated as being software-implemented may alternatively be wholly or partly implemented by an equivalent hardware or firmware module and vice-versa. Firmware implementing functionality described herein, if provided, may be held in any suitable memory device and a suitable processing unit (aka processor) may be configured for executing firmware code. Alternatively, certain embodiments described herein may be implemented partly or exclusively in hardware in which case all or any subset of the variables, parameters, and computations described herein may be in hardware.
[0125] Any module or functionality described herein may comprise a suitably configured hardware component or circuitry. Alternatively or in addition, modules or functionality described herein may be performed by a general purpose computer or more generally by a suitable microprocessor, configured in accordance with methods shown and described herein, or any suitable subset, in any suitable order, of the operations included in such methods, or in accordance with methods known in the art.
[0126] Any logical functionality described herein may be implemented as a real time application, if and as appropriate, and which may employ any suitable architectural option such as but not limited to FPGA, ASIC or DSP or any suitable combination thereof.
[0127] Any hardware component mentioned herein may in fact include either one or more hardware devices e.g. chips, which may be co-located or remote from one another.
[0128] Any method described herein is intended to include within the scope of the embodiments of the present invention also any software or computer program performing all or any subset of the method's operations, including a mobile application, platform or operating system e.g. as stored in a medium, as well as combining the computer program with a hardware device to perform all or any subset of the operations of the method.
[0129] Data can be stored on one or more tangible or intangible computer readable media stored at one or more different locations, different network nodes or different storage devices at a single node or location.
[0130] It is appreciated that any computer data storage technology, including any type of storage or memory and any type of computer components and recording media that retain digital data used for computing for an interval of time, and any type of information retention technology, may be used to store the various data provided and employed herein. Suitable computer data storage or information retention apparatus may include apparatus which is primary, secondary, tertiary or off-line; which is of any type or level or amount or category of volatility, differentiation, mutability, accessibility, addressability, capacity, performance and energy use; and which is based on any suitable technologies such as semiconductor, magnetic, optical, paper and others.
DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS
[0131] Reference is now made to the system of
[0132] According to certain embodiments, a team, or group of task force members, is equipped with plural devices e.g. one per task force member. Each device may be wearable (by a task force member) or portable or mobile, or on wheels, or airborne. Each device typically includes all or any subset of:
[0133] Loudspeaker/s that typically yield omnidirectional or 360 degree coverage and typically work in sonic and/or ultrasonic frequencies.
[0134] At least 2 microphones that typically respond to or correspond to the loudspeakers' frequencies e.g. sonic and/or ultrasonic frequencies.
[0135] A power source aka PS; and
[0136] A processor such as an FPGA unit typically providing both processing power and memory. An FPGA is a field-programmable gate array which is an example of a device which may be configured by an end-user, customer or designer after manufacturing.
[0137] Each device or unit may have external interface/s. The device can be connected to other systems (such as C2 and/or display and/or other interested parties) e.g. via an API.
[0138] It is appreciated that more generally, any number of microphones and loudspeakers may be provided, however these typically are selected to provide omnidirectional or 360 degree coverage. Typically, each device can act as a receiver and transmitter, hence each device may be used as a repeater if a mesh network architecture is desired.
[0139] According to certain embodiments, each team member's unit or device stores (e.g. in the device's FPGA or other memory) data which is pre-configured or loaded to the system, e.g. an indication of all N team members' unique signals, typically associated with the team member's name. It is appreciated that if each device (or a team leader's device) has this data regarding other devices configured in it, the device can, e.g. upon command and/or periodically, broadcast a localization request which all receiving devices are configured to acknowledge. Thus, if a device is missing, or is found to be too far, too close, not in place, etc., an alert can be given.
[0140] The device may also store initial locations of the various team members. The device may store topographic data. At least one device may also store a window of location info indicating where other team members were at various points in time e.g. where was team member 79, 1 minute ago, 2 minutes ago and 3 minutes ago. A table may be provided for storing the known times (which may be suitably staggered to prevent interference) or frequencies at which the other devices in the team respectively transmit their unique signals. Each table or indication may be loaded in-factory and may be pre-loaded by end-users.
[0141] Typically, all N devices are time-synchronized e.g. as described herein. Each of the devices typically transmits an acoustic signal (e.g. an acoustic signal unique to that device which differs from the acoustic signals being transmitted from all other devices), typically at a known time. Typically, the acoustic signal unique to device N is received by all of devices 1, . . . N-1 and similarly, typically, for all other unique acoustic signals which are similarly received by all other devices. The receiving device typically identifies the device which transmitted this unique acoustic signal, then computes the azimuth and distance of that transmitting device based on time and known topography. The above-referenced publication by Bianco, Gannot and Gerstoft describes a possible method for computing azimuth and distance of a transmitting unit based on time and known topography.
[0142] Each team member can be equipped with a device.
[0143] Prior to operations: all devices are typically mounted e.g. if wearable, by the team-members, and are turned on. Each device may be identified and found to be working and ready for operations.
[0144] During operations: [0145] a. Any spoken command is broadcasted and received by other devices. [0146] b. Each interested device U sends a location request, at least once, upon request or occasionally or periodically, say every 1 or 3 or 5 or 10 or 30 seconds, via the loudspeakers. Requests may be specific for a certain ability e.g. commands or localizations.
[0147] Each device d that receives this request responds with its unique signal at a sending time which is known (to device d itself and, typically, to all or some other team members) and/or is predetermined and/or is unique (vis-a-vis all other team members). The sending time typically comprises a time interval which is to elapse before sending, the time interval starting from the time that device d received the request signal. For example, the time using device d's clock may be 14:08 whereas device U's clock shows the time to be 17:06. Then, if device d receives a location request at 14:08:30, device d is configured to wait 2 seconds (by device d's clock) before sending its own (typically unique) ID. So device d may respond with its own ID at 14:08:32. Device U may receive device d's signal ID and know (e.g. be pre-configured) to subtract the 2 seconds that it knows device d is configured to wait, and then compute the distance.
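By way of non-limiting illustration, the delayed-response ranging described above may be sketched as follows; the constant, function names and figures below are illustrative assumptions rather than part of any claimed implementation:

```python
# Non-limiting sketch of delayed-response acoustic ranging.
# All names and figures here are illustrative assumptions.

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 degrees C

def distance_to_responder(t_request_sent: float,
                          t_response_received: float,
                          delta_t: float) -> float:
    """Distance to device d2, using only d1's own clock.

    d2 is pre-configured to wait delta_t seconds between receiving the
    localization request and sending its response, so the acoustic
    round trip is the total elapsed time minus delta_t.
    """
    round_trip_s = (t_response_received - t_request_sent) - delta_t
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

# Example: response arrives 2.5 s after the request; d2's known wait is 2.0 s,
# so the acoustic round trip is 0.5 s.
print(distance_to_responder(0.0, 2.5, 2.0))  # 85.75
```

Because both timestamps are taken by device U's (d1's) own clock, the computation is insensitive to any clock offset between the two devices, as discussed further below.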
[0148] Each such returning signal is received by U and, since signals are unique per device, is identified by U as having been transmitted by a given device U_T. U_T's relative location, e.g. relative to U, is determined by the interested device U. Should interested device U possess location knowledge, e.g. as received by a GPS, then all relative locations can be transformed into absolute locations. It is appreciated that a device can interface with any suitable external geo location provider (such as, but not limited to, a GNSS or data given from radars), and thus provide geo locations.
[0149] Each device typically stores, in memory, the unique signals of each device in the set of team members, and therefore any device which fails to respond may be identified by comparing unique signals received to the stored unique signals and identifying stored signals, if any, which were not received. If a device fails to respond, or is found to be too far or too close, an alert is given e.g. to the human team member bearing the interested device.
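A non-limiting sketch of the comparison described above, between stored unique signals and the signals actually heard in a given cycle; the names are illustrative only:

```python
def missing_members(expected_ids, received_ids):
    """IDs whose stored unique signals were not heard in this cycle;
    any ID returned here would trigger an alert."""
    return sorted(set(expected_ids) - set(received_ids))

# Example: "georgie" failed to respond, so an alert would be raised.
print(missing_members({"alice", "bob", "georgie"}, {"alice", "bob"}))
```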
[0150] Example: a and b are team members whose devices know they are not to be more than 200 meters away from one another. Each time one of a and b's devices lags behind the other, or takes a wrong turn which separates the 2 devices beyond 200 meters, the next location request may reveal this, and, responsively, members a and/or b can be alerted e.g. via their loudspeakers, that they are too far away from each other. For example, the team leader may periodically be informed that team member 1 is too far away.
[0151] It is appreciated that each device may include an FPGA or other storage which may be configured by end-users and not only, or not necessarily, in the factory. The FPGA may be used for repeatedly e.g. periodically sending commands, and/or for sampling and understanding sounds from the microphones and/or for identifying threats and/or location requests and/or commands and/or for correlating data with topographic data. Typically, each FPGA's configuration includes the unique ID signal and/or time delay and/or transmission frequency of each device, and/or the signal to send, and/or the topographic data.
[0152] A certain team member's device can be placed near a target or a destination and serve as a beacon, marking that location, e.g. target or destination, for other devices to home in on. In this marking use-case, the system is typically operative for marking, typically without spoken commands, of: [0153] destinations where the team seeks to assemble, or [0154] targets which are of interest to the team, [0155] or a distress signal or backup request to other team members.
[0156] To do this, the team member's device (aka marker) typically sends, at least once, a predefined signal that other devices can home in on.
[0157] The system herein may undergo certain configurations and/or calibrations in the factory, such as all or any subset of the following: [0158] a. The unique signal of each device may be configured in advance e.g. in the factory. [0159] b. The working frequencies may be configured in advance e.g. in the factory. [0160] c. Certain known commands may be identified, typically independently of or in addition to or regardless of speech (such as STOP, TAKE COVER etc.). [0161] d. The between-member distance which triggers alerts (or any other parameter characterizing functionalities described herein) can be configured in advance e.g. in the factory. For example, before operations, devices may be configured to indicate that, since distance between devices is not important, no alerts need be given due to devices being too far from one another. Or, devices may be configured to indicate that the maximum range between any 2 team members, or a certain subset of team members, must not exceed, say, 200 meters. Then, during team operation, each time a device is about to exceed this distance limitation and/or each time a device actually does exceed the limitation, an alert can be given to that device or others (e.g. "team member 6 too far away"). [0162] e. The number of devices and identification can be configured in advance e.g. in the factory. Each device can have a specific ID. Each device can transmit a specific signal that is unique only to that device and is not transmitted by any other team member, so that other devices, when they hear the signal, may know which team member it applies to.
[0163] It is appreciated that each device may be configured to have a name which the human team members associate with the human team member bearing that device, to ensure that alerts are user-friendly (e.g. "Georgie too far away" rather than "team member 6 too far away").
[0164] Workflows may include all or any subset of the following:
[0165] Location Knowing
[0166] Each device can transmit a known and unique signal via the loudspeakers. For example, if a team has N members, N unique signals may be used. More generally, the signal transmitted by device x may be differentiated from the signal transmitted by device y using any suitable technology, e.g. differentiation according to time of transmission and/or differentiation according to frequency of transmission and/or differentiation in the signal itself.
[0167] Typically, the signal is transmitted in the ultrasonic range so as not to be heard by people. The signal is received by the microphones in other devices and sent to the processor. Because the signal is unique, the ID of the device is known. By triangulation, the devices can identify the direction of the transmitting device. If the time of transmission is known (as can be achieved, say, by a 1 PPS signal, by time synchronization between devices, or simply by responding to an acoustic request by an interested device at a known time), then the distance of the transmitting device can be computed. In this manner, each interested device can know the relative location of each device. The process can be done automatically by the devices, and an alert may be provided each time a device is getting too far away or is lost, thus freeing the team leader of the responsibility for monitoring for these eventualities.
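By way of non-limiting illustration, a far-field direction estimate from a single microphone pair may be sketched as follows; an array of 3 or more microphones, per claim 33, would combine several such pairs to resolve ambiguity, and all names and figures here are illustrative assumptions:

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # illustrative figure

def azimuth_from_tdoa(tdoa_s: float, mic_spacing_m: float) -> float:
    """Far-field bearing (degrees) from the time difference of arrival
    at a single pair of microphones."""
    # Clamp to the valid asin domain to tolerate measurement noise.
    x = max(-1.0, min(1.0, SPEED_OF_SOUND_M_S * tdoa_s / mic_spacing_m))
    return math.degrees(math.asin(x))

print(azimuth_from_tdoa(0.0, 0.2))          # 0.0 (broadside source)
print(azimuth_from_tdoa(0.1 / 343.0, 0.2))  # ~30 degrees off broadside
```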
[0168] A particular advantage of certain embodiments is that even if team members' clocks are totally out of sync, team member x's device can still determine where other devices are, by sending a location request signal to other devices, and determining the delay in receiving responses from various other devices by comparing the time the signal was sent, by x's own clock, to the time responses were received, again by x's own clock.
[0169] The accuracy and reliability of the system can be further enhanced by adding topographic data, such as DTM or DSM files, and cross-referencing the acoustic signal with them using conventional methods such as those described in the above-referenced Bianco, Gannot, and Gerstoft publication.
[0170] Such enhancement typically includes overcoming multipath, which may be present e.g. in an urban environment, by means of topographic data incorporation. According to certain embodiments, a team member device knows its own location and has topographic data. That device can be trained to understand how a sound emitted from each position is received. A device can thus be trained, and can then discern which sound was received, and determine the location of that sound's source.
[0171] Communications
[0172] Each device can hear spoken commands of the device carrier (such as STOP, Move in <direction>, etc.) via the microphones. The device can transform the command to the ultrasonic frequencies, and amplify and transmit it via the loudspeakers.
[0173] The commands are received via the microphones in receiving devices and are transformed back to the sonic frequencies which can be heard by the receiving device carrier.
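By way of non-limiting illustration only, the conversion described above may be sketched as simple amplitude modulation onto an ultrasonic carrier, followed by mixing back down and low-pass filtering. The sample rate, carrier frequency and moving-average filter are illustrative assumptions; a practical device would use a proper filter and modulation scheme:

```python
import math

FS_HZ = 192_000      # illustrative sample rate, high enough for the carrier
CARRIER_HZ = 40_000  # illustrative ultrasonic carrier frequency

def upconvert(speech):
    """Amplitude-modulate baseband speech samples onto the ultrasonic carrier."""
    return [s * math.cos(2 * math.pi * CARRIER_HZ * i / FS_HZ)
            for i, s in enumerate(speech)]

def downconvert(ultra, taps=64):
    """Mix the received signal back to baseband, then apply a crude
    moving-average low-pass filter to reject the double-frequency term."""
    mixed = [2 * s * math.cos(2 * math.pi * CARRIER_HZ * i / FS_HZ)
             for i, s in enumerate(ultra)]
    return [sum(mixed[max(0, i - taps):i + 1]) / min(i + 1, taps + 1)
            for i in range(len(mixed))]

# Round trip of a constant test signal recovers roughly the original level.
recovered = downconvert(upconvert([1.0] * 200))
```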
[0174] In this manner, spoken commands can reach each team member in the team, even if they are beyond speaking range. It is appreciated that an ultrasonic range which is larger than a speaking range, e.g. several hundred meters, say 200 or 300 or 400 or 500 meters, is achievable once the volume at which the device loudspeakers transmit, and the sensitivity of the receivers or microphones, are suitably selected, as is known in the art, e.g. as described here:
[0175] https://www.omnicalculator.com/physics/distance-attenuation
[0176] Example: for a given use-case, the devices may be designed such that transmission in the ultrasonic range is, say, above 100 dB SPL, and microphone sensitivity is, say, at least 60 dB.
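A non-limiting sketch of the free-field (spherical-spreading) estimate underlying such figures; actual range depends on atmospheric absorption, weather and terrain, so this is an upper-bound illustration only, and the figures are illustrative:

```python
def max_free_field_range_m(spl_tx_at_1m_db: float,
                           min_detectable_spl_db: float) -> float:
    """Spherical-spreading estimate: level falls by 20*log10(d) dB
    relative to the level measured at 1 m from the source."""
    return 10 ** ((spl_tx_at_1m_db - min_detectable_spl_db) / 20.0)

# With the illustrative figures above (100 dB SPL at 1 m, 60 dB floor):
print(max_free_field_range_m(100.0, 60.0))  # 100.0 (meters)
```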
[0177] Thus, team members need not stay within speaking range in order to exchange oral communications in natural language; instead they need only stay within ultrasonic range.
[0178] Specific commands or words can be pre-defined and distributed between devices, whether spoken or not (such as: Drone Alert, Obstacle Detected, etc.); this may enable extremely quick notifications and communications without the need for carrier intervention or acknowledgment.
[0179] Threat Identification
[0180] Threats or other phenomena with acoustic signatures (sound attributes characteristic only of a certain threat, such as a drone or animal or emergency vehicle siren or speeding car or other event or object (e.g. paintball gun) having an acoustic signature which may be known to the system) can be automatically detected by a device which can then alert the human carrier of the device that this threat is present. Typically, each device's FPGA has been pre-trained or embedded or equipped with logic or an algorithm configured to recognize certain threats having certain acoustic signatures, and is able to classify incoming sounds as being either indicative, or not indicative, of the pre-learned threats.
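By way of non-limiting illustration, classifying an incoming sound window against a learned acoustic signature may be sketched as a normalized-correlation test; a practical device would typically use spectral features and a trained classifier, and all names and the threshold here are illustrative assumptions:

```python
import math

def normalized_correlation(a, b):
    """Pearson-style similarity between two equal-length sample windows."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in a)
                    * sum((y - mean_b) ** 2 for y in b))
    return num / den if den else 0.0

def is_threat(window, signature, threshold=0.8):
    """True if the incoming window resembles the learned signature."""
    return normalized_correlation(window, signature) >= threshold

signature = [0.0, 1.0, 0.0, -1.0] * 4  # toy stand-in for a learned signature
print(is_threat(signature, signature))  # True
```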
[0181] It is appreciated that phenomena need not necessarily be detected acoustically and may be detected by humans or using any suitable sensor. For example, given a team of hunters, an animal which is permitted by law for hunting may simply be detected, visually, by a human hunter. It is appreciated that the hunter may prefer not to raise his voice, so as not to scare off the animal, however embodiments herein allow the hunter to communicate the presence of the animal, either via a command or by low-volume natural speech which is communicated to afar ultrasonically, without calling out to other members of the hunting team.
[0182] The device may instantly identify a threat (or other team-relevant event which may also be positive for the team e.g. presence of running water) e.g. as described herein and may immediately communicate e.g. broadcast that event's presence, and typically its location, to other devices. If several devices identify a threat or other event with the same signature at the same time, the data from all devices identifying the threat are typically gathered or combined and may undergo triangulation, thereby to localize the threat and enhance confidence and accuracy, since the more devices triangulate a threat, the more accurate is the location of the threat as computed by the various devices which have identified or sensed the threat.
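A non-limiting sketch of combining the bearings sensed by two devices into a single threat location (x=east, y=north, bearings in degrees clockwise from north); all names are illustrative, and combining more than two devices would typically use a least-squares fit:

```python
import math

def intersect_bearings(p1, bearing1_deg, p2, bearing2_deg):
    """Locate a sound source from two device positions and the bearings
    each device measured. Returns None if the bearing lines are parallel."""
    d1 = (math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg)))
    d2 = (math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel bearings: no unique fix
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom  # distance along d1's bearing
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two devices 100 m apart both hear the threat: fix is midway, 50 m north.
print(intersect_bearings((0.0, 0.0), 45.0, (100.0, 0.0), 315.0))  # ~(50.0, 50.0)
```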
[0183] A method for locating a threat acoustically, e.g. via microphones, and computing direction is described in:
http://www.conforg.fr/cfadaga2004/master_cd/cd1/articles/000658.pdf [0184] the disclosure of which is hereby incorporated by reference.
[0185] Marking:
[0186] A device can aid in alerting to a target or desired location that can be stored in advance or decided on the move. For example, if topographical data and/or absolute location (such as latitude/longitude) is known, a location can be marked and even navigated to. Navigation prompts may include beeps or spoken feedback and/or commands, and may help team members to mark points of interest on the move, such as: [0187] 1. aim/look towards a location [0188] 2. caution regarding (moving) objects of interest e.g. fast-moving objects or perilous objects [0189] 3. mark targets' locations
[0190] Homing:
[0191] A device can serve as a homing device, and homing functionality facilitates convergence of all devices to the location of that device. For example, a team is at a certain location and wants another force to team up with it. The device can broadcast a homing signal which alerts other devices to go to it. Alerts can be in the form of spoken commands via the loudspeakers (right/left/forward), and/or a beeping sound which signals whether the device trying to come home is hot or cold, e.g. by changing (in volume/frequency/intervals) as a monotonic function of the direction leading to the homing device, and/or changing (in volume/frequency/intervals) as a monotonic function of the distance from the homing device. In this manner, relevant team member/s can home in conveniently, because navigation to the homing device's location is provided without sending coordinates or explanations.
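By way of non-limiting illustration, the monotonic distance-to-beep mapping described above may be sketched as follows; the interval values and maximum range are illustrative assumptions only:

```python
def beep_interval_s(distance_m: float,
                    min_interval_s: float = 0.1,
                    max_interval_s: float = 2.0,
                    max_range_m: float = 500.0) -> float:
    """Beep cadence as a monotonic function of distance:
    the closer the homing device, the faster the beeps."""
    frac = min(max(distance_m / max_range_m, 0.0), 1.0)
    return min_interval_s + frac * (max_interval_s - min_interval_s)

print(beep_interval_s(0.0))    # fastest cadence, at the homing device
print(beep_interval_s(500.0))  # slowest cadence, at the edge of range
```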
[0192] Commands:
[0193] Commands can be spoken and/or may be generated automatically.
[0194] Commands like Stop/Take Cover/Deliver (Shoot) The Paintball (or deliver the pesticide or package or any other substance) can be spoken (sonic frequencies) to a device, which may transmit them in the ultrasonic range and at high volume. A library of pre-recorded commands may be provided. Other devices may receive the command in the ultrasonic frequencies, transform it back to sonic frequencies, and transmit it via their loudspeakers, thereby to provide an oral command to device/s in the team. Known commands can be spoken, and the device may understand them and send a preconfigured signal to other devices; e.g. STOP may be heard, translated into a specific signal, and broadcasted. Other devices may hear the signal and may transmit the known signal via the loudspeakers (prerecorded, or just by beeping).
[0195] It is appreciated that some commands may be sent and responded to, automatically between the devices. For example, counting team members or performing a roll call or taking attendance may be automatic; each device may periodically, or on occasion, send a COUNT command. Responsively, each device may respond with its ID and thus each device's relative whereabouts can be determined. In this manner, if a specific device is too far/close/not in position, an alert can be sent.
[0196] A particular advantage of certain embodiments is that all or any subset of the following abilities may be provided in a single system: detection of positive events and/or threats, marking, homing, speech, commands, team member counting or performing a roll call or taking attendance. For example, threats in the sonic and ultrasonic domains, to team members' wellbeing or to the team's objective, may be heard, identified and localized.
[0197] Another advantage is that any embodiment of threat detection herein may be used to acoustically detect threats to wellbeing or to a team's objective, standalone or to cost-effectively and efficiently augment a radar (say)-based threat detection system e.g. to yield a system which has alternative threat detection capability in the event that the RF threat detector is functioning poorly, or not at all.
[0198] Chirp signals may be used as localization responses aka localization response signals, according to certain embodiments; more generally, any suitable pattern may be used for localization signals.
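By way of non-limiting illustration only, a linear chirp and its matched-filter detection (which yields the arrival sample, and hence time-of-flight) may be sketched as follows; the sweep band, duration, and sample rate are illustrative assumptions:

```python
import numpy as np

def make_chirp(f0=25_000.0, f1=35_000.0, dur=0.01, fs=192_000):
    """Linear chirp sweeping f0 -> f1 over dur seconds, usable as a
    localization response signal (illustrative ultrasonic band)."""
    t = np.arange(int(dur * fs)) / fs
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / dur * t ** 2)
    return np.sin(phase)

def detect_chirp(rx, template):
    """Matched filter: the index of the peak cross-correlation between the
    received buffer and the chirp template is the arrival sample."""
    corr = np.correlate(rx, template, mode="valid")
    return int(np.argmax(np.abs(corr)))
```

Chirps are attractive here because their sharp autocorrelation peak survives noise and Doppler better than a pure tone; as the text notes, any suitable pattern may be substituted.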
[0199] Localization may take into account topographical data. Any suitable technology may be used for topographical data-based localization of objects within a terrain whose topography is learned e.g. as described in https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/9130/eth-7853-01.pdf?sequence=1 or
https://pdfs.semanticscholar.org/9cf4/111c9e6d605a9a7ddef1213aed42d8a08b9b.pdf or in the above-referenced publication by Bianco, Gannot and Gerstoft.
[0200] For example, if the system is being used in an intercity area which has highways and throughways that include tunnel portions, then, due to a signal having been generated inside a tunnel whose walls block sound, a team member whose device is adjacent the tunnel might hear 2 signals ostensibly arriving from one or both of the tunnel's 2 ends. However, if an AI system stored in the device has been pre-trained with topographical data including the tunnel, the AI subsequently knows that the sound originated in the tunnel and is a single signal, rather than 2 signals.
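By way of non-limiting illustration only, a hand-written stand-in for such a topography-trained rule might merge apparent sources at the two tunnel portals into a single in-tunnel event; the function name, the 10 m tolerance, and the midpoint estimate are all illustrative assumptions:

```python
import math

def merge_tunnel_arrivals(arrivals_xy, tunnel_ends, tol_m=10.0):
    """If apparent sources sit at both of the tunnel's portals, treat them
    as one signal originating inside the tunnel (illustrative stand-in for
    the pre-trained AI's topographic knowledge)."""
    def near(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1]) <= tol_m
    a, b = tunnel_ends
    near_a = any(near(p, a) for p in arrivals_xy)
    near_b = any(near(p, b) for p in arrivals_xy)
    if near_a and near_b and len(arrivals_xy) >= 2:
        # single event inside the tunnel; report the midpoint as estimate
        return [((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)]
    return list(arrivals_xy)
```

A learned model would, of course, generalize this to arbitrary terrain rather than a single hard-coded tunnel.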
[0201] Any suitable training data may be used e.g. DTM and/or DSM data.
[0202] Many variations are possible. For example, any embodiment herein may use any suitable conventional technology for source localization, to localize which threat or team member is the source of a given received signal.
[0203] Indoor and/or outdoor operation may be provided; the system may be configured, e.g. as described herein, for use on the move: no fixed location of Tx or Rx need be assumed or relied upon. Typically, problems which may hamper acoustic systems on the move, such as multipath and echoes (which may occur because, when moving in built-up or complex terrain, the acoustic signal tends to bounce and hence be changed), may be overcome, e.g. by pre-learning the topography of the region in which the team intends to operate.
[0204] Clock indifference may be provided since there is no need for common time between devices e.g. as described herein.
[0205] Many-to-many capability (all devices aware of all devices) may be provided. The system may have an ability to perform automated tasks other than location marking, homing, team counting (or performing a roll call or taking attendance), localizing, and alerting for being azimuthally off-course, which are tasks described herein merely by way of example. For example, any device (or unit) in the system may have an ability to alert other devices of moving objects of interest, such as a drone detected by one of the team members' devices. Optionally, the system may be configured to display data and/or to convey data in the form of beeps or vibrations or via an external data interface. Optionally, the system may add or use data from external sensors such as GPS or temperature or humidity sensors. The system may provide the flexibility to configure devices as required.
[0206] The system and methods herein have wide applicability, on land, in the air or at sea, e.g. for any of the following use cases, separately or in combination:
[0207] a. Fleets, e.g. of vehicles or drones or personnel or robots or human service providers, which may be answering service calls from a public to be served, and may be competing with other fleets
[0208] b. Games, which may be adversarial, e.g. paintball, which require teams to move over terrain
[0209] c. Sports, e.g. mountain-climbing, cross-country skiing, etc.
[0210] d. Monitoring even stationary fleets of objects, e.g. ascertaining that valuable museum exhibits are not being moved, trees are not being felled, etc., by treating each painting (say) as a team member and alarming if any team member's location, as derived from the localization response that team member sends, deviates from that team member's known location (e.g. painting x is known to be hung in a certain location in a certain room within the museum).
[0211] e. Preventing theft of animals, by treating each animal in a herd as a team member and providing an alert to remote law enforcement personnel if any team member's location, as derived from the localization response that the team member sends, deviates from the known location of the herd.
[0212] f. Crew or team management, e.g. for crews of construction workers or health workers or mining workers, who may be working in an area in which threats (say: a collapsing structure) to the crew's wellbeing and/or objective may occur.
[0213] g. Hunting teams
[0214] h. Any team whose members sometimes need to rapidly (e.g. by an oral call, perhaps in natural language) gain the attention of one or some or all other members of the team.
[0215] i. Any team whose members sometimes need (e.g. by an oral call, perhaps in natural language) to gain the attention of one or some or all other members of the team in a discreet manner, e.g. without calling out loudly to other team members, e.g. because the content of the call is confidential, due to privacy laws for example. For example, even within a hospital, unexpected emergencies occur and it is sometimes desirable to immediately summon nearby personnel, preferably without disclosing, to members of the public within earshot, confidential information regarding any patient.
[0216] It is appreciated that terminology such as mandatory, required, need and must refer to implementation choices made within the context of a particular implementation or application described herewithin for clarity and are not intended to be limiting, since, in an alternative implementation, the same elements might be defined as not mandatory and not required, or might even be eliminated altogether.
[0217] Components described herein as software may, alternatively, be implemented wholly or partly in hardware and/or firmware, if desired, using conventional techniques, and vice-versa. Each module or component or processor may be centralized in a single physical location or physical device, or distributed over several physical locations or physical devices.
[0218] Included in the scope of the present disclosure, inter alia, are electromagnetic signals in accordance with the description herein. These may carry computer-readable instructions for performing any or all of the operations of any of the methods shown and described herein, in any suitable order including simultaneous performance of suitable groups of operations as appropriate. Included in the scope of the present disclosure, inter alia, are machine-readable instructions for performing any or all of the operations of any of the methods shown and described herein, in any suitable order; program storage devices readable by machine, tangibly embodying a program of instructions executable by the machine to perform any or all of the operations of any of the methods shown and described herein, in any suitable order i.e. not necessarily as shown, including performing various operations in parallel or concurrently rather than sequentially as shown; a computer program product comprising a computer useable medium having computer readable program code, such as executable code, having embodied therein, and/or including computer readable program code for performing, any or all of the operations of any of the methods shown and described herein, in any suitable order; any technical effects brought about by any or all of the operations of any of the methods shown and described herein, when performed in any suitable order; any suitable apparatus or device or combination of such, programmed to perform, alone or in combination, any or all of the operations of any of the methods shown and described herein, in any suitable order; electronic devices each including at least one processor and/or cooperating input device and/or output device and operative to perform e.g. 
in software any operations shown and described herein; information storage devices or physical records, such as disks or hard drives, causing at least one computer or other device to be configured so as to carry out any or all of the operations of any of the methods shown and described herein, in any suitable order; at least one program pre-stored e.g. in memory or on an information network such as the Internet, before or after being downloaded, which embodies any or all of the operations of any of the methods shown and described herein, in any suitable order, and the method of uploading or downloading such, and a system including server/s and/or client/s for using such; at least one processor configured to perform any combination of the described operations or to execute any combination of the described modules; and hardware which performs any or all of the operations of any of the methods shown and described herein, in any suitable order, either alone or in conjunction with software. Any computer-readable or machine-readable media described herein is intended to include non-transitory computer- or machine-readable media.
[0219] Any computations or other forms of analysis described herein may be performed by a suitable computerized method. Any operation or functionality described herein may be wholly or partially computer-implemented e.g. by one or more processors. The invention shown and described herein may include (a) using a computerized method to identify a solution to any of the problems or for any of the objectives described herein, the solution optionally including at least one of a decision, an action, a product, a service or any other information described herein that impacts, in a positive manner, a problem or objectives described herein; and (b) outputting the solution.
[0220] The system may, if desired, be implemented as a web-based system employing software, computers, routers and telecommunications equipment, as appropriate.
[0221] Any suitable deployment may be employed to provide functionalities e.g. software functionalities shown and described herein. For example, a server may store certain applications, for download to clients, which are executed at the client side, the server side serving only as a storehouse. Any or all functionalities e.g. software functionalities shown and described herein may be deployed in a cloud environment. Clients e.g. mobile communication devices, such as smartphones, may be operatively associated with, but external to the cloud.
[0222] The scope of the present invention is not limited to structures and functions specifically described herein and is also intended to include devices which have the capacity to yield a structure, or perform a function, described herein, such that even though users of the device may not use the capacity, they are, if they so desire, able to modify the device to obtain the structure or function.
[0223] Any if-then logic described herein is intended to include embodiments in which a processor is programmed to repeatedly determine whether condition x, which is sometimes true and sometimes false, is currently true or false and to perform y each time x is determined to be true, thereby to yield a processor which performs y at least once, typically on an if and only if basis e.g. triggered only by determinations that x is true, and never by determinations that x is false.
[0224] Any determination of a state or condition described herein, and/or other data generated herein, may be harnessed for any suitable technical effect. For example, the determination may be transmitted or fed to any suitable hardware, firmware or software module, which is known or which is described herein to have capabilities to perform a technical operation responsive to the state or condition. The technical operation may for example comprise changing the state or condition, or may more generally cause any outcome which is technically advantageous given the state or condition or data, and/or may prevent at least one outcome which is disadvantageous given the state or condition or data. Alternatively or in addition, an alert may be provided to an appropriate human operator or to an appropriate external system.
[0225] Features of the present invention, including operations which are described in the context of separate embodiments, may also be provided in combination in a single embodiment. For example, a system embodiment is intended to include a corresponding process embodiment, and vice versa. Also, each system embodiment is intended to include a server-centered view or client-centered view, or a view from any other node of the system, of the entire functionality of the system, computer-readable medium, or apparatus, including only those functionalities performed at that server or client or node. Features may also be combined with features known in the art, particularly although not limited to those described in the Background section or in publications mentioned therein.
[0226] Conversely, features of the invention, including operations, which are described for brevity in the context of a single embodiment or in a certain order, may be provided separately or in any suitable subcombination, including with features known in the art (particularly although not limited to those described in the Background section or in publications mentioned therein), or in a different order. "e.g." is used herein in the sense of a specific example which is not intended to be limiting. Each method may comprise all or any subset of the operations illustrated or described, suitably ordered, e.g. as illustrated or described herein.
[0227] Devices, apparatus or systems shown coupled in any of the drawings may in fact be integrated into a single platform in certain embodiments or may be coupled via any appropriate wired or wireless coupling such as but not limited to optical fiber, Ethernet, Wireless LAN, HomePNA, power line communication, cell phone, Smart Phone (e.g. iPhone), Tablet, Laptop, PDA, Blackberry GPRS, Satellite including GPS, or other mobile delivery. It is appreciated that in the description and drawings shown and described herein, functionalities described or illustrated as systems and sub-units thereof can also be provided as methods and operations therewithin, and functionalities described or illustrated as methods and operations therewithin can also be provided as systems and sub-units thereof. The scale used to illustrate various elements in the drawings is merely exemplary and/or appropriate for clarity of presentation and is not intended to be limiting.
[0228] Any suitable communication may be employed between separate units herein e.g. wired data communication and/or in short-range radio communication with sensors such as cameras e.g. via WiFi, Bluetooth or Zigbee.
[0229] Any processing functionality illustrated (or described herein) may be executed by any device having a processor, such as but not limited to a mobile telephone, set-top-box, TV, remote desktop computer, game console, tablet, mobile computer e.g. laptop, or other computer terminal, or embedded remote unit, which may either be networked itself (e.g. may itself be a node in a conventional communication network) or may be conventionally tethered to a networked device (i.e. to a device which is a node in a conventional communication network, or is tethered directly or indirectly/ultimately to such a node).
[0230] Any operation or characteristic described herein may be performed by another actor outside the scope of the patent application and the description is intended to include an apparatus, whether hardware, firmware or software, which is configured to perform, enable or facilitate that operation, or to enable, facilitate or provide that characteristic.
[0231] The terms processor or controller or module or logic as used herein are intended to include hardware such as computer microprocessors or hardware processors, which typically have digital memory and processing capacity, such as those available from, say, Intel and Advanced Micro Devices (AMD). Any operation or functionality or computation or logic described herein may be implemented entirely or in any part on any suitable circuitry, including any such computer microprocessor/s, as well as in firmware or in hardware or any combination thereof.
[0232] It is appreciated that elements illustrated in more than one drawing, and/or elements in the written description, may still be combined into a single embodiment, except if otherwise specifically clarified herewithin. Any of the systems shown and described herein may be used to implement, or may be combined with, any of the operations or methods shown and described herein.
[0233] It is appreciated that any features, properties, logic, modules, blocks, operations or functionalities described herein which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment, except where the specification or general knowledge specifically indicates that certain teachings are mutually contradictory and cannot be combined. Any of the systems shown and described herein may be used to implement or may be combined with, any of the operations or methods shown and described herein.
[0234] Conversely, any modules, blocks, operations or functionalities described herein, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination, including with features known in the art.
[0235] Each element, e.g. operation described herein, may have all characteristics and attributes described or illustrated herein, or, according to other embodiments, may have any subset of the characteristics or attributes described herein.