An advertising system, comprising: a two-wheeled vehicle; a display mounted on the vehicle to show advertisements; and a processor coupled to the display, wherein the processor captures data associated with showing an advertisement on the display, wherein the data includes one or more of the following: advertising parameters from the advertiser; mobile location; mobile distance from a landmark; advertising categories; mobile location demographics; pricing of advertised goods or services; time and date; speed and direction of the advertising display; traffic characteristics associated with the display; views of the display; and advertising budget characteristics.
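One way the captured data fields above could feed ad selection is a simple relevance score. The following sketch is purely illustrative: the field names, weights, and scoring rule are assumptions, not part of the claim.

```python
# Hypothetical sketch: scoring one advertisement for a vehicle-mounted
# display using a few of the captured data fields named in the claim.
# Field names and weights are illustrative assumptions.

def score_ad(ad, context):
    """Return a relevance score for showing `ad` given display `context`."""
    score = 0.0
    # Prefer ads whose category matches the location's demographics.
    if ad["category"] in context["location_categories"]:
        score += 2.0
    # Prefer ads near a relevant landmark (closer is better, up to 5 km).
    dist_km = context["distance_from_landmark_km"]
    score += max(0.0, 1.0 - dist_km / 5.0)
    # Scale by remaining budget so exhausted campaigns fade out.
    score *= min(1.0, ad["remaining_budget"] / ad["daily_budget"])
    return score

ad = {"category": "coffee", "remaining_budget": 40.0, "daily_budget": 80.0}
ctx = {"location_categories": {"coffee", "retail"},
       "distance_from_landmark_km": 1.0}
print(round(score_ad(ad, ctx), 2))  # 1.4
```

The multiplicative budget term is one plausible design choice: it lets location and category relevance rank ads while a depleted budget proportionally suppresses them.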
A system includes a 5G cellular transceiver to communicate with a predetermined target; one or more antennas coupled to the transceiver, each electrically or mechanically steerable toward the predetermined target; a processor to control a directionality of the one or more antennas in communication with the predetermined target; and an edge processing module coupled to the processor and the one or more antennas to provide low-latency computation for the predetermined target.
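Controlling antenna directionality toward a target reduces, in the simplest planar case, to computing a bearing from antenna to target. This minimal sketch assumes 2D coordinates are already known; it is not the claimed controller, only the geometry it would rely on.

```python
import math

# Illustrative geometry only: the azimuth a steerable antenna should
# point at to face a target, given planar (x, y) positions of both.

def steering_azimuth_deg(antenna_xy, target_xy):
    """Bearing from antenna to target, degrees counterclockwise from +x."""
    dx = target_xy[0] - antenna_xy[0]
    dy = target_xy[1] - antenna_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

print(steering_azimuth_deg((0.0, 0.0), (1.0, 1.0)))  # 45.0
```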
A smart-car method for navigating a road includes detecting one or more objects using a camera and a sensor to delimit the boundaries of the road; creating a 3D model based on outputs of the camera and sensor; and navigating the road with a vehicle.
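The detect-fuse-navigate steps above can be sketched under assumed data shapes: the camera and the sensor each yield left/right road-boundary offsets per range step, and steering toward the midpoint of the fused boundaries stands in for "navigating the road". Everything here is an illustrative simplification.

```python
# Minimal sketch of the claimed pipeline under assumed data shapes:
# each modality reports (left, right) lateral boundary offsets in meters
# at successive range steps ahead of the vehicle.

def fuse_boundaries(camera_pairs, sensor_pairs):
    """Average per-step (left, right) boundary offsets from two modalities."""
    return [((cl + sl) / 2.0, (cr + sr) / 2.0)
            for (cl, cr), (sl, sr) in zip(camera_pairs, sensor_pairs)]

def lane_centerline(boundaries):
    """Midpoint of the fused left/right boundaries at each range step."""
    return [(left + right) / 2.0 for left, right in boundaries]

cam = [(-1.8, 1.8), (-1.7, 1.9)]
lidar = [(-2.0, 2.0), (-1.9, 2.1)]
print(lane_centerline(fuse_boundaries(cam, lidar)))
```

Averaging the two modalities is the crudest possible fusion; a real system would weight each by its estimated uncertainty.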
A hearing system includes an eye tracking module to detect a sound region of interest; one or more microphone arrays coupled to the eye tracking module and focused on the detected sound region of interest; and one or more amplifiers wirelessly coupled to the one or more microphone arrays to render sound from the sound region of interest for one or more ears.
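One standard way a microphone array can "focus" on a region is delay-and-sum beamforming: signals are time-aligned according to per-microphone delays derived from the target direction, so sound from that direction adds coherently. The sketch below assumes integer sample delays have already been computed from the eye-tracked region; it is one possible mechanism, not necessarily the claimed one.

```python
# Hedged sketch: delay-and-sum beamforming. `delays[m]` is how many
# samples later microphone m hears the target than the reference mic;
# advancing each signal by its delay re-aligns the target's sound.

def delay_and_sum(signals, delays):
    """Advance each signal by its sample delay and average the results."""
    n = len(signals[0])
    out = [0.0] * n
    for sig, d in zip(signals, delays):
        for i in range(n):
            j = i + d
            if 0 <= j < len(sig):
                out[i] += sig[j]
    return [v / len(signals) for v in out]

# Two mics hear the same pulse, the second one sample later; steering
# with delays (0, 1) re-aligns the pulses so they add coherently.
mic_a = [0.0, 1.0, 0.0, 0.0]
mic_b = [0.0, 0.0, 1.0, 0.0]
print(delay_and_sum([mic_a, mic_b], [0, 1]))  # [0.0, 1.0, 0.0, 0.0]
```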
A method for transporting people by providing a vehicle with a cab and a propulsion unit, and a moveable actuator coupled to the propulsion unit to move the propulsion unit between a first position above the cab during lift-off and a second position during lateral flight; determining a hand control gesture captured by a plurality of cameras or sensors in the vehicle, wherein a sequence of finger, palm or hand movements represents a vehicle control request; and determining vehicle control options based on a model of the gesture, a current state of the vehicle and the environment of the vehicle.
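The gesture-to-control step above can be sketched as a lookup from a recognized movement sequence to a control request, filtered by what the current vehicle state allows. The gesture names, state names, and allowed-transition table below are invented for illustration only.

```python
# Hypothetical sketch: map a recognized finger/palm/hand movement
# sequence to a vehicle control request, then validate the request
# against the current vehicle state. All names are assumptions.

GESTURE_COMMANDS = {
    ("palm_up", "raise"): "ascend",
    ("palm_down", "lower"): "descend",
    ("fist", "hold"): "hover",
}

ALLOWED = {
    "lift_off": {"ascend", "hover"},
    "lateral_flight": {"ascend", "descend", "hover"},
}

def control_request(gesture_seq, vehicle_state):
    """Return the requested control if valid in this state, else None."""
    request = GESTURE_COMMANDS.get(tuple(gesture_seq))
    if request in ALLOWED.get(vehicle_state, set()):
        return request
    return None

print(control_request(["palm_up", "raise"], "lift_off"))    # ascend
print(control_request(["palm_down", "lower"], "lift_off"))  # None
```

Filtering against the state table reflects the claim's point that control options depend on the vehicle's current state, e.g. descending is not offered during lift-off.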
A smart-car method for navigating a road includes detecting road-pavement markings using a camera and a sensor; creating a 3D model based on outputs of the camera and sensor; and navigating the road with a vehicle.
Smart car operations are detailed. The system uses a neural network and population statistics to determine whether a vehicle operation is reasonably safe, and uses the result to guide the operation of autonomous vehicles as a group.
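The population-statistics half of the check can be sketched as an outlier test: an operation's risk score is compared with the distribution of scores across the fleet, and anything far above the fleet mean is flagged. The neural network that produces the risk score is abstracted away here, and the threshold rule is an assumption for illustration.

```python
import math

# Hedged sketch of the population-statistics check: flag an operation
# whose risk score exceeds the fleet mean by more than k standard
# deviations. The scores would come from a neural network (not shown).

def reasonably_safe(risk_score, fleet_scores, k=2.0):
    """True if risk_score is within k standard deviations of the fleet mean."""
    n = len(fleet_scores)
    mean = sum(fleet_scores) / n
    var = sum((s - mean) ** 2 for s in fleet_scores) / n
    return risk_score <= mean + k * math.sqrt(var)

fleet = [0.10, 0.12, 0.11, 0.09, 0.13]
print(reasonably_safe(0.12, fleet))  # True  (typical operation)
print(reasonably_safe(0.50, fleet))  # False (outlier)
```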
A smart-car method for autonomous navigation includes creating a 3D model based on outputs of a camera and a sensor; accessing a high-definition map database and generating a trip with travel segments from origin to destination; detecting a freeway entrance or an exit lane based on a road marking using the camera and sensor; if the travel segment passes the freeway entrance or exit, following the current lane without exiting; and otherwise taking the freeway entrance or exit.
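The branch at the end of the claim can be sketched directly. The data shapes are assumptions: trip segments carry the road ID they continue on, and a detected ramp carries the road ID it leads to.

```python
# Illustrative sketch of the claim's branch: stay in lane when the
# planned route continues past a detected ramp, take the ramp when the
# next travel segment is the road the ramp leads to. Field names are
# assumptions for illustration.

def lane_action(current_segment, next_segment, detected_ramp):
    """Decide whether to take a detected freeway entrance/exit ramp."""
    if detected_ramp is None:
        return "follow_current_lane"
    if next_segment["road_id"] == current_segment["road_id"]:
        # Route continues on the current road: pass the ramp.
        return "follow_current_lane"
    if next_segment["road_id"] == detected_ramp["leads_to"]:
        # Route turns onto the ramp's road: take it.
        return "take_ramp"
    return "follow_current_lane"

seg_a = {"road_id": "I-80"}
seg_b = {"road_id": "CA-13"}
ramp = {"leads_to": "CA-13"}
print(lane_action(seg_a, seg_b, ramp))  # take_ramp
print(lane_action(seg_a, seg_a, ramp))  # follow_current_lane
```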
A system includes a mobile device having one or more cameras to take images; a sensor detecting reflected light from one or more lasers and a diffuser to detect object range or dimension; code for motion tracking, for environmental understanding by detecting planes in an environment, and for estimating light and dimensions of the surroundings based on the one or more lasers; code to estimate a three-dimensional (3D) volume of an object from multiple perspectives and from projected laser beams to measure size or scale and determine locations of points on the object's surface in a plane or a slice using time-of-flight, wherein positions and cross-sections for different slices are correlated to construct a 3D model of the object, including object position and shape; and the device receiving a user request to select content from one or more augmented-, virtual-, or extended-reality contents and rendering a reality view of the environment.
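The slice-based reconstruction can be sketched with only the time-of-flight physics: each reading along a known beam angle yields one surface point in a slice plane, and stacking slices at known heights yields rough 3D points. Beam geometry and data shapes below are assumptions; this is a sketch of the principle, not the claimed implementation.

```python
import math

# Hedged sketch: convert time-of-flight readings along known beam
# angles into 3D surface points, one slice (fixed z) at a time.

C = 299_792_458.0  # speed of light, m/s

def tof_to_point(angle_rad, tof_s, slice_z):
    """Surface point (x, y, z) for one beam in the slice at height slice_z."""
    r = C * tof_s / 2.0  # round-trip time -> one-way range
    return (r * math.cos(angle_rad), r * math.sin(angle_rad), slice_z)

def reconstruct(slices):
    """slices: list of (z, [(angle_rad, tof_s), ...]) -> flat list of 3D points."""
    return [tof_to_point(a, t, z) for z, beams in slices for a, t in beams]

# One beam straight ahead (angle 0) with a round trip of 2/C seconds,
# i.e. a one-way range of about 1 meter.
pts = reconstruct([(0.0, [(0.0, 2.0 / C)])])
print(pts)
```

Correlating points across slices (the claim's cross-section correlation) would then stitch these per-slice outlines into a closed 3D surface.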
A method and system for providing live entertainment and teaching using interactive video and audio over an open internet connection to multiple viewers. The entertaining talent-person or teacher may be located in any of various locations. In each location the talent-person talks about a particular subject such as cooking, travel, physics, farming, music, politics, or baseball, and broadcasts live audio and video through a camera and microphone connected to a computer, which is connected to the internet. The viewers see and hear the talent live by watching a web page on the viewers' devices connected to the internet. The viewers can comment and ask questions to the talent-person through a chat input available on the web site. The talent-person reads the viewers' comments and questions and responds to them.