G06F16/58

METHODS AND SYSTEMS FOR DISAMBIGUATING USER INPUT BASED ON DETECTION OF ENSEMBLES OF ITEMS
20220121707 · 2022-04-21 ·

Systems and methods are described for disambiguating user input based on the physical location of items in the vicinity of a user. The system determines that a search query received from a user contains an ambiguity. In response, the system identifies a plurality of items in the physical vicinity of the user and analyzes the identified items to determine whether they form a first ensemble of items or a second ensemble of items. If the items form a first ensemble, the system performs a search using the search query and a first keyword related to the first ensemble of items; if the items form a second ensemble, the system performs a search using the search query and a second keyword related to the second ensemble of items. The system then outputs results of the performed search.
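The claimed flow can be sketched as follows. The ensemble definitions, detected item names, and appended keywords below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of ensemble-based query disambiguation.
# Ensemble contents and keywords are assumptions for illustration.
ENSEMBLES = {
    "kitchen": {"items": {"pan", "stove", "knife", "cutting board"},
                "keyword": "cooking"},
    "gym": {"items": {"dumbbell", "treadmill", "yoga mat"},
            "keyword": "exercise"},
}

def classify_ensemble(detected_items):
    """Return the ensemble whose item set best overlaps the detected items."""
    best, best_overlap = None, 0
    for name, spec in ENSEMBLES.items():
        overlap = len(spec["items"] & set(detected_items))
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    return best

def disambiguate_query(query, detected_items):
    """Append the matched ensemble's keyword to an ambiguous query."""
    ensemble = classify_ensemble(detected_items)
    if ensemble is None:
        return query  # no ensemble detected; search as-is
    return f"{query} {ENSEMBLES[ensemble]['keyword']}"
```

For example, an ambiguous query issued near kitchen items would be searched as `"best scale cooking"` rather than `"best scale"`.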

SYSTEM FOR MULTI-TAGGING IMAGES
20220121706 · 2022-04-21 ·

A system with a simple, intuitive, efficient interface is described for creating multi-tagged image files and playing back the tags on demand. The system includes a display for presenting the image to a user; a user interface adapted to receive user input creating a user-selectable zone around each selected location; a recording device for creating an object associated with each user-selectable zone; and a packing device that merges the image, the user-selectable zones, and their associated objects into a tagged image file having a unique filename extension indicating that it is a tagged image file, and saves the tagged image file. On playback, the image is displayed to the user, who may select a user-selectable zone; the object file associated with that zone is then played back. The user may also select an option that causes the objects to autoplay in a predetermined sequence, and may delete, edit, or re-record objects.
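One way to realize the packing step is a single container file that bundles the image with its zones and recorded objects. The `.tagimg` extension, JSON layout, and base64 encoding below are assumptions for illustration, not the patent's actual file format.

```python
# Minimal sketch of packing/unpacking a multi-tagged image file.
# The ".tagimg" extension and JSON container are illustrative assumptions.
import base64
import json
from dataclasses import dataclass

@dataclass
class TagZone:
    x: int
    y: int
    w: int
    h: int
    media: bytes  # e.g. a recorded audio clip associated with this zone

def pack_tagged_image(image_bytes, zones, out_path):
    """Merge the image and its tag zones into one container file."""
    if not out_path.endswith(".tagimg"):
        raise ValueError("tagged image files use the .tagimg extension")
    payload = {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "zones": [
            {"x": z.x, "y": z.y, "w": z.w, "h": z.h,
             "media": base64.b64encode(z.media).decode("ascii")}
            for z in zones
        ],
    }
    with open(out_path, "w") as f:
        json.dump(payload, f)

def unpack_tagged_image(path):
    """Recover the image bytes and tag zones for playback."""
    with open(path) as f:
        payload = json.load(f)
    image = base64.b64decode(payload["image"])
    zones = [TagZone(d["x"], d["y"], d["w"], d["h"],
                     base64.b64decode(d["media"]))
             for d in payload["zones"]]
    return image, zones
```

On playback, selecting a zone would look up its `media` bytes and hand them to a player; autoplay would simply iterate over `zones` in order.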

Parallel data access method and system for massive remote-sensing images

A parallel data access method for massive remote-sensing images includes: 1) segmenting a remote-sensing image to be processed using a set grid system, the data in each grid corresponding to a data block; 2) collecting a data access log of the underlying distributed object storage system Ceph over a past period of time, and measuring a load index for each Ceph cluster and a load index for each pool; 3) selecting the pool with the minimum load in the Ceph cluster with the minimum current load to serve as the storage position of the current data block, and writing each data block into its corresponding pool; 4) returning a data identifier dataid and a data access path for the remote-sensing image; and 5) storing metadata of each data block in a metadata database. The method supports rapid, highly concurrent reads and writes of large-area grid data blocks.
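The placement rule in step 3 (least-loaded pool within the least-loaded cluster) can be sketched as below. The nested dictionary of load indices is an assumed simplification of the Ceph cluster and pool metrics the abstract describes.

```python
# Illustrative sketch of step 3: pick the least-loaded pool in the
# least-loaded cluster. The load-index structure is an assumption.
def pick_storage_pool(clusters):
    """clusters: {cluster_name: {"load": float, "pools": {pool_name: float}}}

    Returns (cluster_name, pool_name) for the current data block.
    """
    cluster = min(clusters, key=lambda c: clusters[c]["load"])
    pool = min(clusters[cluster]["pools"],
               key=clusters[cluster]["pools"].get)
    return cluster, pool
```

Each grid data block would be written to the returned pool, with its `dataid`, access path, and grid coordinates recorded in the metadata database (steps 4 and 5).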

STRUCTURE DIAGNOSTIC CASE PRESENTATION DEVICE, METHOD, AND PROGRAM
20230245296 · 2023-08-03 ·

Provided are a structure diagnostic case presentation device, method, and program capable of supporting a diagnostician in making a more appropriate diagnosis of damage to a target structure to be diagnosed. When information (damage information) regarding the target structure is acquired by a first information acquisition unit (20), a similar damage extraction unit (22-1) extracts, from a database (12), similar damage resembling the damage of the target structure based on the acquired damage information. A specific diagnostic case extraction unit (24-1) extracts from the database (12), as a specific diagnostic case, a diagnostic result of a structure that has the extracted similar damage and whose diagnostic results differ at two or more points in time. Information related to the extracted specific diagnostic case is displayed on a display unit (14).
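The extraction step performed by unit (24-1) amounts to a filter over diagnostic histories. The record layout, field names, and example diagnoses below are illustrative assumptions.

```python
# Hypothetical sketch of extracting "specific diagnostic cases":
# structures with similar damage whose diagnoses changed over time.
def extract_specific_cases(records, similar_ids):
    """records: dicts with a structure_id and a time-ordered history of
    (timestamp, diagnosis) pairs. A specific diagnostic case is one whose
    diagnosis differs between at least two points in time."""
    out = []
    for r in records:
        if r["structure_id"] not in similar_ids:
            continue  # not among the structures with similar damage
        diagnoses = [d for _, d in r["history"]]
        if len(set(diagnoses)) >= 2:  # diagnosis changed over time
            out.append(r)
    return out
```

Presenting cases whose diagnoses changed over time is what lets the diagnostician see how seemingly similar damage can evolve differently.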

TYPE AHEAD SEARCH AMELIORATION BASED ON IMAGE PROCESSING

System and methods for type ahead search amelioration based on image processing are provided. In embodiments, a method includes: capturing, by a computing device, image data based on images viewed by a user during a computing session; converting, by the computing device, the image data to text using image processing; and storing, by the computing device, the text in a temporary buffer of a type ahead search function, wherein the text constitutes image context data for use by the type ahead search function.
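The temporary buffer feeding the type-ahead function can be sketched as below. The OCR conversion itself is assumed to happen upstream; the buffer size, recency ordering, and suggestion API are illustrative assumptions.

```python
# Illustrative sketch of a temporary buffer of image-derived context
# words for type-ahead suggestions. Capacity and API are assumptions.
from collections import deque

class TypeAheadBuffer:
    def __init__(self, maxlen=100):
        # bounded buffer: old image context expires as new words arrive
        self.words = deque(maxlen=maxlen)

    def ingest_image_text(self, text):
        """Store text produced by image processing (OCR assumed upstream)."""
        self.words.extend(text.lower().split())

    def suggest(self, prefix, k=3):
        """Return up to k distinct buffered words matching the typed prefix,
        most recently seen first."""
        prefix = prefix.lower()
        seen, out = set(), []
        for w in reversed(self.words):  # most recent context first
            if w.startswith(prefix) and w not in seen:
                seen.add(w)
                out.append(w)
                if len(out) == k:
                    break
        return out
```

A user who had just viewed an image captioned "Golden Gate Bridge" and then typed "g" would see suggestions drawn from that visual context rather than from generic completions.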

A Method and A System for Processing an Image and for Generating a Contextually Coherent Video Based on Images Processed Thereby
20220122268 · 2022-04-21 ·

The invention provides a computer-implemented method of processing an image, the method comprising the steps of: identifying objects in the image using a content analysis engine; based on information from a relationship database, using a contextual analysis engine to automatically identify contextual relationships between some or all of the identified objects in the image and derive a background or landscape of said image from one or more of said identified objects; and subsequently analysing the identified contextual relationships between the identified objects and the derived background or landscape to thereby associate one or more contexts with said image.
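The contextual-analysis step can be sketched as a lookup against a relationship database. The relationship pairs, background mappings, and context labels below are illustrative assumptions, not the patent's data.

```python
# Hypothetical sketch of the contextual analysis engine: map detected
# objects to contexts and a background via a relationship database.
# All table contents here are assumptions for illustration.
RELATIONSHIPS = {
    frozenset({"surfboard", "wave"}): "surfing",
    frozenset({"snow", "ski"}): "skiing",
}
BACKGROUNDS = {"wave": "ocean", "snow": "mountain"}

def contexts_for_image(objects):
    """Return (contexts, background) derived from the identified objects."""
    objs = set(objects)
    contexts = {ctx for pair, ctx in RELATIONSHIPS.items() if pair <= objs}
    background = next((BACKGROUNDS[o] for o in objects if o in BACKGROUNDS),
                      None)
    return contexts, background
```

Images sharing a derived context (e.g. "surfing") could then be ordered into a contextually coherent video sequence.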

Digital image tagging apparatuses, systems, and methods

In an exemplary embodiment, user input is received, a selected portion of a digital image is identified based on the user input, a data instance is selected, and a tag is applied to the selected portion of the digital image. The applied tag provides an association between the selected portion of the digital image and the data instance. In certain examples, a visual indicator representative of the tag is provided for display together with the tagged digital image.
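The tag-and-lookup flow described above can be sketched with a small data model. The region representation, `data_instance` strings, and method names below are illustrative assumptions.

```python
# Minimal sketch of applying a tag to a selected image portion and
# resolving it on a later selection. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Tag:
    region: tuple        # (x, y, w, h) selected via user input
    data_instance: str   # e.g. a contact, URL, or note tied to the region

@dataclass
class TaggedImage:
    path: str
    tags: list = field(default_factory=list)

    def apply_tag(self, region, data_instance):
        """Associate a data instance with the selected image portion."""
        tag = Tag(region, data_instance)
        self.tags.append(tag)
        return tag

    def tag_at(self, x, y):
        """Return the data instance whose region contains (x, y), if any."""
        for t in self.tags:
            rx, ry, rw, rh = t.region
            if rx <= x < rx + rw and ry <= y < ry + rh:
                return t.data_instance
        return None
```

A display layer would draw a visual indicator (e.g. an outline) over each `Tag.region` so the user can see which portions of the image carry associations.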