Patent classifications
G06F16/58
Methods, systems, and products for recalling and retrieving documentary evidence
Methods, systems, and products help users recall memories and search for content of those memories. When a user cannot recall a memory, the user is prompted with questions to help recall the memory. As the user answers the questions, a virtual recollection of the memory is synthesized from the answers to the questions. When the user is satisfied with the virtual recollection of the memory, a database of content may be searched for the virtual recollection of the memory. Video data, for example, may be retrieved that matches the virtual recollection of the memory. The video data is thus historical data documenting past events.
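The recall-and-retrieve flow described above can be sketched as follows. This is a minimal illustration only; the prompts, the tag-based "virtual recollection," and the video store are hypothetical stand-ins, not structures from the patent.

```python
# Hypothetical prompts and a toy video-metadata store; all names are illustrative.
PROMPTS = ["Where were you?", "Who was with you?", "What season was it?"]

VIDEOS = [
    {"id": "v1", "tags": {"beach", "family", "summer"}},
    {"id": "v2", "tags": {"office", "colleagues", "winter"}},
]

def build_recollection(answers):
    """Synthesize a 'virtual recollection' as a tag set from the user's answers."""
    return {word.lower() for answer in answers for word in answer.split()}

def retrieve(recollection):
    """Return IDs of videos whose metadata overlaps the virtual recollection."""
    return [v["id"] for v in VIDEOS if v["tags"] & recollection]
```

In this sketch the recollection is refined simply by answering more prompts and re-running the search, mirroring the iterate-until-satisfied loop in the abstract.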
Methods, systems, and media for displaying information related to displayed content upon detection of user attention
Methods, systems, and media for displaying information related to displayed content upon detection of user attention are provided. In some implementations, a method for presenting information to a user is provided, the method comprising: detecting a presence of a user; retrieving content and associated content metadata; causing the content to be presented to the user in response to detecting the presence of the user; detecting a user action indicative of user attention to at least a portion of the content presented to the user; and in response to detecting the user action, causing information to be presented to the user, wherein the information presented to the user corresponds to the content metadata associated with the portion of the content.
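The presence-then-attention flow in this abstract can be sketched as two event handlers. The content segments and metadata strings below are hypothetical examples, not from the patent.

```python
# Hypothetical content items, each carrying associated metadata.
CONTENT = [
    {"segment": "intro", "metadata": "Directed by A. Example"},
    {"segment": "song", "metadata": "Track: Example Theme"},
]

def on_user_present():
    """Presence detected: cause the content to be presented (here, list segments)."""
    return [item["segment"] for item in CONTENT]

def on_attention(segment):
    """User attention detected on a segment: surface that segment's metadata."""
    for item in CONTENT:
        if item["segment"] == segment:
            return item["metadata"]
    return None
```

The key design point is that metadata is only surfaced in response to the attention event, not unconditionally alongside the content.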
Method, apparatus, server and storage medium for image retrieval
Embodiments of the present disclosure disclose a method, apparatus, server and storage medium for image retrieval. The method includes: identifying a plurality of groups of images having identical contents from images on all webpages; aggregating, for each image group, the image-related texts on all source webpages of each image to obtain text descriptions of the image group; establishing an inverted index for each image in the image groups based on the text descriptions of the image group, the inverted index at least including, for each text description, the source webpages corresponding to all text descriptions of the image group to which the text description belongs; and performing image retrieval based on an inputted query and the inverted index.
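The aggregate-then-index steps can be sketched with an ordinary term-based inverted index. The image hashes, page names, and surrounding texts below are made-up sample data, and simple whitespace tokenization stands in for whatever text processing an implementation would actually use.

```python
from collections import defaultdict

# Hypothetical sample data: image-content hash -> (source webpage, surrounding text).
# Images with the same hash are treated as one group of identical-content images.
occurrences = {
    "img_hash_a": [("page1.html", "red fox jumping"), ("page2.html", "fox in snow")],
    "img_hash_b": [("page3.html", "city skyline at night")],
}

# Inverted index: term -> set of image groups containing that term in any description.
inverted = defaultdict(set)
for group, entries in occurrences.items():
    for _page, text in entries:
        for term in text.lower().split():
            inverted[term].add(group)

def search(query):
    """Return image groups matching all query terms, with their source webpages."""
    terms = query.lower().split()
    if not terms:
        return {}
    groups = set.intersection(*(inverted.get(t, set()) for t in terms))
    return {g: [page for page, _ in occurrences[g]] for g in groups}
```

Because texts from *all* source webpages are aggregated per group, a query can match an image even when the matching description came from a different page than the one serving that copy of the image.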
Methods and systems for disambiguating user input based on detection of ensembles of items
Systems and methods are described for disambiguating user input based on a physical location of items in a vicinity of a user. The system determines that a search query received from a user contains an ambiguity. In response, the system identifies a plurality of items in the physical vicinity of the user. Then, the system analyzes the identified plurality of items to determine whether the plurality of items forms a first ensemble of items or a second ensemble of items. If the plurality of items forms a first ensemble of items, the system performs a search using the search query and a first keyword related to the first ensemble of items. If the plurality of items forms a second ensemble of items, the system performs a search using the search query and a second keyword related to the second ensemble of items. The system then outputs results of the performed search.
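The ensemble-matching step can be sketched as picking the ensemble with the most overlap against the detected items, then appending its keyword to the ambiguous query. The ensembles and keywords below are invented examples.

```python
# Hypothetical ensembles: each is a set of co-occurring items plus a search keyword.
ENSEMBLES = [
    ({"whisk", "mixing bowl", "flour"}, "baking"),
    ({"dumbbell", "yoga mat", "water bottle"}, "fitness"),
]

def disambiguate(query, detected_items):
    """Augment an ambiguous query with the keyword of the best-matching ensemble."""
    items = set(detected_items)
    best_keyword, best_overlap = None, 0
    for ensemble, keyword in ENSEMBLES:
        overlap = len(items & ensemble)
        if overlap > best_overlap:
            best_keyword, best_overlap = keyword, overlap
    return f"{query} {best_keyword}" if best_keyword else query
```

For example, "best rollers" with a whisk and flour nearby becomes "best rollers baking" (rolling pins), while the same query near gym equipment would be steered toward fitness results instead.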
System and method for dynamic thresholding for multiple result image cross correlation
The present disclosure relates to a computer-implemented system and method for finding matching occurrences of an item of interest (or image or sub-image) within a document (or larger image) via cross correlation and setting a dynamic threshold for each document (or larger image). The described system and method are capable of matching and locating the one or more items of interest within each specific document (or larger image).
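A per-document dynamic threshold can be sketched by scoring every placement of the item of interest with normalized cross correlation and then setting the cutoff from that document's own score distribution. The `mean + k·std` rule and the parameter `k` below are illustrative assumptions, not the patent's specific thresholding scheme.

```python
import numpy as np

def match_template(image, template, k=3.0):
    """Find occurrences of `template` in `image` via normalized cross correlation,
    using a dynamic threshold derived from this image's own score distribution."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * tnorm
            scores[y, x] = (p * t).sum() / denom if denom else 0.0
    # Dynamic threshold: recomputed per image, so each document gets its own cutoff.
    threshold = scores.mean() + k * scores.std()
    ys, xs = np.where(scores >= threshold)
    return list(zip(ys.tolist(), xs.tolist())), threshold
```

Because the threshold is recomputed for every document, a noisy scan and a clean scan end up with different cutoffs rather than sharing one fixed global value, which is the point of per-document dynamic thresholding.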
Artificial intelligence-based generation of sequencing metadata
The technology disclosed uses neural networks to determine analyte metadata by (i) processing input image data derived from a sequence of image sets through a neural network and generating an alternative representation of the input image data, wherein the input image data has an array of units that depicts analytes and their surrounding background, (ii) processing the alternative representation through an output layer and generating an output value for each unit in the array, (iii) thresholding the output values of the units and classifying a first subset of the units as background units depicting the surrounding background, and (iv) locating peaks in the output values of the units and classifying a second subset of the units as center units containing centers of the analytes.
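Steps (iii) and (iv) — thresholding into background units and locating peaks as analyte centers — can be sketched on a precomputed map of output values. The threshold value and the 3x3 strict-local-maximum peak test are illustrative choices, not the disclosed network's actual post-processing.

```python
import numpy as np

def classify_units(output, background_threshold=0.3):
    """Split a map of per-unit output values into background units and center units.

    Background units fall below the threshold; center units are strict local
    maxima (peaks) among the remaining units.
    """
    background = output < background_threshold
    centers = np.zeros_like(output, dtype=bool)
    h, w = output.shape
    for y in range(h):
        for x in range(w):
            if background[y, x]:
                continue
            # A unit is a center if it is the unique maximum in its 3x3 window.
            y0, y1 = max(0, y - 1), min(h, y + 2)
            x0, x1 = max(0, x - 1), min(w, x + 2)
            window = output[y0:y1, x0:x1]
            if output[y, x] == window.max() and (window == window.max()).sum() == 1:
                centers[y, x] = True
    return background, centers
```

The peak test matters because several adjacent units can clear the threshold around one analyte; only the local maximum is taken as the analyte's center.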
Aligning unlabeled images to surrounding text
Aspects of the present invention disclose a method for extracting information of an unlabeled image within a document and aligning the information to text of the document. The method includes one or more processors identifying an image that is not associated with a corresponding label in a document that includes text. The method further includes determining a feature of an object of the image. The method further includes identifying an alignment candidate of the text of the document based at least in part on the feature of the object, wherein the alignment candidate is a segment of the text of the document identified as corresponding to the feature of the object. The method further includes aligning the feature with the alignment candidate of the text of the document.
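The alignment-candidate step can be sketched as matching detected object features against segments of the document's text. The word-overlap scoring below is a deliberately simple stand-in for whatever feature-to-text matching an implementation would actually use, and the example features and sentences are invented.

```python
def align_to_text(features, segments):
    """Return the text segment that best matches the image's object features,
    scored here by simple word overlap (an illustrative heuristic)."""
    feature_words = {f.lower() for f in features}
    best, best_score = None, 0
    for segment in segments:
        score = len(set(segment.lower().split()) & feature_words)
        if score > best_score:
            best, best_score = segment, score
    return best
```

Returning `None` when no segment shares any words models the case where the unlabeled image has no plausible alignment candidate in the surrounding text.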