Patent classifications
G06F16/58
Generic card feature extraction based on card rendering as an image
Methods and apparatus for using features of images representing content items to improve the presentation of the content items are disclosed. In one embodiment, a plurality of digital images are obtained, where each of the images represents a corresponding one of a plurality of content items. Image features of each of the digital images are determined. Additional features including at least one of user features pertaining to a user of a client device or contextual features pertaining to the client device are ascertained. At least a portion of the content items are provided via a network to the client device using features that include or are derived from both the image features of each of the plurality of digital images and the additional features.
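The abstract above can be sketched as a ranking step that scores each rendered card by combining its image features with user and contextual features. This is a hypothetical illustration, not the patented implementation; the feature names, weights, and `rank_cards` function are all assumptions.

```python
def rank_cards(cards, user_features, context_features):
    """Hypothetical sketch: score each card by a weighted sum of its
    image features plus the additional (user + contextual) features,
    and return cards highest-scoring first."""
    def score(card):
        # Image features extracted from the card rendered as an image,
        # e.g. {"saliency": 0.9, "brightness": 0.4} (illustrative keys).
        combined = sum(card["image_features"].values())
        # Fold in user and device-context signals at an assumed weight.
        combined += 0.5 * sum(user_features.values())
        combined += 0.5 * sum(context_features.values())
        return combined
    return sorted(cards, key=score, reverse=True)
```

Because the user and context terms are constant across cards in a single request, the ordering here is driven by the per-card image features; in a real ranker the additional features would instead interact with each card (e.g. via a learned model).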
Systems, methods, and apparatus for providing image shortcuts for an assistant application
Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
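The image-shortcut mechanism described above amounts to a registry mapping required image features to actions, consulted on each camera frame. The sketch below is a minimal assumed model (the `ImageShortcuts` class and its methods are hypothetical, not the patent's API); feature detection itself is abstracted to a set of labels.

```python
class ImageShortcuts:
    """Hypothetical registry of image shortcuts: a user command binds
    an action to a set of required image features, and the action fires
    when a later camera frame contains all of those features."""

    def __init__(self):
        self._shortcuts = []  # list of (required_features, action) pairs

    def register(self, required_features, action):
        # e.g. created from a spoken command such as
        # "when you see my coffee mug, show my agenda".
        self._shortcuts.append((set(required_features), action))

    def on_frame(self, detected_features):
        """Run every action whose required features are all present in
        the frame; return the results of the actions that fired."""
        detected = set(detected_features)
        return [action() for required, action in self._shortcuts
                if required <= detected]
```

A usage sketch: registering `{"coffee_mug"}` with an action and then passing a frame containing `coffee_mug` triggers it, while unrelated frames do not.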
System and method for sending and rendering an image by a device based on receiver's context
A system and method for sending an image to a user device based on the context of a user of the device are provided. An image to be sent to a user device may be obtained. The context of the user may be determined. The image may be analyzed to detect and prioritize objects in the image based on the context of the user. The image may be encoded such that objects are rendered on the user device in an order based on the prioritization. The encoded image may be sent to the user device.
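The prioritized-encoding idea above can be sketched as sorting detected objects so that those matching the receiver's context are serialized first. This is an assumed toy model: object detection is represented by label/confidence dicts, and the function name and context shape are hypothetical.

```python
def encode_by_priority(detected_objects, user_context):
    """Hypothetical sketch: order detected objects so that objects
    matching the receiver's contextual interests are encoded, and thus
    rendered on the device, before the rest."""
    interests = set(user_context.get("interests", []))
    prioritized = sorted(
        detected_objects,
        # Interest matches first (False sorts before True),
        # then higher-confidence detections within each group.
        key=lambda obj: (obj["label"] not in interests, -obj["confidence"]),
    )
    # Stand-in for the encoding step: emit labels in render order.
    return [obj["label"] for obj in prioritized]
```

In a real codec this ordering would control which image regions are placed earliest in a progressive bitstream; here the returned label order stands in for that.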
Methods and systems for disambiguating user input based on detection of ensembles of items
Systems and methods are described for disambiguating user input based on a physical location of items in a vicinity of a user. The system determines that a query received from a user contains an ambiguity. In response, the system identifies several items in the physical vicinity of the user. Then, the system analyzes the identified plurality of items to determine whether the plurality of items forms a first ensemble of items or a second ensemble of items. If the plurality of items forms a first ensemble of items, the system performs a search using the search query and a first keyword related to the first ensemble of items. If the plurality of items forms a second ensemble of items, the system performs a search using the search query and a second keyword related to the second ensemble of items. The system then outputs results of the performed search.
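The two-ensemble decision described above can be sketched as matching nearby items against known ensembles and expanding the ambiguous query with the best match's keyword. The ensemble definitions and keywords below are invented for illustration.

```python
# Hypothetical ensemble catalog: member items -> associated keyword.
ENSEMBLES = {
    "kitchen": ({"blender", "oven", "cutting_board"}, "cooking"),
    "home_gym": ({"dumbbell", "yoga_mat", "treadmill"}, "fitness"),
}

def disambiguate(query, nearby_items):
    """Sketch: pick the ensemble with the most items in the user's
    vicinity and append its keyword to the ambiguous search query."""
    items = set(nearby_items)
    members, keyword = max(ENSEMBLES.values(),
                           key=lambda e: len(items & e[0]))
    if not items & members:
        return query  # no recognizable ensemble; leave the query as-is
    return f"{query} {keyword}"
```

For example, asking "best knife" while a blender and oven are nearby yields a search for "best knife cooking" rather than, say, combat knives.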
Aggregating product shortage information
A system for reducing product shortage durations in retail stores based on analysis of image data is provided. The system may comprise: a communication interface configured to receive image data from retail stores indicative of a product shortage of a product type relative to information describing the placement of products of that product type on a store shelf; and at least one processor configured to: analyze the image data to detect occurrences of product shortages of the product type in the retail stores and determine durations associated with the occurrences; identify at least one common factor contributing to the durations of at least some of the occurrences of the product shortages; determine an action, associated with the at least one common factor, for potentially reducing product shortage durations of future shortages of the product type in the retail stores; and provide information associated with the identified action to an entity.
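The aggregation step above, finding a common factor behind the longest shortages, can be sketched as grouping detected shortage occurrences by a candidate factor and surfacing the worst one. The record shape and factor labels are assumptions for illustration.

```python
from collections import defaultdict

def common_shortage_factor(occurrences):
    """Hypothetical sketch: group shortage occurrences (detected from
    shelf image data) by an annotated candidate factor and return the
    factor with the longest average shortage duration."""
    totals = defaultdict(lambda: [0.0, 0])  # factor -> [sum_hours, count]
    for occ in occurrences:
        entry = totals[occ["factor"]]
        entry[0] += occ["duration_hours"]
        entry[1] += 1
    return max(totals, key=lambda f: totals[f][0] / totals[f][1])
```

The returned factor would then drive the recommended action (e.g. adjusting restock schedules); attributing each occurrence to a factor is itself a nontrivial step elided here.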
Automated visual suggestion, generation, and assessment using computer vision detection
An online system may identify content with which a user has an interest. For example, the online system may determine that a user has an interest in the content based on interaction information indicating that the user interacted with the content. In a particular example, the online system may identify image concepts included in the content based on computer vision techniques that recognize the image concepts. The online system may model probabilities that image concepts will appeal to users. Based on the modeled probabilities, the online system may automatically recommend image concepts for inclusion in candidate images, automatically generate candidate images, or assess candidate images to determine a probability of user interaction with the assessed candidate images.
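The probability modeling mentioned above can be sketched, under strong simplifying assumptions, as a per-concept interaction-rate estimate from logged impressions. A production system would use a learned model over many signals; the function below is a hypothetical frequency baseline.

```python
def concept_appeal(interactions):
    """Sketch: estimate P(interaction | image concept) from logged
    (concept, interacted) pairs, where concepts were recognized in
    content via computer vision (detection is abstracted away here)."""
    counts = {}  # concept -> (times_shown, times_interacted)
    for concept, interacted in interactions:
        shown, clicked = counts.get(concept, (0, 0))
        counts[concept] = (shown + 1, clicked + int(interacted))
    return {c: clicked / shown for c, (shown, clicked) in counts.items()}
```

Concepts with high estimated appeal could then be recommended for inclusion in candidate images, or used to score already-generated candidates.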
Method, apparatus, and system for data collection, transformation and extraction to support image and text search of antiques and collectables
Generating a knowledge base in a database, the knowledge base including a first field which specifies a plurality of known brands of a plurality of known objects, a second field which specifies a plurality of known categories corresponding to the plurality of known objects, and a third field which specifies a plurality of sets of known image-based parameters of the plurality of known objects; receiving in one or more computer memories an indication of a brand, an indication of a category, and an image-based description parameter for a particular object; comparing the indications of the brand, the category, and the image-based description parameter for the particular object with one or more of the plurality of known brands, known categories, and sets of known image-based parameters, respectively; and providing an indication of whether the particular object is one or more of the plurality of known objects, based on the comparisons.
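The compare-against-knowledge-base step above can be sketched as filtering known objects by exact brand and category, then by closeness of image-based parameters. The record layout, parameter names, and tolerance are all illustrative assumptions, not the patented schema.

```python
def match_object(knowledge_base, brand, category, image_params, tolerance=0.1):
    """Hypothetical sketch: return names of known objects whose brand
    and category match the query and whose image-based parameters are
    each within `tolerance` of the query's parameters."""
    matches = []
    for obj in knowledge_base:
        if obj["brand"] != brand or obj["category"] != category:
            continue
        if all(abs(obj["image_params"][k] - v) <= tolerance
               for k, v in image_params.items()
               if k in obj["image_params"]):
            matches.append(obj["name"])
    return matches
```

A real antiques-and-collectables search would also handle fuzzy brand matching and richer visual descriptors; exact string equality here is a deliberate simplification.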
Method, system, and non-transitory computer readable record medium for providing comparison result by comparing common features of products
Provided are a method, a system, and a non-transitory computer-readable record medium for comparing common features of products and providing a comparison result. A product comparison method includes recognizing at least two comparable products from at least one image; displaying at least one common attribute of the at least two comparable products through a user interface; and based on the user interface receiving a user input that selects one of the at least one common attribute, as a selected attribute, providing a result of comparison between the at least two comparable products with regard to the selected attribute.
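The comparison flow above, finding attributes shared by the recognized products and then comparing on one the user selects, can be sketched in two small functions. Product and attribute names below are invented for illustration.

```python
def common_attributes(products):
    """Sketch: attributes present in every recognized product, i.e.
    the candidates offered to the user for comparison."""
    return set.intersection(*(set(p["attributes"]) for p in products))

def compare(products, attribute):
    """Sketch: the comparison result for one user-selected common
    attribute, keyed by product name."""
    return {p["name"]: p["attributes"][attribute] for p in products}
```

Given two recognized products sharing only a `price` attribute, `common_attributes` narrows the UI choices to `price`, and `compare` produces the per-product values to display.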