Patent classifications
G06F16/58
Method, system, and computer program product for product identification using sensory input
A first type of data regarding an unidentified product is collected. A first type of analysis on the first type of data is performed. A second type of data regarding an unidentified product is collected. A second type of analysis on the second type of data is performed. Based upon the first type of analysis on the first type of data and the second type of analysis on the second type of data, product identification for the unidentified product is performed. Based on the product identification, an identity of the unidentified product is output. A user is enabled to perform a business interaction with a product matching the identity of the unidentified product.
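The two-analysis fusion step described above can be sketched as a weighted combination of candidate scores from each sensory channel. This is a minimal illustration only; the function name, score dictionaries, and weights are assumptions, not taken from the patent.

```python
# Hypothetical sketch: fuse two independent sensory analyses to identify
# an unknown product. Weights and candidate names are illustrative.

def identify_product(visual_scores, audio_scores, w_visual=0.6, w_audio=0.4):
    """Combine per-candidate scores from two analyses and return the
    best-matching product identity."""
    candidates = set(visual_scores) | set(audio_scores)
    fused = {
        c: w_visual * visual_scores.get(c, 0.0) + w_audio * audio_scores.get(c, 0.0)
        for c in candidates
    }
    return max(fused, key=fused.get)

# Example: the image analysis favors "blender"; the sound analysis agrees.
identity = identify_product(
    {"blender": 0.8, "mixer": 0.5},
    {"blender": 0.7, "vacuum": 0.3},
)
```

The identified product can then be surfaced to the user for a business interaction (e.g., a purchase link) with items matching that identity.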
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE CAPTURING APPARATUS, AND STORAGE MEDIUM
An image processing apparatus comprises a generation unit configured to generate an image file of captured image data, the generation unit generating the image file with estimation results related to the image data added thereto as metadata, wherein the generation unit generates the metadata so that a first estimation result and a second estimation result are distinguishable from each other, the first estimation result being based on data that is included in the image file, the second estimation result being based on data that is not included in the image file.
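The key idea above is that the metadata keeps the two classes of estimation results distinguishable by provenance. A minimal sketch of one way to structure that, assuming a simple dictionary-based metadata block (the field names are invented for illustration):

```python
# Illustrative only: tag each estimation result with its provenance so results
# derived from data inside the image file are distinguishable from results
# that relied on data not included in the file.

def build_metadata(in_file_results, external_results):
    """Merge estimation results into one metadata block while keeping the
    two provenance classes distinguishable."""
    return {
        "estimations": (
            [{"source": "in_file", **r} for r in in_file_results]
            + [{"source": "external", **r} for r in external_results]
        )
    }

meta = build_metadata(
    [{"label": "dog", "confidence": 0.92}],   # estimated from pixel data in the file
    [{"label": "park", "confidence": 0.75}],  # estimated from an external lookup
)
```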
BROWSING IMAGES VIA MINED HYPERLINKED TEXT SNIPPETS
Images stored in an information repository are prepared for browsing. For each image in the repository, text in the repository is mined to extract snippets of text about the image which are semantically relevant to the image, and for each of these snippets of text, keyterms are detected in the snippet of text which represent either concepts that are related to the image or entities that are related to the image, and the snippet of text and keyterms are associated with the image. Each keyterm that is associated with each image in the repository is hyperlinked to each other image in the repository that has this keyterm associated therewith. A graphical user interface allows a user to browse the images in the repository by using their associated snippets of text and hyperlinked keyterms.
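The cross-linking step can be sketched with an inverted index, assuming keyterms have already been extracted per image: each keyterm on an image hyperlinks to every other image that shares it. The data shapes below are assumptions for illustration.

```python
# Minimal sketch of the keyterm hyperlinking step. Given keyterms already
# associated with each image, build links from each (image, keyterm) pair
# to every *other* image sharing that keyterm.
from collections import defaultdict

def build_keyterm_links(image_keyterms):
    """image_keyterms: {image_id: set of keyterms} ->
    {(image_id, keyterm): sorted list of other image_ids with that keyterm}."""
    index = defaultdict(set)
    for image_id, terms in image_keyterms.items():
        for term in terms:
            index[term].add(image_id)
    return {
        (image_id, term): sorted(index[term] - {image_id})
        for image_id, terms in image_keyterms.items()
        for term in terms
    }

links = build_keyterm_links({
    "img1": {"Eiffel Tower", "Paris"},
    "img2": {"Paris", "Louvre"},
    "img3": {"Louvre"},
})
```

A browsing interface can then render each keyterm in a snippet as a hyperlink whose targets come straight out of this mapping.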
UNMANNED AIRCRAFT STRUCTURE EVALUATION SYSTEM AND METHOD
Computerized systems and methods are disclosed, including a computer system that executes software that may receive a geographic location having one or more coordinates of a structure, receive a validation of the structure location, and generate unmanned aircraft information based on the one or more coordinates of the validated location. The unmanned aircraft information may include an offset from the walls of the structure to direct an unmanned aircraft to fly an autonomous flight path offset from the walls, and camera control information to direct a camera of the unmanned aircraft to capture images of the walls at a predetermined time interval while the unmanned aircraft is flying the flight path. The computer system may receive images of the walls captured by the camera while the unmanned aircraft is flying the autonomous flight path and generate a structure report based at least in part on the images.
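The wall-offset flight path can be sketched geometrically. The toy below assumes an axis-aligned rectangular structure in local planar coordinates; a real system would work in geodetic coordinates and feed waypoints to a flight controller, which is out of scope here.

```python
# Hedged sketch: given the corner coordinates of a rectangular structure,
# compute four waypoints offset outward from the walls by a fixed distance.
# Purely illustrative geometry, not the patented method itself.

def offset_rectangle_path(corners, offset):
    """corners: axis-aligned rectangle given as [(xmin, ymin), (xmax, ymax)].
    Returns the four waypoints of a counterclockwise path offset outward."""
    (xmin, ymin), (xmax, ymax) = corners
    return [
        (xmin - offset, ymin - offset),
        (xmax + offset, ymin - offset),
        (xmax + offset, ymax + offset),
        (xmin - offset, ymax + offset),
    ]

waypoints = offset_rectangle_path([(0, 0), (10, 6)], offset=2)
```

Camera control (capturing wall images at a predetermined interval while flying this path) would be layered on top of the waypoint list.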
SYSTEMS AND METHODS FOR SCREENSHOT LINKING
Systems and methods of the present disclosure are directed to analyzing screenshots. A system can include a computing device including a processor coupled to a memory and a display screen configured to display content. The system can include an application stored on the memory and executable by the processor. The application can include a screenshot receiver configured to access, from storage to which a screenshot of the content displayed on the display screen captured using a screenshot function of the computing device is stored, the screenshot including an image and a predetermined marker. The application can include a marker detector configured to detect the predetermined marker included in the screenshot. The application can include a link identifier configured to identify, using the predetermined marker, a link to a resource mapped to the image included in the screenshot, the resource accessible by the computing device via the link.
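The link-identifier stage reduces to a lookup from a detected marker to its mapped resource. The sketch below assumes the marker detector has already produced a marker identifier; the mapping table, identifiers, and URLs are hypothetical.

```python
# Illustrative sketch of the marker-to-resource lookup. Detecting the
# predetermined marker in pixel data is out of scope; assume the detector
# yields a marker id. All ids and URLs here are invented.

MARKER_LINKS = {
    "marker-001": "https://example.com/product/42",
    "marker-002": "https://example.com/article/7",
}

def identify_link(marker_id):
    """Resolve a detected screenshot marker to the resource it maps to,
    or None if the marker is unknown."""
    return MARKER_LINKS.get(marker_id)

link = identify_link("marker-001")
```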
DISTRIBUTED IMAGE ANALYSIS METHOD AND SYSTEM, AND STORAGE MEDIUM
A distributed image analysis method performed by a distributed image analysis system comprising a plurality of first servers and a second server. The distributed image analysis method includes: obtaining, by each of the plurality of first servers, a result set through image collision analysis, where the result set includes an index image that records a result object, the index image corresponds to an object frequency; separately sending, by each of the plurality of first servers, a result set to the second server; performing, by the second server, feature extraction on the index image in each of the result sets received from the plurality of first servers, to obtain a feature value of the index image; performing, by the second server, image collision analysis on the extracted feature value of the index image in each of the result sets, to obtain a confidence of the index image; and when determining that a confidence between the index images in the result sets received from the plurality of first servers is greater than or equal to a preset value, obtaining, by the second server, a sum of object frequencies corresponding to the index images.
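The second server's merge step can be sketched under the assumption that "confidence" between index images is cosine similarity of their feature vectors; the patent leaves the measure unspecified, so this is one plausible reading, not the patented algorithm.

```python
# Simplified sketch of the second-server merge: extract a confidence between
# index-image features from each first server's result set and, when all
# pairwise confidences meet the preset value, sum the object frequencies.
import math

def cosine(a, b):
    """Cosine similarity, standing in for the unspecified confidence measure."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def merge_result_sets(result_sets, threshold=0.9):
    """result_sets: [(feature_vector, object_frequency), ...], one per first
    server. Returns the summed frequency when every pair of index images
    matches at or above the threshold; otherwise the individual frequencies."""
    feats = [f for f, _ in result_sets]
    freqs = [n for _, n in result_sets]
    all_match = all(
        cosine(feats[i], feats[j]) >= threshold
        for i in range(len(feats))
        for j in range(i + 1, len(feats))
    )
    return sum(freqs) if all_match else freqs

total = merge_result_sets([([1.0, 0.0, 0.1], 5), ([0.9, 0.0, 0.12], 3)])
```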
Application development environment for biological sample assessment processing
A system and method for developing applications (Apps) for automated assessment and analysis of processed biological samples. Such samples are obtained, combined with nutrient media and incubated. The incubated samples are imaged and the image information is classified according to predetermined criteria. The classified image information is then evaluated according to Apps derived from classified historical image information in a data base. The classified historical image information is compared with the classified image information to provide, through the Apps, sample-processing guidance tailored to the classifications assigned to the image information.
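The comparison against classified historical image information amounts to a lookup from a sample's classification to guidance learned from past samples. The sketch below is a toy illustration; the criteria labels and guidance strings are entirely invented.

```python
# Hypothetical sketch: map a sample's classification (per predetermined
# criteria) to processing guidance derived from historical classifications.
# All labels and guidance text are invented for illustration.

HISTORICAL_GUIDANCE = {
    ("high_growth", "circular_colonies"): "subculture and run susceptibility test",
    ("no_growth", "clear_media"): "extend incubation by 24 hours",
}

def assess_sample(classification):
    """classification: tuple of criteria labels assigned to the sample image.
    Returns guidance from the historical database, or a manual-review flag."""
    return HISTORICAL_GUIDANCE.get(classification, "flag for manual review")

guidance = assess_sample(("no_growth", "clear_media"))
```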
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
The information processing system according to the present invention includes: a first selection unit for selecting two or more images from a first data set that includes learning data including an image, a label associated with the image, and auxiliary information; a second selection unit for selecting an image from a second data set including learning data different from the learning data included in the first data set, based on positions in a feature space of the two or more images selected by the first selection unit; and a learning unit for learning a model for estimating a label based on the auxiliary information using the learning data included in the first data set and the learning data corresponding to the image selected by the second selection unit.
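The second selection unit's behavior can be sketched with one plausible feature-space rule: pick the second-data-set image closest to the midpoint of the two images chosen from the first data set. The abstract only says the selection is "based on positions in a feature space," so the midpoint-distance rule below is an assumption.

```python
# Hedged sketch of the second selection unit. Given the feature vectors of
# two images selected from the first data set, choose the second-data-set
# image nearest (Euclidean) to their midpoint. The rule is an assumption.
import math

def select_from_second_set(feat_a, feat_b, second_set_feats):
    """second_set_feats: {image_id: feature_vector}. Returns the image_id
    whose features lie closest to the midpoint of feat_a and feat_b."""
    mid = [(x + y) / 2 for x, y in zip(feat_a, feat_b)]
    return min(
        second_set_feats,
        key=lambda image_id: math.dist(mid, second_set_feats[image_id]),
    )

chosen = select_from_second_set(
    [0.0, 0.0], [2.0, 2.0],
    {"imgA": [1.1, 0.9], "imgB": [5.0, 5.0]},
)
```

The selected image's learning data would then join the first data set's learning data for training the label-estimation model.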