G06V10/424

Structured Knowledge Modeling, Extraction and Localization from Images

Techniques and systems are described for modeling and extracting knowledge from images. A digital medium environment is configured to learn and use a model that computes a descriptive summarization of an input image automatically and without user intervention. Training data is obtained to train a model, using machine learning, to generate a structured image representation that serves as the descriptive summarization of an input image. Images and associated text are processed to extract structured semantic knowledge from the text, which is then associated with the images. The structured semantic knowledge is processed along with the corresponding images to train a model using machine learning, such that the model describes a relationship between text features within the structured semantic knowledge and image features of the images. Once the model is learned, it is usable to process input images to generate a structured image representation of each image.
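
The pipeline above can be sketched in miniature: structured semantic knowledge is represented as <subject, predicate, object> triples extracted from text, each triple is associated with its image's features during training, and the learned association is then used to produce a structured representation for a new image. All names, the toy triple extractor, and the nearest-neighbor "model" below are illustrative assumptions, not the patented method.

```python
# Minimal sketch of the described pipeline (all names and logic hypothetical).
from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    """Structured semantic knowledge as a <subject, predicate, object> triple."""
    subject: str
    predicate: str
    obj: str


def extract_triples(caption: str) -> list[Triple]:
    """Toy extractor: treat 'A <verb> B' captions as <A, verb, B> triples."""
    words = caption.lower().rstrip(".").split()
    if len(words) >= 3:
        return [Triple(words[0], words[1], " ".join(words[2:]))]
    return []


def train(pairs):
    """Associate each training image's feature vector with its text's triples."""
    model = []
    for features, caption in pairs:
        for t in extract_triples(caption):
            model.append((features, t))
    return model


def structured_representation(model, features):
    """Return the triple whose training image features are closest to the input."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ft: dist(ft[0], features))
    return nearest[1]


training = [((1.0, 0.0), "dog chases ball"), ((0.0, 1.0), "cat sleeps couch")]
model = train(training)
print(structured_representation(model, (0.9, 0.1)))  # triple from the dog image
```

A real system would replace the toy extractor with NLP-based relation extraction and the nearest-neighbor lookup with a learned joint text-image embedding.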

Action based activity determination system and method

A processor-implemented system and method for identifying an activity performed by a subject based on sensor data analysis is described herein. In an implementation, the method includes capturing movements of the subject in real time using a sensing device. At least one action associated with the subject is ascertained from a predefined set of actions, a plurality of which can collectively form at least one activity. The ascertaining is based on the captured movements of the subject and at least one predefined action rule. The at least one action rule is based on a context-free grammar (CFG) and is indicative of a sequence of actions required for occurrence of the at least one activity. Further, a current activity performed by the subject is dynamically determined, based on the at least one action and an immediately preceding activity, using a non-deterministic pushdown automaton (NPDA) state machine.
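
The rule-and-automaton idea can be sketched as follows: each action rule maps an activity to the action sequence that produces it, and a pushdown-style machine accumulates observed actions on a stack, popping a whole sequence when it derives an activity. The action names, rules, and the simplified matching below are invented for illustration, not taken from the patent.

```python
# Hedged sketch of CFG action rules plus stack-based matching (all names invented).

ACTION_RULES = {
    # activity: the action sequence (CFG production) that yields it
    "drinking": ("reach", "grasp", "lift", "sip"),
    "eating":   ("reach", "grasp", "lift", "bite"),
}


class ActivityPDA:
    """Push observed actions; pop a whole sequence when it derives an activity."""

    def __init__(self):
        self.stack = []
        self.current_activity = None  # the immediately preceding activity

    def observe(self, action):
        self.stack.append(action)
        for activity, seq in ACTION_RULES.items():
            n = len(seq)
            if tuple(self.stack[-n:]) == seq:
                # Sequence complete: consume the matched symbols and update the
                # current activity (the preceding activity could be consulted
                # here to disambiguate, as in the method above).
                del self.stack[-n:]
                self.current_activity = activity
                return activity
        return self.current_activity


pda = ActivityPDA()
for a in ("reach", "grasp", "lift", "sip"):
    result = pda.observe(a)
print(result)  # drinking
```

A genuine NPDA would track multiple candidate derivations simultaneously (the nondeterminism), which this single-stack sketch collapses into first-match-wins.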

Method and system for the automatic analysis of an image of a biological sample

Method for the automatic analysis of an image of a biological sample with respect to pathological relevance, wherein:
a) local features of the image are aggregated into a global feature of the image using a bag-of-visual-words approach;
b) step a) is repeated at least two times using different methods, resulting in at least two bag-of-words feature datasets;
c) at least two similarity measures are computed using the bag-of-words features obtained from a training image dataset and the bag-of-words features from the image;
d) the training image dataset comprises a set of visual words, classifier parameters including kernel weights, and bag-of-words features from the training images;
e) the computation of the at least two similarity measures is subject to an adaptive computation of kernel normalization parameters and/or kernel width parameters;
f) for each image, one score is computed depending on the classifier parameters, the kernel weights, and the at least two similarity measures, the score being a measure of the certainty of one pathological category compared to the training image dataset;
g) for each pixel of the image, a pixel-wise score is computed using the classifier parameters, the kernel weights, the at least two similarity measures, the bag-of-words features of the image, all the local features used in the computation of the bag-of-words features of the image, and the pixels used in the computation of those local features;
h) the pixel-wise scores are stored as a heatmap dataset linking the pixels of the image to the pixel-wise scores.
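
A heavily condensed sketch of steps a)-h) follows, using a single bag-of-words channel instead of the required two, a fixed rather than adaptive kernel width, and toy visual words and classifier parameters. It shows the shape of the computation only: histogram aggregation, kernel similarity, a global score, and a pixel-wise backprojection into a heatmap.

```python
# Toy sketch of the claimed pipeline (codebook, kernel, and weights invented).
import math

VISUAL_WORDS = [(0.0, 0.0), (1.0, 1.0)]  # codebook from the training dataset


def bag_of_words(local_features):
    """a) aggregate local features into a global histogram over visual words."""
    hist = [0.0] * len(VISUAL_WORDS)
    assignments = []
    for f in local_features:
        idx = min(range(len(VISUAL_WORDS)),
                  key=lambda i: sum((a - b) ** 2
                                    for a, b in zip(f, VISUAL_WORDS[i])))
        hist[idx] += 1.0
        assignments.append(idx)
    total = sum(hist) or 1.0
    return [h / total for h in hist], assignments


def rbf_similarity(h1, h2, width=1.0):
    """c)/e) kernel similarity with a (here fixed, not adaptive) width parameter."""
    d = sum((a - b) ** 2 for a, b in zip(h1, h2))
    return math.exp(-d / width)


def image_score(hist, training_hists, alphas, kernel_weight=1.0):
    """f) score from classifier parameters, kernel weight, and similarities."""
    return kernel_weight * sum(a * rbf_similarity(hist, th)
                               for a, th in zip(alphas, training_hists))


def heatmap(local_features, pixels, training_hists, alphas):
    """g)/h) backproject per-visual-word contributions onto the source pixels."""
    _, assignments = bag_of_words(local_features)
    scores = {}
    for pixel, idx in zip(pixels, assignments):
        one_hot = [1.0 if i == idx else 0.0 for i in range(len(VISUAL_WORDS))]
        scores[pixel] = image_score(one_hot, training_hists, alphas)
    return scores


hm = heatmap([(0.1, 0.1)], [(3, 4)], [[1.0, 0.0]], [1.0])
print(hm)  # {(3, 4): 1.0}
```

The claimed method's pixel-wise score is a more careful decomposition of the kernel classifier's output over local features; the one-hot backprojection here is only a stand-in for that idea.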

Machine-learning behavioral analysis to detect device theft and unauthorized device usage

The disclosure relates to machine-learning behavioral analysis to detect device theft and unauthorized device usage. In particular, during a training phase, an electronic device may generate a local user profile that represents observed user-specific behaviors according to a centroid sequence, wherein the local user profile may be classified into a baseline profile model that represents aggregate behaviors associated with various users over time. Accordingly, during an authentication phase, the electronic device may generate a current user profile model comprising a centroid sequence that re-expresses user-specific behaviors observed over an authentication interval, wherein the current user profile model may be compared to plural baseline profile models to identify the baseline profile model closest to the current user profile model. As such, an operator change may be detected when the baseline profile model closest to the current user profile model differs from the baseline profile model in which the electronic device has membership.
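
The authentication-phase comparison can be sketched as a nearest-neighbor search over centroid sequences: find the baseline profile model closest to the current profile, then flag an operator change if it differs from the model the device was enrolled into. The distance metric, profile names, and centroid values below are illustrative assumptions.

```python
# Sketch of the authentication-phase comparison (metric and profiles invented).


def sequence_distance(seq_a, seq_b):
    """Distance between two centroid sequences: summed per-centroid distances."""
    return sum(sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
               for a, b in zip(seq_a, seq_b))


def closest_baseline(current_profile, baseline_models):
    """Identify the baseline profile model closest to the current user profile."""
    return min(baseline_models,
               key=lambda name: sequence_distance(current_profile,
                                                  baseline_models[name]))


BASELINES = {
    "commuter": [(0.0, 0.0), (1.0, 1.0)],
    "gamer":    [(5.0, 5.0), (6.0, 6.0)],
}
DEVICE_MEMBERSHIP = "commuter"  # the model the device was enrolled into

current = [(4.8, 5.1), (6.2, 5.9)]  # centroid sequence from this interval
nearest = closest_baseline(current, BASELINES)
operator_changed = nearest != DEVICE_MEMBERSHIP
print(nearest, operator_changed)  # gamer True
```

In practice the centroid sequences would come from clustering sensor and usage observations, and the comparison would likely use a sequence-aware distance (e.g. one tolerant of alignment shifts) rather than this positional sum.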

Exploration and production document content and metadata scanner

A method involves extracting terms from a file comprising an unstructured oilfield document, calculating the term frequency-inverse document frequency (TF-IDF) of the terms to generate an input vector, executing a document content classification model on the input vector to generate a document content classification of the unstructured oilfield document, and extracting table information from a table in the unstructured oilfield document. The method further involves storing, with the file in storage, the document content classification and the table information.
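
The TF-IDF-to-classification portion can be sketched as below. The vocabulary, document frequencies, class labels, and linear classifier weights are all invented for illustration; the patented method does not specify the classifier form.

```python
# Minimal sketch of the scanning pipeline (vocabulary, model, labels invented).
import math

VOCAB = ["well", "pressure", "seismic", "invoice"]


def tfidf_vector(terms, doc_freq, n_docs):
    """Turn the extracted terms into a TF-IDF input vector over VOCAB."""
    vec = []
    for word in VOCAB:
        tf = terms.count(word) / max(len(terms), 1)
        idf = math.log(n_docs / (1 + doc_freq.get(word, 0)))
        vec.append(tf * idf)
    return vec


def classify(vec, weights):
    """Toy linear document-content classifier over the TF-IDF input vector."""
    scores = {label: sum(w * x for w, x in zip(ws, vec))
              for label, ws in weights.items()}
    return max(scores, key=scores.get)


WEIGHTS = {
    "drilling_report": [1.0, 1.0, 0.0, 0.0],
    "billing":         [0.0, 0.0, 0.0, 1.0],
}
DOC_FREQ = {"well": 3, "pressure": 2, "seismic": 5, "invoice": 1}

terms = ["well", "pressure", "well", "casing"]
vec = tfidf_vector(terms, DOC_FREQ, n_docs=10)
print(classify(vec, WEIGHTS))  # drilling_report
```

The table-extraction and storage steps are omitted; they would sit downstream of this classification, with the label and table data persisted alongside the file.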