Patent classifications
G06V10/76
Techniques for deriving and/or leveraging application-centric model metric
Techniques for quantifying accuracy of a prediction model that has been trained on a data set parameterized by multiple features are provided. The model performs in accordance with a theoretical performance manifold over an intractable input space in connection with the features. A determination is made as to which of the features are strongly correlated with performance of the model. Based on the features determined to be strongly correlated with performance of the model, parameterized sub-models are created such that, in aggregate, they approximate the intractable input space. Prototype exemplars are generated for each of the created sub-models, with the prototype exemplars for each created sub-model being objects to which the model can be applied to result in a match with the respective sub-model. The accuracy of the model is quantified using the generated prototype exemplars. A recommendation engine is provided for when there are particular areas of interest.
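The core idea of this abstract can be sketched in a few lines: partition the intractable input space into sub-models, hold prototype exemplars for each, and score the model per sub-model. This is only an illustrative sketch; the threshold "model", the region names, and the exemplar values are all invented here, not taken from the patent.

```python
import numpy as np

def accuracy_by_submodel(model, exemplars, labels):
    """Quantify the model's accuracy separately on each parameterized
    sub-model, using that sub-model's prototype exemplars."""
    return {
        name: float(np.mean([model(x) == y
                             for x, y in zip(exemplars[name], labels[name])]))
        for name in exemplars
    }

# Toy threshold "model" and two hypothetical sub-model regions of the
# input space, each with its own prototype exemplars.
model = lambda x: x > 0.5
exemplars = {"low_region": [0.1, 0.2], "boundary_region": [0.4, 0.6]}
labels = {"low_region": [False, False], "boundary_region": [True, True]}
print(accuracy_by_submodel(model, exemplars, labels))
# → {'low_region': 1.0, 'boundary_region': 0.5}
```

Aggregating these per-region accuracies is what makes the metric "application-centric": a recommendation engine can weight the regions a particular application cares about.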
Methods and systems for real-time data reduction
A computing system for decimating video data includes: a processor; a persistent storage system coupled to the processor; and memory storing instructions that, when executed by the processor, cause the processor to decimate a batch of frames of video data by: receiving the batch of frames of video data; mapping, by a feature extractor, the frames of the batch to corresponding feature vectors in a feature space, each of the feature vectors having a lower dimension than a corresponding one of the frames of the batch; selecting a set of dissimilar frames from the frames of the batch based on dissimilarities between corresponding ones of the feature vectors; and storing the selected set of dissimilar frames in the persistent storage system, the size of the selected set of dissimilar frames being smaller than the number of frames in the batch of frames of video data.
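One simple way to realize "select dissimilar frames based on feature-vector dissimilarity" is a greedy pass that keeps a frame only if its feature vector is far enough from every frame already kept. The mean-colour feature extractor and the distance threshold below are stand-ins chosen for the sketch; the patent does not specify either.

```python
import numpy as np

def decimate_frames(frames, extract_features, min_distance):
    """Greedily keep only frames whose feature vectors are sufficiently
    dissimilar (Euclidean distance) from every frame already kept."""
    kept_indices = []
    kept_features = []
    for i, frame in enumerate(frames):
        v = extract_features(frame)
        if all(np.linalg.norm(v - u) >= min_distance for u in kept_features):
            kept_indices.append(i)
            kept_features.append(v)
    return kept_indices

# Toy example: mean colour as a (very crude) stand-in for a learned
# feature extractor; each pair of near-duplicate frames collapses to one.
frames = [np.full((4, 4, 3), c, dtype=float) for c in (0.0, 0.01, 0.5, 0.51, 1.0)]
print(decimate_frames(frames, lambda f: f.mean(axis=(0, 1)), min_distance=0.1))
# → [0, 2, 4]
```

The kept set is strictly smaller than the batch whenever any two frames are near-duplicates, which is the storage reduction the claim describes.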
Image processing method and apparatus
A method of image processing converts images of different brightness to a common brightness range, the images representing at least partly the same scene. One of images including a local block with movement is selected for forming a composed image from the images. One or more corresponding blocks corresponding to the local block are determined in the images. Each of said one or more corresponding blocks is weighted with at least one of the following: similarity with respect to the local block, a distance from a location of the local block, saturation of the one or more corresponding blocks, and noise of the one or more corresponding blocks. The local block and at least one of the one or more corresponding blocks are combined, or the local block is replaced by at least one of the one or more corresponding blocks based on the weighting for forming the composed image from the images.
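The weighting-and-combining step can be sketched as a normalized weighted blend of the local block with its corresponding blocks. For brevity this sketch uses only two of the four weight terms named in the abstract (similarity and distance); the Gaussian weight forms and the `sigma` parameters are assumptions, not the patent's method.

```python
import numpy as np

def combine_blocks(local_block, candidates, sigma_sim=0.1, sigma_dist=8.0):
    """Blend a local block containing movement with its corresponding
    blocks from the other images, each candidate weighted by similarity
    to the local block and by distance from the local block's location.
    (The abstract's saturation and noise weights are omitted here.)"""
    blocks = [local_block]
    weights = [1.0]  # the local block itself keeps full weight
    for block, distance in candidates:
        similarity = np.exp(-np.mean((block - local_block) ** 2) / sigma_sim ** 2)
        proximity = np.exp(-(distance ** 2) / sigma_dist ** 2)
        blocks.append(block)
        weights.append(similarity * proximity)
    weights = np.array(weights) / np.sum(weights)
    # Weighted sum over the stacked blocks.
    return np.tensordot(weights, np.stack(blocks), axes=1)

# A candidate identical to the local block and at zero distance gets
# full weight, so the blend leaves the block unchanged.
local = np.ones((2, 2))
print(combine_blocks(local, [(np.ones((2, 2)), 0.0)]))  # → all ones
```

Replacing (rather than combining) the local block corresponds to the degenerate case where one candidate's weight dominates all others.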
Techniques for deriving and/or leveraging application-centric model metric
Techniques for recommending a prediction model from among a number of different prediction models are provided. Each of these prediction models has been trained based on a respective training data set, and each performs in accordance with a respective theoretical performance manifold. An indication of a region definable in relation to the theoretical performance manifolds of the different prediction models is received as input. For each of the different prediction models, the indication of the region is linked to features parameterizing the respective performance manifold; and one or more portions of the respective performance manifold is/are identified based on the features determined by the linking, the portion(s) having a volume and a shape that collectively denote an expected performance of the respective model for the input. The expected performance of the prediction models for the input is compared. Based on the comparison, one or more of the models is/are suggested.
Handheld monitoring and early warning device for fusarium head blight of in-field wheat and early warning method thereof
A handheld monitoring and early warning device for Fusarium head blight of in-field wheat includes an acquisition card, a processor, a camera, a touchscreen, a power supply, and a 4G network card. The acquisition card is configured to acquire data. The processor is configured to analyze the acquired data and assess the growth status of the wheat using a deep learning algorithm. The camera is configured to acquire root, stem, and ear information of in-field wheat. The touchscreen serves as the medium for human-computer interaction. The power supply is configured to supply power to the monitoring and early warning device. The 4G network card is configured to perform data communication and, at the same time, communicate with an external cloud server. Further disclosed is an early warning method for the handheld monitoring and early warning device for Fusarium head blight of in-field wheat.
Method and System for Identifying Objects
The present disclosure provides methods and/or systems for identifying an object. An example method includes: generating a plurality of synthesized images according to a three-dimensional digital model, the plurality of synthesized images having different view angles; respectively extracting eigenvectors of the plurality of synthesized images; generating a first fused vector by fusing the eigenvectors of the plurality of synthesized images; inputting the first fused vector into a classifier to train the classifier; acquiring a plurality of pictures of the object, the plurality of pictures respectively having same view angles as at least a portion of the plurality of synthesized images; respectively extracting eigenvectors of the plurality of pictures; generating a second fused vector by fusing the eigenvectors of the plurality of pictures; and inputting the second fused vector into the trained classifier to obtain a classification result of the object.
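The train-on-synthesized, classify-on-real pipeline above can be sketched with a concatenation fusion and a minimal classifier. Both choices are illustrative assumptions: the patent leaves the fusion operator and the classifier type open, and the random feature vectors below merely stand in for extracted eigenvectors.

```python
import numpy as np

class NearestCentroid:
    """Minimal classifier standing in for the patent's unspecified classifier."""
    def fit(self, fused_vectors, labels):
        self.labels = sorted(set(labels))
        self.centroids = {
            c: np.mean([v for v, l in zip(fused_vectors, labels) if l == c], axis=0)
            for c in self.labels
        }
        return self

    def predict(self, fused_vector):
        return min(self.labels,
                   key=lambda c: np.linalg.norm(fused_vector - self.centroids[c]))

def fuse(view_vectors):
    # Fuse per-view eigenvectors by concatenation (one simple choice;
    # averaging across views would be another plausible fusion).
    return np.concatenate(view_vectors)

# Train on fused vectors of synthesized views, then classify real
# pictures captured from the same view angles.
rng = np.random.default_rng(0)
synth_a = [rng.normal(0.0, 0.1, 3) for _ in range(2)]  # two views, class "a"
synth_b = [rng.normal(1.0, 0.1, 3) for _ in range(2)]  # two views, class "b"
clf = NearestCentroid().fit([fuse(synth_a), fuse(synth_b)], ["a", "b"])
photos = [v + rng.normal(0.0, 0.05, 3) for v in synth_a]  # noisy real pictures
print(clf.predict(fuse(photos)))  # → a
```

Because the real pictures share view angles with the synthesized images, their fused vector lands near the corresponding training vector, which is the key assumption the method relies on.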
Base calling using three-dimensional (3D) convolution
We propose a neural network-implemented method for base calling analytes. The method includes accessing a sequence of per-cycle image patches for a series of sequencing cycles, where pixels in the image patches contain intensity data for associated analytes, and applying three-dimensional (3D) convolutions on the image patches on a sliding convolution window basis such that, in a convolution window, a 3D convolution filter convolves over a plurality of the image patches and produces at least one output feature. The method further includes, beginning with the output features produced by the 3D convolutions as starting input, applying further convolutions to produce final output features, and processing the final output features through an output layer to produce base calls for one or more of the associated analytes at each of the sequencing cycles.
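The sliding 3D convolution over a stack of per-cycle patches can be written out directly. This is a plain valid (no-padding) convolution sketch in NumPy; the patch and kernel shapes are illustrative, and a real base caller would use a deep-learning framework with learned filters.

```python
import numpy as np

def conv3d(patches, kernel):
    """Slide a 3-D kernel over a stack of per-cycle image patches.
    patches: (cycles, H, W); kernel: (kc, kh, kw). Valid convolution:
    each output feature mixes intensity data across neighbouring
    sequencing cycles as well as across pixels."""
    C, H, W = patches.shape
    kc, kh, kw = kernel.shape
    out = np.empty((C - kc + 1, H - kh + 1, W - kw + 1))
    for c in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[c, i, j] = np.sum(patches[c:c + kc, i:i + kh, j:j + kw] * kernel)
    return out

# Four cycles of 3x3 patches, a 2x2x2 filter: each window of two
# adjacent cycles yields one output feature map.
out = conv3d(np.ones((4, 3, 3)), np.ones((2, 2, 2)))
print(out.shape)  # → (3, 2, 2)
```

The cycle dimension of the output (here 3) is what lets later layers and the output layer emit a base call per sequencing cycle.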
Method for detecting image of esophageal cancer using hyperspectral imaging
This application provides a method for detecting images of a test object using hyperspectral imaging. First, hyperspectral imaging information is obtained from a reference image, so that a corresponding hyperspectral image can be derived from an input image and corresponding feature values obtained; principal component analysis is then applied to reduce the feature values. Next, feature images are obtained using convolution kernels, and the image of the object under detection is located in the feature images by a default box and a boundary box. By comparison with esophageal cancer sample images, the image of the object under detection is classified as an esophageal cancer image or a non-esophageal cancer image. An input image from the image capturing device is thereby analyzed by the convolutional neural network to judge whether it is an esophageal cancer image, helping the doctor interpret the image of the object under detection.
Three-dimensional object reconstruction method and apparatus
A three-dimensional object reconstruction method, applied to a terminal device or a server, is provided. The method includes obtaining a plurality of video frames of an object; determining three-dimensional location information of key points of the object in the plurality of video frames and physical meaning information of the key points, the physical meaning information indicating respective positions of the object; determining a correspondence between the key points having the same physical meaning information in the plurality of video frames; and generating a three-dimensional object according to the correspondence and the three-dimensional location information of the key points.
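The correspondence step above groups key points across frames by their shared physical meaning, and the reconstruction step fuses each group's 3D locations. In this sketch the labels ("nose", "ear") are hypothetical, and averaging per-frame locations is a deliberately naive stand-in for the patent's unspecified reconstruction.

```python
import numpy as np

def match_keypoints(frames_keypoints):
    """Build the correspondence between key points that share the same
    physical meaning information across all video frames; labels not
    seen in every frame are dropped."""
    common = set.intersection(*(set(f) for f in frames_keypoints))
    return {label: [f[label] for f in frames_keypoints] for label in sorted(common)}

def reconstruct(correspondence):
    """Fuse each key point's per-frame 3-D locations into a single point
    of the reconstructed object (naive averaging for illustration)."""
    return {label: np.mean(points, axis=0) for label, points in correspondence.items()}

# Two frames observe the same "nose" key point with slight disagreement;
# the "ear" key point, seen in only one frame, is dropped.
frame1 = {"nose": (0.0, 0.0, 0.0), "ear": (1.0, 0.0, 0.0)}
frame2 = {"nose": (0.0, 0.0, 0.2)}
points = reconstruct(match_keypoints([frame1, frame2]))
print(points["nose"])  # → [0.  0.  0.1]
```

Keying the correspondence on physical meaning rather than pixel position is what lets the method link key points across frames with different viewpoints.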
Network device classification apparatus and process
A network device classification process, including: monitoring network traffic of networked devices in a communications network to generate device behaviour data representing network traffic behaviours of the networked devices at different time granularities; processing the device behaviour data to classify a plurality of the networked devices as IoT devices, and others of the networked devices as non-IoT devices; accessing IoT device type data representing predetermined network traffic characteristics of respective known IoT device types; processing the device behaviour data of the IoT devices and the IoT device type data to classify each of the IoT devices as being a corresponding one of the plurality of known IoT device types; and for each of the IoT devices classified as a corresponding known IoT device type, classifying the IoT device as being in a corresponding operating state based on network traffic behaviours of the IoT device at different time granularities.
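The "behaviour data at different time granularities" can be sketched as simple per-bin packet-count statistics computed at each granularity, with device-type classification as a nearest-profile match. The granularities, the mean/variance features, and the device names below are all assumptions made for this sketch, not details from the claim.

```python
import numpy as np

def behaviour_features(packet_times, granularities=(1.0, 60.0)):
    """Summarise a device's traffic at several time granularities:
    mean and variance of the packet count per time bin at each
    granularity (seconds)."""
    t = np.asarray(packet_times)
    features = []
    for g in granularities:
        counts = np.bincount((t // g).astype(int))
        features += [counts.mean(), counts.var()]
    return np.array(features)

def classify_device(features, known_profiles):
    """Assign the device to the nearest known IoT device-type profile."""
    return min(known_profiles,
               key=lambda name: np.linalg.norm(features - known_profiles[name]))

# A steadily streaming device versus one that reports every ten seconds:
# their multi-granularity signatures are easy to tell apart.
camera = behaviour_features(np.arange(0.0, 120.0, 0.5))
sensor = behaviour_features(np.arange(0.0, 120.0, 10.0))
print(classify_device(camera, {"camera": camera, "sensor": sensor}))  # → camera
```

Using more than one granularity is the point of the claim: bursty and steady devices can look alike at one time scale while separating cleanly at another.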