G06V10/76

SYSTEMS AND METHODS FOR ANALYSIS OF IMAGES OF APPAREL IN A CLOTHING SUBSCRIPTION PLATFORM
20230122261 · 2023-04-20 ·

Disclosed are methods, systems, and non-transitory computer-readable media for color and pattern analysis of images including wearable items. For example, a method may include receiving an image depicting a wearable item; identifying the wearable item within the image by identifying a face of an individual wearing the wearable item or by segmenting a foreground silhouette of the wearable item from background portions of the image; determining a portion of the wearable item identified within the image as being a patch portion representative of the wearable item depicted within the image; deriving one or more patterns of the wearable item based on image analysis of the determined patch portion; deriving one or more colors of the wearable item based on image analysis of the determined patch portion; and transmitting information regarding the derived one or more colors and the derived one or more patterns.
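The abstract does not specify how colors are derived from the patch portion; the sketch below shows one conventional approach (not the patent's), estimating dominant colors by histogramming pixels in a coarsely quantized RGB cube. The bin count and patch contents are illustrative assumptions.

```python
import numpy as np

def dominant_colors(patch, n_colors=3, bins=8):
    """Approximate the dominant colors of an image patch by
    histogramming pixels in a coarsely quantized RGB cube."""
    pixels = patch.reshape(-1, 3)
    # Quantize each channel into `bins` levels (0..bins-1).
    q = (pixels // (256 // bins)).astype(int)
    # Collapse the three channel indices into a single bin id.
    ids = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    counts = np.bincount(ids, minlength=bins ** 3)
    top = np.argsort(counts)[::-1][:n_colors]
    # Map bin ids back to representative RGB values (bin centers).
    step = 256 // bins
    return [((i // (bins * bins)) * step + step // 2,
             (i // bins % bins) * step + step // 2,
             (i % bins) * step + step // 2) for i in top]

# A toy patch: mostly red, some blue.
patch = np.zeros((10, 10, 3), dtype=np.uint8)
patch[:, :7] = (200, 0, 0)
patch[:, 7:] = (0, 0, 200)
print(dominant_colors(patch, n_colors=2))  # [(208, 16, 16), (16, 16, 208)]
```

A production system would likely cluster in a perceptual color space (e.g. Lab) rather than raw RGB, but the quantize-and-count idea is the same.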

GUIDED DOMAIN RANDOMIZATION VIA DIFFERENTIABLE DATASET RENDERING

In accordance with one embodiment of the present disclosure, a method includes receiving an input image having an object and a background, intrinsically decomposing the object and the background into input image data having a set of features, augmenting the input image data with a 2.5D differentiable renderer for each feature of the set of features to create a set of augmented images, and compiling the input image and the set of augmented images into a training data set for training a downstream task network.
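The abstract leaves the decomposition and renderer unspecified. The loop below is a hedged sketch of the compile step only, with toy stand-ins (a multiplicative albedo/shading split and a simple scaling perturbation) in place of the actual intrinsic decomposition and the 2.5D differentiable renderer; every function named here is an assumption for illustration.

```python
import numpy as np

def augment_dataset(image, decompose, render, perturb, features):
    """Sketch of the augmentation loop: decompose the input, perturb
    one intrinsic feature at a time, re-render, and compile the
    original plus augmented images into a training set."""
    parts = decompose(image)
    dataset = [image]
    for name in features:
        p = dict(parts)          # copy so each augmentation is isolated
        p[name] = perturb(p[name])
        dataset.append(render(p))
    return dataset

# Toy stand-ins: image = albedo * shading.
decompose = lambda img: {"albedo": img * 0.5,
                         "shading": np.full_like(img, 2.0)}
render = lambda p: p["albedo"] * p["shading"]
perturb = lambda x: x * 1.1
ds = augment_dataset(np.ones((2, 2)), decompose, render, perturb,
                     ["albedo", "shading"])
print(len(ds))  # 3: the input plus one augmentation per feature
```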

SYSTEM AND METHOD OF SPACE OBJECT TRACKING AND SURVEILLANCE NETWORK CONTROL

Various embodiments of the disclosed subject matter provide systems, methods, architectures, mechanisms, apparatus, computer-implemented methods, and/or frameworks configured for tracking Earth-orbiting objects and adapting Space Surveillance Network (SSN) tracking operations to improve tracking accuracy while reducing the computational complexity and resource consumption associated with such tracking.

HAND-RAISING DETECTION DEVICE, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND HAND-RAISING DETECTION METHOD

A hand-raising detection device includes a converter and a detection unit. The converter converts a predetermined space including a person into an overhead view image by using a result of three-dimensional measurement performed on the predetermined space. The detection unit detects a hand-raising action by using a silhouette image of the person in the overhead view image resulting from the conversion performed by the converter.
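The abstract does not describe the projection or the detection rule. As a minimal sketch of the idea, the code below projects 3D measurement points onto a top-down grid (keeping the maximum height per cell) and flags a raise when some cell is clearly taller than the bulk of the silhouette; the grid size, extent, and height margin are illustrative assumptions, not values from the patent.

```python
import numpy as np

def overhead_height_map(points, grid=(20, 20), extent=2.0):
    """Project 3D points (x, y, z) onto a top-down grid, keeping
    the maximum height observed in each cell."""
    hmap = np.zeros(grid)
    for x, y, z in points:
        i = int((x + extent / 2) / extent * grid[0])
        j = int((y + extent / 2) / extent * grid[1])
        if 0 <= i < grid[0] and 0 <= j < grid[1]:
            hmap[i, j] = max(hmap[i, j], z)
    return hmap

def hand_raised(hmap, head_margin=0.15):
    """Flag a raise if some cell is clearly taller than the typical
    occupied cell (a hand held above the head)."""
    occupied = hmap[hmap > 0]
    if occupied.size == 0:
        return False
    return hmap.max() > np.median(occupied) + head_margin

# A person (cells at ~1.7 m) with one hand at 2.1 m.
body = [(-0.15 + 0.05 * k, 0.0, 1.7) for k in range(7)]
print(hand_raised(overhead_height_map(body + [(0.4, 0.0, 2.1)])))  # True
```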

HANDHELD MONITORING AND EARLY WARNING DEVICE FOR FUSARIUM HEAD BLIGHT OF IN-FIELD WHEAT AND EARLY WARNING METHOD THEREOF

A handheld monitoring and early warning device for Fusarium head blight of in-field wheat includes an acquisition card, a processor, a camera, a touchscreen, a power supply, and a 4G network card. The acquisition card is configured to acquire data. The processor is configured to analyze the acquired data and assess the growth status of the wheat based on a deep learning algorithm. The camera is configured to acquire root, stem, and ear information of in-field wheat. The touchscreen serves as the medium for human-computer interaction. The power supply is configured to supply power to the monitoring and early warning device. The 4G network card is configured to perform data communication and, at the same time, communicate with an external cloud server. Further disclosed is an early warning method of a handheld monitoring and early warning device for Fusarium head blight of in-field wheat.

Systems and Methods for Data Representation in an Optical Measurement System

An illustrative method includes accessing, by a computing device, a model simulating light scattered by a simulated target, the model comprising a plurality of parameters. The method further includes generating, by the computing device, a set of possible histogram data using the model with a plurality of values for the parameters. The method further includes determining, by the computing device, a set of components that represent the set of possible histogram data, the set of components having a reduced dimensionality from the set of possible histogram data.
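The abstract's "set of components having a reduced dimensionality" is naturally realized by principal component analysis, though the patent does not name a specific technique. The sketch below, a hedged illustration, simulates histogram data that lies on a low-dimensional subspace and recovers a compact component basis via SVD; the model and dimensions are assumptions.

```python
import numpy as np

def reduced_components(histograms, k=3):
    """Find k principal components that compactly represent a set of
    simulated histograms (one row per parameter setting)."""
    X = histograms - histograms.mean(axis=0)
    # SVD of the centered data; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:k]

rng = np.random.default_rng(0)
# Simulated histograms that actually live on a 2-D subspace.
basis = rng.normal(size=(2, 50))
coeffs = rng.normal(size=(200, 2))
hists = coeffs @ basis
comps = reduced_components(hists, k=2)
# Reconstruction from just 2 components recovers the data exactly.
recon = (hists - hists.mean(0)) @ comps.T @ comps + hists.mean(0)
print(np.allclose(recon, hists, atol=1e-8))  # True
```

Real histogram data would not be exactly low-rank, so k trades reconstruction error against compactness.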

METHODS AND SYSTEMS FOR REAL-TIME DATA REDUCTION
20220207876 · 2022-06-30 ·

A computing system for decimating video data includes: a processor; a persistent storage system coupled to the processor; and memory storing instructions that, when executed by the processor, cause the processor to decimate a batch of frames of video data by: receiving the batch of frames of video data; mapping, by a feature extractor, the frames of the batch to corresponding feature vectors in a feature space, each of the feature vectors having a lower dimension than a corresponding one of the frames of the batch; selecting a set of dissimilar frames from the frames of the batch based on dissimilarities between corresponding ones of the feature vectors; and storing the selected set of dissimilar frames in the persistent storage system, the size of the selected set of dissimilar frames being smaller than the number of frames in the batch of frames of video data.
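The abstract does not say how dissimilar frames are selected; one simple realization (an assumption, not the claimed method) is a greedy pass that keeps a frame only when its feature vector is far from every feature already kept. The toy feature extractor below (mean pixel intensity) stands in for a learned one.

```python
import numpy as np

def decimate(frames, extract, threshold=0.5):
    """Greedy decimation: keep a frame only if its feature vector is
    sufficiently far from every feature already kept."""
    kept, feats = [], []
    for f in frames:
        v = extract(f)
        if all(np.linalg.norm(v - u) > threshold for u in feats):
            kept.append(f)
            feats.append(v)
    return kept

# Toy "feature extractor": mean pixel intensity as a 1-D feature.
extract = lambda frame: np.array([frame.mean()])
frames = [np.full((4, 4), v, dtype=float) for v in (0.0, 0.1, 0.9, 0.95)]
kept = decimate(frames, extract, threshold=0.5)
print(len(kept))  # 2: one near-black and one near-white frame survive
```

The stored set is smaller than the batch whenever consecutive frames are redundant, which is the common case for video.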

METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM FOR VEHICLE LOCALIZATION
20220164595 · 2022-05-26 ·

The present disclosure provides a method, an apparatus, an electronic device, and a storage medium for vehicle localization, which relate to the technical fields of autonomous driving, electronic maps, deep learning, image processing, and the like. In the method, a computing device obtains an image descriptor map corresponding to a captured image of an external environment of a vehicle and a predicted pose of the vehicle at the time the image was captured; obtains a set of reference descriptors and a set of spatial coordinates corresponding to a set of keypoints of a reference image of the external environment; determines a plurality of sets of image descriptors corresponding to the set of spatial coordinates when the vehicle is in a plurality of candidate poses, respectively; determines a plurality of similarities between the plurality of sets of image descriptors and the set of reference descriptors; and updates the predicted pose based on the plurality of candidate poses and the plurality of similarities. Embodiments of the present disclosure can improve the accuracy and robustness of the vehicle visual localization algorithm.
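The candidate-pose scoring step can be illustrated with a deliberately simplified model: a 2-D translation in place of a full 6-DoF pose, and descriptors sampled directly from a grid. This is a sketch of the scoring idea, not the patented pipeline.

```python
import numpy as np

def best_pose(desc_map, ref_descs, keypoints_xy, candidates):
    """Score each candidate pose by projecting keypoints into the
    descriptor map and comparing sampled descriptors with the
    reference descriptors; return the highest-similarity pose."""
    best, best_sim = None, -np.inf
    for pose in candidates:              # pose = (dx, dy) offset
        sim = 0.0
        for (x, y), ref in zip(keypoints_xy, ref_descs):
            u, v = int(x + pose[0]), int(y + pose[1])
            if 0 <= u < desc_map.shape[0] and 0 <= v < desc_map.shape[1]:
                d = desc_map[u, v]
                # Cosine similarity between sampled and reference descriptor.
                sim += d @ ref / (np.linalg.norm(d) * np.linalg.norm(ref) + 1e-9)
        if sim > best_sim:
            best, best_sim = pose, sim
    return best

rng = np.random.default_rng(1)
desc_map = rng.normal(size=(10, 10, 4))     # toy descriptor map
kps = [(1, 1), (3, 5), (6, 2)]
refs = [desc_map[x + 2, y + 1] for x, y in kps]  # true offset (2, 1)
print(best_pose(desc_map, refs, kps, [(0, 0), (1, 2), (2, 1), (3, 3)]))
```

A real system would interpolate descriptors at sub-pixel projections and refine the pose rather than pick from a discrete candidate list.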

Shape-based techniques for exploring design spaces

In various embodiments, a training application generates a trained encoder that automatically generates shape embeddings having a first size and representing three-dimensional (3D) geometry shapes. First, the training application generates a different view activation for each of multiple views associated with a first 3D geometry based on a first convolutional neural network (CNN) block. The training application then aggregates the view activations to generate a tiled activation. Subsequently, the training application generates a first shape embedding having the first size based on the tiled activation and a second CNN block. The training application then generates multiple reconstructed views based on the first shape embedding. The training application performs training operation(s) on at least one of the first CNN block and the second CNN block based on the views and the reconstructed views to generate the trained encoder.
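The view-tiling step can be sketched independently of any deep learning framework: run each view through a shared first block, lay the per-view activations out on a grid, and map the tiled result to a fixed-size embedding. The two lambda "blocks" below are toy stand-ins assumed for illustration, not the patent's CNN blocks.

```python
import numpy as np

def encode_shape(views, view_block, tile_grid, embed_block):
    """Sketch of the multi-view encoder: run each rendered view
    through a shared first block, tile the activations on a grid,
    then map the tiled activation to a fixed-size embedding."""
    acts = [view_block(v) for v in views]
    rows = [np.concatenate(acts[i * tile_grid[1]:(i + 1) * tile_grid[1]], axis=1)
            for i in range(tile_grid[0])]
    tiled = np.concatenate(rows, axis=0)
    return embed_block(tiled)

# Toy stand-ins for the two CNN blocks (assumed, not from the patent).
view_block = lambda v: v / (np.abs(v).max() + 1e-9)   # normalize a view
embed_block = lambda t: t.reshape(-1)[:8]             # fixed-size output
views = [np.random.default_rng(i).normal(size=(4, 4)) for i in range(4)]
emb = encode_shape(views, view_block, tile_grid=(2, 2), embed_block=embed_block)
print(emb.shape)  # (8,)
```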

SCENE CHANGE DETECTION WITH NOVEL VIEW SYNTHESIS
20230267691 · 2023-08-24 ·

A method for detecting changes in a scene includes accessing a first set of images and corresponding pose data in a first coordinate system associated with a first user session of an augmented reality (AR) device, and accessing a second set of images and corresponding pose data in a second coordinate system associated with a second user session. After aligning the first coordinate system and the second coordinate system, the method identifies, from the first set, the images whose pose data is spatially closest to the pose data of a second image from the second set. A trained neural network generates a synthesized image from the identified first-set images. Features of the second image are subtracted from features of the synthesized image, and areas of change are identified based on the subtracted features.
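Two steps of the pipeline, nearest-pose selection and feature subtraction, can be sketched without the neural network itself. The helpers below are illustrative assumptions: poses are reduced to 3-D positions in an already-aligned frame, and the change map is a simple threshold on the residual magnitude.

```python
import numpy as np

def closest_by_pose(poses_a, pose_b):
    """Index of the session-A pose spatially closest to pose_b
    (poses given as (x, y, z) positions in an aligned frame)."""
    dists = [np.linalg.norm(np.array(p) - np.array(pose_b)) for p in poses_a]
    return int(np.argmin(dists))

def change_mask(feat_synth, feat_b, threshold=0.5):
    """Per-pixel change map: subtract the feature maps and threshold
    the magnitude of the residual."""
    return np.linalg.norm(feat_synth - feat_b, axis=-1) > threshold

# Nearest session-A viewpoint to the session-B query pose.
idx = closest_by_pose([(0, 0, 0), (5, 0, 0), (1, 1, 0)], (0.9, 1.1, 0))
print(idx)  # 2

# Identical feature maps except one changed location.
f_synth = np.zeros((4, 4, 8))
f_b = f_synth.copy()
f_b[2, 3] = 1.0
print(change_mask(f_synth, f_b).sum())  # 1
```

In the disclosed method the subtraction operates on learned features of the synthesized and query images, which makes the comparison robust to the viewpoint differences that raw pixel differencing would flag spuriously.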