G06V10/758

IMAGE ACQUISITION APPARATUS AND ELECTRONIC APPARATUS INCLUDING THE SAME

An image acquisition apparatus includes: a multispectral image sensor configured to acquire images in at least four channels based on a second wavelength band of about 10 nm to about 1,000 nm; and a processor configured to estimate illumination information of the images by inputting the images of at least four channels to a deep learning network trained in advance, and convert colors of the acquired images using the estimated illumination information.
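The pipeline described above can be sketched as follows. The patent estimates the illuminant with a pre-trained deep network fed with four or more spectral channels; in this illustrative stand-in, a simple gray-world estimate replaces that network, and a von Kries-style diagonal transform performs the color conversion. All function and variable names are assumptions for illustration.

```python
# Sketch of the illumination-estimation / color-conversion pipeline.
# A gray-world estimate stands in for the trained deep network.

def estimate_illuminant(channels):
    """Gray-world placeholder for the deep-network illuminant estimate.

    `channels` is a list of per-channel pixel lists (>= 4 channels)."""
    return [sum(c) / len(c) for c in channels]

def correct_colors(channels, illuminant):
    """Divide each channel by its estimated illuminant (diagonal correction)."""
    return [[p / L for p in chan] for chan, L in zip(channels, illuminant)]

# Four spectral channels of a tiny 2x2 image, flattened.
image = [
    [0.2, 0.4, 0.6, 0.8],   # channel 1
    [0.1, 0.2, 0.3, 0.4],   # channel 2
    [0.5, 0.5, 0.5, 0.5],   # channel 3
    [0.3, 0.6, 0.9, 1.2],   # channel 4
]
illum = estimate_illuminant(image)
balanced = correct_colors(image, illum)
print(illum[0])              # 0.5
print(sum(balanced[0]) / 4)  # each corrected channel now averages 1.0
```

After the diagonal correction, every channel has unit mean, which is the gray-world notion of a neutrally lit scene.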

System, method, and computer program for capturing an image with correct skin tone exposure

A system and method are provided for capturing an image with correct skin tone exposure. In use, one or more faces having a threshold skin tone are detected within a scene. Next, based on the detected one or more faces, the scene is segmented into one or more face regions and one or more non-face regions. A model of the one or more faces is constructed based on a depth map and a texture map, the depth map including spatial data of the one or more faces and the texture map including surface characteristics of the one or more faces. One or more images of the scene are captured based on the model. Further, in response to the capture, the one or more face regions are processed to generate a final image.
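The face/non-face split and face-weighted exposure can be sketched as below. The skin-tone score, threshold value, and exposure rule are illustrative assumptions, not the patented model (which additionally builds a depth-and-texture face model before capture).

```python
# Sketch: pixels whose skin-tone score passes a threshold form the face
# region; exposure is then metered on the face region only.

def segment(scores, threshold=0.5):
    """Split pixel indices into face and non-face regions by score."""
    face = [i for i, s in enumerate(scores) if s >= threshold]
    non_face = [i for i, s in enumerate(scores) if s < threshold]
    return face, non_face

def meter_exposure(luma, face_idx, target=0.45):
    """Exposure gain that brings mean face luminance to `target`."""
    mean_face = sum(luma[i] for i in face_idx) / len(face_idx)
    return target / mean_face

scores = [0.9, 0.8, 0.2, 0.1]   # per-pixel skin-tone likelihood
luma   = [0.3, 0.3, 0.7, 0.9]   # per-pixel luminance
face, bg = segment(scores)
gain = meter_exposure(luma, face)
print(face, bg, gain)   # [0, 1] [2, 3] 1.5
```

Metering on the face region alone keeps the skin tones correctly exposed even when the background is much brighter, as in this toy scene.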

IMAGE FORGERY DETECTION VIA PIXEL-METADATA CONSISTENCY ANALYSIS

Systems and/or techniques for facilitating image forgery detection via pixel-metadata consistency analysis are provided. In various embodiments, a system can receive an electronic image from a client device. In various cases, the system can obtain a pixel vector and/or an image metadata vector that correspond to the electronic image. In various aspects, the system can determine whether the electronic image is authentic or forged, based on analyzing the pixel vector and the image metadata vector via at least one machine learning model.
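The consistency check above can be sketched as feature concatenation followed by a learned scorer. Here a hand-set linear model stands in for the trained machine-learning model; the feature meanings, weights, bias, and threshold are all illustrative assumptions.

```python
# Sketch: a pixel-derived feature vector and a metadata-derived feature
# vector are concatenated and scored by a (stand-in) linear model.

def classify(pixel_vec, meta_vec, weights, bias=0.0, threshold=0.0):
    """Label the image from the joint pixel + metadata feature vector."""
    features = pixel_vec + meta_vec   # simple concatenation
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return "forged" if score > threshold else "authentic"

# e.g. [noise-residual energy, JPEG blockiness] + [editing-software flag]
pixel_vec = [0.8, 0.6]
meta_vec = [1.0]                      # 1.0 = metadata names an image editor
weights = [1.0, 1.0, 2.0]
print(classify(pixel_vec, meta_vec, weights, bias=-2.0))   # forged
```

A real deployment would replace the linear scorer with the trained model while keeping the same pixel-vector/metadata-vector interface.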

Method and system for gait detection of a person

A method of detecting gaits of an individual with a sensor worn by the individual. The sensor includes an accelerometer and a processing unit. The method includes obtaining a signal representing one or more sensor acceleration values; sampling the signal to obtain a sampled signal; segmenting the sampled signal into windows to obtain a segmented acceleration signal; extracting a feature set from the segmented acceleration signal; determining a probability value for a respective window, n, where n is a positive integer greater than zero, the probability value giving an estimated probability of gait occurrence for the individual during the respective window; modifying the estimated probability value by using a histogram of previously detected gait durations to obtain a modified probability value; and determining, based on the modified probability value and by using a determination threshold, whether or not the respective window represents gait occurrence.
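The windowed decision chain above can be sketched as follows. The per-window gait probability here is a simple energy heuristic standing in for the feature-based estimator, and the histogram-derived prior is folded in by multiplication; the window size, prior value, and threshold are illustrative assumptions.

```python
# Sketch of the windowed gait decision: window -> probability ->
# histogram-prior modification -> threshold.

def windows(signal, size):
    """Segment the sampled signal into non-overlapping windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

def gait_probability(window):
    """Placeholder estimator: mean absolute acceleration, clipped to [0, 1]."""
    return min(1.0, sum(abs(x) for x in window) / len(window))

def modified_probability(p, duration_prior):
    """Scale the estimate by the prior from the gait-duration histogram."""
    return p * duration_prior

def detect(signal, size=4, prior=0.9, threshold=0.5):
    return [modified_probability(gait_probability(w), prior) > threshold
            for w in windows(signal, size)]

accel = [0.1, 0.0, 0.1, 0.0,   # near-still window
         0.9, 1.1, 0.8, 1.2]   # walking window
print(detect(accel))   # [False, True]
```

The duration-histogram prior lets the detector discount windows whose implied gait duration is unlike anything previously observed for the wearer.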

ANATOMICAL ENCRYPTION OF PATIENT IMAGES FOR ARTIFICIAL INTELLIGENCE
20220415004 · 2022-12-29

An apparatus (10) for generating a training set of anonymized images (40) for training an artificial intelligence (AI) component (42) from images (11) of a plurality of persons. The apparatus includes at least one electronic processor (20) programmed to: spatially map the images of the plurality of persons to a reference image (30) to generate images (32) in a common reference frame; partition the images in the common reference frame into P spatial regions (34) to generate P sets of image patches (36) corresponding to the P spatial regions; assemble a set of training images (3) in the common reference frame by, for each training image in the common reference frame, selecting an image patch from each of the P sets of image patches and assembling the selected image patches into the training image in the common reference frame; and process the training images in the common reference frame to generate the training set of anonymized images including applying statistical inverse spatial mappings to the training images in the common reference frame, wherein the statistical inverse spatial mappings are derived from spatial mappings (33) of the images of the plurality of persons to the reference image.
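The patch-mixing step at the heart of the anonymization scheme can be sketched as below: once the subjects' images are in a common reference frame, each training image draws each spatial region's patch from a randomly chosen subject, so no output image derives from a single person. The 1-D "images", region count, and seeded RNG are illustrative simplifications, and the statistical inverse spatial mappings are omitted.

```python
# Sketch of the region-wise patch mixing used to assemble anonymized
# training images in the common reference frame.
import random

def to_patches(image, p):
    """Partition one common-frame image into P equal spatial regions."""
    step = len(image) // p
    return [image[i * step:(i + 1) * step] for i in range(p)]

def assemble_training_image(patch_sets, rng):
    """Pick each region's patch from a random subject and stitch them."""
    out = []
    for region_patches in patch_sets:
        out.extend(rng.choice(region_patches))
    return out

subjects = [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]]   # common-frame images
P = 2
patch_sets = [[to_patches(img, P)[r] for img in subjects] for r in range(P)]
rng = random.Random(0)                   # seeded for reproducibility
train_img = assemble_training_image(patch_sets, rng)
print(train_img)   # each half comes whole from one (random) subject
```

Because every region's patch is internally consistent but the subjects differ across regions, the assembled image preserves region-level anatomy while breaking the link to any one patient.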

Medical Image Registration Method Based on Progressive Images

A two-stage medical image registration method based on progressive images (PIs) to solve the technical problem of low registration accuracy of traditional image registration methods includes: merging a reference image with a floating image to generate multiple intermediate PIs; registering, by a speeded-up robust features (SURF) algorithm and an affine transformation, the floating image with the intermediate PIs to acquire coarse registration results; registering, by the SURF algorithm and the affine transformation, the reference image with the coarse registration results to acquire fine registration results; and comparing the fine registration results of the intermediate PIs, which are acquired by iteration, and selecting an optimal registration result as a final registration image. The method can achieve multimodal registration for brain imaging with mutual information (MI), normalized cross-correlation (NCC), mean squared difference (MSD), and normalized mutual information (NMI) superior to those of existing registration algorithms. The method effectively improves the registration accuracy through the progressive medical image registration strategy.
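The progressive-image generation step can be sketched as below: each intermediate PI blends the reference image with the floating image, giving a sequence that moves gradually from one toward the other. Linear blending with evenly spaced weights is an illustrative choice; the SURF/affine registration stages that consume these PIs are not reproduced here.

```python
# Sketch of intermediate progressive-image (PI) generation by blending
# the floating image toward the reference image.

def progressive_images(reference, floating, n):
    """Return n intermediate images blended between floating and reference."""
    pis = []
    for k in range(1, n + 1):
        alpha = k / (n + 1)   # weight of the reference image
        pis.append([(1 - alpha) * f + alpha * r
                    for f, r in zip(floating, reference)])
    return pis

ref = [10.0, 10.0, 10.0]
flt = [0.0, 0.0, 0.0]
for pi in progressive_images(ref, flt, 3):
    print(pi)
# [2.5, 2.5, 2.5], [5.0, 5.0, 5.0], [7.5, 7.5, 7.5]
```

Registering the floating image to a nearby PI first, rather than directly to the reference, shrinks each registration step, which is the intuition behind the coarse-then-fine two-stage scheme.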

Fairing skin repair method based on measured wing data

A fairing skin repair method based on measured wing data includes fairing skin registration. A data set P1, obtained by denoising and filtering the wing point cloud data, is reorganized to obtain a key point set P. A histogram feature descriptor along the normal direction is calculated for each key point in set P and for the skin point cloud data Q. The Euclidean distance between the feature descriptors of two points is calculated through a K-nearest-neighbor algorithm, and points with high similarity are added to a set M. Clustering is performed on set M using a Hough voting algorithm to obtain a local point cloud set P′ within set P. The method further includes fairing skin repair. The boundary line of the point frame is projected onto Q, and the distance between the projection line on the point cloud and the boundary line is calculated to obtain the amount of skin to be repaired.
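The correspondence step above can be sketched as follows: for each key-point descriptor in P, find its nearest descriptor in the skin cloud Q by Euclidean distance and keep the pair in M only when the distance falls below a similarity cutoff. The 3-bin "histogram descriptors", brute-force nearest-neighbor search (in place of a k-d-tree KNN), and cutoff value are illustrative assumptions.

```python
# Sketch of descriptor matching between key points P and skin cloud Q.
import math

def euclidean(a, b):
    """Euclidean distance between two feature descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptors(desc_p, desc_q, cutoff=0.5):
    """Return (i, j) pairs whose nearest-neighbor distance is below cutoff."""
    matches = []
    for i, dp in enumerate(desc_p):
        dists = [euclidean(dp, dq) for dq in desc_q]
        j = min(range(len(dists)), key=dists.__getitem__)
        if dists[j] < cutoff:
            matches.append((i, j))
    return matches

desc_p = [[0.2, 0.5, 0.3], [0.9, 0.05, 0.05]]
desc_q = [[0.25, 0.45, 0.3], [0.1, 0.1, 0.8]]
print(match_descriptors(desc_p, desc_q))   # [(0, 0)]
```

The matched pairs in M would then feed the Hough-voting clustering that isolates the local point cloud set P′.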

SYSTEM AND METHOD FOR SUPER-RESOLUTION IMAGE PROCESSING IN REMOTE SENSING

A system and a method for super-resolution image processing in remote sensing are disclosed. One or more sets of multi-temporal images with an input resolution and one or more first target images with a first output resolution are generated from one or more data sources. The first output resolution is higher than the input resolution. Each set of multi-temporal images is processed to improve an image match in the corresponding set of multi-temporal images. The one or more sets of multi-temporal images are associated with the one or more first target images to generate a training dataset. A deep learning model is trained using the training dataset. The deep learning model is provided for subsequent super-resolution image processing.
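The dataset-assembly step can be sketched as below: each multi-temporal stack of low-resolution images is associated with its higher-resolution target to form one training sample. The intra-stack "match improvement" here is just a mean-offset correction toward the stack's reference frame; the real method's matching step and deep learning model are not reproduced, and all shapes and names are illustrative.

```python
# Sketch of pairing aligned multi-temporal LR stacks with HR targets
# to build a super-resolution training dataset.

def align_stack(stack):
    """Crudely improve the match within a stack: remove each image's mean
    offset relative to the first (reference) image."""
    ref_mean = sum(stack[0]) / len(stack[0])
    out = []
    for img in stack:
        offset = sum(img) / len(img) - ref_mean
        out.append([p - offset for p in img])
    return out

def build_dataset(lr_stacks, hr_targets):
    """Associate each aligned LR stack with its HR target image."""
    return [(align_stack(s), t) for s, t in zip(lr_stacks, hr_targets)]

lr_stacks = [[[1.0, 2.0], [2.0, 3.0]]]   # one stack of two 1x2 LR frames
hr_targets = [[1.0, 1.5, 2.0, 2.5]]      # matching 1x4 HR frame
dataset = build_dataset(lr_stacks, hr_targets)
print(dataset[0][0][1])   # [1.0, 2.0]  (second frame shifted to ref mean)
```

Each `(aligned stack, target)` tuple is one supervised example for the deep model, which learns the mapping from the input resolution to the first output resolution.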

Method for determining a histogram of variable sample rate waveforms

A computer-implemented method comprises receiving a plurality of sampled data points, each data point including a y value and a t value; defining a plurality of bins; defining an array of elements; dividing the sampled data points into a plurality of sections; assigning a plurality of polynomial equations, one polynomial equation to each section, each polynomial equation having a waveform that fits the data points of the associated section; determining a plurality of section bin times, one section bin time for each bin in each section, each section bin time determined using the polynomial equation and indicating an amount of time that the waveform has values in the range of one of the bins; and adding the section bin time for each bin in each section to the histogram data in the array element pointed to by the number of the bin.
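The bin-time accumulation above can be sketched as follows. A polynomial models the waveform within each section, and the time it spends inside each bin's y-range is added to that bin's histogram element. Here the time-in-bin is approximated by fine sampling of the polynomial (the method described computes it from the equation itself); the section layout, bin ranges, and the fitted polynomial are illustrative assumptions.

```python
# Sketch of per-section bin-time accumulation into a histogram array.

def poly_eval(coeffs, t):
    """Evaluate c0 + c1*t + c2*t^2 + ... at time t."""
    return sum(c * t ** i for i, c in enumerate(coeffs))

def add_section_bin_times(hist, coeffs, t0, t1, bins, steps=10000):
    """Approximate the time the section's waveform spends in each bin's
    y-range and add it to the histogram element for that bin."""
    dt = (t1 - t0) / steps
    for k in range(steps):
        y = poly_eval(coeffs, t0 + (k + 0.5) * dt)
        for b, (lo, hi) in enumerate(bins):
            if lo <= y < hi:
                hist[b] += dt
                break

bins = [(0.0, 0.5), (0.5, 1.0)]   # two y-ranges
hist = [0.0] * len(bins)
add_section_bin_times(hist, [0.0, 1.0], 0.0, 1.0, bins)   # y = t on [0, 1]
print([round(h, 3) for h in hist])   # [0.5, 0.5]
```

Because the waveform y = t rises linearly across [0, 1], it spends exactly half the section in each bin, which the accumulated bin times recover.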

Using captured video data to identify pose of a vehicle

A system uses video of a vehicle to detect and classify the vehicle's pose. The system generates an image stack by scaling and shifting a set of digital image frames from the video to a fixed scale, yielding a sequence of images over a time period. The system processes the image stack with a classifier to determine the pose of the vehicle. The system also may determine the state and class of visible turn signals on the vehicle, as well as predict the vehicle's direction of travel.
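The image-stack construction can be sketched as below: each frame's vehicle crop is resampled to one fixed size (nearest-neighbor here) so the classifier sees a size-normalized sequence. The 1-D "frames", the fixed scale, and the stand-in pose rule are illustrative assumptions; the actual classifier is a learned model.

```python
# Sketch: normalize variable-size vehicle crops to a fixed scale, stack
# them over time, and classify the stack.

def rescale(row, width):
    """Nearest-neighbor resample of a 1-D pixel row to a fixed width."""
    n = len(row)
    return [row[min(n - 1, int(i * n / width))] for i in range(width)]

def build_stack(crops, width=4):
    """Scale every crop in the sequence to the same fixed width."""
    return [rescale(c, width) for c in crops]

def classify_pose(stack):
    """Toy stand-in: brightness rising left-to-right => 'facing right'."""
    mean_left = sum(f[0] for f in stack) / len(stack)
    mean_right = sum(f[-1] for f in stack) / len(stack)
    return "facing right" if mean_right > mean_left else "facing left"

crops = [[0.1, 0.5, 0.9], [0.2, 0.4, 1.0, 0.8, 0.9, 1.0]]  # varying sizes
stack = build_stack(crops)
print(len(stack[0]), len(stack[1]))   # 4 4
print(classify_pose(stack))           # facing right
```

Normalizing every frame to one scale lets the classifier learn the temporal pattern of the pose rather than the vehicle's apparent size in each frame.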