Patent classifications
G06V10/757
SYSTEM AND METHOD FOR ANIMAL DETECTION
A system and a method for detecting animals in a region of interest are disclosed. An image that captures a scene in the region of interest is received. The image is fed to an animal detection model to produce a group of probability maps for a group of key points and a group of affinity field maps for a group of key point sets. One or more connection graphs are determined based on the group of probability maps and the group of affinity field maps. Each connection graph outlines a presence of an animal in the image. One or more animals present in the region of interest are detected based on the one or more connection graphs.
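The key-point/affinity-field pipeline described above can be illustrated with a short, hedged sketch (not the patented implementation): peaks are extracted from each probability map, candidate connections between two key-point types are scored by sampling the affinity field along the joining segment, and key points are greedily paired into connection-graph edges. Function names, thresholds, and the sampling count are illustrative assumptions.

```python
import numpy as np

def find_peaks(prob_map, threshold=0.5):
    """Return (row, col) coordinates of local maxima above a confidence threshold."""
    peaks = []
    h, w = prob_map.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            v = prob_map[r, c]
            if v > threshold and v == prob_map[r - 1:r + 2, c - 1:c + 2].max():
                peaks.append((r, c))
    return peaks

def connection_score(paf_x, paf_y, p1, p2, n_samples=10):
    """Integrate the affinity field along the segment joining two candidate key points."""
    vec = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    norm = np.linalg.norm(vec)
    if norm == 0:
        return 0.0
    unit = vec / norm                                  # (d_row, d_col) direction
    score = 0.0
    for t in np.linspace(0.0, 1.0, n_samples):
        r, c = (np.asarray(p1) + t * vec).round().astype(int)
        score += paf_x[r, c] * unit[1] + paf_y[r, c] * unit[0]
    return score / n_samples

def greedy_connections(peaks_a, peaks_b, paf_x, paf_y, min_score=0.1):
    """Greedily pair two key-point types by descending affinity score,
    producing the edges of a connection graph."""
    candidates = sorted(
        ((connection_score(paf_x, paf_y, a, b), a, b) for a in peaks_a for b in peaks_b),
        reverse=True)
    used_a, used_b, edges = set(), set(), []
    for s, a, b in candidates:
        if s >= min_score and a not in used_a and b not in used_b:
            edges.append((a, b, s))
            used_a.add(a)
            used_b.add(b)
    return edges
```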
DETECTION SYSTEM, PROCESSING APPARATUS, MOVEMENT OBJECT, DETECTION METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
A detection system includes an acquisition portion scanning light to acquire point-cloud information corresponding to a plurality of positions of a detection target object; an estimation portion using consistency with an outer shape model of the detection target object to estimate a location and attitude of the detection target object based on the point-cloud information; and an output portion outputting information relating to a movement target location based on an estimation result, wherein the estimation portion fits an outer shape model indicating an outer shape of the detection target object to a point cloud according to the point-cloud information, and uses point-cloud information existing outside the outer shape model to estimate the location and the attitude of the detection target object.
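As a rough illustration of fitting an outer shape model to a point cloud, the following sketch assumes a simplified 2D case with a rectangular footprint of known length and width: the yaw angle is grid-searched, the footprint is fitted in the rotated frame, and points lying outside the fitted model (plus a margin) are flagged. The patented estimator is not limited to this simplification; all names and parameters are assumptions.

```python
import numpy as np

def estimate_pose_2d(points, length, width, yaw_steps=90):
    """Grid-search yaw; for each candidate, fit the known L x W footprint to the
    rotated points and keep the yaw whose extent best matches the model size."""
    best = None
    for yaw in np.linspace(0.0, np.pi, yaw_steps, endpoint=False):
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, s], [-s, c]])              # world -> model frame
        local = points @ R.T
        lo, hi = local.min(axis=0), local.max(axis=0)
        extent = hi - lo
        mismatch = abs(extent[0] - length) + abs(extent[1] - width)
        if best is None or mismatch < best[0]:
            center_local = (lo + hi) / 2.0
            center_world = center_local @ R          # rotate back to world frame
            best = (mismatch, yaw, center_world)
    return best[1], best[2]                           # (yaw, center)

def points_outside_model(points, yaw, center, length, width, margin=0.05):
    """Flag points lying outside the fitted outer-shape rectangle (plus a margin)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, s], [-s, c]])
    local = (points - center) @ R.T
    outside = (np.abs(local[:, 0]) > length / 2 + margin) | \
              (np.abs(local[:, 1]) > width / 2 + margin)
    return points[outside]
```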
ZOOM BASED ON GESTURE DETECTION
A method performs zooming based on gesture detection. A visual stream is presented using a first zoom configuration for a zoom state. An attention gesture is detected from a set of first images from the visual stream. The zoom state is adjusted from the first zoom configuration to a second zoom configuration to zoom in on a person in response to detecting the attention gesture. The visual stream is presented using the second zoom configuration after adjusting the zoom state to the second zoom configuration. Whether the person is speaking is determined from a set of second images from the visual stream. The zoom state is adjusted to the first zoom configuration to zoom out from the person in response to determining that the person is not speaking. The visual stream is presented using the first zoom configuration after adjusting the zoom state to the first zoom configuration.
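The zoom logic amounts to a small state machine. A minimal sketch is shown below; ZoomConfig, the gesture detector, and the speaking check are hypothetical placeholders standing in for the components the abstract assumes.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Sequence, Tuple

@dataclass
class ZoomConfig:
    level: float                          # e.g. 1.0 = wide shot, 2.0 = person close-up
    target: Optional[Tuple[int, int]]     # centre of the zoom region, None = full frame

class GestureZoomController:
    """Switches between a wide (first) zoom configuration and a person-centric
    (second) one, driven by an attention gesture and a speaking check."""

    def __init__(self, wide: ZoomConfig,
                 detect_attention_gesture: Callable[[Sequence], Optional[Tuple[int, int]]],
                 person_is_speaking: Callable[[Sequence], bool]):
        self.wide = wide
        self.detect_attention_gesture = detect_attention_gesture
        self.person_is_speaking = person_is_speaking
        self.state = wide

    def update(self, images) -> ZoomConfig:
        if self.state is self.wide:
            person = self.detect_attention_gesture(images)
            if person is not None:                                  # attention gesture seen
                self.state = ZoomConfig(level=2.0, target=person)   # zoom in on the person
        elif not self.person_is_speaking(images):
            self.state = self.wide                                   # zoom back out
        return self.state
```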
IMAGE FEATURE MATCHING METHOD AND RELATED APPARATUS, DEVICE AND STORAGE MEDIUM
In an image feature matching method, at least two images to be matched are acquired; a feature representation of each image to be matched is obtained by performing feature extraction on the image to be matched, wherein the feature representation comprises a plurality of first local features; the first local features are transformed into first transformation features having a global receptive field of the images to be matched; and a first matching result of the at least two images to be matched is obtained by matching first transformation features in the at least two images to be matched.
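A toy sketch of the idea of giving local features a global receptive field before matching: a single (unlearned) self-attention pass lets each descriptor aggregate context from the whole image, and matches are then taken as mutual nearest neighbours. The actual method presumably uses learned transformer layers; this is only illustrative.

```python
import numpy as np

def self_attention(feats):
    """One self-attention pass so each local feature aggregates context from
    the whole image (a global receptive field), without learned weights."""
    d = feats.shape[1]
    scores = feats @ feats.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ feats

def mutual_nearest_matches(feats_a, feats_b):
    """Match transformed features between two images by mutual nearest neighbours."""
    sim = feats_a @ feats_b.T
    best_b = sim.argmax(axis=1)
    best_a = sim.argmax(axis=0)
    return [(i, j) for i, j in enumerate(best_b) if best_a[j] == i]

# toy usage with random descriptors
rng = np.random.default_rng(0)
fa = self_attention(rng.normal(size=(64, 32)))
fb = self_attention(rng.normal(size=(64, 32)))
print(len(mutual_nearest_matches(fa, fb)), "tentative matches")
```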
OBJECT DETECTION DEVICE
An object detection device includes an irradiation unit, a light reception unit and a detection unit. The light reception unit is configured to receive reflected light of light radiated by the irradiation unit and environment light. The detection unit is configured to detect a predetermined object based on a point group that is information based on the reflected light and at least one image. The point group is a group of reflection points detected in the whole distance measurement area. The at least one image includes an environment light image that is an image based on the environment light, a distance image that is an image based on a distance to an object detected on the basis of the reflected light, and/or a reflection intensity image that is an image based on reflection intensity of the reflected light.
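A hedged sketch of how the distance, reflection-intensity, and environment-light images might be combined to detect a predetermined object; the thresholds and the bounding-box output are illustrative assumptions, not the patented detection logic.

```python
import numpy as np

def detect_object(distance_img, intensity_img, env_img,
                  max_range=20.0, min_reflectivity=0.3, min_pixels=25):
    """Combine the distance, reflection-intensity and environment-light images:
    keep pixels that are near, strongly reflective and visible in ambient light,
    then return the bounding box of the detected region (or None)."""
    mask = (distance_img < max_range) & \
           (intensity_img > min_reflectivity) & \
           (env_img > 0)
    if mask.sum() < min_pixels:
        return None
    rows, cols = np.nonzero(mask)
    return rows.min(), cols.min(), rows.max(), cols.max()
```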
WEATHER STATION LOCATION SELECTION USING ITERATION WITH FRACTALS
A method, computer system, and a computer program product for weather station placement design are provided. Weather data measured at weather stations, current location data regarding current locations of the respective weather stations, and weather forecast data generated by a weather forecast model are received. Forecast performance of the weather forecast model is determined by comparing the weather data to the weather forecast data, so that first weather stations where the weather forecast model had the best forecast performance are identified. A weather forecast performance map is generated based on the identified first weather stations. Fractals are generated. The fractals are iteratively matched to the weather forecast performance map to identify a first fractal that most closely matches a layout of the current locations of the first weather stations. A first fractal map that includes the first fractal overlaid on the weather forecast performance map is presented.
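The forecast-performance and fractal-matching steps can be sketched as follows; RMSE as the skill measure and mean nearest-neighbour distance as the layout-matching score are assumptions made purely for illustration.

```python
import numpy as np

def station_rmse(observed, forecast):
    """Per-station RMSE between measured weather data and the model forecast;
    lower values mean better forecast performance at that station."""
    return np.sqrt(((observed - forecast) ** 2).mean(axis=1))

def best_stations(observed, forecast, locations, k=5):
    """Identify the k 'first' stations where the forecast model performed best."""
    order = np.argsort(station_rmse(observed, forecast))[:k]
    return locations[order]

def layout_match_score(fractal_points, station_points):
    """Mean nearest-neighbour distance from fractal vertices to the best-station
    layout; the candidate fractal with the lowest score matches it most closely."""
    d = np.linalg.norm(fractal_points[:, None, :] - station_points[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```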
METHODS, SYSTEMS, ARTICLES OF MANUFACTURE, AND APPARATUS TO EXTRACT SHAPE FEATURES BASED ON A STRUCTURAL ANGLE TEMPLATE
Methods, systems, articles of manufacture, and apparatus to extract shape features based on a structural angle template are disclosed. An example apparatus includes a template generator to generate a template based on an input image and calculate a template value based on values in the template; a bit slicer to calculate an OR bit slice and an AND bit slice based on the input image, combine the OR bit slice with the AND bit slice to generate a fused image, group a plurality of pixels of the fused image to generate a pixel window, each pixel of the pixel window including a pixel value, and calculate a window value based on the pixel values of the pixel window; and a comparator to compare the template value with the window value and store the pixel window in response to determining the window value satisfies a similarity threshold with the template value.
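The bit-slicing and window-comparison steps might look roughly like the sketch below; how the OR and AND bit slices are fused and how the template value is computed are not fully specified by the abstract, so summation of the slices and a window mean are used as stand-ins.

```python
import numpy as np

def bit_slices(img, planes=(7, 6, 5)):
    """OR and AND combinations of selected bit planes of an 8-bit grey image."""
    slices = [((img >> p) & 1).astype(np.uint8) for p in planes]
    return np.bitwise_or.reduce(slices), np.bitwise_and.reduce(slices)

def matching_windows(img, template, threshold=0.1):
    """Fuse the OR and AND bit slices (here simply by summing them) and keep
    window positions whose mean value is close to the template's value."""
    or_s, and_s = bit_slices(img)
    fused = or_s + and_s                      # 0, 1 or 2 per pixel
    t_or, t_and = bit_slices(template)
    template_value = (t_or + t_and).mean()
    th, tw = template.shape
    hits = []
    for r in range(fused.shape[0] - th + 1):
        for c in range(fused.shape[1] - tw + 1):
            window = fused[r:r + th, c:c + tw]
            if abs(window.mean() - template_value) <= threshold:
                hits.append((r, c))
    return hits
```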
Anti-counterfeiting image code embedded in a decorative pattern of a ceramic tile and anti-counterfeiting method thereof
The present disclosure relates to an anti-counterfeiting image code embedded in a decorative pattern of a ceramic tile and an anti-counterfeiting method thereof; the anti-counterfeiting image code is input into a terminal recognition software application. The anti-counterfeiting method includes the steps of: (1) embedding an image code into the decorative pattern of the ceramic tile; (2) inputting the decorative pattern on a surface of the ceramic tile into image code generating software to generate the image code that can be decoded, and editing a ceramic tile parameter and ceramic tile information in the image code generating software; (3) packing the image code and inputting it into a terminal recognition software application; (4) downloading the terminal recognition software application at a mobile terminal; and (5) opening the application to initiate a code scanner and capturing an image or a pre-taught partial feature image.
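The abstract does not disclose how the image code is actually embedded in the decorative pattern; as a generic stand-in, the sketch below hides code bits in the least-significant bits of the pattern image and recovers them from a scan. Both functions are hypothetical and only illustrate the embed/extract round trip.

```python
import numpy as np

def embed_code(pattern, code_bits):
    """Hide the code bits in the least-significant bits of the decorative pattern
    (a generic stand-in; the patent's embedding scheme is not specified here)."""
    out = pattern.astype(np.uint8).copy().reshape(-1)
    bits = np.asarray(code_bits, dtype=np.uint8)
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits
    return out.reshape(pattern.shape)

def extract_code(pattern, n_bits):
    """Recover the embedded bits from a scan of the tile surface."""
    return pattern.reshape(-1)[:n_bits] & 1
```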
Method for processing images, electronic device, and storage medium
A method for processing images includes: detecting a plurality of human face key points of a three-dimensional human face in a target image; acquiring a virtual makeup image, wherein the virtual makeup image includes a plurality of reference key points, the reference key points indicating human face key points of a two-dimensional human face; and acquiring a target image fused with the virtual makeup image by fusing the virtual makeup image and the target image with each of the reference key points in the virtual makeup image aligned with a corresponding human face key point.
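One simple way to realise the alignment-and-fusion step is to estimate an affine transform from the 2D reference key points to the detected face key points and alpha-blend the warped makeup image, as sketched below; the patented method may use a finer (e.g. mesh-based) warp, so this is only a rough illustration with assumed function names.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine mapping reference key points to face key points."""
    n = len(src_pts)
    A = np.hstack([src_pts, np.ones((n, 1))])
    params, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)  # shape (3, 2)
    return params.T                                        # shape (2, 3)

def warp_and_blend(face, makeup, alpha_mask, affine):
    """Warp the makeup image onto the face with nearest-neighbour inverse mapping
    and alpha-blend where the makeup is opaque."""
    h, w = face.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    M = np.vstack([affine, [0, 0, 1]])
    inv = np.linalg.inv(M)
    src = inv @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx = np.clip(src[0].round().astype(int), 0, makeup.shape[1] - 1)
    sy = np.clip(src[1].round().astype(int), 0, makeup.shape[0] - 1)
    warped = makeup[sy, sx].reshape(h, w, -1)
    a = alpha_mask[sy, sx].reshape(h, w, 1)
    return (a * warped + (1 - a) * face).astype(face.dtype)
```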
METHOD FOR DEPTH ESTIMATION FOR A VARIABLE FOCUS CAMERA
The disclosure relates to a method including: capturing a sequence of images of a scene with a camera at different focus positions according to a predetermined focus schedule that specifies a chronological sequence of focus positions of the camera, extracting image features of captured images, after having extracted and stored image features from said captured images, processing a captured image whose image features have not yet been extracted, said processing comprising extracting image features from the currently processed image and storing the extracted image features, said processing further comprising aligning image features stored from the previously captured images with the image features of the currently processed image, and generating a multi-dimensional tensor representing the image features of the processed images aligned to the image features of the currently processed image, and generating a two-dimensional depth map using the focus positions in the predetermined focus schedule and the generated multi-dimensional tensor.
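A compact depth-from-focus sketch, assuming the focal stack has already been aligned as described above: a per-pixel sharpness measure is computed for each captured image, the measures are stacked into a tensor, and each pixel's depth is read off as the focus position at which it appears sharpest. The local-variance measure is an assumption for illustration.

```python
import numpy as np

def focus_measure(img, k=3):
    """Local variance as a simple per-pixel sharpness measure."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return windows.var(axis=(-1, -2))

def depth_from_focus(images, focus_positions):
    """Stack per-image focus measures into a tensor and take, per pixel, the
    focus position at which the scene appears sharpest."""
    tensor = np.stack([focus_measure(im) for im in images])    # (N, H, W)
    best = tensor.argmax(axis=0)                               # index of sharpest image
    return np.asarray(focus_positions)[best]                   # (H, W) depth map
```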