Patent classifications
G06V10/803
Method and systems for anatomy/view classification in x-ray imaging
Various methods and systems are provided for x-ray imaging. In one embodiment, a method for an image pasting examination comprises acquiring, via an optical camera and/or depth camera, image data of a subject, controlling an x-ray source and an x-ray detector according to the image data to acquire a plurality of x-ray images of the subject, and stitching the plurality of x-ray images into a single x-ray image. In this way, optimal exposure techniques may be used for individual acquisitions in an image pasting examination such that the optimal dose is utilized, stitching quality is improved, and registration failures are avoided.
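As a rough illustration of the stitching step described in this abstract (not the patented method), the sketch below pastes two vertically overlapping x-ray sub-images: the overlap is estimated by normalized cross-correlation over a bounded search range and the seam is linearly blended. The function names, the fixed search range, and the blending scheme are illustrative assumptions.

```python
import numpy as np

def find_vertical_offset(top, bottom, max_shift=200):
    """Estimate how many rows of `bottom` overlap the end of `top`.

    Assumes both images are 2-D arrays of equal width and that the true
    overlap is at most `max_shift` rows (an illustrative assumption).
    """
    best_shift, best_score = 1, -np.inf
    for shift in range(1, max_shift):
        a = top[-shift:, :].ravel().astype(float)
        b = bottom[:shift, :].ravel().astype(float)
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0:
            continue
        score = float(a @ b) / denom  # normalized cross-correlation
        if score > best_score:
            best_score, best_shift = score, shift
    return best_shift

def paste(top, bottom, overlap):
    """Blend the overlapping rows linearly and concatenate the rest."""
    w = np.linspace(1.0, 0.0, overlap)[:, None]
    blended = w * top[-overlap:, :] + (1 - w) * bottom[:overlap, :]
    return np.vstack([top[:-overlap, :], blended, bottom[overlap:, :]])

# toy usage with synthetic data (50 rows of true overlap)
full = np.random.rand(500, 256)
top, bottom = full[:300], full[250:]
overlap = find_vertical_offset(top, bottom)
stitched = paste(top, bottom, overlap)
```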
SYSTEMS AND METHODS FOR UNIFIED VISION-LANGUAGE UNDERSTANDING AND GENERATION
Embodiments described herein provide bootstrapping language-image pretraining for unified vision-language understanding and generation (BLIP), a unified VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP enables a wider range of downstream tasks, addressing the shortcomings of existing models, which tend to excel at either understanding-based or generation-based tasks but not both.
Dynamic imaging system
A dynamic imaging system is disclosed. The dynamic imaging system may comprise one or more imagers, one or more input devices, a controller, and/or a display. Each imager may be operable to capture a video stream having a field of view. In some embodiments, the controller may articulate the imager or crop the field of view to change the field of view in response to signals from the one or more input devices. For example, the signal may relate to the vehicle's speed. In other embodiments, the controller may apply a warp to the field of view. The warp may be applied in response to signals from the one or more input devices. In yet other embodiments, video streams from one or more imagers may be stitched together by the controller. Further, the controller may likewise move the stitch line in response to signals from the one or more input devices.
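A minimal sketch of the speed-driven field-of-view change mentioned in the abstract is shown below. The mapping from vehicle speed to crop size, the 120 km/h ceiling, and the function name are all illustrative assumptions, not details from the patent.

```python
import numpy as np

def crop_field_of_view(frame, speed_kph, min_fraction=0.5):
    """Return a center crop of `frame` whose size shrinks as speed rises.

    Illustrative mapping only: at 0 km/h the full field of view is kept,
    and at or above 120 km/h the crop narrows to `min_fraction` of the
    frame, approximating "change the field of view in response to a
    speed signal".
    """
    h, w = frame.shape[:2]
    t = min(max(speed_kph, 0.0), 120.0) / 120.0        # normalize speed to 0..1
    fraction = 1.0 - t * (1.0 - min_fraction)           # 1.0 down to min_fraction
    ch, cw = int(h * fraction), int(w * fraction)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return frame[y0:y0 + ch, x0:x0 + cw]

# toy usage: narrower field of view at highway speed
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cropped = crop_field_of_view(frame, speed_kph=90.0)
print(cropped.shape)
```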
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
The present technology relates to an information processing apparatus, an information processing method, and a program capable of obtaining a distance to an object more accurately.
An extraction unit extracts, on the basis of an object recognized in a captured image obtained by a camera, the sensor data corresponding to an object region including the object in the captured image from among the sensor data obtained by a rangefinding sensor. The present technology can be applied, for example, to an apparatus for evaluating distance information.
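As a rough sketch of the extraction step (not the claimed implementation), the snippet below keeps only the range-sensor samples whose image projection falls inside the recognized object's 2-D region. The (u, v, distance) point layout, the box convention, and the median-based distance are assumptions for illustration.

```python
import numpy as np

def extract_object_region(points_uvz, box):
    """Keep range-sensor samples whose image projection falls in `box`.

    `points_uvz` is assumed to be an (N, 3) array of (u, v, distance)
    values already projected into the camera image; `box` is the object
    region as (u_min, v_min, u_max, v_max).
    """
    u, v = points_uvz[:, 0], points_uvz[:, 1]
    u_min, v_min, u_max, v_max = box
    inside = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    return points_uvz[inside]

# toy usage: distance to the object taken as the median of in-region samples
points = np.random.rand(1000, 3) * [640, 480, 50]
region = extract_object_region(points, box=(200, 150, 400, 330))
if len(region):
    print("estimated distance:", float(np.median(region[:, 2])))
```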
Electronic device and method for controlling camera using external electronic device
An electronic device and method are provided. The electronic device includes a camera, a communication circuit, and a processor configured to be operably coupled to the camera and the communication circuit. The processor is further configured to receive first image data from the camera by controlling the camera based on a first parameter, transmit the first image data to an external electronic device by using the communication circuit in response to acquisition of the first image data, identify a second parameter for controlling the camera at least based on the external electronic device having received the first image data, and acquire second image data by controlling the camera based on the second parameter in response to the identification of the second parameter.
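The control flow in this abstract can be sketched as follows: capture with a first parameter, hand the first image to the external electronic device, receive a refined second parameter, and capture again. The `Camera`, `CameraParams`, and `send_to_external_device` names are placeholders invented for illustration; real device and communication APIs differ.

```python
from dataclasses import dataclass

@dataclass
class CameraParams:
    exposure_ms: float
    iso: int

class Camera:
    """Stand-in for the device camera; real camera drivers differ."""
    def capture(self, params: CameraParams) -> bytes:
        return b"raw-image-bytes"            # placeholder image data

def send_to_external_device(image: bytes) -> CameraParams:
    """Stand-in for the communication circuit: the external electronic
    device analyzes the first image data and returns refined parameters."""
    return CameraParams(exposure_ms=8.0, iso=200)

camera = Camera()
first_params = CameraParams(exposure_ms=16.0, iso=400)
first_image = camera.capture(first_params)            # first image data
second_params = send_to_external_device(first_image)  # identify second parameter
second_image = camera.capture(second_params)          # second image data
```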
Mobile multi-camera multi-view capture
A background scenery portion may be identified in each of a plurality of image sets of an object, where each image set includes images captured simultaneously from different cameras. A correspondence between the image sets may be determined, where the correspondence tracks control points associated with the object and present in multiple images. A multi-view interactive digital media representation of the object that is navigable in one or more dimensions and that includes the image sets may be generated and stored.
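One plausible way to establish the control-point correspondence described above (not necessarily the patented one) is pyramidal Lucas-Kanade optical flow between images from successive image sets, as sketched below with OpenCV; the seeding strategy and parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def track_control_points(prev_gray, next_gray, prev_points):
    """Track object control points from one image to the next using
    pyramidal Lucas-Kanade optical flow; keep only points tracked OK."""
    next_points, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_points, None)
    good = status.ravel() == 1
    return prev_points[good], next_points[good]

# toy usage with synthetic grayscale frames and a small simulated motion
prev_frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
next_frame = np.roll(prev_frame, 3, axis=1)
seed = cv2.goodFeaturesToTrack(prev_frame, maxCorners=50,
                               qualityLevel=0.01, minDistance=10)
matched_prev, matched_next = track_control_points(prev_frame, next_frame, seed)
```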
Visual, depth and micro-vibration data extraction using a unified imaging device
A unified imaging device is used for detecting and classifying objects in a scene, including motion and micro-vibrations. A plurality of images of the scene is received from an imaging sensor of the unified imaging device, which comprises a light source adapted to project onto the scene a predefined structured light pattern constructed of a plurality of diffused light elements. One or more objects present in the scene are classified by visually analyzing the image(s), depth data of the object(s) is extracted by analyzing the position of diffused light element(s) reflected from the object(s), and micro-vibration(s) of the object(s) are identified by analyzing a change in the speckle pattern of the reflected diffused light element(s) in at least some consecutive images. The classification, the depth data, and the data of the one or more micro-vibrations are output; since all are derived from the analyses of images captured by the same imaging sensor, they are inherently registered in a common coordinate system.
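A crude proxy for the speckle-based micro-vibration detection mentioned above is sketched below: the temporal standard deviation of pixel intensities inside each diffused-light-element window across consecutive frames. The window definition, the threshold value, and the use of a plain standard deviation are assumptions, not the patented analysis.

```python
import numpy as np

def micro_vibration_score(frames, region, vibration_threshold=5.0):
    """Score micro-vibration for one reflected diffused light element.

    `frames` is a list of consecutive grayscale images and `region` is a
    (y0, y1, x0, x1) window around one light element.  The score is the
    mean temporal standard deviation of the speckle intensities in that
    window; the threshold is illustrative, not a calibrated value.
    """
    y0, y1, x0, x1 = region
    stack = np.stack([f[y0:y1, x0:x1].astype(float) for f in frames])
    score = float(stack.std(axis=0).mean())
    return score, score > vibration_threshold

# toy usage: a perfectly static patch yields ~zero score (no micro-vibration)
static = [np.full((480, 640), 100, dtype=np.uint8)] * 10
score, vibrating = micro_vibration_score(static, region=(100, 120, 200, 220))
print(score, vibrating)
```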
FINGERPRINT ANTI-COUNTERFEITING METHOD AND ELECTRONIC DEVICE
A fingerprint anti-counterfeiting method and an electronic device are provided. In the fingerprint anti-counterfeiting method, after detecting a fingerprint input action of a user, an electronic device obtains a fingerprint image generated by the fingerprint input action and obtains a vibration-sound signal generated by the fingerprint input action. The device then determines, based on a fingerprint anti-counterfeiting model, whether the fingerprint input action is performed by a real finger. The fingerprint anti-counterfeiting model is a multi-dimensional network model obtained through learning from training fingerprint images and their corresponding vibration-sound signals. The fingerprint anti-counterfeiting method in embodiments of this application helps improve the electronic device's protection against fake-fingerprint attacks.
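A toy two-branch model in the spirit of this abstract is sketched below: one branch embeds the fingerprint image, another embeds the vibration-sound signal, and a joint head predicts the probability of a real finger. The layer sizes, input shapes, and class name are illustrative assumptions, not the patent's model.

```python
import torch
import torch.nn as nn

class FingerprintLivenessNet(nn.Module):
    """Illustrative multi-modal liveness model (image + vibration-sound)."""

    def __init__(self):
        super().__init__()
        self.image_branch = nn.Sequential(          # 1 x 64 x 64 fingerprint patch
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten())
        self.sound_branch = nn.Sequential(          # 1 x 256-sample vibration signal
            nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
            nn.Flatten())
        self.head = nn.Sequential(nn.Linear(16 + 8, 16), nn.ReLU(),
                                  nn.Linear(16, 1))

    def forward(self, fingerprint, vibration_sound):
        fused = torch.cat([self.image_branch(fingerprint),
                           self.sound_branch(vibration_sound)], dim=1)
        return torch.sigmoid(self.head(fused))      # probability of a real finger

# toy usage with a batch of two samples
model = FingerprintLivenessNet()
prob = model(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 256))
print(prob.shape)
```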
FLEXIBLE MULTI-CHANNEL FUSION PERCEPTION
A method may include obtaining first sensor data from a first sensor system and second sensor data from a second sensor system. The first and the second sensor systems may capture sensor data from a total measurable world. The method may include identifying a first object included in the first sensor data and a second object included in the second sensor data and determining first parameters corresponding to the first object and second parameters corresponding to the second object. The first parameters may be compared with the second parameters, and whether the first object and the second object are the same object may be determined based on the comparison of the first parameters and the second parameters. Responsive to determining that the first object and the second object are the same object, a set of objects representative of objects in the total measurable world, including the same object, may be generated.
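The parameter comparison and same-object decision can be sketched as below: detections from two sensor channels are matched on position, size, and label, and matched pairs appear once in the fused object set. The parameter choices, thresholds, and names are illustrative assumptions rather than the claimed criteria.

```python
import math
from dataclasses import dataclass

@dataclass
class DetectedObject:
    x: float          # world-frame position in meters (illustrative parameters)
    y: float
    length: float
    width: float
    label: str

def is_same_object(a: DetectedObject, b: DetectedObject,
                   max_center_dist=1.5, max_size_ratio=1.5):
    """Decide whether two detections from different sensor channels refer
    to the same physical object by comparing their parameters."""
    center_dist = math.hypot(a.x - b.x, a.y - b.y)
    size_ratio = max(a.length / b.length, b.length / a.length)
    return (center_dist <= max_center_dist and size_ratio <= max_size_ratio
            and a.label == b.label)

def fuse(objects_a, objects_b):
    """Build the fused set: matched pairs appear once, unmatched ones are kept."""
    fused, matched_b = [], set()
    for a in objects_a:
        match = next((b for b in objects_b
                      if id(b) not in matched_b and is_same_object(a, b)), None)
        if match is not None:
            matched_b.add(id(match))
        fused.append(a)                      # one representative per matched object
    fused.extend(b for b in objects_b if id(b) not in matched_b)
    return fused

cam = [DetectedObject(10.2, 1.1, 4.3, 1.8, "car")]
radar = [DetectedObject(10.0, 1.0, 4.5, 1.9, "car"),
         DetectedObject(30.0, -2.0, 0.6, 0.6, "pedestrian")]
print(len(fuse(cam, radar)))   # 2 objects in the total measurable world
```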
DETECTION METHODS TO DETECT OBJECTS SUCH AS BICYCLISTS
To reliably detect an object such as a bicycle at increased range, an Advanced Driving Support System uses one or more deep neural networks to detect, in an ambient (grey-scale) image, an object that is then tracked by a second range-detection camera. Most objects of interest, such as bicycles and automobiles, are outfitted with one or more retroreflectors that are used to cue the neural network to the object of most interest. As the retroreflectors also tend to saturate the range-detection camera, a method is used to manage the saturation and estimate the correct range to the object.
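One simple way to picture the saturation handling (not the claimed method) is to mask pixels at the sensor's intensity ceiling and take the range from nearby unsaturated depth samples, as in the sketch below; the saturation level, ring size, and median estimator are all assumptions.

```python
import numpy as np

def estimate_range_near_saturation(depth_m, intensity, sat_level=255, ring=5):
    """Estimate object range when a retroreflector saturates the range camera.

    Saturated pixels (intensity at the assumed sensor ceiling) are masked
    out, and the range is taken as the median depth of unsaturated pixels
    in a small window around the saturated blob.
    """
    saturated = intensity >= sat_level
    if not saturated.any():
        return float(np.median(depth_m))
    ys, xs = np.where(saturated)
    y0, y1 = max(ys.min() - ring, 0), min(ys.max() + ring + 1, depth_m.shape[0])
    x0, x1 = max(xs.min() - ring, 0), min(xs.max() + ring + 1, depth_m.shape[1])
    window = ~saturated[y0:y1, x0:x1]
    samples = depth_m[y0:y1, x0:x1][window]
    return float(np.median(samples)) if samples.size else float("nan")

# toy usage: bicycle ~25 m away with a saturated retroreflector blob
depth = np.full((100, 100), 25.0)
intensity = np.full((100, 100), 80, dtype=np.uint8)
intensity[45:55, 45:55] = 255
print(estimate_range_near_saturation(depth, intensity))   # ~25.0
```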