Patent classifications
H04N13/271
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
An information processing apparatus (IP1) includes a depth information extraction unit (DIE1) and a processing unit (IMP). The depth information extraction unit (DIE1) extracts depth information from infrared image information included in a plurality of pieces of image data captured from a plurality of viewpoints, each piece of image data including both visible light image information and infrared image information. Based on the depth information, the processing unit (IMP) processes a visible light image generated using the visible light image information included in at least one of the pieces of image data.
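The depth extraction step described above can be sketched as classic stereo block matching on the infrared channels of two viewpoints. This is a minimal illustration, not the patent's method: the function name `extract_depth`, the SAD cost, and the focal-length/baseline constants are all assumptions.

```python
import numpy as np

FOCAL_PX = 525.0    # focal length in pixels (assumed calibration value)
BASELINE_M = 0.06   # distance between the two viewpoints in meters (assumed)

def extract_depth(ir_left: np.ndarray, ir_right: np.ndarray,
                  max_disp: int = 32, block: int = 5) -> np.ndarray:
    """Per-pixel depth (meters) from block-matching disparity; 0 where unmatched."""
    h, w = ir_left.shape
    half = block // 2
    depth = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = ir_left[y-half:y+half+1, x-half:x+half+1].astype(np.float32)
            best_d, best_cost = 0, np.inf
            for d in range(max_disp):
                cand = ir_right[y-half:y+half+1,
                                x-d-half:x-d+half+1].astype(np.float32)
                cost = np.abs(patch - cand).sum()  # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            if best_d > 0:
                # standard pinhole stereo relation: depth = f * B / disparity
                depth[y, x] = FOCAL_PX * BASELINE_M / best_d
    return depth
```

The visible light image from either viewpoint could then be refocused, segmented, or otherwise processed per pixel using this depth map.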
DETECTION AND RANGING BASED ON A SINGLE MONOSCOPIC FRAME
One or more stereoscopic images are generated from a single monoscopic image, which may be obtained from a camera sensor. Each stereoscopic image includes a first digital image and a second digital image that, when viewed using any suitable stereoscopic viewing technique, give a user or software program a three-dimensional effect with respect to the elements included in the stereoscopic image. The monoscopic image may depict a geographic setting of a particular geographic location, and the resulting stereoscopic image may provide a three-dimensional (3D) rendering of that setting. Use of the stereoscopic image helps a system obtain more accurate detection and ranging capabilities. The stereoscopic image may be any configuration of the first (monoscopic) digital image and the second (monoscopic) digital image that together produce a 3D effect as perceived by a viewer or software program.
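One common way to realize "a second digital image from a single monoscopic frame" is depth-image-based rendering: shift each pixel horizontally in proportion to its inverse depth. The sketch below assumes a per-pixel depth map is available; `synthesize_right_view` and the linear parallax model are illustrative, not the patented technique.

```python
import numpy as np

def synthesize_right_view(mono: np.ndarray, depth: np.ndarray,
                          max_shift: int = 8) -> np.ndarray:
    """Synthesize a right-eye view: nearer pixels receive larger parallax."""
    h, w = mono.shape[:2]
    near, far = depth.min(), depth.max()
    # normalize depth to [0, 1]; 1 - norm makes close objects move the most
    norm = (depth - near) / max(far - near, 1e-6)
    shift = ((1.0 - norm) * max_shift).astype(int)
    right = np.zeros_like(mono)
    for y in range(h):
        for x in range(w):
            nx = x - shift[y, x]        # shift left for the right-eye view
            if 0 <= nx < w:
                right[y, nx] = mono[y, x]
    return right
```

The original frame and the synthesized frame form the stereoscopic pair; disoccluded pixels (left as zeros here) would need inpainting in a real system.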
Pointer projection for natural user input
A method to identify a targeted object based on eye tracking and gesture recognition. The method is enacted in a compute system controlled by a user and operatively coupled to a machine vision system. In this method, the compute system receives, from the machine vision system, video imaging a head and pointer of the user. Based on the video, the compute system computes a geometric line of sight of the user, which is partly occluded by the pointer. Then, with reference to position data for one or more objects, the compute system identifies the targeted object, situated along the geometric line of sight.
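The final selection step, identifying the object situated along the occluded line of sight, can be sketched as a nearest-to-ray test: cast a ray from the eye through the pointer tip and pick the object whose position lies closest to it. `identify_target` and the 0.1 m acceptance threshold are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def identify_target(eye, pointer_tip, objects, max_dist=0.1):
    """Return the index of the object nearest the eye->pointer ray, or -1."""
    eye = np.asarray(eye, dtype=float)
    direction = np.asarray(pointer_tip, dtype=float) - eye
    direction /= np.linalg.norm(direction)
    best_i, best_d = -1, max_dist
    for i, obj in enumerate(objects):
        v = np.asarray(obj, dtype=float) - eye
        t = v @ direction
        if t <= 0:                                # behind the user: ignore
            continue
        perp = np.linalg.norm(v - t * direction)  # perpendicular distance to ray
        if perp < best_d:
            best_d, best_i = perp, i
    return best_i
```

The position data for candidate objects would come from the machine vision system or a stored scene model, as the abstract describes.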
SOLID-STATE IMAGING DEVICE AND ELECTRONIC CAMERA
A solid-state imaging device includes a second image sensor having an organic photoelectric conversion film that transmits specific light, and a first image sensor stacked on the same semiconductor substrate as the second image sensor, which receives the specific light transmitted through the second image sensor. A pixel for focus detection is provided in the second image sensor or the first image sensor, so an AF (autofocus) method can be realized independently of the imaging pixels.
Volume Estimation Device and Work Machine Using Same
The invention improves the accuracy of estimating the volume of an object in a container from a captured image when a blind spot region exists in that image. The device includes a blind spot estimation portion that estimates the blind spot region of the object in the bucket; a blind spot region shape estimation portion that estimates the shape of the object in the blind spot region; and a volume estimation portion that estimates the volume of the object in the blind spot region. The blind spot estimation portion estimates the blind spot region from mesh disparity data obtained from images of the object in the bucket captured by a plurality of cameras; the blind spot region shape estimation portion estimates the shape of the object in the blind spot region from the same mesh disparity data; and the volume estimation portion estimates the volume of the object in the blind spot region based on that estimated shape and the shape of the bottom of the bucket.
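The volume integration step can be sketched as summing a per-cell height grid above the bucket bottom, with occluded (blind-spot) cells filled in before integrating. This is a crude illustration under stated assumptions: `estimate_volume` is a made-up name, blind-spot cells are marked NaN, and the fill uses the mean of visible cells rather than the patent's shape estimation.

```python
import numpy as np

def estimate_volume(heights: np.ndarray, bottom: float,
                    cell_area: float) -> float:
    """Volume = sum over grid cells of (surface height - bottom) * cell area."""
    filled = heights.copy()
    nan_mask = np.isnan(filled)
    if nan_mask.any():
        # crude blind-spot fill: replace occluded cells with the mean of
        # the visible surface (a real system would fit a surface shape)
        filled[nan_mask] = np.nanmean(heights)
    # clip so cells below the bucket bottom contribute nothing
    return float(np.clip(filled - bottom, 0.0, None).sum() * cell_area)
```

The height grid itself would come from the mesh disparity data produced by the camera pair.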
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
There is provided an information processing apparatus, an information processing method, and a program capable of projecting an image to an appropriate projection destination while reducing the work required of a user. The information processing apparatus includes: a 3-dimensional information acquisition unit configured to acquire 3-dimensional information indicating the disposition of an object; and a projection control unit configured to decide, on the basis of the 3-dimensional information acquired by the 3-dimensional information acquisition unit, a projection region to which an image is projected in the space in which the object is disposed.
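Deciding a projection region from 3-dimensional information can be sketched as a search over a 2D occupancy grid derived from the scene: pick the window of the image's size with the fewest occupied cells. `choose_projection_region` and the exhaustive window scan are illustrative assumptions, not the patent's algorithm.

```python
import numpy as np

def choose_projection_region(occupied: np.ndarray, win_h: int, win_w: int):
    """Return (row, col) of the top-left corner of the emptiest window."""
    h, w = occupied.shape
    best, best_rc = None, (0, 0)
    for r in range(h - win_h + 1):
        for c in range(w - win_w + 1):
            n = occupied[r:r+win_h, c:c+win_w].sum()  # objects in this window
            if best is None or n < best:
                best, best_rc = n, (r, c)
    return best_rc
```

In practice the occupancy grid would be built by projecting the acquired 3-dimensional object positions onto the candidate projection surface.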
SMART GLASSES, AND SYSTEM AND METHOD FOR PROCESSING HAND GESTURE COMMAND THEREFOR
Smart glasses, and a system and method for processing a hand gesture command using the smart glasses. According to an exemplary embodiment, the system includes smart glasses that capture a series of images including a hand gesture of a user and transmit the hand image included in each of the series of images as hand representation data expressed in a predetermined metadata format; and a gesture recognition apparatus that recognizes the hand gesture of the user using the hand representation data of the series of images received from the smart glasses, and generates and transmits a gesture command corresponding to the recognized hand gesture.
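One plausible shape for the "predetermined format of metadata" is a small serializable record of hand landmarks. The schema below is entirely an assumption for illustration; the abstract does not specify field names or encoding.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class HandRepresentation:
    """Hypothetical metadata record sent from the glasses per frame."""
    timestamp_ms: int
    handedness: str                      # "left" or "right" (assumed field)
    joints: list = field(default_factory=list)  # [[x, y, z], ...] normalized

    def to_metadata(self) -> str:
        """Serialize to the JSON payload transmitted to the recognizer."""
        return json.dumps(asdict(self))

    @staticmethod
    def from_metadata(payload: str) -> "HandRepresentation":
        """Reconstruct the record on the gesture recognition apparatus."""
        return HandRepresentation(**json.loads(payload))
```

Transmitting such a compact representation instead of raw images is what lets the recognition run off-device.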
Methods for automatic registration of 3D image data
A method for automatic registration of 3D image data, captured by a 3D image capture system having an RGB camera and a depth camera, includes capturing 2D image data with the RGB camera at a first pose; capturing depth data with the depth camera at the first pose; performing an initial registration of the RGB camera to the depth camera; capturing 2D image data with the RGB camera at a second pose; capturing depth data at the second pose; and calculating an updated registration of the RGB camera to the depth camera.