Patent classifications
H04N13/271
3D imaging apparatus
A 3D imaging apparatus includes: a first image capturing camera generating a base image used to obtain a first range image showing a three-dimensional character of an object; a second image capturing camera generating a reference image used to obtain the first range image; a stereo matching unit searching for corresponding pixels between the base image and the reference image, and generating the first range image by calculating a disparity between the corresponding pixels; and a light source emitting intensity-modulated infrared light to the object. The first image capturing camera further generates a second range image by receiving reflected light in synchronization with the modulated intensity, the reflected light being the infrared light reflected off the object. The second range image includes range information on the range between a point of reflection on the object and the first image capturing camera.
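The stereo matching step described above can be sketched as classic block matching: for each pixel of the base image, search along the same row of the reference image for the horizontal shift that minimizes a sum-of-absolute-differences cost. The window size and disparity range below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def block_match_disparity(base, ref, max_disp=16, win=3):
    """For each pixel in `base`, find the disparity (horizontal shift into
    `ref`) minimizing the sum of absolute differences over a win x win patch."""
    h, w = base.shape
    disp = np.zeros((h, w), dtype=np.int32)
    pad = win // 2
    b = np.pad(base.astype(np.float32), pad, mode="edge")
    r = np.pad(ref.astype(np.float32), pad, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = b[y:y + win, x:x + win]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cand = r[y:y + win, x - d:x - d + win]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Real stereo matchers add cost aggregation, sub-pixel refinement, and left-right consistency checks; this sketch only shows the disparity search that yields the first range image.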
Apparatus and method for providing media services with telepresence
A system that incorporates teachings of the present disclosure may include, for example, obtaining first images captured by a first camera system at a first location associated with a live presentation by a first user, transmitting first video content representative of the first images over a network for presentation by a group of other processors, each at one of a group of other locations associated with corresponding other users, receiving second video content representative of second images associated with each of the other users, and presenting the second video content in a telepresence configuration that simulates each of the other users being present in an audience at the first location. Other embodiments are disclosed.
METHOD AND CAMERA MODULE FOR ACQUIRING DEPTH INFORMATION
Disclosed according to an embodiment is a method for controlling emitted light in a camera module capable of acquiring depth information. More particularly, disclosed is a camera module that controls the delay time of light emitted from each of a plurality of light sources so as to determine the direction of the emitted light. By controlling these delay times, a camera module according to an embodiment can operate with higher performance even at long distances.
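The depth-acquisition principle behind a modulated-light camera module of this kind can be sketched as indirect time-of-flight: depth is recovered from the phase shift between the emitted and received modulated light. The four-sample demodulation below is a textbook scheme, not the patent's own method, and the modulation frequency is an assumption.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(q0, q90, q180, q270, mod_freq_hz):
    """Recover depth (metres) from four phase-stepped correlation samples
    of the received modulated light."""
    # Differences cancel ambient offset; atan2 yields the round-trip phase.
    phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
    # Round-trip distance is c * phase / (2*pi*f); depth is half of that.
    return C * phase / (4 * math.pi * mod_freq_hz)
```

The unambiguous range is c / (2 f); at 20 MHz modulation that is 7.5 m, which is why longer-range operation (as targeted above) typically needs more emitted power or carefully steered emission.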
Focused image generation using depth information from multiple images from multiple sensors
An image processing device including an image sensor array, an image pre-processing unit, a depth information generator, and a focusing unit is provided. The image sensor array takes multiple images of a first object and a second object. The image pre-processing unit processes the images to generate two shift images associated with the two objects. The depth information generator generates depth information, including distance information associated with the first object, according to the two shift images. The focusing unit generates, according to the depth information and the two shift images, a pair of focused images in which the first object is in focus.
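The refocusing step can be sketched as shift-and-sum: given the two shift images and the disparity (from the depth information) of the object to be focused, one image is shifted so that object aligns between the two views, then the images are averaged. Aligned content stays sharp; content at other depths appears doubled. The function names and the circular shift are illustrative simplifications.

```python
import numpy as np

def refocus_pair(img_a, img_b, disparity):
    """Shift img_b horizontally by `disparity` pixels so the target object
    aligns with img_a, then average the two views."""
    shifted = np.roll(img_b, disparity, axis=1)
    return (img_a.astype(np.float32) + shifted.astype(np.float32)) / 2.0
```

For an object whose disparity between the two shift images is exactly `disparity`, the averaged result reproduces that object sharply, while objects at other disparities blur.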
Method and apparatus for providing eye-contact function to multiple points of attendance using stereo image in video conference system
The present invention relates to a method and apparatus for providing a natural eye-contact function to attendees of a video conference using a video conference system. When there are two or more remote attendees at one site, a stereo image and a depth image are used to estimate a precise depth value for the occlusion region and thereby improve the quality of the composite eye-contact image.
INPUT/OUTPUT DEVICE, INPUT/OUTPUT PROGRAM, AND INPUT/OUTPUT METHOD
An object of the present invention is to provide an I/O device, an I/O program, and an I/O method usable even by a user who has strabismus or the like. Another object of the present invention is to provide an I/O device, an I/O program, and an I/O method for adjusting a user's strabismus, eyesight, or the like. In addition, a display device can generate a stereoscopic image, a depth level sensor measures a distance to an object, and a control unit performs display on the display device in accordance with the depth level sensor. A display adjustment mechanism adjusts the angle of the display device.
THREE-DIMENSIONAL DEPTH PERCEPTION APPARATUS AND METHOD
A three-dimensional depth perception apparatus and method, comprising a synchronized trigger module, an MIPI receiving/transmitting module, a multiplexing core computing module, a storage controller module, a memory, and a MUX selecting module. The synchronized trigger module generates a synchronized trigger signal that is transmitted to an image acquiring module; the MIPI receiving/transmitting module supports input/output of MIPI video streams and video streams in other formats; and the multiplexing core computing module selects a monocular structured-light or binocular structured-light depth perception working mode as needed, and includes a pre-processing module, a block matching disparity computing module, a depth computing module, and a depth post-processing module. The apparatus flexibly adopts a monocular or binocular structured-light depth sensing manner as required by the user, so as to conveniently leverage the advantages of the different modes; the MIPI-in, MIPI-out working manner is nearly transparent to the user, making it easy to replace the MIPI camera in an existing system and directly obtain the depth map.
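The depth computing module in such a pipeline typically converts the block-matching disparity into metric depth by triangulation, Z = f · B / d, where f is the focal length in pixels, B the baseline between camera and projector (or between the two cameras) in metres, and d the disparity in pixels. The constants below are illustrative assumptions, not values from the patent.

```python
def disparity_to_depth(disparity_px, focal_px=700.0, baseline_m=0.05):
    """Triangulate metric depth (metres) from a pixel disparity."""
    if disparity_px <= 0:
        return float("inf")  # no match, or point at infinity
    return focal_px * baseline_m / disparity_px
```

Note the hyperbolic relationship: depth resolution degrades quadratically with distance, which is one motivation for selecting between monocular and binocular structured-light modes.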
SYNTHESIS OF TRANSFORMED IMAGE VIEWS
Techniques are provided for synthesis of transformed image views, based on a reference image, using depth information. The transformed image views may simulate a change in position or focal length of a camera that produced the reference image. An example system includes an image transformation circuit configured to transform the reference image corresponding to a first viewpoint, to a transformed image corresponding to a second viewpoint. The system also includes an inverse warping circuit configured to calculate a mapping from the pixels of the transformed image to corresponding pixels of the reference image. The system further includes a hole detection circuit configured to detect holes in the transformed image based on depth discontinuities between the reference and transformed images; and a hole filling circuit configured to in-fill the detected holes using a sampling of selected neighboring pixels from the reference image, to synthesize a view based on the transformed image.
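A toy one-dimensional version of the warp / hole-detect / hole-fill pipeline described above: each reference pixel is shifted by a depth-dependent disparity to simulate a horizontal camera shift, target positions that no source pixel maps to become holes, and holes are in-filled from the nearest filled neighbor. The nearest-neighbor fill is a simplification of the patent's sampling of selected neighboring pixels.

```python
import numpy as np

def synthesize_view(ref, depth, baseline=1.0):
    """Forward-warp a 1-D reference scanline by depth-dependent disparity,
    mark unmapped positions as holes, then in-fill them from neighbors."""
    w = ref.shape[0]
    out = np.full(w, np.nan)
    # Forward warp: nearer pixels (smaller depth) shift further.
    for x in range(w):
        d = int(round(baseline / depth[x]))
        if 0 <= x + d < w:
            out[x + d] = ref[x]
    holes = np.isnan(out)  # depth discontinuities leave gaps here
    # Hole filling: propagate the nearest value from the left, then right.
    for x in range(1, w):
        if np.isnan(out[x]):
            out[x] = out[x - 1]
    for x in range(w - 2, -1, -1):
        if np.isnan(out[x]):
            out[x] = out[x + 1]
    return out, holes
```

A full implementation works on 2-D images, resolves occlusions by depth ordering, and uses the inverse mapping from transformed pixels back to the reference image rather than a pure forward warp; this sketch only illustrates where holes come from and how in-filling closes them.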
METHOD AND SYSTEM FOR RENDERING DOCUMENTS WITH DEPTH CAMERA FOR TELEPRESENCE
A method of sharing documents is provided. The method includes capturing first image data associated with a document, detecting content of the document based on the captured first image data, capturing second image data associated with an object, controlled by a user, that is moved relative to the document, determining a relative position between the document and the object, combining a portion of the second image data with the first image data based on the determined relative position to generate a combined image signal that is displayed, and emphasizing a portion of the content in the displayed combined image signal based on the relative position.
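The combining and emphasizing steps can be sketched as a simple composite: paste the object (e.g. a pointer captured in the second image data) onto the document image at the determined relative position, then brighten the content region indicated by that position. Array shapes, the grayscale representation, and the brightening gain are assumptions for illustration.

```python
import numpy as np

def combine_and_emphasize(doc, obj, pos, emphasis_box, gain=1.5):
    """Overlay `obj` at top-left `pos` = (row, col) on `doc`, then brighten
    the region `emphasis_box` = (r0, r1, c0, c1) of the combined image."""
    out = doc.astype(np.float32).copy()
    r, c = pos
    h, w = obj.shape
    out[r:r + h, c:c + w] = obj          # combine by relative position
    r0, r1, c0, c1 = emphasis_box
    out[r0:r1, c0:c1] = np.clip(out[r0:r1, c0:c1] * gain, 0, 255)
    return out
```

In the described system, the emphasis box would come from mapping the object's relative position onto the detected document content (e.g. the paragraph being pointed at).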