H04N13/271

DEVICE AND METHOD FOR THREE-DIMENSIONAL VIDEO COMMUNICATION
20170295357 · 2017-10-12

A three-dimensional (3D) communication device includes a 3D stereoscopic camera for capturing video and photos in 3D. The device also includes a 3D display, which enables the display of video and photos in 3D that have been captured by a local user's communication device or that have been received from a remote 3D communication device. The 3D communication device may be configured as a handheld standalone device or may alternatively be configured as a modular device, a case, or a dock that can be interfaced with a portable computing device, such as a smart phone, which lacks the capability to capture and view 3D video/photo content. As such, the 3D communication device enables 3D chat or communication between a local user and one or more remote users of the 3D communication device or any other computing device that is configured with suitable communication or chat software.

IMAGE GENERATING APPARATUS AND METHOD FOR GENERATION OF 3D PANORAMA IMAGE
20170293998 · 2017-10-12

Disclosed are an apparatus and a method for generating images used to produce a 3D panorama image. A method for generating a 3D panorama image in an image generating apparatus comprises the steps of: receiving an input of a plurality of 2D images and a plurality of depth maps corresponding to the plurality of 2D images; setting a left-eye image area and a right-eye image area for each of the plurality of 2D images on the basis of the plurality of depth maps; and synthesizing the images of the left-eye image areas set in the plurality of 2D images to generate a left-eye panorama image, and synthesizing the images of the right-eye image areas set in the plurality of 2D images to generate a right-eye panorama image. Accordingly, the image generating apparatus can generate a distortion-free 3D panorama image on the basis of a plurality of 2D images.
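The area-setting and synthesizing steps can be sketched as follows. This is a minimal 1-D illustration, not the patented method: each image is reduced to a row of column values, the disparity model (`shift = k / depth`) and the constant `k` are assumptions, and "synthesizing" is shown as simple concatenation of per-image strips.

```python
def set_eye_areas(image_cols, depth_cols, k=8.0):
    """For each column, derive a disparity from its depth (nearer -> larger
    shift) and pick the source columns for the left- and right-eye strips."""
    n = len(image_cols)
    left, right = [], []
    for x in range(n):
        d = max(1e-6, depth_cols[x])          # guard against zero depth
        shift = int(round(k / d))             # nearer objects shift more
        left.append(image_cols[min(n - 1, x + shift)])
        right.append(image_cols[max(0, x - shift)])
    return left, right

def stitch_panorama(strips):
    """Concatenate the per-image eye strips into one panorama row."""
    pano = []
    for s in strips:
        pano.extend(s)
    return pano
```

Running `set_eye_areas` over every input image and stitching the left strips yields the left-eye panorama; the right strips yield the right-eye panorama.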

Optimized exposure control for improved depth mapping

Disclosed herein are optimized techniques for controlling the exposure time or illumination intensity of a depth sensor. Invalid-depth pixels are identified within a first depth map of an environment. For each invalid-depth pixel, a corresponding image pixel is identified in a depth image that was used to generate the first depth map. Multiple brightness intensities are identified from the depth image. Each brightness intensity is categorized as corresponding to either an overexposed or underexposed image pixel. An increased exposure time or illumination intensity or, alternatively, a decreased exposure time or illumination intensity is then used to capture another depth image of the environment. After a second depth map is generated based on the new depth image, portion(s) of the second depth map are selectively merged with the first depth map by replacing the invalid-depth pixels of the first depth map with corresponding valid-depth pixels of the second depth map.
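The categorize-adjust-merge loop described above can be sketched as follows. This is an illustrative sketch only: the brightness thresholds, the halving/doubling exposure policy, and the `invalid = 0` sentinel are all assumptions, not values from the disclosure.

```python
OVER, UNDER = 250, 10  # hypothetical brightness thresholds (8-bit scale)

def categorize(brightness):
    """Label the image pixel behind an invalid-depth pixel."""
    if brightness >= OVER:
        return "overexposed"
    if brightness <= UNDER:
        return "underexposed"
    return "ok"

def next_exposure(exposure, brightnesses):
    """Decrease exposure if invalid pixels are mostly overexposed,
    otherwise increase it for the next depth-image capture."""
    over = sum(1 for b in brightnesses if categorize(b) == "overexposed")
    under = sum(1 for b in brightnesses if categorize(b) == "underexposed")
    return exposure * 0.5 if over > under else exposure * 2.0

def merge_depth_maps(first, second, invalid=0):
    """Replace invalid-depth pixels of the first map with the corresponding
    valid-depth pixels of the second map."""
    return [s if f == invalid and s != invalid else f
            for f, s in zip(first, second)]
```

The same structure would apply to illumination intensity in place of exposure time.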

Time-of-flight camera and proximity detector

Time-of-flight (ToF) image processing systems and methods for proximity detection are disclosed. Use of a separate proximity detector can be eliminated by using the time-of-flight image processing system disclosed herein, which has two modes: a low-resolution proximity detection mode and a high-resolution imaging mode.

HEAD-MOUNTED DISPLAY DEVICE AND COMPUTER PROGRAM
20170295360 · 2017-10-12

A head-mounted display device includes a deriving section that derives an object distance of a real object included in an outside scene to be imaged, and a relative pose of the real object with respect to an imaging section; a parameter selection section that selects, in accordance with the object distance, one parameter group among a plurality of parameter groups for displaying an AR image associated with the real object on an image display section; and a display image setting section that, on the basis of the object distance, the relative pose of the real object, and the selected parameter group, sets a display image in which the pose of the AR image is associated with the pose of the real object, and displays the display image on the image display section.
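The distance-based selection of a parameter group can be sketched as a simple band lookup. The distance bands and parameter contents below are hypothetical placeholders, not values from the disclosure:

```python
# Hypothetical parameter groups, each calibrated for a distance band (meters).
PARAM_GROUPS = [
    (0.5, {"scale": 1.4, "offset_px": 12}),          # near objects
    (2.0, {"scale": 1.1, "offset_px": 5}),           # mid-range objects
    (float("inf"), {"scale": 1.0, "offset_px": 0}),  # far objects
]

def select_parameter_group(object_distance):
    """Return the first group whose upper distance bound covers the object."""
    for upper, group in PARAM_GROUPS:
        if object_distance <= upper:
            return group
    return PARAM_GROUPS[-1][1]
```

The selected group would then parameterize how the AR image's pose is matched to the real object's pose before display.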

Sensor misalignment compensation

Camera compensation methods and systems that compensate for misalignment of sensors/cameras in stereoscopic camera systems. The compensation includes identifying a pitch angle offset between a first camera and a second camera, determining misalignment of the first and second cameras from the identified pitch angle offset, determining a relative compensation delay responsive to the determined misalignment, introducing the relative compensation delay into the image streams produced by the cameras, and producing a stereoscopic image on a display from the first and second image streams with the introduced delay.
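The pitch-offset-to-delay chain can be sketched as below. The pinhole geometry (`offset = f · tan(pitch)`) and the rolling-shutter line-time model are assumptions used for illustration; the disclosure does not specify these formulas:

```python
import math

def row_offset_px(pitch_deg, focal_px):
    """Vertical pixel offset between the two views caused by a pitch
    misalignment, under a simple pinhole model."""
    return focal_px * math.tan(math.radians(pitch_deg))

def compensation_delay_s(pitch_deg, focal_px, line_time_s):
    """Relative delay to apply to one image stream: the time a rolling
    shutter takes to scan the offset number of rows."""
    return row_offset_px(pitch_deg, focal_px) * line_time_s
```

Delaying one stream by this amount aligns the capture times of corresponding scene rows before the stereoscopic image is produced.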

Figure ground organization of 3-D scenes

Systems and methods for processing a pair of 2-D images are described. In one example, a stereoscopic set of images is converted into a collection of regions that represent individual 3-D objects in the pair of images. In one embodiment, the system recovers the 3-D point P for each point p that appears in both images. It estimates the 3-D orientation of the floor plane, and the image capture planes and their height from the floor. The system then identifies the collection B of points P that do not represent points on the floor and generates a projection C of B onto a plane parallel to the floor. It blurs the projection C and identifies peaks in the blurred image, then fits symmetric figures to the points in C around the identified peaks. The system projects the 3-D figures associated with the symmetric figures back onto the 2-D images.
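The blur-then-peak step on the floor projection C can be sketched in one dimension. A box blur and strict-local-maximum test are stand-ins chosen for illustration; the actual blur kernel and peak criterion are not specified by the abstract:

```python
def blur(grid, passes=1):
    """Box-blur a 1-D occupancy histogram (stand-in for blurring the
    projection C of off-floor points onto a floor-parallel plane)."""
    for _ in range(passes):
        grid = [(grid[max(0, i - 1)] + grid[i] + grid[min(len(grid) - 1, i + 1)]) / 3.0
                for i in range(len(grid))]
    return grid

def find_peaks(grid):
    """Indices that are strict local maxima of the blurred projection;
    each peak seeds the fit of a symmetric figure around it."""
    return [i for i in range(1, len(grid) - 1)
            if grid[i] > grid[i - 1] and grid[i] > grid[i + 1]]
```

In the 2-D case the same idea applies per cell of the projected grid, with each peak marking a candidate object footprint.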

Image capture device

Read electrodes are provided to drain signal charge of pixels from photoelectric conversion units provided in the pixels separately to a vertical transfer unit. During a first exposure period during which an object is illuminated with infrared light, signal charge obtained from a first pixel, and signal charge obtained from a second pixel adjacent to the first pixel, are added together in the vertical transfer unit to produce first signal charge. During a second exposure period during which the object is not illuminated with infrared light, signal charge obtained from the first pixel, and signal charge obtained from the second pixel adjacent to the first pixel, are transferred without being added to the first signal charge in the vertical transfer unit, and are added together in another packet to produce second signal charge.
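Numerically, the two binned exposures support a simple ambient-light subtraction. The sketch below only models the arithmetic of the charge sums; the subtraction step is a common use of such IR-on/IR-off frame pairs and is an assumption here, not a step recited in the abstract:

```python
def binned_signal(p1, p2):
    """Charge of two adjacent pixels added together in the vertical
    transfer unit (or in a separate packet for the second exposure)."""
    return p1 + p2

def ir_component(first_signal, second_signal):
    """IR-only signal: (IR-illuminated frame) minus (ambient-only frame),
    clamped at zero."""
    return max(0, first_signal - second_signal)
```

Here `first_signal` is the binned charge from the infrared-illuminated exposure and `second_signal` the binned charge from the non-illuminated exposure.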

Input parameter based image waves
11671572 · 2023-06-06 · ·

A virtual wave creation system comprises an eyewear device that includes a frame, a temple connected to a lateral side of the frame, and a depth-capturing camera. Execution of programming by a processor configures the virtual wave creation system to generate, for each of multiple initial depth images, a respective warped wave image by applying, to the initial three-dimensional coordinates, a transformation function that is responsive to a selected input parameter. The virtual wave creation system creates a warped wave video including a sequence of the generated warped wave images. The virtual wave creation system presents, via an image display, the warped wave video.
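The transformation function can be sketched as a sinusoidal displacement of the depth image's 3-D coordinates. The sine form, and the use of amplitude/wavelength as the selected input parameters, are illustrative assumptions, not the patented transformation:

```python
import math

def warp_wave(points, amplitude, wavelength, phase=0.0):
    """Displace each (x, y, z) coordinate along z by a sine of x, standing
    in for a transformation function responsive to an input parameter."""
    return [(x, y, z + amplitude * math.sin(2 * math.pi * x / wavelength + phase))
            for (x, y, z) in points]

def wave_video(points, frames, amplitude=1.0, wavelength=4.0):
    """One warped wave image per frame; advancing the phase animates
    the wave across the sequence."""
    return [warp_wave(points, amplitude, wavelength,
                      phase=2 * math.pi * f / frames)
            for f in range(frames)]
```

Rendering each frame's warped coordinates through the image display would produce the warped wave video described above.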