
METHODS, SYSTEMS, AND MEDIA FOR GENERATING AN IMMERSIVE LIGHT FIELD VIDEO WITH A LAYERED MESH REPRESENTATION

Mechanisms for generating compressed images are provided. More particularly, methods, systems, and media for capturing, reconstructing, compressing, and rendering view-dependent immersive light field video with a layered mesh representation are provided.

LASER-ENHANCED VISUAL SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM) FOR MOBILE DEVICES
20170374342 · 2017-12-28 ·

Laser-enhanced visual simultaneous localization and mapping (SLAM) is disclosed. A laser line is generated that is incident on an object and/or environment. While the laser line is incident on the object, one or more images of the object are captured. The camera is localized based on one or more characteristics of the laser line in those images. In some examples, the improved feature localization afforded by the laser line yields more accurate camera localization, which, in turn, improves the accuracy of the stitched mesh of the object or environment. As such, the examples of the disclosure provide for improved camera localization and improved three-dimensional mapping.
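One ingredient of laser-assisted localization is that a projected line with known emitter geometry pins down depth directly. The sketch below (all names and the simple pinhole-plus-baseline model are illustrative assumptions, not taken from the abstract) triangulates the depth of a point on the laser line from its horizontal offset in the image, which acts like a stereo disparity:

```python
def laser_line_depth(pixel_col: float, principal_col: float,
                     focal_px: float, baseline_m: float) -> float:
    """Depth (metres) of a laser-line point imaged at `pixel_col`.

    Assumes a laser emitter displaced `baseline_m` from the camera
    centre, so the line's shift from the principal point behaves
    like a disparity: z = f * b / d.
    """
    disparity = pixel_col - principal_col
    if disparity <= 0:
        raise ValueError("laser point must appear right of the principal point")
    return focal_px * baseline_m / disparity

# A point imaged 50 px from the principal point with f = 500 px and a
# 10 cm baseline lies at 500 * 0.1 / 50 = 1.0 m.
print(laser_line_depth(370.0, 320.0, 500.0, 0.1))  # → 1.0
```

Dense, metrically accurate depth along the line is what tightens the pose estimate relative to feature matching on untextured surfaces.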

ELECTRONIC DEVICE AND CONTROL METHOD THEREOF
20230206551 · 2023-06-29 ·

An electronic device and a control method thereof are provided. The electronic device includes a camera, a camera flash, and at least one processor configured to control the camera to capture a natural light image and a depth image of an object, control the camera and the camera flash to capture an artificial light image of the object, obtain distance information from the depth image to generate a depth mask image, create a cluster mask image from the natural light image, obtain a flash image in which the illuminance of the natural light image has been removed from the illuminance of the artificial light image, obtain an optimization parameter based on the distance information, the depth mask image, the cluster mask image, and the flash image, and obtain three-dimensional topographic information and surface reflection information about the object based on the obtained optimization parameter.
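One step the abstract names, obtaining a flash image by removing the natural-light illuminance from the artificial (flash-lit) image, can be sketched as a pixelwise clamped subtraction. The function name and the clamped-subtraction model are assumptions for illustration; the patented method may differ:

```python
def flash_image(artificial, natural):
    """Pixelwise max(artificial - natural, 0) over two equal-size
    intensity images given as nested lists (row-major)."""
    return [[max(a - n, 0) for a, n in zip(row_a, row_n)]
            for row_a, row_n in zip(artificial, natural)]

art = [[200, 120], [90, 255]]
nat = [[ 50, 130], [40, 100]]
print(flash_image(art, nat))  # → [[150, 0], [50, 155]]
```

The clamp at zero guards against sensor noise making a flash-lit pixel darker than its ambient counterpart.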

AUTO KEYSTONE CORRECTION AND AUTO FOCUS ADJUSTMENT
20170374331 · 2017-12-28 ·

An apparatus and method are described for performing automatic keystone correction and automatic focus correction in a system. In one embodiment, the method comprises analyzing an image projected on a projection surface by a projector of a device and captured by one or more cameras of the device to determine whether the shape of the image indicates that keystone correction is needed, and adjusting the display output of the projector to cause the display output to be rectangular on the projection surface.
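The "is correction needed" decision can be illustrated by checking how far the detected projection quadrilateral deviates from a rectangle; a real system would then pre-warp the display output with the inverse homography. The function, tolerance, and corner convention below are assumptions for illustration:

```python
import math

def needs_keystone_correction(corners, tol_deg=2.0):
    """corners: four (x, y) points in order TL, TR, BR, BL.
    True if any interior angle deviates from 90 degrees by > tol_deg."""
    def angle_at(i):
        (ax, ay), (bx, by), (cx, cy) = (corners[i - 1], corners[i],
                                        corners[(i + 1) % 4])
        v1 = (ax - bx, ay - by)
        v2 = (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
    return any(abs(angle_at(i) - 90.0) > tol_deg for i in range(4))

square = [(0, 0), (100, 0), (100, 100), (0, 100)]
trapezoid = [(0, 0), (100, 0), (90, 100), (10, 100)]
print(needs_keystone_correction(square))     # → False
print(needs_keystone_correction(trapezoid))  # → True
```

A trapezoidal outline is the classic signature of a projector tilted relative to the surface.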

ARTIFICIAL PANORAMA IMAGE PRODUCTION AND IN-PAINTING FOR OCCLUDED AREAS IN IMAGES
20230209035 · 2023-06-29 ·

A system includes a three-dimensional (3D) scanner, a camera with a viewpoint that is different from a viewpoint of the 3D scanner, and one or more processors coupled with the 3D scanner and the camera. The processors access a point cloud from the 3D scanner and one or more images from the camera; the point cloud comprises a plurality of 3D scan-points, each representing a distance of a point in a surrounding environment from the 3D scanner, and each image comprises a plurality of pixels, each representing a color of a point in the surrounding environment. The processors generate, using the point cloud and the one or more images, an artificial image that represents a portion of the surrounding environment viewed from an arbitrary position in an arbitrary direction, wherein generating the artificial image comprises colorizing each pixel in the artificial image.
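The colorization step can be sketched as projecting each 3D scan-point through a pinhole camera and sampling the pixel it lands on. The names and geometry below are assumptions; occlusion handling and the in-painting of occluded areas that the abstract also covers are omitted:

```python
def project_point(point, focal_px, cx, cy):
    """Pinhole projection of a camera-frame 3D point to pixel coords."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera
    return (int(round(focal_px * x / z + cx)),
            int(round(focal_px * y / z + cy)))

def colorize(points, image, focal_px, cx, cy):
    """Return (point, color) pairs for points projecting inside
    `image` (a nested list of colors, row-major)."""
    h, w = len(image), len(image[0])
    out = []
    for p in points:
        uv = project_point(p, focal_px, cx, cy)
        if uv and 0 <= uv[0] < w and 0 <= uv[1] < h:
            out.append((p, image[uv[1]][uv[0]]))
    return out

img = [["red", "green"], ["blue", "white"]]
pts = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
print(colorize(pts, img, 1.0, 0, 0))  # point on the optical axis → "red"
```

Because the camera and scanner viewpoints differ, some scan-points project onto pixels the camera never saw; those are the occluded areas the in-painting stage fills.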

CONTROLLABLE LASER PATTERN FOR EYE SAFETY AND REDUCED POWER CONSUMPTION FOR IMAGE CAPTURE DEVICES

A system and method are described herein for controlling optical power provided to one or more laser projectors of an image capture device, such as a stereo camera, for improving depth image acquisition and quality while adhering to eye safety standards based on laser emissions. The system includes one or more laser projectors operable to emit a laser dot pattern onto a scene, and image capture devices operable to capture images from the scene including the laser dot pattern. The images are analyzed to acquire depth information for objects in the scene, and the depth information is used to modulate the optical power of the laser projectors based on the distance of the object from the image capture device.
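A hedged sketch of the power-modulation idea: scale laser power with the square of object distance (so received irradiance stays roughly constant) while clamping to an eye-safety ceiling. The function name, the inverse-square model, and the specific milliwatt values are all illustrative assumptions:

```python
def laser_power_mw(distance_m, base_power_mw=10.0,
                   ref_distance_m=1.0, safety_cap_mw=50.0):
    """Power needed at `distance_m`, relative to `base_power_mw` at
    `ref_distance_m`, clamped to the eye-safety cap."""
    scaled = base_power_mw * (distance_m / ref_distance_m) ** 2
    return min(scaled, safety_cap_mw)

print(laser_power_mw(0.5))  # nearby object → 2.5 mW
print(laser_power_mw(5.0))  # distant object, clamped → 50.0 mW
```

Backing power off for close objects is what simultaneously saves energy and keeps emissions within safety limits when a person approaches the device.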

PRESENTATION OF SCENES FOR BINOCULAR RIVALRY PERCEPTION
20170372517 · 2017-12-28 ·

Embodiments herein relate to the display of enhanced stereographic imagery in augmented or virtual reality. In various embodiments, an apparatus to display enhanced stereographic imagery may include one or more processors, an image generation module to generate an enhanced stereoscopic image of a scene having a first two-dimensional (2D) image of the scene and a second 2D image of the same scene that is visually or optically different than the first 2D image to create binocular rivalry perception of the scene when the first and second 2D images are respectively presented to a first and a second eye of a user, and a display module to display the enhanced stereoscopic image to the user, with the first 2D image presented to the first eye of the user and the second 2D image presented to the second eye of the user. Other embodiments may be described and/or claimed.
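A minimal sketch of producing such a pair: present the unmodified scene image to one eye and an optically altered copy to the other, so the two eyes receive conflicting views of the same scene. The intensity inversion used here is one assumed alteration for illustration, not the patent's specific technique:

```python
def rivalrous_pair(image, max_val=255):
    """Return (left, right) where right is an intensity-inverted copy
    of left (nested list of 0..max_val intensities)."""
    right = [[max_val - px for px in row] for row in image]
    return image, right

left, right = rivalrous_pair([[0, 128], [255, 64]])
print(right)  # → [[255, 127], [0, 191]]
```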

SYSTEMS AND METHODS FOR SCANNING THREE-DIMENSIONAL OBJECTS

A method for computing a three-dimensional (3D) model of an object includes: receiving, by a processor, a first chunk including a 3D model of a first portion of the object, the first chunk being generated from a plurality of depth images of the first portion of the object; receiving, by the processor, a second chunk including a 3D model of a second portion of the object, the second chunk being generated from a plurality of depth images of the second portion of the object; computing, by the processor, a registration of the first chunk with the second chunk, the registration corresponding to a transformation aligning corresponding portions of the first and second chunks; aligning, by the processor, the first chunk with the second chunk in accordance with the registration; and outputting, by the processor, a 3D model corresponding to the first chunk merged with the second chunk.
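The registration step can be sketched in a deliberately simplified form: with point correspondences given by index, the least-squares translation aligning chunk two onto chunk one is the difference of centroids. A full rigid registration (e.g. ICP with an SVD-derived rotation) adds a rotational part; that, and everything named below, is an assumption for illustration:

```python
def centroid(points):
    """Centroid of a list of 3D points given as (x, y, z) tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def register_translation(chunk_a, chunk_b):
    """Translation t such that chunk_b + t best matches chunk_a in the
    least-squares sense, correspondences given by index."""
    ca, cb = centroid(chunk_a), centroid(chunk_b)
    return tuple(a - b for a, b in zip(ca, cb))

a = [(1.0, 0.0, 0.0), (2.0, 1.0, 0.0)]
b = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(register_translation(a, b))  # → (1.0, 0.0, 0.0)
```

Applying the recovered transformation to the second chunk brings overlapping surface regions into agreement before the two chunks are merged into one model.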

DEPTH IMAGE PROVISION APPARATUS AND METHOD
20170374352 · 2017-12-28 ·

Apparatuses, methods and storage media for providing a depth image of an object are described. In some embodiments, the apparatus may include a projector to perform a controlled motion, to project a light pattern on different portions of the scene at different time instances, and an imaging device coupled with the projector, to generate pairs of images (a first image of a pair from a first perspective, and a second image of the pair from a second perspective), of different portions of the scene in response to the projection of the light pattern on respective portions. The apparatus may include a processor coupled with the projector and the imaging device, to control the motion of the projector, and generate the depth image of the object in the scene, based on processing of the generated pairs of images of the portions of the scene. Other embodiments may be described and claimed.
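Once the pattern-assisted matching yields a per-pixel disparity between the two perspectives, depth follows from the standard rectified-stereo relation z = f * b / d. The sketch below assumes that model and abstracts the block matching away into a given disparity map; all names are illustrative:

```python
def depth_from_disparity(disparity, focal_px, baseline_m):
    """Per-pixel depth map (metres) from a disparity map in pixels;
    zero disparity maps to infinity."""
    return [[focal_px * baseline_m / d if d > 0 else float("inf")
             for d in row] for row in disparity]

disp = [[40.0, 0.0], [80.0, 20.0]]
print(depth_from_disparity(disp, 400.0, 0.05))
# → [[0.5, inf], [0.25, 1.0]]
```

The projected light pattern matters precisely where this computation would otherwise fail: it gives textureless surfaces enough structure for the two perspectives to be matched at all.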