Patent classifications
H04N13/271
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
A free viewpoint video generation unit (24) (generation unit) of an information processing apparatus (10a) generates a free viewpoint video (J) for viewing a 3D model (90M) (3D object), superimposed on background information (92), from an arbitrary viewpoint position. A shadow application unit (27) then generates the shadow (94) that the light source casts on the 3D model (90M), according to the viewpoint position, based on light source information (93) indicating the position of the light source associated with the background information (92) and the direction of the light beam it emits, on depth information (D) (three-dimensional information) of the 3D model (90M), and on the viewpoint position, and applies the generated shadow to the free viewpoint video (J).
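The shadow step above can be sketched as a geometric projection. The following is a minimal illustration, assuming a directional light and a flat ground plane at z = 0; the function name and these simplifications are illustrative assumptions, not the patent's method, which works from per-pixel depth information:

```python
import numpy as np

def project_shadow(points, light_dir):
    """Project each 3D point along light_dir onto the ground plane z = 0
    to obtain the planar shadow footprint (hypothetical helper)."""
    points = np.asarray(points, dtype=float)
    d = np.asarray(light_dir, dtype=float)
    if d[2] >= 0:
        raise ValueError("light must point downward (negative z) to cast a ground shadow")
    # Solve p + t*d such that (p + t*d).z == 0  =>  t = -p.z / d.z
    t = -points[:, 2] / d[2]
    return points + t[:, None] * d

# Example: a point 2 m above the ground, lit at 45 degrees from the side.
shadow = project_shadow([[0.0, 0.0, 2.0]], [1.0, 0.0, -1.0])
# The shadow point lands on the ground plane, displaced toward +x.
```

A real renderer would repeat this per depth sample and re-evaluate the projection whenever the viewpoint position changes, as the abstract describes.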
TOF CAMERA
A time of flight (ToF) camera according to embodiments of the present invention includes: a light source unit including an infrared light-emitting device array and configured to generate a light signal; a lens unit disposed on the light source unit and including a plurality of lenses; and an adjustment unit configured to adjust the lens unit such that the light pattern of the light signal that has passed through the lens unit becomes either surface lighting or spot lighting comprising a plurality of spot patterns. The lens unit has barrel-distortion aberration, in which the irradiance of the light pattern decreases with distance from its central portion.
ASYMMETRIC COMMUNICATION SYSTEM WITH VIEWER POSITION INDICATIONS
Communication methods, systems and computer program products (“software”) (1) facilitate virtual immersion of one or more remote viewing participants into a captured scene, which may include any number of physically present participants, and (2) provide indications of the remote viewing participants that the physically present participants can see or discern.
Method and apparatus for obtaining binocular panoramic image, and storage medium
A method for obtaining a binocular panoramic image is performed at an electronic apparatus and includes: obtaining first and second panoramic images acquired by two panoramic cameras; obtaining at least one group of a first pixel located in the first panoramic image and a second pixel located in the second panoramic image; calculating the distance between the first pixel and the second pixel in each group, and obtaining depth information corresponding to the two panoramic images according to that distance; converting the first and second panoramic images into a first and a second monocular panoramic image, respectively, in accordance with the corresponding depth information and a preset pupil distance between a first eye and a second eye; and combining a display of the first monocular panoramic image and the second monocular panoramic image in corresponding display regions.
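The depth-recovery and per-eye conversion steps can be sketched with the standard stereo relations. This is a hedged illustration: the function names, and the assumption that a pinhole model approximates the panoramic geometry locally, are not from the patent:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo relation: depth = f * B / disparity.
    Assumes a locally pinhole-like projection (illustrative only)."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    # Clamp near-zero disparities to avoid division by zero.
    return focal_px * baseline_m / np.maximum(disparity_px, 1e-6)

def eye_shift_px(depth_m, focal_px, pupil_distance_m):
    """Horizontal pixel shift applied to each eye's view for a given depth,
    using half the preset pupil distance per eye."""
    return focal_px * (pupil_distance_m / 2.0) / depth_m

# Example: a 10 px pixel distance between matched pixels, 700 px focal
# length, 0.1 m camera baseline.
depth = depth_from_disparity([10.0], focal_px=700.0, baseline_m=0.1)
```

The per-group distances from the matched pixel pairs play the role of `disparity_px` here; the resulting depth then drives how far each pixel is shifted in the two monocular panoramic images.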
Method and apparatus for generating three-dimensional (3D) road model
A method for generating a three-dimensional (3D) lane model, the method including: calculating a free space indicating a driving-allowed area based on a driving image captured by a vehicle camera; generating a dominant plane indicating plane information of the road based on either or both of depth information of the free space and a depth map corresponding to the area in front of the vehicle; and generating a 3D short-distance road model based on the dominant plane.
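One common way to obtain a "dominant plane" from depth data is RANSAC plane fitting. The sketch below is an assumption for illustration; the patent does not specify the estimator, and the function name and thresholds are hypothetical:

```python
import random
import numpy as np

def fit_dominant_plane(points, iters=200, thresh=0.05, seed=0):
    """Fit the plane n.x + d = 0 with the most inliers among `points`
    (an (N, 3) array of depth samples), via simple RANSAC."""
    rng = random.Random(seed)
    pts = np.asarray(points, dtype=float)
    best_n, best_d, best_count = None, None, -1
    for _ in range(iters):
        # Sample three points and form their plane.
        a, b, c = pts[rng.sample(range(len(pts)), 3)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n /= norm
        d = -n.dot(a)
        # Count points within `thresh` of the candidate plane.
        count = int(np.sum(np.abs(pts @ n + d) < thresh))
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d
```

For a road-ahead depth map, the largest planar consensus set is typically the road surface, which is what makes this a plausible stand-in for the dominant-plane step.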
System and method for rendering free viewpoint video for studio applications
Systems and methods for foreground/background separation and for studio production of a free viewpoint video (FVV). A method includes: projecting, onto objects in a filming area within a studio, a predefined pattern including a large set of features; generating, based on signals reflected off the objects and captured by a plurality of depth cameras deployed in proximity to the filming area, a local point cloud for each depth camera; separating the background from the foreground of the filming area based on the local point clouds; creating a unified point cloud based on the local point clouds; meshing points in the unified point cloud to generate a 3D model of the objects; texturing the 3D model based on the separation and on images captured by the depth cameras; and rendering the textured 3D model as an FVV including a series of video frames with respect to at least one viewpoint.
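The "unified point cloud" step amounts to transforming each camera's local cloud into a shared world frame and merging the results. A minimal sketch, assuming known camera extrinsics `(R, t)` and a voxel-grid deduplication (all names and the dedup strategy are illustrative, not the patent's):

```python
import numpy as np

def unify_point_clouds(clouds, poses, voxel=0.01):
    """clouds: list of (N_i, 3) arrays, each in its camera's local frame.
    poses: list of (R, t) pairs mapping camera frame -> world frame.
    Returns one merged cloud with at most one point per `voxel`-sized cell."""
    world = [np.asarray(c) @ np.asarray(R).T + np.asarray(t)
             for c, (R, t) in zip(clouds, poses)]
    pts = np.vstack(world)
    # Voxel-grid deduplication: keep the first point in each occupied voxel.
    keys = np.floor(pts / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[np.sort(idx)]
```

The unified cloud is then what gets meshed into the 3D model; deduplication keeps overlapping camera coverage from inflating the mesh input.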
Light field imaging system by projecting near-infrared spot in remote sensing based on multifocal microlens array
The present disclosure provides a light field imaging system that projects near-infrared spots for remote sensing, based on a multifocal microlens array. The light field imaging system includes a near-infrared spot projection apparatus (100) and a light field imaging component (200). The near-infrared spot projection apparatus (100) is configured to scatter near-infrared spots onto an object to be observed, adding texture information to the target image, and the light field imaging component (200) is configured to image the target scene light rays carrying the added texture information. The present disclosure can extend the target depth-of-field (DOF) detection range and, in particular, reconstruct the surface of a weak-texture object.