Patent classifications
H04N13/271
Detection and ranging based on a single monoscopic frame
One or more stereoscopic images are generated based on a single monoscopic image that may be obtained from a camera sensor. Each stereoscopic image includes a first digital image and a second digital image that, when viewed using any suitable stereoscopic viewing technique, produce a three-dimensional effect for a viewer or software program with respect to the depicted elements. The monoscopic image may depict a geographic setting of a particular geographic location, and the resulting stereoscopic image may provide a three-dimensional (3D) rendering of that setting. Use of the stereoscopic image helps a system obtain more accurate detection and ranging capabilities. The stereoscopic image may be any configuration of the first digital image (monoscopic) and the second digital image (monoscopic) that together generate a 3D effect as perceived by a viewer or software program.
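The abstract does not specify how the second view is synthesized; one common approach is depth-image-based rendering, where pixels are shifted horizontally by a depth-derived disparity. A minimal sketch, assuming a per-pixel depth estimate for the monoscopic frame is available (all names and parameters are illustrative, not from the patent):

```python
import numpy as np

def synthesize_stereo_pair(image, depth, max_disparity=16):
    """Form a second (right-eye) view from a single monoscopic image
    by shifting pixels horizontally according to estimated depth.

    image: (H, W, 3) array; depth: (H, W) array in [0, 1], 1 = near.
    Occluded regions are left as zeros (hole filling omitted).
    """
    h, w = depth.shape
    disparity = (depth * max_disparity).astype(int)  # nearer pixels shift more
    right = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        target = np.clip(cols - disparity[y], 0, w - 1)
        right[y, target] = image[y, cols]
    # The left view is the original monoscopic frame; (left, right) is the pair.
    return image, right
```

In practice the holes left behind by shifted foreground pixels would be inpainted, and disparity would follow the camera's focal length and assumed baseline rather than a fixed maximum.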
MULTI-DIMENSIONAL DATA CAPTURE OF AN ENVIRONMENT USING PLURAL DEVICES
Embodiments of the invention describe apparatuses, systems, and methods related to data capture of objects and/or an environment. In one embodiment, a user can capture time-indexed three-dimensional (3D) depth data using one or more portable data capture devices that can capture time-indexed color images of a scene together with depth information and location and orientation data. In addition, the data capture devices may be configured to capture a spherical view of the environment around the data capture device.
THREE-DIMENSIONAL CONTOURED SCANNING PHOTOACOUSTIC IMAGING AND VIRTUAL STAINING
Methods, devices, apparatus, and systems for three-dimensional (3D) contoured scanning photoacoustic imaging and/or deep learning virtual staining.
Solid-state imaging device and electronic camera
A solid-state imaging device includes a second image sensor having an organic photoelectric conversion film that transmits specific light, and a first image sensor which is stacked in layers on the same semiconductor substrate as the second image sensor and which receives the specific light after it has passed through the second image sensor, in which a pixel for focus detection is provided in the second image sensor or the first image sensor. Therefore, an autofocus (AF) method can be realized independently of the pixels used for imaging.
Methods, systems, and media for generating and rendering immersive video content
Methods, systems, and media for generating and rendering immersive video content are provided. In some embodiments, the method comprises: receiving information indicating positions of cameras in a plurality of cameras; generating a mesh on which video content is to be projected based on the positions of the cameras in the plurality of cameras, wherein the mesh comprises a portion of a faceted cylinder, and wherein the faceted cylinder has a plurality of facets each corresponding to a projection from a camera in the plurality of cameras; receiving video content corresponding to the plurality of cameras; and transmitting the video content and the generated mesh to a user device in response to receiving a request for the video content from the user device.
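The geometry described above, one planar facet per camera arranged around a cylinder, can be sketched as follows. This is an illustrative construction under the assumption that cameras are evenly spaced around the cylinder axis; the function name and parameters are not from the patent:

```python
import math

def faceted_cylinder_facets(num_cameras, radius=1.0, height=1.0):
    """Build one planar quad facet per camera on a faceted cylinder.

    Each facet spans the angular wedge centered on its camera's viewing
    direction, so each camera's video projects onto exactly one facet.
    Returns a list of quads, each a list of four (x, y, z) vertices.
    """
    facets = []
    step = 2 * math.pi / num_cameras
    for i in range(num_cameras):
        a0 = i * step - step / 2  # wedge start angle
        a1 = i * step + step / 2  # wedge end angle
        x0, z0 = radius * math.cos(a0), radius * math.sin(a0)
        x1, z1 = radius * math.cos(a1), radius * math.sin(a1)
        facets.append([
            (x0, 0.0, z0), (x1, 0.0, z1),        # bottom edge
            (x1, height, z1), (x0, height, z0),  # top edge
        ])
    return facets
```

A renderer would texture each quad with the corresponding camera's video stream; using flat facets rather than a smooth cylinder keeps each camera's projection a simple planar mapping.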
Active stereo depth prediction based on coarse matching
An electronic device estimates a depth map of an environment by matching reduced-resolution stereo images captured by depth cameras. The electronic device downsamples the captured images and matches sections of the reduced-resolution images to each other to generate a coarse disparity (depth) map. It then upsamples the coarse depth map to a higher resolution and refines the upsampled map to generate a high-resolution depth map that supports location-based functionality.