Patent classifications
H04N13/271
Systems and methods for self-supervised depth estimation according to an arbitrary camera
Systems, methods, and other embodiments described herein relate to improving depth estimates for monocular images using a neural camera model that is independent of camera type. In one embodiment, a method includes receiving a monocular image from a pair of training images derived from a monocular video. The method includes generating, using a ray surface network, a ray surface that approximates image characteristics of the monocular image as produced by a camera having the camera type. The method includes creating a synthesized image according to at least the ray surface and a depth map associated with the monocular image.
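The core geometric step behind this kind of self-supervised training can be sketched as follows: each pixel's predicted depth scales that pixel's ray direction (here a fixed ray grid stands in for the patent's learned ray surface network), the lifted 3D points are transported by the camera motion between the training pair, and the transported points drive view synthesis. All shapes and the identity-rotation pose below are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def lift_to_3d(depth, rays):
    """Lift pixels to 3D: each pixel's depth scales its unit ray direction."""
    return depth[..., None] * rays  # (H, W, 3)

def transform_points(points, R, t):
    """Apply the rigid camera motion between the two training frames."""
    return points @ R.T + t

# Toy stand-in for a learned ray surface: every ray points along +z.
H, W = 2, 2
rays = np.zeros((H, W, 3))
rays[..., 2] = 1.0
depth = np.full((H, W), 5.0)           # predicted depth map (metres, assumed)

pts = lift_to_3d(depth, rays)          # 3D points in the source frame
moved = transform_points(pts, np.eye(3), np.array([0.0, 0.0, -1.0]))
```

Projecting `moved` through the second view's ray surface and comparing the result against the real second image would supply the photometric training signal.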
User interface for camera effects
The present disclosure generally relates to user interfaces. In some examples, the electronic device transitions between user interfaces for capturing photos based on data received from a first camera and a second camera. In some examples, the electronic device provides enhanced zooming capabilities that produce visually pleasing results for a displayed digital viewfinder and for captured videos. In some examples, the electronic device provides user interfaces for transitioning a digital viewfinder from a first camera with an applied digital zoom to a second camera with no digital zoom. In some examples, the electronic device prepares to capture media at various magnification levels. In some examples, the electronic device provides enhanced capabilities for navigating through a plurality of values.
Camera
A camera according to an embodiment of the present invention comprises: a light-emitting module configured to output output light according to a set control mode; a light-receiving module configured to receive input light corresponding to the output light according to the control mode; and a control module configured to detect at least one of presence of a subject and a distance from the subject on the basis of the input light, reset the control mode according to a detection result, control an output of the light-emitting module and an input of the light-receiving module according to the reset control mode, and generate a depth map for the subject on the basis of the input light which is input according to the reset control mode.
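The detect-then-reset control flow can be sketched as a simple mode selector: the detection result (subject presence and distance) picks the control mode under which the next depth map is generated. The mode names and distance thresholds below are purely illustrative assumptions; the patent does not specify them.

```python
def choose_mode(distance_m):
    """Reset the control mode from the detection result (thresholds assumed).

    None means no subject was detected from the input light.
    """
    if distance_m is None:
        return "idle"                     # no subject: keep emission low
    return "near" if distance_m < 1.0 else "far"

def capture_depth(distance_m):
    """Drive emission/reception under the reset mode (hypothetical settings)."""
    mode = choose_mode(distance_m)
    power = {"idle": 0.0, "near": 0.3, "far": 1.0}[mode]  # emission power
    return {"mode": mode, "power": power}
```

A usage example: `capture_depth(0.5)` selects the near mode with reduced emission power, while `capture_depth(3.0)` selects the far mode at full power.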
METHODS, SYSTEMS, AND MEDIA FOR GENERATING AND RENDERING IMMERSIVE VIDEO CONTENT
Methods, systems, and media for generating and rendering immersive video content are provided. In some embodiments, the method comprises: receiving information indicating positions of cameras in a plurality of cameras; generating a mesh on which video content is to be projected based on the positions of the cameras in the plurality of cameras, wherein the mesh comprises a portion of a faceted cylinder, and wherein the faceted cylinder has a plurality of facets each corresponding to a projection from a camera in the plurality of cameras; receiving video content corresponding to the plurality of cameras; and transmitting the video content and the generated mesh to a user device in response to receiving a request for the video content from the user device.
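The one-facet-per-camera construction can be sketched directly: for N cameras assumed to be evenly spaced around a circle, each camera contributes one quad facet of the cylinder onto which its video is projected. The even spacing, radius, and height are illustrative assumptions; the patent derives facet placement from the reported camera positions.

```python
import math

def faceted_cylinder(num_cameras, radius=1.0, height=1.0):
    """Build one quad facet per camera around a cylinder (assumed even layout).

    Each facet is four (x, y, z) vertices: bottom edge then top edge.
    """
    facets = []
    for i in range(num_cameras):
        a0 = 2 * math.pi * i / num_cameras        # facet's start angle
        a1 = 2 * math.pi * (i + 1) / num_cameras  # facet's end angle
        p0 = (radius * math.cos(a0), radius * math.sin(a0))
        p1 = (radius * math.cos(a1), radius * math.sin(a1))
        facets.append([(p0[0], p0[1], 0.0), (p1[0], p1[1], 0.0),
                       (p1[0], p1[1], height), (p0[0], p0[1], height)])
    return facets

mesh = faceted_cylinder(6)   # six cameras -> six projection facets
```

The server would transmit this mesh alongside the per-camera video streams, letting the user device texture each facet with the corresponding camera's frames.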
ACTIVE STEREO DEPTH PREDICTION BASED ON COARSE MATCHING
An electronic device estimates a depth map of an environment based on matching reduced-resolution stereo depth images captured by depth cameras to generate a coarse disparity (depth) map. The electronic device downsamples depth images captured by the depth cameras and matches sections of the reduced-resolution images to each other to generate a coarse depth map. The electronic device upsamples the coarse depth map to a higher resolution and refines the upsampled depth map to generate a high-resolution depth map to support location-based functionality.
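The coarse-to-fine pipeline (downsample, match, upsample) can be sketched with a brute-force scanline matcher. The sum-of-absolute-differences cost, the nearest-neighbour upsampling, and the toy images are illustrative assumptions; the refinement stage that sharpens the upsampled map is omitted.

```python
import numpy as np

def downsample(img, factor=2):
    """Reduce resolution before matching to shrink the disparity search."""
    return img[::factor, ::factor]

def coarse_disparity(left, right, max_disp=4):
    """Per-pixel disparity by brute-force scanline matching (absolute-difference cost)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            costs = [abs(int(left[y, x]) - int(right[y, x - d]))
                     if x - d >= 0 else 10**9            # out of bounds
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def upsample(disp, factor=2):
    """Nearest-neighbour upsample; disparity values scale with resolution."""
    return np.kron(disp, np.ones((factor, factor), dtype=int)) * factor

# Toy pair: the right image equals the left shifted by 2 pixels (disparity 2).
left = np.tile(np.arange(8) * 10, (4, 1))
right = left + 20
coarse = coarse_disparity(downsample(left), downsample(right))
full = upsample(coarse)   # back at full resolution, ready for refinement
```

Matching at half resolution quarters the pixel count and halves the disparity range, which is the source of the method's savings; the refinement pass then recovers the detail lost to nearest-neighbour upsampling.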
APPARATUS AND METHOD FOR FOCAL LENGTH ADJUSTMENT AND DEPTH MAP DETERMINATION
A method for focal length adjustment includes capturing scene images of a scene using a first imaging device and a second imaging device of an imaging mechanism, determining a distance between an object of interest in the scene and the imaging mechanism based on the scene images of the scene, and automatically adjusting a focal length of the imaging mechanism according to the distance.
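The two computations this method chains together can be sketched with standard optics formulas: stereo triangulation recovers the object distance from the two devices' images, and the thin-lens equation then gives the lens adjustment that brings that distance into focus. A pinhole/thin-lens model and the numeric values are assumptions; the patent does not commit to a particular camera model.

```python
def distance_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation under a pinhole model: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def image_distance(f_m, object_m):
    """Thin-lens equation 1/f = 1/do + 1/di, solved for di = f*do/(do - f)."""
    return f_m * object_m / (object_m - f_m)

# A 35-pixel disparity with a 700 px focal length and 10 cm baseline
# puts the object of interest 2 m away; a 50 mm lens must then sit
# about 51.3 mm from the sensor to focus there.
z = distance_from_disparity(700, 0.1, 35)
di = image_distance(0.05, z)
```

In this sketch the "focal length adjustment" is the change in lens-to-sensor distance `di`; a zoom mechanism could equally use `z` to pick a focal length from a calibration table.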