H04N13/271

ELECTRONIC DEVICE APPLYING BOKEH EFFECT TO IMAGE AND OPERATING METHOD THEREOF

According to certain embodiments, an electronic device comprises: a motion sensor; a first camera module including a lens assembly and a driving circuit configured to move the lens assembly in a direction substantially perpendicular to an optical axis, the first camera module having a first angle of view when the lens assembly is positioned at a reference position; a second camera module having a second angle of view, wherein the first angle of view lies entirely within the second angle of view; and at least one processor electrically connected to the motion sensor, the first camera module, and the second camera module, wherein the at least one processor is configured to: control the driving circuit to move the lens assembly based on motion data received from the motion sensor, thereby causing the first camera module to have a third angle of view offset from the first angle of view by an angle; acquire, from the first camera module, a first image corresponding to the third angle of view; acquire, from the second camera module, a second image corresponding to the second angle of view; acquire depth information for the first image based on the second image and the motion data; and apply a bokeh effect to the first image based on the depth information.
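The claimed pipeline ends by applying a bokeh effect to the first image using the acquired depth information. As a minimal sketch of that last step only (not the patent's implementation), the function below keeps pixels near a hypothetical focus depth sharp and replaces the rest with a box-blurred version; `focus_depth`, `depth_tolerance`, and `kernel` are illustrative parameters, and a single-channel image is assumed.

```python
import numpy as np

def apply_bokeh(image, depth, focus_depth, depth_tolerance=0.1, kernel=5):
    """Sketch: blur pixels whose depth is outside the in-focus band."""
    # Box-blur the whole image once with a simple mean filter (edge padding).
    pad = kernel // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    blurred = np.zeros_like(image, dtype=float)
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= kernel * kernel
    # Keep in-focus pixels sharp; replace out-of-focus pixels with the blur.
    in_focus = np.abs(depth - focus_depth) <= depth_tolerance
    return np.where(in_focus, image.astype(float), blurred)
```

A production system would use a depth-dependent blur radius rather than one fixed kernel, so that farther-out-of-focus regions blur more strongly.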

WAFER LEVEL OPTICS FOR FOLDED OPTIC PASSIVE DEPTH SENSING SYSTEM
20170359568 · 2017-12-14

Certain aspects relate to wafer-level optical designs for a folded optic stereoscopic imaging system. One example folded optical path includes first and second reflective surfaces defining first, second, and third optical axes, where the first reflective surface redirects light from the first optical axis to the second optical axis and the second reflective surface redirects light from the second optical axis to the third optical axis. Such an example folded optical path further includes wafer-level optical stacks providing ten lens surfaces distributed along the first and second optical axes. A variation on the example folded optical path includes a prism having the first reflective surface, wherein plastic lenses are formed in or secured to the input and output surfaces of the prism in place of two of the wafer-level optical stacks.

FOLDED OPTIC PASSIVE DEPTH SENSING SYSTEM

Certain aspects relate to systems and techniques for folded optic stereoscopic imaging, wherein a number of folded optic paths each direct a different one of a corresponding number of stereoscopic images toward a portion of a single image sensor. Each folded optic path can include a set of optics including a first light folding surface positioned to receive light propagating from a scene along a first optical axis and redirect the light along a second optical axis, a second light folding surface positioned to redirect the light from the second optical axis to a third optical axis, and lens elements positioned along at least the first and second optical axes, including a first subset having telescopic optical characteristics and a second subset lengthening the optical path. The sensor can be a three-dimensionally stacked assembly of a backside-illuminated sensor wafer and a reconfigurable instruction cell array processing wafer that performs depth processing.
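Once the folded optic paths place the stereoscopic images on the shared sensor, depth can be recovered by triangulation. The patent leaves the specifics to the stacked processing wafer; the sketch below is only the standard rectified-stereo relation Z = f·B/d, assuming focal length in pixels, baseline in meters, and disparity in pixels.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth for a rectified stereo pair (standard pinhole model)."""
    if disparity_px <= 0:
        # Zero disparity corresponds to a point at infinity.
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, a 1000 px focal length and 5 cm baseline put a 10 px disparity at 5 m.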

Device for extracting depth information and method thereof

A device for extracting depth information according to one embodiment of the present invention includes: a light outputting unit for outputting IR (infrared) light; a light inputting unit for receiving the light reflected from an object after being output from the light outputting unit; a light adjusting unit for adjusting the angle of the light so as to radiate the light into a first area including the object, and then adjusting the angle so as to radiate the light into a second area; and a controlling unit for estimating the motion of the object using at least one of the light input from the first area and the light input from the second area.
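The abstract does not state the ranging principle, but depth extraction from emitted and reflected IR light is commonly based on time of flight. As a hypothetical illustration only, the distance to the object follows from the round-trip time of the pulse:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_s):
    """Depth from the round-trip time of an emitted IR pulse: d = c * t / 2."""
    return C * round_trip_s / 2.0
```

A 20 ns round trip, for instance, corresponds to an object roughly 3 m away; real devices measure phase shift rather than timing individual pulses.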

Coordination of multiple structured light-based 3D image detectors

Technologies are generally described for coordination of structured light-based image detectors. In some examples, one or more structured light sources may be configured to project sets of points onto a scene. The sets of points may be arranged into disjoint sets of geometrical shapes, such as lines, where each geometrical shape includes a subset of the points projected by an illumination source. A relative position and/or a color of the points in each geometrical shape may encode an identification code by which each illumination source may be identified. Thus, even when the point clouds projected by the illumination sources overlap, the geometrical shapes may still be detected, and a corresponding illumination source thereby identified. A depth map may then be estimated by one or more image detectors based on stereovision or depth-from-focus principles.
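Under one hypothetical scheme (the abstract does not fix one), the identification code could be carried by the colors of the points along each detected line. The sketch below assumes a two-color bit encoding, red = 0 and green = 1, and decodes a source ID from an ordered list of line points; the tuple layout and color names are illustrative.

```python
# Hypothetical color-to-bit mapping for points along one projected line.
COLOR_BITS = {"red": "0", "green": "1"}

def decode_source_id(line_points):
    """line_points: (x, y, color) tuples ordered along a detected line.

    The bit sequence spelled out by the point colors identifies the
    illumination source that projected the line.
    """
    bits = "".join(COLOR_BITS[color] for _, _, color in line_points)
    return int(bits, 2)
```

A line reading green-red-green thus decodes to source ID 5, even if its points are interleaved with another source's point cloud.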

Multi-perspective stereoscopy from light fields

Methods and systems are provided for generating stereoscopic content with granular control over binocular disparity, based on multi-perspective imaging from representations of light fields. The stereoscopic content is computed as piecewise continuous cuts through a representation of a light field, minimizing an energy that reflects prescribed parameters such as depth budget, maximum binocular disparity gradient, and desired stereoscopic baseline. The methods and systems may be used for efficient and flexible stereoscopic post-processing, such as reducing excessive binocular disparity while preserving perceived depth, or retargeting already captured scenes to various viewing settings. Moreover, such methods and systems are highly useful for content creation in the context of multi-view autostereoscopic displays and provide a novel conceptual approach to stereoscopic image processing and post-production.
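One much simpler instance of fitting content into a depth budget is a global linear compression of the disparity range. The patent's piecewise-continuous light-field cuts are far more general (they can reshape disparity locally), but the sketch below illustrates the retargeting goal: keep ordering and sign, shrink the range.

```python
def retarget_disparity(disparities, budget):
    """Linearly compress disparities into [-budget, +budget].

    A crude global stand-in for disparity retargeting: values already
    within the budget are returned unchanged.
    """
    span = max(abs(min(disparities)), abs(max(disparities)))
    if span <= budget or span == 0:
        return list(disparities)
    scale = budget / span
    return [d * scale for d in disparities]
```

Uniform scaling preserves relative depth ordering but flattens perceived depth everywhere; the energy-minimizing cuts in the patent instead concentrate compression where disparity is excessive.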

Image generating device, 3D image display system having the same and control methods thereof

The image generating device includes: a first camera configured to photograph a depth image of a subject using a first light; a second camera configured to photograph a color image of the subject by converting a second light into an image signal; a view angle extender configured to change a view angle, the view angle being the angle at which the first camera and the second camera are operable to photograph the subject; and a controller configured to control the view angle extender to change the view angle of the first camera and the second camera, and to form a single depth image and a single color image by respectively synthesizing a plurality of depth images and a plurality of color images photographed by the first camera and the second camera.
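The controller's final step forms one image from multiple per-angle captures. Assuming, purely for illustration, that the view angle extender steps the cameras through adjacent non-overlapping fields of view, the synthesis degenerates to row-wise concatenation; real systems must register and blend overlapping captures.

```python
def synthesize_wide(images):
    """Concatenate same-height captures side by side into one wide image.

    images: list of 2D images, each a list of rows; applies equally to the
    depth captures and the color captures described in the abstract.
    """
    height = len(images[0])
    assert all(len(img) == height for img in images), "captures must share height"
    # Join row r of every capture, left to right across the extended view angle.
    return [sum((img[row] for img in images), []) for row in range(height)]
```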

Image-enhanced depth sensing using machine learning

Systems and methods are disclosed for training and using neural networks for computing depth maps. One method for training the neural network includes providing an image input to the neural network. The image input may include a camera image of a training scene. The method may also include providing a depth input to the neural network. The depth input may be based on a high-density depth map of the training scene and a sampling mask. The method may further include generating, using the neural network, a computed depth map of the training scene based on the image input and the depth input. The method may further include modifying the neural network based on an error between the computed depth map and the high-density depth map.
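The described training loop (predict a dense depth map, then modify the network against the high-density ground truth) can be mimicked with a toy model. The sketch below is not a neural network: it fits a single linear map from image intensity to depth by gradient descent, with a synthetic scene, sampling mask, and sparse depth shown only to mirror the described inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training scene: the high-density depth map is an exact linear function of
# the camera image, and a sampling mask keeps ~25% of it as the sparse depth input.
image = rng.uniform(size=(64, 64))
dense_depth = 3.0 * image + 1.0
mask = rng.uniform(size=image.shape) < 0.25
sparse_depth = np.where(mask, dense_depth, 0.0)  # stands in for the depth input

# Stand-in "network": predicted depth = w * image + b. A real system would be a
# CNN consuming the image, the sparse depth, and the mask together.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(400):
    pred = w * image + b
    err = pred - dense_depth                # error vs. the high-density depth map
    w -= lr * float(np.mean(err * image))   # gradient step: "modifying the network"
    b -= lr * float(np.mean(err))

final_loss = float(np.mean((w * image + b - dense_depth) ** 2))
```

After a few hundred steps the fitted map recovers the synthetic depth relation; the point is only the loop structure, predict then correct against the dense map.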