Patent classifications
H04N13/271
Multimodal foreground background segmentation
The subject disclosure is directed towards a framework configured to allow different foreground-background segmentation modalities to contribute towards segmentation. In one aspect, pixels are processed based upon RGB background separation, chroma keying, IR background separation, current depth versus background depth, and current depth versus threshold background depth modalities. Each modality may contribute a factor that the framework combines to determine a probability as to whether a pixel is foreground or background. The probabilities are fed into a global segmentation framework to obtain a segmented image.
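As a minimal sketch of the fusion step described above, the following assumes each modality produces a per-pixel foreground likelihood in [0, 1] and combines them with a weighted average; the function and weight names are hypothetical, and the final thresholding is only a crude stand-in for the global segmentation framework.

```python
import numpy as np

def combine_modalities(factors, weights=None):
    """Fuse per-modality foreground likelihood maps (each in [0, 1])
    into a single foreground probability map via a weighted average."""
    factors = np.stack(factors)                # shape: (num_modalities, H, W)
    if weights is None:
        weights = np.ones(len(factors))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                   # normalize so output stays in [0, 1]
    return np.tensordot(weights, factors, axes=1)  # shape: (H, W)

# Toy 2x2 example: three modalities voting on each pixel.
rgb_sep   = np.array([[0.9, 0.1], [0.8, 0.2]])   # RGB background separation
chroma    = np.array([[0.8, 0.0], [0.9, 0.1]])   # chroma keying
depth_cmp = np.array([[1.0, 0.2], [0.7, 0.0]])   # depth-versus-background comparison

prob = combine_modalities([rgb_sep, chroma, depth_cmp])
mask = prob > 0.5   # crude stand-in for the global segmentation stage
```

A product of likelihoods or a learned combiner would fit the same interface; the weighted average is just the simplest way to show several modalities each contributing a factor.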
Rendering for multi-focus display systems
Some implementations provide a multi-focus display system that renders images at multiple focus distances for display in conjunction with the use of appropriately powered lenses. For example, an HMD may include a fast-switching lens element that allows quickly alternating between two or more focus distances. The displayed images are configured to correspond to the alternating focus distances by adjusting a high-frequency part of the images. This can provide a more natural user experience in which near objects require the user's eye to focus on a close focal depth plane and far objects require the user's eye to focus on a far focal depth plane. Moreover, the user experience can be provided with little or no loss of brightness and without requiring processor- and resource-intensive computations.
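One plausible reading of "adjusting a high-frequency part of the images" can be sketched as follows: split the image into low- and high-frequency components, share the low-frequency content between the two alternating focus frames, and assign the high-frequency detail according to a per-pixel nearness map. The function names, the box-blur decomposition, and the nearness map are all assumptions for illustration, not the disclosed method.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple separable box blur as a stand-in low-pass filter."""
    kernel = np.ones(k) / k
    img = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, img)
    return img

def split_focus_frames(img, nearness):
    """Share the low-pass content between the two alternating focus frames
    and distribute the high-frequency detail by per-pixel nearness in [0, 1].
    The two frames sum to the original image, so rapid alternation between
    them preserves overall brightness."""
    low = box_blur(img)
    high = img - low
    near_frame = low / 2 + nearness * high        # detail shown at the close focus distance
    far_frame = low / 2 + (1 - nearness) * high   # detail shown at the far focus distance
    return near_frame, far_frame
```

Because `near_frame + far_frame` reconstructs the original image exactly, this decomposition is one way a system could avoid the brightness loss the abstract mentions.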
SYSTEMS AND METHODS FOR TELESTRATION WITH SPATIAL MEMORY
An exemplary system is configured to detect user input directing a telestration element to be drawn within an image depicting a surface within a scene; render, based on depth data representative of a depth map for the scene and within a three-dimensional (3D) image depicting the surface within the scene, the telestration element; record a 3D position within the scene at which the telestration element is rendered within the 3D image; detect a telestration termination event that removes the telestration element from being rendered within the 3D image; and indicate, subsequent to the telestration termination event, an option to again render the telestration element at the 3D position.
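The record/terminate/recall lifecycle described above can be sketched with a small state-holding class. The class and method names below are hypothetical, and rendering itself is reduced to a visibility flag; the point is only that the 3D position survives the termination event so re-rendering can be offered later.

```python
from dataclasses import dataclass

@dataclass
class TelestrationElement:
    points_3d: list      # 3D scene coordinates derived from the depth map
    visible: bool = True

class TelestrationMemory:
    """Minimal sketch: remember where a telestration element was drawn in
    the scene so it can be offered for re-display after it is removed."""

    def __init__(self):
        self._elements = []

    def draw(self, points_3d):
        el = TelestrationElement(points_3d)
        self._elements.append(el)
        return el

    def terminate(self, el):
        el.visible = False   # removed from rendering; 3D position is retained

    def recall_options(self):
        # Elements that could be rendered again at their recorded positions.
        return [el for el in self._elements if not el.visible]

    def rerender(self, el):
        el.visible = True
```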
OBJECT DETECTION DEVICE, OBJECT DETECTION SYSTEM, MOBILE OBJECT, AND OBJECT DETECTION METHOD
An object detection device is configured to execute a first process, a second process, and an object detection process (third and fourth processes). The first process estimates a shape of a road surface in a real space on the basis of a first disparity map. The first disparity map is generated on the basis of an output of a stereo camera that captures an image including the road surface, and is a map in which a disparity obtained from the output of the stereo camera is associated with two-dimensional coordinates formed by a first direction corresponding to a horizontal direction of the image captured by the stereo camera and a second direction intersecting the first direction. The second process generates a second disparity map by removing from the first disparity map, on the basis of the estimated shape of the road surface, disparities for which the height from the road surface in the real space falls within a predetermined range. The object detection process (third and fourth processes) detects an object on the basis of the second disparity map.
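The second process, removing disparities near the estimated road surface, can be sketched as follows. This assumes the road-surface estimate has been reduced to an expected disparity per image row (a common simplification of a v-disparity road model); the function name and the zero-as-invalid convention are assumptions for illustration.

```python
import numpy as np

def remove_road_surface(disparity, road_disparity_per_row, tol=1.0):
    """Sketch of the second process: invalidate disparities that lie within
    `tol` of the estimated road-surface disparity for their image row,
    i.e. points whose height above the road falls in the removal range."""
    d = disparity.astype(float).copy()
    road = road_disparity_per_row[:, None]   # broadcast one value per row over all columns
    on_road = np.abs(d - road) <= tol
    d[on_road] = 0.0                         # 0 marks an invalid (removed) disparity
    return d
```

Objects standing on the road keep their disparities because they are farther from the per-row road estimate than `tol`, which is what lets the later detection processes work on the second disparity map.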
OBJECT DETECTION DEVICE, OBJECT DETECTION SYSTEM, MOBILE OBJECT, AND OBJECT DETECTION METHOD
An object detection device is configured to execute a road surface detection process, an object disparity determination process, and an object detection process. The road surface detection process estimates a position of a road surface on the basis of a first disparity map. The first disparity map is generated on the basis of an output of a stereo camera and is a map in which a disparity is associated with two-dimensional coordinates formed by a first direction corresponding to a horizontal direction of an image captured by the stereo camera and a second direction intersecting the first direction. The object disparity determination process determines disparities as object disparities when the number of occurrences of each of the disparities for respective coordinate ranges in the first direction of the first disparity map exceeds a predetermined threshold corresponding to the disparity. The object detection process detects an object by converting information on the object disparities into points on an x-z coordinate space and by extracting a group of points.
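The object disparity determination process reads like a u-disparity histogram test: for each column range, count how often each disparity occurs and keep the disparities whose count exceeds a disparity-dependent threshold. The sketch below uses single columns as the coordinate ranges and takes the threshold as a caller-supplied function; both choices, and the function names, are assumptions for illustration.

```python
import numpy as np

def find_object_disparities(disparity, d_max, threshold_fn):
    """Sketch of the object-disparity determination: build a per-column
    histogram of disparity occurrences and keep (column, disparity) pairs
    whose count exceeds a disparity-dependent threshold. Nearer objects
    (larger disparity) span more rows, so the threshold may grow with d."""
    h, w = disparity.shape
    hits = []
    for u in range(w):
        counts = np.bincount(disparity[:, u].ravel(), minlength=d_max + 1)
        for d in range(1, d_max + 1):          # d = 0 treated as invalid
            if counts[d] > threshold_fn(d):
                hits.append((u, d))            # later converted to x-z points and clustered
    return hits
```

The subsequent object detection step would map each `(u, d)` pair to an x-z point via the stereo geometry and extract groups of nearby points, as the abstract describes.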
LASER EMITTER, DEPTH CAMERA AND ELECTRONIC DEVICE
A laser emitter includes an emitting assembly and a laser deflection assembly. The emitting assembly has a beam outlet configured to emit a laser beam. The laser deflection assembly is located at the beam outlet and is movable relative to the beam outlet; it is configured to change an angle of deviation of the laser beam emitted from the beam outlet when the laser deflection assembly is translated relative to the beam outlet. An included angle is formed between a translation direction of the laser deflection assembly and a center line of the laser beam emitted from the beam outlet.