Patent classifications
H04N13/271
SYSTEMS AND METHODS FOR AN IMPROVED CAMERA SYSTEM USING FILTERS AND MACHINE LEARNING TO ESTIMATE DEPTH
System, methods, and other embodiments described herein relate to estimating depth using a machine learning (ML) model. In one embodiment, a method includes acquiring image data according to criteria from a detector that uses a lens to resolve multiple angles of light per section of the detector. The method also includes mapping a kernel to the image data according to a view associated with the section and a size of the kernel. The method also includes processing the image data using the ML model to produce the depth according to the size of the kernel.
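The abstract describes a detector whose lens resolves multiple angles of light per detector section, i.e. a lenslet-style layout where each macropixel holds one sample per view. A minimal sketch of the first step, separating such raw data into per-view images before a learned kernel estimates depth (function and parameter names are illustrative, not from the patent):

```python
import numpy as np

# Hypothetical sketch: a lenslet-style detector resolves V x V angles of
# light under each macropixel. Splitting the raw capture into sub-aperture
# views is the step that precedes mapping a kernel per view.

def extract_views(raw, views_per_side):
    """Split a raw lenslet image into (V, V) sub-aperture views."""
    h, w = raw.shape
    v = views_per_side
    assert h % v == 0 and w % v == 0
    # Pixel offset (i, j) under each macropixel belongs to view (i, j).
    return raw.reshape(h // v, v, w // v, v).transpose(1, 3, 0, 2)

raw = np.arange(36, dtype=float).reshape(6, 6)
views = extract_views(raw, 2)
print(views.shape)  # (2, 2, 3, 3): 4 views of 3x3 pixels each
```

Each of the four resulting views would then be processed with a kernel sized according to the view, per the abstract.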
System and method for rendering free viewpoint video for sport applications
Methods and systems for generating free viewpoint videos (FVVs) based on images captured in a sports arena. A method includes projecting, onto objects within a filming area within the sports arena, a predefined pattern including a large set of features; generating, based on signals captured by each of a plurality of depth cameras, a point cloud for each depth camera, wherein the plurality of depth cameras is deployed in proximity to the filming area, wherein the captured signals are reflected off the objects within the filming area; creating, based on the plurality of point clouds, a unified point cloud; meshing points in the unified point cloud to generate a three-dimensional (3D) model of the objects; texturing the 3D model based on images captured by the plurality of depth cameras; and rendering the textured 3D model as an FVV including a series of video frames with respect to a viewpoint.
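The pipeline's middle step merges per-camera point clouds into one unified cloud. A minimal sketch of that merge under the assumption that the clouds are already registered in a shared world frame, deduplicating by voxel cell (all names and the voxel size are illustrative):

```python
# Hypothetical sketch of the unification step: point clouds from several
# depth cameras are fused by snapping points to a coarse voxel grid and
# keeping one representative point per occupied cell.

def unify_point_clouds(clouds, voxel=0.05):
    seen, unified = set(), []
    for cloud in clouds:
        for x, y, z in cloud:
            key = (round(x / voxel), round(y / voxel), round(z / voxel))
            if key not in seen:
                seen.add(key)
                unified.append((x, y, z))
    return unified

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.01, 0.0, 0.0), (2.0, 0.0, 0.0)]  # first point overlaps cloud a
print(len(unify_point_clouds([a, b])))  # 3
```

A production system would additionally estimate the rigid transform between cameras before merging; this sketch assumes that registration has already happened.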
Dynamic vision sensor and projector for depth imaging
Systems, devices, and techniques related to matching features between a dynamic vision sensor and one or both of a dynamic projector or another dynamic vision sensor are discussed. Such techniques include casting a light pattern with projected features having differing temporal characteristics onto a scene and determining the correspondence(s) based on matching changes in detected luminance and temporal characteristics of the projected features.
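The correspondence idea in this abstract is that each projected feature blinks with a distinct temporal signature, and an event train detected by the dynamic vision sensor is matched to the feature whose signature it best fits. A minimal sketch using blink period as the signature (all identifiers and numeric values are illustrative, not from the patent):

```python
# Hypothetical sketch: projected features pulse with distinct periods; an
# event camera reports luminance-change timestamps. A detected event train
# is matched to the projected feature with the nearest period.

def estimated_period(timestamps):
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(gaps) / len(gaps)

def match_feature(timestamps, projector_periods):
    p = estimated_period(timestamps)
    return min(projector_periods, key=lambda fid: abs(projector_periods[fid] - p))

periods = {"A": 0.010, "B": 0.020, "C": 0.040}  # seconds per pulse
events = [0.000, 0.021, 0.041, 0.060]           # ~20 ms gaps observed
print(match_feature(events, periods))  # B
```

Once a sensor feature is matched to a projector feature (or to a feature seen by a second sensor), depth follows by ordinary triangulation.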
Method and system for generating an image of a subject from a viewpoint of a virtual camera for a head-mountable display
A method and system of generating an image of a subject from the viewpoint of a virtual camera includes obtaining a plurality of source images of a subject and pose data for each source camera or cameras that captured the source images. Virtual camera pose data is also obtained indicating a pose of a virtual camera relative to the subject. Each source image is distorted based on a difference in pose of the corresponding source camera and the pose of the virtual camera. A weighting is determined for each distorted image based on a similarity between the pose of the corresponding source camera and the pose of the virtual camera. The distorted images are then blended together in accordance with the weightings to form an image of the subject from the viewpoint of the virtual camera.
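The final step of this method weights each distorted source image by pose similarity and blends them. A minimal sketch in which scalar viewing angles stand in for full 6-DoF poses and images are small grayscale grids (the weighting formula and all names are illustrative):

```python
# Hypothetical sketch of the blend step: each (already warped) source
# image gets a weight that grows as its camera's viewing angle approaches
# the virtual camera's, and the images are combined per pixel.

def blend(distorted_images, source_angles, virtual_angle):
    weights = [1.0 / (1e-6 + abs(a - virtual_angle)) for a in source_angles]
    total = sum(weights)
    weights = [w / total for w in weights]
    rows = len(distorted_images[0])
    cols = len(distorted_images[0][0])
    return [[sum(wt * img[r][c] for wt, img in zip(weights, distorted_images))
             for c in range(cols)] for r in range(rows)]

img_a = [[0.0, 0.0], [0.0, 0.0]]  # camera at 0 degrees
img_b = [[1.0, 1.0], [1.0, 1.0]]  # camera at 10 degrees
out = blend([img_a, img_b], source_angles=[0.0, 10.0], virtual_angle=10.0)
print(round(out[0][0], 3))  # ~1.0: the closely aligned camera dominates
```

With the virtual camera at 10 degrees, nearly all weight goes to the second source image, matching the abstract's similarity-based weighting.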
ENDOSCOPE SYSTEM
An endoscope system includes an endoscope that captures an image of living tissue in a body cavity, and an image processing unit. The endoscope includes an objective lens provided on a front side of a light receiving surface of an image sensor and configured to simultaneously form images of the living tissue, obtained through a plurality of windows, on the light receiving surface as the captured image. The image processing unit includes a three-dimensional expansion processor configured to calculate different directions of a feature part visible through the plurality of windows based on position information in each of images of the feature part, which is distinguishably identified from other parts and included in common in the plurality of images obtained through the plurality of windows in the captured image captured by the endoscope, and to expand two-dimensional information of the images of the feature part to three-dimensional information.
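Because the same feature part is imaged through multiple windows at slightly different positions, its depth can be recovered from the positional difference, as in standard stereo triangulation. A minimal sketch of that relation, with illustrative focal-length and baseline values not taken from the patent:

```python
# Hypothetical sketch: a feature seen through two windows appears at
# different image x-positions; the disparity d yields depth via the
# standard stereo relation z = f * b / d.

def depth_from_disparity(x_left, x_right, focal_px, baseline_mm):
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("feature must show positive disparity")
    return focal_px * baseline_mm / disparity

z = depth_from_disparity(x_left=310.0, x_right=290.0,
                         focal_px=500.0, baseline_mm=2.0)
print(z)  # 50.0 (mm): depth of the feature part
```

Repeating this per feature part expands the two-dimensional image positions into three-dimensional information, as the abstract describes.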
METHOD AND APPARATUS FOR EVALUATION AND THERAPEUTIC RELAXATION OF EYES
A base acoustic emission is directed at an eye, and a return acoustic emission is detected from the eye. Base and return acoustic emissions are compared and differences evaluated to determine a descriptor of intraocular pressure. Also, stereo content is provided to a viewer with vergence depth and/or other features selected to bias the eyes towards therapeutically useful positions, movements, focuses, etc. and/or away from harmful positions, movements, focuses, etc. Therapy may facilitate improvements in eye health through reduction of intraocular pressure, reducing mechanical insult to the optic nerve and/or other structures, reducing muscle strain in eye orientation muscles, reducing forces applied to the eye lens and/or within the associated muscles and ligaments, etc., to benefit glaucoma, myopia, etc. Stereo targets may be presented to align the eyes for other testing. Eye alignment, testing, and/or motion treatment may be combined as an “end-to-end” process.