Patent classifications
H04N13/271
Apparatus for generating depth image
An apparatus for generating a depth image is provided. According to an exemplary embodiment of the present disclosure, the apparatus is configured to perform accurate stereo matching even at low light levels by obtaining RGB images and/or IR images and using the obtained images to extract a depth image.
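The abstract describes extracting depth via stereo matching. As an illustrative sketch (not the patented method), once stereo matching has produced a disparity map, depth follows from the standard pinhole relation Z = f·B/d, where f is the focal length in pixels, B the camera baseline, and d the disparity:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a stereo disparity map (pixels) to a depth map (meters)
    using the pinhole relation Z = f * B / d.
    Zero-disparity pixels (no match) are assigned depth 0."""
    disparity = np.asarray(disparity, dtype=np.float64)
    return np.where(disparity > eps,
                    focal_px * baseline_m / np.maximum(disparity, eps),
                    0.0)

# Example: 64 px disparity with f = 800 px and B = 0.1 m gives 1.25 m.
d = np.array([[64.0, 32.0], [0.0, 16.0]])
depth = disparity_to_depth(d, focal_px=800.0, baseline_m=0.1)
```

The matching step itself (finding the disparity) is where the RGB/IR combination in the abstract would matter; this sketch only covers the geometric conversion.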
Display device, control method, and control program for stereoscopically displaying three-dimensional object
According to one aspect, a display device includes: a display unit configured to three-dimensionally display a planar object for executing processes related to a spreadsheet, by displaying images respectively corresponding to both eyes of a user when worn; a storage unit configured to store a rule in which an operation on the object is associated with one of the processes; a detection unit configured to detect the operation; and a control unit configured to determine, based on the rule, the process to be executed according to the operation detected by the detection unit.
Projector for projecting visible and non-visible images
A projector that projects a visible image as well as a non-visible image. The non-visible image might be used for any purpose, but an example is in order to provide depth information regarding physical item(s) interacting with the projected visible image. The projector includes multiple projecting units (e.g., one for each pixel to be displayed), each including light-emitting elements configured to emit light in the visible spectrum. Some or all of those projecting units might also include an emitting element for emitting light in the non-visible spectrum so as to collectively emit a non-visible image. Optics may be positioned to project the visible image and the non-visible image. A depth sensing module detects depth of surfaces within the scope of the non-visible image using a reflected portion of the non-visible image.
Camera for measuring depth image and method of measuring depth image using the same
Provided are a depth camera and methods of measuring a depth image by using the depth camera. The depth camera is a time-of-flight (TOF) depth camera including: an illumination device that illuminates a patterned light to an object; a filter unit that reduces noise light included in light reflected by the object; and an image sensor that provides a depth image of the object by receiving light that enters through the filter unit. The illumination device includes: a light source; and a patterned light generator that changes the light emitted from the light source into the patterned light. The filter unit includes a band pass filter and an optical modulator. The patterned light generator may be a diffractive optical element or a refractive optical element.
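For context on the TOF camera described above: a common continuous-wave TOF formulation (a general sketch, not this patent's specific optical-modulator design) recovers depth from the phase shift of amplitude-modulated light, d = c·φ/(4π·f_mod), with an unambiguous range of c/(2·f_mod) before the phase wraps:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_phase_to_depth(phase_rad, mod_freq_hz):
    """Depth from the phase shift of amplitude-modulated light:
    d = c * phi / (4 * pi * f_mod).
    The 4*pi accounts for the round trip to the object and back."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz):
    """Maximum depth before the phase wraps (phi = 2*pi): c / (2 * f_mod)."""
    return C / (2.0 * mod_freq_hz)

# At 20 MHz modulation, the unambiguous range is roughly 7.49 m;
# a phase shift of pi corresponds to half that range.
r = unambiguous_range(20e6)
d = tof_phase_to_depth(math.pi, 20e6)
```

The band-pass filter and optical modulator in the abstract serve to suppress noise light before this kind of phase measurement is made.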
3D depth sensor and projection system and methods of operating thereof
A diffractive optical element includes: a first facet configured to perform an expansion optical function; and a second facet configured to perform a collimation optical function and a pattern generation function.
SUBSURFACE IMAGING AND DISPLAY OF 3D DIGITAL IMAGE AND 3D IMAGE SEQUENCE
To simulate a 3D image of a subsurface below a surface, a system includes a memory device for storing instructions, a processor in communication with the memory device and configured to execute the instructions, and a subsurface image capture module in communication with the processor. The subsurface image capture module has one or more wave-generating devices and one or more sensors affixed to a vehicle to capture a series of digital image datasets of the subsurface with coordinate reference data. The processor executes instructions to generate a digital model from the series of digital image datasets of the subsurface while maintaining the coordinate reference data, to determine a depth map of the digital model, and to identify a key subject point in the digital model. The subsurface may include internal biology, a below-ground region, or an underwater region.
GAIN MAP GENERATION WITH ROTATION COMPENSATION
A method includes obtaining multiple input images of a scene based on image data captured using multiple imaging sensors. The method also includes generating a gain map identifying relative gains of the imaging sensors. The gain map is generated using the input images and translational and rotational offsets between one or more pairs of the input images. Generating the gain map may include using, for each pair of the input images, a rotation matrix based on a rotation angle between the pair of the input images. The method may further include using the gain map to process additional image data captured using the imaging sensors.
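The abstract's pairwise step — aligning one image to another with a rotation matrix plus a translational offset, then comparing intensities — can be sketched as follows. This is an illustrative nearest-neighbor implementation (the function name, the ratio-based gain estimate, and the center-of-rotation choice are assumptions, not details from the patent):

```python
import numpy as np

def relative_gain_map(img_a, img_b, angle_rad, tx, ty, eps=1e-6):
    """Estimate a per-pixel relative gain between two overlapping captures.
    img_b is aligned to img_a using a 2x2 rotation matrix (about the image
    center) and a translational offset (tx, ty); the gain is then the
    intensity ratio a / b at aligned pixels. Nearest-neighbor resampling."""
    h, w = img_a.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    cos_t, sin_t = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[cos_t, -sin_t], [sin_t, cos_t]])  # rotation matrix

    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Map each pixel of img_a back into img_b's frame: inverse-rotate
    # about the center, then undo the translation.
    pts = np.stack([xs - cx, ys - cy])        # shape (2, h, w)
    src = np.tensordot(R.T, pts, axes=1)      # inverse rotation
    sx = np.rint(src[0] + cx - tx).astype(int)
    sy = np.rint(src[1] + cy - ty).astype(int)

    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    gain = np.ones_like(img_a, dtype=np.float64)
    gain[valid] = img_a[valid] / np.maximum(img_b[sy[valid], sx[valid]], eps)
    return gain, valid

# Identity alignment of a doubled image yields a uniform gain of 0.5.
a = np.full((8, 8), 50.0)
b = 2.0 * a
g, mask = relative_gain_map(a, b, angle_rad=0.0, tx=0.0, ty=0.0)
```

A production version would smooth or fit the per-pixel ratios across all image pairs rather than use them raw, which is presumably what the claimed gain-map generation does.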
COMBINING DEPTH AND THERMAL INFORMATION FOR OBJECT DETECTION AND AVOIDANCE
Described is an imaging component for use by an unmanned aerial vehicle (“UAV”) for object detection. As described, the imaging component includes one or more cameras that are configured to obtain images of a scene using visible light that are converted into a depth map (e.g., stereo image) and one or more other cameras that are configured to form images, or thermograms, of the scene using infrared radiation (“IR”). The depth information and thermal information are combined to form a representation of the scene based on both depth and thermal information.
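The combination of the two modalities can be illustrated with a minimal fusion rule (a sketch under assumed thresholds, not the patent's representation): given a depth map and a thermogram aligned to the same pixel grid, flag pixels that are both close and warm, e.g. a person or animal near the UAV:

```python
import numpy as np

def detect_obstacles(depth_m, thermal_c, max_range_m=10.0, min_temp_c=25.0):
    """Combine aligned depth and thermal maps into one obstacle mask.
    A pixel is flagged when it is within range (and has valid depth > 0)
    AND warmer than the threshold. Threshold values are illustrative."""
    depth_m = np.asarray(depth_m, dtype=np.float64)
    thermal_c = np.asarray(thermal_c, dtype=np.float64)
    near = (depth_m > 0.0) & (depth_m < max_range_m)
    warm = thermal_c > min_temp_c
    return near & warm

# Only the pixel that is both near (2 m) and warm (30 C) is flagged.
depth = np.array([[2.0, 50.0], [3.0, 0.0]])
temp = np.array([[30.0, 30.0], [15.0, 30.0]])
mask = detect_obstacles(depth, temp)
```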
GENERATING TEXTURED THREE-DIMENSIONAL MESHES USING TWO-DIMENSIONAL SCANNER AND PANORAMIC CAMERA
Techniques are described for converting a 2D map into a 3D mesh. The 2D map of the environment is generated using data captured by a 2D scanner. A set of features is identified from a subset of panoramic images of the environment captured by a camera, and the panoramic images from the subset are aligned with the 2D map using the extracted features. 3D coordinates of the features are then determined using 2D coordinates from the 2D map and a third coordinate based on a pose of the camera, and the 3D mesh is generated from the 3D coordinates of the features.
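The lifting step above — 2D map coordinates plus a third coordinate taken from the camera pose — can be sketched as follows (a simplification in which the camera's height supplies z for every feature; names are illustrative and the meshing step is omitted):

```python
import numpy as np

def lift_features_to_3d(features_2d, camera_height):
    """Lift 2D-map feature coordinates (x, y) to 3D points (x, y, z),
    taking the third coordinate from the capturing camera's pose
    (here reduced to its height above the map plane)."""
    features_2d = np.asarray(features_2d, dtype=np.float64)
    z = np.full((features_2d.shape[0], 1), float(camera_height))
    return np.hstack([features_2d, z])

# Two map features captured by a camera mounted 1.6 m above the floor.
pts = lift_features_to_3d([[1.0, 2.0], [3.0, 4.0]], camera_height=1.6)
```

The resulting 3D points would then seed a triangulated mesh that is textured from the panoramic images.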