Patent classifications
H04N13/271
Accumulating charge from multiple imaging exposure periods
Embodiments related to accumulating charge during multiple exposure periods in a time-of-flight depth camera are disclosed. For example, one embodiment provides a method including accumulating a first charge on a photodetector during a first exposure period for a first light pulse, transferring the first charge to a charge storage mechanism, accumulating a second charge during a second exposure period for the first light pulse, and transferring the second charge to the charge storage mechanism. The method further includes accumulating an additional first charge during a first exposure period for a second light pulse, adding the additional first charge to the first charge to form an updated first charge, accumulating an additional second charge on the photodetector during a second exposure period for the second light pulse, and adding the additional second charge to the second charge to form an updated second charge.
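The accumulation scheme above can be sketched as a small simulation: charge measured in the same exposure window across successive light pulses is summed into the same storage node. The two-pulse setup and charge values below are hypothetical stand-ins for actual photodetector readings.

```python
def accumulate_taps(pulses):
    """Sum per-window photocharge over repeated light pulses.

    `pulses` is a list of (charge_window_1, charge_window_2) tuples: the
    photodetector charge accumulated during the first and second exposure
    periods for each pulse (arbitrary units, hypothetical data).
    """
    stored_1 = 0.0  # charge storage for the first exposure window
    stored_2 = 0.0  # charge storage for the second exposure window
    for q1, q2 in pulses:
        stored_1 += q1  # transfer first-window charge, add to stored charge
        stored_2 += q2  # transfer second-window charge, add to stored charge
    return stored_1, stored_2

# Two light pulses; how the charge splits between the windows depends on
# the round-trip time of flight of the reflected pulse.
taps = accumulate_taps([(0.6, 0.4), (0.58, 0.42)])
```

Repeating the accumulation over many pulses boosts signal relative to readout noise before the depth is computed from the ratio of the two stored charges.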
Virtual reality system including social graph
The disclosure includes a system and method for receiving viewing data that describes a location of a first user's gaze while viewing virtual reality content. The method also includes determining an object of interest in the virtual reality content based on the location of the first user's gaze. The method also includes generating a social network that includes the first user as a member of the social network. The method also includes performing an action in the social network related to the object of interest.
Depth sensor
A depth sensor comprises at least one imaging sensor, at least one multifocal lens, and a focus analyzer. The depth sensor analyzes the in-focus status of electromagnetic radiation, directed by the multifocal lens(es) onto sensing zone(s) of the imaging sensor(s) from spatial zone(s) in a measurement field, to detect the presence of object(s) in the spatial zone(s).
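One common way to implement the "in-focus status" analysis described above is a sharpness metric such as the variance of a discrete Laplacian over each sensing zone; the metric choice, threshold, and test patches below are hypothetical illustrations, not the patent's specific method.

```python
import numpy as np

def focus_measure(patch):
    """Sharpness of an image patch via the variance of a discrete Laplacian
    (a widely used in-focus metric)."""
    lap = (-4.0 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return lap.var()

def object_present(patch, threshold=0.5):
    """Report an object in a spatial zone when the corresponding sensing
    zone is in focus, i.e. the focus measure exceeds a threshold."""
    return focus_measure(patch) > threshold

sharp = np.zeros((8, 8)); sharp[::2] = 1.0  # high-frequency striped patch
blurry = np.full((8, 8), 0.5)               # uniform (defocused) patch
```

A multifocal lens brings each spatial zone into focus at a different distance, so which sensing zone reports in-focus content indicates where an object lies.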
Method and processing system for updating a first image generated by a first camera based on a second image generated by a second camera
A method and system for processing camera images is presented. The system receives a first depth map generated based on information sensed by a first type of depth-sensing camera, and receives a second depth map generated based on information sensed by a second type of depth-sensing camera. The first depth map includes a first set of pixels that indicate a first set of respective depth values. The second depth map includes a second set of pixels that indicate a second set of respective depth values. The system identifies a third set of pixels of the first depth map that correspond to the second set of pixels of the second depth map, identifies one or more empty pixels from the third set of pixels, and updates the first depth map by assigning to each empty pixel a respective depth value based on the second depth map.
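The update step described above can be sketched with NumPy. The sketch assumes the two depth maps are already registered to the same pixel grid (the correspondence step is omitted) and uses the hypothetical convention that an "empty" pixel holds the value `0.0`.

```python
import numpy as np

def update_depth_map(first, second, empty_value=0.0):
    """Fill empty pixels of `first` using corresponding pixels of `second`.

    Both maps must share a pixel grid; pixels equal to `empty_value` in
    `first` receive the depth value from `second` at the same location.
    """
    updated = first.copy()
    empty = (updated == empty_value)  # identify empty pixels
    updated[empty] = second[empty]    # assign depth from the second map
    return updated

first = np.array([[1.0, 0.0],
                  [0.0, 2.5]])       # 0.0 marks missing depth
second = np.array([[1.1, 1.8],
                   [2.0, 2.4]])
merged = update_depth_map(first, second)
```

Pixels with valid depth in the first map are left untouched, so the higher-priority camera's measurements are preserved.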
Image capture system with calibration function
An image capture system with calibration function includes an image capture device, a laser rangefinder, and a processor. The image capture device captures two images. The processor determines at least one feature point according to the two images, and generates depth information corresponding to each feature point of the at least one feature point according to the two images, wherein the laser rangefinder measures a reference distance corresponding to each feature point, and the processor optionally calibrates the depth information or the two images according to the reference distance.
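One simple form such a calibration could take is a global scale correction: compare the stereo depth at each feature point against the rangefinder's reference distance and rescale all depths by the mean ratio. This scheme and its numbers are hypothetical, chosen only to illustrate the idea.

```python
def calibrate_depth(stereo_depths, reference_distances):
    """Scale stereo depth estimates toward laser rangefinder references.

    Computes one global scale factor as the mean ratio of reference
    distance to stereo depth over the feature points, then applies it
    to every depth value (hypothetical calibration model).
    """
    ratios = [ref / d for d, ref in zip(stereo_depths, reference_distances)]
    scale = sum(ratios) / len(ratios)
    return [d * scale for d in stereo_depths], scale

# Stereo depths at three feature points vs. rangefinder references (meters).
depths, scale = calibrate_depth([0.95, 1.90, 2.85], [1.0, 2.0, 3.0])
```

A single scale factor corrects a consistent baseline or focal-length error; per-point residuals after scaling would instead suggest recalibrating the images themselves.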
Vision guided robot arm and method for operating the same
A method for operating a vision guided robot arm system comprising a robot arm provided with an end effector at a distal end thereof, a display, an image sensor and a controller, the method comprising: receiving from the image sensor an initial image of an area comprising at least one object and displaying the initial image on the display; determining an object of interest amongst the at least one object and identifying the object of interest within the initial image; determining a potential action related to the object of interest and providing a user with an identification of the potential action; receiving a confirmation of the object of interest and the potential action from the user; and automatically moving the robot arm so as to position the end effector of the robot arm at a predefined position relative to the object of interest.
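The confirm-then-move control flow above can be sketched as follows; the `Pose` type, the callbacks, and the fixed vertical offset are hypothetical stand-ins for the system's vision, user-interface, and motion-planning components.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Hypothetical 3-D position of an object or end effector."""
    x: float
    y: float
    z: float

def pick_workflow(detect, propose_action, confirm, move_to, offset=0.1):
    """Detect an object, propose an action, and move only after the user
    confirms both the object of interest and the action."""
    obj = detect()                    # object of interest from the image
    action = propose_action(obj)      # e.g. "grasp"
    if not confirm(obj, action):      # user confirms object and action
        return None
    # Predefined position relative to the object: here, a fixed offset above it.
    target = Pose(obj.x, obj.y, obj.z + offset)
    move_to(target)                   # position the end effector
    return action

moves = []
result = pick_workflow(
    detect=lambda: Pose(0.4, 0.2, 0.0),
    propose_action=lambda obj: "grasp",
    confirm=lambda obj, action: True,   # simulated user confirmation
    move_to=moves.append,
)
```

Keeping the user confirmation between detection and motion is the safety-relevant part of the flow: the arm never moves toward an unconfirmed target.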
Three-dimensional modeling using hemispherical or spherical visible light-depth images
Three-dimensional modeling includes obtaining a hemispherical or spherical visible light-depth image capturing an operational environment of a user device, generating a perspective converted hemispherical or spherical visible light-depth image, generating a three-dimensional model of the operational environment based on the perspective converted hemispherical or spherical visible light-depth image, and outputting the three-dimensional model. Obtaining the hemispherical or spherical visible light-depth image includes obtaining a hemispherical or spherical visual light image and obtaining a hemispherical or spherical non-visual light depth image. Generating the perspective converted hemispherical or spherical visible light-depth image includes generating a perspective converted hemispherical or spherical visual light image and generating a perspective converted hemispherical or spherical non-visual light depth image.
Imaging device, image processing device and image processing method
A third imaging unit including a pixel not having a polarization characteristic is interposed between a first imaging unit and a second imaging unit, each including a pixel having a polarization characteristic for each of a plurality of polarization directions. A depth map is generated from a viewpoint of the first imaging unit by matching processing using a first image generated by the first imaging unit and a second image generated by the second imaging unit. A normal map is generated on the basis of a polarization state of the first image. Integration processing of the depth map and the normal map is performed, and a high-accuracy depth map is generated. The depth map generated by the map integrating unit is converted into a map from a viewpoint of the third imaging unit, and an image free from deterioration can be generated.
Interferometric structured illumination for depth determination
A depth camera assembly (DCA) has a light source assembly, a mask, a camera assembly, and a controller. The light source assembly includes at least one light source. The mask is configured to generate an interference pattern that is projected into a target area. The mask has two openings configured to pass through light emitted by the at least one light source, and the light passed through the two openings forms an interference pattern across the target area. The interference pattern has a phase based on a position of the light source. The camera assembly is configured to capture images of a portion of the target area that includes the interference pattern. The controller is configured to determine depth information for the portion of the target area based on the captured images.
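The dependence of the interference pattern on source position can be illustrated with the standard far-field two-slit intensity model; the function, its parameters, and the numeric values below are hypothetical illustrations, not the assembly's actual optics.

```python
import math

def fringe_intensity(x, slit_separation, wavelength, distance, source_phase=0.0):
    """Two-opening interference intensity at lateral position x on the
    target plane (far-field double-slit approximation).

    Shifting `source_phase` models moving the light source, which
    translates the fringe pattern across the target area.
    """
    # Phase difference between the two openings at position x.
    delta = 2.0 * math.pi * slit_separation * x / (wavelength * distance)
    return 2.0 * (1.0 + math.cos(delta + source_phase))  # normalized intensity

# With zero source phase there is a bright fringe at x = 0; a pi phase
# shift of the source moves a dark fringe there instead.
bright = fringe_intensity(0.0, 1e-3, 850e-9, 1.0)
dark = fringe_intensity(0.0, 1e-3, 850e-9, 1.0, source_phase=math.pi)
```

Capturing images under patterns with different source phases lets the controller disambiguate depth from the observed fringe positions.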
System and method for creating and sharing a 3D virtual model of an event
A system and method for creating a 3D virtual model of an event, such as a wedding or sporting event, and for sharing the event with one or more virtual attendees. Virtual attendees connect to the experience platform to view the 3D virtual model of the event on virtual reality glasses (i.e., a head-mounted display) from a virtual gallery, preferably from a user-selected location and orientation or a common location and orientation for all virtual attendees. In one form, the virtual attendees can see and interact with other virtual attendees in the virtual gallery.