Patent classifications
H04N13/271
Sensing on UAVs for mapping and obstacle avoidance
Structured light approaches utilize a laser to project features, which are then captured with a camera. Knowing the baseline between the laser emitter and the camera, the system can triangulate the observed disparity to find the range. Four 185-degree field-of-view cameras provide overlapping views over nearly the whole unit sphere. The cameras are separated from each other to provide parallax. A near-infrared laser projection unit sends light out into the environment, which is reflected and viewed by the cameras. The laser projection system will create vertical lines, while the cameras will be displaced from each other horizontally. This relative shift of the lines, as viewed by the different cameras, enables the lines to be triangulated in 3D space. At each point in time, a vertical stripe of the world will be triangulated. Over time, the laser line will be rotated over all yaw angles to provide a full 360-degree range.
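For a rectified emitter/camera pair, the triangulation step described above reduces to the standard relation Z = f·B/d. A minimal sketch of that relation (function and parameter names are illustrative, not taken from the patent):

```python
import numpy as np

def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Recover range from the pixel disparity of a projected laser line.

    For a rectified pair separated by baseline B (meters), a feature seen
    with disparity d (pixels) at focal length f (pixels) lies at depth
    Z = f * B / d. Names are illustrative, not from the patent text.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# A laser-stripe point seen with 50 px disparity by two cameras 0.20 m
# apart, at a 600 px focal length, lies 2.4 m away.
depth = triangulate_depth(50.0, focal_px=600.0, baseline_m=0.20)
```

Larger disparities map to nearer points, which is why the horizontal camera displacement against the vertical laser lines is what makes the lines resolvable in depth.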
USER INTERFACE FOR CAMERA EFFECTS
The present disclosure generally relates to user interfaces. In some examples, the electronic device transitions between user interfaces for capturing photos based on data received from a first camera and a second camera. In some examples, the electronic device provides enhanced zooming capabilities that produce visually pleasing results for a displayed digital viewfinder and for captured videos. In some examples, the electronic device provides user interfaces for transitioning a digital viewfinder from a first camera with an applied digital zoom to a second camera with no digital zoom. In some examples, the electronic device prepares to capture media at various magnification levels. In some examples, the electronic device provides enhanced capabilities for navigating through a plurality of values.
Multi-camera image capture system
A dual-camera image capture system may include a first light source, disposed above a target area, a first mobile unit, configured to rotate around the target area, and a second mobile unit, operatively coupled to the first mobile unit, configured to move vertically along the first mobile unit. The dual-camera image capture system may further include a second light source, operatively coupled to the second mobile unit and a dual-camera unit, operatively coupled to the second mobile unit. The dual-camera image capture system may include a first camera configured to capture structural data and a second camera configured to capture color data. The first mobile unit and the second mobile unit may be configured to move the first camera and the second camera to face the target area in a variety of positions around the target area.
System for capturing a plantar image
A system for capturing a plantar image is provided, based on a depth sensor or 3D image-capturing device connected to a portable structure having: first legs; a first frame that holds a glass pane; and an elastic membrane on which a patient places his or her foot, bringing it into contact with the glass pane positioned below, in the manner of a floor, so that the plantar image can be captured by the depth sensor disposed beneath the glass pane. Respective platforms provided with a pair of second legs can be joined to the first frame. The transparent floor in the portable structure allows the assembly to be transported, so the patient's plantar image can be captured in any desired place.
DEPTH MAP GENERATION METHOD, AND DEVICE AND STORAGE MEDIUM
Provided are a depth map generation method, a device and a storage medium, which belong to the technical field of image processing. The method includes: generating, according to a first spherical image acquired by a first fisheye lens and a second spherical image acquired by a second fisheye lens, a first disparity map of a spatial region where a terminal device is located; generating a second disparity map of the spatial region according to depth information of the spatial region acquired by an active depth sensor; and generating a target depth map of the spatial region according to the first disparity map and the second disparity map.
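The abstract combines a passive disparity map (from the two fisheye images) with a second disparity map derived from an active depth sensor. One plausible fusion rule, sketched here purely for illustration (the abstract does not specify how the two maps are combined), is to prefer valid active measurements and fall back to the stereo estimate elsewhere:

```python
import numpy as np

def fuse_disparity_maps(stereo_disp, active_disp, active_valid):
    """Merge a passive stereo disparity map with one derived from an
    active depth sensor: trust the active measurement where it is valid,
    fall back to the stereo estimate elsewhere. A simple stand-in rule,
    not the claimed method."""
    return np.where(active_valid, active_disp, stereo_disp)

def disparity_to_depth(disp, focal_px, baseline_m, eps=1e-6):
    """Convert the fused disparity map to the target depth map."""
    return focal_px * baseline_m / np.maximum(disp, eps)

stereo = np.array([[10.0, 12.0], [0.0, 8.0]])   # 0 = stereo matching failed
active = np.array([[11.0, 0.0], [9.0, 8.5]])    # 0 = sensor hole
valid = active > 0
fused = fuse_disparity_maps(stereo, active, valid)
depth = disparity_to_depth(fused, focal_px=500.0, baseline_m=0.1)
```

The point of fusing is complementarity: passive fisheye stereo covers a wide field but fails on textureless regions, while the active sensor is dense but range-limited and prone to holes.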
Systems and methods for depth estimation using semantic features
Systems, methods, and other embodiments described herein relate to generating depth estimates of an environment depicted in a monocular image. In one embodiment, a method includes identifying semantic features in the monocular image according to a semantic model. The method includes injecting the semantic features into a depth model using pixel-adaptive convolutions. The method includes generating a depth map from the monocular image using the depth model that is guided by the semantic features. The pixel-adaptive convolutions are integrated into a decoder of the depth model. The method includes providing the depth map as the depth estimates for the monocular image.
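Pixel-adaptive convolutions modulate a fixed spatial kernel per pixel by the similarity of guiding features, so that depth filtering does not smear across semantic boundaries. A minimal single-channel sketch of that mechanism (illustrative only; the patented decoder details are not reproduced here):

```python
import numpy as np

def pixel_adaptive_conv(depth_feat, guide_feat, weight):
    """Normalized 3x3 pixel-adaptive convolution: each kernel tap is
    scaled by a Gaussian on the difference between the guiding
    (semantic) feature at the neighbor and at the center pixel, so
    averaging respects semantic edges. Sketch, not the claimed decoder.
    `weight` must be a positive 3x3 kernel."""
    H, W = depth_feat.shape
    pad_d = np.pad(depth_feat, 1, mode="edge")
    pad_g = np.pad(guide_feat, 1, mode="edge")
    out = np.zeros_like(depth_feat)
    for i in range(H):
        for j in range(W):
            acc, norm = 0.0, 0.0
            for di in range(3):
                for dj in range(3):
                    # Adapt the fixed kernel tap by guidance similarity.
                    k = np.exp(-0.5 * (pad_g[i + di, j + dj] - guide_feat[i, j]) ** 2)
                    acc += weight[di, dj] * k * pad_d[i + di, j + dj]
                    norm += weight[di, dj] * k
            out[i, j] = acc / norm
    return out

sem = np.array([[0.0, 0.0, 4.0, 4.0]] * 4)       # semantic boundary down the middle
noisy = np.array([[1.0, 1.2, 5.1, 5.0]] * 4)     # depth features with an edge
smoothed = pixel_adaptive_conv(noisy, sem, np.ones((3, 3)) / 9.0)
```

With a plain box filter the depth edge would be blurred; here the guidance term suppresses taps that cross the semantic boundary, which is the intuition behind injecting semantic features into the depth decoder.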
Non-rigid stereo vision camera system
A long-baseline and long depth-range stereo vision system is provided that is suitable for use in non-rigid assemblies where relative motion between two or more cameras of the system does not degrade estimates of a depth map. The stereo vision system may include a processor that tracks camera parameters as a function of time to rectify images from the cameras even during fast and slow perturbations to camera positions. Factory calibration of the system is not needed, and manual calibration during regular operation is not needed, thus simplifying manufacturing of the system.
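Re-rectifying as the cameras move relative to each other amounts to rebuilding, at each time step, a rotation that maps the currently estimated baseline onto the image x-axis. A sketch of that core geometric step (the tracking filter itself is omitted, and all names are illustrative):

```python
import numpy as np

def rectifying_rotation(baseline):
    """Build a rotation whose rows form an orthonormal frame with the
    x-axis along the (time-varying) baseline vector, the standard step
    in stereo rectification. Assumes the baseline is not parallel to
    the optical (z) axis. Sketch only, not the patented tracker."""
    e1 = baseline / np.linalg.norm(baseline)      # new x-axis: along baseline
    e2 = np.cross([0.0, 0.0, 1.0], e1)
    e2 /= np.linalg.norm(e2)                      # new y-axis, orthogonal to e1
    e3 = np.cross(e1, e2)                         # new z-axis completes the frame
    return np.stack([e1, e2, e3])

# After a perturbation the baseline is no longer axis-aligned...
t = np.array([0.95, 0.05, 0.02])
R_rect = rectifying_rotation(t)
# ...but rotating with R_rect makes it purely horizontal again,
# so epipolar lines become image rows and depth can be estimated.
b = R_rect @ t
```

Recomputing R_rect continuously from tracked extrinsics is what lets the system tolerate both fast and slow perturbations without factory or manual calibration.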