H04N13/271

Animated stereoscopic illusionary therapy and entertainment
11779203 · 2023-10-10

Illusionary motion-in-depth for vision therapy or entertainment is provided by altering the positional disparity of a moving image pair, through the relative speed of movement and/or a delay of movement of its images, which are viewed on a digital display device by stereoscopic means.
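The disparity manipulation described above can be sketched in a few lines: if the two images of a moving pair travel at slightly different horizontal speeds, their positional disparity changes over time, which a stereoscopic viewer perceives as motion in depth. A minimal sketch, with illustrative names and pixel/frame units not taken from the patent:

```python
def image_pair_positions(frame, left_speed, right_speed, start_x=0.0):
    """Horizontal positions of a moving stereo image pair at a given frame.

    Moving the two images at different speeds makes their positional
    disparity grow (or shrink) over time, which stereoscopic viewing
    renders as illusionary motion in depth.
    """
    left_x = start_x + left_speed * frame
    right_x = start_x + right_speed * frame
    disparity = right_x - left_x
    return left_x, right_x, disparity

# Right image moves slightly faster: disparity increases each frame,
# so the fused image appears to change depth while moving laterally.
positions = [image_pair_positions(f, left_speed=2.0, right_speed=2.1)
             for f in (0, 30, 60)]
```

The abstract's delay variant is the same idea expressed differently: delaying one image's movement by a fixed number of frames also produces a time-varying disparity.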

METHOD FOR PROCESSING IMAGES, ELECTRONIC DEVICE, AND STORAGE MEDIUM
20230326029 · 2023-10-12

A method for processing images implemented in an electronic device includes obtaining images while a vehicle is moving; obtaining instance segmentation images by segmenting the images; obtaining a predicted disparity map by reconstructing the left images based on a pre-established autoencoder; generating a first error value of the autoencoder for the images according to the left image, the predicted disparity map, and the right image; generating a second error value of the autoencoder for the instance segmentation images according to the left instance segmentation image, the predicted disparity map, and the right instance segmentation image; establishing an autoencoder model by adjusting the autoencoder according to the first error value and the second error value; obtaining a test image while the vehicle is moving, and obtaining a target disparity map; and obtaining a depth image corresponding to the test image by converting the target disparity map.
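The final step, converting a disparity map into a depth image, follows the standard stereo relation depth = focal length × baseline / disparity. A hedged pure-Python sketch of that conversion (the function name, parameter values, and the clamping policy are illustrative, not from the application):

```python
def disparity_to_depth(disparity_map, focal_px, baseline_m, max_depth=100.0):
    """Convert a disparity map (pixels) into a depth map (metres).

    Uses the standard stereo relation depth = f * B / d. Zero or
    negative disparities carry no usable range information, so they
    are clamped to max_depth here.
    """
    depth = []
    for row in disparity_map:
        depth.append([
            min(focal_px * baseline_m / d, max_depth) if d > 0 else max_depth
            for d in row
        ])
    return depth

# Example: 700 px focal length, 12 cm stereo baseline.
depth = disparity_to_depth([[7.0, 0.0], [14.0, 70.0]],
                           focal_px=700.0, baseline_m=0.12)
# Larger disparity -> closer object; zero disparity -> clamp value.
```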

User interface for capturing photos with different camera magnifications

The present disclosure generally relates to user interfaces. In some examples, the electronic device transitions between user interfaces for capturing photos based on data received from a first camera and a second camera. In some examples, the electronic device provides enhanced zooming capabilities that produce visually pleasing results for a displayed digital viewfinder and for captured videos. In some examples, the electronic device provides user interfaces for transitioning a digital viewfinder from a first camera with an applied digital zoom to a second camera with no digital zoom. In some examples, the electronic device prepares to capture media at various magnification levels. In some examples, the electronic device provides enhanced capabilities for navigating through a plurality of values.
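The camera handoff described above can be reasoned about with a simple magnification identity: to keep the framing constant when the viewfinder switches cameras, the digital zoom applied to the second camera must equal the current total magnification divided by that camera's native optical magnification. A sketch under that assumption (function and parameter names are illustrative):

```python
def zoom_after_switch(total_magnification, target_camera_optical):
    """Digital zoom factor needed on the target camera so the user-visible
    framing is unchanged when the viewfinder switches cameras.

    total_magnification: overall zoom currently shown to the user
        (source camera optical factor times its digital zoom).
    target_camera_optical: native magnification of the camera being
        switched to (e.g. 2.0 for a 2x telephoto lens).
    """
    digital = total_magnification / target_camera_optical
    # A factor below 1.0 would need a wider field of view than the
    # target camera has, so the switch cannot happen yet.
    return digital if digital >= 1.0 else None
```

For example, at a total magnification of 2x, switching to a 2x telephoto camera needs no digital zoom at all, matching the "second camera with no digital zoom" transition in the abstract.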

Employing three-dimensional (3D) data predicted from two-dimensional (2D) images using neural networks for 3D modeling applications and other applications
11164394 · 2021-11-02

The disclosed subject matter is directed to employing machine learning models configured to predict 3D data from 2D images using deep learning techniques to derive 3D data for the 2D images. In some embodiments, a method is provided that comprises receiving, by a system operatively coupled to a processor, a two-dimensional image, and determining, by the system, auxiliary data for the two-dimensional image, wherein the auxiliary data comprises orientation information regarding a capture orientation of the two-dimensional image. The method further comprises deriving, by the system, three-dimensional information for the two-dimensional image using one or more neural network models configured to infer the three-dimensional information based on the two-dimensional image and the auxiliary data.
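The claimed flow (receive a 2D image, determine auxiliary orientation data, infer 3D information from both) can be outlined as a small pipeline. The `model` and `read_orientation` callables below are stand-in stubs to show the data flow only; in the disclosure the model is one or more trained neural networks:

```python
def derive_3d(image, model, read_orientation):
    """Outline of the claimed method: determine auxiliary orientation
    data for a 2D image, then let a learned model infer 3D information
    from the image together with that auxiliary data.

    `model` and `read_orientation` are hypothetical stand-ins, not the
    patent's actual components.
    """
    auxiliary = read_orientation(image)   # e.g. capture orientation
    return model(image, auxiliary)        # inferred 3D information

# Toy stand-ins illustrating only the shape of the pipeline:
orientation_of = lambda img: img.get("orientation", 0.0)
toy_model = lambda img, aux: {"depth_scale": 1.0 + aux}
result = derive_3d({"pixels": [], "orientation": 0.5},
                   toy_model, orientation_of)
```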

Camera system with complementary pixlet structure

A camera system with a complementary pixlet structure and a method of operating the same are provided. The camera system includes an image sensor and a depth calculator. The image sensor includes at least one 2×2 pixel block including a first pixel, a second pixel, and two third pixels; the two third pixels are disposed at positions diagonal to each other in the 2×2 pixel block and include deflected small pixlets, which are deflected in opposite directions so as to be symmetrical to each other with respect to each pixel center, and large pixlets adjacent to the deflected small pixlets, respectively. Each pixlet includes a photodiode converting an optical signal to an electrical signal. The depth calculator receives images acquired from the deflected small pixlets of the two third pixels and calculates a depth between the image sensor and an object using a parallax between the images.
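Calculating depth from the parallax between two such images reduces to finding, for each position, the shift that best aligns them. A generic sum-of-absolute-differences (SAD) search illustrates the matching step; this is a textbook stereo-matching sketch, not the patent's specific depth calculator:

```python
def sad_disparity(left, right, window=3, max_shift=4):
    """Estimate per-position disparity between two 1-D intensity rows
    by minimising the sum of absolute differences over a small window.

    A generic parallax-matching illustration; in the disclosure the two
    rows would come from the deflected small pixlets of the two third
    pixels, and depth follows from f * baseline / disparity.
    """
    half = window // 2
    disparities = []
    for i in range(half, len(left) - half):
        best_shift, best_cost = 0, float("inf")
        for s in range(max_shift + 1):
            if i - half - s < 0:
                continue  # window would run off the row
            cost = sum(abs(left[i - half + k] - right[i - half - s + k])
                       for k in range(window))
            if cost < best_cost:
                best_cost, best_shift = cost, s
        disparities.append(best_shift)
    return disparities

# A step edge shifted by 2 samples between the views is recovered as
# disparity 2 around the edge (flat regions are inherently ambiguous).
left = [0, 0, 0, 0, 0, 9, 9, 9]
right = [0, 0, 0, 9, 9, 9, 9, 9]
d = sad_disparity(left, right)
```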

ESTIMATING A CONDITION OF A PHYSICAL STRUCTURE

In a computer-implemented method and system for capturing the condition of a structure, the structure is scanned with an unmanned aerial vehicle (UAV). Data collected by the UAV corresponding to points on a surface of the structure is received, and a 3D point cloud is generated for the structure, where the 3D point cloud is generated based at least in part on the received UAV data. A 3D model of the surface of the structure is reconstructed using the 3D point cloud.
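One simple way to move from an unordered 3D point cloud toward a surface description is to rasterise the points into a 2-D grid and keep an elevation per cell. Production pipelines use proper meshing (e.g. Poisson surface reconstruction), but a height-grid sketch conveys the idea; the cell size and names below are illustrative:

```python
def height_grid(points, cell=1.0):
    """Collapse a 3D point cloud into a {(x_cell, y_cell): max_height} map.

    A crude stand-in for surface reconstruction: each grid cell keeps
    the highest point falling inside it, which is often enough to
    outline a roofline from UAV scan data.
    """
    grid = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in grid or z > grid[key]:
            grid[key] = z
    return grid

# Two points fall in the same cell; the higher one wins.
pts = [(0.2, 0.3, 4.0), (0.8, 0.1, 6.5), (3.4, 0.2, 2.0)]
grid = height_grid(pts)
```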