H04N13/271

Enhanced 3D audio/video processing apparatus and method

The enhanced 3D audio/video processing apparatus according to one embodiment of the present invention may comprise: a three-dimensional (3D) content generating unit for generating 3D content including video content and audio content; a depth information generating unit for generating depth information for the video frames constituting the video content; and a signal generating unit for generating a 3D enhanced signal including the generated 3D content and the depth information. Further, the enhanced 3D audio/video processing apparatus according to another embodiment of the present invention may comprise: a signal processing unit for processing the 3D enhanced signal including the 3D content, which includes the video content and the audio content; a depth information extraction unit for acquiring the depth information of the video frames constituting the video content from the processed 3D enhanced signal; a 3D audio effect generating unit for generating a 3D audio effect based on the acquired depth information; and a 3D audio content generating unit for generating 3D audio content by applying the generated 3D audio effect.
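The abstract does not specify how per-frame depth drives the audio effect. A minimal sketch of one plausible realization, assuming a hypothetical linear depth-to-gain mapping (nearer objects rendered louder) with made-up `near`/`far` bounds:

```python
import numpy as np

def depth_to_gain(depth_m, near=0.5, far=10.0):
    """Map a per-frame depth (metres) to a gain in [0, 1].

    Nearer objects are rendered louder; this linear mapping is a
    hypothetical choice, not the one claimed in the abstract.
    """
    d = np.clip(depth_m, near, far)
    return 1.0 - (d - near) / (far - near)

def apply_depth_effect(audio, frame_depths, samples_per_frame):
    """Scale each audio segment by the gain of its matching video frame."""
    out = audio.astype(np.float64).copy()
    for i, depth in enumerate(frame_depths):
        start = i * samples_per_frame
        out[start:start + samples_per_frame] *= depth_to_gain(depth)
    return out
```

Any real system would interpolate gains between frames to avoid audible steps; that smoothing is omitted here for brevity.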

Information processing apparatus and information processing method for stereo imaging based on corrected parameters
11240484 · 2022-02-01

The present disclosure relates to an information processing apparatus, an information processing method, and a program capable of obtaining a parameter regarding the attitude of a plurality of viewpoints with higher accuracy. Provided is an information processing apparatus including: a plane estimator configured to estimate a plane on the basis of a first depth map, the first depth map being obtained on the basis of a plurality of captured images acquired by image-capturing at a plurality of viewpoints; and a correction unit configured to correct a parameter regarding an attitude of the plurality of viewpoints on the basis of a comparison between the first depth map and the plane.
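The core of the plane-estimation step can be illustrated with a least-squares fit and a residual check. This is a generic sketch, not the disclosed method: it fits z = a·x + b·y + c to a dense depth map and reports the mean deviation, which a correction loop could then drive toward zero by adjusting the viewpoint attitude parameters.

```python
import numpy as np

def fit_plane(depth):
    """Least-squares fit of z = a*x + b*y + c to a dense depth map."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, depth.ravel(), rcond=None)
    return coeffs  # (a, b, c)

def plane_residual(depth, coeffs):
    """Mean absolute deviation between the depth map and the fitted plane."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    a, b, c = coeffs
    plane = a * xs + b * ys + c
    return float(np.mean(np.abs(depth - plane)))
```

On a scene containing a real plane (a floor or wall), a large residual signals that the stereo calibration parameters have drifted.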

Image assessment device, method, and computer readable medium for 3-dimensional measuring and capturing of image pair range
09721346 · 2017-08-01

The present invention provides an image assessment device capable of accurately and promptly assessing an image pair to be used for 3D measurement from among plural captured images. An image assessment device according to the invention includes a first captured image selection device, a first captured image information acquisition device, an object-distance-to-be-measured acquisition device, an object-position-to-be-measured calculation device, a second captured image selection device, a second captured image information acquisition device, an imaging range calculation device, and an assessment device that determines whether or not a calculated object position to be measured is within a calculated imaging range, and assesses the first captured image and the second captured image as an image pair if it determines that the calculated object position to be measured is within the calculated imaging range.

Roof scan using unmanned aerial vehicle

Described herein are systems for roof scan using an unmanned aerial vehicle. For example, some methods include capturing, using an unmanned aerial vehicle, an overview image of a roof of a building from above the roof; presenting a suggested bounding polygon overlaid on the overview image to a user; determining a bounding polygon based on the suggested bounding polygon and user edits; based on the bounding polygon, determining a flight path including a sequence of poses of the unmanned aerial vehicle with respective fields of view at a fixed height that collectively cover the bounding polygon; flying the unmanned aerial vehicle to a sequence of scan poses with horizontal positions matching respective poses of the flight path and vertical positions determined to maintain a consistent distance above the roof; and scanning the roof from the sequence of scan poses to generate a three-dimensional map of the roof.
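The coverage-path step described above is commonly realized as a back-and-forth ("lawnmower") sweep. A minimal sketch over an axis-aligned bounding box at a fixed height, assuming a hypothetical `spacing` parameter chosen so adjacent fields of view overlap; handling an arbitrary bounding polygon would add a clipping step not shown here:

```python
def lawnmower_path(x_min, x_max, y_min, y_max, spacing, height):
    """Generate alternating-direction (x, y, z) waypoints that sweep an
    axis-aligned bounding box at a fixed flight height."""
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        # Reverse sweep direction on every other line to minimise transit.
        xs = (x_min, x_max) if left_to_right else (x_max, x_min)
        waypoints.append((xs[0], y, height))
        waypoints.append((xs[1], y, height))
        left_to_right = not left_to_right
        y += spacing
    return waypoints
```

In the described systems, the z-coordinate of each scan pose would then be adjusted per-waypoint to hold a consistent distance above the roof surface rather than a fixed altitude.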

SMART REFRIGERATED COUNTER SYSTEM
20170278247 · 2017-09-28

A refrigerated counter system includes a refrigerated counter, 3D filming means arranged in such a manner as to take 3D images of the products contained in the refrigerated counter and of an area in front of the refrigerated counter, a PC connected to the 3D filming means to process statistical data according to the 3D images taken by the 3D filming means, and a CPU connected to the PC to control the operating parts of the refrigerated counter according to the statistical data processed by the PC.

Vision system with automatic teat detection

A system including a 3D camera, memory, and a processor. The processor is configured to obtain the 3D image, identify one or more regions within the 3D image comprising depth values greater than a depth value threshold, and apply a thigh gap detection rule set to identify a thigh gap region. The processor is further configured to demarcate an access region within the thigh gap region, demarcate a teat detection region, partition the 3D image within the teat detection region to generate a plurality of image depth planes, and examine each of the plurality of image depth planes. The processor is further configured to identify one or more teat candidates within each image depth plane, apply a teat detection rule set to the one or more teat candidates to identify one or more teats, and determine position information for the one or more teats.

Method for performing out-focus using depth information and camera using the same

A camera and a method for extracting depth information by the camera having a first lens and a second lens are provided. The method includes photographing, by the first lens, a first image; photographing, by the second lens, a second image of the same scene; down-sampling the first image to the resolution of the second image if the first image has a higher resolution than the second image; correcting the down-sampled first image to match it to the second image; and extracting the depth information from the corrected down-sampled first image and the second image.
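The down-sampling and depth-extraction steps can be sketched generically. The block below is an illustration, not the claimed method: box-filter down-sampling by an integer factor, followed by per-scanline disparity estimation via sum-of-absolute-differences (SAD) block matching, from which depth follows as baseline × focal-length / disparity.

```python
import numpy as np

def downsample(img, factor):
    """Box-filter down-sampling of a 2-D image by an integer factor."""
    h, w = img.shape
    trimmed = img[:h - h % factor, :w - w % factor]
    return trimmed.reshape(h // factor, factor,
                           w // factor, factor).mean(axis=(1, 3))

def disparity_1d(left_row, right_row, window, max_disp):
    """Per-pixel horizontal disparity along one rectified scanline,
    chosen by minimum SAD cost over candidate shifts."""
    n = len(left_row)
    disp = np.zeros(n, dtype=int)
    for x in range(window, n - window):
        patch = left_row[x - window:x + window + 1]
        best, best_cost = 0, np.inf
        for d in range(0, min(max_disp, x - window) + 1):
            cand = right_row[x - d - window:x - d + window + 1]
            cost = np.abs(patch - cand).sum()
            if cost < best_cost:
                best, best_cost = d, cost
        disp[x] = best
    return disp
```

Real implementations add sub-pixel refinement and a consistency check between left-to-right and right-to-left matches; both are omitted here.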

Image pickup information output apparatus and lens apparatus equipped with same
09817207 · 2017-11-14

An image pickup information output apparatus outputs information about an image pickup condition derived from a combination of the positions/states of condition decision members, which serve as optical members that affect fulfillment of the condition. The apparatus comprises: a setting unit for setting a condition setting value as the condition to be fulfilled; a controller for driving one of the condition decision members to control its position/state based on the condition setting value; a condition calculator for calculating information about the condition, as a calculated condition, based on the combination of positions/states of the condition decision members; a determination unit for determining whether or not the condition setting value has changed; a decision unit for determining the information about the condition to be output, based on the calculated condition and the determination made by the determination unit as to whether or not the condition setting value has changed; and an output unit for outputting the information about the condition to be output determined by the decision unit.

Simulating an Infrared Emitter Array in a Video Monitoring Camera to Construct a Lookup Table for Depth Determination
20170270681 · 2017-09-21

A process generates a lookup table to estimate spatial depth in a visual scene. The process identifies subsets of illuminators of a camera system with image sensors and illuminators. The image sensors are associated with multiple pixels. For each pixel, and for each of multiple depths from the pixel, the process simulates a virtual surface at the depth. For each subset of the subsets of illuminators, the process simulates illumination of the virtual surface from the subset and determines an expected light intensity at the pixel from light reflected from the virtual surface due to the simulated illumination. The process forms intensity information from the expected light intensity determined for the pixel for each of the depths and each of the subsets. The process constructs a lookup table comprising the intensity information. The lookup table associates the intensity information for each pixel with the respective depth and the respective subset.
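The table-construction and lookup steps can be illustrated with a toy model for a single pixel. This sketch stands in for the per-pixel ray simulation the application describes: each illuminator is modelled as a point source offset laterally from the pixel's line of sight, reflected intensity follows an inverse-square law over the out-and-back path, and depth is recovered by nearest-neighbour matching of measured per-subset intensities against the table. The geometry and falloff model are illustrative assumptions.

```python
import numpy as np

def build_lut(depths, illuminator_subsets):
    """Expected intensity at one pixel for each (subset, depth) pair.

    `illuminator_subsets` is a list of subsets, each a list of lateral
    offsets (same units as depth) of its illuminators from the pixel's
    line of sight. A simplified stand-in for full ray simulation.
    """
    lut = np.empty((len(illuminator_subsets), len(depths)))
    for i, subset in enumerate(illuminator_subsets):
        for j, d in enumerate(depths):
            # Path length: illuminator -> virtual surface at depth d,
            # then surface -> pixel (straight back, length d).
            paths = [np.hypot(off, d) + d for off in subset]
            lut[i, j] = sum(1.0 / r**2 for r in paths)
    return lut

def estimate_depth(lut, depths, measured):
    """Nearest-neighbour depth lookup from measured per-subset intensities."""
    errs = np.abs(lut - np.asarray(measured)[:, None]).sum(axis=0)
    return float(depths[int(np.argmin(errs))])
```

Using more than one illuminator subset disambiguates depths that a single subset's intensity curve cannot, which is why the table is indexed by (pixel, depth, subset) rather than intensity alone.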