Patent classifications
H04N13/271
Medical environment monitoring system
A system and a method are described for monitoring a medical care environment. In one or more implementations, a method includes identifying a first subset of pixels within a field of view of a camera as representing a bed. The method also includes identifying a second subset of pixels within the field of view of the camera as representing an object (e.g., a subject such as a patient or medical personnel, or an item such as a bed, chair, patient tray, or medical equipment) proximal to the bed. The method also includes determining an orientation of the object within the bed.
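The relationship between the two pixel subsets can be sketched with a simple overlap test. This is a minimal illustration under assumed representations (a bed bounding box and a list of object pixel coordinates), not the patented method; the function name and `min_overlap` threshold are hypothetical.

```python
def object_in_bed(bed_box, object_pixels, min_overlap=0.5):
    """Decide whether an object lies within the bed region: the
    fraction of the object's pixels falling inside the bed's
    bounding box must exceed min_overlap.

    bed_box: (x0, y0, x1, y1) from the first pixel subset.
    object_pixels: [(x, y), ...] from the second pixel subset.
    """
    x0, y0, x1, y1 = bed_box
    inside = sum(1 for (x, y) in object_pixels
                 if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / len(object_pixels) >= min_overlap
```

A real system would derive both subsets from a trained segmentation model; the overlap ratio is just one plausible way to turn them into an "in bed / out of bed" decision.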
STEREOSCOPIC IMAGE GENERATION BOX, STEREOSCOPIC IMAGE DISPLAY METHOD AND STEREOSCOPIC IMAGE DISPLAY SYSTEM
A stereoscopic image generation box, a stereoscopic image display method and a stereoscopic image display system are provided. The stereoscopic image generation box includes an image receiving and detecting unit, a depth information analysis unit, an image processing unit, a synthesis unit and a data transmission unit. The image receiving and detecting unit is used for receiving a two-dimensional image from an image source. The depth information analysis unit is used for obtaining depth information according to the two-dimensional image. The image processing unit is used for converting the two-dimensional image into a left-eye image and a right-eye image according to the depth information. The synthesis unit is used for synthesizing the left-eye image and the right-eye image to generate a stereoscopic image. The data transmission unit is used for outputting the stereoscopic image to a display, so that the display can directly display the stereoscopic image.
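The 2D-to-stereo conversion step can be sketched as depth-image-based rendering: each pixel is shifted horizontally by a disparity that grows with nearness, producing a left-eye and a right-eye view. This is a crude one-scanline sketch of the general technique, not the patented image processing unit; function names, the `max_disp` parameter, and the hole-filling rule are assumptions.

```python
def warp_row(row, depth_row, eye, max_disp=4):
    """Warp one scanline toward the given eye (+1 = left eye,
    -1 = right eye): each source pixel shifts by eye * disparity/2,
    where disparity grows with nearness (depth 0 = far, 1 = near).
    Unfilled positions (disocclusions) inherit the previous filled
    value -- a deliberately crude hole-filling heuristic."""
    w = len(row)
    out = [None] * w
    for x in range(w):
        disp = round(depth_row[x] * max_disp)
        tx = x + eye * (disp // 2)
        if 0 <= tx < w:
            out[tx] = row[x]
    # Fill holes from the nearest filled pixel to the left.
    last = row[0]
    for x in range(w):
        if out[x] is None:
            out[x] = last
        else:
            last = out[x]
    return out

def synthesize_stereo_row(row, depth_row, max_disp=4):
    """Left-eye and right-eye versions of one image row."""
    return (warp_row(row, depth_row, +1, max_disp),
            warp_row(row, depth_row, -1, max_disp))
```

A full converter would apply this per scanline and per color channel, with far better inpainting of disoccluded regions.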
IMAGE SENSORS AND SENSING METHODS TO OBTAIN TIME-OF-FLIGHT AND PHASE DETECTION INFORMATION
Indirect time-of-flight (i-ToF) image sensor pixels, i-ToF image sensors including such pixels, stereo cameras including such image sensors, and sensing methods to obtain i-ToF detection and phase detection information using such image sensors and stereo cameras. An i-ToF image sensor pixel may comprise a plurality of sub-pixels, each sub-pixel including a photodiode, a single microlens covering the plurality of sub-pixels and a read-out circuit for extracting i-ToF phase signals of each sub-pixel individually.
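The i-ToF phase signals mentioned above are conventionally four correlation samples taken at 0°, 90°, 180°, and 270° shifts of the modulation signal, from which phase (and hence depth) is recovered. The sketch below shows one common 4-tap convention, not this patent's specific read-out circuit; the function name and sign convention are assumptions.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(q0, q90, q180, q270, f_mod):
    """Depth from four phase-shifted correlation samples of an
    amplitude-modulated light signal (standard 4-tap i-ToF scheme).

    phase = atan2(q270 - q90, q0 - q180) is the round-trip phase
    delay; depth = c * phase / (4 * pi * f_mod), unambiguous up to
    c / (2 * f_mod)."""
    phase = math.atan2(q270 - q90, q0 - q180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)
```

With per-sub-pixel read-out as described in the abstract, each sub-pixel yields its own `(q0, q90, q180, q270)` tuple, while the intensity difference between sub-pixels under the shared microlens supplies the phase-detection (autofocus-style) signal.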
Visual, depth and micro-vibration data extraction using a unified imaging device
A unified imaging device detects and classifies objects in a scene, including their motion and micro-vibrations. A plurality of images of the scene are captured by an imaging sensor of the unified imaging device, which comprises a light source adapted to project onto the scene a predefined structured light pattern constructed of a plurality of diffused light elements. Objects present in the scene are classified by visually analyzing the images; depth data of the objects is extracted by analyzing the position of diffused light elements reflected from the objects; and micro-vibrations of the objects are identified by analyzing changes in the speckle pattern of the reflected diffused light elements across at least some consecutive images. The classification, the depth data and the micro-vibration data are all derived from images captured by the same imaging sensor and are hence inherently registered in a common coordinate system.
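The micro-vibration step exploits the fact that a laser speckle pattern decorrelates when the reflecting surface moves by even a fraction of a wavelength. A minimal sketch of that idea, assuming flattened intensity patches from consecutive frames; the function names and the 0.9 threshold are illustrative, not from the patent:

```python
import math

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length
    intensity windows (flattened speckle patches)."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def vibration_detected(prev_patch, patch, threshold=0.9):
    """Flag a micro-vibration when the frame-to-frame speckle
    correlation drops below the threshold: surface motion scrambles
    the speckle pattern and decorrelates consecutive frames."""
    return ncc(prev_patch, patch) < threshold
```

Because the same sensor and pixel grid supply the classification, the depth, and the speckle patches, no extrinsic calibration between modalities is needed, which is the registration property the abstract emphasizes.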
Multi-channel depth estimation using census transforms
A depth estimation system is described that is capable of determining depth information using two images from two cameras. A first camera captures a first image and a second camera captures a second image, both images including a plurality of light channels. A scan direction is selected from a plurality of scan directions. For the selected scan direction, along each of a plurality of scanlines, the system compares pixels from the first image to pixels from the second image. The comparison is based on calculating a census transform for each pixel in the first image and a census transform for each pixel in the second image. This comparison is used to determine a stereo correspondence between the pixels in the first image and the pixels in the second image. The system generates a depth map based on the stereo correspondence.
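The census transform at the heart of this comparison encodes each pixel as a bit string recording which neighbors are darker than the center, and the per-pixel matching cost is the Hamming distance between the two descriptors. A minimal single-channel sketch of the standard technique (not the patented multi-channel system); the 3×3 window and function names are assumptions:

```python
def census_transform(img, y, x):
    """3x3 census descriptor at (y, x): one bit per neighbor,
    set when that neighbor is darker than the center pixel."""
    center = img[y][x]
    desc = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the center pixel itself
            desc = (desc << 1) | (1 if img[y + dy][x + dx] < center else 0)
    return desc

def hamming(a, b):
    """Number of differing bits between two census descriptors."""
    return bin(a ^ b).count("1")

def match_cost(left, right, y, xl, xr):
    """Matching cost between left-image pixel (y, xl) and a
    right-image candidate (y, xr): lower cost = better match."""
    return hamming(census_transform(left, y, xl),
                   census_transform(right, y, xr))
```

Because the descriptor depends only on the ordering of intensities, not their absolute values, the cost is robust to exposure and gain differences between the two cameras; the multi-channel variant in the abstract would compute and aggregate such costs per light channel along each scanline.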
METHOD AND APPARATUS FOR COLOUR IMAGING A THREE-DIMENSIONAL STRUCTURE
A device for determining the surface topology and associated color of a structure, such as a teeth segment, includes a scanner for providing depth data for points along a two-dimensional array substantially orthogonal to the depth direction, and an image acquisition means for providing color data for each of the points of the array, while the spatial disposition of the device with respect to the structure is maintained substantially unchanged. A processor combines the color data and depth data for each point in the array, thereby providing a three-dimensional color virtual model of the surface of the structure. A corresponding method for determining the surface topology and associated color of a structure is also provided.
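Combining per-point depth and color into a 3D color model amounts to back-projecting the depth array through the camera model and attaching the co-registered color sample. A minimal sketch assuming a pinhole model with intrinsics `(fx, fy, cx, cy)`; the function name and output format are illustrative, not from the patent:

```python
def colored_point_cloud(depth, color, fx, fy, cx, cy):
    """Back-project a depth map through a pinhole camera and attach
    the per-pixel color sampled at the same array position, yielding
    (x, y, z, color) points in the camera frame.

    depth[v][u] is the depth at pixel (u, v); color[v][u] is the
    corresponding color sample (registration is free because both
    come from the same array, as in the abstract)."""
    points = []
    for v, depth_row in enumerate(depth):
        for u, z in enumerate(depth_row):
            if z <= 0:
                continue  # no valid depth at this pixel
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z, color[v][u]))
    return points
```

Keeping the device stationary between the depth and color acquisitions, as the abstract requires, is what makes this trivial per-index pairing valid; any motion between the two captures would demand an explicit registration step.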