Patent classifications
G06T3/0043
SLICING 2D DATA-BASED PATTERN APPLICATION METHOD FOR REDUCING BINDER USAGE AMOUNT IN SAND BINDER JETTING
Provided is a slicing 2D data-based pattern application method for generating an output code in which the inside of a model is filled with a pattern, so as to reduce binder usage while maintaining strength and shape in sand binder jetting additive manufacturing. A slicing 2D data-based pattern application method according to an embodiment of the present invention comprises the steps of: generating 2D data by slicing an output model; generating an inner pattern in at least one of the layers forming the output model in consideration of the set thickness of the layers and their outer thickness; and generating an output code by applying the generated inner pattern. Accordingly, the cost of producing an additive manufacturing output can be reduced by lowering binder usage through the application of the inner pattern. In addition, reducing binder usage enables a mold to be broken out with less force than conventionally required, increases the proportion of molding sand that can be reused, and reduces the cost of recovering and treating the molding sand. Furthermore, the resulting increase in ventilation can reduce casting defects.
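The inner-pattern step above can be sketched per layer as follows. This is a minimal illustration, not the patented method: the grid pattern, the wall thickness, and the function names are assumptions; the input is a 2D binary mask from the slicing step.

```python
def apply_inner_pattern(layer, wall=2, pitch=3):
    """Return a copy of a boolean 2D layer mask in which the outer
    `wall` pixels stay solid and the interior is thinned to a sparse
    grid, so fewer pixels receive binder. `layer` is a list of rows
    of 0/1 values; the grid lattice is an illustrative choice."""
    h, w = len(layer), len(layer[0])

    def is_interior(y, x):
        # A pixel is interior if every pixel within `wall` steps of it
        # is inside the image and solid (i.e., it is not part of the shell).
        for dy in range(-wall, wall + 1):
            for dx in range(-wall, wall + 1):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not layer[ny][nx]:
                    return False
        return True

    out = []
    for y in range(h):
        row = []
        for x in range(w):
            if layer[y][x] and is_interior(y, x):
                # Keep only pixels lying on the sparse grid lattice.
                row.append(1 if (y % pitch == 0 or x % pitch == 0) else 0)
            else:
                row.append(layer[y][x])
        out.append(row)
    return out
```

The shell stays solid so the mold keeps its shape, while the sparse interior accounts for the reduced binder usage the abstract claims.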
Equatorial stitching of hemispherical images in a spherical image capture system
Hyper-hemispherical images may be combined to generate a rectangular projection of a spherical image having an equatorial stitch line along a line of lowest distortion in the two images. First and second circular images are received representing respective hyper-hemispherical fields of view. A video processing device may project each circular image to a respective rectangular image by mapping the outer edge of the circular image to a first edge of the rectangular image and mapping the center point of the circular image to a second edge of the rectangular image. The rectangular images may be stitched together along the edges corresponding to the outer edges of the original circular images.
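The circular-to-rectangular mapping described above can be sketched as an inverse lookup: for each rectangular pixel, find the source point in the circular image, with the top row of the rectangle corresponding to the circle's outer edge and the bottom row to its center. The linear radial mapping and the function name are illustrative assumptions, not the patented projection.

```python
import math

def rect_to_circle(u, v, rect_w, rect_h, radius):
    """Inverse mapping for unwrapping a circular (hyper-hemispherical)
    image into a rectangle: row v = 0 maps to the circle's outer edge
    and row v = rect_h - 1 to its center point. Returns (x, y) in the
    circular image's coordinate frame, centered at (0, 0)."""
    theta = 2.0 * math.pi * u / rect_w     # column -> azimuth angle
    r = radius * (1.0 - v / (rect_h - 1))  # top row -> outer edge, bottom -> center
    return (r * math.cos(theta), r * math.sin(theta))
```

Because both rectangles place the lens's outer edge (roughly the equator of the captured sphere) on one straight edge, the two can then be stitched along that low-distortion line.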
Peripheral inspection system and method
A method of inspecting two or more sides of an object is provided. The method includes generating a single set of image data covering two or more sides of the object, such as by using spherical mirror segments that project all sides of the object onto a single image and generating an X-by-Y array of image data of that image. The projection in the image data is then compensated for, either by selecting inspection processes that locate defects of the object directly in the projected image data or by converting the image data from the projected inspection coordinates to Cartesian coordinates. Predetermined inspection processes are then performed on the compensated image data, such as inspection processes optimized for use with the projected image data, or standard Cartesian inspection processes applied after the projected image data has been converted into a Cartesian format.
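The conversion from projected to Cartesian coordinates can be sketched as unwrapping the annular region of the mirror image (where the object's sides appear as a ring) into a straight strip that ordinary row/column inspection routines can process. The geometry parameters, nearest-neighbour sampling, and function name are illustrative assumptions.

```python
import math

def unwrap_ring(img, cx, cy, r_in, r_out, out_w, out_h):
    """Resample the annular region of a single top-down image (where a
    spherical mirror projects all sides of an object) into a Cartesian
    strip. `img` is a list of pixel rows; (cx, cy) is the ring center,
    r_in/r_out the inner and outer radii of the annulus."""
    strip = []
    for v in range(out_h):
        # Output row v corresponds to one radius between r_in and r_out.
        r = r_in + (r_out - r_in) * v / max(out_h - 1, 1)
        row = []
        for u in range(out_w):
            theta = 2.0 * math.pi * u / out_w
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            row.append(img[y][x])  # nearest-neighbour sample
        strip.append(row)
    return strip
```

After unwrapping, the full periphery of the object lies in one rectangular array, so a single pass of Cartesian defect detection inspects every side at once.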
Image processing method and device for projecting image of virtual reality content
The present invention relates to technologies for sensor networks, machine-to-machine (M2M) communication, machine-type communication (MTC), and the Internet of Things (IoT), and can be utilized for intelligent services based on these technologies (smart home, smart building, smart city, smart car or connected car, health care, digital education, retail, security and safety-related services, and the like). The present invention relates to an efficient image processing method and device for virtual reality content. According to one embodiment of the present invention, the image processing method for projecting an image of virtual reality content comprises the steps of: acquiring a first planar image projected by dividing a spherical image, which expresses a 360-degree image, into a front part and a rear part; generating a second planar image projected by sampling the first planar image on the basis of pixel position; and encoding the second planar image.
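The abstract does not specify the sampling scheme, but one plausible reading of "sampling on the basis of pixel position" can be sketched as latitude-dependent subsampling: rows of a projected spherical image near the poles carry less horizontal information, so they are sampled more sparsely before encoding. Everything below is an illustrative assumption, not the claimed method.

```python
import math

def resample_rows_by_latitude(img):
    """Subsample each row of a projected spherical image in proportion
    to the cosine of its latitude, so rows near the poles (which are
    horizontally stretched by the projection) keep fewer samples.
    `img` is a list of pixel rows; returns a ragged list of rows."""
    h = len(img)
    out = []
    for v, row in enumerate(img):
        # Map row index to latitude in [-pi/2, pi/2] at pixel centers.
        lat = (v + 0.5) / h * math.pi - math.pi / 2
        n = max(1, int(round(len(row) * math.cos(lat))))
        # Evenly pick n samples from the original row.
        out.append([row[int(u * len(row) / n)] for u in range(n)])
    return out
```

The total pixel count of the second image is smaller than that of the first, which is where the encoding-efficiency benefit would come from.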
TRAINING DEVICE, PROCESSING SYSTEM, TRAINING METHOD, PROCESSING METHOD, AND STORAGE MEDIUM
According to one embodiment, a training device is configured to use a first image to generate a second image. A meter is visible in the first image. The meter includes a pointer and a plurality of graduations. The pointer is relatively rotated with respect to the plurality of graduations in the second image. The training device is further configured to use the second image to train a first model that processes a meter image.
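When the pointer is synthetically rotated relative to the graduations to produce second images, each generated image needs a known label for training. A minimal sketch of that labeling step is below; the angle limits, value scale, and function name are assumptions, since the abstract does not specify the meter.

```python
def pointer_reading(angle_deg, min_angle=-45.0, max_angle=225.0,
                    min_value=0.0, max_value=100.0):
    """Map a pointer angle to a meter reading by linear interpolation
    over the graduation arc, so each synthesized pointer position
    comes with a ground-truth value for training the first model."""
    frac = (angle_deg - min_angle) / (max_angle - min_angle)
    return min_value + frac * (max_value - min_value)
```

Rotating the pointer through many angles and pairing each rendered image with `pointer_reading(angle)` yields a labeled training set from a single captured first image.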
Method for playing panoramic picture and apparatus for playing panoramic picture
Provided is a method for playing a panoramic picture, the method comprising: acquiring a corresponding non-planar panoramic picture according to a picture acquisition instruction; acquiring picture content from the non-planar panoramic picture, and determining a main presentation axis and a presentation centre of the picture content; creating a cylindrical projection plane by using the main presentation axis and the presentation centre, wherein a cylinder extension direction of the cylindrical projection plane is substantially perpendicular to the main presentation axis, and the centre of the cylindrical projection plane essentially overlaps the presentation centre; converting the non-planar panoramic picture into a cylindrical panoramic picture with the cylindrical projection plane; and executing a playing operation on the cylindrical panoramic picture by using a planar display apparatus.
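The cylindrical conversion step can be sketched as a per-direction projection: each direction on the panoramic sphere (expressed as longitude and latitude about the main presentation axis) maps to a point on the cylindrical projection plane. This is a simplified illustrative model using the classic central cylindrical projection; the abstract does not fix the exact projection formula.

```python
import math

def sphere_to_cylinder(lon, lat, radius=1.0):
    """Project a direction on the panoramic sphere (longitude and
    latitude in radians, measured about the main presentation axis)
    onto a cylinder of the given radius around that axis. Horizontal
    position follows arc length around the cylinder; vertical position
    uses tan(lat), so the horizon maps to the cylinder's mid-line."""
    u = radius * lon             # arc length around the cylinder
    v = radius * math.tan(lat)   # height along the cylinder axis
    return (u, v)
```

Because the cylinder's axis is perpendicular to the main presentation axis's equatorial band, content along that band is rendered with little distortion when the cylindrical picture is played on a planar display.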
HYBRID GRAPHICS AND PIXEL DOMAIN ARCHITECTURE FOR 360 DEGREE VIDEO
In a method and apparatus for processing video data, one or more processors are configured to encode a portion of stored video data in a pixel domain to generate pixel domain video data, a first graphics processing unit is configured to process the video data in a graphics domain to generate graphics domain video data, and an interface transmits the graphics domain video data and the pixel domain video data. One or more processors are configured to parse the video data into a graphics stream and an audio-video stream and decode the video data, a sensor senses movement adaptations of a user, and a second graphics processing unit is configured to generate a canvas on a spherical surface with texture information received from the graphics stream, and render a field of view based on the sensed movement adaptations of the user.
Conversion and Pre-Processing of Spherical Video for Streaming and Rendering
In one embodiment, a method receives spherical content for video and generates face images from the spherical content to represent an image in the video. A two-dimensional sheet for the face images is generated. The size of the face images is reduced, and a pixel frame is added around each of the face images on the sheet. A plurality of gaps are also added on the sheet between the edges of neighboring face images. The method then adds gap content in the plurality of gaps, where the gap content is based on content in an area proximate to the gaps. The method encodes the face images, the pixel frame, and the gap content on the sheet and sends the encoded sheet to a decoder. The face images are decoded for placement on an object structure to display the spherical content.
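The gap-content step can be sketched for two neighboring face images as follows: each gap column copies the nearest face's edge pixels, so encoder blocks straddling the seam see content similar to their surroundings rather than a hard discontinuity. The fill rule, gap width, and function name are illustrative assumptions.

```python
def lay_out_with_gap(left, right, gap=2):
    """Place two equally sized face images side by side on a sheet with
    a gap between them. The left half of the gap extends the left
    face's last column and the right half extends the right face's
    first column, approximating 'gap content based on content in an
    area proximate to the gaps'. Images are lists of pixel rows."""
    sheet = []
    for lrow, rrow in zip(left, right):
        filler = []
        for g in range(gap):
            # Nearer face supplies the fill pixel for this gap column.
            filler.append(lrow[-1] if g < gap // 2 else rrow[0])
        sheet.append(lrow + filler + rrow)
    return sheet
```

After decoding, only the face-image regions are sampled onto the object structure, so any compression artifacts caused by the seam land in the discarded gap and frame pixels instead of the visible faces.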
Method and system for monitoring and controlling online beverage can color decoration specification
A system is provided in an automated machine vision inspection environment. The system includes inspection cameras and a spectrophotometer or spectrometer, both implemented to be used online to detect the absolute colors of printed portions of items being inspected. The spectrophotometer or spectrometer is aimed at a fixed spot within the field of view of one of the digital cameras of an inspection system, which has a priori knowledge of exactly where the spectrophotometer or spectrometer is aimed. The image taken by the camera is used to determine whether the desired measurement spot on the decoration pattern was actually measured by the most recent snap of spectrophotometric data. When the vision system determines that the spectrophotometer or spectrometer was truly aimed at the correct region when it captured its inspection data, it instructs the system to accept the color measurement and logs the related data and information accordingly. If the correct spot is not measured, the data may simply be discarded or may be kept for other uses.
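The accept/discard decision above reduces to a simple geometric check once the camera image has located the actual aim point: accept the color measurement only if that point lies close enough to the intended spot on the decoration pattern. The distance test, tolerance parameter, and function name are illustrative assumptions.

```python
def accept_color_measurement(aim_xy, target_xy, tolerance):
    """Return True if the point the spectrophotometer actually aimed at
    (as located by the vision system in the camera image) lies within
    `tolerance` pixels of the intended measurement spot on the
    decoration pattern, meaning the color reading should be logged."""
    dx = aim_xy[0] - target_xy[0]
    dy = aim_xy[1] - target_xy[1]
    return dx * dx + dy * dy <= tolerance * tolerance
```

A rejected reading would then be discarded or logged separately, per the abstract, rather than contaminating the online color-control statistics.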