Abstract:
A master device that provides an image to a slave device providing a virtual reality service is provided. The master device includes: a content input configured to receive an input stereoscopic image; a communicator configured to perform communication with the slave device providing the virtual reality service; and a processor configured to determine, on the basis of motion information received from the slave device, a viewpoint region in the input stereoscopic image corresponding to the motion state of the slave device, and to control the communicator to transmit an image of the determined viewpoint region to the slave device.
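The viewpoint-region selection described above can be sketched as follows. The equirectangular frame layout, the field-of-view value, and all function and parameter names here are illustrative assumptions, not taken from the abstract.

```python
# Hypothetical sketch: select the sub-region of an equirectangular
# stereoscopic frame that corresponds to the slave device's reported pose.
# Frame layout, FOV, and names are assumptions for illustration only.

def viewpoint_region(frame_w, frame_h, yaw_deg, pitch_deg, fov_deg=90):
    """Return (x, y, w, h) of the crop matching the reported motion state.

    yaw_deg in [0, 360), pitch_deg in [-90, 90].
    """
    crop_w = int(frame_w * fov_deg / 360.0)
    crop_h = int(frame_h * fov_deg / 180.0)
    # Horizontal position wraps around the panorama.
    x = int((yaw_deg % 360.0) / 360.0 * frame_w)
    # Vertical position is clamped so the crop stays inside the frame.
    y = int((pitch_deg + 90.0) / 180.0 * frame_h) - crop_h // 2
    y = max(0, min(frame_h - crop_h, y))
    return x, y, crop_w, crop_h
```

The master would then encode and transmit only this sub-region rather than the full stereoscopic frame.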
Abstract:
Provided is a method of decoding a multiview video, the method involving receiving multiview image streams that constitute the multiview video, obtaining picture order count (POC) information of a base-view picture from a predetermined data unit header that includes information of the base-view picture included in a base-view image stream, determining a POC of the base-view picture by using the POC information of the base-view picture, based on an instantaneous decoding refresh (IDR) picture of the base view, and determining, by using the POC of the base-view picture, a POC of an additional-view picture that is included in the same access unit as the base-view picture and is transmitted with it.
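The POC rule described above can be sketched as follows: the base-view POC is reconstructed from signalled low-order bits relative to the most recent base-view IDR picture, and an additional-view picture in the same access unit simply inherits that POC. The wrap-around arithmetic follows the usual HEVC-style LSB/MSB convention; all names and the LSB range are assumptions for illustration.

```python
# Illustrative sketch, not the patented method itself: reconstruct a full
# POC from its signalled LSBs (wrap-aware), with POC resetting at an IDR.

def base_view_poc(poc_lsb, prev_poc, max_poc_lsb=16, is_idr=False):
    """Reconstruct a full POC from its signalled LSBs."""
    if is_idr:
        return 0  # POC restarts at an IDR picture
    prev_msb = prev_poc - (prev_poc % max_poc_lsb)
    prev_lsb = prev_poc % max_poc_lsb
    if poc_lsb < prev_lsb and (prev_lsb - poc_lsb) >= max_poc_lsb // 2:
        prev_msb += max_poc_lsb          # LSBs wrapped forward
    elif poc_lsb > prev_lsb and (poc_lsb - prev_lsb) > max_poc_lsb // 2:
        prev_msb -= max_poc_lsb          # LSBs wrapped backward
    return prev_msb + poc_lsb

def additional_view_poc(base_poc):
    """Pictures in the same access unit share the base-view POC."""
    return base_poc
```

This keeps all views of one access unit aligned for output ordering without signalling a separate POC per view.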
Abstract:
An image processing method and apparatus and an image generating method and apparatus, the image processing method outputting video data that is a two-dimensional (2D) image as either the 2D image or a three-dimensional (3D) image, and including: extracting information about the video data from metadata associated with the video data; and outputting the video data as the 2D image or the 3D image by using the extracted information about the video data.
Abstract:
An image processing method and an image processing device that can improve sharpness by intentionally producing binocular rivalry are provided. An image processing device 100 includes a right eye image acquiring unit 101 that generates a right eye image by performing a correction processing on an input image, a left eye image acquiring unit 102 that generates a left eye image, which produces binocular rivalry with the right eye image, by performing a correction processing different from that applied to the right eye image, and a multi-eye display unit that displays the right eye image and the left eye image to different viewpoints.
Abstract:
A method for reducing visual ghost artifacts in plano-stereoscopic image transmissions, wherein the plano-stereoscopic image transmissions originate from or pass through a device that provides ghost artifact compensation, comprising receiving left-eye image data for a non-linear image representation of a left-eye image, receiving right-eye image data for a non-linear image representation of a right-eye image, transforming the left-eye and right-eye image data to determine linear left-eye and right-eye image representations, respectively, applying at least one ghosting coefficient to calculate a left-eye or right-eye ghost contribution from the linear left-eye or right-eye image representation, respectively; and subtracting the left-eye or right-eye ghost contribution from the linear right-eye or left-eye image representation, respectively, to determine respective compensated linear right-eye or left-eye image data.
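The compensation pipeline described above (decode to linear light, subtract a scaled leakage from the opposite eye, re-encode) can be sketched numerically as follows. The gamma value and ghosting coefficient are illustrative assumptions, not values from the abstract.

```python
# Minimal sketch of ghost compensation for a plano-stereoscopic pair,
# assuming a simple power-law transfer function. The gamma and the
# ghosting coefficient below are illustrative, not from the patent.

def compensate(left, right, ghost_coeff=0.05, gamma=2.2):
    """left/right: lists of non-linear samples in [0, 1].

    Returns ghost-compensated non-linear (left, right) samples.
    """
    lin_l = [v ** gamma for v in left]     # transform to linear light
    lin_r = [v ** gamma for v in right]
    out_l, out_r = [], []
    for l, r in zip(lin_l, lin_r):
        # Subtract the predicted leakage from the opposite eye,
        # clamping at zero (compensation cannot go negative).
        cl = max(0.0, l - ghost_coeff * r)
        cr = max(0.0, r - ghost_coeff * l)
        out_l.append(cl ** (1.0 / gamma))  # back to non-linear encoding
        out_r.append(cr ** (1.0 / gamma))
    return out_l, out_r
```

Working in linear light is the essential step: subtracting the coefficient directly in the non-linear (gamma-encoded) domain would mispredict the physical leakage.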
Abstract:
A display apparatus includes a display unit (7) that is capable of displaying independent pictures for a plurality of viewing directions on a single screen based on a video signal, a conversion processing unit (220) that generates, through a conversion process, a plurality of new pixel data based on a plurality of original pixel data that constitute a picture source signal, and an extraction processing unit (200) that extracts a predetermined number of pixel data for generating the video signal from the new pixel data that are conversion-processed by the conversion processing unit. The conversion processing unit (220) generates the new pixel data based on arbitrary original pixel data and at least the original pixel data adjacent to it, taking into account the extraction of the pixel data by the extraction processing unit (200).
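The convert-then-extract pipeline described above can be sketched as follows. A simple 2-tap average stands in for the conversion process so that the subsequent extraction (subsampling one viewing direction) does not alias; the filter, the stride, and all names are illustrative assumptions.

```python
# Hypothetical sketch of the conversion and extraction units: each new
# pixel combines an original pixel with its neighbour (2-tap average is
# an assumed stand-in), then one viewing direction is subsampled out.

def convert(pixels):
    """New pixel i = average of original pixels i and i+1 (edge-clamped)."""
    return [(pixels[i] + pixels[min(i + 1, len(pixels) - 1)]) / 2.0
            for i in range(len(pixels))]

def extract(pixels, views=2, view=0):
    """Pick every `views`-th converted pixel for one viewing direction."""
    return pixels[view::views]
```

Because the conversion already blends each pixel with its neighbour, the extracted per-view picture retains information from the pixels the extraction skips.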
Abstract:
In the technique of displaying to a viewer an image stereoscopically representing a three-dimensional object to be displayed, the depths of the to-be-displayed object are efficiently represented using a stereoscopic image. To this end, in an image display device 100 of a retinal scanning type, a wavefront modulating device 78 modulates the wavefront curvature of a light beam for display of the to-be-displayed image, per partition of the image (e.g., per pixel). As a result, the depths of the three-dimensional to-be-displayed object are represented. The depth signal used for representing the depths corresponds to depth data that was used in performing hidden surface elimination by a Z-buffer technique on polygon data three-dimensionally and geometrically representative of the to-be-displayed object, the hidden surface elimination being performed in a rendering unit 34 of a signal processing device 12.
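The mapping from a Z-buffer value to a per-pixel wavefront curvature can be sketched as follows. The near/far planes and the perspective-depth linearization are standard-graphics assumptions for illustration, not values from the abstract.

```python
# Hypothetical sketch: a normalized Z-buffer depth is mapped back to a
# viewing distance and then to wavefront curvature in diopters (1/metre).
# Near/far planes and the depth linearization are assumed, not from the text.

def depth_to_curvature(z_norm, near_m=0.25, far_m=10.0):
    """z_norm in [0, 1] from a Z-buffer; returns curvature in diopters."""
    # Invert the usual perspective depth mapping to recover distance.
    dist = (near_m * far_m) / (far_m - z_norm * (far_m - near_m))
    return 1.0 / dist
```

A wavefront modulator driven by this value would present near pixels (high curvature) and far pixels (curvature approaching zero) at their correct accommodation distances.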
Abstract:
A three-dimensional (3-D) machine-vision safety solution involving a method and apparatus for performing high-integrity, high-efficiency machine vision. The machine-vision safety solution converts two-dimensional video pixel data into 3-D point data that is used to characterize specific 3-D objects, their orientation, and other object characteristics for any object, to provide a video safety "curtain." The 3-D machine-vision safety-solution apparatus includes an image acquisition device arranged to view a target scene stereoscopically and pass the resulting multiple video output signals to a computer for further processing. The multiple video output signals are connected to the input of a video processor adapted to accept them. Video images from each camera are then synchronously sampled, captured, and stored in a memory associated with a general-purpose processor. The digitized image, in the form of pixel information, can then be stored, manipulated, and otherwise processed in accordance with the capabilities of the vision system. The machine-vision safety-solution method and apparatus involve two phases of operation: training and run-time.
Abstract:
The system and the method are intended to enable robots, devices, tools, etc. to see the environment in which they operate by means of a pair of identical cameras that are aligned, compatible, and coordinated, with their fields of sight (photographing) parallel and adjusted in parallel divergence (M) between each and every line in the fields of sight, and to identify anything viewed by the cameras immediately and at the rate of photographing/filming. By means of the system and the method, the computer vision receives the pictures from the cameras into a place designated for this purpose; a backup of the field of sight is created from the pictures stored in the spatial memory; and the distance to each and every point in the picture is calculated, as well as the dimensions of the shapes, whose various features are registered. The system includes a memory register for movement identification at time intervals and for calculation of the movement, motion, speed, and direction of each and every shape in the pictures of the leading camera, received after color filtering, a register of basic shapes, and data table(s) such as the "true" table. Said registers serve to detect data, features, and definitions and to draw conclusions for the key elements. The data, features, definitions, and conclusions make it possible to compose keys for the unidentified shapes that are compatible with the register of recognized stored shapes, so that identification of the unidentified shapes is carried out fully and almost immediately.
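For a pair of identical, parallel-aligned cameras, the per-point distance calculation described above reduces to classical stereo triangulation: depth = focal length × baseline / disparity. The parameter values below are illustrative assumptions, not taken from the text.

```python
# Sketch of per-point distance from a parallel stereo pair via standard
# triangulation. Focal length (pixels) and baseline (metres) are assumed
# example values, not parameters from the patent.

def point_distance(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Distance (metres) to a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point at (or beyond) infinity
    return focal_px * baseline_m / disparity_px
```

Applying this per matched point at the camera frame rate yields the immediate, per-frame range map the abstract describes.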