Abstract:
Provided are methods and apparatuses for calibrating a three-dimensional (3D) image in a tiled display including a display panel and a plurality of lens arrays. The method includes capturing a plurality of structured light images displayed on the display panel, calibrating a geometric model of the tiled display based on the plurality of structured light images, generating a ray model based on the calibrated geometric model of the tiled display, and rendering an image based on the ray model.
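The ray model described above can be sketched as a pinhole-per-lens mapping: each display pixel emits a ray through the centre of its nearest microlens. Everything here (the function name, the nearest-lens assignment, the `gap` between panel and lens plane) is an illustrative assumption; the abstract only states that a ray model is generated from the calibrated geometry.

```python
import numpy as np

def pixel_rays(pixel_xy, lens_centers, gap):
    """Build a simple pinhole-per-lens ray model.

    pixel_xy: (N, 2) pixel positions on the panel plane.
    lens_centers: (M, 2) microlens centres, a distance `gap` in
    front of the panel. Returns (origins, directions) per pixel.
    A minimal sketch; the calibrated geometric model in the
    abstract would supply lens_centers and gap.
    """
    pixel_xy = np.asarray(pixel_xy, float)
    # assign each pixel to its nearest lens centre
    idx = np.argmin(
        np.linalg.norm(pixel_xy[:, None, :] - lens_centers[None], axis=2),
        axis=1)
    centers = lens_centers[idx]
    # ray starts at the lens centre (z = 0 on the lens plane) ...
    origins = np.concatenate([centers, np.zeros((len(centers), 1))], axis=1)
    # ... heading along the pixel-to-lens-centre line, unit length
    directions = np.concatenate(
        [centers - pixel_xy, np.full((len(pixel_xy), 1), float(gap))], axis=1)
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    return origins, directions
```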
Abstract:
A method and an apparatus for obtaining a depth image using a time-of-flight sensor may generate the depth image from a plurality of images in which a motion blur area caused by movement of an object has been corrected. Because the motion blur area is corrected after the initial phase difference of the emitted light is compensated, the accuracy of the depth image may be enhanced.
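The underlying time-of-flight depth computation can be sketched with the standard four-phase (4-tap) scheme; the tap names, sign convention, and modulation frequency below are assumptions, and the abstract's phase-difference compensation and motion-blur correction are not modelled here.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def tof_depth(i0, i90, i180, i270, f_mod=20e6):
    """Estimate depth from four phase-shifted ToF measurements.

    i0..i270 are intensities sampled at 0/90/180/270 degree offsets
    of the modulated emitter, assuming i_k = B + A*cos(phase + k).
    f_mod is the (illustrative) modulation frequency in Hz.
    """
    phase = np.arctan2(i270 - i90, i0 - i180)  # wrapped to [-pi, pi]
    phase = np.mod(phase, 2 * np.pi)           # map to [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)     # depth in metres
```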
Abstract:
A method of processing a stereoscopic video includes determining whether a current frame of a stereoscopic video is a video segment boundary frame; determining whether an image error is included in the current frame when the current frame is the video segment boundary frame; and processing the current frame by removing, from the current frame, a post inserted object (PIO) included in the current frame when the image error is included in the current frame.
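The segment-boundary test in the first step could, for example, be a gray-histogram distance between consecutive frames; this detector and its threshold are illustrative assumptions, not the method claimed in the abstract.

```python
import numpy as np

def is_segment_boundary(prev_frame, cur_frame, thresh=0.5):
    """Flag a video-segment boundary by gray-histogram distance.

    A common shot-boundary heuristic: compare 64-bin grayscale
    histograms of consecutive frames by total-variation distance
    (0 = identical distributions, 1 = disjoint). Frames are uint8
    grayscale arrays; the bin count and threshold are illustrative.
    """
    h1, _ = np.histogram(prev_frame, bins=64, range=(0, 256), density=True)
    h2, _ = np.histogram(cur_frame, bins=64, range=(0, 256), density=True)
    bin_width = 256 / 64
    return 0.5 * np.abs(h1 - h2).sum() * bin_width >= thresh
```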
Abstract:
Provided are an apparatus and a method for calibrating a multi-layer three-dimensional (3D) display (MLD) that may control a 3D display including a plurality of display layers to display a first image on one of the plurality of display layers, acquire a second image by capturing the first image, calculate a homography between the display layer and an image capturer based on the first image and the second image, and calculate geometric relations of the display layer with respect to the image capturer based on the calculated homography.
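The homography between a display layer and the image capturer can be estimated from point correspondences with the standard direct linear transform (DLT); this sketch assumes matched 2D points are already available and does not implement the abstract's specific capture procedure.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (DLT).

    src, dst: (N, 2) arrays of matched points, N >= 4, no 3 of the
    4 collinear. Stacks the two linear constraints each match puts
    on the 9 entries of H and takes the SVD null vector.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    H = vt[-1].reshape(3, 3)     # right singular vector of least value
    return H / H[2, 2]           # fix scale so H[2,2] == 1
```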
Abstract:
A three-dimensional (3D) display apparatus and method are provided. The 3D display apparatus may include a display screen configured to display each of a plurality of sub-images included in a single frame of a 3D image using time-division multiplexing (TDM), a polarizer configured to polarize each of the displayed sub-images by changing a polarization direction using TDM, in synchronization with the display screen, and microlens arrays arranged in a plurality of layers and configured to sequentially refract the polarized sub-images, respectively.
Abstract:
A display device may include a plurality of display panels, and light path adjusters disposed on upper portions of the plurality of display panels. The light path adjusters include a lens array configured to transfer different beams emitted from the plurality of display panels to each eye of a user, and a joint removal structure disposed on one side of the light path adjusters corresponding to a connecting joint that connects the plurality of display panels. The joint removal structure is configured to refract the beams emitted from the plurality of display panels.
Abstract:
A feature point positioning apparatus includes a memory storing computer-executable instructions, and one or more processors configured to execute the computer-executable instructions such that the one or more processors are configured to iteratively update a first form coefficient based on a nonlinear feature extracted from an image and on a regression factor matrix obtained through training, and to detect a position of a feature point of the image based on the updated first form coefficient and on a statistical form model obtained through training.
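The iterative coefficient update could follow a standard additive cascaded-regression rule; the feature callback, regressor matrices, and update form below are assumptions, since the abstract does not specify them.

```python
import numpy as np

def fit_shape(extract_features, R_list, b_list, p0):
    """Iteratively refine a shape coefficient p (cascaded regression).

    extract_features(p) -> nonlinear feature vector for the current
    shape; R_list/b_list hold per-iteration regressor matrices and
    offsets learned offline. All names are illustrative: a standard
    additive cascade, p <- p + R f + b, is assumed here.
    """
    p = np.asarray(p0, float)
    for R, b in zip(R_list, b_list):
        f = extract_features(p)   # nonlinear feature at current shape
        p = p + R @ f + b         # regress an update from the feature
    return p
```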
Abstract:
A method of determining eye position information includes identifying an eye area in a facial image; verifying a two-dimensional (2D) feature in the eye area; and performing a determination operation including determining a three-dimensional (3D) target model based on the 2D feature and determining 3D position information based on the 3D target model.
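Determining 3D position information from a fitted model could, in the simplest case, use a weak-perspective alignment of the 3D eye model to the 2D features; this rotation-free toy sketch (the names and focal length are assumptions) only illustrates the scale-to-depth conversion, not the abstract's full 3D target model fitting.

```python
import numpy as np

def eye_depth_weak_perspective(model3d, pts2d, focal):
    """Recover eye distance by aligning a 3D eye model to 2D features.

    Fits pts2d ~ s * model_xy + t (scaled orthographic, rotation
    omitted for brevity) by least squares, then converts the scale
    to depth via z = focal / s. Returns (depth, 2D translation).
    """
    m = np.asarray(model3d, float)[:, :2]
    p = np.asarray(pts2d, float)
    mc, pc = m - m.mean(0), p - p.mean(0)
    s = (mc * pc).sum() / (mc * mc).sum()   # least-squares scale
    t = p.mean(0) - s * m.mean(0)           # 2D image translation
    return focal / s, t
```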