Abstract:
Disclosed is a method and apparatus for calibrating parameters of a three-dimensional (3D) display apparatus, the method including acquiring a first captured image of a 3D display apparatus displaying a first pattern image, adjusting a first parameter set of the 3D display apparatus based on the first captured image, acquiring a second captured image of the 3D display apparatus displaying a second pattern image based on the adjusted first parameter set, and adjusting a second parameter set of the 3D display apparatus based on the second captured image.
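The two-stage loop described above can be sketched in Python. This is a toy illustration only: `capture`, the pattern shapes, and the two adjustment rules (geometric offset, then gain) are stand-ins I am assuming, since the abstract does not specify the parameter sets or the error metrics.

```python
import numpy as np

def capture(display_params, pattern):
    """Stand-in for photographing the display: the 'captured image' is the
    pattern shifted by the current offset and scaled by the current gain."""
    offset, gain = display_params["offset"], display_params["gain"]
    return gain * np.roll(pattern, offset)

def adjust_first_set(params, captured, pattern):
    """Adjust the first parameter set (here, a geometric offset) by finding
    the shift that best correlates the capture with the displayed pattern."""
    corr = [np.dot(np.roll(pattern, s), captured) for s in range(len(pattern))]
    params["offset"] -= int(np.argmax(corr))
    return params

def adjust_second_set(params, captured, pattern):
    """Adjust the second parameter set (here, an intensity gain) using a
    second capture taken with the already-corrected first parameter set."""
    params["gain"] /= captured.mean() / pattern.mean()
    return params

# Stage 1: first pattern image corrects the first parameter set.
params = {"offset": 3, "gain": 2.0}
pattern1 = np.array([0., 0., 1., 0., 0., 0., 0., 0.])
params = adjust_first_set(params, capture(params, pattern1), pattern1)

# Stage 2: second pattern image, displayed with the adjusted first set,
# corrects the second parameter set.
pattern2 = np.ones(8)
params = adjust_second_set(params, capture(params, pattern2), pattern2)
```

The key structural point matches the abstract: the second capture is taken only after the first parameter set has been adjusted, so the second adjustment is not confounded by the first error source.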
Abstract:
A method for generating a three-dimensional (3D) image may detect a current eye position of a user and render a 3D image based on at least one of a previously detected eye position of the user and previously generated stereo images. A cycle at which the current eye position of the user is detected and a cycle at which a 3D image is rendered may be asynchronous.
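Because the two cycles are asynchronous, a rendering pass may need an eye position when no fresh detection has arrived. A minimal sketch, assuming linear extrapolation from the last two detected positions (the abstract does not specify the prediction scheme):

```python
from collections import deque

class AsyncEyeTracker:
    """Detection and rendering run on independent cycles; rendering uses
    previously detected positions when no new detection is available."""

    def __init__(self):
        self.history = deque(maxlen=2)  # last two detected eye positions

    def detect(self, position):
        """Detection cycle: record a newly detected eye position."""
        self.history.append(position)

    def predicted_position(self):
        """Rendering cycle: extrapolate linearly from the last two
        detections (an assumed, illustrative predictor)."""
        if len(self.history) < 2:
            return self.history[-1]
        (x0, y0), (x1, y1) = self.history
        return (2 * x1 - x0, 2 * y1 - y0)

tracker = AsyncEyeTracker()
tracker.detect((0.0, 0.0))
tracker.detect((1.0, 0.5))
# A render pass occurring between detections uses the prediction:
eye = tracker.predicted_position()  # (2.0, 1.0)
```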
Abstract:
An operating method of a display apparatus includes calculating a range of movement of a user based on eye movement information indicating movements of eyes of the user; and adjusting a stereoscopic depth of a three-dimensional (3D) image based on the range of the movement.
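One way to read this is that larger observed eye movement warrants a weaker stereoscopic depth. A hedged sketch; the monotone mapping below is an assumption for illustration, not the patent's formula:

```python
def movement_range(eye_positions):
    """Extent of eye movement: the larger of the x and y spans."""
    xs = [p[0] for p in eye_positions]
    ys = [p[1] for p in eye_positions]
    return max(max(xs) - min(xs), max(ys) - min(ys))

def adjusted_depth(base_depth, eye_positions, sensitivity=0.5):
    """Reduce stereoscopic depth as the movement range grows
    (illustrative mapping; the abstract does not specify one)."""
    rng = movement_range(eye_positions)
    return base_depth / (1.0 + sensitivity * rng)

positions = [(0.0, 0.0), (2.0, 0.5), (1.0, 1.0)]
depth = adjusted_depth(10.0, positions)  # halved for a range of 2.0
```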
Abstract:
A three-dimensional (3D) image rendering method and an apparatus are provided. The 3D image rendering method includes determining optical images associated with candidate viewpoint positions in a viewing zone, determining virtual rays intersecting a pixel of a display panel based on the determined optical images, and assigning a pixel value to the pixel based on the distances between the points at which the virtual rays intersect an optical layer and the optical elements of the optical layer.
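The assignment step can be sketched in a simplified 1D geometry: cast a ray from the pixel toward each candidate viewpoint, find where it crosses the optical layer, and pick the viewpoint whose crossing lands closest to some optical element (e.g. a lens center). The panel-at-z=0, layer-at-z=gap setup and the names below are illustrative assumptions:

```python
def intersection_x(pixel_x, view_x, view_z, gap):
    """x-coordinate where the ray from the pixel (at z=0) toward the
    viewpoint (at z=view_z) crosses the optical layer (at z=gap)."""
    t = gap / view_z
    return pixel_x + t * (view_x - pixel_x)

def assign_pixel(pixel_x, views, lens_centers, gap):
    """Assign the pixel to the viewpoint whose ray's layer intersection
    is nearest to an optical element (lens center)."""
    best = None
    for view_idx, (vx, vz) in enumerate(views):
        ix = intersection_x(pixel_x, vx, vz, gap)
        dist = min(abs(ix - c) for c in lens_centers)
        if best is None or dist < best[0]:
            best = (dist, view_idx)
    return best[1]

views = [(-30.0, 500.0), (30.0, 500.0)]  # two candidate viewpoints (x, z)
lens_centers = [0.0, 1.0, 2.0, 3.0]      # optical elements at unit pitch
view_for_pixel = assign_pixel(1.5, views, lens_centers, gap=5.0)
```

The pixel value would then be sampled from the view image selected by `assign_pixel`.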
Abstract:
An image processing method includes: based on an imaging device generating an input array image of a multi-view through an array lens, determining a first setting parameter indicating whether to use the multi-view or a single view selected from the multi-view; determining a second setting parameter indicating whether a priority is assigned to sensitivity enhancement based on pixel binning or to resolution enhancement based on pixel interpolation; determining an image processing mode based on the first setting parameter and the second setting parameter; and generating an output image by performing image processing on the input array image based on a processing procedure that is based on the determined image processing mode.
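The two setting parameters jointly select one of four processing modes. A minimal sketch of that selection; the mode names are my illustrative labels, since the abstract names only the parameters:

```python
def select_mode(use_multi_view: bool, prioritize_sensitivity: bool) -> str:
    """Map the two setting parameters to a processing mode.
    Mode names are illustrative, not the patent's terminology."""
    if use_multi_view and prioritize_sensitivity:
        return "multi_view_binning"        # sensitivity via pixel binning
    if use_multi_view:
        return "multi_view_interpolation"  # resolution via interpolation
    if prioritize_sensitivity:
        return "single_view_binning"
    return "single_view_interpolation"

mode = select_mode(use_multi_view=True, prioritize_sensitivity=False)
```

The chosen mode would then dictate the processing procedure applied to the input array image.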
Abstract:
An electronic device may include: a camera module comprising a wide angle camera configured to generate a wide image and a telephoto camera configured to generate teleimages corresponding to grid cells of the wide image; and one or more processors configured to: partition the teleimages into partial teleimages; determine partial wide images corresponding to the partial teleimages based on subcells of the grid cells and the wide image; perform feature matching between the partial teleimages and the partial wide images; determine a warping parameter of the teleimages based on a result of the feature matching; and generate a synthetic image corresponding to the wide image by warping the teleimages based on the warping parameter.
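The warp-parameter step can be sketched with a least-squares fit from matched feature points. This is a deliberate simplification: a real pipeline would use a feature matcher and likely a full homography per teleimage, whereas this assumed sketch fits a 2x3 affine warp from already-matched points:

```python
import numpy as np

def fit_affine(tele_pts, wide_pts):
    """Fit a 2x3 affine warp A so that wide ~= A @ [x, y, 1]
    for matched tele/wide feature points (least squares)."""
    tele = np.asarray(tele_pts, dtype=float)
    wide = np.asarray(wide_pts, dtype=float)
    X = np.hstack([tele, np.ones((len(tele), 1))])  # (N, 3)
    A, *_ = np.linalg.lstsq(X, wide, rcond=None)
    return A.T                                      # (2, 3)

def warp_point(A, pt):
    """Apply the fitted warp to a single tele-image point."""
    x, y = pt
    return tuple(A @ np.array([x, y, 1.0]))

# Matched points related by an assumed scale (0.5) and shift (10, 20):
tele_pts = [(0, 0), (100, 0), (0, 100), (100, 100)]
wide_pts = [(10, 20), (60, 20), (10, 70), (60, 70)]
A = fit_affine(tele_pts, wide_pts)
mapped = warp_point(A, (50, 50))  # should land near (35, 45)
```

Warping every teleimage with its fitted parameters and compositing the results yields the synthetic image aligned to the wide image.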
Abstract:
A method and apparatus for outputting a three-dimensional (3D) image are provided. To output a 3D image, a stereo image is generated based on viewpoints of a user and rendered into a 3D image. Since the stereo image is generated based on the viewpoints of the user, the user views a different side of an object appearing in the 3D image depending on a viewpoint of the user.
Abstract:
A method and apparatus for determining an interpupillary distance (IPD) are provided. To determine an IPD of a user, three-dimensional (3D) images for candidate IPDs may be generated, and user feedback on the 3D images may be received. A final IPD may be determined based on the user feedback.
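One plausible realization of the candidate-and-feedback loop is a bisection over the IPD range. Both the search strategy and the feedback oracle below are assumptions; the abstract only states that 3D images for candidate IPDs are shown and user feedback is collected:

```python
def determine_ipd(feedback, lo=55.0, hi=75.0, tol=0.5):
    """Narrow the candidate IPD interval (in mm) until it is under `tol`.
    `feedback(ipd)` returns True if the 3D image rendered for that
    candidate looks acceptable to the user (assumed monotone response)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if feedback(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Simulated user whose true IPD is 63 mm:
true_ipd = 63.0
ipd = determine_ipd(lambda candidate: candidate <= true_ipd)
```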
Abstract:
An apparatus for recognizing a pupillary distance for three-dimensional (3D) display includes a display configured to output a 3D image corresponding to a reference pupillary distance, a controller configured to control a viewing cone included in the 3D image, and a user inputter configured to receive user feedback indicating whether an artifact is viewed in the 3D image in response to the controlling of the viewing cone. The controller may move the viewing cone within a margin corresponding to the reference pupillary distance, and change the reference pupillary distance or determine the reference pupillary distance to be a desired pupillary distance of the user based on the user feedback.
Abstract:
A method of determining a calibration parameter for a three-dimensional (3D) display device includes determining a calibration parameter for a 3D display device based on an image of a second pattern three-dimensionally converted from an image of a first pattern.