Abstract:
A face tracking apparatus includes: a face region detector; a segmentation unit; an occlusion probability calculator; and a tracking unit. The face region detector is configured to detect a face region based on an input image. The segmentation unit is configured to segment the face region into a plurality of sub-regions. The occlusion probability calculator is configured to calculate occlusion probabilities for the plurality of sub-regions. The tracking unit is configured to track a face included in the input image based on the occlusion probabilities.
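A minimal sketch, not the patented implementation, of how per-sub-region occlusion probabilities could down-weight occluded sub-regions during tracking. The grid segmentation, the appearance-difference occlusion score, and the function names (`segment`, `occlusion_probabilities`, `weighted_similarity`) are assumptions for illustration.

```python
import numpy as np

def segment(face, grid=4):
    """Split a square face patch into grid x grid sub-regions."""
    h, w = face.shape
    sh, sw = h // grid, w // grid
    return [face[r*sh:(r+1)*sh, c*sw:(c+1)*sw]
            for r in range(grid) for c in range(grid)]

def occlusion_probabilities(face, template, grid=4):
    """Assumed score: large appearance change vs. the template => likely occluded."""
    probs = []
    for sub, ref in zip(segment(face, grid), segment(template, grid)):
        diff = np.mean(np.abs(sub.astype(float) - ref.astype(float))) / 255.0
        probs.append(min(1.0, diff))            # clamp to [0, 1]
    return np.array(probs)

def weighted_similarity(face, template, grid=4):
    """Track by matching mainly the sub-regions judged visible."""
    p_occ = occlusion_probabilities(face, template, grid)
    weights = 1.0 - p_occ                        # visible sub-regions count more
    sims = [1.0 - np.mean(np.abs(s.astype(float) - r.astype(float))) / 255.0
            for s, r in zip(segment(face, grid), segment(template, grid))]
    return float(np.sum(weights * sims) / (np.sum(weights) + 1e-6))

# Toy usage: a synthetic 64x64 face patch and a half-occluded copy.
rng = np.random.default_rng(0)
template = rng.integers(0, 255, (64, 64)).astype(np.uint8)
face = template.copy()
face[:, 32:] = 0                                 # simulate occlusion of the right half
print(weighted_similarity(face, template))
```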
Abstract:
Provided are methods and apparatuses for calibrating a three-dimensional (3D) image in a tiled display including a display panel and a plurality of lens arrays. The method includes capturing a plurality of structured light images displayed on the display panel, calibrating a geometric model of the tiled display based on the plurality of structured light images, generating a ray model based on the calibrated geometric model of the tiled display, and rendering an image based on the ray model.
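A hedged sketch of the calibration idea under simplifying assumptions: correspondences decoded from the structured-light captures are used to fit a planar homography as a stand-in for the full tiled-display geometric model, and each panel pixel is then turned into a ray through the centre of the lens element above it. The lens pitch, gap, and pixel-size values, and the synthetic correspondences, are invented for illustration.

```python
import numpy as np
import cv2

# Hypothetical correspondences decoded from structured-light captures:
# panel_pts[i] is a panel pixel, cam_pts[i] is where the camera observed it.
rng = np.random.default_rng(1)
panel_pts = rng.uniform(0, 1920, (200, 2)).astype(np.float32)
H_true = np.array([[0.9, 0.02, 5.0], [-0.01, 0.95, 8.0], [1e-5, 2e-5, 1.0]])
proj = np.hstack([panel_pts, np.ones((200, 1), np.float32)]) @ H_true.T
cam_pts = (proj[:, :2] / proj[:, 2:]).astype(np.float32)

# Calibrate a planar geometric model of one tile as a homography
# (a simplified stand-in for the tiled-display geometric model).
H, _ = cv2.findHomography(panel_pts, cam_pts, cv2.RANSAC)

# Ray model sketch: each panel pixel emits a ray through the centre of the
# lens element covering it (pitch, gap, and pixel size are assumptions).
lens_pitch, gap = 10.0, 3.0                      # mm between lenses, panel-to-lens gap
def pixel_ray(px, py, pixel_size=0.1):
    x, y = px * pixel_size, py * pixel_size      # panel position in mm
    lens_c = np.round(np.array([x, y]) / lens_pitch) * lens_pitch
    origin = np.array([x, y, 0.0])
    direction = np.array([lens_c[0] - x, lens_c[1] - y, gap])
    return origin, direction / np.linalg.norm(direction)

print(np.round(H, 3))
print(pixel_ray(100, 200))
```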
Abstract:
Provided are an apparatus and method for calibrating a multi-layer three-dimensional (3D) display (MLD) that may control a 3D display including a plurality of display layers to display a first image on one of the plurality of display layers, acquire a second image by capturing the first image, calculate a homography between the display layer and an image capturer based on the first image and the second image, and calculate geometric relations of the display layer with respect to the image capturer based on the calculated homography.
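A sketch of the homography step, assuming OpenCV: `cv2.findHomography` relates the displayed image to its captured appearance, and `cv2.decomposeHomographyMat` then recovers candidate rotations and translations of the layer with respect to the capturer given assumed camera intrinsics. The corner coordinates and intrinsic matrix below are hypothetical.

```python
import numpy as np
import cv2

# Known corner positions of the first image shown on one display layer (panel mm),
# and hypothetical positions at which the camera observed them in the second image.
layer_pts = np.array([[0, 0], [200, 0], [200, 120], [0, 120]], np.float32)
cam_pts = np.array([[310, 228], [905, 240], [890, 610], [300, 590]], np.float32)

# Homography between the display layer and the image capturer.
H, _ = cv2.findHomography(layer_pts, cam_pts)

# Assumed camera intrinsics; decomposing H w.r.t. K yields candidate rotations and
# translations, i.e. geometric relations of the layer with respect to the capturer.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
print(num, "candidate poses; first R:\n", np.round(rotations[0], 3))
```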
Abstract:
A three-dimensional (3D) image photographing apparatus includes a photographing unit configured to photograph a first photo, and capture an image after the first photo is photographed; a feature extracting unit configured to extract feature points from the first photo and the image, and match the feature points extracted from the first photo to the feature points extracted from the image; a position and gesture estimating unit configured to determine a relationship between a position and a gesture of the 3D image photographing apparatus when the first photo is photographed, and a position and a gesture of the 3D image photographing apparatus when the image is captured, based on the matched feature points, the photographing unit being configured to photograph the image as a second photo in response to the relationship satisfying a predetermined condition; and a synthesizing unit configured to synthesize the first and second photos into a 3D image.
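A sketch using standard OpenCV two-view geometry as a stand-in for the position-and-gesture estimation: matched feature points yield an essential matrix, `cv2.recoverPose` gives the relative rotation between the two viewpoints, and the "predetermined condition" is assumed here to be a rotation-angle range suitable for a stereo pair. The synthetic matches, intrinsics, and thresholds are illustrative only.

```python
import numpy as np
import cv2

def relation_satisfied(pts1, pts2, K, min_angle_deg=2.0, max_angle_deg=6.0):
    """Assumed condition: the rotation between the two viewpoints must fall in a
    range suitable for a stereo pair before the second photo is taken."""
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    angle = np.degrees(np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0)))
    return min_angle_deg <= angle <= max_angle_deg, angle

# Synthetic matched feature points standing in for feature matches between the
# first photo and the live preview image.
rng = np.random.default_rng(2)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], (60, 3))
R_cam = cv2.Rodrigues(np.array([[0.0], [np.radians(4.0)], [0.0]]))[0]
t_cam = np.array([[0.2, 0.0, 0.0]]).T
proj1 = pts3d @ K.T
pts1 = (proj1[:, :2] / proj1[:, 2:]).astype(np.float32)
proj2 = (pts3d @ R_cam.T + t_cam.T) @ K.T
pts2 = (proj2[:, :2] / proj2[:, 2:]).astype(np.float32)
print(relation_satisfied(pts1, pts2, K))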
Abstract:
A display system, according to one embodiment, comprises a display panel for displaying an elemental image array (EIA), a lens array positioned at the front of the display panel, and a depth camera for generating a depth image by photographing a user. The display system may include an image processor for calculating a viewing distance between the user and the display system from the depth image, generating a plurality of ray clusters corresponding to one view point according to the viewing distance, generating a multi-view image by rendering the plurality of ray clusters, and generating the EIA on the basis of the multi-view image.
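A sketch under two assumptions that are not taken from the abstract: the viewing distance is estimated as the median depth over the user's pixels, and ray clustering is approximated by assigning each lens's chief ray to the nearest viewpoint placed at that distance. The two-viewpoint setup and all dimensions are invented.

```python
import numpy as np

def viewing_distance(depth_image, user_mask):
    """Assumed estimate: median depth over the user's pixels (depth in mm)."""
    return float(np.median(depth_image[user_mask]))

def cluster_rays(lens_positions, viewpoints):
    """Assign every lens's chief ray to the nearest viewpoint, producing one ray
    cluster per viewpoint (a simplification of the described clustering)."""
    d = np.linalg.norm(lens_positions[:, None, :] - viewpoints[None, :, :], axis=2)
    return np.argmin(d, axis=1)

# Toy depth image with the user about 800 mm away, viewpoints placed at that distance.
depth = np.full((240, 320), 3000.0)
depth[60:180, 100:220] = 800.0
mask = depth < 1500.0
z = viewing_distance(depth, mask)
lens_xy = np.stack(np.meshgrid(np.linspace(-200, 200, 20),
                               np.linspace(-120, 120, 12)), -1).reshape(-1, 2)
lenses = np.hstack([lens_xy, np.zeros((lens_xy.shape[0], 1))])
views = np.array([[-32.5, 0, z], [32.5, 0, z]])   # two eye positions at distance z
print(z, np.bincount(cluster_rays(lenses, views)))
```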
Abstract:
A radiographic apparatus may comprise: a radiation irradiating module configured to irradiate radiation to an object; and/or a processing module configured to automatically set, as a region of interest, a part of a region to which the radiation irradiating module is able to irradiate the radiation, and further configured to determine at least one of a radiation irradiation position and a radiation irradiation zone of the radiation irradiating module based on the region of interest.
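A small sketch of one possible rule, not the patented logic, for deriving an irradiation position and zone from an automatically set region of interest: centre the field on the region of interest and open the collimator to the region's size plus a safety margin. The `roi_box`, `margin_mm`, and `max_field` values are hypothetical.

```python
import numpy as np

def irradiation_field(roi_box, margin_mm=10.0, max_field=(430.0, 430.0)):
    """Assumed rule: centre the field on the region of interest and open the
    collimator to the ROI size plus a margin, clamped to the detector size."""
    x0, y0, x1, y1 = roi_box                      # ROI in detector mm coordinates
    centre = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)   # irradiation position
    width = min((x1 - x0) + 2 * margin_mm, max_field[0])
    height = min((y1 - y0) + 2 * margin_mm, max_field[1])
    return centre, (width, height)                # irradiation zone

print(irradiation_field((120.0, 80.0, 260.0, 300.0)))
```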
Abstract:
An apparatus for detecting a body part from a user image may include an image acquirer to acquire a depth image, an extractor to extract the user image from a foreground of the acquired depth image, and a body part detector to detect the body part from the user image, using a classifier trained based on at least one of a single-user image sample and a multi-user image sample. The single-user image may be an image representing non-overlapping users, and the multi-user image may be an image representing overlapping users.
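A sketch assuming scikit-learn's `RandomForestClassifier` as the trained classifier and depth-difference features as the per-pixel descriptor; neither choice is confirmed by the abstract. The training points, labels, and offsets are synthetic stand-ins for single-user and multi-user image samples.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def depth_features(depth, pts, offsets):
    """Assumed feature set: for each pixel, depth differences at offset pairs
    scaled inversely with the pixel's own depth."""
    h, w = depth.shape
    feats = []
    for (u, v) in pts:
        d = depth[v, u]
        row = []
        for (du1, dv1), (du2, dv2) in offsets:
            p1 = depth[np.clip(v + int(dv1 * 2.0 / d), 0, h - 1),
                       np.clip(u + int(du1 * 2.0 / d), 0, w - 1)]
            p2 = depth[np.clip(v + int(dv2 * 2.0 / d), 0, h - 1),
                       np.clip(u + int(du2 * 2.0 / d), 0, w - 1)]
            row.append(p1 - p2)
        feats.append(row)
    return np.array(feats)

# Toy training set standing in for single-user and multi-user depth samples,
# with per-pixel body-part labels (0 = torso, 1 = head).
rng = np.random.default_rng(3)
depth = rng.uniform(1.0, 3.0, (120, 160))
pts = rng.integers([10, 10], [150, 110], (300, 2))
offsets = [((rng.integers(-8, 8), rng.integers(-8, 8)),
            (rng.integers(-8, 8), rng.integers(-8, 8))) for _ in range(20)]
X = depth_features(depth, pts, offsets)
y = (pts[:, 1] < 40).astype(int)                 # fake labels: upper pixels = head
clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
print(clf.score(X, y))
```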
Abstract:
A three-dimensional (3D) display method includes generating N first visual images, N being a natural number greater than 1; generating M second visual images from each of the N first visual images, M being a natural number greater than 1; acquiring N visual image groups corresponding to the N first visual images, respectively, such that, for each one of the N visual image groups, the visual image group includes the M second visual images generated from the first visual image, from among the N first visual images, to which the visual image group corresponds; generating M elemental image array (EIA) images based on the N visual image groups; and time-share displaying the M EIA images.
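A sketch of the N x M bookkeeping under an assumed column-interleaving scheme: each of the N first visual images yields M second visual images, and the m-th EIA collects the m-th second image of every group before the M EIAs are displayed in turn. `subview_fn` (a simple pixel shift here) is a placeholder for the real sub-view generation.

```python
import numpy as np

def make_eias(first_images, subview_fn, M):
    """Build M EIA images from N visual image groups; the m-th EIA interleaves
    the m-th second image of every group (assumed column interleaving)."""
    N = len(first_images)
    groups = [[subview_fn(img, m) for m in range(M)] for img in first_images]
    h, w = first_images[0].shape
    eias = []
    for m in range(M):
        eia = np.zeros((h, w), dtype=first_images[0].dtype)
        for n in range(N):
            # Assumed interleaving: view n owns every N-th pixel column.
            eia[:, n::N] = groups[n][m][:, n::N]
        eias.append(eia)
    return eias

# Toy example: N = 3 first visual images, M = 2 time-shared EIAs.
rng = np.random.default_rng(4)
firsts = [rng.integers(0, 255, (32, 48)).astype(np.uint8) for _ in range(3)]
shift = lambda img, m: np.roll(img, m, axis=1)    # hypothetical sub-view generator
for frame in make_eias(firsts, shift, M=2):       # time-share: show frames in turn
    print(frame.shape, frame.dtype)
```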
Abstract:
Provided are an X-ray imaging apparatus that is capable of tracking a position of an object of interest using a Kalman filter so as to reduce the amount of X-ray radiation exposure of a subject, calculating covariance indicative of accuracy of the tracking, and controlling a collimator so that the position of the object of interest and the calculated covariance may be correlated with a position and an area of a region into which X-rays are radiated, and a method of controlling the X-ray imaging apparatus.
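A sketch of the tracking side, assuming a constant-velocity Kalman filter in NumPy; the rule that widens the collimated field in proportion to the position standard deviation is an illustrative stand-in for the patented covariance-to-field mapping, and all noise and gain values are invented.

```python
import numpy as np

# Constant-velocity Kalman filter tracking the 2D position of the object of interest.
dt = 1.0
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])
Q, R = 0.01 * np.eye(4), 4.0 * np.eye(2)
x, P = np.zeros(4), 100.0 * np.eye(4)             # state [px, py, vx, vy], covariance

rng = np.random.default_rng(5)
for t in range(5):
    z = np.array([50.0 + 2.0 * t, 80.0]) + rng.normal(0, 2, 2)   # noisy detection
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)                       # update with measured position
    P = (np.eye(4) - K @ H) @ P
    sigma = np.sqrt(np.diag(P)[:2])               # tracking uncertainty per axis
    centre, field = x[:2], 20.0 + 3.0 * sigma     # collimator centre and opening (mm)
    print(np.round(centre, 1), np.round(field, 1))
```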
Abstract:
An apparatus and method for tracking a gaze based on a camera array are provided. The apparatus may include a camera array including a plurality of cameras, and a plurality of light sources for the plurality of cameras, a light source controller to control the plurality of light sources so that the plurality of cameras capture a bright pupil image and a dark pupil image of a user, a detector to detect a position of a pupil center of the user, and a position of a glint caused by reflection of the plurality of light sources from the captured bright pupil image and the captured dark pupil image, and an estimator to estimate an interest position of eyes of the user by tracking a gaze direction of the eyes, based on the detected position of the pupil center and the detected position of the glint.
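A sketch assuming the classic bright-minus-dark pupil difference for pupil detection, the brightest spot of the dark image for the glint, and a simple linear pupil-glint-vector mapping in place of a calibrated gaze model; the image sizes, thresholds, and gains are invented.

```python
import numpy as np

def pupil_and_glint(bright, dark):
    """Assumed detection: the bright-minus-dark difference highlights the pupil
    (retro-reflection), while the glint is the brightest spot in the dark image."""
    diff = bright.astype(float) - dark.astype(float)
    ys, xs = np.nonzero(diff > diff.max() * 0.5)
    pupil = np.array([xs.mean(), ys.mean()])              # pupil centre (x, y)
    glint = np.array(np.unravel_index(np.argmax(dark), dark.shape))[::-1]
    return pupil, glint.astype(float)

def gaze_point(pupil, glint, gain=(8.0, 8.0), screen_centre=(960.0, 540.0)):
    """Toy mapping from the pupil-glint vector to a point of regard on screen;
    the linear gain stands in for a calibrated gaze model."""
    return np.array(screen_centre) + np.array(gain) * (pupil - glint)

# Synthetic 60x80 eye images: the bright-pupil frame lights the pupil, the dark frame
# keeps only the corneal glint from the light sources.
bright = np.full((60, 80), 30, np.uint8)
dark = np.full((60, 80), 30, np.uint8)
yy, xx = np.mgrid[0:60, 0:80]
bright[(xx - 42) ** 2 + (yy - 31) ** 2 < 64] = 220        # retro-reflected pupil
dark[29:32, 38:41] = 250                                  # corneal glint
p, g = pupil_and_glint(bright, dark)
print(p, g, gaze_point(p, g))
```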