Abstract:
Provided are an image processing method and an image processing device. The image processing method includes generating an image based on viewpoint information of a user; rendering the image based on information about an area in front of the user; and outputting the rendered image using an optical element.
Abstract:
Provided is an imaging device including a sensing array including a plurality of sensing elements, an imaging lens array including a plurality of imaging optical lenses, each of the plurality of imaging optical lenses having a non-circular cross-section perpendicular to an optical axis, and configured to transmit light received from an outside of the imaging device, and a condensing lens array including a plurality of condensing lenses disposed between the imaging lens array and the sensing array, and configured to transmit the light passing through the imaging lens array to the sensing elements, wherein a number of the plurality of imaging optical lenses is less than a number of the plurality of condensing lenses.
Abstract:
Provided are a content visualizing device and method that may adjust content based on a distance to an object in front, so as to maintain a projection plane and prevent an overlap with the object.
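The adjustment described above can be sketched as simple depth clamping. This is a minimal illustration, not the patented method; the function name, the `margin` parameter, and the fixed-projection-plane assumption are mine.

```python
def adjust_content_depth(content_depth, obstacle_distance, margin=0.5):
    """Clamp the rendering depth (in meters) of projected content so it
    is not placed at or beyond an object detected in front of the viewer.

    Hypothetical sketch: content that would overlap the obstacle is
    pulled forward to `margin` meters in front of it; otherwise its
    intended depth on the projection plane is kept.
    """
    max_depth = obstacle_distance - margin
    return min(content_depth, max_depth)

# Content intended at 10 m, but an object is detected 6 m ahead:
print(adjust_content_depth(10.0, 6.0))  # pulled forward to 5.5
# Content at 3 m is already in front of the object and is unchanged:
print(adjust_content_depth(3.0, 6.0))   # 3.0
```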
Abstract:
An apparatus for calibrating a multiview image may extract feature points from the multiview image and perform image calibration based on the extracted feature points, track corresponding feature points in temporally successive image frames of a first view image, and perform the image calibration based on pairs of corresponding feature points between the feature points tracked from the first view image and feature points of a second view image.
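The calibration step above relies on pairs of corresponding feature points between views. As a hedged sketch of one piece of that pipeline, the function below estimates the vertical misalignment between two views from such point pairs; the name and the simple mean-offset model are my assumptions, not details from the abstract.

```python
import numpy as np

def vertical_calibration_offset(points_view1, points_view2):
    """Estimate the vertical misalignment between two views from pairs
    of corresponding feature points (each an iterable of (x, y)).

    Illustrative sketch: rectified views should place corresponding
    points on the same horizontal line, so the mean y-offset is the
    correction one view needs.
    """
    d = np.asarray(points_view2, float) - np.asarray(points_view1, float)
    return float(np.mean(d[:, 1]))  # mean vertical disparity to remove

pts1 = [(10, 20), (40, 50), (70, 22)]   # tracked in the first view
pts2 = [(12, 23), (43, 53), (69, 25)]   # corresponding points, 3 px lower
print(vertical_calibration_offset(pts1, pts2))  # → 3.0
```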
Abstract:
A method of driving a lens array camera may include simultaneously driving a first group of sensing elements from among a plurality of sensing elements, each sensing element from among the first group of sensing elements corresponding to the same original signal viewpoint, wherein the plurality of sensing elements is included in a sensor corresponding to the lens array camera including N rows of N lenses, N being a natural number.
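The grouping above can be illustrated with a toy index computation: under an N x N lens array, sensing elements at the same relative offset within each lens's patch sample the same viewpoint. This is a sketch under the assumption that each lens covers an equal square patch of the sensor; the function name and layout are hypothetical.

```python
def same_viewpoint_group(sensor_size, n_lenses, viewpoint):
    """Return the (row, col) indices of all sensing elements that
    correspond to one viewpoint, assuming an n_lenses x n_lenses
    lens array where each lens covers an equal square patch of a
    sensor_size x sensor_size sensor.

    `viewpoint` is the (dy, dx) offset within a lens patch; the same
    offset repeats once under every lens, and those elements would be
    driven simultaneously.
    """
    patch = sensor_size // n_lenses  # pixels per lens, per axis
    dy, dx = viewpoint
    return [(ly * patch + dy, lx * patch + dx)
            for ly in range(n_lenses)
            for lx in range(n_lenses)]

# 8x8 sensor behind a 2x2 lens array: viewpoint (1, 2) appears once
# under each of the four lenses.
print(same_viewpoint_group(8, 2, (1, 2)))
# [(1, 2), (1, 6), (5, 2), (5, 6)]
```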
Abstract:
An image sensor and a method of manufacturing the image sensor are provided. The image sensor includes a block layer in which an absorption layer and a transparent layer are alternately stacked, a lens element located below the block layer, and a sensing element located to face the lens element.
Abstract:
A multi-lens based capturing apparatus and method are provided. The capturing apparatus includes a lens array including lenses and a sensor including sensing pixels, wherein at least a portion of the sensing pixels in the sensor may generate sensing information based on light entering through different lenses in the lens array, and light incident on each sensing pixel among the portion of the plurality of sensing pixels may correspond to different combinations of viewpoints.
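One way such mixed viewpoint combinations arise is when the lens pitch is not an integer multiple of the pixel pitch, so boundary pixels receive light through two adjacent lenses. The 1-D sketch below illustrates this geometry; the function and its even-spacing assumption are mine, not from the abstract.

```python
def lens_coverage(num_pixels, num_lenses):
    """For a 1-D sensor line, list which lens(es) cover each pixel when
    num_lenses lenses evenly span num_pixels pixels.

    Illustrative sketch: with a fractional pixels-per-lens ratio, some
    pixels straddle a lens boundary and collect light through two
    lenses, i.e. a different combination of viewpoints.
    """
    pitch = num_pixels / num_lenses  # pixels per lens, may be fractional
    coverage = []
    for p in range(num_pixels):
        # Lenses covering the start and (just inside) the end of pixel p.
        lenses = {int(p / pitch), int((p + 1 - 1e-9) / pitch)}
        coverage.append(sorted(lenses))
    return coverage

# 7 pixels under 2 lenses (3.5 pixels per lens): the middle pixel
# straddles the lens boundary and sees both lenses.
print(lens_coverage(7, 2))
# [[0], [0], [0], [0, 1], [1], [1], [1]]
```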
Abstract:
An image processing method and apparatus are provided. The image processing method may include determining whether stereoscopic objects that are included in an image pair and that correspond to each other are aligned on the same horizontal line. If the stereoscopic objects are not aligned on the same horizontal line, the method determines whether the image pair includes target objects having geometric features different from those of the stereoscopic objects. If the image pair includes the target objects, the method performs image processing differently for the stereoscopic objects and for the target objects.
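The first determination above amounts to comparing the vertical coordinates of corresponding objects within a tolerance. The sketch below illustrates that check and the resulting split; the function names and the pixel tolerance are hypothetical, not taken from the abstract.

```python
def on_same_horizontal_line(obj_left, obj_right, tol=1.0):
    """Check whether two corresponding objects, given as (x, y) centers
    in the left and right images of a pair, lie on the same horizontal
    line within `tol` pixels of vertical disparity."""
    return abs(obj_left[1] - obj_right[1]) <= tol

def classify_pairs(pairs, tol=1.0):
    """Split corresponding object pairs into aligned stereoscopic
    objects and misaligned candidates to be processed differently."""
    aligned = [p for p in pairs if on_same_horizontal_line(*p, tol=tol)]
    misaligned = [p for p in pairs if not on_same_horizontal_line(*p, tol=tol)]
    return aligned, misaligned

pairs = [((10, 40), (14, 40)),   # same row: a stereoscopic object
         ((50, 80), (52, 95))]   # 15 px vertical offset: misaligned
aligned, misaligned = classify_pairs(pairs)
print(len(aligned), len(misaligned))  # 1 1
```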
Abstract:
An image processing apparatus including a region of interest (ROI) configuration unit may generate a visual attention map according to visual characteristics of a human in relation to an input three dimensional (3D) image. A disparity adjustment unit may adjust disparity information, included in the input 3D image, using the visual attention map. Using the adjusted disparity information, a 3D image may be generated and displayed that reduces the level of visual fatigue a user may experience in viewing the 3D image.
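One simple form such attention-guided disparity adjustment could take is attenuating disparity in low-attention regions while preserving it where the viewer is likely to look. The sketch below is an illustrative assumption, not the patented scheme; the function name and the linear weighting are mine.

```python
import numpy as np

def adjust_disparity(disparity, attention_map, strength=0.5):
    """Attenuate a disparity map using a visual attention map in [0, 1]:
    salient regions keep their depth, low-attention regions are
    flattened toward `strength` times the original disparity, which
    may reduce visual fatigue in the periphery.
    """
    w = strength + (1.0 - strength) * attention_map
    return disparity * w

disp = np.array([[10.0, 10.0]])
att = np.array([[1.0, 0.0]])  # left pixel salient, right pixel not
print(adjust_disparity(disp, att))  # [[10.  5.]]
```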