Abstract:
An image processing method includes acquiring an image frame; tracking a face region of a user based on first prior information obtained from at least one previous frame of the image frame; based on a determination that tracking of the face region based on the first prior information has failed, setting a scan region in the image frame based on second prior information obtained from the at least one previous frame; and detecting the face region in the image frame based on the scan region.
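The track-or-rescan flow described in this abstract can be sketched as a small control-flow skeleton. This is an illustrative assumption, not the patented implementation: `track_face`, `detect_in_region`, the box-shaped prior information, and the 50% scan margin are all hypothetical.

```python
# Hypothetical sketch of the track-or-rescan flow: try tracking with the
# first prior information, and on failure derive a scan region from the
# second prior information (here assumed to be a previous-frame face box).

def process_frame(frame, first_prior, second_prior, track_face, detect_in_region):
    """Return a face region: tracked if possible, else detected in a scan region."""
    face = track_face(frame, first_prior)          # tracking based on first prior
    if face is not None:
        return face
    # Tracking failed: enlarge the previous face box by a margin to form
    # the scan region, then run detection only inside that region.
    x, y, w, h = second_prior
    margin = 0.5                                   # illustrative margin
    scan_region = (x - w * margin, y - h * margin,
                   w * (1 + 2 * margin), h * (1 + 2 * margin))
    return detect_in_region(frame, scan_region)
```

Restricting detection to a prior-derived scan region is what makes the fallback cheaper than a full-frame redetection.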
Abstract:
Provided are a three-dimensional (3D) rendering method and apparatus that detect eye coordinates corresponding to positions of the eyes of a user from an image of the user, adjust the eye coordinates to correspond to virtual eye positions that reduce crosstalk caused by refraction of light, and perform 3D rendering based on the adjusted eye coordinates.
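The detect-adjust-render pipeline above can be sketched as follows. The abstract does not give the correction model, so the per-axis affine mapping, its scale/offset values, and the `renderer` stub are illustrative assumptions only.

```python
# Hedged sketch: detected eye coordinates are mapped to crosstalk-reducing
# virtual eye positions before rendering. The affine mapping below is an
# assumption standing in for the (unspecified) refraction correction.

def adjust_eye_coords(eye_xyz, scale=(1.0, 1.0, 0.9), offset=(0.0, 0.0, 5.0)):
    """Map detected eye coordinates to virtual eye positions (units assumed mm)."""
    return tuple(c * s + o for c, s, o in zip(eye_xyz, scale, offset))

def render_3d(image_pair, eye_xyz, adjust=adjust_eye_coords, renderer=None):
    """Detect -> adjust -> render pipeline; renderer is a caller-supplied stub."""
    virtual_eye = adjust(eye_xyz)                  # reduce refraction crosstalk
    return renderer(image_pair, virtual_eye) if renderer else virtual_eye
```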
Abstract:
A method and apparatus for array image processing are provided. The method includes receiving sub images corresponding to different views of an input array image generated through an array lens, generating temporary restored images based on the sub images using a gradient between neighboring pixels of each of the sub images, determining matching information based on a view difference between pixels of the sub images using a neural network model, extracting refinement targets from matching pairs of the pixels of the sub images based on a pixel distance between the matching pairs determined using the matching information, refining the matching information by replacing at least some target pixels included in the refinement targets based on a local search of a region based on pixel locations of the refinement targets, and generating an output image of a single view by merging the temporary restored images based on the refined matching information.
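The refinement step above can be sketched without the neural matching model: pairs whose pixel distance exceeds a threshold become refinement targets, and each target's match is replaced by a local search near its source pixel. The distance threshold, window size, and intensity-based search criterion are illustrative assumptions.

```python
# Minimal sketch of match refinement by local search. Matching pairs that
# are implausibly far apart (pixel distance > max_dist) are treated as
# refinement targets and re-matched within a small window.

def refine_matches(pairs, image, window=1, max_dist=3.0):
    """pairs: list of ((sx, sy), (dx, dy)); image: dict (x, y) -> intensity."""
    refined = []
    for (sx, sy), (dx, dy) in pairs:
        dist = ((sx - dx) ** 2 + (sy - dy) ** 2) ** 0.5
        if dist <= max_dist:                       # plausible pair, keep as-is
            refined.append(((sx, sy), (dx, dy)))
            continue
        # Local search: pick the window pixel closest in intensity to the source.
        target_val = image[(sx, sy)]
        best = min(
            ((nx, ny) for nx in range(sx - window, sx + window + 1)
                      for ny in range(sy - window, sy + window + 1)
                      if (nx, ny) in image),
            key=lambda p: abs(image[p] - target_val),
        )
        refined.append(((sx, sy), best))
    return refined
```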
Abstract:
A method with image processing includes: receiving an input image including Bayer images captured by a plurality of lenses included in a lens assembly; generating channel separation images by separating each of the Bayer images by a plurality of channels; determining corresponding points such that pixels in the channel separation images are displayed at the same position on a projection plane, for each of the plurality of lenses; performing binning on the channel separation images, based on a brightness difference and a distance difference between a target corresponding point and a center of a pixel including the target corresponding point, corresponding to each of the corresponding points in channel separation images that correspond to a same channel and that are combined into one image, for each of the plurality of lenses; restoring the input image for each of the plurality of lenses based on binned images generated by performing the binning; and outputting the restored input image.
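The binning step above weights each corresponding point by both its brightness difference and its distance from the target pixel's center. A minimal sketch of such joint weighting follows; the Gaussian form and the sigma values are assumptions, since the abstract only names the two cues.

```python
import math

# Hedged sketch of brightness- and distance-weighted binning: each
# corresponding point contributes with a weight that decays with its
# brightness difference from the pixel and its distance from the pixel
# center. Gaussian weights and sigma values are illustrative assumptions.

def binned_value(center_val, center_xy, points, sigma_b=10.0, sigma_d=1.0):
    """points: list of (value, (x, y)) corresponding points near this pixel."""
    num = den = 0.0
    cx, cy = center_xy
    for val, (x, y) in points:
        w_b = math.exp(-((val - center_val) ** 2) / (2 * sigma_b ** 2))
        w_d = math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma_d ** 2))
        w = w_b * w_d
        num += w * val
        den += w
    return num / den if den else center_val
```

Points that are either far from the pixel center or much brighter/darker than it contribute little, which suppresses ghosting from mismatched corresponding points.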
Abstract:
An optical layer may include a barrier. The barrier may include slits arranged in the barrier so that vertically neighboring slits from among the slits are connected to each other. The slits are configured to transmit light through the barrier.
Abstract:
A depth noise filtering method and apparatus are provided. The depth noise filtering method may perform spatial filtering or temporal filtering according to depth information. In order to perform spatial filtering, the depth noise filtering method may determine a characteristic of a spatial filter based on depth information. Also, in order to perform temporal filtering, the depth noise filtering method may determine a number of reference frames based on depth information. The depth noise filtering method may adaptively remove depth noise according to depth information and thereby enhance noise filtering performance.
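The two depth-adaptive choices described above can be sketched as simple selection functions: a spatial filter that grows with depth, and a temporal reference-frame count that grows with depth. The breakpoints and limits below are illustrative assumptions; the abstract does not specify them.

```python
# Sketch of depth-adaptive filter parameters: farther depth measurements
# are typically noisier, so larger depths get a larger spatial kernel and
# more temporal reference frames. All thresholds are assumptions.

def spatial_kernel_size(depth_m):
    """Pick an odd spatial filter size that grows with depth (meters)."""
    if depth_m < 1.0:
        return 3
    if depth_m < 3.0:
        return 5
    return 7

def num_reference_frames(depth_m, max_frames=5):
    """Use more temporal reference frames for larger (noisier) depths."""
    return min(max_frames, 1 + int(depth_m))
```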
Abstract:
An image processing apparatus is provided. The image processing apparatus determines whether a first charge quantity of charges stored in a first charge storage, among a plurality of charge storages configured to store charges generated by a sensor of a depth camera, is greater than or equal to a predetermined saturation level. When the determination indicates that the first charge quantity is greater than or equal to the saturation level, the image processing apparatus may calculate the first charge quantity from at least one second charge quantity of charges stored in at least one second charge storage different from the first charge storage among the plurality of charge storages.
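One way to recover a saturated charge quantity from the other storages, sketched under the assumption of a four-phase time-of-flight pixel: the four charge quantities ideally satisfy Q0 + Q2 == Q1 + Q3, so a saturated entry can be recalculated from the remaining three. The four-phase model and the 12-bit saturation level are illustrative assumptions, not details from the abstract.

```python
# Hedged sketch: in an ideal four-phase ToF pixel, Q0 + Q2 == Q1 + Q3.
# A charge quantity at or above the saturation level is therefore
# recalculated from the other three storages.

SATURATION_LEVEL = 4095  # e.g. full scale of a 12-bit readout (assumption)

def recover_charges(q, saturation=SATURATION_LEVEL):
    """q: [Q0, Q1, Q2, Q3]; replace any saturated entry using Q0+Q2 == Q1+Q3."""
    q = list(q)
    for i, qi in enumerate(q):
        if qi >= saturation:
            j = (i + 2) % 4                        # opposite-phase storage
            a, b = q[(i + 1) % 4], q[(i + 3) % 4]  # the other phase pair
            q[i] = a + b - q[j]                    # from Q_i + Q_j == Q_a + Q_b
    return q
```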
Abstract:
Provided is an electronic device including a display to output an image, a parallax optical element configured to provide light corresponding to the image to a plurality of viewpoints, an input interface configured to receive an input to calibrate the parallax optical element by a user who observes a pattern image from a reference viewpoint among the plurality of viewpoints, and a processor configured to output the pattern image generated by rendering a calibration pattern toward the reference viewpoint, adjust at least one of a pitch parameter, a slanted angle parameter, and a position offset parameter of the parallax optical element based on the input, and output, by the display, the pattern image adjusted by re-rendering the calibration pattern based on an adjusted parameter.
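The interactive calibration loop above can be sketched as a parameter-nudging step: each user input adjusts one of the three parallax-element parameters, after which the pattern image would be re-rendered. The key encoding and step sizes are hypothetical; the abstract only names the three parameters.

```python
# Illustrative sketch of the calibration adjustment step. A user input such
# as "+pitch" or "-offset" nudges one parameter; the display would then
# re-render the calibration pattern with the adjusted parameter.

STEPS = {"pitch": 0.001, "slant": 0.01, "offset": 0.05}  # assumed step sizes

def apply_input(params, key):
    """params: dict with 'pitch', 'slant', 'offset'; key like '+pitch'."""
    sign = 1 if key[0] == "+" else -1
    name = key[1:]
    params = dict(params)                          # keep the old state intact
    params[name] += sign * STEPS[name]
    return params
```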
Abstract:
An image processing method includes receiving an image frame, detecting a face region of a user in the image frame, aligning a plurality of preset feature points in a plurality of feature portions included in the face region, performing a first check on a result of the aligning based on a first region corresponding to a combination of the feature portions, performing a second check on the result of the aligning based on a second region corresponding to an individual feature portion of the feature portions, redetecting a face region based on a determination of a failure in passing at least one of the first check or the second check, and outputting information on the face region based on a determination of a success in passing the first check and the second check.
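The two-stage validation above can be sketched as a pair of caller-supplied checks: one on the combined feature region, one per individual feature portion, with redetection on any failure. The check internals are not specified by the abstract, so they are stubs here.

```python
# Sketch of the two-stage check: the alignment result must pass a check on
# the combined feature region (first check) and on each individual feature
# portion (second check); otherwise the face region is redetected.

def validate_alignment(aligned, combined_check, portion_check, portions):
    """Return True only if both the first and second checks pass."""
    if not combined_check(aligned):                # first check: combined region
        return False
    return all(portion_check(aligned, p) for p in portions)  # second check

def process(aligned, combined_check, portion_check, portions, redetect):
    if validate_alignment(aligned, combined_check, portion_check, portions):
        return aligned                             # output face-region info
    return redetect()                              # any failure: redetect
```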
Abstract:
A method and apparatus for measuring dynamic crosstalk are provided. The method may include: controlling a driver configured to cause a camera to have a dynamic movement; at either one or both of a left eye position and a right eye position of a user, capturing a stereo pattern image output through a three-dimensional (3D) display, by the camera while the camera is in the dynamic movement; and measuring the dynamic crosstalk caused by the 3D display based on the stereo pattern image captured by the camera.
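A common way to compute a crosstalk value from such captures is leakage from the unintended view normalized by the intended signal, with the black level subtracted from both; applying it per frame while the camera moves, and taking the maximum over frames, is an illustrative choice rather than the patented metric.

```python
# Hedged sketch of a per-frame crosstalk metric and its aggregation over
# frames captured during the camera's dynamic movement. The max-over-frames
# aggregation is an assumption; the abstract does not specify it.

def crosstalk(intended, unintended, black):
    """Luminances measured at one eye position for a single captured frame."""
    return (unintended - black) / (intended - black)

def dynamic_crosstalk(frames):
    """frames: list of (intended, unintended, black) tuples, one per capture."""
    return max(crosstalk(i, u, b) for i, u, b in frames)
```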