Abstract:
Provided is an electronic device including a display, a parallax optical element configured to provide light corresponding to an image output from the display to an eyebox of a user, a temperature sensor configured to measure a temperature around the parallax optical element, a memory configured to store a plurality of parameter calibration models for determining correction information in different temperature ranges for a parameter of the parallax optical element, and a processor configured to determine correction information corresponding to the measured temperature based on a parameter calibration model corresponding to the measured temperature among the plurality of parameter calibration models, and adjust the parameter of the parallax optical element based on the correction information.
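A minimal sketch of how the described temperature-dependent correction could be selected and applied, assuming hypothetical per-range linear calibration models (all names and the linear form are illustrative, not taken from the abstract):

```python
from dataclasses import dataclass

@dataclass
class CalibrationModel:
    """Hypothetical linear model: correction = slope * temperature + offset."""
    t_min: float   # lower bound of the temperature range (deg C), inclusive
    t_max: float   # upper bound of the temperature range (deg C), exclusive
    slope: float
    offset: float

    def covers(self, temperature: float) -> bool:
        return self.t_min <= temperature < self.t_max

    def correction(self, temperature: float) -> float:
        return self.slope * temperature + self.offset

def adjust_parameter(base_parameter: float, temperature: float,
                     models: list[CalibrationModel]) -> float:
    """Pick the model whose range covers the measured temperature and
    apply its correction to the parallax-optical-element parameter."""
    for model in models:
        if model.covers(temperature):
            return base_parameter + model.correction(temperature)
    return base_parameter  # no matching range: leave the parameter unchanged

# Example with two made-up temperature ranges and coefficients.
models = [CalibrationModel(-20.0, 25.0, 0.01, 0.0),
          CalibrationModel(25.0, 80.0, 0.03, -0.5)]
print(adjust_parameter(10.0, 40.0, models))
```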
Abstract:
An image processing method includes receiving an image frame, detecting a face region of a user in the image frame, aligning a plurality of preset feature points in a plurality of feature portions included in the face region, performing a first check on a result of the aligning based on a first region corresponding to a combination of the feature portions, performing a second check on the result of the aligning based on a second region corresponding to an individual feature portion of the feature portions, redetecting a face region based on a determination of a failure in passing at least one of the first check or the second check, and outputting information on the face region based on a determination of a success in passing the first check and the second check.
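A hedged sketch of the described check-and-redetect flow; the helper callables, the retry limit, and the return values are assumptions, since the abstract only specifies the order of the two checks:

```python
def process_frame(frame, detect_face, align_features, first_check, second_check,
                  max_retries: int = 3):
    """Hypothetical driver: detect the face region, align preset feature points,
    run the combined-region check and then the per-feature check, and redetect
    on failure (names and retry limit are illustrative)."""
    face_region = detect_face(frame)
    for _ in range(max_retries):
        landmarks = align_features(frame, face_region)    # align preset feature points
        combined_ok = first_check(frame, landmarks)       # region combining the feature portions
        individual_ok = second_check(frame, landmarks)    # each individual feature portion
        if combined_ok and individual_ok:
            return face_region, landmarks                  # both checks passed: output region info
        face_region = detect_face(frame)                   # redetect the face region and retry
    return None                                            # give up after max_retries
```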
Abstract:
An optical layer and a display device including the same, where the optical layer includes optical components slanted at an angle θ with respect to a pixel included in a display panel and disposed at an interval of a pitch l, and the slant angle θ and the pitch l satisfy l = 2g×tan(VAL/2) and tan θ = l/(2g×tan(VAP/2)).
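Reading g as the gap between the pixel plane and the optical layer and VAL/VAP as viewing-angle parameters (an interpretation the abstract does not spell out), the two relations can be evaluated as follows:

```python
import math

def optical_layer_geometry(g: float, val_deg: float, vap_deg: float):
    """Evaluate the abstract's relations for the pitch l and slant angle theta.
    g is read as the gap between the pixel plane and the optical layer, and
    VAL/VAP as viewing angles in degrees; these readings are assumptions."""
    val = math.radians(val_deg)
    vap = math.radians(vap_deg)
    pitch = 2.0 * g * math.tan(val / 2.0)                         # l = 2g * tan(VAL/2)
    theta = math.atan(pitch / (2.0 * g * math.tan(vap / 2.0)))    # tan(theta) = l / (2g * tan(VAP/2))
    return pitch, math.degrees(theta)

# Example with illustrative values: gap of 1 mm, 30 and 40 degree viewing angles.
print(optical_layer_geometry(g=1.0, val_deg=30.0, vap_deg=40.0))
```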
Abstract:
A method and apparatus for eye tracking are disclosed. The method may include obtaining feature points corresponding to at least one portion of a face area of a user in an image, determining an inner area of an eye area of a first eye of the user based on the feature points, determining a pupil area of the user based on a pixel value of at least one pixel of the inner area, and determining an eye position of the user based on a position value of each pixel of the pupil area.
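A minimal sketch of the pixel-value step, assuming a grayscale image, a binary mask for the inner eye area, a dark-pixel threshold for the pupil, and a centroid rule for the eye position (the threshold and the centroid rule are not specified in the abstract):

```python
import numpy as np

def estimate_eye_position(gray_image: np.ndarray, inner_eye_mask: np.ndarray,
                          pupil_threshold: int = 50):
    """Illustrative sketch: within the inner eye area, treat dark pixels as the
    pupil area and return the mean of their coordinates as the eye position."""
    ys, xs = np.nonzero(inner_eye_mask)                   # pixels of the inner eye area
    values = gray_image[ys, xs]
    pupil = values < pupil_threshold                      # pupil area from pixel values
    if not pupil.any():
        return None                                       # no pupil-like pixels found
    return float(xs[pupil].mean()), float(ys[pupil].mean())  # eye position (x, y)
```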
Abstract:
An image sensor includes a plurality of non-color pixel sensors each configured to sense a non-color signal; and a color pixel sensing region including at least one color pixel sensor configured to sense a color signal, wherein the color pixel sensing region has an area physically greater than an area of each of the non-color pixel sensors.
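A small data-model sketch of the stated size relationship; the class, field names, and example dimensions are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class PixelSensor:
    width_um: float
    height_um: float

    @property
    def area_um2(self) -> float:
        return self.width_um * self.height_um

# Illustrative sizes only: the point is that the color pixel sensing region
# is physically larger than each individual non-color pixel sensor.
non_color_sensors = [PixelSensor(1.0, 1.0) for _ in range(8)]
color_region = PixelSensor(2.0, 2.0)
assert all(color_region.area_um2 > s.area_um2 for s in non_color_sensors)
```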
Abstract:
An endoscope using depth information and a method for detecting a polyp based on the endoscope using the depth information are provided. The endoscope using the depth information may generate an irradiated light signal including visible light, obtain depth information based on the irradiated light signal and a reflected light signal obtained by the irradiated light signal being reflected off an intestine wall, generate a depth image inside the intestine wall based on the depth information, and detect a polyp located on the intestine wall based on the depth image.
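The abstract does not specify how depth is derived from the irradiated and reflected signals; a phase-based time-of-flight rule is one possibility, sketched here with illustrative names:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def depth_from_phase(phase_shift_rad: float, modulation_freq_hz: float) -> float:
    """Illustrative continuous-wave time-of-flight estimate: depth from the
    phase shift between the irradiated and reflected light signals."""
    return (C * phase_shift_rad) / (4.0 * math.pi * modulation_freq_hz)

# Example: a 0.1 rad phase shift at 20 MHz modulation (values are made up).
print(depth_from_phase(0.1, 20e6))  # depth in meters
```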
Abstract:
An image processing method includes acquiring an image frame; tracking a face region of a user based on first prior information obtained from at least one previous frame of the image frame; based on a determination that tracking of the face region based on the first prior information has failed, setting a scan region in the image frame based on second prior information obtained from the at least one previous frame; and detecting the face region in the image frame based on the scan region.
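A hypothetical sketch of the track-then-fallback control flow; the tracker, detector, and prior-information structures are assumptions, not from the abstract:

```python
def track_or_detect(frame, first_prior, second_prior, tracker, detector):
    """Try tracking the face region with prior information from previous frames;
    on failure, restrict detection to a scan region derived from the second prior."""
    face_region = tracker(frame, first_prior)             # track using the first prior info
    if face_region is not None:
        return face_region                                # tracking succeeded
    scan_region = second_prior.get("scan_region")         # e.g. last known location plus a margin
    return detector(frame, scan_region)                   # detect only within the scan region
```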
Abstract:
An electronic device includes a display device configured to output an image, a parallax optical element configured to provide light corresponding to the image output from the display device to an eyebox of a user, and a processor configured to adjust a parameter of the parallax optical element based on a change of a look-down angle (LDA) between the eyebox and a virtual image plane formed by the display device and the parallax optical element.
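An illustrative sketch of an LDA computation and a linear parameter adjustment driven by its change; the geometry, variable names, and gain are assumptions, not from the abstract:

```python
import math

def look_down_angle(eye_height_m: float, image_distance_m: float,
                    image_height_m: float) -> float:
    """Illustrative LDA: angle below horizontal from the eyebox to the centre
    of the virtual image plane."""
    return math.degrees(math.atan2(eye_height_m - image_height_m, image_distance_m))

def adjust_offset(base_offset: float, lda_deg: float, ref_lda_deg: float,
                  gain: float = 0.01) -> float:
    """Hypothetical linear adjustment of a parallax-optical-element parameter
    (e.g. a barrier or lens offset) driven by the change in LDA."""
    return base_offset + gain * (lda_deg - ref_lda_deg)

# Example with made-up eyebox and virtual-image geometry.
print(adjust_offset(0.5, look_down_angle(1.4, 7.5, 0.2), ref_lda_deg=9.0))
```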
Abstract:
An image processing method and an image processing apparatus are provided. The image processing method includes acquiring information of a first region of interest (ROI) in a first frame, estimating information of a second ROI in a second frame that is received after the first frame, based on the acquired information of the first ROI, and sequentially storing, in a memory, subframes that are a portion of the second frame, each of the subframes being a line of the second frame. The image processing method further includes determining whether a portion of the stored subframes includes the second ROI, based on the estimated information of the second ROI, and based on the portion of the stored subframes being determined to include the second ROI, processing the portion of the stored subframes.
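A minimal line-streaming sketch of the described behaviour, assuming the estimated second ROI is given as a row range and that the lines of the second frame arrive in order (names and the row-range form are assumptions):

```python
def stream_frame_lines(lines, roi_top: int, roi_bottom: int, process_roi):
    """Buffer subframes (lines) of the second frame as they arrive and, once
    the buffered lines cover the estimated ROI rows [roi_top, roi_bottom],
    process that portion without waiting for the full frame."""
    buffered = []
    for idx, line in enumerate(lines):                    # lines arrive sequentially
        buffered.append(line)                             # store each subframe (line)
        if idx >= roi_bottom:                             # buffered lines now include the ROI
            return process_roi(buffered[roi_top:roi_bottom + 1])
    return None                                           # frame ended before the ROI completed
```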