Abstract:
A method performed by an electronic device, the electronic device, and a storage medium are provided. The method includes obtaining a frame image of a video from a camera and inertia data of an inertial measurement unit (IMU) corresponding to the frame image and obtaining a camera position and pose of the camera, a sparse map, and a high-density map corresponding to the frame image, based on the frame image and the inertia data of the IMU.
Abstract:
A method of estimating a pose of a device, and the device, are disclosed. The method may include generating inertial measurement unit (IMU) data of the device, determining a first pose of the device at a first time point based on the IMU data, generating a current predicted motion state array based on the IMU data, and estimating an M-th predicted pose of the device at an M-th time point after the first time point based on the current predicted motion state array, where M denotes a natural number greater than 1.
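The abstract above can be sketched minimally as dead reckoning from a motion state: given a state derived from IMU data at the first time point, the pose at the M-th time point is extrapolated under a motion model. The names (`MotionState`, `predict_pose`), the one-dimensional state, and the constant-acceleration model are illustrative assumptions, not details from the patent.

```python
# Hedged sketch: predict the device pose M time points ahead from a
# motion state built from IMU data. 1-D position only, for brevity;
# a real system would track 3-D position plus orientation.
from dataclasses import dataclass

@dataclass
class MotionState:
    position: float
    velocity: float
    acceleration: float

def predict_pose(state: MotionState, m: int, dt: float) -> float:
    """Extrapolate position to the M-th time point (m >= 1) under a
    constant-acceleration model: x + v*t + 0.5*a*t^2, t = (m-1)*dt."""
    t = (m - 1) * dt
    return state.position + state.velocity * t + 0.5 * state.acceleration * t * t

state = MotionState(position=0.0, velocity=1.0, acceleration=0.5)
print(predict_pose(state, m=3, dt=0.01))  # pose predicted 0.02 s after the first time point
```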
Abstract:
An image processing apparatus includes a calculator configured to calculate a respective position offset for each of a plurality of candidate areas in a second frame based on a position of a basis image in a first frame and a determiner configured to determine a final selected area that includes a target in the second frame based on a respective weight allocated to each of the plurality of candidate areas and the calculated respective position offset.
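The selection described above can be sketched as scoring each candidate area by combining its allocated weight with its position offset from the basis image. The scoring rule here (weight divided by one plus the Euclidean offset) and all names are illustrative assumptions; the patent does not specify how the two quantities are combined.

```python
# Hedged sketch: pick the final selected area among candidates in the
# second frame, favoring high weight and small offset from the basis
# image's position in the first frame.
import math

def position_offset(basis_xy, candidate_xy):
    """Euclidean offset between the basis-image position and a candidate area."""
    dx = candidate_xy[0] - basis_xy[0]
    dy = candidate_xy[1] - basis_xy[1]
    return math.hypot(dx, dy)

def select_final_area(basis_xy, candidates, weights):
    """Return the index of the candidate with the best weight/offset trade-off."""
    scores = [w / (1.0 + position_offset(basis_xy, c))
              for c, w in zip(candidates, weights)]
    return max(range(len(candidates)), key=scores.__getitem__)

basis = (100.0, 100.0)
candidates = [(102.0, 101.0), (150.0, 90.0), (100.0, 100.0)]
weights = [0.8, 0.9, 0.3]
print(select_final_area(basis, candidates, weights))  # index of the chosen area
```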
Abstract:
A method for reducing a moire fringe includes calculating a moire fringe width for each of different inclination angles between a microlens array and pixels of a display screen. The method includes determining, as a final inclination angle between the microlens array and the pixels of the display screen, the one of the different inclination angles that corresponds to a minimum width among the calculated moire fringe widths.
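The selection step above amounts to evaluating a width model over candidate inclination angles and keeping the minimizer. The width model used below, the classical beat period of two superposed gratings, is an illustrative assumption and not the patent's formula; the pitch values are likewise made up.

```python
# Hedged sketch: compute a moire fringe width for each candidate
# inclination angle and return the angle giving the minimum width.
import math

def moire_width(angle_deg, lens_pitch, pixel_pitch):
    """Beat period of two gratings of pitch p1, p2 rotated by theta:
    p1*p2 / sqrt(p1^2 + p2^2 - 2*p1*p2*cos(theta))."""
    theta = math.radians(angle_deg)
    denom = math.sqrt(lens_pitch**2 + pixel_pitch**2
                      - 2.0 * lens_pitch * pixel_pitch * math.cos(theta))
    return lens_pitch * pixel_pitch / denom

def best_inclination(angles_deg, lens_pitch, pixel_pitch):
    widths = {a: moire_width(a, lens_pitch, pixel_pitch) for a in angles_deg}
    return min(widths, key=widths.get)

angles = [1, 5, 10, 15, 20, 25, 30]
print(best_inclination(angles, lens_pitch=0.105, pixel_pitch=0.100))
```

Under this model the fringe width shrinks monotonically as the inclination grows on this range, so the largest candidate angle wins; a real design would also weigh resolution loss at large angles.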
Abstract:
A method of determining eye position information includes identifying an eye area in a facial image; verifying a two-dimensional (2D) feature in the eye area; and performing a determination operation including determining a three-dimensional (3D) target model based on the 2D feature, and determining 3D position information based on the 3D target model.
Abstract:
An image processing method and apparatus are provided. The image processing method may include determining whether stereoscopic objects that are included in an image pair and that correspond to each other are aligned on the same horizontal line. The method includes determining whether the image pair includes target objects having different geometric features from those of the stereoscopic objects if the stereoscopic objects are not aligned on the same horizontal line. The method includes performing image processing differently for the stereoscopic objects and for the target objects if the image pair includes the target objects.
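The first determination in the abstract above, whether corresponding stereoscopic objects lie on the same horizontal line, can be sketched as a vertical-disparity check: in a rectified stereo pair, matching objects should share (up to a tolerance) the same vertical coordinate. The function name and the one-pixel tolerance are illustrative assumptions.

```python
# Hedged sketch: flag a stereo image pair as misaligned when any pair
# of corresponding object centers differs vertically by more than tol_px.
def aligned_on_same_horizontal_line(left_centers, right_centers, tol_px=1.0):
    """left_centers/right_centers: (x, y) centers of corresponding objects
    in the left and right images, in matching order."""
    return all(abs(ly - ry) <= tol_px
               for (_, ly), (_, ry) in zip(left_centers, right_centers))

left  = [(120.0, 200.0), (300.0, 250.0)]
right = [(110.0, 200.4), (288.0, 249.8)]
print(aligned_on_same_horizontal_line(left, right))  # True: vertical gaps <= 1 px
```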
Abstract:
A scene flow estimation method includes: inputting a frame pair into an artificial intelligence (AI) network, and obtaining therefrom a motion embedding feature and a non-occluded-category label embedding feature corresponding to a target pixel in the frame pair; and estimating a scene flow corresponding to the frame pair based on the motion embedding feature and the non-occluded-category label embedding feature, wherein the frame pair includes a first frame and a second frame, the first frame including a first color image and a first depth image and the second frame including a second color image and a second depth image, the non-occluded-category label embedding feature includes category information of an object corresponding to a pixel pair in the frame pair, the pixel pair includes a first pixel of the first frame and a second pixel of the second frame, and the second pixel corresponds to the first pixel.
Abstract:
An object pose and model estimation method includes acquiring a global feature of an input image, and a location code of an object including location information for a joint point of the object and location information for a model vertex in a template model; determining a local area feature of the object based on the global feature of the input image and based on the location code of the object in the template model; and acquiring location information for the joint point of the object in the input image and location information for the model vertex in the input image based on the local area feature of the object.
Abstract:
An image processing method and apparatus are disclosed. The image processing method includes receiving an input image and estimating a depth of a target based on a position, a size, and a class of the target in the input image.
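One common way to realize the idea in the abstract above is a pinhole-camera size prior: the detected class supplies an assumed real-world height, and depth follows from the ratio of that height to the detected pixel height. The class heights, focal length, and function name below are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: estimate target depth from its class and detected size
# under a pinhole model, Z ~= f * H_real / h_pixels.
CLASS_HEIGHT_M = {"person": 1.7, "car": 1.5}  # assumed real-world heights (meters)

def estimate_depth(cls, bbox_height_px, focal_px):
    """Depth in meters of a detected target of known class."""
    return focal_px * CLASS_HEIGHT_M[cls] / bbox_height_px

print(estimate_depth("person", bbox_height_px=170.0, focal_px=1000.0))  # 10.0 (meters)
```

The detected position would refine this in practice (e.g. via a ground-plane constraint), but the size-plus-class prior alone already yields a metric estimate.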
Abstract:
A method and apparatus for correcting an image error in a naked-eye three-dimensional (3D) display are disclosed, the method including controlling a flat-panel display to display a stripe image, calculating a raster parameter of the naked-eye 3D display based on a captured stripe image, and correcting a stereoscopic image displayed on the naked-eye 3D display based on the calculated raster parameter, wherein the naked-eye 3D display includes the flat-panel display and a raster.