Abstract:
A processor-implemented method includes obtaining an input image, predicting light source color information of a scene corresponding to the input image and a panoramic image corresponding to the input image using an image processing model, and generating a rendered image by rendering the input image based on either one or both of the light source color information and the panoramic image.
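The abstract does not specify how the predicted light source color is used during rendering; a minimal illustrative sketch (all names hypothetical) is a von Kries-style correction, where the image is divided by the estimated illuminant color and re-lit under a neutral light:

```python
import numpy as np

def render_with_light_color(image, light_rgb):
    """Hypothetical sketch: re-render an image under a predicted illuminant
    using a von Kries-style per-channel scaling.
    image: HxWx3 float array in [0, 1]; light_rgb: predicted light color."""
    light = np.asarray(light_rgb, dtype=np.float64)
    light = light / light.sum()       # normalize illuminant chromaticity
    # Divide out the estimated illuminant; (1/3, 1/3, 1/3) is neutral gray.
    balanced = image / (3.0 * light)
    return np.clip(balanced, 0.0, 1.0)

img = np.full((2, 2, 3), 0.3)                      # flat gray test image
out = render_with_light_color(img, [0.5, 0.3, 0.2])  # reddish illuminant
```

This is only the color-correction half of the pipeline; the panoramic image predicted by the model would additionally supply environment lighting for full image-based relighting.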
Abstract:
An apparatus and method with depth estimation are disclosed. The method includes calculating a first reliability of each of a plurality of time of flight (ToF) pixels of a ToF image; and generating, based on the first reliabilities, a depth map of a scene based on a left image and a right image and selectively based on the ToF image.
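The selective use of the ToF image can be sketched as a per-pixel gate: where the first reliability is high the ToF depth is kept, and elsewhere the stereo (left/right) estimate is used. This is a minimal illustration with hypothetical names, not the patented fusion rule:

```python
import numpy as np

def fuse_depth(tof_depth, tof_reliability, stereo_depth, threshold=0.5):
    """Hypothetical sketch of reliability-gated fusion: use the ToF depth
    only where its per-pixel reliability exceeds a threshold, otherwise
    fall back to the depth estimated from the left and right images."""
    use_tof = tof_reliability > threshold
    return np.where(use_tof, tof_depth, stereo_depth)

tof = np.array([[1.0, 2.0], [3.0, 4.0]])
rel = np.array([[0.9, 0.2], [0.8, 0.1]])   # per-ToF-pixel reliability
stereo = np.array([[1.1, 2.2], [3.3, 4.4]])
fused = fuse_depth(tof, rel, stereo)
```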
Abstract:
An electronic device and method with gaze estimation are disclosed. The method includes obtaining target information of an image, the image including an eye, obtaining a target feature map representing information on the eye in the image based on the target information, and estimating a gaze for the eye in the image based on the target feature map. The target information includes either attention information on the image, or a distance between pixels in the image, or both.
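One way the attention information could shape the target feature map is by weighting eye-region features before pooling; the sketch below (hypothetical names, assuming a CxHxW feature map and an HxW attention map) illustrates that idea only:

```python
import numpy as np

def attention_weighted_features(feature_map, attention):
    """Hypothetical sketch: modulate a CxHxW eye-region feature map with an
    HxW attention map (one form of 'target information'), then pool it to a
    vector from which a gaze direction could be regressed."""
    attn = attention / attention.sum()         # normalize to a distribution
    weighted = feature_map * attn[None, :, :]  # broadcast over channels
    return weighted.sum(axis=(1, 2))           # attention-weighted pooling

feats = np.ones((4, 3, 3))   # toy 4-channel feature map
attn = np.ones((3, 3))       # uniform attention
vec = attention_weighted_features(feats, attn)
```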
Abstract:
A method with object pose estimation includes: obtaining an instance segmentation image and a normalized object coordinate space (NOCS) map by processing an input single-frame image using a deep neural network (DNN); obtaining a two-dimensional and three-dimensional (2D-3D) mapping relationship based on the instance segmentation image and the NOCS map; and determining a pose of an object instance in the input single-frame image based on the 2D-3D mapping relationship.
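The abstract's 2D-3D mapping would typically feed a PnP solver (e.g. OpenCV's `solvePnP`). When per-pixel depth is also available, NOCS-style pipelines often recover pose by aligning NOCS coordinates to camera-space points with a similarity transform; the Umeyama sketch below illustrates that alternative (it is not the claimed method, and all names are illustrative):

```python
import numpy as np

def umeyama(src, dst):
    """Hypothetical sketch: recover scale s, rotation R, translation t with
    s * R @ src + t ~= dst. In a NOCS pipeline, src would be 3xN NOCS
    coordinates and dst the corresponding camera-space points."""
    mu_s = src.mean(axis=1, keepdims=True)
    mu_d = dst.mean(axis=1, keepdims=True)
    sc, dc = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dc @ sc.T)
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    scale = (S * np.diag(D)).sum() / (sc ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Synthetic check: a known similarity transform is recovered exactly.
rng = np.random.default_rng(0)
src = rng.standard_normal((3, 20))
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1
t_true = np.array([[0.1], [0.2], [0.3]])
dst = 2.0 * R_true @ src + t_true
s, R, t = umeyama(src, dst)
```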
Abstract:
An apparatus for recommending an item to a customer identifies a purchase tendency of the customer based on an image, determines a recommended item for the customer by selecting a purchase tendency model corresponding to the purchase tendency, and provides information associated with the determined recommended item.
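The model-selection step can be sketched as a dispatch from the identified tendency to a tendency-specific scoring model; everything below is illustrative (the abstract does not describe the models themselves):

```python
# Hypothetical sketch: dispatch to a purchase-tendency-specific model,
# then score candidate items with the selected model.
def recommend(tendency, candidate_items, models):
    model = models[tendency]                 # select the matching model
    return max(candidate_items, key=model)   # highest-scoring item

models = {
    "bargain": lambda item: -item["price"],  # cheaper scores higher
    "premium": lambda item: item["price"],   # pricier scores higher
}
items = [{"name": "a", "price": 5}, {"name": "b", "price": 50}]
pick = recommend("bargain", items, models)
```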
Abstract:
A virtual reality display apparatus and a display method thereof are provided. The display method includes displaying a virtual reality image; acquiring object information regarding a real-world object based on a binocular view of a user; and displaying the acquired object information together with the virtual reality image.
Abstract:
An electronic device may operate a plurality of light sources, where each light source operates according to a light source code of a light source code set, each light source code being unique with respect to each other light source code, capture a glint signal corresponding to light emitted from the plurality of light sources through an event camera, obtain glint information from event data from the event camera, estimate a corneal sphere center position and an eye rotation center position based on the glint information, and determine three-dimensional (3D) gaze-related information based on the corneal sphere center position and the eye rotation center position.
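Because each light source blinks a code that is unique within the code set, a glint observed by the event camera can be attributed to its source by matching the observed on/off pattern against the codes. A minimal sketch (hypothetical names; real decoding of event data is more involved):

```python
import numpy as np

def identify_light_source(observed_pattern, code_set):
    """Hypothetical sketch: each light source blinks a unique binary code;
    the event camera yields an observed on/off pattern at a glint, and the
    source is identified as the code with the most matching samples."""
    scores = [np.sum(observed_pattern == np.asarray(code))
              for code in code_set]
    return int(np.argmax(scores))

codes = [[1, 0, 1, 0],   # source 0
         [1, 1, 0, 0],   # source 1
         [0, 1, 0, 1]]   # source 2
observed = np.array([1, 1, 0, 0])
src = identify_light_source(observed, codes)
```

Once each glint is attributed to a known source position, the corneal sphere center can be estimated geometrically from the glint constellation.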
Abstract:
A processor-implemented method includes obtaining a first motion matrix corresponding to an extended reality (XR) system and a second motion matrix based on a conversion coefficient from an XR system coordinate system into a rolling shutter (RS) camera coordinate system, and projecting an RS color image of a current frame onto a global shutter (GS) color image coordinate system based on the second motion matrix and generating a GS color image of the current frame, wherein the second motion matrix is a motion matrix from a timestamp of a depth image captured by a GS camera to a timestamp of a first scanline of an RS color image captured by an RS camera.
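The core of RS-to-GS conversion is that each scanline is exposed at a different time, so each row must be warped by the motion accumulated between its own timestamp and the single GS reference timestamp. The 1-D sketch below (hypothetical names; pure horizontal translation) illustrates only that per-scanline idea, whereas the method above applies a full motion matrix with depth:

```python
import numpy as np

def rs_to_gs_rows(rs_image, row_shift_px):
    """Hypothetical 1-D sketch of rolling-shutter correction: each scanline
    is shifted by the motion accumulated since the reference timestamp,
    here modeled as a constant horizontal velocity of row_shift_px
    pixels per scanline."""
    h, w = rs_image.shape
    out = np.zeros_like(rs_image)
    for r in range(h):
        shift = int(round(row_shift_px * r))  # motion accumulated by row r
        out[r] = np.roll(rs_image[r], -shift)
    return out

img = np.arange(12, dtype=float).reshape(3, 4)
gs = rs_to_gs_rows(img, 1.0)
```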
Abstract:
An apparatus and method of estimating an object posture are provided. A method includes determining key point information in an image to be processed, determining modified key point information based on a key point feature map corresponding to the key point information, and estimating an object posture of an object in the image to be processed, based on the modified key point information.
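One simple stand-in for the key point modification step is to snap each coarse key point to the strongest response in a local window of its feature map; the sketch below (hypothetical names) illustrates that refinement pattern, not the learned modification the abstract describes:

```python
import numpy as np

def refine_keypoint(heatmap, kp, radius=1):
    """Hypothetical sketch: move a coarse keypoint (row, col) to the
    strongest response within a local window of its feature map."""
    h, w = heatmap.shape
    r0, c0 = kp
    rs = slice(max(r0 - radius, 0), min(r0 + radius + 1, h))
    cs = slice(max(c0 - radius, 0), min(c0 + radius + 1, w))
    window = heatmap[rs, cs]
    dr, dc = np.unravel_index(np.argmax(window), window.shape)
    return rs.start + dr, cs.start + dc

hm = np.zeros((5, 5))
hm[2, 3] = 1.0                       # true peak one column to the right
refined = refine_keypoint(hm, (2, 2))
```

The refined key points would then feed the pose estimation step (e.g. a PnP solve against a 3D object model).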
Abstract:
An image processing apparatus and method are provided. The image processing apparatus acquires a target image including a depth image of a scene, determines three-dimensional (3D) point cloud data corresponding to the depth image based on the depth image, and extracts, based on the 3D point cloud data, an object included in the scene to acquire an object extraction result.
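The step of determining 3D point cloud data from the depth image is standard pinhole back-projection, X = (u - cx)·Z/fx and Y = (v - cy)·Z/fy; a minimal sketch, assuming known camera intrinsics (fx, fy, cx, cy are illustrative values):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Hypothetical sketch: back-project an HxW depth image into an Nx3
    point cloud using the pinhole model."""
    v, u = np.indices(depth.shape)   # per-pixel row (v) and column (u)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((2, 2), 2.0)         # flat plane 2 units from the camera
pts = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Object extraction would then operate on these points, e.g. by clustering or segmenting the cloud.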