Abstract:
An electronic device includes a processor configured to perform operations including inputting an image into an encoder to generate a feature map including information about an illumination present in the input image, iteratively updating a plurality of slot vectors using the generated feature map to calculate a plurality of predicted illumination vectors, calculating, using the calculated plurality of predicted illumination vectors, a plurality of mixture maps representing respective effects of a plurality of virtual illuminations on pixels in the input image and a plurality of illumination color vectors representing respective color values of the plurality of virtual illuminations, and generating an illumination map using the calculated plurality of mixture maps and the calculated plurality of illumination color vectors.
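For illustration only, the following is a minimal NumPy sketch of such a pipeline, assuming toy sizes, a random stand-in for the encoder output, a slot-attention-style update rule, and hypothetical heads for the mixture maps and illumination colors; it is not the patented implementation.

import numpy as np

H, W, C, K = 16, 16, 8, 3                    # feature-map size, channels, number of virtual illuminations
rng = np.random.default_rng(0)
feature_map = rng.normal(size=(H * W, C))    # stand-in for the encoder's feature map
slots = rng.normal(size=(K, C))              # plurality of slot vectors

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Iteratively update the slots using the feature map (slot-attention style).
for _ in range(3):
    attn = softmax(feature_map @ slots.T, axis=1)            # (H*W, K) pixel-to-slot weights
    attn = attn / (attn.sum(axis=0, keepdims=True) + 1e-8)
    slots = attn.T @ feature_map                             # updated predicted illumination vectors

# Hypothetical heads mapping the slots to mixture maps and RGB illumination colors.
mixture_maps = softmax(feature_map @ slots.T, axis=1)        # per-pixel effect of each virtual illumination
color_head = rng.normal(size=(C, 3))
illumination_colors = softmax(slots @ color_head, axis=1)    # color value of each virtual illumination

# Illumination map: mixture-weighted combination of the illumination colors.
illumination_map = (mixture_maps @ illumination_colors).reshape(H, W, 3)
print(illumination_map.shape)                                # (16, 16, 3)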
Abstract:
A neural network-based image processing method and apparatus are provided. A method includes receiving input image data comprising original image data, when a current image processing mode is an independent processing mode, generating first intermediate image data by executing a main neural network based on the original image data and by not executing an auxiliary neural network based on the original image data, when the current image processing mode is a cooperative processing mode, generating second intermediate image data by determining an auxiliary parameter by executing the auxiliary neural network based on the original image data and by executing the main neural network based on the original image data and based on the auxiliary parameter, and generating output image data by operating an image signal processing (ISP) block based on the first intermediate image data or the second intermediate image data according to the current image processing mode.
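A minimal control-flow sketch of the two modes is given below, assuming toy stand-ins for the main neural network, the auxiliary neural network, and the ISP block (the functions main_net, aux_net, and isp_block are hypothetical, not the networks of the abstract).

import numpy as np

def main_net(image, aux_param=None):
    out = image * 0.9                                    # stand-in for the main neural network
    return out if aux_param is None else out + aux_param

def aux_net(image):
    return np.full_like(image, image.mean() * 0.1)       # stand-in auxiliary parameter

def isp_block(intermediate):
    return np.clip(intermediate, 0.0, 1.0)               # stand-in ISP block

def process(original, mode):
    if mode == "independent":
        # Independent mode: only the main neural network is executed.
        intermediate = main_net(original)
    else:
        # Cooperative mode: the auxiliary network supplies a parameter to the main network.
        aux_param = aux_net(original)
        intermediate = main_net(original, aux_param)
    return isp_block(intermediate)                       # ISP block operates on the intermediate data

image = np.random.default_rng(1).random((4, 4))
print(process(image, "independent").shape, process(image, "cooperative").shape)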
Abstract:
A device with image acquisition includes: a first phase mask disposed at a front end of a display layer and configured to modulate external light; the display layer comprising pixel areas between hole areas through which the modulated light that has passed through the first phase mask passes; a second phase mask disposed at a rear end of the display layer and configured to modulate the modulated light that has passed through the first phase mask; an image sensor disposed at a rear end of the second phase mask and configured to generate a raw image by sensing the modulated light that has passed through the second phase mask; and a processor configured to perform image processing on the raw image, based on blur information corresponding to the raw image.
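The optics are outside the scope of a software sketch, but the processor-side step can be illustrated as follows, assuming the blur information is a known point spread function (PSF) and the restoration is a simple Wiener deconvolution; both assumptions are illustrative rather than taken from the abstract.

import numpy as np

def wiener_deblur(raw, psf, snr=1e-2):
    # Pad the PSF to the image size and move its center to the origin.
    kernel = np.zeros_like(raw)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(kernel)
    G = np.fft.fft2(raw)
    F = np.conj(H) / (np.abs(H) ** 2 + snr) * G          # Wiener filter with regularization term snr
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(2)
raw = rng.random((64, 64))                               # stand-in raw image from the sensor
psf = np.ones((5, 5)) / 25.0                             # stand-in blur information (uniform PSF)
restored = wiener_deblur(raw, psf)
print(restored.shape)                                    # (64, 64)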
Abstract:
A pose estimation method and apparatus are disclosed. The pose estimation method includes acquiring, from an image sensor, a raw image before a geometric correction, determining a feature point in the raw image, and estimating a pose based on the feature point.
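As an illustration of the estimation step only, the sketch below assumes the pose is reduced to a 2-D rigid transform between feature points matched across two raw (uncorrected) frames and solves it with the Kabsch/Procrustes method; the detector and the correspondences are stand-ins, not the method of the abstract.

import numpy as np

def estimate_rigid_pose(pts_prev, pts_curr):
    # Rotation and translation minimizing the point-to-point error (Kabsch).
    c_prev, c_curr = pts_prev.mean(axis=0), pts_curr.mean(axis=0)
    H = (pts_prev - c_prev).T @ (pts_curr - c_curr)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                             # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_curr - R @ c_prev
    return R, t

rng = np.random.default_rng(3)
pts_prev = rng.random((10, 2)) * 100                     # feature points in the previous raw image
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts_curr = pts_prev @ R_true.T + np.array([2.0, -1.0])   # same points in the current raw image
R, t = estimate_rigid_pose(pts_prev, pts_curr)
print(np.round(t, 3))                                    # approximately [ 2. -1.]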
Abstract:
A method and apparatus for predicting an intention acquires a gaze sequence of a user, acquires an input image corresponding to the gaze sequence, generates a coded image by visually encoding temporal information included in the gaze sequence into the input image, and predicts an intention of the user corresponding to the gaze sequence based on the input image and the coded image.
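One way to picture the visual encoding step is sketched below, under the assumption that the temporal order of gaze points is written into an extra image channel (later fixations brighter); the downstream intention classifier is only indicated, not implemented.

import numpy as np

def encode_gaze(gaze_sequence, height, width):
    coded = np.zeros((height, width), dtype=np.float32)
    n = len(gaze_sequence)
    for i, (y, x) in enumerate(gaze_sequence):
        coded[y, x] = (i + 1) / n                        # temporal position encoded as intensity
    return coded

H, W = 32, 32
input_image = np.random.default_rng(4).random((H, W, 3)).astype(np.float32)
gaze_sequence = [(5, 6), (10, 12), (20, 25), (21, 26)]   # (row, col) fixations over time

coded_image = encode_gaze(gaze_sequence, H, W)
# An intention predictor would consume the input image and the coded image together,
# e.g. as a 4-channel tensor fed to a classifier.
model_input = np.concatenate([input_image, coded_image[..., None]], axis=-1)
print(model_input.shape)                                 # (32, 32, 4)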
Abstract:
An image processing apparatus includes a first processor configured to obtain, from a color image, an illumination element image and an albedo element image corresponding to the color image, and a second processor configured to divide the illumination element image into a plurality of subelement images each corresponding to the color image.
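A toy sketch of such an intrinsic-style decomposition is shown below; the heuristics (shading taken as blurred luminance, albedo as the image divided by it, and a threshold split of the illumination image into two sub-element images) are assumptions for illustration, not the patented processors.

import numpy as np

def box_blur(img, k=5):
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

color = np.random.default_rng(5).random((32, 32, 3))     # input color image
luminance = color.mean(axis=-1)

illumination = box_blur(luminance)                       # illumination element image
albedo = color / (illumination[..., None] + 1e-6)        # albedo element image

# Divide the illumination element image into sub-element images.
threshold = illumination.mean()
direct = np.where(illumination > threshold, illumination, 0.0)
ambient = illumination - direct
print(albedo.shape, direct.shape, ambient.shape)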
Abstract:
A three-dimensional (3D) rendering method and apparatus are disclosed. The 3D rendering apparatus may determine a select shading point in a 3D scene on which shading is to be performed, perform the shading on the determined shading point, and determine shading information of the 3D scene based on a result of the shading performed on the determined shading point.
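A minimal sketch of the idea, under assumed simplifications (vertices stand in for the 3D scene, every other vertex is chosen as a shading point, shading is plain Lambertian, and the remaining vertices copy the nearest shaded point instead of being interpolated):

import numpy as np

rng = np.random.default_rng(6)
vertices = rng.random((200, 3))                          # stand-in scene geometry
normals = rng.normal(size=(200, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light_dir = np.array([0.0, 0.0, 1.0])

# Determine the shading points on which shading is to be performed (here: a subsample).
shading_idx = np.arange(0, len(vertices), 2)
shaded = np.clip(normals[shading_idx] @ light_dir, 0.0, 1.0)   # Lambertian shading term

# Determine shading information of the whole scene from the shaded subset.
d = np.linalg.norm(vertices[:, None, :] - vertices[shading_idx][None, :, :], axis=-1)
scene_shading = shaded[np.argmin(d, axis=1)]
print(scene_shading.shape)                               # (200,)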
Abstract:
A shadow information storing method and apparatus are disclosed. The shadow information storing apparatus determines a shadow area through rendering a three-dimensional (3D) model based on light radiated from a reference virtual light source, determines a shadow feature value of a vertex of the 3D model based on a distance between a location of the vertex of the 3D model and the shadow area, and stores the determined shadow feature value.
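The sketch below assumes the shadow area produced by rendering under the reference virtual light source is already available as a point set, and uses an assumed exponential falloff of the vertex-to-shadow distance as the stored shadow feature value.

import numpy as np

rng = np.random.default_rng(7)
vertices = rng.random((100, 3))                          # vertex locations of the 3D model
shadow_area = rng.random((40, 3)) * 0.3                  # stand-in for the rendered shadow area

# Distance from each vertex to the nearest point of the shadow area.
dist = np.linalg.norm(vertices[:, None, :] - shadow_area[None, :, :], axis=-1).min(axis=1)

# Shadow feature value: near 1 inside the shadow, decaying with distance (assumed form).
shadow_feature = np.exp(-5.0 * dist)

stored = {i: float(v) for i, v in enumerate(shadow_feature)}   # store per-vertex feature values
print(len(stored), round(stored[0], 3))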
Abstract:
Provided are a method and apparatus for modeling and restoring a target object, the method including generating a key structure of the target object based on deformation information of the target object extracted from a pre-modeling result that is based on a shape and a material property of the target object, and calculating a virtual material property corresponding to the key structure based on the material property and the key structure of the target object.
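A heavily simplified reading of this is sketched below: the deformation information is taken as per-node displacements under known forces from the pre-modeling result, the key structure is assumed to be the most strongly deforming nodes, and the virtual material property is a single spring stiffness fitted with Hooke's law; all three reductions are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(8)
forces = rng.random(50) * 10.0                           # forces applied during pre-modeling
true_k = 4.0
displacements = forces / true_k + rng.normal(scale=0.01, size=50)   # observed deformation

# Key structure: the nodes whose deformation dominates the pre-modeling result.
key_idx = np.argsort(displacements)[-10:]

# Virtual material property: least-squares stiffness explaining the key structure's deformation.
virtual_k = np.sum(forces[key_idx] * displacements[key_idx]) / np.sum(displacements[key_idx] ** 2)
print(round(float(virtual_k), 2))                        # approximately 4.0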
Abstract:
An image processing apparatus includes a first shader configured to perform a light shading operation associated with at least one light source on a three-dimensional (3D) model at a first resolution to obtain a light shading result of the first resolution; a second shader configured to perform a surface shading operation on the 3D model at a second resolution different from the first resolution to obtain a surface shading result of the second resolution; and a processor configured to generate a rendering result by combining the light shading result of the first resolution with the surface shading result of the second resolution.
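A minimal sketch of combining the two resolutions, assuming the first resolution is half of the second and using random stand-ins for the two shading results:

import numpy as np

rng = np.random.default_rng(9)
H, W = 64, 64

light_shading = rng.random((H // 2, W // 2))             # light shading result at the first (lower) resolution
surface_shading = rng.random((H, W, 3))                  # surface shading result at the second resolution

# Upsample the light shading result to the surface shading resolution (nearest neighbor).
light_up = np.repeat(np.repeat(light_shading, 2, axis=0), 2, axis=1)

# Combine the two shading results into the rendering result.
rendering = surface_shading * light_up[..., None]
print(rendering.shape)                                   # (64, 64, 3)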