Abstract:
Provided is a method and apparatus for aligning a three-dimensional (3D) model. The 3D model alignment method includes acquiring, by a processor, at least one two-dimensional (2D) input image including an object, detecting, by the processor, a feature point of the object in the at least one 2D input image using a neural network, estimating, by the processor, a 3D pose of the object in the at least one 2D input image using the neural network, retrieving, by the processor, a target 3D model based on the estimated 3D pose, and aligning, by the processor, the target 3D model and the object based on the feature point.
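The final alignment step can be illustrated as a rigid fit between corresponding points. The sketch below assumes the detected object feature points have already been matched to feature points on the retrieved 3D model; the function name and the use of the Kabsch algorithm are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def align_rigid(model_pts, object_pts):
    """Rigidly align model_pts to object_pts (Kabsch algorithm).

    model_pts, object_pts: (N, 3) arrays of corresponding feature points.
    Returns rotation R and translation t with object ~ model @ R.T + t.
    """
    mu_m = model_pts.mean(axis=0)
    mu_o = object_pts.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (model_pts - mu_m).T @ (object_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t
```

Given clean correspondences, this recovers the exact rigid transform; in practice a robust variant (e.g. with outlier rejection) would wrap this core step.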
Abstract:
An image processing apparatus and method are provided. The image processing method may include generating a mask for preventing a virtual light source from being sampled on an area of a current image frame based on virtual light source information of a previous image frame, applying the mask to the current image frame, sampling the virtual light source in the current image frame to which the mask is applied, and rendering the current image frame based on the virtual light source sampled in the current image frame.
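The mask-then-sample steps can be sketched as follows. This is a minimal illustration assuming the previous frame's virtual light sources are given as pixel coordinates and that the mask simply blocks a disc around each one; the function names and the importance-sampling scheme are assumptions:

```python
import numpy as np

def build_mask(shape, prev_vpls, radius):
    """Mask out (set False) pixels near previous-frame virtual light
    sources so they are not sampled again in the current frame."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.ones(shape, dtype=bool)
    for (py, px) in prev_vpls:
        mask &= (ys - py) ** 2 + (xs - px) ** 2 > radius ** 2
    return mask

def sample_vpls(importance, mask, n, rng):
    """Importance-sample n virtual light sources from the masked frame."""
    p = importance * mask
    p = p / p.sum()
    idx = rng.choice(p.size, size=n, replace=False, p=p.ravel())
    return np.stack(np.unravel_index(idx, importance.shape), axis=1)
```

Masked pixels receive zero probability, so no new virtual light source can land where one was already placed in the previous frame.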
Abstract:
A method of modeling an object includes defining a state transition probability and a state of each of a plurality of particles forming the object; changing a state of a particle defined to be in a first state among the plurality of particles to a second state; applying a movement model to a particle defined to be in the second state among the plurality of particles; and changing a state of the particle defined to be in the second state to the first state based on the state transition probability.
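One update of the two-state particle scheme can be sketched as below. The state names, the random-displacement movement model, and the single transition probability are all illustrative assumptions:

```python
import numpy as np

REST, MOVING = 0, 1  # hypothetical names for the first and second states

def step(positions, states, p_stop, rng):
    """One update: move MOVING particles, then return each MOVING
    particle to REST according to the transition probability p_stop."""
    moving = states == MOVING
    # movement model: small random displacement (an assumed model)
    positions[moving] += rng.normal(scale=0.1, size=(int(moving.sum()), 3))
    # state transition back to the first state
    revert = moving & (rng.random(states.size) < p_stop)
    states[revert] = REST
    return positions, states
```

Particles in the first state stay put; only particles switched into the second state are advanced by the movement model before possibly reverting.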
Abstract:
A shadow information storing method and apparatus are disclosed. The shadow information storing apparatus determines a shadow area through rendering a three-dimensional (3D) model based on light radiated from a reference virtual light source, determines a shadow feature value of a vertex of the 3D model based on a distance between a location of the vertex of the 3D model and the shadow area, and stores the determined shadow feature value.
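The distance-based feature value can be sketched as follows, assuming the shadow area is represented by sampled 3D points and an exponential falloff maps distance to a feature value (both assumptions for illustration):

```python
import numpy as np

def shadow_feature(vertices, shadow_points, falloff=1.0):
    """Per-vertex shadow feature from the distance to the shadow area:
    1.0 at the shadow boundary, decaying with distance (assumed falloff)."""
    d = np.linalg.norm(vertices[:, None, :] - shadow_points[None, :, :],
                       axis=2).min(axis=1)
    return np.exp(-d / falloff)
```

Storing such a per-vertex scalar lets a later rendering pass reconstruct soft shadowing without re-running the reference-light render.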
Abstract:
An image processing apparatus includes a calculator configured to calculate a first difference value between frames in terms of either one or both of a position and a direction of a direct light by comparing a current frame to at least one previous frame, and a determiner configured to determine that an indirect light of the current frame is to be sampled in response to the first difference value being greater than or equal to a threshold.
Abstract:
An apparatus for estimating a camera pose includes an image acquisition unit to acquire a photographed image, a motion sensor to acquire motion information of the apparatus for estimating the camera pose, a static area detector to detect a static area of the photographed image based on the photographed image and the motion information, and a pose estimator to estimate a camera pose based on the detected static area.
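The static-area detection can be sketched by comparing observed per-pixel motion against the motion predicted from the motion sensor: pixels that move only as the camera does are treated as static. The flow representation and tolerance are illustrative assumptions:

```python
import numpy as np

def static_mask(observed_flow, camera_flow, tol):
    """Pixels whose observed motion matches the camera-induced motion
    predicted from the motion sensor are treated as static background."""
    diff = np.linalg.norm(observed_flow - camera_flow, axis=-1)
    return diff < tol
```

The pose estimator would then fit the camera motion using only pixels inside this mask, so independently moving objects do not corrupt the estimate.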
Abstract:
A sampler of an image processing apparatus may sample at least one first virtual point light (VPL) from a direct light view. The sampler may sample a second VPL in a three-dimensional (3D) space independent of the direct light view. A calculator may calculate a luminance of the second VPL using a first VPL adjacent to the second VPL selected from among the at least one first VPL.
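The luminance transfer from nearby first VPLs to a space-sampled second VPL can be sketched as a nearest-neighbor estimate; the inverse-distance weighting and the parameter k are assumptions for illustration:

```python
import numpy as np

def second_vpl_luminance(second_pos, first_pos, first_lum, k=3):
    """Estimate a space-sampled VPL's luminance from the k nearest
    direct-light-view VPLs (inverse-distance weighting, an assumption)."""
    d = np.linalg.norm(first_pos - second_pos, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-8)
    return float(np.sum(w * first_lum[idx]) / np.sum(w))
```

A second VPL that coincides with a first VPL inherits essentially that VPL's luminance, and the estimate blends smoothly in between.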
Abstract:
A method of creating a model of an organ, includes creating a shape model, including a blood vessel structure, of the organ based on three-dimensional (3D) images of the organ, and compartmentalizing the shape model into areas based on an influence of a blood vessel tree with respect to a deformation of the shape model, the blood vessel tree indicating the blood vessel structure. The method further includes deforming the blood vessel structure of the shape model to fit a blood vessel structure of a two-dimensional (2D) image of the organ, and creating the model of the organ based on the deformed blood vessel structure and the areas.
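The compartmentalization step can be sketched as assigning each shape-model vertex to the blood-vessel branch with the strongest (here, nearest) influence; representing branches as sampled points with branch IDs is an illustrative assumption:

```python
import numpy as np

def compartmentalize(vertices, branch_points, branch_ids):
    """Assign each shape-model vertex to the blood-vessel branch whose
    sampled points lie closest, yielding per-branch influence areas."""
    d = np.linalg.norm(vertices[:, None, :] - branch_points[None, :, :],
                       axis=2)
    return branch_ids[np.argmin(d, axis=1)]
```

When a branch of the vessel tree is later deformed to fit the 2D image, only the vertices in its area need to follow that deformation.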
Abstract:
A method, implemented by a processor, of correcting lighting of an image includes inputting an input image to a first neural network and generating predicted lighting data corresponding to lighting of the input image and embedding data corresponding to a feature of the input image, inputting the generated predicted lighting data, the generated embedding data, and sensor data to a second neural network and generating a lighting weight corresponding to the input image, and generating correction lighting data for the input image by applying the generated lighting weight to preset basis lighting data corresponding to the input image.
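The last step, applying the lighting weight to the preset basis lighting data, is a weighted combination. A minimal sketch, assuming each basis lighting is a coefficient vector (e.g. spherical-harmonic coefficients, an assumption):

```python
import numpy as np

def correction_lighting(weights, basis):
    """Correction lighting as a weighted combination of preset basis
    lightings, one weight per basis entry."""
    return np.tensordot(weights, basis, axes=1)
```

The two networks only need to predict the low-dimensional weight vector; the basis itself stays fixed per scene.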
Abstract:
A method with image augmentation includes recognizing, based on a gaze of a user corresponding to an input image, any one or any combination of any two or more of an object of interest of the user, a situation of the object of interest, and a task of the user from partial regions of the input image; determining relevant information indicating an intention of the user based on any one or any combination of any two or more of the object of interest, the situation of the object of interest, and the task of the user; and generating a visually augmented image by visually augmenting the input image based on the relevant information.
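Selecting the gaze-driven partial region can be sketched as a clamped crop around the gaze point; the square region and fixed size are assumptions for illustration:

```python
import numpy as np

def gaze_region(image, gaze_xy, size):
    """Crop the partial region around the user's gaze point, clamped to
    the image bounds, as the area to analyze for the user's intent."""
    h, w = image.shape[:2]
    gx, gy = gaze_xy
    x0 = int(np.clip(gx - size // 2, 0, max(w - size, 0)))
    y0 = int(np.clip(gy - size // 2, 0, max(h - size, 0)))
    return image[y0:y0 + size, x0:x0 + size]
```

Recognition of the object of interest, its situation, and the user's task would then run on this region rather than the full frame.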