Abstract:
A processor-implemented method includes: extracting pyramid level color feature maps from two or more images; extracting pyramid level density feature maps based on a cost volume generated based on the color feature maps; generating neural scene representation (NSR) cube information representing a three-dimensional (3D) space based on the color feature maps and the density feature maps; and generating a two-dimensional (2D) scene of a field of view (FOV) different from a FOV of the two or more images based on the NSR cube information.
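The pyramid-feature and cost-volume steps can be sketched as follows. This is a minimal stand-in, not the patented formulation: `pyramid_features` uses 2x average pooling in place of a learned encoder, and the correlation-style cost volume over horizontal shifts is one common construction.

```python
import numpy as np

def pyramid_features(image, levels=3):
    """Build a feature pyramid by 2x average pooling (stand-in for a learned encoder)."""
    pyramid = [image]
    for _ in range(levels - 1):
        h, w, c = pyramid[-1].shape
        prev = pyramid[-1][:h - h % 2, :w - w % 2]       # crop to even size
        pyramid.append(prev.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3)))
    return pyramid

def cost_volume(feat_ref, feat_src, max_disp=4):
    """Correlation cost volume: one similarity map per candidate disparity."""
    h, w, _ = feat_ref.shape
    vol = np.zeros((max_disp, h, w))
    for d in range(max_disp):
        # correlate the reference features with the source features shifted by d
        vol[d] = (feat_ref * np.roll(feat_src, d, axis=1)).mean(axis=2)
    return vol
```

Density feature maps would then be extracted from such a volume at each pyramid level.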
Abstract:
An apparatus includes a processor configured to execute a plurality of instructions and a memory storing the plurality of instructions, wherein execution of the plurality of instructions configures the processor to encode a dynamic event including a plurality of first images received from a camera, receive, in response to a user input, a selection of a target camera position and a target light quantity from among those of the plurality of first images, and generate a target image by performing decoding based on the target camera position and the target light quantity.
Abstract:
An image processing apparatus includes a processor configured to calculate a curvature value of a first point in stereo images based on a disparity value corresponding to the first point, and refine the disparity value based on the curvature value.
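A minimal one-dimensional sketch of the idea: a discrete second derivative serves as the curvature proxy, and the refinement rule (smooth the disparity only where curvature is high) is a hypothetical stand-in for the patented refinement.

```python
import numpy as np

def curvature(disparity):
    """Discrete second derivative of the disparity profile as a curvature proxy."""
    return np.gradient(np.gradient(disparity))

def refine(disparity, thresh=0.5):
    """Replace disparity values by a local average where |curvature| is large
    (hypothetical refinement rule)."""
    c = np.abs(curvature(disparity))
    smooth = np.convolve(disparity, np.ones(3) / 3, mode="same")
    refined = disparity.copy()
    refined[c > thresh] = smooth[c > thresh]
    return refined
```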
Abstract:
A fluid particle modeling method includes determining bubble cells based on locations of fluid particles, defining a bubble based on the bubble cells, calculating a pressure of the bubble based on a change in a volume of the bubble, and updating the locations of the fluid particles based on the pressure of the bubble.
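A minimal sketch of the pressure update and the particle update, assuming an adiabatic gas closure (p·V^γ = const) for the pressure-from-volume-change step and an illustrative radial coupling to the particle positions; both choices are assumptions, not the patented formulas.

```python
import numpy as np

def bubble_pressure(p0, v0, v_new, gamma=1.4):
    """Adiabatic closure p * V**gamma = const: pressure rises as the bubble shrinks."""
    return p0 * (v0 / v_new) ** gamma

def update_positions(positions, bubble_center, pressure, k=1e-3):
    """Push fluid particles radially away from the bubble center, scaled by its pressure
    (illustrative coupling constant k)."""
    d = positions - bubble_center
    n = d / np.linalg.norm(d, axis=1, keepdims=True)
    return positions + k * pressure * n
```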
Abstract:
A modeling method based on particles, the method including generating coarse particles by down-sampling target particles corresponding to at least a portion of a target object, calculating a correcting value enabling the coarse particles to satisfy constraints of the target object based on physical attributes of the target particles, applying the correcting value to the target particles, and redefining the target particles when the target particles, with the correcting value applied, satisfy the constraints.
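The down-sampling and constraint-correction steps can be sketched with a position-based distance constraint; the strided down-sampling and the symmetric 0.5 split of the correction are simplifying assumptions, not the patented scheme.

```python
import numpy as np

def downsample(particles, factor=2):
    """Coarse particles as a strided subset of the target particles (simplified)."""
    return particles[::factor]

def distance_correction(p, q, rest_len):
    """Position-based correction restoring a rest-length constraint between two particles."""
    d = q - p
    dist = np.linalg.norm(d)
    corr = 0.5 * (dist - rest_len) * d / dist
    return corr, -corr  # move p toward q and q toward p (or apart) by equal amounts
```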
Abstract:
Provided is a method of modeling a target object that may obtain depth information from an image in which the target object is represented in the form of particles, obtain distance information between adjacent particles in the image, and detect a silhouette of the target object based on the depth information and the distance information between the adjacent particles in the image.
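A minimal sketch of the silhouette test, assuming a pixel is marked as silhouette when the depth jump to a horizontal neighbor exceeds a multiple of the inter-particle spacing; the threshold rule and the factor k are hypothetical.

```python
import numpy as np

def silhouette_mask(depth, spacing, k=2.0):
    """Mark pixels whose depth jump to a horizontal neighbor exceeds k times
    the inter-particle spacing (hypothetical threshold rule)."""
    jump = np.abs(np.diff(depth, axis=1))
    mask = np.zeros_like(depth, dtype=bool)
    mask[:, :-1] |= jump > k * spacing  # left side of the discontinuity
    mask[:, 1:] |= jump > k * spacing   # right side of the discontinuity
    return mask
```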
Abstract:
Provided are a method and apparatus for modeling objects that may include detecting an adjacent area shared by modeled particles of a first object and modeled particles of a second object, calculating an action force between the first object and the second object in the adjacent area based on information stored for grid points of a grid defined with respect to the adjacent area, and modeling the first object and the second object based on the calculated action force.
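One way to sketch the grid-based coupling: scatter each object's particles to grid points in the adjacent area, then derive an action-force magnitude per grid point from the overlap of the two densities. The repulsive-overlap model and the constant k are illustrative assumptions, not the patented force law.

```python
import numpy as np

def grid_density(points, grid_shape, cell=1.0):
    """Scatter particle counts to nearest grid cells (nearest-cell binning)."""
    dens = np.zeros(grid_shape)
    idx = np.clip((points / cell).astype(int), 0, np.array(grid_shape) - 1)
    for i, j in idx:
        dens[i, j] += 1.0
    return dens

def action_force_magnitude(dens_a, dens_b, k=0.5):
    """Repulsive action force at each grid point, proportional to the overlap
    of the two objects' densities (illustrative coupling constant k)."""
    return k * dens_a * dens_b
```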
Abstract:
A method of determining an illumination pattern includes constructing a dataset by estimating a first surface normal vector of a three-dimensional (3D) object from a first image obtained by capturing the 3D object of which surface normal information is known, the dataset including basis images of the 3D object; generating simulation images in which virtual illumination patterns, obtained based on a combination of the basis images, are applied to the 3D object; estimating a second surface normal vector of the 3D object, by reconstructing a surface normal using a photometric stereo technique based on the virtual illumination patterns and simulation images corresponding to the virtual illumination patterns; and training a neural network to determine an illumination pattern based on a difference between the first surface normal vector and the second surface normal vector.
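The simulation and reconstruction steps rest on two standard identities: a simulation image under a virtual illumination pattern is a linear combination of basis images (superposition of light sources), and under a Lambertian model the surface normal follows from a least-squares solve. The sketch below is classical photometric stereo, not the trained network itself.

```python
import numpy as np

def simulate_image(basis_images, pattern_weights):
    """Simulation image as a linear combination of basis images (light superposition)."""
    return np.tensordot(pattern_weights, basis_images, axes=1)

def photometric_stereo_normal(L, I):
    """Lambertian photometric stereo: I = L @ (albedo * n); recover n by least squares.
    L: (num_lights, 3) light directions, I: (num_lights,) intensities at one pixel."""
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo
```

Training would then compare the normals recovered this way against the known ground-truth normals.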
Abstract:
Disclosed are a method and device for representing rendered scenes. A data processing method of training a neural network model includes obtaining spatial information of sampling data, obtaining one or more volume-rendering parameters by inputting the spatial information of the sampling data to the neural network model, obtaining a regularization term based on a distribution of the volume-rendering parameters, performing volume rendering based on the volume-rendering parameters, and training the neural network model to minimize a loss function determined based on the regularization term and on a difference between a ground truth image and an image estimated according to the volume rendering.
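The volume-rendering step and one possible distribution-based regularization term can be sketched as follows. The compositing is the standard emission-absorption formulation; the entropy regularizer is only an illustrative choice of a "regularization term based on a distribution of the volume-rendering parameters."

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Emission-absorption compositing of densities (sigmas) and colors along one ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                       # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # transmittance
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0), weights

def entropy_regularizer(weights, eps=1e-8):
    """Entropy of the normalized weight distribution; penalizing it encourages
    weights to concentrate near surfaces (illustrative regularizer)."""
    p = weights / (weights.sum() + eps)
    return -(p * np.log(p + eps)).sum()
```

The total loss would combine the image difference with such a term.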
Abstract:
A method with object detection includes: obtaining a first point cloud feature based on point cloud data of an image; and determining at least one object in the image based on the first point cloud feature.
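A minimal sketch of obtaining a first point cloud feature, using per-voxel max-pooling in the style of VoxelNet/PointPillars; the voxel size, grid extent, and pooling choice are illustrative assumptions, and a detection head would consume the resulting feature grid.

```python
import numpy as np

def voxel_max_pool(points, feats, voxel=1.0, grid=(4, 4, 4)):
    """Max-pool per-point features into a fixed voxel grid (simplified voxel encoder)."""
    out = np.full(grid + (feats.shape[1],), -np.inf)
    idx = np.clip((points / voxel).astype(int), 0, np.array(grid) - 1)
    for (i, j, k), f in zip(idx, feats):
        out[i, j, k] = np.maximum(out[i, j, k], f)  # keep the max feature per voxel
    out[np.isinf(out)] = 0.0                        # empty voxels get zero features
    return out
```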