Abstract:
A method and corresponding apparatus for modeling objects include detecting an overlapping area between first and second objects, each comprising particles. The method and corresponding apparatus also calculate, in the overlapping area, an action force between the first object and the second object, and model the first object and the second object based on the action force.
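The overlap-and-force computation described in this abstract can be sketched minimally. The assumptions below are not from the abstract: particles are 2D points with a fixed radius, and the action force is a linear penalty spring (the abstract does not specify the force law).

```python
# Minimal sketch of a particle-based contact force between two objects.
# Assumptions: 2D point particles with a shared radius, and a linear
# spring penalty proportional to penetration depth (hypothetical force law).

def overlap_force(obj_a, obj_b, radius=1.0, stiffness=10.0):
    """Sum the repulsive force on obj_a from particles of obj_b that overlap it."""
    fx, fy = 0.0, 0.0
    for ax, ay in obj_a:
        for bx, by in obj_b:
            dx, dy = ax - bx, ay - by
            dist = (dx * dx + dy * dy) ** 0.5
            penetration = 2 * radius - dist
            if penetration > 0 and dist > 0:
                # Magnitude proportional to penetration depth, directed
                # from the b-particle toward the a-particle (pushing apart).
                f = stiffness * penetration / dist
                fx += f * dx
                fy += f * dy
    return fx, fy
```

By Newton's third law, the force on the second object is the negation of the returned vector, so one pass over the overlapping pairs suffices to model both objects.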
Abstract:
A rendering method and rendering device are provided. The rendering method includes obtaining a target image corresponding to a target view by inputting parameter information corresponding to the target view to a neural scene representation (NSR) model, determining an adjacent view that satisfies a predetermined condition with respect to the target view, obtaining an adjacent image corresponding to the adjacent view by inputting parameter information corresponding to the adjacent view to the NSR model, and obtaining a final image by correcting the target image based on the adjacent image.
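The final correction step can be illustrated with a per-pixel blend of the target image and the adjacent image. The fixed blending weight is an assumption; the abstract does not specify how the correction combines the two rendered images.

```python
def correct_with_adjacent(target, adjacent, weight=0.25):
    """Blend each target pixel with the corresponding adjacent-view pixel.

    `target` and `adjacent` are equally sized 2D lists of intensities.
    The scalar `weight` is a hypothetical parameter standing in for
    whatever correction rule the device applies.
    """
    return [
        [(1 - weight) * t + weight * a for t, a in zip(trow, arow)]
        for trow, arow in zip(target, adjacent)
    ]
```

In practice the adjacent view would first be reprojected into the target view before blending; that geometric step is omitted here for brevity.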
Abstract:
An electronic device includes: one or more processors configured to: extract, using an implicit neural representation (INR) model, a global geometry feature and information indicating whether a point is on a surface from a viewpoint and a view direction corresponding to an image pixel corresponding to a two-dimensional (2D) scene at the viewpoint within a field of view (FOV); determine an object surface position corresponding to the viewpoint and the view direction and normal information of the object surface position based on the information indicating whether the point is on the surface; estimate, using an albedo estimation model, albedo information independent of the view direction from the global geometry feature, the object surface position, and the normal information; and estimate, using a specular estimation model, specular information dependent on the view direction from the global geometry feature, the object surface position, the normal information, and the view direction.
Abstract:
A device includes a processor configured to: generate, for each of plural query inputs, point information using factors individually extracted from a plurality of pieces of factor data for the corresponding query input; and generate pixel information of a pixel position using the point information of the points, the plural query inputs corresponding to points, in a three-dimensional (3D) space, on a view direction from a viewpoint toward the pixel position of a two-dimensional (2D) scene.
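The step of turning per-point information along a view direction into pixel information can be sketched with standard front-to-back alpha compositing. The (color, alpha) samples below are assumptions standing in for the abstract's factor-derived point information.

```python
def pixel_from_points(point_infos):
    """Composite per-point (color, alpha) samples along a ray into one pixel value.

    `point_infos` is a list of (color, alpha) pairs ordered from the viewpoint
    outward. This is conventional volume-rendering compositing; the factor
    extraction described in the abstract is replaced by precomputed samples.
    """
    color, transmittance = 0.0, 1.0
    for c, alpha in point_infos:
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return color
```

A fully opaque sample (alpha of 1.0) drives the remaining transmittance to zero, so points behind it contribute nothing, matching the intuition that occluded points should not affect the pixel.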
Abstract:
A method and apparatus for neural rendering based on view augmentation are provided. A method of training a neural scene representation (NSR) model includes: receiving original training images of a target scene, the original training images respectively corresponding to base views of the target scene; generating augmented images of the target scene by warping the original training images, the augmented images respectively corresponding to new views of the target scene; performing background-foreground segmentation on the original training images and the augmented images to generate segmentation masks; and training the NSR model to be configured for volume rendering of the target scene using the original training images, the augmented images, and the segmentation masks.
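The augmentation and segmentation steps can be illustrated on a toy image. The one-pixel horizontal shift is a hypothetical stand-in for the geometric warp to a new view, and the mask is obtained by thresholding against a known background value, which is also an assumption.

```python
def augment_and_segment(image, shift=1, bg_value=0):
    """Create a new-view image by a horizontal shift warp (a stand-in for the
    geometric warp in the abstract) and a background-foreground mask obtained
    by comparing against an assumed background value."""
    h = len(image)
    w = len(image[0])
    # Shift pixels to the right, clamping at the image border.
    warped = [[image[y][max(0, min(w - 1, x - shift))] for x in range(w)]
              for y in range(h)]
    # Foreground (1) wherever the pixel differs from the background value.
    mask = [[1 if v != bg_value else 0 for v in row] for row in warped]
    return warped, mask
```

A real implementation would warp with a homography or depth-based reprojection and segment with a learned model, but the data flow (original image in, augmented image and mask out) is the same.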
Abstract:
An electronic device is provided. The electronic device includes an ultrasonic sensor, an electromagnetic wave sensor, a memory storing at least one instruction, and a processor electronically connected to the memory, wherein the processor is configured to: control the ultrasonic sensor to emit ultrasonic waves in a direction of clothing; based on the ultrasonic waves reflected by the clothing being received through the ultrasonic sensor, acquire sound information based on the received ultrasonic waves; control the electromagnetic wave sensor to emit electromagnetic waves in the direction of the clothing; based on the electromagnetic waves reflected by the clothing being received through the electromagnetic wave sensor, acquire spectrum information based on the received electromagnetic waves; and input the sound information and the spectrum information to a neural network model to acquire contamination level information about the clothing.
Abstract:
An apparatus that transfers object motion in a source space to a target space is provided. The apparatus defines a mapping function from the source space to the target space based on feature points of the source space in which the object is positioned and feature points of the target space in which the object is represented; determines a target root position corresponding to a root position of the object based on the mapping function; determines a target direction corresponding to a direction of the object based on the mapping function; determines a target main joint corresponding to a main joint of the object based on the mapping function; determines a target sub-joint, excluding the target main joint, in the target space based on unique joint information of the object; and generates data representing the object motion in the target space by modifying a pose of the object in the target space to match the target main joint.
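One simple way to realize a mapping function from feature points of the source space to feature points of the target space is a per-axis affine map fit from their bounding boxes; the actual apparatus may use a richer correspondence, so this is only an illustrative sketch.

```python
def make_mapping(src_points, tgt_points):
    """Build a per-axis affine map from source-space feature points to
    target-space feature points using their bounding boxes (a hypothetical,
    simplified form of the abstract's mapping function)."""
    def axis_map(svals, tvals):
        smin, smax = min(svals), max(svals)
        tmin, tmax = min(tvals), max(tvals)
        scale = (tmax - tmin) / (smax - smin)
        return lambda v: tmin + (v - smin) * scale

    dims = len(src_points[0])
    maps = [axis_map([p[d] for p in src_points], [p[d] for p in tgt_points])
            for d in range(dims)]
    # The returned callable maps any source-space point (e.g. the object's
    # root position or a main-joint position) into the target space.
    return lambda point: tuple(m(v) for m, v in zip(maps, point))
```

Applying the returned callable to the root position, direction endpoints, and main-joint positions yields their target-space counterparts; sub-joints would then be placed from the object's own joint information rather than through the map.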
Abstract:
A method with global localization includes: extracting a feature by applying an input image to a first network; estimating a coordinate map corresponding to the input image by applying the extracted feature to a second network; and estimating a pose corresponding to the input image based on the estimated coordinate map, wherein either one or both of the first network and the second network is trained based on either one or both of: a first generative adversarial network (GAN) loss determined based on a first feature extracted by the first network based on a synthetic image determined by three-dimensional (3D) map data and a second feature extracted by the first network based on a real image; and a second GAN loss determined based on a first coordinate map estimated by the second network based on the first feature and a second coordinate map estimated by the second network based on the second feature.
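The two GAN losses in this abstract each compare an output computed from the synthetic-image branch with the same output computed from the real-image branch. A minimal sketch of such a loss, with the networks omitted and discriminator scores given directly as sigmoid probabilities (an assumption), is:

```python
import math

def gan_losses(d_real, d_synth):
    """Non-saturating GAN losses over discriminator scores for features from
    real images (d_real) and synthetic images (d_synth), each a list of
    probabilities in (0, 1). Illustrative only: the feature extractor,
    coordinate-map network, and discriminator are omitted."""
    # Discriminator: score real features as 1 and synthetic features as 0.
    disc_loss = (-sum(math.log(p) for p in d_real) / len(d_real)
                 - sum(math.log(1 - p) for p in d_synth) / len(d_synth))
    # Generator side: push synthetic features to be scored as real, which
    # is what aligns the two feature distributions during training.
    gen_loss = -sum(math.log(p) for p in d_synth) / len(d_synth)
    return disc_loss, gen_loss
```

The same construction applies at both stages described in the abstract: once on the features from the first network, and once on the coordinate maps from the second network.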
Abstract:
A method with object pose estimation includes: obtaining an instance segmentation image and a normalized object coordinate space (NOCS) map by processing an input single-frame image using a deep neural network (DNN); obtaining a two-dimensional and three-dimensional (2D-3D) mapping relationship based on the instance segmentation image and the NOCS map; and determining a pose of an object instance in the input single-frame image based on the 2D-3D mapping relationship.
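Given the 2D-3D mapping, a pose can be recovered by aligning NOCS coordinates with back-projected scene points. The sketch below recovers only scale and translation via centroid and spread alignment; a full solver (e.g. a Umeyama-style similarity fit or PnP) would also estimate rotation, so this is a deliberately reduced illustration.

```python
def similarity_from_nocs(nocs_pts, obs_pts):
    """Recover object scale and translation from matched NOCS coordinates
    and observed 3D points (lists of (x, y, z) tuples). Rotation is
    omitted for brevity."""
    n = len(nocs_pts)
    cn = [sum(p[d] for p in nocs_pts) / n for d in range(3)]   # NOCS centroid
    co = [sum(p[d] for p in obs_pts) / n for d in range(3)]    # observed centroid
    spread_n = sum(sum((p[d] - cn[d]) ** 2 for d in range(3)) for p in nocs_pts)
    spread_o = sum(sum((p[d] - co[d]) ** 2 for d in range(3)) for p in obs_pts)
    # Scale matches the spread of the two point sets; translation then
    # aligns the scaled NOCS centroid with the observed centroid.
    scale = (spread_o / spread_n) ** 0.5
    translation = [co[d] - scale * cn[d] for d in range(3)]
    return scale, translation
```

Because NOCS coordinates are normalized to a unit cube, the recovered scale directly gives the metric size of the object instance.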
Abstract:
Disclosed are a method and apparatus for detecting a road line, including segmenting driving image data into a plurality of segmentation areas, determining a candidate vanishing-point area corresponding to a segmentation area of the segmentation areas, extracting at least one straight road line from the segmentation area, detecting a partial line corresponding to the segmentation area based on whether the at least one straight road line meets the candidate vanishing-point area, detecting the road line of the driving image data by connecting partial lines corresponding to the segmentation areas, and indicating the detected road line.
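The check of whether a straight road line meets the candidate vanishing-point area reduces to a line-versus-rectangle intersection test. A sketch using the standard slab method, with the line given as a point plus direction and the area as an axis-aligned box (both assumptions about the representation), is:

```python
def passes_through_area(line, area):
    """Test whether an infinite line ((x0, y0), (dx, dy)) intersects an
    axis-aligned rectangle (xmin, ymin, xmax, ymax), via the slab method."""
    (x0, y0), (dx, dy) = line
    xmin, ymin, xmax, ymax = area
    tmin, tmax = float("-inf"), float("inf")
    for p, d, lo, hi in ((x0, dx, xmin, xmax), (y0, dy, ymin, ymax)):
        if d == 0:
            # Line is parallel to this slab: it must already lie inside it.
            if not (lo <= p <= hi):
                return False
        else:
            t1, t2 = (lo - p) / d, (hi - p) / d
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
    return tmin <= tmax
```

Candidate lines extracted from a segmentation area that fail this test would be discarded as not converging toward the vanishing point, leaving only plausible partial road lines to be connected across areas.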