Abstract:
A method, an apparatus, and an electronic device for estimating a pose of an object include determining a confidence of a depth image of the object based on a color image and the depth image of the object, estimating a pose of the object based on a three-dimensional (3D) keypoint in response to the depth image being reliable, and estimating the pose of the object based on a two-dimensional (2D) keypoint in response to the depth image being unreliable.
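A minimal sketch of the branch described above, assuming matched keypoints are already available. The confidence measure (fraction of valid depth pixels), the 0.5 threshold, and the use of Kabsch alignment and OpenCV's solvePnP for the two branches are illustrative stand-ins, not the disclosed method:

    import numpy as np
    import cv2  # used only by the 2D-keypoint branch

    def depth_confidence(color_img, depth_img):
        # Simplified stand-in: fraction of valid (non-zero) depth pixels; the disclosed
        # confidence is described as depending on both the color and the depth image.
        return float(np.count_nonzero(depth_img) / depth_img.size)

    def pose_from_3d_keypoints(model_pts, observed_pts):
        # Kabsch alignment of matched 3D keypoints -> rotation R, translation t.
        mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
        H = (model_pts - mc).T @ (observed_pts - oc)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, oc - R @ mc

    def pose_from_2d_keypoints(model_pts, image_pts, camera_matrix):
        # PnP from 2D keypoints alone when the depth image is judged unreliable.
        _, rvec, tvec = cv2.solvePnP(model_pts.astype(np.float64),
                                     image_pts.astype(np.float64),
                                     camera_matrix, None)
        R, _ = cv2.Rodrigues(rvec)
        return R, tvec.ravel()

    def estimate_pose(color_img, depth_img, model_pts, kp3d_obs, kp2d_obs, K, thresh=0.5):
        if depth_confidence(color_img, depth_img) >= thresh:   # depth deemed reliable
            return pose_from_3d_keypoints(model_pts, kp3d_obs)
        return pose_from_2d_keypoints(model_pts, kp2d_obs, K)  # depth deemed unreliable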
Abstract:
An electronic device is disclosed, including a display, a camera, and at least one processor. The processor implements a method including: displaying a first preview image acquired through the camera on the display; identifying a category for each object included in the first preview image; applying adjustment filters to each object, each adjustment filter selected based on the identified category; displaying a second preview image on the display, in which each object is visually altered by the applied adjustment filters; displaying, on the second preview image, a plurality of selectable icons each corresponding to one of the identified categories; and, in response to receiving a first input selecting a first selectable icon, removing application of a first adjustment filter from a first object belonging to a category corresponding to the first selectable icon.
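An illustrative data-model sketch of the per-category filtering and the icon-driven removal; the category names, the filter table, and the whole-frame filter application are assumptions made for brevity and do not reflect the device's actual pipeline:

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    Filter = Callable[[object], object]  # an image-adjustment function

    # Hypothetical per-category filter table (identity placeholders for, e.g.,
    # a warm-tone filter, skin smoothing, or a contrast boost).
    CATEGORY_FILTERS: Dict[str, Filter] = {
        "food": lambda img: img,
        "person": lambda img: img,
        "sky": lambda img: img,
    }

    @dataclass
    class DetectedObject:
        category: str          # category identified for the object
        filter_enabled: bool = True

    def render_second_preview(frame, objects: List[DetectedObject]):
        # Apply each object's category filter only while it is enabled
        # (simplified: the filter is applied to the whole frame, not the object region).
        for obj in objects:
            if obj.filter_enabled and obj.category in CATEGORY_FILTERS:
                frame = CATEGORY_FILTERS[obj.category](frame)
        return frame

    def on_icon_selected(objects: List[DetectedObject], category: str):
        # A first input on a category icon removes that category's adjustment filter.
        for obj in objects:
            if obj.category == category:
                obj.filter_enabled = False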
Abstract:
Provided are a method and an apparatus for rendering a target fluid that include defining level-set information of fluid particles constituting a modeled target fluid. The level-set information comprises a shortest distance from a surface of the target fluid to the fluid particles. The method and apparatus further include discarding internal particles of the target fluid from the modeled target fluid based on the level-set information of the fluid particles, and calculating thickness information of the target fluid, from which the internal particles are discarded, based on depth information on the fluid particles. The target fluid is rendered based on the thickness information of the target fluid.
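A rough sketch of the two steps, assuming the level-set value stores each particle's shortest distance to the fluid surface; the band width and the project callback (3D particle to pixel coordinates) are assumed inputs, and thickness is taken as the per-pixel depth extent of the surviving particles:

    import numpy as np

    def discard_internal_particles(positions, level_set, band=0.05):
        # Particles farther from the surface than `band` are treated as internal.
        keep = np.abs(level_set) <= band
        return positions[keep], keep

    def thickness_map(positions, depths, width, height, project):
        # Per-pixel thickness from the depth extent (far - near) of the kept particles.
        near = np.full((height, width), np.inf)
        far = np.full((height, width), -np.inf)
        for p, d in zip(positions, depths):
            x, y = project(p)
            if 0 <= x < width and 0 <= y < height:
                near[y, x] = min(near[y, x], d)
                far[y, x] = max(far[y, x], d)
        return np.where(np.isfinite(near), far - near, 0.0)  # feeds the final shading pass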
Abstract:
A method of displaying caustics includes determining intersection positions at which rays emitted from a light source pass through particles of a first object and meet a second object; applying caustic textures to the intersection positions; and rendering the first object using a caustic map based on a result of applying the caustic textures to the intersection positions.
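A simplified sketch of building the caustic map: rays from the light are refracted at the particles of the first object and intersected with a receiver plane standing in for the second object; each hit adds a single count where a textured footprint would normally be splatted. The refraction ratio, receiver plane, and map extent are assumptions:

    import numpy as np

    def refract(d, n, eta):
        # Snell refraction of unit direction d at unit normal n (ratio eta = n1 / n2).
        cos_i = -float(np.dot(n, d))
        k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
        if k < 0.0:
            return None  # total internal reflection
        return eta * d + (eta * cos_i - np.sqrt(k)) * n

    def caustic_map(particles, normals, light_pos, plane_z, res=256, eta=0.75, extent=2.0):
        cmap = np.zeros((res, res))
        for p, n in zip(particles, normals):
            d = p - light_pos
            d = d / np.linalg.norm(d)
            r = refract(d, n, eta)
            if r is None or r[2] == 0.0:
                continue
            t = (plane_z - p[2]) / r[2]
            if t <= 0.0:
                continue
            hit = p + t * r  # intersection position on the receiver
            u = int((hit[0] / extent + 0.5) * (res - 1))
            v = int((hit[1] / extent + 0.5) * (res - 1))
            if 0 <= u < res and 0 <= v < res:
                cmap[v, u] += 1.0  # splat; a caustic texture footprint could be added here
        return cmap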
Abstract:
Provided are a method and a corresponding apparatus for modeling a deformable body. The method includes obtaining a material property corresponding to an internal skeletal structure of a deformable body. The method calculates a displacement amount of the skeletal structure according to a motion of the deformable body, based on a boundary condition of the skeletal structure and the material property. The method further calculates displacement amounts of surface particles on a surface of the deformable body, based on the displacement amount of the skeletal structure, and models the deformable body based on the calculated displacement amounts of the surface particles.
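A toy sketch of the two displacement steps, assuming the skeletal structure is a set of nodes, the boundary condition is a mask of fixed nodes, and the material property reduces to a scalar stiffness used for inverse-distance weighting; a physically based solver would replace both functions:

    import numpy as np

    def skeleton_displacement(rest_nodes, moved_nodes, fixed_mask):
        # Boundary condition: nodes flagged as fixed do not move; the rest follow the motion.
        disp = moved_nodes - rest_nodes
        disp[fixed_mask] = 0.0
        return disp

    def surface_displacement(surface_pts, skel_nodes, skel_disp, stiffness=1.0):
        # Propagate skeletal displacement to surface particles with inverse-distance
        # weights scaled by the (scalar) material property.
        d = np.linalg.norm(surface_pts[:, None, :] - skel_nodes[None, :, :], axis=-1)
        w = stiffness / (d + 1e-6)
        w /= w.sum(axis=1, keepdims=True)
        return w @ skel_disp  # one displacement vector per surface particle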
Abstract:
An image processing method and an image processing apparatus are provided. The image processing method includes: receiving a camera pose of a camera corresponding to a target scene; generating prediction information including either a color of an object included in the target scene or a density of the object, wherein the prediction information is generated by applying, to a neural network model, three-dimensional (3D) points on a camera ray formed based on the camera pose; sampling, among the 3D points, target points corresponding to a static object, wherein the sampling is based on the prediction information; and outputting a rendered image corresponding to the target scene by projecting a pixel value corresponding to the target points onto the target scene and rendering the target scene onto which the pixel value is projected.
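A minimal per-ray sketch of the sampling-and-rendering step, assuming the neural network model also returns a static-object probability that is used to select the target points before standard alpha compositing; the sampling range and count are illustrative:

    import numpy as np

    def render_ray(model, origin, direction, near=0.1, far=4.0, n_samples=64, static_thresh=0.5):
        # Sample 3D points along the camera ray, query the model for color, density and
        # an (assumed) static-object probability, keep only the static target points,
        # and alpha-composite them into one pixel value.
        t = np.linspace(near, far, n_samples)
        pts = origin[None, :] + t[:, None] * direction[None, :]
        rgb, sigma, p_static = model(pts)               # assumed model outputs
        sigma = np.where(p_static >= static_thresh, sigma, 0.0)
        delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))
        alpha = 1.0 - np.exp(-sigma * delta)
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
        weights = alpha * trans
        return (weights[:, None] * rgb).sum(axis=0)     # rendered pixel value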
Abstract:
An electronic device extracts, from point information on a target point and time information on a target time instant, a plurality of pieces of feature data using a plurality of feature extraction models, obtains spacetime feature data based on interpolation of the pieces of feature data, and generates scene information on the target point at the target time instant from the spacetime feature data and a view direction based on a scene information estimation model.
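A small sketch of the interpolation step, assuming each feature extraction model is represented by a callable feature grid tied to one time sample and that interpolation is linear in time; the estimator callable stands in for the scene information estimation model:

    import numpy as np

    def spacetime_feature(point, t, feature_grids, grid_times):
        # Interpolate between the feature data of the two time samples bracketing t.
        idx = int(np.searchsorted(grid_times, t))
        i0, i1 = max(idx - 1, 0), min(idx, len(grid_times) - 1)
        f0, f1 = feature_grids[i0](point), feature_grids[i1](point)
        if i0 == i1:
            return f0
        w = (t - grid_times[i0]) / (grid_times[i1] - grid_times[i0])
        return (1.0 - w) * f0 + w * f1

    def scene_info(point, t, view_dir, feature_grids, grid_times, estimator):
        # `estimator` maps the spacetime feature and the view direction to scene information.
        f = spacetime_feature(point, t, feature_grids, grid_times)
        return estimator(np.concatenate([f, view_dir]))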
Abstract:
Various embodiments of the present invention relate to an electronic device and a recording method thereof. The electronic device may comprise: a touch screen; an image sensor for capturing an image; a processor operatively connected to the touch screen and the image sensor; and a memory operatively connected to the processor, wherein the memory stores instructions which, when executed, cause the processor to designate at least a partial screen area of the touch screen as a motion detection area, determine whether a motion is detected in the motion detection area, and control the image sensor, in response to detecting the motion in the motion detection area, so as to perform super slow recording. Various other embodiments are also possible.
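A simplified sketch of the trigger logic, using mean absolute frame differencing inside the designated area as a stand-in for the device's motion detector; roi and start_super_slow_recording are assumed inputs:

    import numpy as np

    def motion_detected(prev_frame, frame, roi, threshold=12.0):
        # roi = (x, y, w, h) is the user-designated motion detection area; motion is
        # flagged when the mean absolute luminance change inside it exceeds the threshold.
        x, y, w, h = roi
        a = prev_frame[y:y + h, x:x + w].astype(np.float32)
        b = frame[y:y + h, x:x + w].astype(np.float32)
        return float(np.abs(b - a).mean()) > threshold

    def on_preview_frame(prev_frame, frame, roi, start_super_slow_recording):
        # Start super slow recording the moment motion appears in the designated area.
        if prev_frame is not None and motion_detected(prev_frame, frame, roi):
            start_super_slow_recording()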
Abstract:
A stereo matching method includes extracting feature points of a first image and feature points of a second image, the first image and the second image together constituting a stereo image; determining reference points by matching the feature points of the second image to the feature points of the first image; classifying the reference points; and performing stereo matching on pixels of the first image and the second image for which disparities are not yet determined, based on disparities of reference points determined for those pixels according to a result of the classifying.
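A sketch of the final propagation step, assuming the classification assigns a label to every reference point and to every pixel whose disparity is still unknown, and that such a pixel simply inherits the disparity of the nearest reference point sharing its label; the actual matching cost and search are omitted:

    import numpy as np

    def propagate_disparities(ref_points, ref_disp, ref_labels, pixels, pixel_labels):
        # ref_points: (K, 2) positions of matched reference points with known disparities.
        out = np.zeros(len(pixels))
        for i, (p, lbl) in enumerate(zip(pixels, pixel_labels)):
            same = ref_labels == lbl
            cand_pts = ref_points[same] if same.any() else ref_points
            cand_disp = ref_disp[same] if same.any() else ref_disp
            j = int(np.argmin(np.linalg.norm(cand_pts - p, axis=1)))
            out[i] = cand_disp[j]  # disparity guess used to constrain the pixel's matching
        return out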
Abstract:
An electronic device is provided. The electronic device includes a camera including a plurality of lenses, a display, and a processor. The processor is configured to display, in a first photographing mode, a plurality of icons corresponding to the plurality of lenses based on first position information, and, upon selection of a first icon from the plurality of icons by a first gesture in the first photographing mode, to switch to a second photographing mode and display a zoom control region that includes a plurality of zoom levels, in which a first zoom level of a first lens corresponding to the first icon serves as a reference zoom level, together with the plurality of icons rearranged based on second position information corresponding to the plurality of zoom levels.
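A schematic sketch of the two modes, with the zoom steps and the icon positions (fractions along the zoom control region) chosen purely for illustration; the Lens fields and the steps tuple are assumptions:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Lens:
        name: str
        zoom: float  # native zoom level of the lens (e.g. 0.5x, 1.0x, 3.0x)

    def first_mode_icons(lenses: List[Lens]) -> List[Lens]:
        # First photographing mode: one icon per lens, ordered by native zoom level.
        return sorted(lenses, key=lambda l: l.zoom)

    def enter_second_mode(selected: Lens, lenses: List[Lens], steps=(0.5, 1.0, 2.0, 3.0, 5.0)):
        # Second photographing mode: the selected lens's zoom becomes the reference level
        # of the zoom control region, and the icons are rearranged onto the positions of
        # their zoom levels within that region.
        zooms = sorted(set(steps) | {l.zoom for l in lenses})
        span = (zooms[-1] - zooms[0]) or 1.0
        positions = {l.name: (l.zoom - zooms[0]) / span for l in lenses}
        return {"reference_zoom": selected.zoom, "zoom_levels": zooms, "icon_positions": positions}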