Abstract:
An image processing apparatus includes a processor configured to calculate a curvature value of a first point in stereo images based on a disparity value corresponding to the first point, and refine the disparity value based on the curvature value.
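As an illustrative sketch only (the abstract does not specify the apparatus's actual computation), curvature-guided disparity refinement might use a discrete Laplacian of the disparity map as the curvature proxy and pull high-curvature points toward their neighborhood mean. The blending scheme and weights here are assumptions:

```python
import numpy as np

def refine_disparity(disparity, smooth_weight=0.5):
    """Refine a disparity map using per-pixel curvature values.

    The discrete Laplacian serves as a curvature proxy; where its
    magnitude is large, the disparity is blended toward the local
    4-neighbor mean. (Illustrative scheme, not the claimed method.)
    """
    d = disparity.astype(np.float64)
    # Discrete Laplacian as a simple curvature estimate (interior only).
    curv = np.zeros_like(d)
    curv[1:-1, 1:-1] = (
        d[:-2, 1:-1] + d[2:, 1:-1] + d[1:-1, :-2] + d[1:-1, 2:]
        - 4.0 * d[1:-1, 1:-1]
    )
    # 4-neighbor mean used as the refinement target.
    mean = d.copy()
    mean[1:-1, 1:-1] = (
        d[:-2, 1:-1] + d[2:, 1:-1] + d[1:-1, :-2] + d[1:-1, 2:]
    ) / 4.0
    # Blend more strongly where curvature magnitude is large.
    w = smooth_weight * np.tanh(np.abs(curv))
    return (1.0 - w) * d + w * mean
```

A noisy spike in an otherwise smooth disparity region has high curvature, so it is damped, while flat regions pass through unchanged.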
Abstract:
A fluid particle modeling method includes determining bubble cells based on locations of fluid particles, defining a bubble based on the bubble cells, calculating a pressure of the bubble based on a change in a volume of the bubble, and updating the locations of the fluid particles based on the pressure of the bubble.
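A minimal sketch of the volume-to-pressure step, assuming the bubble's volume is estimated by counting bubble cells on a grid and that pressure follows an adiabatic ideal-gas relation (the abstract does not state which gas law is used; the cell volume here is a made-up constant):

```python
import numpy as np

CELL_VOLUME = 1.0e-9  # assumed per-cell volume in m^3; illustrative only

def bubble_volume(bubble_cells):
    """Volume of a bubble defined by a boolean grid of bubble cells."""
    return np.count_nonzero(bubble_cells) * CELL_VOLUME

def bubble_pressure(rest_volume, volume, rest_pressure=101325.0, gamma=1.4):
    """Adiabatic gas law p = p0 * (V0 / V)**gamma: pressure rises as
    the bubble is compressed below its rest volume."""
    return rest_pressure * (rest_volume / volume) ** gamma
```

The resulting pressure would then drive forces on the surrounding fluid particles, whose updated locations in turn redefine the bubble cells on the next step.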
Abstract:
A method of displaying an illumination includes: based on illumination information, determining, using at least one processor, an illumination area to which an illumination assigned in a virtual space is projected; and visualizing, using the at least one processor, an illumination effect produced by the illumination with respect to a determined boundary area in the illumination area, the determined boundary area including a boundary and a portion of the determined illumination area that is less than all of the determined illumination area.
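One way to isolate such a boundary area, sketched here on a boolean illumination mask (the actual determination in the method is not specified), is to erode the mask and keep only the pixels lost to erosion, i.e. a thin band along the boundary:

```python
import numpy as np

def boundary_band(illum_mask, width=1):
    """Pixels of the illumination area within `width` cells of its
    boundary (illustrative erosion-based band, 4-connectivity)."""
    interior = illum_mask.copy()
    for _ in range(width):
        shrunk = interior.copy()
        # A pixel stays interior only if all four neighbors are lit.
        shrunk[1:-1, 1:-1] &= (
            interior[:-2, 1:-1] & interior[2:, 1:-1]
            & interior[1:-1, :-2] & interior[1:-1, 2:]
        )
        # Array-edge pixels have missing neighbors; treat as boundary.
        shrunk[0, :] = False
        shrunk[-1, :] = False
        shrunk[:, 0] = False
        shrunk[:, -1] = False
        interior = shrunk
    return illum_mask & ~interior
```

The illumination effect would then be rendered only for pixels in the returned band, rather than for the whole illumination area.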
Abstract:
A modeling method based on particles, the method including generating coarse particles by down-sampling target particles corresponding to at least a portion of a target object, calculating a correcting value enabling the coarse particles to satisfy constraints of the target object based on physical attributes of the target particles, applying the correcting value to the target particles, and redefining the target particles in response to the target particles to which the correcting value is applied satisfying the constraints.
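The down-sample / correct / propagate loop could be sketched as follows. The grouping scheme (consecutive groups), the example constraint (particles confined to a sphere), and the propagation rule (each fine particle inherits its group's correction) are all assumptions for illustration:

```python
import numpy as np

def downsample(particles, group_size):
    """Coarse particles as centroids of consecutive groups of target
    particles (assumed grouping; the method's actual down-sampling
    may differ)."""
    return particles.reshape(-1, group_size, particles.shape[-1]).mean(axis=1)

def constraint_correction(coarse, max_radius):
    """Correcting value pulling each coarse particle inside a sphere of
    max_radius around the origin, standing in for the object's real
    constraints."""
    r = np.linalg.norm(coarse, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_radius / np.maximum(r, 1e-12))
    return coarse * scale - coarse

def apply_to_targets(particles, correction, group_size):
    """Each target particle inherits its group's coarse correction."""
    return particles + np.repeat(correction, group_size, axis=0)
```

Working on the coarse set keeps the constraint solve cheap; the correcting value is then applied back to the full-resolution target particles.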
Abstract:
Provided is a method of modeling a target object that may obtain depth information from an image in which the target object is represented in a form of particles, obtain distance information between adjacent particles in the image, and detect a silhouette of the target object based on the depth information and the distance information between the adjacent particles in the image.
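A toy 1-D sketch of combining the two cues: a particle is flagged as silhouette when either the depth jump or the screen-space gap to the next particle exceeds a threshold. The thresholds and the 1-D adjacency are illustrative assumptions:

```python
import numpy as np

def detect_silhouette(depth, positions, depth_jump=0.5, max_gap=1.5):
    """Flag particles adjacent to a large depth or distance discontinuity,
    treating such discontinuities as object-boundary evidence."""
    depth = np.asarray(depth, dtype=float)
    positions = np.asarray(positions, dtype=float)
    # Cue 1: depth discontinuity between consecutive particles.
    d_gap = np.abs(np.diff(depth)) > depth_jump
    # Cue 2: large distance between adjacent particles in the image.
    p_gap = np.linalg.norm(np.diff(positions, axis=0), axis=1) > max_gap
    edge = d_gap | p_gap
    # Both particles flanking an edge belong to the silhouette.
    sil = np.zeros(len(depth), dtype=bool)
    sil[:-1] |= edge
    sil[1:] |= edge
    return sil
```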
Abstract:
At least some example embodiments disclose a method and device for displaying a background image that may change an arrangement of an object based on image information associated with the background image and change a visual effect with respect to an adjacent region of the object.
Abstract:
Provided is a method and apparatus for computing an amount of deformation of an object. A parameter used to compute the amount of deformation may be derived in advance based on a shape model of the object, prior to real-time computation. Accordingly, the amount of deformation of the object may be predicted in real time based on the parameter.
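One common instance of this precompute-then-predict pattern, offered here purely as an illustration (the abstract does not name the parameter), is inverting a stiffness matrix from the shape model offline so that the real-time step reduces to one matrix-vector product:

```python
import numpy as np

def precompute_compliance(stiffness):
    """Offline step: invert the shape model's stiffness matrix once.
    (An assumed choice of 'parameter'; the actual method's parameter
    is not specified in the abstract.)"""
    return np.linalg.inv(stiffness)

def predict_deformation(compliance, force):
    """Real-time step: deformation u = K^{-1} f, a single product."""
    return compliance @ force
```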
Abstract:
A vessel segmentation method includes acquiring an image of a blood vessel, including cross sections, using a contrast medium. The method further includes setting a threshold value for each of the cross sections based on data of an intensity of the contrast medium. The method further includes performing vessel segmentation based on the image and the threshold value for each of the cross sections.
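A minimal sketch of per-cross-section thresholding, assuming each slice's threshold is a fixed fraction of that slice's peak contrast intensity (the rule for setting the threshold from the contrast-medium data is an assumption; slices farther from the injection site typically enhance less, motivating per-slice values):

```python
import numpy as np

def per_slice_thresholds(volume, fraction=0.5):
    """One threshold per cross section of a (slices, H, W) volume,
    as a fraction of that slice's maximum intensity."""
    return fraction * volume.max(axis=(1, 2))

def segment_vessel(volume, thresholds):
    """Binary vessel mask, comparing each slice against its own
    threshold rather than one global value."""
    return volume > thresholds[:, None, None]
```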
Abstract:
A processor-implemented method includes: generating first input data comprising phase information of an input image; generating second input data in which lens position information is encoded; and determining position information of a lens corresponding to autofocus by inputting the first input data and the second input data to a neural network model.
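The abstract leaves the lens-position encoding unspecified; a plausible sketch is a sinusoidal encoding of the normalized lens position, which gives the network a smooth multi-scale representation of a scalar input (the specific scheme below is an assumption, not the claimed one):

```python
import numpy as np

def encode_lens_position(position, num_freqs=4):
    """Sinusoidal encoding of a scalar lens position in [0, 1]:
    [sin(pi*p*1), ..., sin(pi*p*2^(k-1)), cos(...), ...].
    An assumed encoding for illustration."""
    freqs = 2.0 ** np.arange(num_freqs)
    angles = np.pi * position * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])
```

The encoded vector would form the second input, fed alongside the phase-information tensor into the autofocus network.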
Abstract:
A method and apparatus for image restoration based on burst images. The method includes generating a plurality of feature representations corresponding to individual images of a burst image set by encoding the individual images, determining a reference feature representation from among the plurality of feature representations, determining a first comparison pair including the reference feature representation and a first feature representation of the plurality of feature representations, generating a first motion-embedding feature representation of the first comparison pair based on a similarity score map of the reference feature representation and the first feature representation, generating a fusion result by fusing a plurality of motion-embedding feature representations including the first motion-embedding feature representation, and generating at least one restored image by decoding the fusion result.
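The similarity-guided fusion step might be sketched as follows, using per-pixel cosine similarity between feature maps as the score map, down-weighting pixels that disagree with the reference (likely motion), and fusing by averaging. All three choices are illustrative assumptions:

```python
import numpy as np

def similarity_map(ref, feat, eps=1e-8):
    """Per-pixel cosine similarity between two (C, H, W) feature maps."""
    num = (ref * feat).sum(axis=0)
    den = np.linalg.norm(ref, axis=0) * np.linalg.norm(feat, axis=0) + eps
    return num / den

def motion_embed(ref, feat):
    """Down-weight feature pixels that disagree with the reference,
    treating low similarity as evidence of inter-frame motion."""
    s = similarity_map(ref, feat)
    return feat * np.clip(s, 0.0, 1.0)

def fuse(features):
    """Fuse a stack of motion-embedded feature maps by averaging."""
    return np.mean(features, axis=0)
```

The fused representation would then be decoded into the restored image(s).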