Abstract:
A method and an apparatus for diagnosing cardiac diseases based on cardiac motion modeling are provided. The method may include applying physical characteristics of cardiac motion to a three-dimensional (3D) heart shape model, deriving a boundary condition by fusing the 3D heart shape model to which the physical characteristics are applied with a plurality of cardiac ultrasound images acquired over time as a dynamic image, and diagnosing cardiac diseases using a result of modeling the cardiac motion of a user based on the boundary condition.
Abstract:
A sampler of an image processing apparatus may sample at least one first virtual point light (VPL) from a direct light view. The sampler may sample a second VPL in a three-dimensional (3D) space independent of the direct light view. A calculator may calculate a luminance of the second VPL using a first VPL adjacent to the second VPL selected from among the at least one first VPL.
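The luminance-borrowing step can be sketched as a nearest-neighbor lookup: a second VPL sampled anywhere in the 3D scene takes the luminance of the closest first VPL sampled from the direct light view. This is a minimal illustration, not the patented method; all function names, positions, and luminance values are hypothetical.

```python
import numpy as np

def nearest_first_vpl_luminance(first_vpl_positions, first_vpl_luminances,
                                second_vpl_position):
    """Return the luminance of the first VPL nearest to the second VPL."""
    positions = np.asarray(first_vpl_positions, dtype=float)
    second = np.asarray(second_vpl_position, dtype=float)
    # Euclidean distance from the second VPL to every first VPL
    distances = np.linalg.norm(positions - second, axis=1)
    return first_vpl_luminances[int(np.argmin(distances))]

# Example: the second VPL at the origin is nearest the first VPL at (0.1, 0, 0).
luminance = nearest_first_vpl_luminance(
    [(0.1, 0.0, 0.0), (5.0, 5.0, 5.0)],  # first VPLs (direct light view)
    [0.8, 0.2],                           # their known luminances
    (0.0, 0.0, 0.0),                      # second VPL sampled in 3D space
)
```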
Abstract:
A method with image processing includes: setting an offset window for an offset pattern of a kernel offset and an offset parameter for an application intensity of the kernel offset; determining an output kernel by applying the kernel offset to an input kernel based on the offset window and the offset parameter; and adjusting contrast of a degraded image using the output kernel.
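A minimal sketch of the kernel-adjustment step: the offset window supplies the spatial pattern of the kernel offset, the offset parameter scales its application intensity, and the sum yields the output kernel. The window values and parameter below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def apply_kernel_offset(input_kernel, offset_window, offset_parameter):
    """Produce the output kernel by adding the offset pattern, scaled by
    the application-intensity parameter, to the input kernel."""
    return input_kernel + offset_parameter * offset_window

input_kernel = np.full((3, 3), 1.0 / 9.0)   # simple 3x3 box-blur kernel
offset_window = np.zeros((3, 3))
offset_window[1, 1] = 1.0                   # offset concentrated at the centre
output_kernel = apply_kernel_offset(input_kernel, offset_window, 0.5)
```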
Abstract:
A pose estimation method and apparatus are disclosed. The pose estimation method includes acquiring a raw image, before geometric correction, from an image sensor, determining a feature point in the raw image, and estimating a pose based on the feature point.
Abstract:
A neural network-based image processing method and apparatus are provided. The method includes receiving an input image having a first resolution, and estimating a preview image of the input image having a second resolution that is lower than the first resolution by using a neural network model.
Abstract:
A method, implemented by a processor, of correcting lighting of an image includes inputting an input image to a first neural network and generating predicted lighting data corresponding to lighting of the input image and embedding data corresponding to a feature of the input image, inputting the generated predicted lighting data, the generated embedding data, and sensor data to a second neural network and generating a lighting weight corresponding to the input image, and generating correction lighting data for the input image by applying the generated lighting weight to preset basis lighting data corresponding to the input image.
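The final step of the abstract, applying the generated lighting weight to preset basis lighting data, amounts to a weighted combination of basis lighting maps. The sketch below assumes the basis is a stack of per-pixel lighting maps (e.g. spherical-harmonic-style bases); the shapes and values are hypothetical.

```python
import numpy as np

def correction_lighting(lighting_weights, basis_lighting):
    """Combine preset basis lighting maps using the weights produced by
    the second neural network: sum_i w_i * basis_i."""
    weights = np.asarray(lighting_weights, dtype=float)
    basis = np.asarray(basis_lighting, dtype=float)
    # Contract the weight vector against the leading (basis) axis
    return np.tensordot(weights, basis, axes=1)

# Two 2x2 basis lighting maps with weights 0.25 and 0.5
basis = np.stack([np.ones((2, 2)), np.full((2, 2), 2.0)])
corrected = correction_lighting([0.25, 0.5], basis)
```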
Abstract:
An object detection method includes setting a first window region and a second window region larger than the first window region that correspond to partial regions of different sizes in an input image, downsampling the second window region to generate a resized second window region, detecting a first object candidate from the first window region and a second object candidate from the resized second window region, and detecting an object included in the input image based on one or both of the first object candidate and the second object candidate.
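The window-resizing step can be illustrated as follows: the larger second window is downsampled so both windows can be processed at a common size. Strided slicing here is a crude stand-in for real resizing (a production system would interpolate); all sizes are illustrative.

```python
import numpy as np

def resize_window(window, scale):
    """Downsample a window by an integer stride (illustrative resizing)."""
    return window[::scale, ::scale]

image = np.arange(64).reshape(8, 8)       # toy 8x8 input image
first_window = image[0:4, 0:4]            # smaller window, native resolution
second_window = image[0:8, 0:8]           # larger window covering more area
resized_second = resize_window(second_window, 2)  # now same size as the first
```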
Abstract:
A method with image augmentation includes: recognizing, based on a gaze of a user corresponding to an input image, any one or any combination of any two or more of an object of interest of the user, a situation of the object of interest, and a task of the user from partial regions of the input image; determining relevant information indicating an intention of the user based on any one or any combination of any two or more of the object of interest, the situation of the object of interest, and the task of the user; and generating a visually augmented image by visually augmenting the input image based on the relevant information.
Abstract:
A method and apparatus for controlling an augmented reality (AR) apparatus are provided. The method includes acquiring a video, detecting a human body from the acquired video, performing an action prediction with regard to the detected human body, and controlling the AR apparatus based on a result of the action prediction and a mapping relationship between human body actions and AR functions.
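The control step relies on a mapping between predicted human-body actions and AR functions; a minimal sketch of that lookup is shown below. The action names and AR function names are hypothetical, not from the patent.

```python
# Hypothetical action-to-function mapping assumed for illustration only.
ACTION_TO_AR_FUNCTION = {
    "wave": "open_menu",
    "point": "select_object",
    "walk": "show_navigation_overlay",
}

def control_ar(predicted_action, default="idle"):
    """Return the AR function mapped to the predicted human-body action,
    falling back to a default when no mapping exists."""
    return ACTION_TO_AR_FUNCTION.get(predicted_action, default)

command = control_ar("wave")
```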
Abstract:
A method and apparatus for predicting an intention are provided. The method acquires a gaze sequence of a user, acquires an input image corresponding to the gaze sequence, generates a coded image by visually encoding temporal information included in the gaze sequence into the input image, and predicts an intention of the user corresponding to the gaze sequence based on the input image and the coded image.
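One simple way to visually encode temporal gaze information, sketched below under assumed conventions, is to write each fixation's normalized time stamp into the coded image at the fixation's pixel, so later fixations appear brighter. The coordinate format and intensity scheme are illustrative assumptions.

```python
import numpy as np

def encode_gaze_sequence(gaze_points, image_shape):
    """Build a coded image in which each gaze fixation's pixel stores its
    normalized position in the temporal sequence (later = brighter)."""
    coded = np.zeros(image_shape, dtype=float)
    n = len(gaze_points)
    for t, (y, x) in enumerate(gaze_points):
        coded[y, x] = (t + 1) / n  # temporal order encoded as intensity
    return coded

# Three fixations along the diagonal of a 4x4 image
coded = encode_gaze_sequence([(0, 0), (1, 1), (2, 2)], (4, 4))
```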