Abstract:
Provided are a calibration apparatus and method for a three-dimensional (3D) position and direction estimation system. The calibration apparatus may receive inertial information and intensity information during a predetermined period of time, calculate distances between a transmitter and the respective receivers, and calibrate a signal attenuation characteristic of each receiver using the distances between the transmitter and the respective receivers.
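The attenuation-calibration step can be illustrated as a least-squares fit in log space, assuming a hypothetical power-law path-loss model I = c / d^n (the model form and function names are illustrative, not the patented method):

```python
import numpy as np

def calibrate_attenuation(distances, intensities):
    """Fit an assumed path-loss model I = c / d**n per receiver.

    With log I = log c - n * log d, the constant c and attenuation
    exponent n follow from a linear least-squares fit in log space.
    """
    d = np.asarray(distances, dtype=float)
    i = np.asarray(intensities, dtype=float)
    A = np.stack([np.ones_like(d), -np.log(d)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, np.log(i), rcond=None)
    log_c, n = coeffs
    return float(np.exp(log_c)), float(n)

# Synthetic samples generated with c = 4.0, n = 2.0
d = np.array([0.5, 1.0, 2.0, 4.0])
c, n = calibrate_attenuation(d, 4.0 / d**2)
```

Once c and n are known for a receiver, its measured intensities can be mapped back to distances in normal operation.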
Abstract:
A method of determining a focal length and an electronic device that performs the method are provided. The electronic device includes a processor, a camera comprising a lens, and a memory configured to store one or more instructions executable by the processor. The processor is configured to, in response to the one or more instructions being executed, receive a plurality of original images captured through the lens, generate a composite image based on the plurality of original images, and determine a focal length of the lens by inputting one or more of the original images and the composite image to an autofocus (AF) model.
Abstract:
A processor-implemented method includes: obtaining a plurality of image frames acquired for a scene within a predetermined time; determining loss values respectively corresponding to the plurality of image frames; determining a reference frame among the plurality of image frames based on the loss values; and generating a final image of the scene based on the reference frame.
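The reference-frame step can be sketched as follows, under the assumption (not stated in the abstract) that the frame with the lowest loss value is chosen as the reference:

```python
import numpy as np

def select_reference_frame(frames, loss_values):
    """Pick the frame whose loss value is smallest as the reference
    (an assumed selection criterion for illustration)."""
    idx = int(np.argmin(loss_values))
    return idx, frames[idx]

# Three toy frames of the same scene with per-frame loss values
frames = [np.full((2, 2), v) for v in (3.0, 1.0, 2.0)]
losses = [0.9, 0.2, 0.5]
idx, reference = select_reference_frame(frames, losses)
```

The final image would then be synthesized around `reference`, e.g. by aligning and merging the remaining frames onto it.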
Abstract:
An image restoration method and apparatus are provided. The image restoration method includes determining auxiliary data corresponding to a plurality of filter kernels by filtering target data with the plurality of filter kernels, determining new input data by combining the auxiliary data with at least some input data of layers of a neural network-based image restoration model, and generating, based on the new input data, a restored image of an input image by executing the neural network-based image restoration model, wherein the filter kernels are not part of the neural network-based image restoration model.
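A minimal sketch of the auxiliary-data idea, assuming the "combining" is channel concatenation and using a plain NumPy convolution stand-in (all names and the combination rule are illustrative):

```python
import numpy as np

def filter2d(img, kernel):
    """'Same'-size cross-correlation with zero padding, in plain NumPy."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

def augment_layer_input(layer_input, target, kernels):
    """Concatenate one auxiliary channel per fixed kernel onto a layer's
    input; the kernels stay outside the trained model."""
    aux = np.stack([filter2d(target, k) for k in kernels])
    return np.concatenate([layer_input, aux], axis=0)

identity = np.zeros((3, 3)); identity[1, 1] = 1.0
box = np.full((3, 3), 1.0 / 9.0)
target = np.arange(16.0).reshape(4, 4)
layer_input = np.zeros((2, 4, 4))          # 2 existing feature channels
augmented = augment_layer_input(layer_input, target, [identity, box])
```

The restoration model would then consume `augmented` in place of the original layer input.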
Abstract:
A processor-implemented method with degraded image restoration includes: receiving a degraded training image; training a first teacher network of an image restoration network and a second teacher network of the image restoration network to infer differential images corresponding to the degraded training image, wherein each of the first teacher network and the second teacher network comprises a differentiable activation layer and a performance of the first teacher network is greater than that of the second teacher network; initially setting a student network of the image restoration network based on the second teacher network; and training the student network to infer a differential image corresponding to the degraded training image by iteratively backpropagating, to the student network, a contrastive loss that decreases a first difference between a third output of the student network and a first output of the first teacher network and increases a second difference between the third output and a second output of the second teacher network.
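One way to realize a loss with the stated pull/push behavior is a ratio of distances to the two teachers; this is an illustrative form, not necessarily the claimed one:

```python
import numpy as np

def contrastive_distillation_loss(student, strong_teacher, weak_teacher, eps=1e-8):
    """Illustrative contrastive loss: the ratio shrinks as the student's
    output approaches the stronger teacher's (numerator) and moves away
    from the weaker teacher's (denominator)."""
    pos = np.mean((student - strong_teacher) ** 2)
    neg = np.mean((student - weak_teacher) ** 2)
    return pos / (neg + eps)

strong = np.array([1.0, 1.0, 1.0])
weak = np.array([3.0, 3.0, 3.0])
near_strong = contrastive_distillation_loss(np.array([1.0, 1.1, 0.9]), strong, weak)
near_weak = contrastive_distillation_loss(np.array([2.9, 3.0, 3.1]), strong, weak)
```

Backpropagating this loss simultaneously decreases the first difference and increases the second, matching the abstract's training objective.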
Abstract:
A method and apparatus with image correction are provided. A processor-implemented method includes generating, using a neural network model provided an input image, an illumination map including illumination values dependent on respective color casts by one or more illuminants individually affecting each pixel of the input image, and generating a white-adjusted image by removing at least a portion of the color casts from the input image using the generated illumination map.
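Given a per-pixel illumination map, the cast-removal step can be sketched as a von Kries-style per-channel division (the map itself would come from the trained network; this function is illustrative):

```python
import numpy as np

def remove_color_cast(image, illumination_map, eps=1e-6):
    """Divide each pixel by its estimated illuminant color to undo the
    cast; eps guards against division by zero."""
    return np.clip(image / (illumination_map + eps), 0.0, 1.0)

# A white surface observed under a reddish illuminant at every pixel
illum = np.full((2, 2, 3), [1.0, 0.5, 0.5])
observed = 1.0 * illum            # true surface color is white
white_adjusted = remove_color_cast(observed, illum)
```

Because the map is per-pixel, regions lit by different illuminants are corrected independently, which a single global white-balance gain cannot do.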
Abstract:
A method with image restoration includes: receiving an input image and a first task vector indicating a first image effect among candidate image effects; extracting a common feature shared by the candidate image effects from the input image, based on a task-agnostic architecture of a source neural network; and restoring the common feature to a first restoration image corresponding to the first image effect, based on a task-specific architecture of the source neural network and the first task vector.
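The two-stage structure can be sketched with toy linear maps: one shared (task-agnostic) feature extractor and per-effect heads mixed by the task vector. All weights below are random placeholders for the trained source network:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, feat_dim, pix = 3, 8, 16                           # toy sizes
shared_w = rng.standard_normal((feat_dim, pix))             # task-agnostic stage
task_heads = rng.standard_normal((n_tasks, pix, feat_dim))  # task-specific heads

def restore(image, task_vector):
    """Extract the common feature once, then decode it with a head
    blended according to the task vector."""
    feature = np.tanh(shared_w @ image.ravel())           # shared by all effects
    head = np.tensordot(task_vector, task_heads, axes=1)  # mix heads by task
    return (head @ feature).reshape(image.shape)

image = rng.standard_normal((4, 4))
restored = restore(image, np.array([1.0, 0.0, 0.0]))      # first image effect
```

A one-hot task vector selects a single effect's head; intermediate vectors would blend effects, while the costly common feature is computed only once.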
Abstract:
A method with image augmentation includes: recognizing, based on a gaze of a user corresponding to an input image, any one or any combination of any two or more of an object of interest of the user, a situation of the object of interest, and a task of the user from partial regions of the input image; determining relevant information indicating an intention of the user, based on any one or any combination of any two or more of the object of interest, the situation of the object of interest, and the task of the user; and generating a visually augmented image by visually augmenting the input image based on the relevant information.
Abstract:
An image processing apparatus includes a first processor configured to obtain, from a color image, an illumination element image and an albedo element image corresponding to the color image, and a second processor configured to divide the illumination element image into a plurality of subelement images each corresponding to the color image.
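Intrinsic decomposition of this kind typically assumes color ≈ albedo × illumination per pixel; the second processor's split of the illumination image can then be sketched with hypothetical per-source weight maps (names and weighting scheme are illustrative):

```python
import numpy as np

def split_illumination(illumination, weights):
    """Split an illumination element image into sub-element images,
    e.g. one per light source, using weight maps that sum to one."""
    return [illumination * w for w in weights]

illumination = np.array([[0.9, 0.6], [0.3, 1.2]])
weights = [np.full((2, 2), 0.7), np.full((2, 2), 0.3)]  # two light sources
sub_images = split_illumination(illumination, weights)
```

Because the weights sum to one, the sub-element images recompose exactly to the original illumination element image.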
Abstract:
A method of refining a depth image includes extracting shading information of color pixels from a color image, and refining a depth image corresponding to the color image based on surface normal information of an object included in the shading information.
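One simple way normals can guide depth refinement is a normal-weighted smoothing pass: neighbors whose shading-derived normals agree with the center pixel contribute more. This is an illustrative stand-in for the refinement described, not the claimed algorithm:

```python
import numpy as np

def refine_depth(depth, normals, sigma=0.2):
    """One smoothing pass over a depth map, weighting each 4-neighbor by
    how well its unit normal agrees (dot product) with the center's."""
    h, w = depth.shape
    refined = np.empty_like(depth)
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy, dx in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    agree = float(normals[y, x] @ normals[ny, nx])
                    wgt = np.exp((agree - 1.0) / sigma)  # 1.0 when normals match
                    num += wgt * depth[ny, nx]
                    den += wgt
            refined[y, x] = num / den
    return refined

noisy = np.array([[1.0, 1.2], [0.8, 1.0]])
flat_normals = np.zeros((2, 2, 3)); flat_normals[..., 2] = 1.0  # flat surface
smoothed = refine_depth(noisy, flat_normals)
```

On a flat surface the normals all agree, so the pass averages out sensor noise; across a crease the disagreeing normals suppress smoothing and preserve the edge.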