Abstract:
An image capturing apparatus and an image capturing method are provided. The image capturing apparatus includes an image capturing unit configured to capture an image; and a controller connected to the image capturing unit, wherein the controller is configured to obtain a background image with depth information, position a three-dimensional (3D) virtual image representing a target object in the background image based on the depth information, and control the image capturing unit to capture the target object based on a difference between the target object viewed from the image capturing apparatus and the 3D virtual image in the background image.
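A minimal sketch of one way the capture decision could work, assuming the "difference" is measured between the image region where the 3D virtual guide was placed (using the background depth information) and the region where the target object currently appears; the bounding-box representation, the metric, and the threshold below are illustrative assumptions, not details from the abstract.

```python
import numpy as np

def placement_difference(virtual_box, target_box):
    """Difference between the 3D virtual guide region and the target object
    region, both given as (x, y, w, h) in pixels.

    Hypothetical metric: center offset plus relative scale mismatch."""
    vx, vy, vw, vh = virtual_box
    tx, ty, tw, th = target_box
    center_offset = np.hypot((vx + vw / 2) - (tx + tw / 2),
                             (vy + vh / 2) - (ty + th / 2))
    scale_mismatch = abs(vw * vh - tw * th) / max(vw * vh, 1)
    return center_offset + scale_mismatch

def should_capture(virtual_box, target_box, threshold=10.0):
    """Trigger capture when the target lines up with the virtual guide."""
    return placement_difference(virtual_box, target_box) < threshold

if __name__ == "__main__":
    guide = (100, 80, 60, 120)   # guide placed using the background depth map
    seen = (104, 83, 58, 118)    # target as currently framed by the camera
    print(should_capture(guide, seen))  # True: alignment is close enough
```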
Abstract:
A method and apparatus for detecting a point of interest (POI) in a three-dimensional (3D) point cloud are disclosed. The apparatus includes a 3D point cloud data acquirer to acquire 3D point cloud data, a shape descriptor to generate a shape description vector describing the shape of the surface on which a pixel point of the 3D point cloud and a neighboring point of the pixel point are located, and a POI extractor to extract a POI based on the shape description vector.
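A rough sketch of the pipeline, under the assumption that the shape description vector is built from the eigenvalues of each point's local neighborhood covariance and that POIs are points whose neighborhoods deviate strongly from a plane; the abstract does not specify either choice, so treat both as illustrative.

```python
import numpy as np

def shape_description_vectors(points, k=16):
    """For each point, describe the local surface with the normalized
    eigenvalues of the covariance of its k nearest neighbors (one possible
    shape descriptor; brute-force neighbor search keeps the sketch short)."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.argsort(dists, axis=1)[:, 1:k + 1]          # skip the point itself
    descriptors = np.empty((len(points), 3))
    for i, idx in enumerate(knn):
        nbrs = points[idx] - points[idx].mean(axis=0)
        eigvals = np.linalg.eigvalsh(nbrs.T @ nbrs / k)  # ascending order
        descriptors[i] = eigvals / max(eigvals.sum(), 1e-12)
    return descriptors

def extract_pois(points, k=16, saliency_threshold=0.05):
    """Keep points whose smallest normalized eigenvalue is large, i.e. whose
    neighborhood is far from planar (edges/corners) -- a hypothetical rule."""
    desc = shape_description_vectors(points, k)
    return np.flatnonzero(desc[:, 0] > saliency_threshold)
```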
Abstract:
Provided are an image segmentation apparatus and method. The image segmentation method includes extracting, from an image, a feature map of the image; generating a second slot matrix by associating the feature map of the image with a first slot matrix corresponding to the image; and obtaining segmentation results of the image based on the second slot matrix.
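A minimal sketch of one plausible reading, assuming "associating" the feature map with the slot matrix is a single cross-attention-style update and the segmentation is obtained by assigning each pixel to its best-matching slot; the shapes and update rule below are assumptions, not the abstract's definition.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def update_slots(feature_map, slots):
    """Associate the feature map (H*W, D) with the first slot matrix (S, D)
    to produce a second slot matrix via one cross-attention step."""
    attn = softmax(feature_map @ slots.T, axis=1)        # (HW, S) pixel-to-slot
    weights = attn / (attn.sum(axis=0, keepdims=True) + 1e-8)
    return weights.T @ feature_map                       # (S, D) updated slots

def segment(feature_map, slots, height, width):
    """Assign each pixel to the slot it matches most strongly."""
    new_slots = update_slots(feature_map, slots)
    labels = np.argmax(feature_map @ new_slots.T, axis=1)
    return labels.reshape(height, width)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(32 * 32, 64))   # feature map flattened to (HW, D)
    init_slots = rng.normal(size=(5, 64))    # first slot matrix: 5 slots
    print(segment(feats, init_slots, 32, 32).shape)  # (32, 32)
```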
Abstract:
A processor-implemented method with video processing includes: determining a first image feature of a first image of video data and a second image feature of a second image that is previous to the first image; determining a time-domain information fusion processing result by performing time-domain information fusion processing on the first image feature and the second image feature; and determining a panoptic segmentation result of the first image based on the time-domain information fusion processing result.
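A toy sketch of the flow, assuming the time-domain information fusion is a simple convex blend of the current and previous frames' features and the panoptic result is reduced to a per-pixel label map; both simplifications, plus the parameter names, are assumptions rather than details from the abstract.

```python
import numpy as np

def fuse_time_domain(curr_feat, prev_feat, alpha=0.7):
    """Blend the first (current) image's feature with the second (previous)
    image's feature; a convex combination stands in for the fusion module."""
    return alpha * curr_feat + (1.0 - alpha) * prev_feat

def panoptic_from_features(fused_feat, class_weights):
    """Toy panoptic head: per-pixel class scores followed by argmax."""
    logits = fused_feat @ class_weights          # (H, W, C)
    return logits.argmax(axis=-1)                # per-pixel label map

if __name__ == "__main__":
    H, W, D, C = 4, 4, 8, 3
    rng = np.random.default_rng(0)
    prev = rng.normal(size=(H, W, D))
    curr = rng.normal(size=(H, W, D))
    weights = rng.normal(size=(D, C))
    fused = fuse_time_domain(curr, prev)
    print(panoptic_from_features(fused, weights).shape)  # (4, 4)
```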
Abstract:
A method and apparatus of generating a three-dimensional (3D) image are provided. The method of generating a 3D image involves acquiring a plurality of images of a 3D object with a camera, calculating pose information of the plurality of images based on pose data for each of the plurality of images measured by an inertial measurement unit, and generating a 3D image corresponding to the 3D object based on the pose information.
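A sketch of the geometric core, assuming the IMU-derived pose information yields a rotation and translation per image and that the 3D content is recovered by triangulating matched pixels across two posed views; the intrinsics matrix and the linear (DLT) triangulation are standard tools used here for illustration, not steps quoted from the abstract.

```python
import numpy as np

def projection_matrix(K, R, t):
    """Camera projection P = K [R | t] for a view whose pose (R, t) comes
    from the IMU-derived pose information (a simplifying assumption)."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point observed at pixel x1 in view 1
    and pixel x2 in view 2; returns its 3D position."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```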
Abstract:
A method of shuffling data may include shuffling a first batch of data using a first memory on a first level of a memory hierarchy to generate a first batch of shuffled data, shuffling a second batch of data using the first memory to generate a second batch of shuffled data, and storing the first batch of shuffled data and the second batch of shuffled data in a second memory on a second level of the memory hierarchy. The method may further include merging the first batch of shuffled data and the second batch of shuffled data. A data shuffling device may include a buffer memory configured to stream one or more records to a partitioning circuit and transfer, by random access, one or more records to a grouping circuit.
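A compact sketch of the two-level scheme in software terms: each batch is shuffled as if it fit in the small, fast first-level memory, the shuffled batches are held at the second level, and the merge step randomly interleaves them. The batch size, seed, and merge policy are illustrative choices, not values from the abstract.

```python
import random

def shuffle_batch(batch, rng):
    """Shuffle one batch entirely inside the (small, fast) first-level memory."""
    shuffled = list(batch)
    rng.shuffle(shuffled)
    return shuffled

def hierarchical_shuffle(records, batch_size=1024, seed=0):
    """Two-level shuffle: per-batch shuffles, storage of the shuffled batches
    at the second level, then a random merge of those batches."""
    rng = random.Random(seed)
    second_level = [shuffle_batch(records[i:i + batch_size], rng)
                    for i in range(0, len(records), batch_size)]
    merged = []
    while second_level:
        batch = rng.choice(second_level)   # pick a shuffled batch at random
        merged.append(batch.pop())         # draw one record from it
        if not batch:
            second_level.remove(batch)
    return merged
```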
Abstract:
An image processing method and apparatus using a neural network are provided. The image processing method includes generating a plurality of augmented features by augmenting an input feature, and generating a prediction result based on the plurality of augmented features.
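A minimal sketch assuming the augmentation is a set of small random perturbations of the input feature and the prediction averages classifier scores over the augmented copies (test-time feature augmentation); the noise model and averaging rule are assumptions, since the abstract does not specify them.

```python
import numpy as np

def augment_feature(feature, num_augments=8, noise_scale=0.05, seed=0):
    """Generate several augmented copies of one input feature by adding small
    Gaussian perturbations (one simple choice of feature augmentation)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=noise_scale, size=(num_augments, *feature.shape))
    return feature[None, :] + noise

def predict(feature, classifier_weights):
    """Average the classifier scores over the augmented features to produce
    a single prediction."""
    augmented = augment_feature(feature)
    scores = augmented @ classifier_weights    # (num_augments, num_classes)
    return scores.mean(axis=0).argmax()
```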
Abstract:
A user trigger intent determining method and apparatus are disclosed. The user trigger intent determining apparatus may obtain a first face image, obtain a second face image after a visual stimulus object is displayed, and determine a final gaze location by correcting a first gaze location estimated from the first face image based on a second gaze location estimated from the second face image.
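A small sketch of one way the correction could be applied, assuming both gaze locations are 2D screen coordinates and the correction subtracts the bias observed when the user looked at the displayed stimulus; the offset-subtraction rule and the coordinates are assumptions, as the abstract only states that the first location is corrected using the second.

```python
import numpy as np

def correct_gaze(first_gaze, second_gaze, stimulus_location):
    """Correct the initial gaze estimate with the bias seen after the visual
    stimulus: the offset between where the estimator thought the user looked
    (second_gaze) and where the stimulus actually appeared."""
    bias = np.asarray(second_gaze) - np.asarray(stimulus_location)
    return np.asarray(first_gaze) - bias

if __name__ == "__main__":
    print(correct_gaze(first_gaze=(640, 300),
                       second_gaze=(980, 560),
                       stimulus_location=(960, 540)))  # -> [620. 280.]
```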
Abstract:
A processor-implemented method of processing a facial expression image includes: acquiring an expression feature of each of at least two reference facial expression images; generating a new expression feature based on an interpolation value of the expression features; and adjusting a target facial expression image based on the new expression feature to create a new facial expression image.
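A minimal sketch assuming the expression features are plain vectors, the new feature is a linear interpolation between two reference features, and the target is adjusted by blending toward that new feature; the interpolation weight and blending strength are illustrative, and the decoder that renders the final image is omitted.

```python
import numpy as np

def interpolate_expressions(feat_a, feat_b, t=0.5):
    """New expression feature as a linear interpolation between two reference
    expression features (the 'interpolation value' in the abstract)."""
    return (1.0 - t) * np.asarray(feat_a) + t * np.asarray(feat_b)

def adjust_target(target_feat, new_feat, strength=1.0):
    """Move the target image's expression feature toward the interpolated one;
    a decoder (not shown) would then render the new facial expression image."""
    target_feat = np.asarray(target_feat)
    return target_feat + strength * (np.asarray(new_feat) - target_feat)
```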
Abstract:
A face tracking apparatus includes: a face region detector; a segmentation unit; an occlusion probability calculator; and a tracking unit. The face region detector is configured to detect a face region based on an input image. The segmentation unit is configured to segment the face region into a plurality of sub-regions. The occlusion probability calculator is configured to calculate occlusion probabilities for the plurality of sub-regions. The tracking unit is configured to track a face included in the input image based on the occlusion probabilities.
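A rough sketch of how the pieces could fit together, assuming occlusion probabilities come from appearance mismatch against a stored face template and that tracking down-weights occluded sub-regions when scoring candidate face locations; both the probability estimate and the weighted score are hypothetical stand-ins for the calculator and tracking unit described above.

```python
import numpy as np

def occlusion_probabilities(sub_regions, template_regions):
    """Per-sub-region occlusion probability from appearance mismatch with a
    stored face template (a hypothetical stand-in for the calculator)."""
    errors = np.array([np.abs(s - t).mean()
                       for s, t in zip(sub_regions, template_regions)])
    return errors / (errors.max() + 1e-8)

def weighted_tracking_score(sub_regions, candidate_regions, occ_probs):
    """Score a candidate face location, trusting visible sub-regions more
    (higher score means a better match)."""
    weights = 1.0 - occ_probs
    errors = np.array([np.abs(s - c).mean()
                       for s, c in zip(sub_regions, candidate_regions)])
    return -(weights * errors).sum() / (weights.sum() + 1e-8)
```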