Abstract:
A processor-implemented method includes generating a preprocessed infrared (IR) image by performing first preprocessing based on an IR image including an object; generating a preprocessed depth image by performing second preprocessing based on a depth image including the object; and determining whether the object is a genuine object based on the preprocessed IR image and the preprocessed depth image.
Abstract:
A processor-implemented method with object tracking includes: determining an initial template image based on an input bounding box and an input image; generating an initial feature map by extracting features from the initial template image; generating a transformed feature map by performing feature transformation adapted to objectness on the initial feature map; generating an objectness probability map and a bounding box map indicating bounding box information corresponding to each coordinate of the objectness probability map by performing objectness-based bounding box regression analysis on the transformed feature map; and determining a refined bounding box based on the objectness probability map and the bounding box map.
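The abstract does not give implementation details, but the final step (picking a refined bounding box from the objectness probability map and the per-coordinate bounding box map) can be sketched as follows. The function name, map shapes, and box parameterization are assumptions for illustration, not the patented method itself:

```python
import numpy as np

def refine_bounding_box(objectness_map, bbox_map):
    """Hypothetical sketch: pick the refined box at the most object-like coordinate.

    objectness_map: (H, W) objectness probability per coordinate.
    bbox_map: (H, W, 4) bounding box parameters (x, y, w, h) per coordinate.
    """
    # Coordinate with the highest objectness probability.
    idx = np.unravel_index(np.argmax(objectness_map), objectness_map.shape)
    # The refined bounding box is the regression output at that coordinate.
    return bbox_map[idx]

# Toy example: a 4x4 objectness map whose peak lies at coordinate (1, 2).
obj = np.zeros((4, 4))
obj[1, 2] = 0.9
boxes = np.zeros((4, 4, 4))
boxes[1, 2] = [10.0, 20.0, 32.0, 32.0]
print(refine_bounding_box(obj, boxes))  # [10. 20. 32. 32.]
```

A real tracker would typically combine several high-objectness coordinates (e.g. by weighted averaging) rather than taking a single argmax; the argmax is only the simplest reading of "determining a refined bounding box based on the objectness probability map and the bounding box map."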
Abstract:
An interactive method includes displaying image content received through a television (TV) network, identifying an object of interest of a user among a plurality of regions or a plurality of objects included in the image content, and providing additional information corresponding to the object of interest.
Abstract:
A method of controlling a viewpoint of a user or a virtual object on a two-dimensional (2D) interactive display is provided. The method may convert a user input into structured data with at least six degrees of freedom (DOF) according to the number of touch points and their movement and rotation directions. Either the virtual object or the viewpoint of the user may be determined as a manipulation target based on a location of the touch point.
Abstract:
A method of generating three-dimensional (3D) volumetric data may be performed by generating a multilayer image, generating volume information and a type of a visible part of an object based on the generated multilayer image, and generating volume information and a type of an invisible part of the object based on the generated multilayer image. The volume information and the type of each of the visible part and the invisible part may be generated based on the generated multilayer image, which may include at least one of a ray-casting-based multilayer image, a chroma key screen-based multilayer image, and a primitive template-based multilayer image.
Abstract:
A method and apparatus for detecting a liveness based on a phase difference are provided. The method includes generating a first phase image based on first visual information of a first phase, generating a second phase image based on second visual information of a second phase, generating a minimum map based on a disparity between the first phase image and the second phase image, and detecting a liveness based on the minimum map.
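The abstract leaves the construction of the minimum map unspecified. One plausible reading, sketched below under assumed details (the cost function, the candidate-disparity search, and all names are illustrative, not the patented method), is that each pixel of the minimum map keeps the smallest matching cost between the two phase images over a range of candidate disparities:

```python
import numpy as np

def minimum_map(phase1, phase2, max_disparity=4):
    """Hypothetical sketch of a phase-disparity minimum map.

    For each pixel, take the minimum absolute difference between the first
    phase image and horizontally shifted copies of the second phase image,
    over candidate disparities 0..max_disparity.
    """
    costs = []
    for d in range(max_disparity + 1):
        shifted = np.roll(phase2, d, axis=1)  # candidate disparity d
        costs.append(np.abs(phase1 - shifted))
    # Per-pixel minimum over all candidate disparities.
    return np.min(np.stack(costs), axis=0)
```

Under this reading, a flat spoof (e.g. a printed photo) produces a near-uniform minimum map because one disparity fits the whole scene, whereas a genuine 3D face yields spatial variation in the map, which a classifier can then use for the liveness decision.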
Abstract:
An image sensor includes: a motion detection circuit configured to detect a motion in image frames; and a micro control unit (MCU) configured to adjust at least a portion of a target frame among the image frames based on whether the motion is detected, and detect whether a target object is present based on the adjusted portion of the target frame.
Abstract:
An object classification method and apparatus are disclosed. The object classification method includes receiving an input image, storing first feature data extracted by a first feature extraction layer of a neural network configured to extract features of the input image, receiving second feature data from a second feature extraction layer which is an upper layer of the first feature extraction layer, generating merged feature data by merging the first feature data and the second feature data, and classifying an object in the input image based on the merged feature data.
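The merging step in this abstract (combining stored lower-layer feature data with upper-layer feature data) can be illustrated with a minimal sketch. The upsampling scheme, channel-concatenation merge, array shapes, and function name are assumptions for illustration; the abstract does not specify how the two feature maps are merged:

```python
import numpy as np

def merge_features(first, second):
    """Hypothetical sketch: merge lower- and upper-layer feature data.

    first:  (C1, H, W) feature data from the first (lower) extraction layer.
    second: (C2, h, w) feature data from the second (upper) layer, spatially
            smaller; assumed here to divide (H, W) evenly.
    """
    c2, h, w = second.shape
    _, H, W = first.shape
    # Nearest-neighbour upsample the upper-layer features to the lower grid.
    rows = np.repeat(np.arange(h), H // h)
    cols = np.repeat(np.arange(w), W // w)
    up = second[:, rows[:, None], cols[None, :]]  # (C2, H, W)
    # Merge by channel concatenation; a classifier head would consume this.
    return np.concatenate([first, up], axis=0)  # (C1 + C2, H, W)
```

Concatenation is only one possible merge; element-wise addition after a channel projection (as in feature-pyramid-style networks) would be an equally valid reading of "generating merged feature data."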