Abstract:
An estimator training method and a pose estimating method using a depth image are disclosed, in which the estimator training method may train an estimator configured to estimate a pose of an object, based on an association between synthetic data and real data, and the pose estimating method may estimate the pose of the object using the trained estimator.
Abstract:
An apparatus for detecting an interfacing region in a depth image detects the interfacing region based on a depth of a first region and a depth of a second region, which is an external region of the first region, in the depth image.
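A minimal sketch of one plausible reading of this abstract, assuming the two regions are given as boolean masks and the interfacing region is flagged when the mean depths of the first region and its external region are close; the mask layout, the mean-depth statistic, and the threshold are illustrative assumptions, not details from the abstract:

```python
import numpy as np

def detect_interfacing(depth, first_mask, second_mask, threshold=40.0):
    """Flag the first region as interfacing with its surroundings when its
    mean depth is within `threshold` (same units as `depth`) of the mean
    depth of the second (external) region. The masks, the mean-depth
    statistic, and the threshold are assumptions for illustration."""
    d_first = float(depth[first_mask].mean())
    d_second = float(depth[second_mask].mean())
    return abs(d_first - d_second) < threshold

# Toy example: a hand region (depth ~800) next to a surface region (~820).
depth = np.full((6, 6), 2000.0)
depth[1:3, 1:3] = 800.0   # first region
depth[3:5, 1:3] = 820.0   # external (second) region
first = np.zeros((6, 6), dtype=bool); first[1:3, 1:3] = True
second = np.zeros((6, 6), dtype=bool); second[3:5, 1:3] = True
print(detect_interfacing(depth, first, second))  # True: depths differ by 20
```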
Abstract:
An apparatus recognizes an object using a hole in a depth image. An apparatus may include a foreground extractor to extract a foreground from the depth image, a hole determiner to determine whether a hole is present in the depth image, based on the foreground and a color image, a feature vector generator to generate a feature vector, by generating a plurality of features corresponding to the object based on the foreground and the hole, and an object recognizer to recognize the object, based on the generated feature vector and at least one reference feature vector.
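The pipeline this abstract describes (foreground extraction, hole determination, feature-vector generation, recognition against reference vectors) can be sketched as below. The working depth range, the definition of a hole as zero-depth pixels inside the foreground's bounding box, the three features, and the nearest-neighbour matcher are all assumptions for illustration, not the patented method:

```python
import numpy as np

def extract_foreground(depth, near=500.0, far=1500.0):
    # Foreground: pixels with a valid depth inside an assumed working range.
    return (depth >= near) & (depth <= far)

def find_hole(depth, foreground):
    # Hole: zero-depth (sensor-failure) pixels inside the foreground's
    # bounding box -- one plausible definition, not stated in the abstract.
    rows, cols = np.nonzero(foreground)
    if rows.size == 0:
        return np.zeros_like(foreground)
    box = np.zeros_like(foreground)
    box[rows.min():rows.max() + 1, cols.min():cols.max() + 1] = True
    return box & (depth == 0)

def feature_vector(foreground, hole):
    # Three illustrative features: foreground area, hole area, hole ratio.
    fg, h = int(foreground.sum()), int(hole.sum())
    return np.array([fg, h, h / max(fg + h, 1)], dtype=float)

def recognize(features, references):
    # references: {label: feature vector}; nearest neighbour by L2 distance.
    return min(references, key=lambda k: np.linalg.norm(features - references[k]))

# Toy object: a 3x3 patch with one hole pixel in its centre.
depth = np.zeros((5, 5))
depth[1:4, 1:4] = 1000.0
depth[2, 2] = 0.0  # a hole inside the object
fg = extract_foreground(depth)
hole = find_hole(depth, fg)
refs = {"ring": np.array([8.0, 1.0, 1 / 9]), "disc": np.array([9.0, 0.0, 0.0])}
print(recognize(feature_vector(fg, hole), refs))  # ring
```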
Abstract:
A display information controlling apparatus and method are provided. The display information controlling apparatus may select at least one object from one or more objects based on a location of each of the one or more objects on a display and a location on the display corresponding to a user input signal. The display information controlling apparatus may perform a predetermined operation corresponding to the selected at least one object.
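A small sketch of the selection step, assuming objects are keyed by name with screen coordinates and the user input signal yields a touch point; the dictionary layout, the Euclidean-distance rule, and the selection radius are illustrative assumptions:

```python
import math

def select_object(objects, touch, radius=30.0):
    """objects: {name: (x, y)} locations on the display; touch: (x, y)
    derived from the user input signal. Selects the nearest object within
    `radius` pixels, or None. Names and radius are assumptions."""
    nearest = min(objects, key=lambda n: math.dist(objects[n], touch))
    return nearest if math.dist(objects[nearest], touch) <= radius else None

icons = {"mail": (100, 40), "clock": (220, 40)}
print(select_object(icons, (105, 50)))   # mail (about 11 px away)
print(select_object(icons, (400, 300)))  # None (nothing within 30 px)
```

The "predetermined operation corresponding to the selected object" could then be dispatched from a table mapping object names to callbacks.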
Abstract:
A method and system for generating an augmented reality (AR) scene may include obtaining real world information including multimedia information and sensor information associated with a real world, loading, onto an AR container, an AR locator representing a scheme for mixing the real world information and at least one virtual object content, obtaining the at least one virtual object content corresponding to the real world information using the AR locator from a local storage or an AR contents server, and visualizing AR information by mixing the real world information and the at least one virtual object content based on the AR locator.
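The content-resolution step (local storage first, AR contents server as fallback) and the mixing step can be sketched as follows; every name, the dictionary-shaped locator, and the cache-then-server order are assumptions for illustration only:

```python
def obtain_content(locator, local_storage, fetch_from_server):
    """Resolve virtual object content for an AR locator: try the local
    storage first, then fall back to an AR contents server and cache the
    result locally. All names and the fallback order are assumptions."""
    content = local_storage.get(locator["content_id"])
    if content is None:
        content = fetch_from_server(locator["content_id"])
        local_storage[locator["content_id"]] = content  # cache locally
    return content

def visualize(real_world, locator, content):
    # Mix real-world info and virtual content at the locator's anchor pose.
    return {"frame": real_world["frame"], "overlay": content,
            "pose": locator["pose"]}

locator = {"content_id": "statue-01", "pose": (1.0, 0.0, 2.0)}
cache = {}
scene = visualize({"frame": "camera-frame-17"},
                  locator,
                  obtain_content(locator, cache, lambda cid: f"model:{cid}"))
print(scene["overlay"])        # model:statue-01 (fetched via the fallback)
print("statue-01" in cache)    # True: cached after the first fetch
```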
Abstract:
A method of generating three-dimensional (3D) volumetric data may be performed by generating a multilayer image, generating volume information and a type of a visible part of an object, based on the generated multilayer image, and generating volume information and a type of an invisible part of the object, based on the generated multilayer image. The volume information and the type of each of the visible part and invisible part may be generated based on the generated multilayer image, which may include at least one of a ray-casting-based multilayer image, a chroma key screen-based multilayer image, and a primitive template-based multilayer image.
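The visible/invisible split can be illustrated on a single ray of a multilayer image, assuming each pixel stores a front-to-back list of (depth, part type) hits, e.g. from ray casting; the tuple layout is an assumption for illustration:

```python
def split_parts(multilayer_pixel):
    """multilayer_pixel: front-to-back list of (depth, part_type) entries
    for one ray of a multilayer image. The first hit is the visible part;
    deeper hits are invisible (self-occluded) parts. The tuple layout is
    an assumption, not a detail from the abstract."""
    if not multilayer_pixel:
        return None, []
    visible = multilayer_pixel[0]
    invisible = multilayer_pixel[1:]
    return visible, invisible

# One ray through a torso that occludes an arm behind it.
layers = [(900.0, "torso"), (1100.0, "arm")]
visible, invisible = split_parts(layers)
print(visible)    # (900.0, 'torso')
print(invisible)  # [(1100.0, 'arm')]
```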
Abstract:
Provided are a liveness verification method and device. A liveness verification device acquires a first image and a second image, selects one or more liveness models based on respective analyses of the first image and the second image, including analyses based on an object part being detected in the first image and/or the second image, and verifies, using the selected one or more liveness models, a liveness of the object based on the first image and/or the second image. The first image may be a color image and the second image may be an infrared image.
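A sketch of the part-conditioned model selection and score fusion, with stand-in scalar "models"; the part names, the model registry, the averaging fusion, and the 0.5 decision threshold are all assumptions for illustration:

```python
def select_liveness_models(color_parts, ir_parts, models):
    """Pick liveness models according to which object parts were detected
    in the color (first) and infrared (second) images. The part names and
    selection rules are illustrative assumptions."""
    selected = []
    if "face" in color_parts:
        selected.append(models["color_face"])
    if "face" in ir_parts:
        selected.append(models["ir_face"])
    if "eye" in ir_parts:
        selected.append(models["ir_eye"])
    return selected

def verify_liveness(scores):
    # Fuse model scores; averaging and the 0.5 threshold are assumptions.
    return sum(scores) / len(scores) > 0.5

# Stand-in "models" that directly yield a liveness score.
models = {"color_face": 0.9, "ir_face": 0.8, "ir_eye": 0.2}
chosen = select_liveness_models({"face"}, {"face", "eye"}, models)
print(verify_liveness(chosen))  # True: (0.9 + 0.8 + 0.2) / 3 > 0.5
```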
Abstract:
A method to reduce a neural network includes: adding a reduced layer, which is reduced from a layer in the neural network, to the neural network; computing a layer loss and a result loss with respect to the reduced layer based on the layer and the reduced layer; and determining a parameter of the reduced layer based on the layer loss and the result loss.
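The two losses can be made concrete for a single linear layer. In this sketch the reduction scheme (truncated SVD), the stand-in "rest of the network", and the loss weighting are assumptions; the abstract only specifies that a layer loss and a result loss are computed from the layer and its reduced counterpart:

```python
import numpy as np

rng = np.random.default_rng(0)

# Original linear layer and a rank-reduced version of it (truncated SVD
# is an assumed reduction scheme; the abstract only says "reduced layer").
W = rng.standard_normal((64, 128))
U, S, Vt = np.linalg.svd(W, full_matrices=False)
rank = 16
W_reduced = (U[:, :rank] * S[:rank]) @ Vt[:rank]

x = rng.standard_normal((128, 32))  # a batch of layer inputs

# Layer loss: discrepancy between the layer's and reduced layer's outputs.
layer_loss = np.mean((W @ x - W_reduced @ x) ** 2)

# Result loss: discrepancy after the remainder of the network (here a
# stand-in nonlinearity plus readout) processes each output.
readout = rng.standard_normal((1, 64))
def rest_of_network(h):
    return readout @ np.tanh(h)

result_loss = np.mean((rest_of_network(W @ x) - rest_of_network(W_reduced @ x)) ** 2)

# The reduced layer's parameters would then be tuned to lower a weighted
# combination of the two losses (the 0.5 weight is an assumption).
total_loss = layer_loss + 0.5 * result_loss
```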
Abstract:
An object recognition system is provided. The object recognition system may include an input unit to receive, as an input, a depth image representing an object to be analyzed, and a processing unit to recognize a visible object part and a hidden object part of the object from the depth image, using a classification tree. The object recognition system may include a classification tree learning apparatus to generate the classification tree.
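A minimal sketch of a classification tree whose leaves carry labels for both a visible part and a hidden (occluded) part; the dict-based node layout, the feature name, and the split threshold are assumptions for illustration:

```python
def classify(tree, pixel):
    """pixel: dict of depth-derived features; tree nodes are dicts with a
    ("test") split or a ("leaf") payload. A leaf returns the pair
    (visible_part, hidden_part). The layout is an illustrative assumption."""
    node = tree
    while "leaf" not in node:
        feature, threshold = node["test"]
        node = node["left"] if pixel[feature] < threshold else node["right"]
    return node["leaf"]

tree = {
    "test": ("depth_diff", 100.0),
    "left": {"leaf": ("torso", "left arm")},   # arm hidden behind the torso
    "right": {"leaf": ("head", None)},         # nothing occluded here
}
print(classify(tree, {"depth_diff": 40.0}))   # ('torso', 'left arm')
print(classify(tree, {"depth_diff": 250.0}))  # ('head', None)
```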