Abstract:
An organic light emitting diode (OLED) display apparatus having an optical sensing function is provided. The OLED display apparatus may photograph an external object by sensing input light from the external object that passes through an imaging pattern included in a display panel.
Abstract:
An object recognition system is provided. The object recognition system for recognizing an object may include an input unit to receive, as an input, a depth image representing an object to be analyzed, and a processing unit to recognize a visible object part and a hidden object part of the object, from the depth image, by using a classification tree. The object recognition system may include a classification tree learning apparatus to generate the classification tree.
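The per-pixel mechanism described above can be sketched as a tiny classification tree whose leaves store distributions over both visible and hidden part labels. This is a minimal illustrative sketch, not the patent's actual tree: the depth-difference feature, thresholds, and part labels are all assumptions.

```python
# Hypothetical sketch: a classification tree that, for a small depth patch,
# returns probability distributions over visible and hidden object parts.
# Feature, thresholds, and labels are illustrative assumptions.

class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None,
                 visible_dist=None, hidden_dist=None):
        self.feature = feature              # callable: depth patch -> float
        self.threshold = threshold
        self.left = left
        self.right = right
        self.visible_dist = visible_dist    # {part: probability} at a leaf
        self.hidden_dist = hidden_dist      # {part: probability} at a leaf

    def classify(self, patch):
        if self.visible_dist is not None:   # leaf node
            return self.visible_dist, self.hidden_dist
        branch = self.left if self.feature(patch) < self.threshold else self.right
        return branch.classify(patch)

# Toy split feature: depth difference between two fixed offsets in a 3x3 patch.
def depth_diff(patch):
    return patch[0][2] - patch[2][0]

tree = Node(
    feature=depth_diff, threshold=0.0,
    left=Node(visible_dist={"hand": 0.8}, hidden_dist={"forearm": 0.7}),
    right=Node(visible_dist={"torso": 0.9}, hidden_dist={"back": 0.6}),
)

visible, hidden = tree.classify([[0.0, 0.0, 1.2], [0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
```

A trained system would evaluate such a tree at every foreground depth pixel and aggregate the leaf distributions to label both the visible surface and the occluded part behind it.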
Abstract:
A recognizer training method and apparatus are provided. The method includes selecting training data, generating clusters by clustering the selected training data based on a global shape parameter, and classifying the training data of at least one cluster based on a local shape feature.
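The two-stage scheme above (coarse clustering on a global shape parameter, then finer classification inside a cluster on a local shape feature) can be sketched as follows. The specific parameters, "height" and "width_ratio", and the thresholds are illustrative assumptions, not the patent's actual features.

```python
# Hypothetical two-stage sketch: cluster samples on a global shape
# parameter (here, overall height), then separate the samples inside one
# cluster using a local shape feature (here, a width ratio).

def cluster_by_global_shape(samples, split=1.0):
    # Stage 1: one global parameter, two clusters ("short" vs "tall").
    clusters = {"short": [], "tall": []}
    for s in samples:
        key = "tall" if s["height"] >= split else "short"
        clusters[key].append(s)
    return clusters

def classify_by_local_feature(cluster, split=0.5):
    # Stage 2: inside one cluster, split again on a local shape feature.
    return {
        "wide": [s for s in cluster if s["width_ratio"] >= split],
        "narrow": [s for s in cluster if s["width_ratio"] < split],
    }

samples = [
    {"height": 1.8, "width_ratio": 0.6},
    {"height": 1.7, "width_ratio": 0.3},
    {"height": 0.9, "width_ratio": 0.4},
]
clusters = cluster_by_global_shape(samples)
tall_split = classify_by_local_feature(clusters["tall"])
```

The point of the design is that the expensive, discriminative local features only need to separate samples that already share a similar global shape.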
Abstract:
In an apparatus and method for controlling an interface, a user interface (UI) may be controlled using information on a hand motion and a gaze of a user, without separate tools such as a mouse and a keyboard. That is, the UI control method provides more intuitive, immersive, and unified control of the UI. Since a region of interest (ROI) for sensing the hand motion of the user is calculated using the UI object, and the UI object is controlled based on the hand motion within the ROI, the user may control the UI object in the same manner, and with the same feel, regardless of the distance from the user to a sensor. In addition, since the positions and directions of viewpoints are adjusted based on the position and direction of the gaze, a binocular 2D/3D image based on motion parallax may be provided.
Abstract:
A method of controlling a viewpoint of a user or a virtual object on a two-dimensional (2D) interactive display is provided. The method may convert a user input to structured data with at least 6 degrees of freedom (DOF), according to the number of touch points and their movement and rotation directions. Either the virtual object or the viewpoint of the user may be determined as the manipulation target, based on the location of the touch points.
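The conversion described above can be sketched as a mapping from a touch gesture to a 6-DOF record (tx, ty, tz, rx, ry, rz). The particular gesture-to-axis assignments below (one-finger drag translates, two-finger pinch/twist controls depth and roll, three-finger drag tilts) are illustrative assumptions, not the patent's actual scheme.

```python
# Hypothetical sketch: convert a touch gesture into 6-DOF structured data
# keyed by the number of touch points and their movement/rotation.

def touch_to_6dof(num_points, move=(0.0, 0.0), rotation=0.0, pinch=0.0):
    dof = {"tx": 0.0, "ty": 0.0, "tz": 0.0, "rx": 0.0, "ry": 0.0, "rz": 0.0}
    if num_points == 1:
        # One-finger drag: translate in the screen plane.
        dof["tx"], dof["ty"] = move
    elif num_points == 2:
        # Two fingers: pinch maps to depth, twist maps to roll.
        dof["tz"] = pinch
        dof["rz"] = rotation
    elif num_points >= 3:
        # Three or more fingers: drag maps to tilt (pitch/yaw).
        dof["rx"], dof["ry"] = move
    return dof
```

A dispatcher would then apply the resulting record either to the virtual object's transform or to the camera, depending on whether the touch landed on the object.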
Abstract:
An estimator training method and a pose estimating method using a depth image are disclosed, in which the estimator training method may train an estimator configured to estimate a pose of an object, based on an association between synthetic data and real data, and the pose estimating method may estimate the pose of the object using the trained estimator.
Abstract:
An apparatus recognizes an object using a hole in a depth image. The apparatus may include a foreground extractor to extract a foreground from the depth image, a hole determiner to determine whether a hole is present in the depth image based on the foreground and a color image, a feature vector generator to generate a feature vector by generating a plurality of features corresponding to the object based on the foreground and the hole, and an object recognizer to recognize the object based on the generated feature vector and at least one reference feature vector.
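The four components above form a pipeline that can be sketched end to end. In this minimal sketch, a "hole" is taken to be a pixel the color image marks as object but whose depth value is missing (zero); the thresholds, feature choices, and reference vectors are illustrative assumptions.

```python
# Hypothetical sketch of the pipeline: extract the foreground, find holes
# (color says "object" but depth is missing), build a feature vector, and
# match it against reference vectors by nearest squared distance.

def extract_foreground(depth, near=0.0, far=1.5):
    # Foreground: pixels with a valid depth inside the working range.
    return [[near < d <= far for d in row] for row in depth]

def find_holes(depth, color_mask):
    # Hole: the color image says "object" but the depth value is missing.
    return [[c and d == 0.0 for c, d in zip(crow, drow)]
            for crow, drow in zip(color_mask, depth)]

def feature_vector(fg, holes):
    fg_area = sum(map(sum, fg))
    hole_area = sum(map(sum, holes))
    return [fg_area, hole_area, hole_area / max(fg_area + hole_area, 1)]

def recognize(vec, references):
    # Nearest reference feature vector by squared distance.
    return min(references, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(vec, references[name])))

depth = [[0.0, 1.0], [1.2, 9.9]]                # 0.0 = missing depth
color_mask = [[True, True], [True, False]]
fg = extract_foreground(depth)
holes = find_holes(depth, color_mask)
vec = feature_vector(fg, holes)
label = recognize(vec, {"cup": [2, 1, 0.33], "wall": [0, 0, 0.0]})
```

Treating the hole as a feature, rather than as noise to be inpainted, is the distinctive idea: shiny or absorptive objects produce characteristic depth holes that help identify them.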
Abstract:
An interfacing device for providing a user interface (UI) exploiting a multi-modality may recognize at least two modality inputs for controlling a scene, and generate scene control information based on the at least two modality inputs.
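The recognize-then-generate flow above can be sketched as a small fusion step that requires at least two modality inputs before emitting scene control information. The modality names (gesture, voice) and the fusion rules are illustrative assumptions; the patent does not fix a particular pair of modalities.

```python
# Hypothetical sketch: fuse two modality inputs (here, a hand gesture and
# a voice command) into one piece of scene control information.

def generate_scene_control(gesture, voice):
    # Require both modality inputs before emitting control information.
    if gesture is None or voice is None:
        return None
    rules = {
        ("swipe", "rotate"): {"action": "rotate_scene", "axis": "y"},
        ("pinch", "zoom"): {"action": "zoom_scene", "direction": "in"},
    }
    return rules.get((gesture, voice), {"action": "ignore"})
```

Combining modalities lets one input disambiguate the other: the same swipe can rotate or pan depending on the accompanying spoken command.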
Abstract:
A system and method for learning a pose classifier based on a distributed learning architecture are provided. A pose classifier learning system may include an input unit to receive an input of a plurality of pieces of learning data, and a plurality of pose classifier learning devices to receive inputs of a plurality of learning data sets including the pieces of learning data, and to learn respective pose classifiers. The pose classifier learning devices may share learning information in each stage, using a distributed/parallel framework.
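The stage-wise sharing described above can be sketched with a toy model: each learner computes a local statistic on its own data shard, the learners synchronize by averaging that statistic, and the blended result seeds the next stage. The threshold "classifier" and the blending rule are illustrative assumptions standing in for a real pose classifier.

```python
# Hypothetical sketch of stage-wise distributed learning: learners fit a
# local statistic on their shards, then synchronize by sharing it before
# the next stage begins.

def local_stage(shard):
    # Each learner's local statistic: the mean of its shard.
    return sum(shard) / len(shard)

def share(stats):
    # Synchronization step: combine the learners' stage statistics.
    return sum(stats) / len(stats)

def distributed_learn(shards, stages=3):
    threshold = 0.0
    for _ in range(stages):
        stats = [local_stage(shard) for shard in shards]
        shared = share(stats)
        # Blend the previous stage's model with the shared statistic.
        threshold = 0.5 * threshold + 0.5 * shared
    return threshold
```

Sharing per-stage summaries instead of raw data is what makes the scheme scale: each device only ever sees its own shard, yet all devices converge toward a common classifier.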