Abstract:
Provided is a gesture recognition apparatus. The gesture recognition apparatus includes a human detection unit, a gesture region setting unit, an arm detection unit and a gesture determination unit. The human detection unit detects a face region of a user from an input image. The gesture region setting unit sets a gesture region, in which a gesture of the user's arm occurs, with respect to the detected face region. The arm detection unit detects an arm region of the user in the gesture region. The gesture determination unit analyzes the position, moving direction and shape information of the arm region in the gesture region to determine a target gesture of the user. Such a gesture recognition apparatus may be a useful means for human-robot interaction at a long distance, where a robot has difficulty recognizing a user's voice.
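The apparatus above anchors the gesture search area to the detected face box. A minimal sketch of that idea, with assumed (not claimed) region proportions and function names:

```python
# Hypothetical sketch: the gesture region is defined relative to the
# detected face box, and an arm point is tested against that region.
# The scale factor and all names here are illustrative assumptions.

def gesture_region(face_box, scale=2.0):
    """Return a gesture region (x, y, w, h) centered on the face box,
    enlarged by `scale` face-widths/heights on each side (assumed heuristic)."""
    x, y, w, h = face_box
    gw, gh = int(w * (1 + 2 * scale)), int(h * (1 + 2 * scale))
    gx, gy = x - int(w * scale), y - int(h * scale)
    return (gx, gy, gw, gh)

def contains(region, point):
    """True if `point` (px, py) lies inside `region` (x, y, w, h)."""
    x, y, w, h = region
    px, py = point
    return x <= px < x + w and y <= py < y + h

face = (100, 100, 40, 40)            # detected face box: x, y, w, h
region = gesture_region(face)        # region in which arm gestures are sought
print(contains(region, (50, 90)))    # arm point near the face -> True
print(contains(region, (300, 300)))  # far from the face -> False
```

In a real system the arm detection and direction/shape analysis would run only inside this region, which keeps the search cheap and tied to the user.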
Abstract:
Disclosed are a typewriter system and a text input method capable of accurately recognizing words by correcting words input through a mediated interface device based on a dictionary. A plurality of texts are combined by referencing a text recognition order set, in which the recognition results of texts input through the mediated interface device are arranged according to a recognition order, and the combined text is filtered using part index maps formed of part words, which are accumulated sets of the texts forming complete words. The part words passing through the part index maps are again filtered using a dictionary that includes context information formed of a set of words in a specific category, thereby making it possible to recognize the words accurately. Part words that cannot form words in the dictionary are removed in advance using the part index maps, thereby improving recognition efficiency.
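The part-index-map filtering described above can be sketched as a prefix table: any candidate whose prefixes never occur in a complete dictionary word is pruned before the full dictionary lookup. The tiny vocabulary and function names below are illustrative assumptions, not the patent's data structures:

```python
# Minimal sketch of "part word" (prefix) filtering: candidates whose
# prefixes are impossible in the dictionary are removed early, and the
# survivors are confirmed against the full dictionary.

def build_part_index(dictionary):
    """Accumulate every prefix of every complete word (the 'part words')."""
    parts = set()
    for word in dictionary:
        for i in range(1, len(word) + 1):
            parts.add(word[:i])
    return parts

def filter_candidates(candidates, parts, dictionary):
    """Keep candidates all of whose prefixes are valid part words,
    then confirm the survivors against the complete-word dictionary."""
    survivors = [c for c in candidates
                 if all(c[:i] in parts for i in range(1, len(c) + 1))]
    vocab = set(dictionary)
    return [c for c in survivors if c in vocab]

vocab = ["robot", "root", "rote"]
parts = build_part_index(vocab)
# "rob" passes the part index but is not a complete word; "rxo" is pruned early.
print(filter_candidates(["robot", "rob", "rxo"], parts, vocab))  # ['robot']
```

The early pruning is where the claimed efficiency gain comes from: impossible combinations never reach the (larger) dictionary with context information.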
Abstract:
Disclosed are a device and a method for localizing a user indoors using a wireless local area network, and more particularly, a localization device and a localization method that improve localization accuracy by fusing various pieces of context information when localizing a user-portable or wearable device connected to an RF-based wireless network such as ZigBee.
Abstract:
Provided are a thimble-type intermediation device and a method for recognizing a finger gesture using the same. The thimble-type intermediation device includes: a motion sensing unit sensing a motion of a user's finger and generating the sensed result as motion data; a tactile sensing block sensing a tactile behavior of the user's finger and generating the sensed result as tactile data; a control unit recognizing the gesture and tactile behavior of the user's finger on the basis of the generated motion data and tactile data, and outputting the recognition result as recognition result information; and a wireless communication unit transmitting the recognition result information to a robot system.
Abstract:
Provided are a human recognition apparatus and a human recognition method for identifying a user based on a walking pattern. The human recognition apparatus includes a detecting unit detecting a vibration caused by a user's walking and outputting an electric signal, a pattern calculating unit acquiring the walker's walking pattern from the electric signal, and a user determining unit comparing the walking pattern with previously measured reference data for each user and identifying the user based on the comparison result. The human recognition apparatus and the human recognition method are robust against peripheral noise and can increase the acceptance rate through a simple structure and procedure, because the walking pattern is one-dimensional time information that requires no vast data throughput as user identification data.
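Treating the walking pattern as a one-dimensional time signal, the comparison step can be sketched as a nearest-reference search. The Euclidean distance metric and the sample values below are assumptions for illustration; the abstract does not specify the comparison method:

```python
# Illustrative sketch: compare a measured walking pattern (1-D signal)
# against per-user reference data and identify the closest match.
import math

def identify(pattern, references):
    """Return the user whose reference signal is closest to `pattern`
    (Euclidean distance, an assumed metric)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(references, key=lambda user: dist(pattern, references[user]))

refs = {
    "alice": [0.1, 0.9, 0.2, 0.8],   # assumed reference vibration samples
    "bob":   [0.5, 0.5, 0.5, 0.5],
}
print(identify([0.2, 0.8, 0.3, 0.7], refs))  # 'alice'
```

Because each pattern is a short 1-D vector rather than image data, the comparison is cheap, which matches the abstract's point about low data throughput.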
Abstract:
A gesture spotting detection (GSD) method and apparatus employ a shoulder-line algorithm. The shoulder-line detecting method recognizes a GSD calling gesture that occurs at the shoulder line or above the head, at a remote or short distance, even when the user does not hold a fixed posture. In the method, an image of people is received, and skin information of a person in the image is detected to detect a face area. The clothing color information of the person is then modeled from the input image to detect a clothing area, from which a body space area is obtained. An external space is defined from the image based on the body space area, and edges are extracted from the image based on the body space and the external space. Shoulder-line information is then acquired based on an energy function computed from the body space, the external space, and the edges.
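The energy-function step can be illustrated as a row-wise search: each row is scored by combining edge strength with the separation between body space and external space, and the best-scoring row is taken as the shoulder line. The weights, the scoring form, and the tiny binary maps below are toy assumptions, not the patent's model:

```python
# Toy sketch of an energy-based shoulder-line search over image rows.

def shoulder_energy(row, body, external, edge, w_edge=1.0, w_sep=1.0):
    """Higher energy = stronger edge response plus better separation of
    body space from external space at this row (assumed scoring form)."""
    separation = sum(body[row]) - sum(external[row])
    return w_edge * sum(edge[row]) + w_sep * separation

def best_shoulder_row(body, external, edge):
    """Return the row index with maximum shoulder-line energy."""
    return max(range(len(edge)),
               key=lambda r: shoulder_energy(r, body, external, edge))

# Illustrative 4x4 masks: body/external are complementary region masks,
# edge marks detected edge pixels.
body = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [1, 1, 1, 1],
        [1, 1, 1, 1]]
external = [[1 - v for v in row] for row in body]
edge = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [1, 0, 0, 1],
        [0, 0, 0, 0]]
print(best_shoulder_row(body, external, edge))  # row 2, where the body widens
```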
Abstract:
Provided is a walking supporting apparatus that supports a user's walking by using a multi-sensor signal processing system that detects a walking intent. A palm sensor unit detects a force applied to a palm through a stick to generate a palm sensor signal. A sole sensor unit detects a force applied to a sole through the ground to generate a sole sensor signal. A portable information processing unit checks the user's walking intent by using the palm sensor signal, and if it determines that the user has a walking intent, it generates a driving signal in response to the sole sensor signal. A walking supporting mechanism includes a left motor attached to the user's left leg and a right motor attached to the user's right leg, and supports the user's walking when the left and right motors are driven in response to the driving signal.
Abstract:
A user recognizing system and method are provided. According to the user recognizing system and method, user IDs and predetermined user feature information are stored; first and second user feature information are extracted from user image data transmitted from an image input unit; first and second probabilities that the extracted first and second user feature information identify a predetermined user are respectively generated based on the information stored in the user information database, the first user feature information being absolutely unique biometric information and the second user feature information being semi-biometric information that is unique under a predetermined condition; and the ID for the input image is finally determined by combining the first probability and the second probability. According to the user recognizing system and method, a user's identity can be authenticated even when the user moves freely.
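The final decision step combines per-user scores from the biometric feature (e.g. a face) and the semi-biometric feature (e.g. clothing color). The weighted-sum fusion rule, weight, and probability values below are assumptions for illustration; the abstract does not specify the combination method:

```python
# Sketch of combining a biometric and a semi-biometric probability per
# user ID and choosing the best-scoring identity.

def combine_and_decide(p_bio, p_semi, alpha=0.7):
    """Fuse two per-user probability maps with weight `alpha` on the
    biometric score (assumed rule) and return the best-scoring user ID."""
    fused = {uid: alpha * p_bio[uid] + (1 - alpha) * p_semi[uid]
             for uid in p_bio}
    return max(fused, key=fused.get)

p_face  = {"u1": 0.7, "u2": 0.3}   # illustrative biometric probabilities
p_cloth = {"u1": 0.2, "u2": 0.8}   # illustrative semi-biometric probabilities
print(combine_and_decide(p_face, p_cloth))  # 'u1'
```

Weighting the biometric score more heavily reflects that it is "absolutely unique", while the semi-biometric cue (valid only under a condition, e.g. the same day's clothing) still contributes when the face is not clearly visible.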
Abstract:
A texture-based image database browsing and sorting method computes the number of edge pixels of objects in static images, measures the texture of each static image by counting its edge pixels, and measures the texture of a query image by counting the edge pixels of the object in the query image. The method then sorts the measured textures according to a sorting order and searches the sorted textures for the texture closest to that of the query image.
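The method above reduces texture to a single edge-pixel count per image, which can be sketched directly. The tiny binary edge maps below stand in for real edge-detector output, and all names are illustrative assumptions:

```python
# Minimal sketch: texture = number of edge pixels; retrieval = database
# entry whose edge-pixel count is nearest the query's.

def texture(edge_map):
    """Texture measure = count of nonzero (edge) pixels."""
    return sum(px != 0 for row in edge_map for px in row)

def closest(query_edges, database):
    """Return the database key whose texture count is nearest the query's."""
    q = texture(query_edges)
    return min(database, key=lambda k: abs(texture(database[k]) - q))

db = {
    "smooth": [[0, 0], [0, 1]],    # 1 edge pixel
    "busy":   [[1, 1], [1, 1]],    # 4 edge pixels
}
query = [[1, 1], [0, 1]]           # 3 edge pixels -> nearer "busy"
print(closest(query, db))  # 'busy'
```

In the described method the database counts would be precomputed and kept sorted, so the closest texture can be found by binary search rather than a linear scan.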