Abstract:
Provided is a walking supporting apparatus for supporting a user's walking by using a multi-sensor signal processing system that detects a walking intent. A palm sensor unit detects a force applied to a palm through a stick and generates a palm sensor signal. A sole sensor unit detects a force applied to a sole through the ground and generates a sole sensor signal. A portable information processing unit checks the user's walking intent by using the palm sensor signal and, if the user has a walking intent, generates a driving signal in response to the sole sensor signal. A walking supporting mechanism includes a left motor attached to the user's left leg and a right motor attached to the user's right leg, and supports the user's walking when the left and right motors are driven in response to the driving signal.
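A minimal sketch of the control flow described above, assuming a simple force threshold for intent detection and a proportional mapping from sole force to motor drive; the threshold, gain, and function names are illustrative, not taken from the patent.

```python
# Illustrative control loop for the walking-support flow above.
# PALM_INTENT_THRESHOLD and SOLE_GAIN are assumed values, not patent parameters.

PALM_INTENT_THRESHOLD = 5.0   # N: palm force on the stick taken as walking intent
SOLE_GAIN = 0.8               # assumed proportional sole-force-to-drive mapping

def has_walking_intent(palm_force_n: float) -> bool:
    """Check the palm sensor signal for a walking intent."""
    return palm_force_n >= PALM_INTENT_THRESHOLD

def driving_signal(sole_force_n: float) -> float:
    """Generate a driving signal in response to a sole sensor signal."""
    return SOLE_GAIN * sole_force_n

def control_step(palm_force_n, left_sole_n, right_sole_n):
    """One cycle of the portable information processing unit."""
    if not has_walking_intent(palm_force_n):
        return 0.0, 0.0   # no walking intent: leave both motors idle
    return driving_signal(left_sole_n), driving_signal(right_sole_n)

if __name__ == "__main__":
    # Intent present, left foot in stance: only the left motor is driven hard.
    print(control_step(palm_force_n=7.2, left_sole_n=40.0, right_sole_n=3.0))
```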
Abstract:
Disclosed are an apparatus and a method for sensing a photoplethysmogram and a fall. According to the present invention, the apparatus for sensing a photoplethysmogram and a fall may include: a sensor unit that senses acceleration and a photoplethysmogram; a photoplethysmogram/fall determining module that jointly analyzes the acceleration signals and the photoplethysmogram signals sensed by the sensor unit to determine whether an emergency has occurred due to a fall or due to the photoplethysmogram; and a communication module that transmits the determination result.
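A minimal sketch of the joint determination step, assuming an impact threshold on acceleration magnitude and heart-rate bounds derived from the PPG signal; all thresholds and the decision rule are assumptions for illustration.

```python
import math

# Illustrative fusion of acceleration and photoplethysmogram (PPG) evidence.
# FALL_ACCEL_G, HR_LOW, and HR_HIGH are assumed values, not patent parameters.

FALL_ACCEL_G = 2.5          # acceleration magnitude (g) suggesting an impact
HR_LOW, HR_HIGH = 40, 150   # heart-rate band (bpm) treated as non-emergency

def accel_magnitude(ax: float, ay: float, az: float) -> float:
    return math.sqrt(ax * ax + ay * ay + az * az)

def determine_emergency(ax, ay, az, heart_rate_bpm):
    """Jointly test both signals and name the emergency, if any."""
    fall = accel_magnitude(ax, ay, az) > FALL_ACCEL_G
    ppg_abnormal = not (HR_LOW <= heart_rate_bpm <= HR_HIGH)
    if fall and ppg_abnormal:
        return "fall emergency confirmed by abnormal PPG"
    if fall:
        return "possible fall"
    if ppg_abnormal:
        return "PPG emergency"
    return "normal"

if __name__ == "__main__":
    print(determine_emergency(0.1, 0.2, 3.1, heart_rate_bpm=32))
```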
Abstract:
Disclosed is an apparatus for tracking a location of a hand, the apparatus including: a skin color image detector for detecting a skin color region from an image input from an image device by using a predetermined skin color of a user; a face tracker for tracking a face by using the detected skin color image; a motion detector for setting a region of interest (ROI) by using location information of the tracked face and detecting a motion image from the set ROI; a candidate region extractor for extracting a candidate region for a hand of the user by using the skin color image detected by the skin color image detector and the motion image detected by the motion detector; and a hand tracker for tracking the location of the hand in the extracted candidate region to determine a final location of the hand.
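A sketch of the pipeline under stated assumptions: a fixed YCrCb skin range stands in for the user's predetermined skin color, the face box is given rather than tracked, and the ROI heuristic (the area below and around the face) is illustrative. Uses OpenCV 4.x.

```python
import numpy as np
import cv2

# Assumed YCrCb bounds standing in for the user's predetermined skin color.
SKIN_LO = np.array([0, 133, 77], np.uint8)
SKIN_HI = np.array([255, 173, 127], np.uint8)

def skin_mask(frame_bgr):
    """Skin color image detector: binary mask of skin-colored pixels."""
    return cv2.inRange(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb),
                       SKIN_LO, SKIN_HI)

def motion_mask(prev_gray, curr_gray):
    """Motion detector: thresholded frame difference."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    return cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]

def hand_location(frame_bgr, prev_gray, face_box):
    """Candidate extraction and hand localization inside a face-relative ROI."""
    x, y, w, h = face_box
    rx, ry, rw, rh = max(x - w, 0), y + h, 3 * w, 3 * h  # assumed ROI layout
    curr_gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Candidate region: pixels that are both skin-colored and moving.
    candidates = cv2.bitwise_and(skin_mask(frame_bgr),
                                 motion_mask(prev_gray, curr_gray))
    roi = candidates[ry:ry + rh, rx:rx + rw]
    contours, _ = cv2.findContours(roi, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    bx, by, bw, bh = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return rx + bx + bw // 2, ry + by + bh // 2  # final hand location (x, y)

if __name__ == "__main__":
    prev = np.zeros((480, 640), np.uint8)
    frame = np.zeros((480, 640, 3), np.uint8)
    frame[300:340, 200:240] = (120, 150, 160)   # skin-toned moving patch (BGR)
    print(hand_location(frame, prev, face_box=(180, 100, 80, 80)))
```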
Abstract:
Disclosed are a component recognizing apparatus and a component recognizing method. The component recognizing apparatus includes: an image preprocessing unit configured to extract component edges from an input component image by using a plurality of edge detecting techniques, and to detect a component region by using the extracted component edges; a feature extracting unit configured to extract a component feature from the detected component region and to create a feature vector by using the component feature; and a component recognizing unit configured to input the created feature vector to an artificial neural network, which has been trained in advance to recognize a component category from a plurality of component image samples, and to recognize the component category according to the result.
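A sketch of the same flow under stated assumptions: Canny and Sobel serve as the plurality of edge detectors, log-scaled Hu moments as the feature vector, and a small scikit-learn MLP as the pre-trained neural network; none of these specific choices come from the patent.

```python
import numpy as np
import cv2
from sklearn.neural_network import MLPClassifier

def component_region(image_gray):
    """Image preprocessing: fuse two edge detectors, then bound the region."""
    canny = cv2.Canny(image_gray, 50, 150)
    sobel = cv2.convertScaleAbs(cv2.Sobel(image_gray, cv2.CV_16S, 1, 0))
    edges = cv2.bitwise_or(canny,
                           cv2.threshold(sobel, 60, 255, cv2.THRESH_BINARY)[1])
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return image_gray
    return image_gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def feature_vector(region):
    """Feature extraction: log-scaled Hu moments of the component region."""
    hu = cv2.HuMoments(cv2.moments(region)).ravel()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

if __name__ == "__main__":
    # Stand-in "training in advance" on two synthetic component categories.
    rng = np.random.default_rng(0)

    def sample(category):
        img = np.zeros((64, 64), np.uint8)
        if category == 0:
            cv2.rectangle(img, (10, 20), (50, 40), 255, -1)  # chip-like shape
        else:
            cv2.circle(img, (32, 32), 18, 255, -1)           # cap-like shape
        noise = rng.integers(0, 20, img.shape, dtype=np.uint8)
        img = np.clip(img.astype(np.int32) + noise, 0, 255).astype(np.uint8)
        return feature_vector(component_region(img))

    X = [sample(c) for c in (0, 1) for _ in range(20)]
    y = [c for c in (0, 1) for _ in range(20)]
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X, y)
    print(net.predict([sample(1)]))   # expected: [1]
```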
Abstract:
Disclosed are a device and a method for localizing a user indoors by using a wireless local area network, and more particularly, a localization device and a localization method that improve localization accuracy by fusing various pieces of context information when localizing a user-portable or wearable device connected to an RF-based wireless network such as ZigBee.
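A sketch of one plausible realization: a weighted-centroid estimate from anchor RSSI values under a log-distance path-loss model, blended with one piece of context (the last known position). The anchor layout, path-loss constants, and blending rule are all assumptions.

```python
# Illustrative RF localization with a simple context-fusion step.
# Anchor positions, TX_POWER_DBM, PATH_LOSS_EXP, and alpha are assumed values.

ANCHORS = {                 # ZigBee anchor node -> known (x, y) position, metres
    "A1": (0.0, 0.0), "A2": (10.0, 0.0), "A3": (0.0, 8.0), "A4": (10.0, 8.0),
}
TX_POWER_DBM = -40.0        # assumed RSSI at 1 m
PATH_LOSS_EXP = 2.5         # assumed indoor path-loss exponent

def rssi_to_distance(rssi_dbm):
    """Invert the log-distance path-loss model."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXP))

def weighted_centroid(rssi_by_anchor):
    """Estimate (x, y) as a centroid of anchors weighted by 1/distance."""
    wx = wy = wsum = 0.0
    for anchor, rssi in rssi_by_anchor.items():
        x, y = ANCHORS[anchor]
        w = 1.0 / max(rssi_to_distance(rssi), 0.1)
        wx, wy, wsum = wx + w * x, wy + w * y, wsum + w
    return wx / wsum, wy / wsum

def fuse_with_context(estimate, last_position, alpha=0.3):
    """Context fusion: pull the RF estimate toward the last known position."""
    if last_position is None:
        return estimate
    return tuple((1 - alpha) * e + alpha * p
                 for e, p in zip(estimate, last_position))

if __name__ == "__main__":
    rssi = {"A1": -55.0, "A2": -70.0, "A3": -62.0, "A4": -75.0}
    print(fuse_with_context(weighted_centroid(rssi), last_position=(2.0, 3.0)))
```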
Abstract:
A method for calculating the effective volume of a diesel particulate filter may include: determining whether regeneration efficiency exists; determining, if regeneration efficiency exists, whether a learning condition of the ash coefficient is satisfied; detecting an exhaust flow amount Q_exh if the learning condition of the ash coefficient is satisfied; calculating a change of a pressure difference Δ(ΔP_ash(n)) caused by the ash; calculating a change of an ash coefficient δ(a_4) by using the change of the pressure difference Δ(ΔP_ash(n)) caused by the ash and the exhaust flow amount Q_exh; calculating a current ash coefficient a_4(n) by using the change of the ash coefficient δ(a_4) and a previous ash coefficient a_4(n-1); and calculating the effective volume V_e by using the current ash coefficient a_4(n) and a first filter coefficient a_1.
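The update order can be transcribed directly; since the abstract does not give the functional forms for δ(a_4) or V_e, the expressions below are placeholders that only illustrate how a_4(n) = a_4(n-1) + δ(a_4) feeds the volume calculation.

```python
# Transcription of the learning steps above. The formulas for delta_a4 and Ve
# are assumed placeholders; only the order of operations follows the abstract.

def learn_ash_coefficient(a4_prev, d_dp_ash, q_exh, k_learn=1e-6):
    """delta(a_4) from the ash pressure-difference change and exhaust flow,
    then a_4(n) = a_4(n-1) + delta(a_4)."""
    delta_a4 = k_learn * d_dp_ash / max(q_exh, 1e-9)   # assumed form
    return a4_prev + delta_a4

def effective_volume(a4_n, a1, v_nominal=5.0):
    """V_e from the current ash coefficient a_4(n) and first filter
    coefficient a_1 (placeholder relation; v_nominal in litres)."""
    return v_nominal * a1 / (a1 + a4_n)                # assumed form

if __name__ == "__main__":
    a4 = learn_ash_coefficient(a4_prev=0.02, d_dp_ash=150.0, q_exh=0.12)
    print(effective_volume(a4, a1=1.0))
```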
Abstract:
Provided are a method and an apparatus for recognizing a gesture in an image processing system. In the apparatus, an input unit receives an image obtained by capturing a gesture of a user with a camera. A detector detects a face area in the input image and detects a hand area in gesture search areas, the gesture search areas being set by dividing the image into predetermined areas with reference to a predetermined location of the detected face area. A controller sets the gesture search areas, determines whether a gesture occurs in the detected hand area, and selects a detection area with respect to the gesture to generate a control command for controlling an image device. A calculator calculates skin-color information and differential-area information for checking a gesture in the detected hand area. Accordingly, a hand area can be detected accurately, and a gesture can be separated from peripheral movement information, so that malfunctions caused by erroneous gesture recognition can be reduced.
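A sketch of the gesture check under stated assumptions: two gesture search areas laid out to the left and right of the face, a fixed YCrCb skin range for the skin-color information, and frame differencing for the differential-area information; all thresholds are illustrative.

```python
import numpy as np
import cv2

SKIN_LO = np.array([0, 133, 77], np.uint8)    # assumed YCrCb skin range
SKIN_HI = np.array([255, 173, 127], np.uint8)

def search_areas(face_box):
    """Set gesture search areas by dividing the image around the face."""
    x, y, w, h = face_box
    return {"left":  (max(x - 2 * w, 0), y, 2 * w, 2 * h),
            "right": (x + w, y, 2 * w, 2 * h)}

def gesture_evidence(frame_bgr, prev_bgr, area):
    """Skin-color and differential-area information for one search area."""
    ax, ay, aw, ah = area
    roi  = frame_bgr[ay:ay + ah, ax:ax + aw]
    prev = prev_bgr[ay:ay + ah, ax:ax + aw]
    skin = cv2.inRange(cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb), SKIN_LO, SKIN_HI)
    diff = cv2.absdiff(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))
    moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    return float(np.mean(skin > 0)), float(np.mean(moving > 0))

def detect_gesture(frame_bgr, prev_bgr, face_box, skin_min=0.15, motion_min=0.05):
    """Select the detection area, if any, in which a gesture is occurring."""
    for name, area in search_areas(face_box).items():
        skin_ratio, motion_ratio = gesture_evidence(frame_bgr, prev_bgr, area)
        if skin_ratio > skin_min and motion_ratio > motion_min:
            return name            # would trigger a device control command
    return None

if __name__ == "__main__":
    prev = np.zeros((480, 640, 3), np.uint8)
    frame = prev.copy()
    frame[150:220, 420:480] = (120, 150, 160)  # skin-toned moving patch (BGR)
    print(detect_gesture(frame, prev, face_box=(280, 100, 80, 80)))  # 'right'
```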
Abstract:
An etchant for removing an indium oxide layer includes sulfuric acid as a main oxidizer; an auxiliary oxidizer such as H3PO4, HNO3, CH3COOH, HClO4, H2O2, or a Compound A that is obtained by mixing potassium peroxymonosulfate (2KHSO5), potassium bisulfate (KHSO4), and potassium sulfate (K2SO4) together in a ratio of 5:3:2; an etching inhibitor comprising an ammonium-based material; and water. The etchant may remove desired portions of the indium oxide layer without damage to a photoresist pattern or to layers underlying the indium oxide layer.
Abstract:
Disclosed is a method of detecting an upper body. The method includes detecting an omega candidate area, which contains a shape formed by a face and a shoulder line of a human, from a target image; cutting the target image down to an upper body candidate area including the omega candidate area; detecting a human face in the upper body candidate area; and judging whether the upper body of the human is included in the target image according to the result of detecting the human face.
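A sketch of the flow using stock OpenCV cascades as stand-ins: an upper-body cascade plays the role of the omega (face-plus-shoulder-line) candidate detector, and a frontal-face cascade verifies each cut-out candidate area. The cascades are placeholders with the same roles, not the patent's detectors.

```python
import numpy as np
import cv2

# Stand-in detectors shipped with OpenCV; not the patent's omega/face models.
omega = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_upperbody.xml")
face = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def contains_upper_body(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Step 1: omega candidate areas (face-plus-shoulder-line shapes).
    for (x, y, w, h) in omega.detectMultiScale(gray, 1.1, 3):
        # Step 2: cut the target image down to the upper body candidate area.
        candidate = gray[y:y + h, x:x + w]
        # Steps 3-4: detect a face inside it; a hit confirms the upper body.
        if len(face.detectMultiScale(candidate, 1.1, 3)) > 0:
            return True
    return False

if __name__ == "__main__":
    print(contains_upper_body(np.zeros((240, 320, 3), np.uint8)))  # -> False
```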
Abstract:
Disclosed is a method of extracting a text area. The method includes: generating a text area prediction value within an input second image based on a plurality of text area data, stored in a database, including geometric information about a text area of a first image; generating a text recognition result value by determining whether text is recognized in a probable text area within the input second image; and selecting a text area within the second image by combining the generated text area prediction value and the text recognition result value.
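A sketch of the combination step under stated assumptions: the database prior is reduced to a distance-based score against stored (x, y, w, h) areas from the first image, the recognition test is a stub, and the equal weighting and threshold are illustrative.

```python
import math

# Stored text-area geometry from a first image; values are illustrative.
DB_AREAS = [(40, 400, 200, 30), (60, 440, 180, 25)]   # (x, y, w, h)

def prediction_value(area):
    """Text area prediction value: closeness to areas stored in the database."""
    x, y, _, _ = area
    best = min(math.hypot(x - dx, y - dy) for dx, dy, _, _ in DB_AREAS)
    return math.exp(-best / 50.0)           # assumed distance scale

def recognition_value(area, recognize):
    """Text recognition result value from a (stubbed) recognizer."""
    return 1.0 if recognize(area) else 0.0

def select_text_areas(candidates, recognize, threshold=0.6):
    """Combine both values and keep areas above an assumed threshold."""
    return [a for a in candidates
            if 0.5 * prediction_value(a)
             + 0.5 * recognition_value(a, recognize) >= threshold]

if __name__ == "__main__":
    candidates = [(45, 405, 190, 28), (300, 80, 90, 40)]
    ocr_stub = lambda area: area[1] > 350   # stand-in recognizer
    print(select_text_areas(candidates, ocr_stub))  # keeps the first area
```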