Abstract:
According to an embodiment of the present invention, an image capturing apparatus that captures images through a front camera and a rear camera includes: a display unit; a feature extraction unit that extracts facial features from an image of a user's face displayed on a preview screen of the display unit through the rear camera; a composition extraction unit that extracts the user's facial composition using the extracted facial features; an expression extraction unit that, when the extracted facial composition matches a reference facial composition, extracts the user's facial expression using the extracted facial features; and an alarm unit that outputs a capture notification signal when the extracted facial expression matches a reference facial expression.
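As a rough illustration of the two-stage gate this abstract describes (composition check first, expression check second), the following Python sketch assumes hypothetical extraction helpers and an arbitrary distance threshold; none of these names or values come from the patent.

# Minimal sketch of the two-stage capture gate described in the abstract.
# extract_features / extract_composition / extract_expression are hypothetical
# stand-ins for the feature, composition, and expression extraction units.

def should_signal_capture(preview_frame, ref_composition, ref_expression,
                          extract_features, extract_composition, extract_expression,
                          comp_tol=0.1, expr_tol=0.1):
    """Return True when a capture notification signal should be emitted."""
    features = extract_features(preview_frame)       # feature extraction unit
    composition = extract_composition(features)      # composition extraction unit
    if distance(composition, ref_composition) > comp_tol:
        return False                                 # composition gate not passed
    expression = extract_expression(features)        # expression extraction unit
    return distance(expression, ref_expression) <= expr_tol

def distance(a, b):
    # Illustrative similarity measure: mean absolute difference of feature vectors.
    return sum(abs(x - y) for x, y in zip(a, b)) / max(len(a), 1)

The ordering mirrors the conditional wording of the abstract: the expression extractor only runs once the composition gate has passed.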
Abstract:
Presented are a method and system for processing a user input provided by a user of an input device. The method comprises detecting the user input and obtaining visual and audio representations of the user's actions. A user activity is determined from the obtained audio and visual representations. A user command is then determined based on the detected user input and the determined user activity.
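A minimal sketch of the described fusion of a detected input with an inferred activity might look as follows in Python; the event names, activity labels, and command table are invented for illustration and do not come from the patent.

# Illustrative sketch: fusing a detected input event with a user activity
# inferred from audio/visual observation to resolve a command. All event
# names and the command table below are hypothetical.

COMMAND_TABLE = {
    ("tap", "reading"): "scroll_page",
    ("tap", "speaking"): "mute_microphone",
    ("swipe", "reading"): "next_page",
}

def determine_activity(audio_repr, visual_repr):
    # Placeholder fusion rule: prefer the visual cue, fall back to audio.
    return visual_repr.get("activity") or audio_repr.get("activity", "idle")

def determine_command(user_input, audio_repr, visual_repr):
    activity = determine_activity(audio_repr, visual_repr)
    return COMMAND_TABLE.get((user_input, activity), "no_op")

# Example: a tap while the camera sees the user reading scrolls the page.
print(determine_command("tap", {"activity": "reading"}, {"activity": "reading"}))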
Abstract:
A method, apparatus and computer program product are provided for identifying an unknown subject using face recognition. In particular, upon receiving a plurality of images depicting a subject, the method may include deriving and storing a common component image and a gross innovation component image associated with the subject, wherein the subject can later be identified in a new image using these two stored images. The common component image may capture features that are common to all of the received images depicting the subject, whereas the gross innovation component image may capture a combination of the features that are unique to each of the received images. The method may further include deriving and storing a low-rank data matrix associated with the received images, wherein the low-rank data matrix may capture any illumination differences and/or occlusions associated with the received images.
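One plausible reading of the decomposition, sketched in Python with NumPy: a pixel-wise median standing in for the common component, averaged residuals for the gross innovation component, and a truncated SVD for the low-rank matrix. These specific operators are assumptions; the abstract does not name them.

import numpy as np

# Hedged sketch of the decomposition described above: a common component
# shared by all images of the subject, a "gross innovation" component
# aggregating per-image residuals, and a low-rank matrix absorbing
# illumination/occlusion effects. Median, mean, and SVD truncation are
# illustrative choices, not the patent's actual derivation.

def decompose(images):
    """images: array of shape (n, h, w), all depicting the same subject."""
    stack = np.asarray(images, dtype=np.float64)
    common = np.median(stack, axis=0)            # features common to all images
    residuals = stack - common                   # what is unique to each image
    gross_innovation = residuals.mean(axis=0)    # combination of unique features

    # Low-rank approximation of the flattened image matrix (one image per column).
    data = stack.reshape(stack.shape[0], -1).T
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    k = min(2, s.size)                           # illustrative rank
    low_rank = (u[:, :k] * s[:k]) @ vt[:k, :]
    return common, gross_innovation, low_rank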
Abstract:
The present invention relates to a method for synthesis of a non-primary facial expression. The method comprises acquiring a facial image and calculating a shape and a texture for each of a plurality of primary facial expressions for the facial image. The method further comprises the step of generating a subject-specific model of the face including each of the primary facial expressions using the calculated shapes and textures. The method also comprises selecting an expression vector corresponding to a non-primary facial expression to be synthesised and applying the expression vector to the model to synthesise a facial image having the selected non-primary facial expression. The invention also relates to a system for synthesis of a non-primary facial expression and to a method for generating an expression look-up table.
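A hedged sketch of the synthesis step, assuming the subject-specific model reduces to per-expression shape and texture arrays and the expression vector acts as blend weights; this is one interpretation for illustration, not the patent's stated construction.

import numpy as np

# Sketch of synthesising a non-primary expression as a weighted blend of a
# subject-specific model's primary expressions. Representing the model as
# per-expression shape/texture arrays and the expression vector as blend
# weights is an assumption; the patented model may differ.

def synthesise(model, expression_vector):
    """model: dict with 'shapes' (k, p, 2) landmark sets and 'textures' (k, h, w).
    expression_vector: length-k weights selecting a non-primary expression."""
    w = np.asarray(expression_vector, dtype=np.float64)
    w = w / w.sum()                                        # normalise blend weights
    shape = np.tensordot(w, model["shapes"], axes=1)       # blended landmark shape
    texture = np.tensordot(w, model["textures"], axes=1)   # blended texture
    return shape, texture

# Example: weighting "smile" at 0.7 and "surprise" at 0.3 yields an in-between
# expression, the kind of entry an expression look-up table could store.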
Abstract:
The present invention relates to a system and method for testing emotion recognition ability using multisensory information, and to a system and method for emotion recognition training using multisensory information. More specifically, the invention comprises: an output unit that outputs multisensory information consisting of at least one emotional state of a target person; an input unit that receives from a test subject, based on the output multisensory information, emotional state information indicating whether the subject judges the target person's at least one emotional state to be the same; a comparison unit that checks whether the received emotional state information matches reference emotional state information, corresponding to the multisensory information, previously stored in a storage unit; and a control unit that determines the test subject's emotion recognition ability according to the result of checking the received emotional state information. With this configuration, the emotion recognition testing and training systems and methods of the present invention can easily assess a subject's ability to recognize the emotions of others by determining a target person's emotional state from multisensory information.
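The test loop might be sketched as follows in Python; the stimulus identifiers, reference table, and accuracy-based score are illustrative assumptions rather than the patent's specification.

# Minimal sketch of the test loop described above: present multisensory
# stimuli, collect the subject's judgement, compare against stored reference
# answers, and score recognition ability. Stimulus names and the scoring
# rule are illustrative assumptions.

REFERENCE = {                      # pre-stored reference emotional states
    "stimulus_01": "happy",
    "stimulus_02": "sad",
}

def run_test(present_stimulus, read_response, stimulus_ids):
    correct = 0
    for sid in stimulus_ids:
        present_stimulus(sid)              # output unit: play face/voice cues
        answer = read_response()           # input unit: subject's judgement
        if answer == REFERENCE[sid]:       # comparison unit
            correct += 1
    return correct / len(stimulus_ids)     # control unit's ability score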
Abstract:
Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression, to appear in a certain way, or to "break" an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by one or more experts. The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
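The staged pipeline could be sketched as below; the 1-to-5 rating scale, the threshold, and the record layout are assumptions made for illustration.

# Sketch of the crowdsourcing pipeline stages named above: generate images to
# a cue, rate them, vet the highly rated ones, then emit labelled training
# examples.

RATING_THRESHOLD = 4.0   # first quality criterion (assumed 1-5 scale)

def pipeline(generated_images, rate, vet, label_positive=True):
    """generated_images: iterable of images produced to a cue or goal.
    rate: crowd rating function; vet: expert approval function."""
    examples = []
    for image in generated_images:
        if rate(image) < RATING_THRESHOLD:   # crowd rating stage
            continue
        if not vet(image):                   # expert vetting stage
            continue
        examples.append((image, 1 if label_positive else 0))
    return examples                          # training examples for a classifier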
Abstract:
The invention relates to a method 20 for accessing a service. According to the invention, the method comprises the following steps: a device 16 compares, at least once, two images captured by at least one camera 162, and only if the two captured images do not match does the device authorize access to at least one requested service. The invention also relates to a corresponding device and system.
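A minimal sketch of the mismatch-gated authorization in Python; the mean-absolute-difference test and its threshold are assumptions, since the abstract does not specify how the images are compared.

import numpy as np

# Sketch of the access check described above: grant the service only when two
# captured frames do NOT match (plausibly a liveness-style check, e.g. to
# reject a static photo replayed in front of the camera). The comparison
# below is an illustrative assumption.

MATCH_THRESHOLD = 5.0   # assumed mean per-pixel intensity difference

def images_match(img_a, img_b):
    diff = np.abs(np.asarray(img_a, float) - np.asarray(img_b, float))
    return diff.mean() < MATCH_THRESHOLD

def authorize_access(frame_1, frame_2):
    # Only if the two captured images do not match is access authorized.
    return not images_match(frame_1, frame_2)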
Abstract:
Techniques are disclosed that involve the detection of smiles in images. Such techniques may employ local binary pattern (LBP) features and/or multi-layer perceptron (MLP) based classifiers. Such techniques can be used extensively on various devices, including (but not limited to) camera phones, digital cameras, gaming devices, personal computing platforms, and other embedded camera devices.
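A compact sketch of such a detector using scikit-image's LBP and scikit-learn's MLP as off-the-shelf stand-ins; the patent targets embedded implementations, and all parameter choices below (P=8, R=1, uniform LBP, one hidden layer) are assumptions.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier

# Sketch of an LBP + MLP smile detector in the spirit of the abstract.

P, R = 8, 1   # LBP neighbourhood: 8 samples at radius 1

def lbp_histogram(gray_face):
    """gray_face: 2-D array of a cropped face/mouth region."""
    codes = local_binary_pattern(gray_face, P, R, method="uniform")
    # Uniform LBP with P samples yields P + 2 distinct codes.
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_smile_classifier(faces, labels):
    """faces: list of 2-D grayscale arrays; labels: 1 = smile, 0 = no smile."""
    X = np.array([lbp_histogram(f) for f in faces])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    return clf.fit(X, labels)

The histogram of uniform LBP codes keeps the feature vector tiny (P + 2 values), which is one reason LBP features suit the embedded camera devices the abstract lists.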