-
Publication No.: US10789456B2
Publication Date: 2020-09-29
Application No.: US15856271
Filing Date: 2017-12-28
Applicant: Adobe Inc.
Inventors: Yu Luo, Xin Lu, Jen-Chan Jeff Chien
Abstract: Techniques are disclosed for facial expression classification. In an embodiment, a multi-class classifier is trained using labelled training images, each training image including a facial expression. The trained classifier is then used to predict expressions for unlabelled video frames, whereby each frame is effectively labelled with a predicted expression. In addition, each predicted expression can be associated with a confidence score. Anchor frames can then be identified in the labelled video frames based on the confidence scores of those frames (anchor frames are frames having a confidence score above an established threshold). Then, for each labelled video frame between two anchor frames, the predicted expression is refined or otherwise updated using interpolation, thereby providing a set of video frames having calibrated expression labels. These calibrated labelled video frames can then be used to further train the previously trained facial expression classifier, thereby providing a supplementally trained facial expression classifier.
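The abstract describes a label-calibration loop: predict expressions per frame, pick high-confidence anchor frames, and interpolate the labels of frames between anchors before retraining. Below is a minimal sketch of that anchor-and-interpolate step, assuming softmax probabilities from the classifier; the function name, the confidence threshold, and the use of linear interpolation are illustrative assumptions, not details from the patent.

```python
import numpy as np

# Assumed value for the "established threshold" on prediction confidence.
CONFIDENCE_THRESHOLD = 0.9

def calibrate_labels(probs: np.ndarray) -> np.ndarray:
    """Refine per-frame expression predictions between anchor frames.

    probs: (num_frames, num_classes) per-frame class probabilities from the
    trained multi-class classifier. Frames whose top confidence exceeds the
    threshold act as anchors; every frame between two consecutive anchors is
    replaced by a linear interpolation of the two anchors' probability rows.
    """
    confidences = probs.max(axis=1)
    anchors = np.flatnonzero(confidences >= CONFIDENCE_THRESHOLD)
    calibrated = probs.copy()
    for left, right in zip(anchors[:-1], anchors[1:]):
        span = right - left
        for i in range(left + 1, right):
            w = (i - left) / span
            calibrated[i] = (1.0 - w) * probs[left] + w * probs[right]
    return calibrated

# Example: 4 frames, 3 expression classes. The low-confidence middle frames
# are pulled toward the confident anchors at either end, and the resulting
# argmax labels would feed the supplemental training pass.
frame_probs = np.array([
    [0.95, 0.03, 0.02],   # anchor frame (high confidence)
    [0.40, 0.35, 0.25],   # low confidence -> interpolated
    [0.30, 0.40, 0.30],   # low confidence -> interpolated
    [0.02, 0.03, 0.95],   # anchor frame (high confidence)
])
calibrated_labels = calibrate_labels(frame_probs).argmax(axis=1)
print(calibrated_labels)
```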
-
Publication No.: US20190205625A1
Publication Date: 2019-07-04
Application No.: US15856271
Filing Date: 2017-12-28
Applicant: Adobe Inc.
Inventors: Yu Luo, Xin Lu, Jen-Chan Jeff Chien
Abstract: Techniques are disclosed for facial expression classification. In an embodiment, a multi-class classifier is trained using labelled training images, each training image including a facial expression. The trained classifier is then used to predict expressions for unlabelled video frames, whereby each frame is effectively labelled with a predicted expression. In addition, each predicted expression can be associated with a confidence score. Anchor frames can then be identified in the labelled video frames based on the confidence scores of those frames (anchor frames are frames having a confidence score above an established threshold). Then, for each labelled video frame between two anchor frames, the predicted expression is refined or otherwise updated using interpolation, thereby providing a set of video frames having calibrated expression labels. These calibrated labelled video frames can then be used to further train the previously trained facial expression classifier, thereby providing a supplementally trained facial expression classifier.
-