-
Publication Number: US20220270636A1
Publication Date: 2022-08-25
Application Number: US17472511
Filing Date: 2021-09-10
Inventor: Jianhua TAO, Zheng LIAN, Bin LIU, Xuefei LIU
Abstract: Disclosed is a dialogue emotion correction method based on a graph neural network, including: extracting acoustic features, text features, and image features from a video file and fusing them into multi-modal features; obtaining an emotion prediction result for each sentence of a dialogue in the video file by using the multi-modal features; fusing the emotion prediction result of each sentence with interaction information between speakers in the video file to obtain interaction-information-fused emotion features; combining the interaction-information-fused emotion features with context-dependence relationships in the dialogue to obtain time-series-information-fused emotion features; and correcting, by using the time-series-information-fused emotion features, the previously obtained emotion prediction result of each sentence so as to obtain a more accurate emotion recognition result.
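The correction step described in the abstract can be illustrated in miniature. The sketch below is a hypothetical simplification, not the patented method: the graph neural network over speaker interactions and dialogue context is replaced by a plain weighted average of each sentence's emotion distribution with its immediate neighbours, and the function name `correct_emotions` and the `context_weight` parameter are illustrative placeholders.

```python
# Hypothetical simplification of context-based emotion correction:
# blend each sentence's predicted emotion distribution with those of
# its neighbouring utterances, then renormalise.

def correct_emotions(per_sentence_probs, context_weight=0.3):
    """Blend each sentence's emotion distribution with its dialogue context."""
    corrected = []
    n = len(per_sentence_probs)
    for i, probs in enumerate(per_sentence_probs):
        # Gather the immediate context (previous and next utterance).
        neighbours = [per_sentence_probs[j] for j in (i - 1, i + 1) if 0 <= j < n]
        if neighbours:
            ctx = [sum(col) / len(neighbours) for col in zip(*neighbours)]
            blended = [(1 - context_weight) * p + context_weight * c
                       for p, c in zip(probs, ctx)]
        else:
            blended = list(probs)
        # Renormalise so the result stays a probability distribution.
        total = sum(blended)
        corrected.append([b / total for b in blended])
    return corrected
```

In this toy version, an utterance whose prediction disagrees with both of its neighbours is pulled toward the context, which is the intuition behind the correction stage, though the patent realises it with learned graph and time-series features rather than a fixed average.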
-
Publication Number: US20220265184A1
Publication Date: 2022-08-25
Application Number: US17472191
Filing Date: 2021-09-10
Inventor: Jianhua TAO, Cong CAI, Bin LIU, Mingyue NIU
IPC: A61B5/16, G06K9/00, G06K9/62, G06T7/00, G10L25/30, G10L25/57, G10L25/63, G10L25/66, G06N3/08, A61B5/00
Abstract: Disclosed is an automatic depression detection method using audio-video, including: acquiring original data containing two modalities, a long-term audio file and a long-term video file, from an audio-video file; dividing the long-term audio file into several audio segments and dividing the long-term video file into a plurality of video segments; inputting each audio segment and each video segment into an audio feature extraction network and a video feature extraction network, respectively, to obtain in-depth audio features and in-depth video features; processing the in-depth audio features and the in-depth video features with a multi-head attention mechanism so as to obtain attention audio features and attention video features; aggregating the attention audio features and the attention video features into audio-video features; and inputting the audio-video features into a decision network to predict a depression level of the individual in the audio-video file.
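The attention and aggregation stages in the abstract can be sketched with NumPy. This is a minimal illustration under stated assumptions, not the patented network: random projection matrices stand in for trained weights, the per-segment feature arrays would in practice come from the audio and video feature extraction networks, and mean-pooling plus concatenation is one plausible reading of "aggregating" the two modalities.

```python
import numpy as np

def multi_head_attention(x, num_heads=4, seed=0):
    """Scaled dot-product self-attention over segment features.

    x: (num_segments, dim) array of per-segment features.
    The projection weights here are random placeholders for trained ones.
    """
    n, dim = x.shape
    assert dim % num_heads == 0
    head_dim = dim // num_heads
    rng = np.random.default_rng(seed)
    w_q, w_k, w_v = (rng.standard_normal((dim, dim)) / np.sqrt(dim)
                     for _ in range(3))

    q, k, v = x @ w_q, x @ w_k, x @ w_v

    def split(t):
        # Split into heads: (num_heads, num_segments, head_dim)
        return t.reshape(n, num_heads, head_dim).transpose(1, 0, 2)

    q, k, v = split(q), split(k), split(v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(head_dim)
    # Numerically stable softmax over the last axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ v                              # (num_heads, n, head_dim)
    return out.transpose(1, 0, 2).reshape(n, dim)  # merge heads back

def aggregate(audio_feats, video_feats):
    """Attend within each modality, mean-pool over segments, concatenate."""
    a = multi_head_attention(audio_feats).mean(axis=0)
    v = multi_head_attention(video_feats).mean(axis=0)
    return np.concatenate([a, v])
```

The concatenated vector returned by `aggregate` corresponds to the "audio-video features" that the abstract feeds into the decision network.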
-
Publication Number: US20220269881A1
Publication Date: 2022-08-25
Application Number: US17471384
Filing Date: 2021-09-10
Inventor: Jianhua TAO, Hao ZHANG, Bin LIU, Wenxiang SHE
Abstract: Disclosed is a micro-expression recognition method based on a multi-scale spatiotemporal feature neural network, in which spatial features and temporal features of a micro-expression are extracted from micro-expression video frames and combined to form more robust micro-expression features. At the same time, since a micro-expression occurs in local areas of the face, the active local areas of the face during occurrence of the micro-expression and the overall area of the face are combined for micro-expression recognition.
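The combination of whole-face and local-area signals at several temporal scales can be sketched as follows. This is a toy stand-in, not the patented network: the region boxes are hand-given placeholders for detected local face areas (e.g. eyes, mouth), per-frame mean intensity stands in for learned spatial features, and temporal mean-pooling at a few window sizes stands in for the network's multi-scale temporal features.

```python
import numpy as np

def region_features(frames, box):
    """Mean intensity per frame inside one face region (toy spatial feature)."""
    y0, y1, x0, x1 = box
    return frames[:, y0:y1, x0:x1].mean(axis=(1, 2))   # shape (T,)

def spatiotemporal_features(frames, local_boxes, scales=(2, 4)):
    """Combine whole-face and local-region signals at several temporal scales.

    frames: (T, H, W) grayscale clip; local_boxes: (y0, y1, x0, x1) tuples,
    hypothetical stand-ins for detected active local areas of the face.
    """
    h, w = frames.shape[1:]
    signals = [region_features(frames, (0, h, 0, w))]             # whole face
    signals += [region_features(frames, b) for b in local_boxes]  # local areas
    feats = []
    for sig in signals:
        for s in scales:
            # Pooling over windows of s frames captures dynamics at scale s.
            trimmed = sig[: len(sig) // s * s].reshape(-1, s)
            feats.append(trimmed.mean(axis=1))
    return np.concatenate(feats)
```

Concatenating the whole-face and local-area signals mirrors the abstract's point that local facial areas and the overall face are used together for recognition; a real system would feed such a multi-scale feature vector into a classifier.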
-