-
1.
Publication Number: US20240081705A1
Publication Date: 2024-03-14
Application Number: US18511919
Application Date: 2023-11-16
Inventors: Liang ZHAO, Sannyuya LIU, Zongkai YANG, Xiaoliang ZHU, Jianwen SUN, Qing LI, Zhicheng DAI
IPC Classification: A61B5/16, A61B5/00, A61B5/0205, A61B5/1171, G06N3/0464, G06N3/08, G06V10/30, G06V40/16
CPC Classification: A61B5/16, A61B5/0205, A61B5/1176, A61B5/725, A61B5/726, A61B5/7264, G06N3/0464, G06N3/08, G06V10/30, G06V40/161, A61B5/02427
Abstract: The present disclosure provides a non-contact fatigue detection system and method based on rPPG. The system and method adopt multi-thread synchronous communication for real-time acquisition and processing of the rPPG signal, enabling fatigue status detection. In this setup, the first thread handles real-time rPPG data capture, storage, and concatenation, while the second thread conducts real-time analysis and fatigue detection on the rPPG data. Through a combination of skin detection and LUV color space conversion, raw rPPG signal extraction is achieved, effectively eliminating interference from facial noise caused by internal and external environmental factors. Subsequently, an adaptive multi-stage filtering process enhances the signal-to-noise ratio, and a multi-dimensional fusion CNN model ensures accurate detection of respiration and heart rate. The final step involves multi-channel data fusion of the respiration and heartbeat signals, which not only learns person-independent features for fatigue detection but also detects early fatigue with very high accuracy.
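A rough Python sketch of the two-thread setup described above is given below: the first thread stands in for real-time capture by pushing synthetic frame-level rPPG samples into a queue, and the second thread band-pass filters each full window and estimates heart rate from the FFT peak. The frame rate, filter band, and signal model are illustrative assumptions; skin detection, LUV conversion, and the fusion CNN are omitted.

```python
# Illustrative two-thread pipeline: thread 1 captures and concatenates
# frame-level rPPG samples; thread 2 filters each window and estimates
# heart rate. All names and parameters are assumptions for illustration.
import queue
import threading

import numpy as np
from scipy.signal import butter, filtfilt

FPS = 30          # assumed camera frame rate
WINDOW_SEC = 10   # assumed analysis window length


def capture_thread(q: queue.Queue, n_frames: int) -> None:
    """Stand-in for real-time capture: pushes one raw rPPG sample per frame."""
    t = np.arange(n_frames) / FPS
    pulse = 0.02 * np.sin(2 * np.pi * 1.2 * t)   # ~72 bpm component
    noise = 0.01 * np.random.randn(n_frames)
    for sample in pulse + noise:
        q.put(sample)                             # capture + storage + concatenation
    q.put(None)                                   # end-of-stream marker


def analysis_thread(q: queue.Queue) -> None:
    """Band-pass filters each full window and reads heart rate off the FFT peak."""
    buf = []
    win = WINDOW_SEC * FPS
    while True:
        sample = q.get()
        if sample is None:
            break
        buf.append(sample)
        if len(buf) >= win and len(buf) % win == 0:
            window = np.asarray(buf[-win:])
            # fixed band-pass as a stand-in for adaptive multi-stage filtering
            b, a = butter(3, [0.7 / (FPS / 2), 3.0 / (FPS / 2)], btype="band")
            clean = filtfilt(b, a, window)
            freqs = np.fft.rfftfreq(clean.size, d=1 / FPS)
            spectrum = np.abs(np.fft.rfft(clean))
            print(f"estimated heart rate: {60 * freqs[np.argmax(spectrum)]:.1f} bpm")


if __name__ == "__main__":
    q: queue.Queue = queue.Queue()
    producer = threading.Thread(target=capture_thread, args=(q, 20 * FPS))
    consumer = threading.Thread(target=analysis_thread, args=(q,))
    producer.start(); consumer.start()
    producer.join(); consumer.join()
```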
-
2.
Publication Number: US20230334862A1
Publication Date: 2023-10-19
Application Number: US18011847
Application Date: 2021-09-07
Inventors: Sannyuya LIU, Zengzhao CHEN, Zhicheng DAI, Shengming WANG, Xiuling HE, Baolin YI
CPC Classification: G06V20/44, G10L25/78, G10L25/57, G06V20/49, G06V20/41, G06V10/774, G06V40/20, G06Q50/205
Abstract: The present invention discloses a construction method and system for a descriptive model of classroom teaching behavior events. The construction method includes the following steps: acquiring classroom teaching video data to be trained; dividing the classroom teaching video data into multiple events according to the teacher's utterances by using voice activity detection technology; performing multi-modal recognition on all events by using multiple artificial intelligence technologies to divide the events into sub-events in multiple dimensions; establishing an event descriptive model according to the sub-events; and describing the teacher's various teaching behavior events in the classroom. The present invention divides a classroom video according to voice, which ensures the completeness of the teacher's non-verbal behavior in each event to the greatest extent. In addition, a descriptive model that uniformly describes all events is established by extracting the commonality between different events, which not only completes the description of the teacher's various teaching behaviors but also reflects the correlation between events, so that the events are no longer isolated.
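A minimal sketch of the segmentation step, assuming a simple short-time-energy voice activity detector in place of the VAD technology named in the abstract: silence boundaries close one teaching event and open the next, and each event carries a dictionary of sub-events to be filled in by downstream multi-modal recognizers. The TeachingEvent fields, sample rate, and threshold are illustrative assumptions.

```python
# Sketch: an energy-threshold VAD splits a class recording into
# utterance-bounded events, each wrapped in a uniform descriptive record.
# Fields and thresholds are assumptions, not the patented model.
from dataclasses import dataclass, field

import numpy as np

SR = 16000      # assumed audio sample rate
FRAME = 1024    # analysis frame length in samples


@dataclass
class TeachingEvent:
    start_s: float
    end_s: float
    # sub-events in multiple dimensions (speech, gesture, gaze, ...),
    # filled by downstream multi-modal recognizers
    sub_events: dict = field(default_factory=dict)


def segment_by_voice_activity(audio: np.ndarray, threshold: float = 0.01):
    """Returns utterance-bounded events based on short-time energy."""
    n_frames = len(audio) // FRAME
    energy = np.array([np.mean(audio[i * FRAME:(i + 1) * FRAME] ** 2)
                       for i in range(n_frames)])
    active = energy > threshold
    events, start = [], None
    for i, is_speech in enumerate(active):
        if is_speech and start is None:
            start = i
        elif not is_speech and start is not None:
            events.append(TeachingEvent(start * FRAME / SR, i * FRAME / SR))
            start = None
    if start is not None:
        events.append(TeachingEvent(start * FRAME / SR, n_frames * FRAME / SR))
    return events


if __name__ == "__main__":
    t = np.arange(10 * SR) / SR
    # synthetic recording: one utterance between seconds 2 and 5
    audio = np.where((t > 2) & (t < 5), 0.3 * np.sin(2 * np.pi * 220 * t), 0.0)
    for ev in segment_by_voice_activity(audio):
        ev.sub_events["speech"] = "teacher utterance"   # placeholder dimension
        print(ev)
```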
-
3.
Publication Number: US20240000345A1
Publication Date: 2024-01-04
Application Number: US18038213
Application Date: 2021-04-22
Inventors: Zongkai YANG, Sannyuya LIU, Liang ZHAO, Zhicheng DAI, Jianwen SUN, Qing LI
Abstract: Disclosed are a millimeter-wave (mmWave) radar-based non-contact identity recognition method and system. The method comprises: emitting an mmWave radar signal toward a user to be recognized and receiving the echo signal reflected from the user; performing clutter suppression and echo selection on the echo signal and extracting a heartbeat signal; segmenting the heartbeat signal beat by beat and determining the corresponding beat features; and comparing the beat features of the user with the beat feature sets of a standard user group: if the beat features of the user match one of the beat feature sets in the standard user group, the identity recognition succeeds; otherwise, it fails. In this method, the use of a heartbeat signal for identity recognition offers high reliability, and the use of mmWave radar technology for non-contact identity recognition offers high flexibility and accuracy.
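A minimal sketch of the comparison stage, assuming per-beat feature vectors have already been extracted from the radar echo: the probe user's mean beat feature is matched against each enrolled user's feature set, and recognition succeeds only if the nearest match falls under a distance threshold. The nearest-mean rule and the threshold value are illustrative, not the patent's matching criterion.

```python
# Sketch of the matching stage only: per-beat feature vectors for the probe
# user are compared to enrolled feature sets with a distance threshold.
# Feature extraction from the radar echo is omitted; the vectors and the
# threshold below are illustrative assumptions.
import numpy as np


def match_identity(probe_beats: np.ndarray,
                   enrolled: dict,
                   threshold: float = 0.5):
    """probe_beats: (n_beats, n_features); enrolled: user -> (m_beats, n_features)."""
    probe_mean = probe_beats.mean(axis=0)
    best_user, best_dist = None, np.inf
    for user, beats in enrolled.items():
        dist = np.linalg.norm(probe_mean - beats.mean(axis=0))
        if dist < best_dist:
            best_user, best_dist = user, dist
    # recognition succeeds only if the closest enrolled user is close enough
    return best_user if best_dist < threshold else None


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    enrolled = {"alice": rng.normal(0.0, 0.05, (20, 8)),
                "bob": rng.normal(1.0, 0.05, (20, 8))}
    probe = rng.normal(1.0, 0.05, (10, 8))      # beats from an unknown user
    print(match_identity(probe, enrolled))      # expected: "bob"
```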
-
4.
Publication Number: US20200098284A1
Publication Date: 2020-03-26
Application Number: US16697205
Application Date: 2019-11-27
Inventors: Zongkai YANG, Jingying CHEN, Sannyuya LIU, Ruyi XU, Kun ZHANG, Leyuan LIU, Shixin PENG, Zhicheng DAI
Abstract: The invention provides a classroom cognitive load detection system belonging to the field of education informatization, which includes the following. A task completion feature collecting module records a student's answer response time and correct answer rate when completing a task. A cognitive load self-assessment collecting module quantifies and analyzes mental effort and subjective task difficulty with a rating scale. An expression and attention feature collecting module collects a video of the student's classroom performance, obtains the face region through face detection, and counts the student's smiling duration and watching duration from the video analysis result. A feature fusion module fuses the aforesaid six indexes into a characteristic vector. A cognitive load determining module inputs the characteristic vector into a classifier to identify the student's classroom cognitive load level.
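A minimal sketch of the fusion and classification stage, assuming the six indexes are already normalized to [0, 1]: the feature fusion module concatenates them into a single characteristic vector, and a generic classifier (logistic regression trained on synthetic data, purely as a placeholder) outputs the cognitive load level. None of the feature names or the classifier choice come from the patent itself.

```python
# Illustrative fusion + classification step: the six indexes named in the
# abstract (assumed pre-normalized to [0, 1]) are concatenated into one
# feature vector and fed to a generic classifier trained on synthetic data.
# Logistic regression is a placeholder, not the patented classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["response_time", "correct_rate", "mental_effort",
            "subjective_difficulty", "smile_duration", "watching_duration"]


def fuse(indexes: dict) -> np.ndarray:
    """Feature fusion module: fixed ordering of the six indexes."""
    return np.array([[indexes[name] for name in FEATURES]])


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, (200, len(FEATURES)))            # simulated students
    y = (X[:, 0] + X[:, 3] - X[:, 1] > 0.8).astype(int)    # synthetic "high load" label
    clf = LogisticRegression().fit(X, y)

    student = {"response_time": 0.9, "correct_rate": 0.3, "mental_effort": 0.8,
               "subjective_difficulty": 0.9, "smile_duration": 0.1,
               "watching_duration": 0.4}
    print("cognitive load level:", clf.predict(fuse(student))[0])
```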
-
5.
Publication Number: US20240023884A1
Publication Date: 2024-01-25
Application Number: US18038989
Application Date: 2021-06-23
Inventors: Sannyuya LIU, Zongkai YANG, Liang ZHAO, Xiaoliang ZHU, Zhicheng DAI
IPC Classification: A61B5/00, A61B5/05, G06V40/16, A61B5/0205
CPC Classification: A61B5/48, A61B5/05, G06V40/161, A61B5/7267, A61B5/0205, A61B5/024
Abstract: Disclosed are a non-contact fatigue detection method and system. The method comprises: sending a millimeter-wave (mmWave) radar signal to a person being detected, receiving the echo signal reflected from the person, and determining the time-frequency domain, non-linear, and time-series features of a vital sign signal; acquiring a facial video image of the person and performing facial detection and alignment on the basis of the facial video image to extract the time domain and spatial domain features of the person's face; fusing the determined vital sign signal features with the time domain and spatial domain features of the person's face to obtain a fused feature; and determining from the fused feature whether the person is in a fatigued state. By fusing the two detection techniques, the method effectively suppresses the interference of subjective and objective factors and improves the accuracy of fatigue detection.
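A minimal sketch of the feature-level fusion, assuming the radar-derived vital-sign features and the facial spatio-temporal features are already available as fixed-length vectors: each modality passes through its own small branch, the branch outputs are concatenated, and a classification head produces fatigued / not-fatigued logits. The layer sizes and two-branch design are illustrative, not the patented network.

```python
# Minimal fusion sketch: one branch for radar vital-sign features
# (time-frequency, non-linear, time-series), one for facial spatio-temporal
# features, concatenated and classified as fatigued / not fatigued.
# Dimensions and the two-layer head are assumptions for illustration.
import torch
import torch.nn as nn


class FusionFatigueNet(nn.Module):
    def __init__(self, radar_dim: int = 32, face_dim: int = 64):
        super().__init__()
        self.radar_branch = nn.Sequential(nn.Linear(radar_dim, 16), nn.ReLU())
        self.face_branch = nn.Sequential(nn.Linear(face_dim, 16), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 2))

    def forward(self, radar_feat: torch.Tensor, face_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.radar_branch(radar_feat),
                           self.face_branch(face_feat)], dim=1)   # feature fusion
        return self.head(fused)                                   # fatigue logits


if __name__ == "__main__":
    model = FusionFatigueNet()
    radar = torch.randn(4, 32)   # batch of radar-derived feature vectors
    face = torch.randn(4, 64)    # batch of facial spatio-temporal features
    logits = model(radar, face)
    print("fatigued?", logits.argmax(dim=1).tolist())
```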
-
6.
Publication Number: US20230298382A1
Publication Date: 2023-09-21
Application Number: US18322517
Application Date: 2023-05-23
Inventors: Sannyuya LIU, Zongkai YANG, Xiaoliang ZHU, Zhicheng DAI, Liang ZHAO
CPC Classification: G06V40/165, G06V40/174, G06V10/82, G06V40/171, G06V10/7715, G06V10/62, G06V20/41, G06V10/247, G06V10/806, G06V10/774
Abstract: Provided are a facial expression recognition method and system combined with an attention mechanism. The method comprises: detecting the faces contained in each video frame of a video sequence and extracting the corresponding facial ROIs, so as to obtain the facial pictures in each video frame; aligning the facial pictures in each video frame on the basis of the location information of their facial feature points; inputting the aligned facial pictures into a residual neural network and extracting the spatial features of the corresponding facial expressions; inputting the spatial features of the facial expressions into a hybrid attention module to acquire the fused features of the facial expressions; inputting the fused features of the facial expressions into a gated recurrent unit and extracting the temporal features of the facial expressions; and inputting the temporal features of the facial expressions into a fully connected layer to classify and recognize the facial expressions.
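A minimal sketch of the pipeline's shape: a tiny convolutional backbone stands in for the residual neural network, a single linear layer producing frame-wise attention weights stands in for the hybrid attention module, a GRU extracts temporal features, and a fully connected layer classifies. All dimensions and the attention form are illustrative assumptions, not the patented architecture.

```python
# Sketch of the recognition pipeline: per-frame spatial features, a simple
# attention-weighted fusion over frames, a GRU for temporal features, and a
# fully connected classifier. The conv backbone stands in for the residual
# network; all dimensions are illustrative.
import torch
import torch.nn as nn


class ExpressionNet(nn.Module):
    def __init__(self, n_classes: int = 7, feat_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(                       # stand-in for ResNet
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim))
        self.attn = nn.Linear(feat_dim, 1)                   # frame-wise attention scores
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.fc = nn.Linear(feat_dim, n_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W) of aligned facial crops
        b, t = frames.shape[:2]
        spatial = self.backbone(frames.flatten(0, 1)).view(b, t, -1)
        weights = torch.softmax(self.attn(spatial), dim=1)   # attention over frames
        fused = spatial * weights                            # attention-weighted features
        temporal, _ = self.gru(fused)
        return self.fc(temporal[:, -1])                      # expression logits


if __name__ == "__main__":
    clip = torch.randn(2, 8, 3, 64, 64)                      # 2 clips, 8 frames each
    print(ExpressionNet()(clip).shape)                       # torch.Size([2, 7])
```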
-