-
Publication No.: US20230018693A1
Publication Date: 2023-01-19
Application No.: US17453634
Filing Date: 2021-11-04
Applicant: Tata Consultancy Services Limited
Inventor: Sushovan Chanda, Gauri Deshpande, Sachin Patel
Abstract: State of the art techniques attempt to extract insights from eye features, specifically the pupil, with a focus on behavioral analysis rather than on confidence level detection. Embodiments of the present disclosure provide a method and system for confidence level detection from eye features using a machine learning (ML) based approach. The method enables generating an overall confidence level label based on the subject's performance during an interaction, wherein the interaction being analyzed is captured as a video sequence focusing on the face of the subject. For each frame, facial features comprising an Eye-Aspect Ratio, a mouth movement, Horizontal Displacements (HDs), Vertical Displacements (VDs), Horizontal Squeezes (HSs) and Vertical Peaks (VPs) are computed, wherein the HDs, VDs, HSs and VPs are derived from points on the eyebrow with reference to the nose tip of the detected face. This is repeated for all frames in the window. A Bi-LSTM model is trained using the facial features to derive the confidence level of the subject.
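A minimal sketch of the per-frame feature computation described in the abstract is given below. It assumes 68-point facial landmarks (e.g., from a dlib-style detector) are already available for each frame; the landmark indices, the mouth-movement proxy, and the exact definitions of Horizontal Squeeze and Vertical Peak are assumptions of the sketch, not taken from the patent.
```python
# Per-frame facial feature sketch; landmark indices follow the common
# iBUG 68-point convention and are an assumption, not the patent's definition.
import numpy as np

LEFT_EYE = list(range(36, 42))
RIGHT_EYE = list(range(42, 48))
EYEBROWS = list(range(17, 27))
NOSE_TIP = 30

def eye_aspect_ratio(eye):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    c = np.linalg.norm(eye[0] - eye[3])
    return (a + b) / (2.0 * c)

def frame_features(landmarks):
    """Compute one feature vector from a (68, 2) array of landmarks."""
    ear = (eye_aspect_ratio(landmarks[LEFT_EYE]) +
           eye_aspect_ratio(landmarks[RIGHT_EYE])) / 2.0
    # Mouth movement proxy: vertical opening of the outer lip contour (assumed).
    mouth = np.linalg.norm(landmarks[51] - landmarks[57])
    # Eyebrow points measured relative to the nose tip.
    rel = landmarks[EYEBROWS] - landmarks[NOSE_TIP]
    hd = rel[:, 0]                 # horizontal displacements (HDs)
    vd = rel[:, 1]                 # vertical displacements (VDs)
    hs = hd.max() - hd.min()       # horizontal squeeze (assumed definition)
    vp = np.abs(vd).max()          # vertical peak (assumed definition)
    return np.concatenate([[ear, mouth, hs, vp], hd, vd])
```
Repeating this over every frame in a window yields the sequence of feature vectors that the abstract describes feeding to the Bi-LSTM.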
-
Publication No.: US12002291B2
Publication Date: 2024-06-04
Application No.: US17453634
Filing Date: 2021-11-04
Applicant: Tata Consultancy Services Limited
Inventor: Sushovan Chanda, Gauri Deshpande, Sachin Patel
CPC classification number: G06V40/20, G06F18/214, G06N3/08, G06T7/73, G06V10/95, G06V20/46, G06V40/171, G06T2207/10016, G06T2207/20081, G06T2207/20084, G06T2207/30201
Abstract: State of the art techniques attempt to extract insights from eye features, specifically the pupil, with a focus on behavioral analysis rather than on confidence level detection. Embodiments of the present disclosure provide a method and system for confidence level detection from eye features using a machine learning (ML) based approach. The method enables generating an overall confidence level label based on the subject's performance during an interaction, wherein the interaction being analyzed is captured as a video sequence focusing on the face of the subject. For each frame, facial features comprising an Eye-Aspect Ratio, a mouth movement, Horizontal Displacements (HDs), Vertical Displacements (VDs), Horizontal Squeezes (HSs) and Vertical Peaks (VPs) are computed, wherein the HDs, VDs, HSs and VPs are derived from points on the eyebrow with reference to the nose tip of the detected face. This is repeated for all frames in the window. A Bi-LSTM model is trained using the facial features to derive the confidence level of the subject.
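The Bi-LSTM classification step can be sketched as below. The layer sizes, window length, feature dimension and number of confidence classes are illustrative assumptions; the abstract only states that a Bi-LSTM is trained on the per-frame facial features.
```python
# Minimal Bi-LSTM sketch (Keras) mapping a window of per-frame feature
# vectors to a confidence-level label; all hyperparameters are assumptions.
from tensorflow.keras import layers, models

WINDOW_LEN = 150    # assumed: frames per analysis window
FEATURE_DIM = 24    # assumed: size of the per-frame feature vector
NUM_CLASSES = 3     # assumed: e.g. low / medium / high confidence

model = models.Sequential([
    layers.Input(shape=(WINDOW_LEN, FEATURE_DIM)),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(32, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: (num_windows, WINDOW_LEN, FEATURE_DIM) stacked frame features,
# y: (num_windows,) integer confidence labels from annotated interactions.
# model.fit(X, y, epochs=20, batch_size=16, validation_split=0.1)
```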
-
Publication No.: US11996118B2
Publication Date: 2024-05-28
Application No.: US17504556
Filing Date: 2021-10-19
Applicant: Tata Consultancy Services Limited
Inventor: Ramesh Kumar Ramakrishnan, Venkata Subramanian Viraraghavan, Rahul Dasharath Gavas, Sachin Patel, Gauri Deshpande
Abstract: An important task in several wellness applications is the detection of emotional valence from speech. Two types of features of speech signals are used to detect valence: acoustic features and text features. Acoustic features are derived from short frames of speech, while text features are derived from the text transcription. The present disclosure provides systems and methods that determine the effect of text on acoustic features. Acoustic features of speech segments carrying emotion words are treated differently from those of segments that do not carry such words. Only specific speech segments of the input speech signal, selected using a language-specific dictionary, are considered to assess emotional valence. A model (or trained classifier) trained for a specific language, either by including the acoustic features of the emotion-related words or by omitting them, is used by the system to determine emotional valence in an input speech signal.
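The dictionary-based segment selection can be illustrated with the sketch below. It assumes a time-aligned transcript and a language-specific emotion-word dictionary are given as inputs, and uses mean-pooled MFCCs as a stand-in acoustic representation; none of these specifics are defined by the patent abstract.
```python
# Sketch: compute acoustic features only over speech segments whose transcript
# tokens are (or are not) in a language-specific emotion dictionary.
import numpy as np
import librosa

def segment_features(audio, sr, words, emotion_dict, include_emotion_words=True):
    """
    audio: 1-D waveform, sr: sample rate.
    words: list of (token, start_sec, end_sec) from a time-aligned transcript.
    emotion_dict: set of emotion words for the target language.
    Returns one pooled MFCC vector over the selected segments, or None.
    """
    selected = []
    for token, start, end in words:
        is_emotion = token.lower() in emotion_dict
        if is_emotion == include_emotion_words:
            selected.append(audio[int(start * sr):int(end * sr)])
    if not selected:
        return None
    segment = np.concatenate(selected)
    mfcc = librosa.feature.mfcc(y=segment, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)   # simple mean-pooled acoustic representation
```
A valence classifier for a given language could then be trained on features produced either with include_emotion_words=True or False, mirroring the include-or-omit choice described in the abstract.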
-