Real-time emotion tracking system
    12.
    Invention Grant
    Real-time emotion tracking system (In force)

    Publication Number: US09355650B2

    Publication Date: 2016-05-31

    Application Number: US14703107

    Filing Date: 2015-05-04

    CPC classification number: G10L25/63 G10L17/04 G10L17/26 G10L25/48

    Abstract: Devices, systems, methods, media, and programs for detecting an emotional state change in an audio signal are provided. A plurality of segments of the audio signal is received, with the plurality of segments being sequential. Each segment of the plurality of segments is analyzed, and, for each segment, an emotional state and a confidence score of the emotional state are determined. The emotional state and the confidence score of each segment are sequentially analyzed, and a current emotional state of the audio signal is tracked throughout each of the plurality of segments. For each segment, it is determined whether the current emotional state of the audio signal changes to another emotional state based on the emotional state and the confidence score of the segment.

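    As a rough illustration of the tracking logic the abstract describes, the following Python sketch keeps a current emotional state across sequential segments and changes it only when a segment reports a different emotion with sufficient confidence. The SegmentResult type, the track_emotion_changes name, and the fixed confidence threshold are illustrative assumptions, not details taken from the patent.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class SegmentResult:
        emotion: str        # emotional state determined for this segment
        confidence: float   # confidence score of that emotional state

    def track_emotion_changes(segments: List[SegmentResult],
                              switch_threshold: float = 0.7) -> List[str]:
        """Track the current emotional state across sequential segments.

        The state changes only when a segment reports a different emotion
        with confidence at or above switch_threshold (this threshold rule is
        an assumption; the abstract does not specify the decision rule).
        """
        current: Optional[str] = None
        history: List[str] = []
        for seg in segments:
            if current is None:
                current = seg.emotion
            elif seg.emotion != current and seg.confidence >= switch_threshold:
                current = seg.emotion  # emotional state change detected here
            history.append(current)
        return history

    # Example: the state flips to "angry" only once a high-confidence segment arrives.
    print(track_emotion_changes([
        SegmentResult("neutral", 0.90),
        SegmentResult("angry", 0.40),   # low confidence: state stays "neutral"
        SegmentResult("angry", 0.85),   # high confidence: state changes to "angry"
    ]))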

    Real-time emotion tracking system
    13.
    Invention Grant
    Real-time emotion tracking system (In force)

    Publication Number: US09047871B2

    Publication Date: 2015-06-02

    Application Number: US13712288

    Filing Date: 2012-12-12

    CPC classification number: G10L25/63 G10L17/04 G10L17/26 G10L25/48

    Abstract: Devices, systems, methods, media, and programs for detecting an emotional state change in an audio signal are provided. A plurality of segments of the audio signal is received, with the plurality of segments being sequential. Each segment of the plurality of segments is analyzed, and, for each segment, an emotional state and a confidence score of the emotional state are determined. The emotional state and the confidence score of each segment are sequentially analyzed, and a current emotional state of the audio signal is tracked throughout each of the plurality of segments. For each segment, it is determined whether the current emotional state of the audio signal changes to another emotional state based on the emotional state and the confidence score of the segment.


    System and Method for Enhancing Voice-Enabled Search Based on Automated Demographic Identification
    14.
    Invention Application
    System and Method for Enhancing Voice-Enabled Search Based on Automated Demographic Identification (In force)

    Publication Number: US20130218561A1

    Publication Date: 2013-08-22

    Application Number: US13847173

    Filing Date: 2013-03-19

    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for approximating responses to a user speech query in voice-enabled search based on metadata that include demographic features of the speaker. A system practicing the method recognizes received speech from a speaker to generate recognized speech, identifies metadata about the speaker from the received speech, and feeds the recognized speech and the metadata to a question-answering engine. Identifying the metadata about the speaker is based on voice characteristics of the received speech. The demographic features can include age, gender, socio-economic group, nationality, and/or region. The metadata identified about the speaker from the received speech can be combined with or override self-reported speaker demographic information.

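    The Python sketch below is a minimal rendering of the pipeline the abstract outlines: recognize the speech, infer demographic metadata from the same audio, and feed both to a question-answering engine, with inferred metadata overriding self-reported values. The component callables (recognize, identify_demographics, answer) and the dictionary-merge override are placeholders chosen for illustration, not APIs from the patent.

    from typing import Callable, Dict

    def answer_voice_query(audio: bytes,
                           self_reported: Dict[str, str],
                           recognize: Callable[[bytes], str],
                           identify_demographics: Callable[[bytes], Dict[str, str]],
                           answer: Callable[[str, Dict[str, str]], str]) -> str:
        """Recognize speech, infer speaker demographics from voice
        characteristics, and feed both to a question-answering engine."""
        text = recognize(audio)                  # recognized speech
        inferred = identify_demographics(audio)  # e.g. age, gender, region
        # Inferred metadata is combined with, and overrides, self-reported values.
        metadata = {**self_reported, **inferred}
        return answer(text, metadata)

    # Usage with stub components standing in for real ASR / classifier / QA systems.
    reply = answer_voice_query(
        b"...",
        self_reported={"region": "unknown"},
        recognize=lambda a: "best pizza near me",
        identify_demographics=lambda a: {"age_group": "25-34", "region": "northeast"},
        answer=lambda q, m: f"Answering '{q}' with demographic hints {m}",
    )
    print(reply)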

    System and method for data-driven socially customized models for language generation
    19.
    Invention Grant
    System and method for data-driven socially customized models for language generation (In force)

    Publication Number: US09412358B2

    Publication Date: 2016-08-09

    Application Number: US14275938

    Filing Date: 2014-05-13

    Abstract: Systems, methods, and computer-readable storage devices for generating speech using a presentation style specific to a user, and in particular the user's social group. Systems configured according to this disclosure can then use the resulting, personalized, text and/or speech in a spoken dialogue or presentation system to communicate with the user. For example, a system practicing the disclosed method can receive speech from a user, identify the user, and respond to the received speech by applying a personalized natural language generation model. The personalized natural language generation model provides communications which can be specific to the identified user.

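    As a loose illustration of the flow described above, the sketch below identifies the user's social group and selects a natural language generation model keyed to that group, falling back to a default model otherwise. Keying models by group name and the function signatures are assumptions made for this example, not the patent's implementation.

    from typing import Callable, Dict

    def respond(user_id: str,
                recognized_text: str,
                user_groups: Dict[str, str],
                group_models: Dict[str, Callable[[str], str]],
                default_model: Callable[[str], str]) -> str:
        """Generate a reply with the NLG model associated with the identified
        user's social group (the selection scheme here is illustrative)."""
        group = user_groups.get(user_id)
        generate = group_models.get(group, default_model)
        return generate(recognized_text)

    # Usage with trivial stand-in "models" that only adjust phrasing.
    formal = lambda text: f"Certainly. Regarding '{text}', here is what I found."
    casual = lambda text: f"Sure thing! About '{text}', here's the scoop."

    print(respond(
        user_id="u42",
        recognized_text="weather tomorrow",
        user_groups={"u42": "casual"},
        group_models={"formal": formal, "casual": casual},
        default_model=formal,
    ))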
