DIALOGUE FLOW OPTIMIZATION AND PERSONALIZATION

    Publication No.: US20200007682A1

    Publication Date: 2020-01-02

    Application No.: US16567513

    Filing Date: 2019-09-11

    Abstract: A method for generating a dialogue tree for an automated self-help system of a contact center from a plurality of recorded interactions between customers and agents of the contact center includes: computing, by a processor, a plurality of feature vectors, each feature vector corresponding to one of the recorded interactions; computing, by the processor, similarities between pairs of the feature vectors; grouping, by the processor, similar feature vectors based on the computed similarities into groups of interactions; rating, by the processor, feature vectors within each group of interactions based on one or more criteria, wherein the criteria include at least one of interaction time, success rate, and customer satisfaction; and outputting, by the processor, a dialogue tree in accordance with the rated feature vectors for configuring the automated self-help system.
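
    The abstract describes a multi-step pipeline: featurize recorded interactions, group them by pairwise similarity, rate each group's members, and emit a dialogue tree. The minimal Python sketch below follows that shape, assuming a bag-of-words featurization, a greedy cosine-similarity grouping, and an invented rating formula over success rate, satisfaction, and handling time; none of these specifics come from the patent itself.

```python
# Hypothetical sketch of the claimed flow; featurization, grouping strategy,
# and rating weights are illustrative assumptions, not the patented method.
import numpy as np

def featurize(interactions, vocabulary):
    """Bag-of-words feature vector per interaction (one possible featurization)."""
    index = {word: i for i, word in enumerate(vocabulary)}
    vectors = np.zeros((len(interactions), len(vocabulary)))
    for row, item in enumerate(interactions):
        for word in item["transcript"].lower().split():
            if word in index:
                vectors[row, index[word]] += 1.0
    return vectors

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def group_by_similarity(vectors, threshold=0.5):
    """Greedy grouping: each vector joins the first group whose seed is similar enough."""
    groups, seeds = [], []
    for i, v in enumerate(vectors):
        for g, seed in enumerate(seeds):
            if cosine_similarity(v, seed) >= threshold:
                groups[g].append(i)
                break
        else:
            groups.append([i])
            seeds.append(v)
    return groups

def rate(interaction):
    """Toy rating combining the criteria named in the abstract (weights assumed)."""
    return (interaction["success"] * 2.0
            + interaction["satisfaction"]
            - interaction["duration_min"] / 10.0)

def build_dialogue_tree(interactions, vocabulary):
    vectors = featurize(interactions, vocabulary)
    tree = {"root": []}
    for group in group_by_similarity(vectors):
        best = max(group, key=lambda i: rate(interactions[i]))
        tree["root"].append({"intent": interactions[best]["transcript"],
                             "members": group})
    return tree

if __name__ == "__main__":
    data = [
        {"transcript": "reset my password please", "success": 1, "satisfaction": 4, "duration_min": 3},
        {"transcript": "password reset not working", "success": 0, "satisfaction": 2, "duration_min": 9},
        {"transcript": "upgrade my billing plan", "success": 1, "satisfaction": 5, "duration_min": 5},
    ]
    vocab = ["reset", "password", "billing", "plan", "upgrade", "working"]
    print(build_dialogue_tree(data, vocab))
```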

    Data driven speech enabled self-help systems and methods of operating thereof

    Publication No.: US10515150B2

    Publication Date: 2019-12-24

    Application No.: US14799369

    Filing Date: 2015-07-14

    Abstract: A method for configuring an automated, speech driven self-help system based on prior interactions between a plurality of customers and a plurality of agents includes: recognizing, by a processor, speech in the prior interactions between customers and agents to generate recognized text; detecting, by the processor, a plurality of phrases in the recognized text; clustering, by the processor, the plurality of phrases into a plurality of clusters; generating, by the processor, a plurality of grammars describing corresponding ones of the clusters; outputting, by the processor, the plurality of grammars; and invoking configuration of the automated self-help system based on the plurality of grammars.
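
    A hedged sketch of the configuration flow follows, taking already-recognized transcripts as input (the speech recognition step is assumed to have run upstream). Detecting phrases as word n-grams, clustering them by Jaccard token overlap, and emitting alternation-style rules are all illustrative choices, not the patented grammar-generation method.

```python
# Sketch under assumptions: transcripts in, one alternation-style rule per
# phrase cluster out. Thresholds and the rule syntax are invented for clarity.

def detect_phrases(transcripts, n=3):
    """Candidate phrases as word n-grams (a deliberately simple detector)."""
    phrases = set()
    for text in transcripts:
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            phrases.add(" ".join(words[i:i + n]))
    return sorted(phrases)

def jaccard(a, b):
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def cluster_phrases(phrases, threshold=0.4):
    """Greedy single-link style clustering on token overlap."""
    clusters = []
    for phrase in phrases:
        for cluster in clusters:
            if any(jaccard(phrase, member) >= threshold for member in cluster):
                cluster.append(phrase)
                break
        else:
            clusters.append([phrase])
    return clusters

def generate_grammars(clusters):
    """One rule per cluster, written as a simple alternation."""
    return [f"<rule_{i}> ::= " + " | ".join(cluster)
            for i, cluster in enumerate(clusters)]

if __name__ == "__main__":
    transcripts = [
        "i want to reset my password",
        "please reset my password now",
        "i would like to check my balance",
    ]
    for rule in generate_grammars(cluster_phrases(detect_phrases(transcripts))):
        print(rule)
```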

    Fast out-of-vocabulary search in automatic speech recognition systems

    Publication No.: US10290301B2

    Publication Date: 2019-05-14

    Application No.: US15402070

    Filing Date: 2017-01-09

    Abstract: A method including: receiving, on a computer system, a text search query, the query including one or more query words; generating, on the computer system, for each query word in the query, one or more anchor segments within a plurality of speech recognition processed audio files, the one or more anchor segments identifying possible locations containing the query word; post-processing, on the computer system, the one or more anchor segments, the post-processing including: expanding the one or more anchor segments; sorting the one or more anchor segments; and merging overlapping ones of the one or more anchor segments; and searching, on the computer system, the post-processed one or more anchor segments for instances of at least one of the one or more query words using a constrained grammar.
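
    The anchor-segment post-processing lends itself to a short illustration. The sketch below assumes anchors are (start, end) offsets in seconds within an audio file, expands them by a fixed padding, then sorts and merges overlaps; the final step, which would re-decode each anchor with a grammar constrained to the query word, is only stubbed because it needs a real ASR engine.

```python
# Illustrative post-processing of candidate anchor segments: expand, sort,
# merge. The padding value and the search stub are assumptions.

def expand(segments, padding=0.5):
    """Widen each (start, end) anchor by a fixed padding in seconds."""
    return [(max(0.0, s - padding), e + padding) for s, e in segments]

def merge_overlapping(segments):
    """Sort by start time and merge ranges that overlap."""
    merged = []
    for start, end in sorted(segments):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def search_segments(segments, query_word):
    """Placeholder for re-decoding each anchor with a grammar constrained to the query word."""
    return [{"segment": seg, "query": query_word, "hit": None} for seg in segments]

if __name__ == "__main__":
    anchors = [(12.0, 12.8), (12.5, 13.1), (40.2, 40.9)]
    prepared = merge_overlapping(expand(anchors))
    print(prepared)                      # [(11.5, 13.6), (39.7, 41.4)]
    print(search_segments(prepared, "refund"))
```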

    SYSTEM AND METHOD FOR SEMANTICALLY EXPLORING CONCEPTS

    Publication No.: US20160012818A1

    Publication Date: 2016-01-14

    Application No.: US14327476

    Filing Date: 2014-07-09

    Abstract: A method for detecting and categorizing topics in a plurality of interactions includes: extracting, by a processor, a plurality of fragments from the plurality of interactions; filtering, by the processor, the plurality of fragments to generate a filtered plurality of fragments; clustering, by the processor, the filtered fragments into a plurality of base clusters; and clustering, by the processor, the plurality of base clusters into a plurality of hyper clusters.
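
    The two-stage clustering in this abstract can be shown with a small sketch: fragments are filtered, grouped into base clusters, and the base clusters are in turn grouped into hyper clusters. The word-overlap similarity, the thresholds, and representing a base cluster by its concatenated text are assumptions made for illustration.

```python
# Two-stage (base -> hyper) clustering sketch; measure and thresholds assumed.

def filter_fragments(fragments, min_words=3):
    return [f for f in fragments if len(f.split()) >= min_words]

def similarity(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def cluster(items, key=lambda x: x, threshold=0.3):
    """Greedy clustering reused for both stages; `key` maps an item to comparable text."""
    clusters = []
    for item in items:
        for c in clusters:
            if similarity(key(item), key(c[0])) >= threshold:
                c.append(item)
                break
        else:
            clusters.append([item])
    return clusters

def hyper_clusters(fragments):
    base = cluster(filter_fragments(fragments))
    # Represent each base cluster by its concatenated text, then cluster again.
    return cluster(base, key=lambda c: " ".join(c), threshold=0.15)

if __name__ == "__main__":
    frags = [
        "cancel my subscription today",
        "how do i cancel my subscription",
        "subscription renewal price increase",
        "ok",
    ]
    print(hyper_clusters(frags))
```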

    Emotion detection in audio interactions

    Publication No.: US11341986B2

    Publication Date: 2022-05-24

    Application No.: US16723154

    Filing Date: 2019-12-20

    Abstract: A method comprising: receiving a plurality of audio segments comprising a speech signal, wherein said audio segments represent a plurality of verbal interactions; receiving labels associated with an emotional state expressed in each of said audio segments; dividing each of said audio segments into a plurality of frames, based on a specified frame duration; extracting a plurality of acoustic features from each of said frames; computing statistics over said acoustic features with respect to sequences of frames representing phoneme boundaries in said audio segments; at a training stage, training a machine learning model on a training set comprising: said statistics associated with said audio segments, and said labels; and at an inference stage, applying said trained model to one or more target audio segments comprising a speech signal, to detect an emotional state expressed in said target audio segments.
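
    A rough sketch of the training side is shown below, assuming frame energy and zero-crossing rate as stand-in acoustic features and pooling statistics over whole segments rather than over phoneme boundaries (which would require a forced aligner); the classifier choice is likewise an assumption.

```python
# Hedged training-side sketch: fixed-duration frames, simple acoustic features,
# pooled statistics, and a generic classifier. Synthetic data stands in for
# labeled audio segments.
import numpy as np
from sklearn.linear_model import LogisticRegression

def frame_signal(signal, sample_rate, frame_dur=0.025):
    hop = int(sample_rate * frame_dur)
    n_frames = len(signal) // hop
    return signal[: n_frames * hop].reshape(n_frames, hop)

def frame_features(frames):
    energy = np.mean(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.stack([energy, zcr], axis=1)          # (n_frames, 2)

def pooled_statistics(features):
    """Mean and standard deviation of each feature over the segment."""
    return np.concatenate([features.mean(axis=0), features.std(axis=0)])

def featurize_segment(signal, sample_rate=16000):
    return pooled_statistics(frame_features(frame_signal(signal, sample_rate)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for labeled segments: louder noise labeled as class 1.
    segments = ([rng.normal(0, 0.1, 16000) for _ in range(10)]
                + [rng.normal(0, 0.5, 16000) for _ in range(10)])
    labels = [0] * 10 + [1] * 10
    X = np.stack([featurize_segment(s) for s in segments])
    model = LogisticRegression().fit(X, labels)
    print(model.predict(X[:2]), model.predict(X[-2:]))
```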
