Keyword detection with international phonetic alphabet by foreground model and background model
    1.
    Granted invention patent (in force)

    Publication No.: US09466289B2

    Publication date: 2016-10-11

    Application No.: US14103775

    Filing date: 2013-12-11

    IPC classification: G10L15/06 G10L15/08

    Abstract: An electronic device with one or more processors and memory trains an acoustic model with an international phonetic alphabet (IPA) phoneme mapping collection and audio samples in different languages, where the acoustic model includes: a foreground model; and a background model. The device generates a phone decoder based on the trained acoustic model. The device collects keyword audio samples, decodes the keyword audio samples with the phone decoder to generate phoneme sequence candidates, and selects a keyword phoneme sequence from the phoneme sequence candidates. After obtaining the keyword phoneme sequence, the device detects one or more keywords in an input audio signal with the trained acoustic model, including: matching phonemic keyword portions of the input audio signal with phonemes in the keyword phoneme sequence with the foreground model; and filtering out phonemic non-keyword portions of the input audio signal with the background model.

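    The abstract describes a likelihood-ratio style decision: keyword phonemes are scored by the foreground model while everything else is absorbed by the background model. The sketch below illustrates that decision in plain Python, assuming the phone decoder has already emitted per-phone foreground and background log-probabilities; the phoneme labels, the scoring tuples, and the threshold are illustrative, not taken from the patent.

```python
# Toy keyword phoneme sequence; the IPA-style labels are illustrative.
KEYWORD_PHONEMES = ["h", "ə", "l", "oʊ"]

def keyword_confidence(decoded, keyword=KEYWORD_PHONEMES):
    """decoded: list of (phoneme, fg_logprob, bg_logprob) tuples from the phone decoder."""
    score, k = 0.0, 0
    for phoneme, fg_logprob, bg_logprob in decoded:
        if k < len(keyword) and phoneme == keyword[k]:
            # Keyword portion: matched by the foreground model and compared
            # with the background model's score for the same phone.
            score += fg_logprob - bg_logprob
            k += 1
        # Non-keyword portions are left to the background model and do not
        # add to the keyword score (they are "filtered out").
    if k < len(keyword):
        return float("-inf")          # the keyword phoneme sequence never completed
    return score / len(keyword)       # length-normalised log-likelihood ratio

decoded = [("s", -3.1, -0.4), ("h", -0.2, -2.5), ("ə", -0.3, -2.2),
           ("l", -0.1, -2.9), ("oʊ", -0.2, -2.7), ("m", -2.8, -0.5)]
print("keyword detected:", keyword_confidence(decoded) > 0.0)
```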

    Keyword detection for speech recognition
    2.
    Granted invention patent (in force)

    Publication No.: US09230541B2

    Publication date: 2016-01-05

    Application No.: US14567969

    Filing date: 2014-12-11

    IPC classification: G10L15/08

    Abstract: This application discloses a method for recognizing a keyword in speech that includes a sequence of audio frames, the sequence further including a current frame and a subsequent frame. A candidate keyword is determined for the current frame using a decoding network that includes keywords and filler words of multiple languages, and is used to determine a confidence score for the audio frame sequence. A word option is also determined for the subsequent frame based on the decoding network, and when the candidate keyword and the word option are associated with two distinct types of languages, the confidence score of the audio frame sequence is updated at least based on a penalty factor associated with the two distinct types of languages. The audio frame sequence is then determined to include both the candidate keyword and the word option by evaluating the updated confidence score according to a keyword determination criterion.

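    A minimal sketch of the confidence-score update described above, assuming a per-language-pair penalty table; the language labels, penalty values, and threshold are invented for illustration.

```python
# Hypothetical penalty factors for pairs of distinct language types; the
# values and language labels are illustrative, not from the patent.
PENALTY = {frozenset({"zh", "en"}): 0.8}

def update_confidence(confidence, keyword_lang, option_lang):
    """Scale the sequence confidence when the candidate keyword (current frame)
    and the word option (subsequent frame) come from two distinct languages."""
    if keyword_lang == option_lang:
        return confidence
    factor = PENALTY.get(frozenset({keyword_lang, option_lang}), 0.9)
    return confidence * factor

def contains_keyword(confidence, keyword_lang, option_lang, threshold=0.6):
    """Keyword determination criterion: accept only if the (possibly penalised)
    confidence still clears the threshold."""
    return update_confidence(confidence, keyword_lang, option_lang) >= threshold

print(contains_keyword(0.72, "zh", "en"))   # 0.72 * 0.8 = 0.576 -> False
print(contains_keyword(0.72, "zh", "zh"))   # no penalty, 0.72 >= 0.6 -> True
```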

    Reminder setting method and apparatus

    Publication No.: US09754581B2

    Publication date: 2017-09-05

    Application No.: US13903593

    Filing date: 2013-05-28

    Abstract: The present invention, pertaining to the field of speech recognition, discloses a reminder setting method and apparatus. The method includes: acquiring speech signals; acquiring time information from the speech signals by using keyword recognition, and determining a reminder time according to the time information; acquiring a text sequence corresponding to the speech signals by using continuous speech recognition, and determining reminder content according to the time information and the text sequence; and setting a reminder according to the reminder time and the reminder content. Acquiring the time information by keyword recognition ensures that the time is extracted correctly, so a reminder can still be set from correct time information even when the recognized text sequence is inaccurate because of limited precision in full-text speech recognition.
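
    As a rough illustration of the two-track design (keyword recognition for the time, continuous recognition for the content), the sketch below extracts a clock time from a transcript with a simple pattern and builds a reminder record; the regular expression and the `set_reminder` structure are stand-ins, not the patented recognizers.

```python
import re
from datetime import datetime, timedelta

def extract_reminder_time(transcript, now=None):
    """Keyword-style extraction of time information; the pattern here is a
    simple stand-in for the keyword recognition described in the abstract."""
    now = now or datetime.now()
    match = re.search(r"\b(?:at )?(\d{1,2})(?::(\d{2}))?\s*(am|pm)?\b", transcript)
    if not match:
        return None
    hour = int(match.group(1)) % 12 + (12 if match.group(3) == "pm" else 0)
    minute = int(match.group(2) or 0)
    reminder = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    # If the time has already passed today, schedule it for tomorrow.
    return reminder if reminder > now else reminder + timedelta(days=1)

def set_reminder(transcript):
    """Combine the keyword-derived time with the full transcript as content."""
    when = extract_reminder_time(transcript)
    if when is None:
        return None
    return {"time": when, "content": transcript}

print(set_reminder("remind me to call mom at 7:30 pm"))
```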

    SYSTEMS AND METHODS FOR AUDIO COMMAND RECOGNITION
    5.
    Invention patent application (in force)

    Publication No.: US20160086609A1

    Publication date: 2016-03-24

    Application No.: US14958606

    Filing date: 2015-12-03

    Abstract: The present application discloses a method, an electronic system and a non-transitory computer readable storage medium for recognizing audio commands in an electronic device. The electronic device obtains audio data based on an audio signal provided by a user and extracts characteristic audio fingerprint features from the audio data. The electronic device further determines whether the corresponding audio signal is generated by an authorized user by comparing the characteristic audio fingerprint features with an audio fingerprint model for the authorized user and with a universal background model that represents user-independent audio fingerprint features, respectively. When the corresponding audio signal is generated by the authorized user of the electronic device, an audio command is extracted from the audio data, and an operation is performed according to the audio command.

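    The authorization step amounts to comparing the utterance's fingerprint features against the enrolled speaker model and a universal background model. The sketch below shows that comparison as an averaged log-likelihood ratio over single diagonal Gaussians; the model parameters, features, and threshold are illustrative, and a real system would use richer models.

```python
import math

def log_gaussian(x, mean, var):
    """Diagonal-covariance Gaussian log-likelihood of one feature vector."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def is_authorized(features, speaker_model, ubm, threshold=0.0):
    """Compare the utterance's fingerprint features against the enrolled
    speaker model and the universal background model (UBM)."""
    llr = sum(log_gaussian(f, *speaker_model) - log_gaussian(f, *ubm)
              for f in features) / len(features)
    return llr > threshold

# Illustrative single-Gaussian "models": (mean, variance) per dimension.
speaker_model = ([0.8, -0.2, 0.5], [0.1, 0.1, 0.1])
ubm           = ([0.0,  0.0, 0.0], [1.0, 1.0, 1.0])
features = [[0.75, -0.25, 0.55], [0.82, -0.15, 0.48]]

if is_authorized(features, speaker_model, ubm):
    command = "placeholder: decode the audio command here"
    print("authorized; executing:", command)
```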

    Augmented reality interaction implementation method and system
    6.
    Granted invention patent (in force)

    Publication No.: US09189699B2

    Publication date: 2015-11-17

    Application No.: US14403115

    Filing date: 2013-05-17

    Abstract: The present disclosure provides a method and system for realizing interaction in augmented reality. The method includes: collecting a frame image and uploading the frame image; recognizing a template image that matches the frame image and returning the template image; detecting a marker area of the frame image according to the template image; and superimposing media data corresponding to the template image on the marker area and displaying the superimposed image.

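    A rough sketch of the client-side pipeline, assuming the matched template image has already been returned: the marker area is located by naive normalized cross-correlation and the media data is overlaid on it. Plain NumPy is used so no AR toolkit is assumed; the random frame and the overlay are placeholders.

```python
import numpy as np

def find_marker(frame, template):
    """Locate the marker area by sliding the matched template over the frame
    (naive normalized cross-correlation; grayscale images as 2-D arrays)."""
    fh, fw = frame.shape
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    best, best_pos = -np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = frame[y:y + th, x:x + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-8)
            score = float((p * t).mean())
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, (th, tw)

def superimpose(frame, media, top_left):
    """Superimpose the media data on the detected marker area."""
    y, x = top_left
    out = frame.copy()
    out[y:y + media.shape[0], x:x + media.shape[1]] = media
    return out

rng = np.random.default_rng(0)
frame = rng.random((40, 40))
template = frame[10:20, 15:25].copy()   # pretend this is the matched template image
media = np.ones_like(template)          # media data associated with that template
pos, size = find_marker(frame, template)
augmented = superimpose(frame, media, pos)
print("marker found at", pos)
```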

    User authentication method and apparatus based on audio and video data
    7.
    Granted invention patent (in force)

    Publication No.: US09177131B2

    Publication date: 2015-11-03

    Application No.: US14262665

    Filing date: 2014-04-25

    IPC classification: H04L29/06 G06F21/32

    CPC classification: G06F21/32 G06F2221/2117

    Abstract: A computer-implemented method for authenticating a user from video and audio data is performed at a server having one or more processors and memory storing programs executed by the one or more processors. The method includes: receiving a login request from a mobile device, the login request including video data and audio data; extracting a group of facial features from the video data; extracting a group of audio features from the audio data and recognizing a sequence of words in the audio data; and identifying a first user account whose respective facial features match the group of facial features and a second user account whose respective audio features match the group of audio features. If the first user account is the same as the second user account, the server retrieves the sequence of words associated with that user account and compares the word sequences for authentication purposes.

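    The sketch below illustrates the account-matching logic, assuming per-account enrolled face and voice feature vectors and a stored passphrase per account; the Euclidean distance, the distance threshold, and the toy databases are illustrative.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_account(features, enrolled, max_distance=1.0):
    """Return the account whose enrolled features are closest to `features`,
    or None if nothing is close enough."""
    account, ref = min(enrolled.items(), key=lambda kv: euclidean(features, kv[1]))
    return account if euclidean(features, ref) <= max_distance else None

def authenticate(face_features, audio_features, spoken_words,
                 face_db, voice_db, passphrase_db):
    face_account = match_account(face_features, face_db)
    voice_account = match_account(audio_features, voice_db)
    if face_account is None or face_account != voice_account:
        return False                    # both matches must name the same account
    # Compare the recognized word sequence with the stored passphrase.
    return spoken_words == passphrase_db[face_account]

face_db = {"alice": [0.1, 0.9], "bob": [0.8, 0.2]}
voice_db = {"alice": [0.3, 0.7], "bob": [0.9, 0.1]}
passphrase_db = {"alice": ["open", "sesame"], "bob": ["hello", "world"]}

print(authenticate([0.12, 0.88], [0.28, 0.72], ["open", "sesame"],
                   face_db, voice_db, passphrase_db))
```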

    INTERACTING METHOD, APPARATUS AND SERVER BASED ON IMAGE
    8.
    Invention patent application (under examination, published)

    Publication No.: US20150169527A1

    Publication date: 2015-06-18

    Application No.: US14410875

    Filing date: 2013-06-26

    IPC classification: G06F17/24 G06K9/00

    Abstract: An image-based interaction method, apparatus, and server are provided according to embodiments of the present invention. The method includes: recognizing a face region in an image; generating a face box corresponding to the face region; generating a label box corresponding to the face box; and presenting, in the label box, label information corresponding to the face region in one of the following modes: obtaining the label information for the face region from a server and presenting it in the label box, or receiving the label information for the face region as input from a user and presenting it in the label box. Thus, based on the label information provided by the server or the user, information associated with the circled region is customized (e.g., review information) and pushed to an associated friend, improving the interaction between the user who pushes the face region and that friend.

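    A small sketch of the face-box/label-box data flow, assuming a `fetch_label_from_server` stub for the server mode and a user-supplied string for the input mode; the dataclasses and field names are illustrative, not from the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FaceBox:
    region: Tuple[int, int, int, int]   # (x, y, width, height) of the face region

@dataclass
class LabelBox:
    face_box: FaceBox
    label: str

def fetch_label_from_server(face_box: FaceBox) -> Optional[str]:
    """Stand-in for the server lookup; a real system would query a service."""
    return None  # pretend the server has no label for this face yet

def build_label_box(face_box: FaceBox, user_input: Optional[str] = None) -> LabelBox:
    """Mode 1: use the server's label if available; mode 2: fall back to the
    label the user typed for this face region."""
    label = fetch_label_from_server(face_box)
    if label is None:
        label = user_input or ""
    return LabelBox(face_box=face_box, label=label)

face = FaceBox(region=(120, 80, 64, 64))
print(build_label_box(face, user_input="great smile!"))
```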

    Method and device for voiceprint recognition

    Publication No.: US09940935B2

    Publication date: 2018-04-10

    Application No.: US15240696

    Filing date: 2016-08-18

    Abstract: A method is performed at a device having one or more processors and memory. The device establishes a first-level Deep Neural Network (DNN) model based on unlabeled speech data, the unlabeled speech data containing no speaker labels and the first-level DNN model specifying a plurality of basic voiceprint features for the unlabeled speech data. The device establishes a second-level DNN model by tuning the first-level DNN model based on labeled speech data, the labeled speech data containing speech samples with respective speaker labels, wherein the second-level DNN model specifies a plurality of high-level voiceprint features. Using the second-level DNN model, the device registers a first high-level voiceprint feature sequence for a user based on a registration speech sample received from the user. The device performs speaker verification for the user based on the first high-level voiceprint feature sequence registered for the user.
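
    The sketch below covers only the registration and verification steps, assuming the two-level DNN already exists as an `embed` function (here replaced by a trivial averaging stand-in); the cosine-similarity scoring and the threshold are illustrative.

```python
import numpy as np

def embed(speech_features: np.ndarray) -> np.ndarray:
    """Stand-in for the second-level DNN: in the abstract this maps speech to
    high-level voiceprint features; here it is just a frame average."""
    return speech_features.mean(axis=0)

def register(registration_speech: np.ndarray) -> np.ndarray:
    """Store the user's high-level voiceprint feature vector at enrollment."""
    return embed(registration_speech)

def verify(registered: np.ndarray, test_speech: np.ndarray, threshold=0.8) -> bool:
    """Speaker verification by cosine similarity against the registered voiceprint."""
    test = embed(test_speech)
    cos = float(registered @ test /
                (np.linalg.norm(registered) * np.linalg.norm(test) + 1e-8))
    return cos >= threshold

rng = np.random.default_rng(1)
enrollment = rng.normal(loc=1.0, size=(50, 16))   # frames x feature dimensions
same_user  = rng.normal(loc=1.0, size=(30, 16))
print("accepted:", verify(register(enrollment), same_user))
```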

    Method and apparatus for performing speech keyword retrieval
    10.
    Granted invention patent (in force)

    Publication No.: US09355637B2

    Publication date: 2016-05-31

    Application No.: US14620000

    Filing date: 2015-02-11

    Abstract: A method and an apparatus are provided for retrieving speech keywords. The apparatus configures at least two types of language models in a model file, where each type of language model includes a recognition model and a corresponding decoding model. The apparatus extracts a speech feature from the to-be-processed speech data; performs language matching on the extracted speech feature with the recognition models in the model file one by one, and selects a recognition model based on the language matching rate; determines the decoding model corresponding to the selected recognition model; decodes the extracted speech feature with the determined decoding model to obtain a word recognition result; and matches the word recognition result against a keyword dictionary and outputs the matched keywords.

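    A compact sketch of the retrieval flow, with toy stand-ins for the recognition and decoding models in the model file; the language labels, scoring functions, and keyword dictionary are invented for illustration.

```python
# Toy "model file": each language has a recognition model (here, a scoring
# function) and a corresponding decoding model (here, a decode function).
def make_model_file():
    return {
        "en": {"recognize": lambda feat: feat.count("en-phone") / max(1, len(feat)),
               "decode":    lambda feat: ["call", "mom", "tonight"]},
        "zh": {"recognize": lambda feat: feat.count("zh-phone") / max(1, len(feat)),
               "decode":    lambda feat: ["打", "电话"]},
    }

def retrieve_keywords(speech_feature, model_file, keyword_dictionary):
    # 1. Language matching: score the feature with each recognition model.
    rates = {lang: m["recognize"](speech_feature) for lang, m in model_file.items()}
    best_lang = max(rates, key=rates.get)
    # 2. Decode with the decoding model paired with the chosen recognition model.
    words = model_file[best_lang]["decode"](speech_feature)
    # 3. Match the word recognition result against the keyword dictionary.
    return [w for w in words if w in keyword_dictionary]

speech_feature = ["en-phone"] * 7 + ["zh-phone"] * 2
print(retrieve_keywords(speech_feature, make_model_file(), {"call", "打"}))
```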