OBFUSCATING SENSITIVE DATA WHILE PRESERVING DATA USABILITY
    61.
    Invention Patent Application
    OBFUSCATING SENSITIVE DATA WHILE PRESERVING DATA USABILITY (Pending, Published)

    Publication No.: US20090132419A1

    Publication Date: 2009-05-21

    Application No.: US11940401

    Filing Date: 2007-11-15

    IPC Classification: H04L9/00

    CPC Classification: G06F21/6245

    Abstract: A method and system for obfuscating sensitive data while preserving data usability. The in-scope data files of an application are identified. The in-scope data files include sensitive data that must be masked to preserve its confidentiality. Data definitions are collected. Primary sensitive data fields are identified. Data names for the primary sensitive data fields are normalized. The primary sensitive data fields are classified according to sensitivity. Appropriate masking methods are selected from a pre-defined set and applied to each data element based on rules exercised on the data. The data being masked is profiled to detect invalid data. Masking software is developed and input considerations are applied. The selected masking method is executed, and operational and functional validation is performed.

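    As a rough illustration of the workflow the abstract describes (classify sensitive fields, select a masking method per field from a pre-defined set, then apply it while skipping invalid data), the sketch below is a minimal, hypothetical implementation; the field names, sensitivity classes, and masking functions are assumptions, not the patent's actual rule set.

        import hashlib
        import random

        # Illustrative masking methods; the patent selects from a pre-defined set,
        # but these particular implementations are assumptions.
        def mask_hash(value: str) -> str:
            """One-way hash: hides content while preserving uniqueness/joinability."""
            return hashlib.sha256(value.encode()).hexdigest()[:12]

        def mask_redact(value: str) -> str:
            """Replace all but the last two characters."""
            return "*" * max(len(value) - 2, 0) + value[-2:]

        def mask_shuffle_digits(value: str) -> str:
            """Shuffle the digits of an identifier, keeping its format."""
            digits = [c for c in value if c.isdigit()]
            random.shuffle(digits)
            it = iter(digits)
            return "".join(next(it) if c.isdigit() else c for c in value)

        # Hypothetical mapping from sensitivity classification to masking method.
        MASKING_RULES = {"high": mask_hash, "medium": mask_redact, "low": mask_shuffle_digits}

        def mask_record(record: dict, field_sensitivity: dict) -> dict:
            """Apply the selected masking method to each classified sensitive field."""
            masked = dict(record)
            for field, sensitivity in field_sensitivity.items():
                if masked.get(field):                      # profiling step: skip empty/invalid values
                    masked[field] = MASKING_RULES[sensitivity](str(masked[field]))
            return masked

        record = {"name": "Jane Doe", "ssn": "123-45-6789", "zip": "10598"}
        print(mask_record(record, {"ssn": "high", "name": "medium", "zip": "low"}))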

    Collaborative and situationally aware active billboards
    62.
    Granted Invention Patent
    Collaborative and situationally aware active billboards (Granted)

    Publication No.: US07515136B1

    Publication Date: 2009-04-07

    Application No.: US12183651

    Filing Date: 2008-07-31

    IPC Classification: G09G5/00 G06Q30/00 G06F3/00

    CPC Classification: G06Q30/0267 G06Q30/02

    Abstract: A method of collaborative interactions with billboards includes receiving a request, by a billboard network manager, from an advertisement company to display synchronized interactive advertisements and video games on billboards in an area; inviting users in the area, via a network, to participate in the advertisements and video games through the billboards and the users' mobile devices; checking registration of the users to participate in the collaborative interactions with the billboards; and checking the volume of users in the area for appropriate synchronized advertisements and video games. If the volume of users is appropriate for synchronized advertisements and video games, the billboard network manager displays the synchronized advertisements and video games and continues to track the volume of users in the area. If the volume of users is not appropriate for synchronized advertisements and video games, the billboard network manager reconfigures the synchronized billboard content for more appropriate advertisements and video games, either by randomly changing the content or by propagating and extending through the synchronized billboards the particular content of a billboard section that drew more interest from the users.

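    The volume-based decision in the abstract (show the synchronized content when enough registered users are present, otherwise reconfigure the billboards either randomly or by propagating the best-performing section's content) might be sketched as follows; the threshold, the content pool, and the per-section interest scores are hypothetical.

        import random

        MIN_USERS_FOR_SYNC = 25          # assumed threshold for an "appropriate" volume of users
        CONTENT_POOL = ["car ad + racing game", "soda ad + trivia game", "shoe ad + puzzle game"]
        INTEREST_BY_SECTION = {"north": 0.8, "south": 0.3, "east": 0.5}   # assumed engagement scores

        def manage_billboards(registered_users_in_area: int, current_content: str) -> str:
            """Decide what the synchronized billboards should display next."""
            if registered_users_in_area >= MIN_USERS_FOR_SYNC:
                # Enough participants: keep showing the synchronized ads/games and keep tracking volume.
                return current_content
            # Not enough participants: reconfigure, either by a random content change
            # or by propagating the section content that drew the most user interest.
            if random.random() < 0.5:
                return random.choice(CONTENT_POOL)
            best_section = max(INTEREST_BY_SECTION, key=INTEREST_BY_SECTION.get)
            return f"propagate {best_section}-section content to all synchronized billboards"

        print(manage_billboards(registered_users_in_area=12, current_content=CONTENT_POOL[0]))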

    INTERACTIVE DEBUGGING AND TUNING OF METHODS FOR CTTS VOICE BUILDING
    63.
    Invention Patent Application
    INTERACTIVE DEBUGGING AND TUNING OF METHODS FOR CTTS VOICE BUILDING (Granted)

    Publication No.: US20090083037A1

    Publication Date: 2009-03-26

    Application No.: US12327579

    Filing Date: 2008-12-03

    IPC Classification: G10L13/08

    CPC Classification: G10L13/033

    Abstract: A method, a system, and an apparatus for identifying and correcting sources of problems in synthesized speech generated using a concatenative text-to-speech (CTTS) technique. The method can include the step of displaying a waveform corresponding to synthesized speech generated from concatenated phonetic units. The synthesized speech can be generated from text input received from a user. The method further can include the step of displaying parameters corresponding to at least one of the phonetic units. The method can include the step of displaying the original recordings containing selected phonetic units. An editing input can be received from the user, and the parameters can be adjusted in accordance with the editing input.

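    As a loose sketch of the tuning loop described above (synthesize from concatenated units, show each unit's parameters and source recording, accept an editing input, and adjust), the per-unit parameters below (duration, pitch, source file) are guesses at the kind of data such a tool exposes, not the patent's actual interface.

        from dataclasses import dataclass

        @dataclass
        class PhoneticUnit:
            # Hypothetical per-unit parameters a voice builder might inspect and tune.
            phone: str
            source_recording: str      # recording the unit was cut from
            duration_ms: float
            pitch_hz: float

        def display_units(units):
            """Stand-in for drawing the waveform and listing unit parameters."""
            for i, u in enumerate(units):
                print(f"[{i}] {u.phone:3s} from {u.source_recording}  "
                      f"dur={u.duration_ms:.0f} ms  f0={u.pitch_hz:.0f} Hz")

        def apply_edit(units, index, **changes):
            """Adjust the parameters of one selected unit according to an editing input."""
            for key, value in changes.items():
                setattr(units[index], key, value)

        units = [PhoneticUnit("HH", "rec_0142.wav", 62, 118),
                 PhoneticUnit("EH", "rec_0077.wav", 95, 121),
                 PhoneticUnit("L",  "rec_0301.wav", 80, 117),
                 PhoneticUnit("OW", "rec_0012.wav", 140, 110)]
        display_units(units)
        apply_edit(units, 3, duration_ms=120.0)    # e.g. the user shortens the final unit
        display_units(units)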

    Hierarchical Methods and Apparatus for Extracting User Intent from Spoken Utterances
    64.
    Invention Patent Application
    Hierarchical Methods and Apparatus for Extracting User Intent from Spoken Utterances (Pending, Published)

    Publication No.: US20080221903A1

    Publication Date: 2008-09-11

    Application No.: US12125441

    Filing Date: 2008-05-22

    IPC Classification: G10L21/00

    Abstract: Improved techniques are disclosed for permitting a user to employ more human-based grammar (i.e., free-form or conversational input) while addressing a target system via a voice system. For example, a technique for determining the intent associated with a spoken utterance of a user comprises the following steps/operations. Decoded speech uttered by the user is obtained. An intent is then extracted from the decoded speech uttered by the user. The intent is extracted in an iterative manner such that a first class is determined after a first iteration and a sub-class of the first class is determined after a second iteration. The first class and the sub-class of the first class are hierarchically indicative of the intent of the user, e.g., a target and data that may be associated with the target. The multi-stage intent extraction approach may have more than two iterations. By way of example only, the user intent extracting step may further determine a sub-class of the sub-class of the first class after a third iteration, such that the first class, the sub-class of the first class, and the sub-class of the sub-class of the first class are hierarchically indicative of the intent of the user.

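    A minimal sketch of the multi-stage extraction: each iteration classifies the decoded utterance one level deeper in a class hierarchy (class, then sub-class, then sub-sub-class). The hierarchy and the keyword-matching "classifier" below are illustrative assumptions, not the patent's models.

        # Hypothetical intent hierarchy: target system at the first level,
        # action at the second, argument type at the third.
        INTENT_HIERARCHY = {
            "email": {"compose": ["recipient", "subject"], "search": ["sender", "date"]},
            "calendar": {"create_event": ["time", "title"], "list_events": ["date"]},
        }

        def classify(text: str, candidates) -> str:
            """Toy classifier: pick the candidate whose keyword occurs most often in the text."""
            return max(candidates, key=lambda c: text.count(c.split("_")[0]))

        def extract_intent(decoded_speech: str, max_iterations: int = 3):
            """Iteratively refine the intent, one hierarchy level per iteration."""
            levels, node = [], INTENT_HIERARCHY
            for _ in range(max_iterations):
                if not node:
                    break
                choice = classify(decoded_speech, list(node))
                levels.append(choice)
                node = node[choice] if isinstance(node, dict) else None
            return levels

        print(extract_intent("compose an email to the recipient about lunch"))
        # -> ['email', 'compose', 'recipient']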

    Methods and apparatus for use in speech recognition systems for identifying unknown words and for adding previously unknown words to vocabularies and grammars of speech recognition systems
    66.
    Invention Patent Application
    Methods and apparatus for use in speech recognition systems for identifying unknown words and for adding previously unknown words to vocabularies and grammars of speech recognition systems (Pending, Published)

    Publication No.: US20070124147A1

    Publication Date: 2007-05-31

    Application No.: US11291231

    Filing Date: 2005-11-30

    IPC Classification: G10L15/18

    Abstract: The present invention concerns methods and apparatus for identifying and assigning meaning to words not recognized by a vocabulary or grammar of a speech recognition system. In an embodiment of the invention, the word may be in an acoustic vocabulary of the speech recognition system, but may be unrecognized by an embedded grammar of a language model of the speech recognition system. In another embodiment of the invention, the word may not be recognized by any vocabulary associated with the speech recognition system. In embodiments of the invention, at least one hypothesis is generated for an utterance not recognized by the speech recognition system. If the at least one hypothesis meets at least one predetermined criterion, one or more words corresponding to the at least one hypothesis are added to the vocabulary of the speech recognition system. In other embodiments of the invention, before adding the word to the vocabulary of the speech recognition system, the at least one hypothesis may be presented to the user of the speech recognition system to determine whether that is what the user intended when the user spoke.

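    A rough sketch of the flow described above: generate hypotheses for an unrecognized utterance, keep those that meet a predetermined criterion (modeled here as a confidence threshold, which is an assumption), optionally confirm with the user, and add the word to the vocabulary. The hypothesis generator is stubbed out.

        CONFIDENCE_THRESHOLD = 0.7       # assumed form of the "predetermined criterion"

        def generate_hypotheses(acoustic_segment):
            """Stub: a real system would run an open-vocabulary / phonetic decoder here."""
            return [("zettabyte", 0.82), ("settle bite", 0.41)]

        def learn_unknown_word(acoustic_segment, vocabulary: set, confirm_with_user=None) -> set:
            """Add new words to the vocabulary when a hypothesis meets the criterion."""
            for word, score in generate_hypotheses(acoustic_segment):
                if score < CONFIDENCE_THRESHOLD:
                    continue
                # Optionally present the hypothesis to the user before committing it.
                if confirm_with_user is not None and not confirm_with_user(word):
                    continue
                vocabulary.add(word)
            return vocabulary

        vocab = {"byte", "settle"}
        vocab = learn_unknown_word(None, vocab, confirm_with_user=lambda w: True)
        print(vocab)    # {'byte', 'settle', 'zettabyte'}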

    Hierarchical methods and apparatus for extracting user intent from spoken utterances
    67.
    Invention Patent Application
    Hierarchical methods and apparatus for extracting user intent from spoken utterances (Granted)

    Publication No.: US20070055529A1

    Publication Date: 2007-03-08

    Application No.: US11216483

    Filing Date: 2005-08-31

    IPC Classification: G10L21/00

    Abstract: Improved techniques are disclosed for permitting a user to employ more human-based grammar (i.e., free-form or conversational input) while addressing a target system via a voice system. For example, a technique for determining the intent associated with a spoken utterance of a user comprises the following steps/operations. Decoded speech uttered by the user is obtained. An intent is then extracted from the decoded speech uttered by the user. The intent is extracted in an iterative manner such that a first class is determined after a first iteration and a sub-class of the first class is determined after a second iteration. The first class and the sub-class of the first class are hierarchically indicative of the intent of the user, e.g., a target and data that may be associated with the target. The multi-stage intent extraction approach may have more than two iterations. By way of example only, the user intent extracting step may further determine a sub-class of the sub-class of the first class after a third iteration, such that the first class, the sub-class of the first class, and the sub-class of the sub-class of the first class are hierarchically indicative of the intent of the user.


    Methods and apparatus for buffering data for use in accordance with a speech recognition system
    68.
    Invention Patent Application
    Methods and apparatus for buffering data for use in accordance with a speech recognition system (Granted)

    Publication No.: US20070043563A1

    Publication Date: 2007-02-22

    Application No.: US11209004

    Filing Date: 2005-08-22

    IPC Classification: G10L15/06

    CPC Classification: G10L15/28

    Abstract: Techniques are disclosed for overcoming errors in speech recognition systems. For example, a technique for processing acoustic data in accordance with a speech recognition system comprises the following steps/operations. Acoustic data is obtained in association with the speech recognition system. The acoustic data is recorded using a combination of a first buffer area and a second buffer area, such that recording the acoustic data using the combination of the two buffer areas at least substantially minimizes one or more truncation errors associated with operation of the speech recognition system.

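    One way to read the two-buffer arrangement is as a ping-pong (double) buffer: audio keeps being written into one buffer while the other, already full, is handed to the recognizer, so samples arriving around the hand-off are not dropped. The sketch below is that generic pattern with a made-up buffer size; it is not the patent's specific design.

        BUFFER_SIZE = 1600    # assumed samples per buffer (e.g. 100 ms of 16 kHz audio)

        class DoubleBuffer:
            def __init__(self):
                self.buffers = [[], []]
                self.active = 0            # index of the buffer currently being filled

            def write(self, samples):
                """Append incoming audio, swapping buffers whenever the active one fills."""
                ready = []
                for s in samples:
                    buf = self.buffers[self.active]
                    buf.append(s)
                    if len(buf) >= BUFFER_SIZE:
                        ready.append(buf)               # hand the full buffer to the recognizer
                        self.buffers[self.active] = []
                        self.active ^= 1                # keep recording into the other buffer
                return ready                            # list of full buffers ready for decoding

        db = DoubleBuffer()
        full = db.write(range(4000))       # two full buffers; the remaining 800 samples stay buffered
        print(len(full), len(db.buffers[db.active]))    # 2 800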

    Touch gesture based interface for motor vehicle
    69.
    Invention Patent Application
    Touch gesture based interface for motor vehicle (Granted)

    Publication No.: US20060047386A1

    Publication Date: 2006-03-02

    Application No.: US10930225

    Filing Date: 2004-08-31

    IPC Classification: G06F17/00

    Abstract: An improved apparatus and method are provided for operating devices and systems in a motor vehicle while at the same time reducing vehicle operator distractions. One or more touch-sensitive pads are mounted on the steering wheel of the motor vehicle, and the vehicle operator touches the pads in a pre-specified synchronized pattern to perform functions such as controlling operation of the radio or adjusting a window. At least some of the touch patterns used to generate different commands may be selected by the vehicle operator. Usefully, the system of touch pad sensors and the signals generated thereby are integrated with speech recognition and/or facial gesture recognition systems, so that commands may be generated by synchronized multi-mode inputs.

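    The core mapping the abstract describes, from an operator-selectable touch pattern on the steering-wheel pads to a vehicle command, could be sketched roughly as below; the pattern encoding (sequences of pad/gesture events) and the command names are assumptions made for illustration.

        # Hypothetical encoding: a pattern is a tuple of (pad, gesture) events performed in sequence.
        DEFAULT_PATTERNS = {
            (("left", "tap"), ("left", "tap")): "radio_next_station",
            (("right", "swipe_up"),): "window_up",
            (("right", "swipe_down"),): "window_down",
        }

        class TouchGestureInterface:
            def __init__(self, patterns=None):
                # The vehicle operator may add or override patterns with their own.
                self.patterns = dict(patterns or DEFAULT_PATTERNS)

            def register_pattern(self, pattern, command):
                self.patterns[tuple(pattern)] = command

            def dispatch(self, observed_events, speech_hint=None):
                """Resolve a touch pattern (optionally fused with a speech input) to a command."""
                command = self.patterns.get(tuple(observed_events))
                if command is None and speech_hint is not None:
                    command = speech_hint      # fall back to the synchronized speech input
                return command

        ui = TouchGestureInterface()
        ui.register_pattern([("left", "hold"), ("right", "hold")], "cruise_control_toggle")
        print(ui.dispatch([("left", "tap"), ("left", "tap")]))        # radio_next_station
        print(ui.dispatch([("left", "hold"), ("right", "hold")]))     # cruise_control_toggle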

    Fusion of audio and video based speaker identification for multimedia information access
    70.
    Granted Invention Patent
    Fusion of audio and video based speaker identification for multimedia information access (Granted)

    Publication No.: US06567775B1

    Publication Date: 2003-05-20

    Application No.: US09558371

    Filing Date: 2000-04-26

    IPC Classification: G10L15/00

    Abstract: A method and apparatus are disclosed for identifying a speaker in an audio-video source using both audio and video information. An audio-based speaker identification system identifies one or more potential speakers for a given segment using an enrolled speaker database. A video-based speaker identification system identifies one or more potential speakers for a given segment using a face detector/recognizer and an enrolled face database. An audio-video decision fusion process evaluates the individuals identified by the audio-based and video-based speaker identification systems and determines the speaker of an utterance in accordance with the present invention. A linear variation is imposed on the ranked lists produced using the audio and video information. The decision fusion scheme of the present invention is based on a linear combination of the audio and the video ranked lists. The line with the higher slope is assumed to convey more discriminative information. The normalized slopes of the two lines are used as the weights of the respective results when combining the scores from the audio-based and video-based speaker analysis. In this manner, the weights are derived from the data itself.

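    The fusion rule in the abstract can be read directly as an algorithm: fit a line to each modality's ranked score list, treat the steeper (more discriminative) list as more reliable, and use the normalized slopes as the combination weights. The sketch below implements that reading; the score values are made up.

        import numpy as np

        def slope_of_ranked_list(scores):
            """Fit a line to the scores sorted by rank and return the absolute slope."""
            ranks = np.arange(len(scores))
            slope, _intercept = np.polyfit(ranks, np.sort(scores)[::-1], 1)
            return abs(slope)

        def fuse_speaker_scores(audio_scores: dict, video_scores: dict) -> str:
            """Combine per-speaker scores from both modalities, weighted by normalized slopes."""
            wa = slope_of_ranked_list(list(audio_scores.values()))
            wv = slope_of_ranked_list(list(video_scores.values()))
            wa, wv = wa / (wa + wv), wv / (wa + wv)      # normalized slopes as weights
            speakers = set(audio_scores) | set(video_scores)
            combined = {s: wa * audio_scores.get(s, 0.0) + wv * video_scores.get(s, 0.0)
                        for s in speakers}
            return max(combined, key=combined.get)

        # Hypothetical scores for three enrolled speakers from each modality.
        audio = {"alice": 0.90, "bob": 0.55, "carol": 0.30}   # steep list -> more discriminative
        video = {"alice": 0.62, "bob": 0.60, "carol": 0.58}   # nearly flat list -> less weight
        print(fuse_speaker_scores(audio, video))               # "alice"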