Content filtering in media playing devices

    Publication number: US12126868B2

    Publication date: 2024-10-22

    Application number: US18348249

    Application date: 2023-07-06

    Abstract: Various approaches relate to user-defined filtering, in media playing devices, of undesirable content in stored and real-time media from content providers. For example, video, image, and/or audio data can be analyzed to identify and classify the content it represents using various classification models and object and text recognition approaches. The identification and classification can then be used to control presentation of and/or access to the content or portions of it. For example, based on the classification, portions of the content can be modified (e.g., replaced, removed, degraded, etc.) using one or more techniques (e.g., media replacement, media removal, media degradation, etc.) before being presented.
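    The classify-then-modify flow the abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation: `Segment`, `POLICY`, and `plan_edits` are hypothetical names, and the classifier output is stubbed as a label on each time segment.

```python
# Hypothetical sketch: map classified media segments to a user-defined
# filtering policy (replace / remove / degrade / keep).
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # seconds into the media stream
    end: float
    label: str     # classifier output, e.g. "violence", "profanity", "clean"

# User-defined policy: classification label -> modification technique
POLICY = {
    "violence": "remove",     # cut the segment entirely
    "profanity": "degrade",   # e.g. mute audio or blur video
    "clean": "keep",
}

def plan_edits(segments):
    """Pair each classified segment with the technique the policy requires."""
    return [(s, POLICY.get(s.label, "keep")) for s in segments]

segments = [
    Segment(0.0, 5.0, "clean"),
    Segment(5.0, 7.5, "profanity"),
    Segment(7.5, 9.0, "violence"),
]
edits = plan_edits(segments)
```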

    Automatic learning of entities, words, pronunciations, and parts of speech

    Publication number: US12080275B2

    Publication date: 2024-09-03

    Application number: US17146239

    Application date: 2021-01-11

    Inventor: Anton V. Relin

    CPC classification number: G10L15/02 G10L15/14 G10L15/19 G10L2015/025

    Abstract: Systems for automatic speech recognition and/or natural language understanding automatically learn new words by finding subsequences of phonemes that, if they were a new word, would enable a successful tokenization of a phoneme sequence. Systems can learn alternate pronunciations of words by finding phoneme sequences with a small edit distance to existing pronunciations. Systems can learn the part of speech of words by finding part-of-speech variations that would enable parses by syntactic grammars. Systems can learn what types of entities a word describes by finding sentences that could be parsed by a semantic grammar but for the words not being on an entity list.
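    The alternate-pronunciation step can be illustrated with a small edit-distance check: a candidate phoneme sequence is accepted as a new pronunciation when it lies within a small edit distance of a known one. This is an assumed sketch; `LEXICON`, `maybe_learn_pronunciation`, and the distance threshold are illustrative, not from the patent.

```python
# Hypothetical sketch: learn an alternate pronunciation when a candidate
# phoneme sequence is a small edit distance from an existing one.
def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences (row-by-row DP)."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete pa
                                     dp[j - 1] + 1,    # insert pb
                                     prev + (pa != pb))  # substitute
    return dp[-1]

# Toy lexicon: word -> list of known pronunciations (phoneme sequences)
LEXICON = {"tomato": [["T", "AH", "M", "EY", "T", "OW"]]}

def maybe_learn_pronunciation(word, phonemes, max_dist=2):
    """Accept a new pronunciation if it is close to a known one."""
    known = LEXICON.get(word, [])
    if any(0 < edit_distance(phonemes, p) <= max_dist for p in known):
        LEXICON[word].append(phonemes)
        return True
    return False
```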

    DOMAIN SPECIFIC NEURAL SENTENCE GENERATOR FOR MULTI-DOMAIN VIRTUAL ASSISTANTS

    Publication number: US20240144921A1

    Publication date: 2024-05-02

    Application number: US18050182

    Application date: 2022-10-27

    Abstract: Techniques are disclosed for automatically generating sentences that a user can say to invoke a set of defined actions performed by a virtual assistant. A sentence is received and keywords are extracted from it. Based on the keywords, additional sentences are generated. A classifier model is applied to the generated sentences to determine whether any sentence satisfies a threshold. If a sentence satisfies the threshold, an intent associated with the classifier model can be invoked. If the sentences fail to satisfy the classifier model, the virtual assistant can attempt to interpret the received sentence according to the most likely intent by invoking a sentence generation model fine-tuned for a particular domain, generating additional sentences with a high probability of having the same intent, and fulfilling the specific action defined by the intent.
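    The generate-then-classify loop can be sketched as below. The keyword extraction, the candidate generator, and the scoring function are all hypothetical stand-ins for the models the abstract mentions; only the threshold decision mirrors the described flow.

```python
# Hypothetical sketch: extract keywords, generate candidate sentences,
# score them against an intent, and invoke the intent above a threshold.
def extract_keywords(sentence):
    stopwords = {"the", "a", "to", "please", "my"}
    return [w for w in sentence.lower().split() if w not in stopwords]

def generate_candidates(keywords):
    # Stand-in for a domain-tuned sentence generation model.
    return [" ".join(keywords), "please " + " ".join(keywords)]

def score(sentence, intent_vocab):
    # Stand-in classifier: fraction of words covered by the intent vocabulary.
    words = sentence.split()
    return sum(w in intent_vocab for w in words) / len(words)

def pick_intent(sentence, intent_vocab, threshold=0.8):
    """Return a candidate that satisfies the threshold, else None."""
    for cand in generate_candidates(extract_keywords(sentence)):
        if score(cand, intent_vocab) >= threshold:
            return cand   # invoke the intent with this sentence
    return None           # fall back to domain-specific generation
```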

    METHOD AND SYSTEM FOR PROACTIVE INTERACTION
    Invention publication

    Publication number: US20240046923A1

    Publication date: 2024-02-08

    Application number: US18361791

    Application date: 2023-07-28

    Inventor: Masaki NAITO

    CPC classification number: G10L15/19 G06F16/245 G10L15/22 G10L15/30

    Abstract: In an interaction system, a server can obtain a setting expression including a query and a condition for functioning as a virtual assistant, store the query and the condition in a memory, and deliver an inquiry expression including the query in response to occurrence of a situation specified by the condition. The setting expression can be by voice or natural language. Processes can be different for different users and can be based on domain. The inquiry expression includes a question asking the user for an affirmative response before performing the inquiry. Implementations can be adopted in or near a vehicle.
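    The store-and-trigger behavior can be sketched as a registry of (condition, query) pairs that emits a confirmation question when a matching situation occurs. All names here are illustrative assumptions, not the patented design.

```python
# Hypothetical sketch: store setting expressions (condition + query) and
# deliver an inquiry, prefixed by an affirmative-response question, when
# the condition's situation occurs.
class ProactiveAssistant:
    def __init__(self):
        self.settings = []   # list of (condition_predicate, query) pairs

    def register(self, condition, query):
        """Store a setting expression: 'when <condition>, ask <query>'."""
        self.settings.append((condition, query))

    def on_situation(self, situation):
        """Return inquiry expressions triggered by the current situation."""
        # Ask the user for an affirmative response before performing the query.
        return [f"Shall I {query}?" for cond, query in self.settings
                if cond(situation)]

assistant = ProactiveAssistant()
assistant.register(lambda s: s.get("fuel_low"), "find a nearby gas station")
prompts = assistant.on_situation({"fuel_low": True})
```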

    PRE-WAKEWORD SPEECH PROCESSING
    Invention publication

    Publication number: US20230386458A1

    Publication date: 2023-11-30

    Application number: US17804544

    Application date: 2022-05-27

    CPC classification number: G10L15/22 G10L15/08 G10L25/93 G10L2015/088

    Abstract: Methods and systems for pre-wakeword speech processing are disclosed. Speech audio, comprising command speech spoken before a wakeword, may be stored in a buffer in oldest to newest order. Upon detection of the wakeword, reverse acoustic models and language models, such as reverse automatic speech recognition (R-ASR) can be applied to the buffered audio, in newest to oldest order, starting from before the wakeword. The speech is converted into a sequence of words. Natural language grammar models, such as natural language understanding (NLU), can be applied to match the sequence of words to a complete command, the complete command being associated with invoking a computer operation.
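    The buffering step can be sketched with a bounded buffer that keeps pre-wakeword frames in oldest-to-newest order and, on wakeword detection, yields them newest-to-oldest for reverse decoding. The class and method names are hypothetical; the reverse acoustic model itself is out of scope here.

```python
# Hypothetical sketch: bounded buffer of pre-wakeword audio frames,
# read back newest-to-oldest once the wakeword is detected.
from collections import deque

class PreWakewordBuffer:
    def __init__(self, max_frames):
        self.frames = deque(maxlen=max_frames)   # oldest -> newest

    def push(self, frame):
        self.frames.append(frame)   # oldest frames fall off automatically

    def reverse_frames(self):
        """Frames in newest-to-oldest order, for a reverse acoustic model."""
        return list(reversed(self.frames))

buf = PreWakewordBuffer(max_frames=3)
for frame in ["f1", "f2", "f3", "f4"]:
    buf.push(frame)
# Wakeword detected: decode backwards, starting just before the wakeword.
```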

    Training a device specific acoustic model

    Publication number: US11830472B2

    Publication date: 2023-11-28

    Application number: US17573551

    Application date: 2022-01-11

    CPC classification number: G10L15/22 G06F3/167 G10L15/18

    Abstract: Developers can configure custom acoustic models by providing audio files with custom recordings. The custom acoustic model is trained by tuning a baseline model using the audio files. Audio files may contain custom noise to apply to clean speech for training. The custom acoustic model is provided as an alternative to a standard acoustic model. Device developers can select an acoustic model by a user interface. Speech recognition is performed on speech audio using one or more acoustic models. The result can be provided to developers through the user interface, and an error rate can be computed and also provided.
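    The data-preparation step, where custom noise is applied to clean speech before tuning the baseline model, can be sketched with plain sample lists standing in for real audio. The `mix` function and its gain parameter are illustrative assumptions.

```python
# Hypothetical sketch: overlay custom noise onto clean speech samples,
# looping the noise recording if it is shorter than the speech.
def mix(clean, noise, noise_gain=0.1):
    """Return clean speech with scaled noise added sample-by-sample."""
    return [s + noise_gain * noise[i % len(noise)]
            for i, s in enumerate(clean)]

clean_speech = [0.5, -0.5, 0.25, 0.0]
custom_noise = [0.1, -0.1]
noisy = mix(clean_speech, custom_noise)   # training input for tuning
```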

    VIDEO CONFERENCE CAPTIONING
    Invention publication

    Publication number: US20230245661A1

    Publication date: 2023-08-03

    Application number: US18298282

    Application date: 2023-04-10

    Inventor: Ethan COEYTAUX

    CPC classification number: G10L15/26 G10L15/02 G10L19/005 G10L15/19 G10L15/14

    Abstract: A video conferencing system, such as one implemented with a cloud server, receives audio streams from a plurality of endpoints. The system uses automatic speech recognition to transcribe speech in the audio streams. The system multiplexes the transcriptions into individual caption streams and sends them to the endpoints, but the caption stream to each endpoint omits the transcription of audio from the endpoint. Some systems allow muting of audio through an indication to the system. The system then omits sending the muted audio to other endpoints and also omits sending a transcription of the muted audio to other endpoints.
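    The multiplexing rule, where each endpoint receives every transcription except its own and none from muted endpoints, can be sketched as below. The function and its inputs are hypothetical names for illustration only.

```python
# Hypothetical sketch: build per-endpoint caption streams, omitting each
# endpoint's own transcription and any transcriptions of muted audio.
def build_caption_streams(transcripts, muted=frozenset()):
    """transcripts: {endpoint_id: caption_text} for the current utterances."""
    streams = {}
    for receiver in transcripts:
        streams[receiver] = {
            sender: text for sender, text in transcripts.items()
            if sender != receiver and sender not in muted
        }
    return streams

streams = build_caption_streams(
    {"alice": "hello", "bob": "hi there", "carol": "good morning"},
    muted={"carol"},   # carol's transcription is withheld from others
)
```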
