    1. VIRTUAL CONVERSATIONAL COMPANION
    Invention Application

    Publication Number: US20250157463A1

    Publication Date: 2025-05-15

    Application Number: US19017979

    Filing Date: 2025-01-13

    Abstract: Techniques for rendering visual content, in response to one or more utterances, are described. A device receives one or more utterances that define one or more parameters for desired output content. A system (or the device) identifies natural language data corresponding to the desired content, and uses natural language generation processes to update the natural language data based on those parameters. The system (or the device) then generates an image based on the updated natural language data. The system (or the device) also generates video data of an avatar. The device displays the image and the avatar, and synchronizes movements of the avatar with output of synthesized speech of the updated natural language data. The device may also display subtitles of the updated natural language data, and cause a word of the subtitles to be emphasized when synthesized speech of the word is being output.
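    The flow the abstract describes can be pictured with the minimal sketch below: parameters from the utterance update retrieved natural language data via an NLG step, and the image, avatar video, synthesized speech, and word-level subtitles are then derived from the updated text. All names here (RenderedScene, update_with_nlg, render_from_utterance) are hypothetical placeholders and the generation steps are stubbed with strings; this is an illustration under those assumptions, not the patented implementation.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class RenderedScene:
        image: str                   # stand-in for the generated image
        avatar_video: str            # stand-in for the generated avatar video
        speech_audio: str            # stand-in for the synthesized speech
        subtitles: List[str] = field(default_factory=list)  # word-level subtitles

    def update_with_nlg(base_text: str, parameters: dict) -> str:
        # Hypothetical NLG step: rewrite the retrieved text to reflect the
        # parameters defined by the user's utterance(s).
        style = parameters.get("style", "neutral")
        return f"({style}) {base_text}"

    def render_from_utterance(parameters: dict, base_text: str) -> RenderedScene:
        # Update the natural language data, then derive image, avatar video,
        # speech, and subtitles from the updated text (all stubbed here).
        updated = update_with_nlg(base_text, parameters)
        return RenderedScene(
            image=f"image for: {updated}",
            avatar_video=f"avatar video for: {updated}",
            speech_audio=f"speech audio for: {updated}",
            subtitles=updated.split(),
        )

    scene = render_from_utterance({"style": "bedtime story"}, "A dragon guards a snowy castle.")
    print(scene.subtitles)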

    2. Voice user interface for nested content

    Publication Number: US11410638B1

    Publication Date: 2022-08-09

    Application Number: US15690810

    Filing Date: 2017-08-30

    Inventor: Eshan Bhatnagar

    Abstract: Methods and systems are described for causing a voice-activated electronic device to identify that a step of a series of steps can begin while a previous step is ongoing. In some embodiments, a first step will have a waiting period. The methods and systems, in some embodiments, identify this waiting period and determine that a second step can begin during the waiting period of the first step. In some embodiments, nested sets of sequential steps are identified within the series of steps. The nested sets of sequential steps, in some embodiments, can be called upon.
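    As a rough illustration of the overlapping-step idea, the sketch below models each step with an active (hands-on) time and an unattended waiting time, and lets the next step start as soon as the current step's active work ends, so it runs during the waiting period. The Step model and the timing rule are assumptions made for illustration, not the claimed method.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Step:
        name: str
        active_minutes: int    # time the user must actively spend on the step
        waiting_minutes: int   # unattended time (e.g., "let the dough rest")

    def schedule(steps: List[Step]) -> Tuple[List[Tuple[str, int]], int]:
        # Returns (name, start time) pairs and the total elapsed time, letting
        # each next step begin during the previous step's waiting period.
        starts, clock, finish = [], 0, 0
        for step in steps:
            starts.append((step.name, clock))
            finish = max(finish, clock + step.active_minutes + step.waiting_minutes)
            clock += step.active_minutes   # next step may start once active work ends
        return starts, finish

    steps = [Step("boil water", 2, 10), Step("chop vegetables", 8, 0), Step("combine", 3, 0)]
    print(schedule(steps))   # chopping starts at minute 2, during the 10-minute boil

    Performed strictly one after another, these steps would take 23 minutes; with the second step started during the first step's waiting period, they finish in 13.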

    3. Virtual conversational companion
    Invention Grant

    Publication Number: US12205577B1

    Publication Date: 2025-01-21

    Application Number: US17217031

    Filing Date: 2021-03-30

    Abstract: Techniques for rendering visual content, in response to one or more utterances, are described. A device receives one or more utterances that define one or more parameters for desired output content. A system (or the device) identifies natural language data corresponding to the desired content, and uses natural language generation processes to update the natural language data based on those parameters. The system (or the device) then generates an image based on the updated natural language data. The system (or the device) also generates video data of an avatar. The device displays the image and the avatar, and synchronizes movements of the avatar with output of synthesized speech of the updated natural language data. The device may also display subtitles of the updated natural language data, and cause a word of the subtitles to be emphasized when synthesized speech of the word is being output.
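    The subtitle behavior in the last sentence of the abstract can be pictured with the small sketch below: given per-word start times for the synthesized speech (hard-coded here; a real system would obtain them from its TTS component), it finds the subtitle word currently being spoken so that word can be emphasized. The timing format and the function name are assumptions for illustration only.

    from bisect import bisect_right

    def spoken_word_index(word_start_times, playback_time):
        # Index of the subtitle word whose synthesized speech is currently being
        # output; word_start_times must be sorted in ascending order.
        return bisect_right(word_start_times, playback_time) - 1

    words  = ["Once", "upon", "a", "time"]
    starts = [0.0, 0.4, 0.9, 1.1]          # assumed per-word start times (seconds)
    for t in (0.2, 1.0, 1.5):
        print(t, words[spoken_word_index(starts, t)])   # Once, a, time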

    4. Parallelization of instruction steps

    Publication Number: US10593319B1

    Publication Date: 2020-03-17

    Application Number: US15634136

    Filing Date: 2017-06-27

    Abstract: Described are techniques for providing steps of an instruction group in an order easily performable by a user operating a voice user interface. A system receives a command from a user to output an instruction group. The system obtains the instruction group and processes the instruction group to determine steps within the instruction group that may be performed in parallel by one or more users. Such determination may involve, for example, determining conditional words or phrases such as “meanwhile,” “while you are,” etc. within the instruction group; determining a number of users performing the instruction group; or determining a type of user performing the instruction group. Once the steps that may be performed in parallel are determined, the system generates a prompt to the user indicating that the steps may be performed in parallel, and optionally requesting user instruction regarding an order in which the user wants to perform the steps.
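    The cue-phrase check mentioned in the abstract can be sketched as below, using the two phrases the abstract quotes ("meanwhile", "while you are") plus one illustrative addition; the function and the cue list are assumptions for illustration, not the system's actual processing.

    PARALLEL_CUES = ("meanwhile", "while you are", "in the meantime")  # last cue is an assumed example

    def parallelizable_steps(steps):
        # Indices of steps whose text contains a parallelism cue, suggesting the
        # step may be performed alongside the preceding step.
        return [i for i, text in enumerate(steps)
                if any(cue in text.lower() for cue in PARALLEL_CUES)]

    recipe = [
        "Preheat the oven to 400 degrees.",
        "Meanwhile, dice the onions.",
        "While you are waiting, season the chicken.",
        "Bake for 25 minutes.",
    ]
    print(parallelizable_steps(recipe))   # [1, 2]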
