AUDIO-VISUAL SPEECH SEPARATION
    3.
    Invention Application

    Publication Number: US20230122905A1

    Publication Date: 2023-04-20

    Application Number: US17951002

    Filing Date: 2022-09-22

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for audio-visual speech separation. A method includes: obtaining, for each frame in a stream of frames from a video in which faces of one or more speakers have been detected, a respective per-frame face embedding of the face of each speaker; processing, for each speaker, the per-frame face embeddings of the face of the speaker to generate visual features for the face of the speaker; obtaining a spectrogram of an audio soundtrack for the video; processing the spectrogram to generate an audio embedding for the audio soundtrack; combining the visual features for the one or more speakers and the audio embedding for the audio soundtrack to generate an audio-visual embedding for the video; determining a respective spectrogram mask for each of the one or more speakers; and determining a respective isolated speech spectrogram for each speaker.
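The claimed pipeline can be outlined in code. The following NumPy sketch is illustrative only: the mean-pooling "visual stream", the random projections standing in for the learned audio and mask networks, and all dimensions are assumptions, not the patented model.

```python
import numpy as np

rng = np.random.default_rng(0)

def visual_features(per_frame_face_embeddings):
    # Stand-in for the learned visual stream: pool the per-frame face
    # embeddings over time into one feature vector per speaker.
    return per_frame_face_embeddings.mean(axis=0)

def audio_embedding(spectrogram, proj):
    # Stand-in for the learned audio stream: project the flattened
    # spectrogram into an embedding vector.
    return proj @ spectrogram.ravel()

def separate(spectrogram, speakers_face_embeds, audio_proj, mask_proj):
    audio_emb = audio_embedding(spectrogram, audio_proj)
    isolated = []
    for face_embeds in speakers_face_embeds:
        # Combine this speaker's visual features with the audio embedding
        # to form an audio-visual embedding for the video.
        av_emb = np.concatenate([visual_features(face_embeds), audio_emb])
        # Predict a spectrogram mask in (0, 1) and apply it to obtain
        # the speaker's isolated speech spectrogram.
        logits = (mask_proj @ av_emb).reshape(spectrogram.shape)
        mask = 1.0 / (1.0 + np.exp(-logits))
        isolated.append(mask * spectrogram)
    return isolated

# Toy shapes: 10 video frames, 4-dim face embeddings, an 8x6 magnitude
# spectrogram, and 2 detected speakers.
frames, d_face, f_bins, t_bins = 10, 4, 8, 6
spec = np.abs(rng.standard_normal((f_bins, t_bins)))
faces = [rng.standard_normal((frames, d_face)) for _ in range(2)]
audio_proj = rng.standard_normal((16, f_bins * t_bins)) * 0.01
mask_proj = rng.standard_normal((f_bins * t_bins, d_face + 16)) * 0.1

outs = separate(spec, faces, audio_proj, mask_proj)
print(len(outs), outs[0].shape)  # one masked spectrogram per speaker
```

Because each mask is a sigmoid, every isolated spectrogram is bounded by the mixture spectrogram, matching the intuition that masking can only attenuate the mixed audio.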

    Audio-visual speech separation
    6.
    Invention Grant

    Publication Number: US11456005B2

    Publication Date: 2022-09-27

    Application Number: US16761707

    Filing Date: 2018-11-21

    Applicant: GOOGLE LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for audio-visual speech separation. A method includes: obtaining, for each frame in a stream of frames from a video in which faces of one or more speakers have been detected, a respective per-frame face embedding of the face of each speaker; processing, for each speaker, the per-frame face embeddings of the face of the speaker to generate visual features for the face of the speaker; obtaining a spectrogram of an audio soundtrack for the video; processing the spectrogram to generate an audio embedding for the audio soundtrack; combining the visual features for the one or more speakers and the audio embedding for the audio soundtrack to generate an audio-visual embedding for the video; determining a respective spectrogram mask for each of the one or more speakers; and determining a respective isolated speech spectrogram for each speaker.

    AUDIO-VISUAL SPEECH SEPARATION
    7.
    Invention Application

    Publication Number: US20200335121A1

    Publication Date: 2020-10-22

    Application Number: US16761707

    Filing Date: 2018-11-21

    Applicant: GOOGLE LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for audio-visual speech separation. A method includes: obtaining, for each frame in a stream of frames from a video in which faces of one or more speakers have been detected, a respective per-frame face embedding of the face of each speaker; processing, for each speaker, the per-frame face embeddings of the face of the speaker to generate visual features for the face of the speaker; obtaining a spectrogram of an audio soundtrack for the video; processing the spectrogram to generate an audio embedding for the audio soundtrack; combining the visual features for the one or more speakers and the audio embedding for the audio soundtrack to generate an audio-visual embedding for the video; determining a respective spectrogram mask for each of the one or more speakers; and determining a respective isolated speech spectrogram for each speaker.

    Text-Based Real Image Editing with Diffusion Models

    Publication Number: US20240355017A1

    Publication Date: 2024-10-24

    Application Number: US18302508

    Filing Date: 2023-04-18

    Applicant: Google LLC

    CPC classification number: G06T11/60 G06T3/4053

    Abstract: Methods and systems for editing an image are disclosed herein. The method includes receiving an input image and a target text, the target text indicating a desired edit for the input image and obtaining, by the computing system, a target text embedding based on the target text. The method also includes obtaining, by the computing system, an optimized text embedding based on the target text embedding and the input image and fine-tuning, by the computing system, a diffusion model based on the optimized text embedding. The method can further include interpolating, by the computing system, the target text embedding and the optimized text embedding to obtain an interpolated embedding and generating, by the computing system, an edited image including the desired edit using the diffusion model based on the input image and the interpolated embedding.
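The embedding steps in this abstract can be sketched as follows. This is a toy illustration, not the patented system: the gradient-descent "optimization" toward an image embedding stands in for tuning the text embedding so a frozen diffusion model reconstructs the input image, and all vectors and hyperparameters are assumptions.

```python
import numpy as np

def optimize_text_embedding(target_emb, image_emb, steps=100, lr=0.1):
    # Stand-in for obtaining the optimized text embedding: pull the
    # target text embedding toward the image embedding by gradient
    # descent on the squared distance (illustrative only).
    e = target_emb.copy()
    for _ in range(steps):
        e -= lr * 2.0 * (e - image_emb)
    return e

def interpolate(target_emb, optimized_emb, eta):
    # Linear interpolation between the optimized and target embeddings:
    # eta = 0 favors reconstructing the input image, eta = 1 applies
    # the full text-specified edit.
    return eta * target_emb + (1.0 - eta) * optimized_emb

rng = np.random.default_rng(1)
target = rng.standard_normal(8)  # hypothetical target text embedding
image = rng.standard_normal(8)   # hypothetical input image embedding

opt = optimize_text_embedding(target, image)
mid = interpolate(target, opt, eta=0.5)
```

The interpolated embedding `mid` would then condition the fine-tuned diffusion model to generate the edited image; sweeping `eta` trades off fidelity to the input image against strength of the edit.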
