Automated voice translation dubbing for prerecorded video

    Publication Number: US11582527B2

    Publication Date: 2023-02-14

    Application Number: US16975696

    Filing Date: 2018-02-26

    Applicant: GOOGLE LLC

    Abstract: A method for aligning a translation of original caption data with an audio portion of a video is provided. The method includes identifying, by a processing device, original caption data for a video that includes a plurality of caption character strings. The processing device identifies speech recognition data that includes a plurality of generated character strings and associated timing information for each generated character string. The processing device maps the plurality of caption character strings to the plurality of generated character strings using assigned values indicative of semantic similarities between character strings. The processing device assigns timing information to the individual caption character strings based on timing information of mapped individual generated character strings. The processing device aligns a translation of the original caption data with the audio portion of the video using assigned timing information of the individual caption character strings.
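    The abstract above describes an alignment step: each caption string is mapped to a speech-recognized string by similarity, then inherits the timing of its best match. A minimal Python sketch of that idea follows; the function names, the greedy best-match strategy, and the use of difflib's lexical ratio in place of the semantic similarity values the patent describes are all illustrative assumptions, not the claimed method.

    from difflib import SequenceMatcher

    def similarity(a, b):
        # Stand-in for the semantic similarity values the abstract
        # describes; a real system might compare embeddings instead
        # of lexical overlap.
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def map_captions_to_asr(captions, asr_results):
        # Map each caption string to the best-matching generated
        # (ASR) string and inherit its timing. `asr_results` is a
        # list of (text, start_seconds, end_seconds) tuples.
        aligned = []
        for caption in captions:
            best = max(asr_results, key=lambda r: similarity(caption, r[0]))
            _, start, end = best
            aligned.append({"caption": caption, "start": start, "end": end})
        return aligned

    if __name__ == "__main__":
        captions = ["Hello everyone", "Welcome to the show"]
        asr = [("hello every one", 0.0, 1.2), ("welcome to the show", 1.3, 2.8)]
        for row in map_captions_to_asr(captions, asr):
            print(row)

    With timing attached to each caption string this way, a translation of the caption can reuse the same interval, which is the alignment the abstract claims.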

    Automated voice translation dubbing for prerecorded videos

    Publication Number: US12114048B2

    Publication Date: 2024-10-08

    Application Number: US18109243

    Filing Date: 2023-02-13

    Applicant: Google LLC

    CPC classification numbers: H04N21/4884; G06F40/30; G06F40/58; H04N21/43074

    Abstract: A method for aligning a translation of original caption data with an audio portion of a video is provided. The method involves identifying original caption data for the video that includes caption character strings, identifying translated language caption data for the video that includes translated character strings associated with the audio portion of the video, and mapping caption sentence fragments generated from the caption character strings to corresponding translated sentence fragments generated from the translated character strings based on timing associated with the original caption data and the translated language caption data. The method further involves estimating time intervals for individual caption sentence fragments using timing information corresponding to individual caption character strings, assigning time intervals to individual translated sentence fragments based on estimated time intervals of the individual caption sentence fragments, generating a set of translated sentences using consecutive translated sentence fragments, and aligning the set of translated sentences with the audio portion of the video using assigned time intervals of individual translated sentence fragments from corresponding translated sentences.
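    This second abstract works at the level of sentence fragments: translated fragments receive the time intervals estimated for the corresponding caption fragments, and consecutive fragments are joined back into timed sentences for dubbing. A minimal Python sketch under simplifying assumptions follows; the one-to-one, in-order fragment mapping and the punctuation-based sentence joining are illustrative stand-ins for the timing-based mapping the abstract describes.

    from dataclasses import dataclass

    @dataclass
    class Fragment:
        text: str
        start: float  # seconds
        end: float

    def assign_intervals(caption_frags, translated_texts):
        # Give each translated fragment the time interval of the
        # caption fragment at the same position. The one-to-one,
        # in-order pairing is a simplification of the timing-based
        # mapping described in the abstract.
        return [
            Fragment(text, frag.start, frag.end)
            for text, frag in zip(translated_texts, caption_frags)
        ]

    def join_sentences(fragments):
        # Merge consecutive fragments into sentences, closing a
        # sentence at terminal punctuation, and keep the merged
        # interval so the dubbed audio can be placed on the timeline.
        sentences, current = [], []
        for frag in fragments:
            current.append(frag)
            if frag.text.rstrip().endswith((".", "!", "?")):
                sentences.append(Fragment(
                    " ".join(f.text for f in current),
                    current[0].start, current[-1].end))
                current = []
        if current:  # trailing fragment without terminal punctuation
            sentences.append(Fragment(
                " ".join(f.text for f in current),
                current[0].start, current[-1].end))
        return sentences

    if __name__ == "__main__":
        caps = [Fragment("Hola a todos,", 0.0, 1.0), Fragment("bienvenidos.", 1.0, 2.0)]
        translated = ["Hello everyone,", "welcome."]
        for s in join_sentences(assign_intervals(caps, translated)):
            print(f"[{s.start:.1f}-{s.end:.1f}] {s.text}")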
