Generating videos
    1.
    Granted Patent

    Publication Number: US11915724B2

    Publication Date: 2024-02-27

    Application Number: US17423623

    Filing Date: 2020-06-22

    Applicant: Google LLC

    CPC classification number: G11B27/031 G06T7/20 G06V20/41 G06T2207/10016

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating videos. In one aspect, a method comprises: receiving: (i) an input video comprising a sequence of video frames, and (ii) data indicating a target object type; processing the input video to generate tracking data that identifies and tracks visual locations of one or more instances of target objects of the target object type in the input video; generating a plurality of sub-videos based on the input video and the tracking data, including: for each sub-video, generating a respective sequence of sub-video frames that are each extracted from a respective video frame of the input video to include a respective instance of a given target object from among the identified target objects of the target object type; and generating an output video that comprises the plurality of sub-videos.
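
    The claimed flow is a crop-and-compose pipeline: detect and track instances of the target object type, extract a per-instance sub-video by cropping each source frame around that instance, then compose the sub-videos into an output video. The Python below is a minimal sketch of that flow only, not the patented implementation: frames are plain numpy arrays, and detect_and_track() is a hypothetical stub standing in for any real detector/tracker.

```python
# Minimal sketch (not the patented implementation): given per-frame tracking
# data for instances of a target object type, crop one sub-video per tracked
# instance and concatenate the sub-videos into a single output video.

from dataclasses import dataclass
from typing import Dict, List, Tuple

import numpy as np

Box = Tuple[int, int, int, int]  # (top, left, bottom, right) in pixel coordinates


@dataclass
class TrackingData:
    # track_id -> {frame_index: bounding box of that instance in that frame}
    tracks: Dict[int, Dict[int, Box]]


def detect_and_track(frames: List[np.ndarray], target_type: str) -> TrackingData:
    """Placeholder: a real system would run an object detector plus tracker here."""
    h, w = frames[0].shape[:2]
    # Pretend a single instance of `target_type` sits at the centre of every frame.
    box = (h // 4, w // 4, 3 * h // 4, 3 * w // 4)
    return TrackingData(tracks={0: {i: box for i in range(len(frames))}})


def make_sub_videos(frames: List[np.ndarray], tracking: TrackingData) -> List[List[np.ndarray]]:
    """One sub-video per tracked instance, each frame cropped around that instance."""
    sub_videos = []
    for track in tracking.tracks.values():
        sub = [frames[i][t:b, l:r] for i, (t, l, b, r) in sorted(track.items())]
        sub_videos.append(sub)
    return sub_videos


def generate_output_video(frames: List[np.ndarray], target_type: str) -> List[np.ndarray]:
    tracking = detect_and_track(frames, target_type)
    sub_videos = make_sub_videos(frames, tracking)
    # Simplest composition choice for the sketch: play the sub-videos back to back.
    return [frame for sub in sub_videos for frame in sub]


if __name__ == "__main__":
    video = [np.zeros((240, 320, 3), dtype=np.uint8) for _ in range(10)]
    output = generate_output_video(video, target_type="person")
    print(len(output), output[0].shape)  # 10 cropped frames of shape (120, 160, 3)
```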

    GENERATING VIDEOS
    2.
    Patent Application

    Publication Number: US20250061922A1

    Publication Date: 2025-02-20

    Application Number: US18937679

    Filing Date: 2024-11-05

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating videos. In one aspect, a method comprises: receiving: (i) an input video comprising a sequence of video frames, and (ii) data indicating a target object type; processing the input video to generate tracking data that identifies and tracks visual locations of one or more instances of target objects of the target object type in the input video; generating a plurality of sub-videos based on the input video and the tracking data, including: for each sub-video, generating a respective sequence of sub-video frames that are each extracted from a respective video frame of the input video to include a respective instance of a given target object from among the identified target objects of the target object type; and generating an output video that comprises the plurality of sub-videos.

    GENERATING VIDEOS
    3.
    Published Application
    Status: Pending (Published)

    Publication Number: US20240161783A1

    Publication Date: 2024-05-16

    Application Number: US18420509

    Filing Date: 2024-01-23

    Applicant: Google LLC

    CPC classification number: G11B27/031 G06T7/20 G06V20/41 G06T2207/10016

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating videos. In one aspect, a method comprises: receiving: (i) an input video comprising a sequence of video frames, and (ii) data indicating a target object type; processing the input video to generate tracking data that identifies and tracks visual locations of one or more instances of target objects of the target object type in the input video; generating a plurality of sub-videos based on the input video and the tracking data, including: for each sub-video, generating a respective sequence of sub-video frames that are each extracted from a respective video frame of the input video to include a respective instance of a given target object from among the identified target objects of the target object type; and generating an output video that comprises the plurality of sub-videos.

    EFFICIENTLY RENDERING VIDEO HAVING DYNAMIC COMPONENTS
    4.
    Patent Application

    Publication Number: US20230058512A1

    Publication Date: 2023-02-23

    Application Number: US17417268

    Filing Date: 2020-05-14

    Applicant: Google LLC

    Abstract: A method for efficient dynamic video rendering is described for certain implementations. The method may include identifying a file for rendering a video comprising one or more static layers and one or more dynamic layers, detecting, based on analyzing one or more fields of the file for rendering a video, the one or more static layers and the one or more dynamic layers, wherein each dynamic layer comprises a comment that indicates a variable component, rendering the one or more static layers of the file, receiving, from a user device, a request for the video that includes user information, determining, based on the user information, variable definitions designated to be inserted into a dynamic layer, rendering the one or more dynamic layers using the variable definitions, and generating a composite video for playback from the rendered one or more static layers and the rendered one or more dynamic layers.
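
    The efficiency described in the abstract comes from splitting the render file into static layers, which can be rendered once and reused across requests, and dynamic layers, which are flagged by a comment naming a variable component and rendered per request from user-specific variable definitions before compositing. The Python sketch below illustrates that split under stated assumptions; the Layer structure, the {{variable}} comment convention, and resolve_variables() are illustrative inventions, not the claimed file format.

```python
# Minimal sketch (not the patented implementation) of the static/dynamic split:
# static layers are rendered once up front, dynamic layers are rendered per
# request from user-specific variable definitions, and the two are composited.

from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class Layer:
    name: str
    content: str
    comment: Optional[str] = None  # e.g. "{{city}}" marks a variable component


def detect_layers(render_file: List[Layer]) -> Tuple[List[Layer], List[Layer]]:
    """A layer whose comment names a variable component is dynamic; the rest are static."""
    static = [layer for layer in render_file if layer.comment is None]
    dynamic = [layer for layer in render_file if layer.comment is not None]
    return static, dynamic


def render_layer(layer: Layer, variables: Optional[Dict[str, str]] = None) -> str:
    """Toy 'renderer': substitute variables into the layer content."""
    text = layer.content
    if variables:
        for key, value in variables.items():
            text = text.replace("{{" + key + "}}", value)
    return f"[rendered {layer.name}: {text}]"


def resolve_variables(user_info: Dict[str, str]) -> Dict[str, str]:
    """Illustrative rule: map user information straight to template variables."""
    return {"city": user_info.get("city", "somewhere")}


def serve_request(render_file: List[Layer], user_info: Dict[str, str]) -> List[str]:
    static, dynamic = detect_layers(render_file)
    static_rendered = [render_layer(layer) for layer in static]           # once, cacheable
    variables = resolve_variables(user_info)                              # per request
    dynamic_rendered = [render_layer(layer, variables) for layer in dynamic]
    return static_rendered + dynamic_rendered                             # composite for playback


if __name__ == "__main__":
    template = [
        Layer("background", "brand intro"),
        Layer("greeting", "Hello from {{city}}!", comment="{{city}}"),
    ]
    print(serve_request(template, {"city": "Zurich"}))
```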

    IMAGE REPLACEMENT INPAINTING
    5.
    Patent Application

    Publication Number: US20220301118A1

    Publication Date: 2022-09-22

    Application Number: US17641700

    Filing Date: 2020-05-13

    Applicant: GOOGLE LLC

    Abstract: A method for replacing an object in an image. The method may include identifying a first object at a position within a first image, masking, based on the first image and the position of the first object, a target area to produce a masked image, generating, based on the masked image and an inpainting machine learning model, a second image different from the first image, the inpainting machine learning model being trained using a difference between the target area of training images and content of generated images at locations corresponding to the target area of the training images, generating, based on the masked image and the second image, a third image, and adding, to the third image, a new object different from the first object.
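
    The described replacement flow masks the area around the detected object, fills the hole with an inpainting model, blends the fill back into the original image outside the hole, and then pastes in the new object. The sketch below is a minimal illustration only, not the claimed method: images are numpy arrays, and inpaint_model() is a placeholder for the trained network, which per the abstract would be trained on the difference between the original and generated content inside the masked region.

```python
# Minimal sketch (not the patented implementation) of mask -> inpaint ->
# blend -> add-new-object. inpaint_model() is a hypothetical stand-in for a
# trained inpainting network.

from typing import Tuple

import numpy as np

Box = Tuple[int, int, int, int]  # (top, left, bottom, right)


def mask_target_area(image: np.ndarray, box: Box) -> Tuple[np.ndarray, np.ndarray]:
    """Zero out the target area and return the masked image plus a binary mask."""
    t, l, b, r = box
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[t:b, l:r] = True
    masked = image.copy()
    masked[mask] = 0
    return masked, mask


def inpaint_model(masked: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Placeholder for the trained model: fill the hole with the mean colour outside it."""
    filled = masked.copy()
    filled[mask] = masked[~mask].mean(axis=0).astype(masked.dtype)
    return filled


def replace_object(image: np.ndarray, box: Box, new_object: np.ndarray) -> np.ndarray:
    masked, mask = mask_target_area(image, box)          # first image -> masked image
    generated = inpaint_model(masked, mask)              # "second image"
    third = np.where(mask[..., None], generated, image)  # keep the original outside the hole
    t, l, _, _ = box
    h, w = new_object.shape[:2]
    third[t:t + h, l:l + w] = new_object                 # add the new, different object
    return third


if __name__ == "__main__":
    img = np.full((240, 320, 3), 128, dtype=np.uint8)
    new_obj = np.full((60, 60, 3), 255, dtype=np.uint8)
    out = replace_object(img, box=(60, 80, 180, 240), new_object=new_obj)
    print(out.shape)  # (240, 320, 3)
```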

    Efficiently rendering video having dynamic components
    6.
    Granted Patent

    Publication Number: US12236514B2

    Publication Date: 2025-02-25

    Application Number: US17417268

    Filing Date: 2020-05-14

    Applicant: Google LLC

    Abstract: A method for efficient dynamic video rendering is described for certain implementations. The method may include identifying a file for rendering a video comprising one or more static layers and one or more dynamic layers, detecting, based on analyzing one or more fields of the file for rendering a video, the one or more static layers and the one or more dynamic layers, wherein each dynamic layer comprises a comment that indicates a variable component, rendering the one or more static layers of the file, receiving, from a user device, a request for the video that includes user information, determining, based on the user information, variable definitions designated to be inserted into a dynamic layer, rendering the one or more dynamic layers using the variable definitions, and generating a composite video for playback from the rendered one or more static layers and the rendered one or more dynamic layers.

    Generating videos
    7.
    Granted Patent

    Publication Number: US12176006B2

    Publication Date: 2024-12-24

    Application Number: US18420509

    Filing Date: 2024-01-23

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating videos. In one aspect, a method comprises: receiving: (i) an input video comprising a sequence of video frames, and (ii) data indicating a target object type; processing the input video to generate tracking data that identifies and tracks visual locations of one or more instances of target objects of the target object type in the input video; generating a plurality of sub-videos based on the input video and the tracking data, including: for each sub-video, generating a respective sequence of sub-video frames that are each extracted from a respective video frame of the input video to include a respective instance of a given target object from among the identified target objects of the target object type; and generating an output video that comprises the plurality of sub-videos.

    GENERATING VIDEOS
    8.
    Patent Application

    Publication Number: US20230095856A1

    Publication Date: 2023-03-30

    Application Number: US17423623

    Filing Date: 2020-06-22

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating videos. In one aspect, a method comprises: receiving: (i) an input video comprising a sequence of video frames, and (ii) data indicating a target object type; processing the input video to generate tracking data that identifies and tracks visual locations of one or more instances of target objects of the target object type in the input video; generating a plurality of sub-videos based on the input video and the tracking data, including: for each sub-video, generating a respective sequence of sub-video frames that are each extracted from a respective video frame of the input video to include a respective instance of a given target object from among the identified target objects of the target object type; and generating an output video that comprises the plurality of sub-videos.
