Multi-scale transformer for image analysis
2.
Invention grant

    Publication number: US12217382B2

    Publication date: 2025-02-04

    Application number: US18527528

    Application date: 2023-12-04

    Applicant: Google LLC

    Abstract: The technology employs a patch-based multi-scale Transformer (300) that is usable with various imaging applications. This avoids constraints on a fixed input image size and predicts quality effectively on a native-resolution image. A native-resolution image (304) is transformed into a multi-scale representation (302), enabling the Transformer's self-attention mechanism to capture information on both fine-grained detailed patches and coarse-grained global patches. Spatial embedding (316) is employed to map patch positions to a fixed grid, in which patch locations at each scale are hashed to the same grid. A separate scale embedding (318) is employed to distinguish patches coming from different scales in the multi-scale representation. Self-attention (508) is performed to create a final image representation. In some instances, prior to performing self-attention, the system may prepend a learnable classification token (322) to the set of input tokens.
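The hashed spatial embedding described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: patch coordinates at every scale are mapped onto one fixed grid (so patches at different resolutions that cover the same region share a position id), and a separate scale id distinguishes the scales. The function names and the grid size are assumptions.

```python
# Hypothetical sketch: hash patch positions at any scale to one fixed
# grid, and tag each patch token with its scale id.

def hash_to_grid(row, col, n_rows, n_cols, grid_size=10):
    """Map a patch position at any resolution to a fixed grid cell."""
    g_row = min(row * grid_size // n_rows, grid_size - 1)
    g_col = min(col * grid_size // n_cols, grid_size - 1)
    return g_row, g_col

def build_tokens(scales, grid_size=10):
    """scales: list of (n_rows, n_cols) patch layouts, one per scale.
    Returns (spatial_cell, scale_id) metadata for every patch token."""
    tokens = []
    for scale_id, (n_rows, n_cols) in enumerate(scales):
        for r in range(n_rows):
            for c in range(n_cols):
                tokens.append((hash_to_grid(r, c, n_rows, n_cols, grid_size),
                               scale_id))
    return tokens

# Native-resolution scale (20x30 patches) plus one coarse scale (5x5).
tokens = build_tokens([(20, 30), (5, 5)])
```

Because both scales hash into the same grid, the self-attention layer can relate a fine patch to the coarse patch covering the same region; the scale id keeps the two scales distinguishable.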

    Multi-Axis Vision Transformer
    3.
    Invention application

    Publication number: US20250022269A1

    Publication date: 2025-01-16

    Application number: US18902546

    Application date: 2024-09-30

    Applicant: Google LLC

    Abstract: Provided is an efficient and scalable attention model that can be referred to as multi-axis attention. Example implementations can include two aspects: blocked local and dilated global attention. These design choices allow global-local spatial interactions on arbitrary input resolutions with only linear complexity. The present disclosure also presents a new architectural element by effectively blending the proposed multi-axis attention model with convolutions. In addition, the present disclosure proposes a simple hierarchical vision backbone, example implementations of which can be referred to as MaxViT, by simply repeating the basic building block over multiple stages. Notably, MaxViT is able to “see” globally throughout the entire network, even in earlier, high-resolution stages.
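The two attention axes in the abstract can be illustrated with the partitions they attend over. This is a hedged sketch, not MaxViT's actual code: "block" gathers spatially adjacent positions (local attention within non-overlapping windows), while "grid" gathers positions at a fixed stride across the whole map (dilated global attention). Both partitions have linearly many groups of fixed size, which is where the linear complexity comes from.

```python
# Illustrative partitions for multi-axis attention (names assumed).

def block_partition(x, b):
    """Group an H x W map into non-overlapping b x b local windows."""
    h, w = len(x), len(x[0])
    return [[x[i + di][j + dj] for di in range(b) for dj in range(b)]
            for i in range(0, h, b) for j in range(0, w, b)]

def grid_partition(x, b):
    """Group an H x W map into b x b dilated grids (stride H//b, W//b),
    so every group spans the entire map."""
    h, w = len(x), len(x[0])
    sh, sw = h // b, w // b
    return [[x[i + di * sh][j + dj * sw] for di in range(b) for dj in range(b)]
            for i in range(sh) for j in range(sw)]

feat = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy map
local = block_partition(feat, 2)    # each group: a 2x2 neighborhood
global_ = grid_partition(feat, 2)   # each group: stride-2 samples
```

Attention is then computed independently inside each group; alternating the two partitions gives global-local interaction even at high-resolution stages.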

    Deep palette prediction
    4.
    Invention grant

    Publication number: US12198229B2

    Publication date: 2025-01-14

    Application number: US17782727

    Application date: 2020-01-08

    Applicant: GOOGLE LLC

    Abstract: Example embodiments allow for training of encoders (e.g., artificial neural networks (ANNs)) to generate a color palette based on an input image. The color palette can then be used to generate, using the input image, a quantized, reduced color depth image that corresponds to the input image. Differences between a plurality of such input images and corresponding quantized images are used to train the encoder. Encoders trained in this manner are especially suited for generating color palettes used to convert images into different reduced color depth image file formats. Such an encoder also has benefits, with respect to memory use and computational time or cost, relative to the median-cut algorithm or other methods for producing reduced color depth color palettes for images.
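The quantization step the abstract describes can be sketched as below. The encoder network itself is omitted; `palette` is a stand-in for the network's predicted output, and each pixel is simply replaced by its nearest palette color to produce the reduced-color-depth image.

```python
# Hedged sketch of palette-based quantization (encoder omitted).

def nearest(color, palette):
    """Return the palette color closest to `color` in squared RGB distance."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(color, p)))

def quantize(image, palette):
    """image: list of RGB tuples; returns the reduced-color-depth image."""
    return [nearest(px, palette) for px in image]

palette = [(0, 0, 0), (255, 255, 255)]   # stand-in for a predicted palette
image = [(10, 10, 10), (200, 220, 240)]
quantized = quantize(image, palette)     # [(0, 0, 0), (255, 255, 255)]
```

During training, the difference between `image` and `quantized` over many images is what supervises the palette-predicting encoder.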

    Multi-scale Transformer for Image Analysis
    5.
    Invention publication

    Publication number: US20240119555A1

    Publication date: 2024-04-11

    Application number: US18527528

    Application date: 2023-12-04

    Applicant: Google LLC

    Abstract: The technology employs a patch-based multi-scale Transformer (300) that is usable with various imaging applications. This avoids constraints on a fixed input image size and predicts quality effectively on a native-resolution image. A native-resolution image (304) is transformed into a multi-scale representation (302), enabling the Transformer's self-attention mechanism to capture information on both fine-grained detailed patches and coarse-grained global patches. Spatial embedding (316) is employed to map patch positions to a fixed grid, in which patch locations at each scale are hashed to the same grid. A separate scale embedding (318) is employed to distinguish patches coming from different scales in the multi-scale representation. Self-attention (508) is performed to create a final image representation. In some instances, prior to performing self-attention, the system may prepend a learnable classification token (322) to the set of input tokens.

    ZOOM AGNOSTIC WATERMARK EXTRACTION
    6.
    发明公开

    公开(公告)号:US20230325959A1

    公开(公告)日:2023-10-12

    申请号:US17926213

    申请日:2021-06-21

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for detecting and decoding a visually imperceptible or perceptible watermark. A watermark detection apparatus determines whether a particular image includes a visually imperceptible or perceptible watermark using a detector machine learning model. If the watermark detection apparatus detects a watermark, the particular image is routed to a watermark decoder. If the watermark detection apparatus cannot detect a watermark in the particular image, the particular image is filtered from further processing. The watermark decoder decodes the visually imperceptible or perceptible watermark detected in the particular image. After decoding, an item depicted in the particular image is validated based on data extracted from the decoded visually imperceptible or perceptible watermark.
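The detect-then-route flow in the abstract amounts to a two-stage pipeline: a cheap detector gates access to the more expensive decoder. A minimal sketch, with both models stubbed as plain callables (the stubs are assumptions, not the patent's models):

```python
# Sketch of the watermark routing logic: detect first, decode only on hit.

def process(image, detector, decoder):
    """Return decoded watermark data, or None if the image is filtered out."""
    if not detector(image):
        return None          # no watermark detected: drop from the pipeline
    return decoder(image)    # watermark detected: route to the decoder

detector = lambda img: "wm" in img    # stand-in detector model
decoder = lambda img: img["wm"]       # stand-in decoder model

result = process({"wm": "item-123"}, detector, decoder)
filtered = process({}, detector, decoder)
```

Filtering undetected images early keeps the decoder, and the downstream validation step, off the common no-watermark path.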

    IMAGE WATERMARKING
    7.
    Invention application

    Publication number: US20230111326A1

    Publication date: 2023-04-13

    Application number: US17792062

    Application date: 2020-01-13

    Applicant: GOOGLE LLC

    Abstract: Methods, systems, and computer programs encoded on a computer storage medium, that relate to extracting digital watermarks from images, irrespective of distortions introduced into these images. Methods can include inputting a first data item into a channel encoder that can generate a first encoded data item that is greater in length than the first data item and that (1) includes the input data item and (2) new data that is redundant of the input data item. Based on the first encoded data item and a first image, an encoder model can generate a first encoded image into which the first encoded data item is embedded as a digital watermark. A decoder model can decode the first encoded image to generate a second data item, which can be decoded by the channel decoder to generate data that is predicted to be the first data item.
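The channel encoder/decoder pair can be illustrated with the simplest redundancy scheme, a repetition code: the encoded item is longer than the input and carries redundant copies, so the channel decoder can recover the original by majority vote even after distortion flips some bits. The abstract does not specify a repetition code; this is only an assumed example of the redundancy it describes.

```python
# Assumed channel code: repeat each bit k times, decode by majority vote.

def channel_encode(bits, k=3):
    """Expand the input so it carries k redundant copies of each bit."""
    return [b for b in bits for _ in range(k)]

def channel_decode(coded, k=3):
    """Recover the original bits by majority vote over each k-group."""
    return [1 if sum(coded[i:i + k]) * 2 > k else 0
            for i in range(0, len(coded), k)]

msg = [1, 0, 1, 1]
coded = channel_encode(msg)   # 3x longer than the input
coded[1] = 0                  # simulate a distortion-induced bit flip
recovered = channel_decode(coded)
```

In the described system, `coded` is what the image watermark encoder embeds; the image-side decoder's (possibly corrupted) output plays the role of the flipped `coded` here, and the channel decoder absorbs those errors.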

    Systems and Techniques for Retraining Models for Video Quality Assessment and for Transcoding Using the Retrained Models
    8.

    Publication number: US20220415039A1

    Publication date: 2022-12-29

    Application number: US17762289

    Application date: 2019-11-26

    Applicant: Google LLC

    Abstract: A trained model is retrained for video quality assessment and used to identify sets of adaptive compression parameters for transcoding user generated video content. Using transfer learning, the model, which is initially trained for image object detection, is retrained for technical content assessment and then again retrained for video quality assessment. The model is then deployed into a transcoding pipeline and used for transcoding an input video stream of user generated content. The transcoding pipeline may be structured in one of several ways. In one example, a secondary pathway for video content analysis using the model is introduced into the pipeline, which does not interfere with the ultimate output of the transcoding should there be a network or other issue. In another example, the model is introduced as a library within the existing pipeline, which would maintain a single pathway, but ultimately is not expected to introduce significant latency.

    Image watermarking
    9.
    Invention grant

    Publication number: US12190403B2

    Publication date: 2025-01-07

    Application number: US17792062

    Application date: 2020-01-13

    Applicant: GOOGLE LLC

    Abstract: Methods, systems, and computer programs encoded on a computer storage medium, that relate to extracting digital watermarks from images, irrespective of distortions introduced into these images. Methods can include inputting a first data item into a channel encoder that can generate a first encoded data item that is greater in length than the first data item and that (1) includes the input data item and (2) new data that is redundant of the input data item. Based on the first encoded data item and a first image, an encoder model can generate a first encoded image into which the first encoded data item is embedded as a digital watermark. A decoder model can decode the first encoded image to generate a second data item, which can be decoded by the channel decoder to generate data that is predicted to be the first data item.

    EVALUATING VISUAL QUALITY OF DIGITAL CONTENT
    10.
    Invention publication

    Publication number: US20240346546A1

    Publication date: 2024-10-17

    Application number: US18584716

    Application date: 2024-02-22

    Applicant: Google LLC

    CPC classification number: G06Q30/0244

    Abstract: Systems, devices, methods, and computer readable medium for evaluating visual quality of digital content are disclosed. Methods can include identifying content assets including one or more images that are combined to create different digital components distributed to one or more client devices. A quality of each of the one or more images is evaluated using one or more machine learning models trained to evaluate one or more visual aspects that are deemed indicative of visual quality. An aggregate quality for the content assets is determined based, at least in part, on an output of the one or more machine learning models indicating the visual quality of each of the one or more images. A graphical user interface of a first computing device is updated to present a visual indication of the aggregate quality of the content assets.
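The aggregation step can be sketched as scoring each image asset with one or more quality models and combining the scores. The abstract only says the aggregate is "based, at least in part, on" the model outputs, so the simple mean used here is an assumption, as are the stand-in model and field names.

```python
# Hedged sketch: aggregate per-image quality scores across assets.

def aggregate_quality(images, models):
    """Mean of every (image, model) quality score; assumed aggregation rule."""
    scores = [m(img) for img in images for m in models]
    return sum(scores) / len(scores)

sharpness = lambda img: img["sharpness"]          # stand-in quality model
assets = [{"sharpness": 0.5}, {"sharpness": 1.0}]
overall = aggregate_quality(assets, [sharpness])  # 0.75
```

The resulting aggregate is what the described system surfaces as the visual indication in the first computing device's graphical interface.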
