Invention Application
- Patent Title: Multi-scale Transformer for Image Analysis
- Application No.: US18999336
- Application Date: 2024-12-23
- Publication No.: US20250124537A1
- Publication Date: 2025-04-17
- Inventors: Junjie Ke, Feng Yang, Qifei Wang, Yilin Wang, Peyman Milanfar
- Applicant: Google LLC
- Applicant Address: Mountain View, CA, US
- Assignee: Google LLC
- Current Assignee: Google LLC
- Current Assignee Address: Mountain View, CA, US
- Main IPC: G06T3/04
- IPC: G06T3/04 ; G06T3/40 ; G06T7/00

Abstract:
The technology employs a patch-based multi-scale Transformer (300) that is usable with various imaging applications. It avoids the fixed-input-size constraint of conventional Transformers and predicts image quality effectively at native resolution. A native-resolution image (304) is transformed into a multi-scale representation (302), enabling the Transformer's self-attention mechanism to capture information from both fine-grained detail patches and coarse-grained global patches. A spatial embedding (316) maps patch positions to a fixed grid, in which patch locations at each scale are hashed to the same grid. A separate scale embedding (318) distinguishes patches coming from different scales in the multi-scale representation. Self-attention (508) is performed to create a final image representation. In some instances, prior to performing self-attention, the system may prepend a learnable classification token (322) to the set of input tokens.
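The pipeline the abstract describes can be sketched as follows. This is a minimal illustrative reading of the claims, not the claimed implementation: the patch size, grid size, embedding dimension, number of scales, and the simplified single-head attention (with no learned query/key/value projections) are all assumptions chosen for brevity, and the "learnable" tables are simply random here.

```python
import numpy as np

rng = np.random.default_rng(0)

PATCH = 32   # patch side length (illustrative, not from the patent)
GRID = 10    # fixed G x G hash grid size (illustrative)
DIM = 64     # token embedding dimension (illustrative)

# In a trained model these would be learned parameters; random for illustration.
spatial_emb = rng.normal(size=(GRID, GRID, DIM))  # spatial embedding on the shared fixed grid
scale_emb = rng.normal(size=(3, DIM))             # one scale embedding per scale
cls_token = rng.normal(size=(1, DIM))             # learnable classification token
proj = rng.normal(size=(PATCH * PATCH * 3, DIM)) / (PATCH * 16)  # patch projection

def patch_tokens(img, scale_idx):
    """Split an image into PATCH x PATCH patches, embed each patch, and add
    the hashed spatial embedding plus the scale embedding."""
    H, W, _ = img.shape
    tokens = []
    for i in range(0, H - PATCH + 1, PATCH):
        for j in range(0, W - PATCH + 1, PATCH):
            flat = img[i:i + PATCH, j:j + PATCH].reshape(-1) @ proj
            # Hash the patch location to the fixed GRID x GRID grid, so that
            # patches from every resolution index the same spatial table.
            gi = min(GRID - 1, i * GRID // H)
            gj = min(GRID - 1, j * GRID // W)
            tokens.append(flat + spatial_emb[gi, gj] + scale_emb[scale_idx])
    return np.stack(tokens)

def self_attention(x):
    """Single-head self-attention without learned Q/K/V, for illustration only."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)
    return w @ x

# A "native resolution" image plus two coarser rescalings. Nearest-neighbour
# subsampling stands in for proper aspect-ratio-preserving resizing.
native = rng.random((128, 160, 3))
scales = [native, native[::2, ::2], native[::4, ::4]]

tokens = np.concatenate([patch_tokens(img, s) for s, img in enumerate(scales)])
tokens = np.concatenate([cls_token, tokens])  # prepend the classification token
out = self_attention(tokens)
image_repr = out[0]  # output at the CLS position serves as the final image representation
```

Because patch positions at every scale hash into the same fixed grid, the spatial table size is independent of the input resolution, which is what lets the model accept native-resolution images of arbitrary size.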