TEXTURE RESIDENCY CHECKS USING COMPRESSION METADATA

    Publication Number: US20190066352A1

    Publication Date: 2019-02-28

    Application Number: US15687108

    Filing Date: 2017-08-25

    Abstract: A pipeline is configured to access a memory that stores a texture block and metadata that encodes compression parameters of the texture block and a residency status of the texture block. A processor requests access to the metadata in conjunction with requesting data in the texture block to perform a shading operation. The pipeline selectively returns the data in the texture block to the processor depending on whether the metadata indicates that the texture block is resident in the memory. A cache can also be included to store a copy of the metadata that encodes the compression parameters of the texture block. The residency status and the metadata stored in the cache can be modified in response to requests to access the metadata stored in the cache.
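
    Illustrative sketch (not taken from the patent): the behavior described above can be pictured as a metadata lookup that gates the texture read, returning block data only when the residency bit is set. All types and names below are invented for illustration.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

// Hypothetical metadata word: compression parameters plus a residency bit,
// mirroring the abstract's metadata that encodes compression parameters
// and a residency status of the texture block.
struct BlockMetadata {
    uint8_t compressionMode;  // which fixed-rate codec was used (illustrative)
    bool    resident;         // true if the texture block is in memory
};

struct TextureBlock {
    std::vector<uint8_t> compressedData;
};

class TexturePipeline {
public:
    // Returns the block's data only when the metadata marks it resident;
    // otherwise the caller would fall back to a fetch/page-in path.
    std::optional<TextureBlock> sample(uint32_t blockId) {
        auto metaIt = metadataCache_.find(blockId);
        if (metaIt == metadataCache_.end() || !metaIt->second.resident) {
            return std::nullopt;  // not resident: skip the data read entirely
        }
        return blocks_.at(blockId);  // resident: return the (compressed) data
    }

private:
    std::unordered_map<uint32_t, BlockMetadata> metadataCache_;
    std::unordered_map<uint32_t, TextureBlock>  blocks_;
};
```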

    COMPRESSING TEXTURE DATA ON A PER-CHANNEL BASIS

    Publication Number: US20240273767A1

    Publication Date: 2024-08-15

    Application Number: US18434185

    Filing Date: 2024-02-06

    CPC classification number: G06T9/00 G06T1/60 G06T2200/04

    Abstract: Sampling circuitry independently accesses channels of texture data that represent a set of pixels. One or more processing units separately compress the channels of the texture data and store compressed data representative of the channels of the texture data for the set of pixels. The channels can include a red channel, a blue channel, and a green channel that represent color values of the set of pixels and an alpha channel that represents degrees of transparency of the set of pixels. Storing the compressed data can include writing the compressed data to portions of a cache. The processing units can identify a subset of the set of pixels that share a value of a first channel of the plurality of channels and represent the value of the first channel over the subset of the set of pixels using information representing the value, the first channel, and boundaries of the subset.
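
    Illustrative sketch (not taken from the patent): per-channel compression as described above, with a simple run-length coder standing in for the unspecified codec; each run records the shared value and the extent of the pixel subset that shares it. All names are invented.

```cpp
#include <array>
#include <cstdint>
#include <utility>
#include <vector>

// Each channel of a pixel tile is compressed on its own, as in the abstract's
// "separately compress the channels of the texture data". A run records a
// shared value and the count of consecutive pixels that share it, i.e. the
// boundaries of the subset.
using ChannelRuns = std::vector<std::pair<uint8_t, uint32_t>>;  // (value, count)

ChannelRuns compressChannel(const std::vector<uint8_t>& channel) {
    ChannelRuns runs;
    for (uint8_t v : channel) {
        if (!runs.empty() && runs.back().first == v) {
            ++runs.back().second;     // extend the run of pixels sharing v
        } else {
            runs.push_back({v, 1});   // start a new run (new subset boundary)
        }
    }
    return runs;
}

// Compress R, G, B and A independently; each channel gets its own run list.
std::array<ChannelRuns, 4> compressTile(
        const std::array<std::vector<uint8_t>, 4>& rgbaChannels) {
    std::array<ChannelRuns, 4> out;
    for (size_t c = 0; c < 4; ++c) {
        out[c] = compressChannel(rgbaChannels[c]);
    }
    return out;
}
```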

    ACTIVE BRIDGE CHIPLET WITH INTEGRATED CACHE

    Publication Number: US20210097013A1

    Publication Date: 2021-04-01

    Application Number: US16585452

    Filing Date: 2019-09-27

    Abstract: A chiplet system includes a central processing unit (CPU) communicably coupled to a first GPU chiplet of a GPU chiplet array. The GPU chiplet array includes the first GPU chiplet communicably coupled to the CPU via a bus and a second GPU chiplet communicably coupled to the first GPU chiplet via an active bridge chiplet. The active bridge chiplet is an active silicon die that bridges GPU chiplets and allows partitioning of systems-on-a-chip (SoC) functionality into smaller functional chiplet groupings.
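
    Illustrative sketch (not taken from the patent): the topology described above, with the CPU wired only to the first GPU chiplet over a bus and the remaining chiplets reachable through an active bridge chiplet that also carries a shared cache. All types are invented.

```cpp
#include <cstddef>
#include <vector>

// Minimal model of the described chiplet system topology.
struct GpuChiplet {
    int id;
};

struct ActiveBridgeChiplet {
    std::size_t sharedCacheBytes;       // integrated cache on the bridge die
    std::vector<GpuChiplet*> linked;    // GPU chiplets joined by the bridge
};

struct ChipletSystem {
    GpuChiplet primary;                 // the only chiplet on the CPU bus
    std::vector<GpuChiplet> secondary;  // reachable only via the bridge
    ActiveBridgeChiplet bridge;
};

// CPU traffic always enters through the primary chiplet; the bridge then
// fans requests out to the other chiplets.
GpuChiplet& routeCpuRequest(ChipletSystem& sys) { return sys.primary; }
```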

    PROCESSING SYSTEM WITH SELECTIVE PRIORITY-BASED TWO-LEVEL BINNING

    Publication Number: US20220156874A1

    Publication Date: 2022-05-19

    Application Number: US17231425

    Filing Date: 2021-04-15

    Abstract: Systems and methods related to priority-based and performance-based selection of a render mode, such as a two-level binning mode, in which to execute workloads with a graphics processing unit (GPU) of a system are provided. A user mode driver (UMD) or kernel mode driver (KMD) executed at a central processing unit (CPU) configures low and medium priority workloads to be executed in a two-level binning mode and selects a binning mode for high priority workloads based on whether performance heuristics indicate that one or more binning conditions or override conditions have been met. High priority workloads are maintained in a high priority queue, while low and medium priority workloads are maintained in a low/medium priority queue, such that execution of low and medium priority workloads at the GPU can be preempted in favor of executing high priority workloads.
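
    Illustrative sketch (not taken from the patent): the priority and queueing policy described above, with invented names and fields; low and medium priority work defaults to two-level binning, high priority work picks its mode from heuristics, and the high priority queue preempts the low/medium queue.

```cpp
#include <deque>
#include <string>

enum class Priority { Low, Medium, High };
enum class BinMode { TwoLevel, SingleLevel };

struct Workload {
    std::string name;
    Priority priority;
};

struct Heuristics {
    bool binningConditionsMet;  // e.g., expected bin utilization is high
    bool overrideRequested;     // explicit override of the default mode
};

BinMode selectMode(const Workload& w, const Heuristics& h) {
    if (w.priority != Priority::High) {
        return BinMode::TwoLevel;                  // default for low/medium
    }
    return (h.binningConditionsMet && !h.overrideRequested)
               ? BinMode::TwoLevel
               : BinMode::SingleLevel;             // avoid binning overhead
}

struct Scheduler {
    std::deque<Workload> highQueue;
    std::deque<Workload> lowMedQueue;

    // High priority work is dispatched first, preempting low/medium work.
    const std::deque<Workload>& nextQueue() const {
        return highQueue.empty() ? lowMedQueue : highQueue;
    }
};
```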

    COMPRESSING TEXTURE DATA ON A PER-CHANNEL BASIS

    Publication Number: US20220092826A1

    Publication Date: 2022-03-24

    Application Number: US17030048

    Filing Date: 2020-09-23

    Abstract: Sampling circuitry independently accesses channels of texture data that represent a set of pixels. One or more processing units separately compress the channels of the texture data and store compressed data representative of the channels of the texture data for the set of pixels. The channels can include a red channel, a blue channel, and a green channel that represent color values of the set of pixels and an alpha channel that represents degrees of transparency of the set of pixels. Storing the compressed data can include writing the compressed data to portions of a cache. The processing units can identify a subset of the set of pixels that share a value of a first channel of the plurality of channels and represent the value of the first channel over the subset of the set of pixels using information representing the value, the first channel, and boundaries of the subset.

    WORKLOAD AWARE VIRTUAL PROCESSING UNITS

    Publication Number: US20230024130A1

    Publication Date: 2023-01-26

    Application Number: US17564166

    Filing Date: 2021-12-28

    Abstract: A processing unit is configured differently based on an identified workload, and each configuration of the processing unit is exposed to software (e.g., to a device driver) as a different virtual processing unit. Using these techniques, a processing system is able to provide different configurations of the processing unit to support different types of workloads, thereby conserving system resources. Further, by exposing the different configurations as different virtual processing units, the processing system is able to use existing device drivers or other system infrastructure to implement the different processing unit configurations.
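
    Illustrative sketch (not taken from the patent): one virtual processing unit per hardware configuration, keyed by workload type, so an existing driver stack can enumerate each configuration as a distinct device. Configurations and device names are invented.

```cpp
#include <cstdint>
#include <map>
#include <string>

enum class WorkloadType { Graphics, Compute, MachineLearning };

struct PuConfig {
    uint32_t activeShaderEngines;
    uint32_t cacheWaysForCompute;
    bool     fixedFunctionGeometryEnabled;
};

struct VirtualProcessingUnit {
    std::string deviceName;  // what the existing driver stack enumerates
    PuConfig    config;
};

// Map an identified workload to the virtual processing unit that exposes the
// matching hardware configuration (values are purely illustrative).
VirtualProcessingUnit exposeFor(WorkloadType type) {
    static const std::map<WorkloadType, VirtualProcessingUnit> table = {
        {WorkloadType::Graphics,        {"vpu-gfx",  {8, 0,  true }}},
        {WorkloadType::Compute,         {"vpu-comp", {8, 16, false}}},
        {WorkloadType::MachineLearning, {"vpu-ml",   {4, 32, false}}},
    };
    return table.at(type);
}
```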

    GRAPHICS PROCESSING UNIT RENDER MODE SELECTION SYSTEM

    Publication Number: US20210287418A1

    Publication Date: 2021-09-16

    Application Number: US17008292

    Filing Date: 2020-08-31

    Abstract: A processor dynamically selects a render mode for each render pass of a frame based on the characteristics of the render pass. A software driver of the processor receives graphics operations from an application executing at the processor and converts the graphics operations into a command stream that is provided to the graphics pipeline. As the driver converts the graphics operations into the command stream, the driver analyzes each render pass of the frame to determine characteristics of the render passes, and selects a render mode for each render pass based on the characteristics of the render pass.
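
    Illustrative sketch (not taken from the patent): per-render-pass mode selection driven by characteristics the driver could collect while building the command stream. Fields and thresholds are invented.

```cpp
#include <cstdint>

enum class RenderMode { Binning, Direct };

// Characteristics of a single render pass, gathered during command-stream
// construction (illustrative fields only).
struct RenderPassStats {
    uint32_t drawCount;
    uint32_t renderTargetBytes;
    bool     depthOnly;
};

RenderMode selectRenderMode(const RenderPassStats& s) {
    // Small or depth-only passes gain little from binning overhead.
    if (s.depthOnly || s.drawCount < 16) {
        return RenderMode::Direct;
    }
    // Large render targets with many draws benefit from on-chip binning.
    return (s.renderTargetBytes > (4u << 20)) ? RenderMode::Binning
                                              : RenderMode::Direct;
}
```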

    DATA FLOW IN A DISTRIBUTED GRAPHICS PROCESSING UNIT ARCHITECTURE

    Publication Number: US20210158599A1

    Publication Date: 2021-05-27

    Application Number: US16698624

    Filing Date: 2019-11-27

    Abstract: An apparatus includes a command buffer configured to temporarily store commands. The apparatus also includes processing units disposed at a substrate. The processing units are configured to access a plurality of copies of a command from the command buffer. The processing units include first processing units (such as fixed function hardware blocks) to perform geometry operations indicated by the command on a set of primitives. The geometry operations are performed concurrently by the first processing units. The processing units also include second processing units (such as shaders) to process mutually exclusive sets of pixels generated by rasterizing the set of primitives. The apparatus also includes a cache to temporarily store the pixels after shading by the shaders. The processing units stop or interrupt processing commands in response to detecting a synchronization point and resume processing the commands once all the processing units have completed the commands that precede the synchronization point.
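
    Illustrative sketch (not taken from the patent): each processing unit walks its own copy of the command stream and waits at a synchronization point until every unit has finished the preceding commands; std::barrier (C++20) stands in for the hardware mechanism, and all names are invented.

```cpp
#include <barrier>
#include <cstdio>
#include <thread>
#include <vector>

enum class CmdType { Geometry, PixelShade, SyncPoint };

int main() {
    // Shared command stream; each unit reads its own copy of every command.
    const std::vector<CmdType> commands = {
        CmdType::Geometry, CmdType::PixelShade, CmdType::SyncPoint,
        CmdType::Geometry, CmdType::PixelShade,
    };
    constexpr int kUnits = 4;
    std::barrier sync(kUnits);

    auto worker = [&](int unit) {
        for (CmdType cmd : commands) {
            if (cmd == CmdType::SyncPoint) {
                sync.arrive_and_wait();  // stop until all units reach the point
            } else {
                std::printf("unit %d executes command\n", unit);
            }
        }
    };

    std::vector<std::thread> units;
    for (int i = 0; i < kUnits; ++i) units.emplace_back(worker, i);
    for (auto& t : units) t.join();
}
```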

    FABRICATING ACTIVE-BRIDGE-COUPLED GPU CHIPLETS

    Publication Number: US20210098419A1

    Publication Date: 2021-04-01

    Application Number: US16585480

    Filing Date: 2019-09-27

    Abstract: Various multi-die arrangements and methods of manufacturing the same are disclosed. In some embodiments, a method of manufacture includes a face-to-face process in which a first GPU chiplet and a second GPU chiplet are bonded to a temporary carrier wafer. A face surface of an active bridge chiplet is bonded to a face surface of the first and second GPU chiplets before mounting the GPU chiplets to a carrier substrate. In other embodiments, a method of manufacture includes a face-to-back process in which a face surface of an active bridge chiplet is bonded to a back surface of the first and second GPU chiplets.
