Abstract:
Prescient cache management methods and systems are disclosed. In one embodiment, a local cache that operates within a raster engine operations stage of a graphics rendering pipeline is managed by following a number of caching decisions related to a number of cached tiles. Each of these cached tiles has a certain priority to remain in the local cache, with the priority corresponding to a conflict type received from a buffer operating within a pre-raster engine operations stage of the graphics rendering pipeline.
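The eviction policy described above can be pictured with a short C++ sketch: a local tile cache whose victims are chosen by a retain-priority derived from the conflict type reported by the pre-raster stage. All identifiers (TileCache, ConflictType, retainPriority) and the specific priority mapping are hypothetical illustrations, not the patented design.

    #include <cstddef>
    #include <cstdint>
    #include <unordered_map>

    // Hypothetical conflict types reported by the pre-raster stage buffer;
    // a "near" conflict suggests the tile will be reused soon.
    enum class ConflictType { None, Far, Near };

    // Priority to remain cached: tiles with near conflicts are evicted last.
    static int retainPriority(ConflictType c) {
        switch (c) {
            case ConflictType::Near: return 2;
            case ConflictType::Far:  return 1;
            default:                 return 0;
        }
    }

    class TileCache {
        struct Entry { ConflictType conflict = ConflictType::None; };
        std::unordered_map<uint64_t, Entry> lines_;  // keyed by tile coordinates
        size_t capacity_;
    public:
        explicit TileCache(size_t capacity) : capacity_(capacity) {}

        void insert(uint64_t tileKey, ConflictType conflict) {
            if (!lines_.empty() && lines_.size() >= capacity_)
                evictLowestPriority();
            lines_[tileKey] = Entry{conflict};
        }
    private:
        void evictLowestPriority() {
            auto victim = lines_.begin();
            for (auto it = lines_.begin(); it != lines_.end(); ++it)
                if (retainPriority(it->second.conflict) <
                    retainPriority(victim->second.conflict))
                    victim = it;
            lines_.erase(victim);
        }
    };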
Abstract:
Prescient cache management methods and systems are disclosed. In one embodiment, within a pre-raster engine operations stage in a graphics rendering pipeline, tile entries are stored in a buffer. Each of these tile entries is related to a transaction request that enters the pre-raster engine operations stage and has a screen coordinates field and a conflict field. If this buffer includes a first tile entry, which is related to a first transaction request associated with a first tile, and a second tile entry, which is related to a second transaction request that enters the pre-raster engine operations stage after the first transaction request and is also associated with the first tile, the conflict field of the first tile entry is updated with a conflict type that reflects the number of tile entries between the first tile entry and the second tile entry.
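A minimal sketch of the buffer update described above, assuming a simple FIFO of tile entries: when a later request hits the same tile as an earlier entry, the earlier entry's conflict field records how many entries sit between the two. The names (TileEntry, onTransaction) and the scan-from-oldest policy are assumptions for illustration.

    #include <cstddef>
    #include <cstdint>
    #include <deque>

    // Hypothetical tile entry as described: screen coordinates plus a
    // conflict field (-1 means no conflict recorded yet).
    struct TileEntry {
        uint32_t screenCoords;  // packed x/y of the tile
        int      conflict;
    };

    // On each incoming transaction request, look for an earlier entry on the
    // same tile; if found, record the number of entries between the two.
    void onTransaction(std::deque<TileEntry>& buffer, uint32_t coords) {
        for (size_t i = 0; i < buffer.size(); ++i) {
            if (buffer[i].screenCoords == coords) {
                buffer[i].conflict = static_cast<int>(buffer.size() - i - 1);
                break;  // assumption: only the oldest matching entry is updated
            }
        }
        buffer.push_back(TileEntry{coords, -1});
    }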
Abstract:
One embodiment of the present invention sets forth a compression status cache configured to store compression information for blocks of memory stored within an external memory. A data cache unit is configured to request, in response to a cache miss, compressed data from the external memory based on compression information stored in the compression status cache. The compression status for active buffers is dynamically swapped into the compression status cache as needed. Different compression formats may be specified for one or more tiles within an active buffer. One advantage of the disclosed compression status cache is that a large amount of attached memory may be allocated as compressible memory blocks without incurring a corresponding die area cost, because only a portion of the compression status stored off chip in attached memory is cached in the compression status cache.
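As a rough model of the mechanism, the C++ sketch below keeps compression status on chip in a small map, fills it from attached memory on demand, and lets a data cache miss size its external read from that status. CompressionStatusCache, the Compression states, and bytesToFetch are hypothetical stand-ins rather than the actual hardware interface.

    #include <cstddef>
    #include <cstdint>
    #include <unordered_map>

    // Hypothetical compression states for a block of external memory.
    enum class Compression { None, Reduced2x, Reduced4x };

    // On-chip cache over compression status that lives in attached (off-chip)
    // memory; a lookup miss is filled from that attached memory.
    class CompressionStatusCache {
        std::unordered_map<uint64_t, Compression> cached_;
    public:
        Compression lookup(uint64_t blockAddr) {
            auto it = cached_.find(blockAddr);
            if (it != cached_.end()) return it->second;          // hit
            Compression c = fetchFromAttachedMemory(blockAddr);  // swap in
            cached_[blockAddr] = c;
            return c;
        }
    private:
        Compression fetchFromAttachedMemory(uint64_t) {
            return Compression::None;  // stand-in for the off-chip read
        }
    };

    // A data cache miss sizes its external read from the compression status.
    size_t bytesToFetch(Compression c, size_t blockSize) {
        switch (c) {
            case Compression::Reduced2x: return blockSize / 2;
            case Compression::Reduced4x: return blockSize / 4;
            default:                     return blockSize;
        }
    }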
Abstract:
One embodiment of the present invention sets forth a technique for performing a memory access request to compressed data within a virtually mapped memory system comprising an arbitrary number of partitions. A virtual address is mapped to a linear physical address, specified by a page table entry (PTE). The PTE is configured to store compression attributes, which are used to locate compression status for a corresponding physical memory page within a compression status bit cache. The compression status bit cache operates in conjunction with a compression status bit backing store. If compression status is available from the compression status bit cache, then the memory access request proceeds using the compression status. If the compression status bit cache misses, then the miss triggers a fill operation from the backing store. After the fill completes, memory access proceeds using the newly filled compression status information.
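The lookup flow reads naturally as code. In the hedged sketch below, a PTE's compression attributes key a small on-chip cache; a miss triggers a fill from the backing store before the access proceeds. PageTableEntry, CompressionStatusBitCache, and readBackingStore are illustrative names, not the actual hardware interface.

    #include <cstdint>
    #include <unordered_map>

    // Hypothetical PTE: the mapping plus compression attributes that locate
    // the compression status bits for the mapped physical page.
    struct PageTableEntry {
        uint64_t physicalPage;
        uint32_t compressionAttrs;
    };

    // Cache over the compression status bit backing store: a hit returns the
    // status directly; a miss triggers a fill from the backing store first.
    class CompressionStatusBitCache {
        std::unordered_map<uint32_t, uint32_t> lines_;  // attrs -> status bits
    public:
        uint32_t statusFor(const PageTableEntry& pte) {
            auto it = lines_.find(pte.compressionAttrs);
            if (it != lines_.end()) return it->second;  // hit: proceed at once
            uint32_t bits = readBackingStore(pte.compressionAttrs);  // fill
            lines_[pte.compressionAttrs] = bits;
            return bits;  // access proceeds with the newly filled status
        }
    private:
        uint32_t readBackingStore(uint32_t) {
            return 0;  // stand-in for the backing-store read
        }
    };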
Abstract:
An infusion system for delivery of therapeutic fluids from a remote source into a patient's body. The system has an infusion assembly, a rotating pivot joint member, a fluid connector assembly, and a sealing assembly retained within the infusion assembly between the housing of the infusion assembly and the rotating pivot joint member. The seal reduces leakage of fluids. The rotating joint may be pivoted to three distinct positions to allow for emplacement on the patient, delivery of the therapeutic fluid to the patient, and protected, sealed closure of the fluid channels to avoid patient fluid backflow.
Abstract:
One embodiment of the present invention sets forth a compression status bit cache with deterministic latency for isochronous memory clients of compressed memory. The compression status bit cache improves overall memory system performance by providing on-chip availability of compression status bits that are used to size and interpret a memory access request to compressed memory. To avoid non-deterministic latency when an isochronous memory client accesses the compression status bit cache, two design features are employed. The first design feature involves bypassing any intermediate cache when the compression status bit cache reads a new cache line in response to a cache read miss, thereby eliminating additional, potentially non-deterministic latencies outside the scope of the compression status bit cache. The second design feature involves maintaining a minimum pool of clean cache lines by opportunistically writing back dirty cache lines and, optionally, temporarily blocking non-critical requests that would dirty already clean cache lines. With clean cache lines available to be overwritten quickly, the compression status bit cache avoids incurring additional miss write back latencies.
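The second design feature, the clean-line pool, can be sketched as a simple policy loop: write back dirty lines whenever the pool of clean lines falls below a minimum, and gate non-critical writes that would shrink it further. The class, the threshold value, and writeBack are assumed for illustration.

    #include <cstddef>
    #include <vector>

    // Per-line bookkeeping for the clean-pool policy: keep at least kMinClean
    // lines clean so a read miss can overwrite one immediately instead of
    // waiting on an in-line write-back.
    struct Line { bool dirty = false; };

    class StatusBitLineArray {
        std::vector<Line> lines_;
        static constexpr size_t kMinClean = 4;  // assumed minimum pool size
    public:
        explicit StatusBitLineArray(size_t n) : lines_(n) {}

        size_t cleanCount() const {
            size_t n = 0;
            for (const Line& l : lines_) if (!l.dirty) ++n;
            return n;
        }

        // Opportunistically write back dirty lines while the pool is low.
        void maintainCleanPool() {
            for (Line& l : lines_) {
                if (cleanCount() >= kMinClean) break;
                if (l.dirty) { writeBack(l); l.dirty = false; }
            }
        }

        // Optionally block a non-critical request that would dirty a clean
        // line while the pool sits at its minimum.
        bool admitNonCriticalWrite() const { return cleanCount() > kMinClean; }
    private:
        void writeBack(Line&) { /* stand-in for the actual write-back */ }
    };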
Abstract:
One embodiment of the invention sets forth a mechanism for efficiently processing atomic operations transmitted from multiple general processing clusters to an L2 cache. A tag look-up unit tracks the availability of each cache line in the L2 cache, reserves the necessary cache lines for the atomic operations and transmits the atomic operations to an ALU for processing. The tag look-up unit also increments a reference counter associated with a reserved cache line each time an atomic operation associated with that cache line is received. This feature allows multiple atomic operations associated with the same cache line to be pipelined to the ALU. A ROP unit that includes the ALU may request additional data necessary to process an atomic operation from the L2 cache. Result data is stored in the L2 cache and may also be returned to the general processing clusters.
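A compact sketch of the reference-counting idea, assuming a per-line counter keyed by address: each arriving atomic increments the count and is dispatched immediately, so atomics to the same line pipeline to the ALU, and the line is released only when the count drains to zero. TagLookupUnit and sendToAlu are hypothetical names.

    #include <cstdint>
    #include <unordered_map>

    // Per-cache-line reference counts: several atomics to the same line stay
    // reserved together and are pipelined to the ALU; the reservation is
    // released only when the count returns to zero.
    class TagLookupUnit {
        std::unordered_map<uint64_t, int> refCount_;  // keyed by line address
    public:
        // An atomic operation arrives for a (reserved) cache line.
        void reserveForAtomic(uint64_t lineAddr) {
            ++refCount_[lineAddr];  // first use reserves; repeats just count
            sendToAlu(lineAddr);    // dispatch without waiting on prior atomics
        }

        // The ALU finished one atomic operation on the line.
        void atomicComplete(uint64_t lineAddr) {
            if (--refCount_[lineAddr] == 0)
                refCount_.erase(lineAddr);  // release the reservation
        }
    private:
        void sendToAlu(uint64_t) { /* stand-in for dispatch to the ROP ALU */ }
    };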
Abstract:
One embodiment of the present invention sets forth a method for implementing ECC protection in an on-chip L2 cache. When data is written to or read from an external memory, logic within the L2 cache is configured to generate ECC check bits and store the ECC check bits in the L2 cache in space typically allocated for storing byte enables. As a result, data stored in the L2 cache may be protected against bit errors without incurring the costs of providing additional storage or complex hardware for the ECC check bits.
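The storage trick can be illustrated with a short sketch: on a fill from external memory, check bits are generated and parked in the field that would otherwise hold byte enables, then verified on read-back. The XOR fold below is a self-contained stand-in for a real SECDED encoder, and all names are hypothetical.

    #include <cstdint>

    // Stand-in for a real SECDED encoder; a simple 8-bit XOR fold keeps the
    // sketch self-contained. A real design would use Hamming-style check bits.
    static uint8_t eccCheckBits(uint64_t data) {
        uint8_t c = 0;
        for (int i = 0; i < 8; ++i) c ^= static_cast<uint8_t>(data >> (8 * i));
        return c;
    }

    // Hypothetical L2 line word: the field that would normally hold byte
    // enables instead holds ECC check bits for externally sourced data.
    struct L2Word {
        uint64_t data;
        uint8_t  byteEnablesOrEcc;  // reused as ECC storage per the abstract
    };

    // Fill from external memory: generate and stash the check bits.
    L2Word fillFromExternal(uint64_t data) {
        return L2Word{data, eccCheckBits(data)};
    }

    // Read path: recompute and compare to detect bit errors.
    bool verify(const L2Word& w) {
        return eccCheckBits(w.data) == w.byteEnablesOrEcc;
    }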
Abstract:
A system for the subcutaneous delivery of a fluid from a remote vessel into the body of a patient. The system includes a main assembly and a placement member with a needle. A delivery tube for carrying the fluid is attached at a near end to the remote reservoir or vessel. At its removed end, the delivery tube has a needle for engagement with the main assembly. The main assembly includes a rotating member that, when rotated perpendicular to the main assembly, accepts the handle and needle for emplacement of the body onto a patient. After the handle and needle are removed, the delivery tube can be attached to the rotating member, which can then be rotated down to a position along and adjacent to the skin of the patient. This provides a flush-mounted infusion device.
Abstract:
Systems and methods are disclosed for managing the number of affirmatively associated cache lines related to the different sets of a data cache unit. A tag look-up unit implements two thresholds, which may be configurable thresholds, to manage the number of cache lines related to a given set that store dirty data or are reserved for in-flight read requests. If the number of affirmatively associated cache lines in a given set is equal to a maximum threshold, the tag look-up unit stalls future requests that require an available cache line within that set to be affirmatively associated. To reduce the number of stalled requests, the tag look-up unit transmits a high-priority clean notification to the frame buffer logic when the number of affirmatively associated cache lines in a given set approaches the maximum threshold. The frame buffer logic then processes requests associated with that set preemptively.
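A minimal sketch of the two-threshold policy, assuming per-set counters and made-up threshold values: requests stall once the set reaches the maximum, and a high-priority clean notification goes out as the count approaches it. SetTracker and sendHighPriorityClean are hypothetical names.

    #include <cstddef>

    // Hypothetical per-set tracker with the two thresholds described above.
    class SetTracker {
        size_t associated_ = 0;  // dirty or reserved (in-flight) lines in the set
        static constexpr size_t kMax = 16;     // assumed maximum threshold
        static constexpr size_t kNotify = 12;  // assumed early-warning threshold
    public:
        // Returns false to stall a request that needs to affirmatively
        // associate another cache line in this set.
        bool tryAssociate() {
            if (associated_ >= kMax) return false;  // stall at the maximum
            if (++associated_ >= kNotify)
                sendHighPriorityClean();  // ask frame buffer logic to clean early
            return true;
        }

        // A line in this set was cleaned or its read completed.
        void release() { if (associated_ > 0) --associated_; }
    private:
        void sendHighPriorityClean() { /* stand-in for the clean notification */ }
    };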