Abstract:
Computer systems with direct updating of the cache memories (e.g., a primary L1 cache) of a processor, such as a central processing unit (CPU) or graphics processing unit (GPU), are provided. Special addresses are reserved for the high-speed memory. Memory access requests involving these reserved addresses are routed directly to the high-speed memory; memory access requests not involving them are routed to memory external to the processor.
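A minimal C sketch of the address-based routing decision described above; the reserved window bounds and the back-end access functions are hypothetical stand-ins, not taken from the abstract.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical reserved window for the on-chip high-speed memory;
 * a real system would program these bounds at configuration time. */
#define HS_MEM_BASE 0xF0000000u
#define HS_MEM_SIZE 0x00010000u   /* 64 KiB */

static bool is_reserved(uint32_t addr)
{
    return addr >= HS_MEM_BASE && addr - HS_MEM_BASE < HS_MEM_SIZE;
}

/* Stub back ends, standing in for the cache and the external bus. */
static void access_high_speed_memory(uint32_t off)  { printf("L1  <- 0x%x\n", off); }
static void access_external_memory(uint32_t addr)   { printf("ext <- 0x%x\n", addr); }

/* Route a request by address: reserved addresses go directly to the
 * high-speed memory, everything else to memory external to the chip. */
void route_request(uint32_t addr)
{
    if (is_reserved(addr))
        access_high_speed_memory(addr - HS_MEM_BASE);
    else
        access_external_memory(addr);
}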
Abstract:
Methods and systems for dynamically adjusting credits used to distribute available bus bandwidth among multiple virtual channels, based on the workload of each virtual channel, are provided. Accordingly, for some embodiments, virtual channels with higher workloads relative to other virtual channels may receive a higher allocation of bus bandwidth (more credits).
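The abstract does not specify the adjustment policy, so the following C sketch assumes one plausible scheme: a fixed credit pool redistributed in proportion to each virtual channel's measured workload, with a small per-channel floor. The constants and the proportional policy are illustrative, not the patent's algorithm.

#include <stdio.h>

#define NUM_VCS       4
#define TOTAL_CREDITS 64
#define MIN_CREDITS   2   /* keep idle channels from starving */

/* Reassign the credit pool in proportion to each virtual channel's
 * workload (e.g., requests queued during the last sampling interval). */
void adjust_credits(const unsigned workload[NUM_VCS],
                    unsigned credits[NUM_VCS])
{
    unsigned total_work = 0;
    for (int i = 0; i < NUM_VCS; i++)
        total_work += workload[i];

    unsigned pool = TOTAL_CREDITS - NUM_VCS * MIN_CREDITS;
    unsigned assigned = 0;
    for (int i = 0; i < NUM_VCS; i++) {
        unsigned share = total_work ? pool * workload[i] / total_work
                                    : pool / NUM_VCS;
        credits[i] = MIN_CREDITS + share;
        assigned += credits[i];
    }

    /* Hand any rounding remainder to the busiest channel. */
    unsigned busiest = 0;
    for (int i = 1; i < NUM_VCS; i++)
        if (workload[i] > workload[busiest]) busiest = i;
    credits[busiest] += TOTAL_CREDITS - assigned;
}

int main(void)
{
    unsigned workload[NUM_VCS] = { 10, 40, 5, 45 };
    unsigned credits[NUM_VCS];
    adjust_credits(workload, credits);
    for (int i = 0; i < NUM_VCS; i++)
        printf("VC%d: %u credits\n", i, credits[i]);
    return 0;
}

With the sample workloads above, the channel handling 45 units of work ends up with roughly half the credits, matching the stated goal that busier channels receive a larger share of bus bandwidth.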
Abstract:
Methods and apparatus for reducing the impact of latency associated with decrypting encrypted data are provided. Rather than wait until an entire packet of encrypted data is validated (e.g., by checking for data transfer errors), the encrypted data may be pipelined to a decryption engine as it is received, thus allowing decryption to begin prior to validation. In some cases, the decryption engine may be notified of data transfer errors detected during the validation process, in order to prevent reporting false security violations.
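A C sketch of the pipelining idea: chunks are handed to a placeholder decryption routine as they arrive, and the engine is notified if post-hoc validation fails so it does not report a false security violation. The XOR "cipher" and byte checksum are stand-ins for the real cryptography and integrity check; all names are hypothetical.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static void decrypt_chunk(const uint8_t *in, uint8_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] ^ 0x5A;          /* placeholder cipher */
}

static void notify_transfer_error(void)
{
    /* Tell the engine the packet failed validation, so any integrity
     * mismatch is treated as a transfer error, not a security violation. */
    puts("decrypt engine: suppressing security-violation report");
}

/* Pipeline chunks into the decryption engine as they arrive, then
 * validate the whole packet (here, a trivial checksum) afterwards. */
bool receive_packet(const uint8_t *chunks, size_t n_chunks,
                    size_t chunk_len, uint8_t expected_sum, uint8_t *out)
{
    uint8_t sum = 0;
    for (size_t c = 0; c < n_chunks; c++) {
        const uint8_t *chunk = chunks + c * chunk_len;
        decrypt_chunk(chunk, out + c * chunk_len, chunk_len); /* starts before validation */
        for (size_t i = 0; i < chunk_len; i++)
            sum += chunk[i];
    }
    if (sum != expected_sum) {          /* validation completes last */
        notify_transfer_error();
        return false;
    }
    return true;
}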
Abstract:
Methods and apparatus that may be utilized in systems to reduce the impact, on non-encrypted data, of the latency associated with encrypting data are provided. Secure and non-secure data may be routed independently. Thus, non-secure data may be forwarded on (e.g., to targeted write buffers) without waiting for previously sent secure data to be encrypted. As a result, non-secure data may be made available for subsequent processing much earlier than in conventional systems that utilize a common data path for both secure and non-secure data.
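A minimal C sketch of the independent routing, assuming a simple dispatcher; the descriptor layout and sink functions are hypothetical illustrations of the two data paths.

#include <stdbool.h>
#include <stdio.h>

struct request {
    bool secure;
    const char *payload;
};

static void forward_to_write_buffer(const struct request *r)
{
    printf("write buffer <- %s (no encryption wait)\n", r->payload);
}

static void enqueue_for_encryption(const struct request *r)
{
    printf("encryption queue <- %s\n", r->payload);
}

/* Route each request down its own path: non-secure data is forwarded
 * immediately, so it never stalls behind secure data that is still
 * being encrypted. */
void dispatch(const struct request *reqs, int n)
{
    for (int i = 0; i < n; i++) {
        if (reqs[i].secure)
            enqueue_for_encryption(&reqs[i]);
        else
            forward_to_write_buffer(&reqs[i]);
    }
}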
Abstract:
A circuit arrangement and method utilize cache injection logic to perform a cache inject and lock operation to inject a cache line in a cache memory and automatically lock the cache line in the cache memory in parallel with communication of the cache line to a main memory. The cache injection logic may additionally limit the maximum number of locked cache lines that may be stored in the cache memory, e.g., by aborting a cache inject and lock operation, injecting the cache line without locking, or unlocking and/or evicting another cache line in the cache memory.
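One possible shape of the inject-and-lock operation in C, assuming a direct-mapped cache and the "inject without locking" fallback when the locked-line budget is exhausted; the abstract also mentions aborting the operation or unlocking/evicting another line as alternatives. All names, the cap, and the index function are illustrative.

#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES  256
#define MAX_LOCKED 8    /* illustrative cap on locked lines */

struct cache_line { uint64_t tag; bool locked; /* ... data ... */ };
struct cache { struct cache_line lines[NUM_LINES]; unsigned locked_count; };

static void write_through_to_memory(uint64_t tag)
{
    (void)tag;   /* placeholder for the parallel write to main memory */
}

/* Inject a line and lock it while the same line is (conceptually)
 * being written through to main memory in parallel. */
void cache_inject_and_lock(struct cache *c, uint64_t tag)
{
    struct cache_line *line = &c->lines[tag % NUM_LINES];

    if (line->locked)
        c->locked_count--;              /* replacing a locked line */

    line->tag = tag;
    write_through_to_memory(tag);

    if (c->locked_count < MAX_LOCKED) {
        line->locked = true;            /* lock as part of the inject */
        c->locked_count++;
    } else {
        line->locked = false;           /* budget exhausted: inject without locking */
    }
}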
Abstract:
By merging or adding the color contributions from objects intersected by secondary rays, the image processing system may accumulate color contributions to pixels as those contributions are determined. Furthermore, by associating a scaling factor for the color contribution with objects and with the secondary rays that intersect them, the color contributions due to secondary ray/object intersections may be calculated later than the color contribution to a pixel from the original ray/object intersection. Consequently, a vector throughput engine or workload manager need not wait for all secondary ray/object intersections to be determined before updating the color of a pixel.
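A minimal C sketch of the scaled accumulation, assuming simple additive blending into a per-pixel accumulator; the struct layout and names are illustrative, not the patent's data structures.

struct color { float r, g, b; };
struct pixel { struct color accum; };

/* Merge a secondary ray's contribution into the pixel as soon as its
 * intersection is resolved, scaled by the factor carried with the ray.
 * Because contributions are added rather than overwritten, they can
 * arrive in any order, and no engine has to wait for all secondary
 * ray/object intersections before the pixel is updated. */
void add_contribution(struct pixel *p, struct color hit_color, float scale)
{
    p->accum.r += hit_color.r * scale;
    p->accum.g += hit_color.g * scale;
    p->accum.b += hit_color.b * scale;
}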