RING TRANSPORT EMPLOYING CLOCK WAKE SUPPRESSION

    Publication Number: US20210116956A1

    Publication Date: 2021-04-22

    Application Number: US16659978

    Application Date: 2019-10-22

    Abstract: An integrated circuit (IC) device includes a ring transport having a plurality of nodes and a wire interconnect coupling the plurality of nodes in a ring. The wire interconnect includes a wire to transmit clock wake signals around the ring transport in advance of data signaling representing a data packet. Each node is to switch from a clock gated state to a clocked state responsive to receiving a clock wake signal. The ring transport further includes a sleep controller coupled to a select node of the plurality of nodes. The sleep controller is to configure the select node into a clock suppression state for a specified duration responsive to identifying an idle condition on the ring transport via monitoring of the wire. While in the clock suppression state, the select node suppresses further transmission of any clock wake signals it receives.
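
    The interaction between the wake wire and the sleep controller can be sketched behaviorally. The following Python model is an illustrative reading of the abstract, not the patented design; all class, method, and attribute names are assumptions:

        class RingNode:
            """One stop on the ring; starts clock-gated and wakes on a wake signal."""
            def __init__(self):
                self.clocked = False        # False models the clock-gated state
                self.suppress_until = 0     # cycle until which wake forwarding stops

            def receive_wake(self, cycle):
                """Wake this node; return True if the signal may be forwarded."""
                self.clocked = True
                return cycle >= self.suppress_until   # suppressed nodes swallow it

        class SleepController:
            """Watches the wake wire at a select node and, after enough idle
            cycles, configures that node into the clock suppression state."""
            def __init__(self, node, idle_threshold, suppress_duration):
                self.node = node
                self.idle_threshold = idle_threshold
                self.suppress_duration = suppress_duration
                self.idle_cycles = 0

            def observe(self, cycle, wake_seen):
                self.idle_cycles = 0 if wake_seen else self.idle_cycles + 1
                if self.idle_cycles >= self.idle_threshold:
                    # Suppress wake forwarding for the specified duration.
                    self.node.suppress_until = cycle + self.suppress_duration
                    self.idle_cycles = 0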

    COHERENCY DIRECTORY ENTRY ALLOCATION BASED ON EVICTION COSTS

    Publication Number: US20200065246A1

    Publication Date: 2020-02-27

    Application Number: US16108696

    Application Date: 2018-08-22

    Abstract: A processor partitions a coherency directory into different regions for different processor cores and manages the number of entries allocated to each region based at least in part on monitored recall costs indicating expected resource costs for reallocating entries. Examples of monitored recall costs include a number of cache evictions associated with entry reallocation, the hit rate of each region of the coherency directory, and the like, or a combination thereof. By managing the entries allocated to each region based on the monitored recall costs, the processor ensures that processor cores associated with denser memory access patterns (that is, memory access patterns that more frequently access cache lines associated with the same memory pages) are assigned more entries of the coherency directory.
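
    The rebalancing loop can be sketched in a few lines of Python. This is a hedged illustration of the abstract, not the patent's implementation; the step size, the cost metric, and every name here are assumptions:

        class CoherencyDirectory:
            """Directory partitioned into per-core regions, resized based on
            monitored recall costs."""
            def __init__(self, total_entries, num_cores):
                self.regions = {c: total_entries // num_cores for c in range(num_cores)}
                self.recall_cost = {c: 0 for c in range(num_cores)}

            def record_recall(self, core, evicted_lines):
                # One monitored cost: cache evictions triggered when an
                # entry in this core's region is reallocated.
                self.recall_cost[core] += evicted_lines

            def rebalance(self, step=1):
                # Shift entries from the cheapest region to the costliest,
                # so cores with denser access patterns keep more entries.
                costly = max(self.recall_cost, key=self.recall_cost.get)
                cheap = min(self.recall_cost, key=self.recall_cost.get)
                if costly != cheap and self.regions[cheap] > step:
                    self.regions[cheap] -= step
                    self.regions[costly] += step
                self.recall_cost = {c: 0 for c in self.recall_cost}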

    RINSING CACHE LINES FROM A COMMON MEMORY PAGE TO MEMORY

    Publication Number: US20190179770A1

    Publication Date: 2019-06-13

    Application Number: US15839089

    Application Date: 2017-12-12

    Abstract: A processing system rinses, from a cache, those cache lines that share the same memory page as a cache line identified for eviction. A cache controller of the processing system identifies a cache line as scheduled for eviction. In response, the cache controller identifies additional “dirty victim” cache lines (cache lines that have been modified at the cache and not yet written back to memory) that are associated with the same memory page, and writes each of the identified cache lines to the same memory page. By writing each of the dirty victim cache lines associated with the memory page to memory, the processing system reduces memory overhead and improves processing efficiency.
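
    The rinse itself reduces to a short routine. A minimal Python sketch follows, assuming a 4 KiB page and a simple dictionary cache model; none of these names come from the patent:

        PAGE_SIZE = 4096   # assumed; the abstract does not fix a page size

        def rinse_on_evict(cache, victim_addr, memory):
            """cache: {line address: (data, dirty)}; memory: {line address: data}.
            Write back the victim plus every other dirty line on its page."""
            page = victim_addr // PAGE_SIZE
            dirty_on_page = [a for a, (_, d) in cache.items()
                             if d and a // PAGE_SIZE == page]
            for addr in dirty_on_page:
                data, _ = cache[addr]
                memory[addr] = data            # one open-page burst of writes
                cache[addr] = (data, False)    # rinsed lines stay cached, now clean
            cache.pop(victim_addr, None)       # only the victim leaves the cache

    Grouping the writebacks this way means the memory page is opened once, rather than once per line when each dirty line is eventually evicted on its own.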

    MEMORY REQUEST THROTTLING TO CONSTRAIN MEMORY BANDWIDTH UTILIZATION

    Publication Number: US20190179757A1

    Publication Date: 2019-06-13

    Application Number: US15838809

    Application Date: 2017-12-12

    Abstract: A processing system includes an interconnect fabric coupleable to a local memory and at least one compute cluster coupled to the interconnect fabric. The compute cluster includes a processor core and a cache hierarchy. The cache hierarchy has a plurality of caches and a throttle controller configured to throttle a rate of memory requests issuable by the processor core based on at least one of an access latency metric and a prefetch accuracy metric. The access latency metric represents an average access latency for memory requests for the processor core and the prefetch accuracy metric represents an accuracy of a prefetcher of a cache of the cache hierarchy.
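
    One plausible reading of the throttle policy, as a Python sketch: the two metrics named in the abstract drive an issue limit, while the thresholds and the back-off rule are assumptions of this illustration:

        class ThrottleController:
            """Caps outstanding memory requests for a core, tightening when
            average access latency rises or the prefetcher turns inaccurate."""
            def __init__(self, max_inflight, latency_limit, accuracy_floor):
                self.max_inflight = max_inflight
                self.latency_limit = latency_limit     # target average latency
                self.accuracy_floor = accuracy_floor   # minimum useful-prefetch ratio
                self.limit = max_inflight

            def update(self, avg_latency, prefetch_accuracy):
                if avg_latency > self.latency_limit or prefetch_accuracy < self.accuracy_floor:
                    self.limit = max(1, self.limit // 2)                  # back off fast
                else:
                    self.limit = min(self.max_inflight, self.limit + 1)   # recover slowly

            def may_issue(self, inflight):
                return inflight < self.limit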

    REDUCING CACHE FOOTPRINT IN CACHE COHERENCE DIRECTORY

    Publication Number: US20190163632A1

    Publication Date: 2019-05-30

    Application Number: US15825880

    Application Date: 2017-11-29

    Abstract: A method includes monitoring, at a cache coherence directory, states of cachelines stored in a cache hierarchy of a data processing system using a plurality of entries of the cache coherence directory. Each entry of the cache coherence directory is associated with a corresponding cache page of a plurality of cache pages, and each cache page represents a corresponding set of contiguous cachelines. The method further includes selectively evicting cachelines from a first cache of the cache hierarchy based on cacheline utilization densities of cache pages represented by the corresponding entries of the plurality of entries of the cache coherence directory.
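
    Utilization density is straightforward to track per entry. A short Python sketch, with an assumed geometry of 64 cachelines per cache page and illustrative names:

        LINES_PER_PAGE = 64   # assumed; one presence bit per contiguous cacheline

        class PageEntry:
            """Directory entry for one cache page."""
            def __init__(self):
                self.present = [False] * LINES_PER_PAGE

            def density(self):
                return sum(self.present) / LINES_PER_PAGE

        def pick_eviction_page(directory):
            """directory: {page number: PageEntry}. Sparse pages, whose few
            cached lines still cost a whole directory entry, are the first
            candidates for selective eviction."""
            return min(directory, key=lambda p: directory[p].density())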

    EXPANDABLE BUFFER FOR MEMORY TRANSACTIONS

    Publication Number: US20190163394A1

    Publication Date: 2019-05-30

    Application Number: US15824539

    Application Date: 2017-11-28

    Abstract: A processing system employs an expandable memory buffer that supports enlarging the memory buffer when the processing system generates a large number of long latency memory transactions. The hybrid structure of the memory buffer allows a memory controller of the processing system to store a larger number of memory transactions while still maintaining adequate transaction throughput and also ensuring a relatively small buffer footprint and power consumption. Further, the hybrid structure allows different portions of the buffer to be placed on separate integrated circuit dies, which in turn allows the memory controller to be used in a wide variety of integrated circuit configurations, including configurations that use only one portion of the memory buffer.
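
    A speculative Python sketch of the hybrid structure, using two queues to stand in for the two buffer portions (which, per the abstract, may sit on separate dies); the sizes, names, and spill policy are assumptions:

        from collections import deque

        class ExpandableBuffer:
            def __init__(self, fast_capacity, overflow_capacity):
                self.fast = deque()        # small, low-latency portion
                self.overflow = deque()    # larger spill portion, possibly off-die
                self.fast_capacity = fast_capacity
                self.overflow_capacity = overflow_capacity

            def push(self, transaction):
                if len(self.fast) < self.fast_capacity:
                    self.fast.append(transaction)
                elif len(self.overflow) < self.overflow_capacity:
                    self.overflow.append(transaction)   # buffer expands under load
                else:
                    raise OverflowError("buffer full; stall the requester")

            def pop(self):
                txn = self.fast.popleft()
                if self.overflow:                       # refill the fast portion
                    self.fast.append(self.overflow.popleft())
                return txn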

    SYSTEMS AND METHOD FOR DELAYED CACHE UTILIZATION

    Publication Number: US20180217931A1

    Publication Date: 2018-08-02

    Application Number: US15936828

    Application Date: 2018-03-27

    Abstract: A system for managing cache utilization includes a processor core, a lower-level cache, and a higher-level cache. In response to activating the higher-level cache, the system counts lower-level cache victims evicted from the lower-level cache. While a count of the lower-level cache victims is not greater than a threshold number, the system transfers each lower-level cache victim to a system memory without storing the lower-level cache victim to the higher-level cache. When the count of the lower-level cache victims is greater than the threshold number, the system writes each lower-level cache victim to the higher-level cache. In this manner, if the higher-level cache is deactivated before the threshold number of lower-level cache victims is reached, the higher-level cache is empty and thus may be deactivated without flushing.
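
    The policy is compact enough to state directly in code. A minimal Python sketch, with illustrative names only:

        class DelayedCacheFilter:
            """Routes lower-level victims straight to memory until the
            threshold count is crossed, then starts filling the
            higher-level cache."""
            def __init__(self, threshold):
                self.threshold = threshold
                self.victims_seen = 0   # reset when the higher-level cache activates

            def on_victim(self, victim, higher_cache, memory):
                """higher_cache and memory: any containers with append()."""
                self.victims_seen += 1
                if self.victims_seen <= self.threshold:
                    memory.append(victim)        # bypass; higher-level cache stays empty
                else:
                    higher_cache.append(victim)  # past the threshold, start caching

    Because victims bypass the higher-level cache until the threshold is crossed, deactivating that cache early finds it empty, so no flush is needed; that is the point of the delay.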
