Abstract:
A method and system for allocating data streams that includes receiving, at an allocator, a data stream. The data stream includes a memory address and data associated with the memory address. The method also includes examining, by the allocator, the data stream to make a determination that the data stream is a soft allocating data stream, and then sending, from the allocator based on the determination, a plurality of write probes to a plurality of caches, wherein each write probe of the plurality of write probes includes at least part of the memory address. Additionally, the method includes receiving, at the allocator in response to a write probe of the plurality of write probes, a cache line present acknowledgement from a cache of the plurality of caches, and directing, by the allocator in response to the cache line present acknowledgement, the data of the data stream to the cache.
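The soft-allocation flow described in this abstract can be illustrated with a minimal Python sketch. All names here (Allocator, Cache, probe, soft_allocating, the fallback write to memory) are hypothetical illustrations chosen for readability, not terms or behavior taken from the patent; the sketch simply shows an allocator probing every cache with the stream's address and directing the data to the cache that acknowledges holding the line.

```python
# Minimal sketch (hypothetical names) of the soft-allocation flow described
# above: the allocator probes every cache with the stream's address and
# directs the data to whichever cache acknowledges that the line is present.

class Cache:
    def __init__(self, name):
        self.name = name
        self.lines = {}          # address -> data

    def probe(self, address):
        """Return True (a 'cache line present' acknowledgement) if the
        address is already cached here."""
        return address in self.lines

    def write(self, address, data):
        self.lines[address] = data


class Allocator:
    def __init__(self, caches, memory):
        self.caches = caches
        self.memory = memory     # fallback destination in this simplified model

    def allocate(self, stream):
        address, data = stream["address"], stream["data"]
        if stream.get("soft_allocating"):
            # Send a write probe (carrying at least part of the address) to
            # every cache; direct the data to the first cache that
            # acknowledges the line is present.
            for cache in self.caches:
                if cache.probe(address):
                    cache.write(address, data)
                    return cache.name
        # No cache holds the line (or the stream is not soft allocating):
        # fall back to memory in this simplified model.
        self.memory[address] = data
        return "memory"


caches = [Cache("L2-0"), Cache("L2-1")]
caches[1].lines[0x1000] = b"old"
allocator = Allocator(caches, memory={})
print(allocator.allocate({"address": 0x1000, "data": b"new", "soft_allocating": True}))
# -> "L2-1"
```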
Abstract:
A method for removal of an offlining cache agent, including: initiating an offlining of the offlining cache agent from communicating with a plurality of participating cache agents while a first transaction is in progress; setting, based on initiating the offlining, an ignore response indicator corresponding to the offlining cache agent on each of the plurality of participating cache agents; offlining, based on setting the ignore response indicator, the offlining cache agent; and ignoring, based on setting the ignore response indicator, a first response to the first transaction from the offlining cache agent.
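A minimal Python sketch of the offlining flow described above follows; the class and method names (CacheAgent, set_ignore_indicator, offline_agent) and the data structures are assumptions for illustration, not the patent's terminology. It shows each participating agent recording an ignore-response indicator for the agent being offlined, so a late response from that agent to an in-flight transaction is dropped rather than processed.

```python
# Minimal sketch (hypothetical names): participating agents set an
# "ignore response" indicator for the offlining agent, the agent is taken
# offline, and its responses to in-progress transactions are ignored.

class CacheAgent:
    def __init__(self, name):
        self.name = name
        self.ignore_responses_from = set()   # ignore-response indicators
        self.collected_responses = {}        # transaction id -> [responses]

    def set_ignore_indicator(self, agent_name):
        self.ignore_responses_from.add(agent_name)

    def receive_response(self, transaction_id, sender_name, response):
        if sender_name in self.ignore_responses_from:
            return  # response from the offlining/offlined agent is dropped
        self.collected_responses.setdefault(transaction_id, []).append(response)


def offline_agent(offlining_agent, participating_agents):
    """Offline 'offlining_agent' while transactions may still be in flight."""
    for agent in participating_agents:
        agent.set_ignore_indicator(offlining_agent.name)
    # With the indicators set, the agent can be taken offline; any response
    # it already sent for an in-progress transaction will simply be ignored.
    return True


a, b, leaving = CacheAgent("A"), CacheAgent("B"), CacheAgent("C")
offline_agent(leaving, [a, b])
a.receive_response(1, "C", "late-ack")   # dropped
a.receive_response(1, "B", "ack")        # kept
print(a.collected_responses)             # {1: ['ack']}
```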
Abstract:
A method for cache coherence, including: broadcasting, by a requester cache (RC) over a partially-ordered request network (RN), a peer-to-peer (P2P) request for a cacheline to a plurality of slave caches; receiving, by the RC and over the RN while the P2P request is pending, a forwarded request for the cacheline from a gateway; receiving, by the RC and after receiving the forwarded request, a plurality of responses to the P2P request from the plurality of slave caches; setting an intra-processor state of the cacheline in the RC, wherein the intra-processor state also specifies an inter-processor state of the cacheline; issuing, by the RC, a response to the forwarded request after setting the intra-processor state and after the P2P request is complete; and modifying, by the RC, the intra-processor state in response to issuing the response to the forwarded request.
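The ordering this abstract describes can be sketched in a few lines of Python; the class name RequesterCache, the MESI-like state letters, and the queue-then-service handling of the forwarded request are illustrative assumptions, not details from the patent. The sketch shows the requester cache holding a forwarded request that arrives while its P2P request is pending, setting its cacheline state once all slave responses arrive, and only then servicing the forwarded request and downgrading its state.

```python
# Minimal sketch (hypothetical names) of the ordering described above.

class RequesterCache:
    def __init__(self, num_slaves):
        self.num_slaves = num_slaves
        self.p2p_responses = []
        self.pending_forwarded_request = None
        self.requested_line = None
        self.state = "I"   # combined intra-/inter-processor state, MESI-like

    def broadcast_p2p_request(self, cacheline):
        # Request goes out to all slave caches over the request network.
        self.requested_line = cacheline

    def receive_forwarded_request(self, request):
        # Arrives from the gateway while the P2P request is still pending:
        # hold it until the P2P request completes.
        self.pending_forwarded_request = request

    def receive_p2p_response(self, response):
        self.p2p_responses.append(response)
        if len(self.p2p_responses) == self.num_slaves:
            self.state = "M"             # set intra-processor state
            return self._service_forwarded_request()
        return None

    def _service_forwarded_request(self):
        if self.pending_forwarded_request is None:
            return None
        reply = ("data", self.requested_line)
        self.state = "S"                 # modify state after issuing the response
        self.pending_forwarded_request = None
        return reply


rc = RequesterCache(num_slaves=2)
rc.broadcast_p2p_request(0x40)
rc.receive_forwarded_request({"addr": 0x40})
rc.receive_p2p_response("ack-0")
print(rc.receive_p2p_response("ack-1"), rc.state)   # ('data', 64) S
```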
Abstract:
A method and system for dynamic flow control using credit sharing that includes allocating portions of credits to senders, wherein each of the credits is for communicating with a receiver; transmitting, by a first sender of the senders, a first message to the receiver using a first credit of a first portion of the credits; decrementing, in response to transmitting the first message, a credit balance of the first sender by one; and determining that the credit balance of the first sender is zero. The method also includes sending to a second sender of the senders, by the first sender, in response to the credit balance being zero, a first request for a second credit; receiving from the second sender, in response to the first request, a first response comprising the second credit; and transmitting, by the first sender, a second message to the receiver using the second credit.
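A minimal Python sketch of the credit-sharing flow described above follows; the Sender class, the request_credit borrow step, and the single-peer loop are illustrative assumptions rather than the patent's design. It shows each sender spending one credit per message to the receiver and, when its balance reaches zero, borrowing a credit from a peer before transmitting.

```python
# Minimal sketch (hypothetical names) of credit-sharing flow control: each
# sender spends credits to talk to the receiver, and a sender whose balance
# reaches zero borrows a credit from a peer before sending.

class Sender:
    def __init__(self, name, credits):
        self.name = name
        self.credits = credits           # this sender's portion of the credits

    def request_credit(self):
        """Peer-to-peer request: give up one credit if any are spare."""
        if self.credits > 0:
            self.credits -= 1
            return 1
        return 0

    def send(self, receiver, message, peers):
        if self.credits == 0:
            # Out of credits: ask peers for one before transmitting.
            for peer in peers:
                self.credits += peer.request_credit()
                if self.credits > 0:
                    break
        if self.credits == 0:
            raise RuntimeError("no credits available anywhere")
        self.credits -= 1                # one credit consumed per message
        receiver.append((self.name, message))


receiver = []
s1, s2 = Sender("S1", credits=1), Sender("S2", credits=3)
s1.send(receiver, "msg-1", peers=[s2])  # uses S1's own credit
s1.send(receiver, "msg-2", peers=[s2])  # borrows a credit from S2
print(receiver, s2.credits)             # [('S1', 'msg-1'), ('S1', 'msg-2')] 2
```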