Abstract:
A multi-clock pulse synchronizer circuit with an IN-section for receiving and storing prescribed in-pulses and input clock signals and responsively outputting intermediate pulses; and an OUT-section for receiving and storing the intermediate pulses, synchronous with certain output clock signals, and processing them to generate certain output signals, for better avoiding metastability.
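The abstract describes hardware, but the clock-domain-crossing behavior is easy to illustrate in software. Below is a minimal Python sketch of one conventional realization, a toggle-based pulse synchronizer with a two-flop OUT-section; the class and signal names are illustrative assumptions and are not taken from the patent, which may claim a different circuit.

    class PulseSynchronizer:
        """Behavioral model of a toggle-based pulse synchronizer (illustrative).

        IN-section: each input pulse flips a toggle flip-flop in the input
        clock domain.  OUT-section: the toggle level passes through two
        flip-flops in the output clock domain (reducing metastability risk),
        and an edge detector re-creates a single-cycle output pulse.
        """

        def __init__(self):
            self.toggle = 0   # IN-section toggle flip-flop
            self.sync1 = 0    # OUT-section first synchronizing flop
            self.sync2 = 0    # OUT-section second synchronizing flop
            self.prev = 0     # previous sync2 value, for edge detection

        def in_clock_edge(self, in_pulse):
            """Called on each rising edge of the input clock."""
            if in_pulse:
                self.toggle ^= 1          # store the pulse as a level change

        def out_clock_edge(self):
            """Called on each rising edge of the output clock; returns the out-pulse."""
            self.prev = self.sync2
            self.sync2 = self.sync1
            self.sync1 = self.toggle
            return self.sync2 ^ self.prev  # high for exactly one output cycle

    # One input pulse eventually yields exactly one output pulse.
    ps = PulseSynchronizer()
    ps.in_clock_edge(1)
    assert sum(ps.out_clock_edge() for _ in range(4)) == 1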
Abstract:
In a multiprocessor computer system where a number of "agents" can compete for access to a "resource", a method of ameliorating "Livelock" and preventing any such agent from being unduly frustrated in gaining such access, the method comprising: arranging the system to include an arbitrating unit and a common system bus connecting a number of processors, plus an Avoidance unit included in each processor and comprising a Random-Number generator and automatic "Random Backoff" that causes an agent that fails to secure access to wait for one or more given random time periods T^B before reattempting such access, with each said time period T^B being provided by the random number generator so that it is likely to differ from the periods chosen by competing agents.
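The backoff behavior described above can be stated concretely. The following is a minimal Python sketch of a random-backoff retry loop under the stated idea that each failed agent waits a randomly drawn period T^B; the function names, attempt limit, and slot duration are illustrative assumptions, and random.randint stands in for the per-processor hardware random-number generator.

    import random
    import time

    def acquire_with_random_backoff(try_acquire, max_attempts=10, slot_seconds=0.001):
        """Retry try_acquire until it succeeds, waiting a random period T^B
        between failed attempts so that competing agents are unlikely to
        retry at the same moment (livelock avoidance)."""
        for _ in range(max_attempts):
            if try_acquire():
                return True
            # Each agent draws its own random backoff, so agents that just
            # collided are likely to retry at different times.
            t_b = random.randint(1, 8) * slot_seconds
            time.sleep(t_b)
        return False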
Abstract:
Described are techniques to stabilize storage devices receiving signals from plural asynchronous clocks, especially to avoid "metastability"; in particular, a multi-clock pulse synchronizer circuit with an IN-section for receiving and storing prescribed in-pulses and input clock signals, and for responsively outputting intermediate pulses; and an OUT-section for receiving and storing the intermediate pulses, synchronous with certain output clock signals, and processing them to generate certain output signals which avoid metastability.
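For background on why multi-flop synchronization reduces metastability risk, the textbook mean-time-between-failures estimate for a synchronizing flop can be evaluated directly; this formula is standard device-characterization background, not taken from the abstract, and the numbers below are purely illustrative.

    import math

    def synchronizer_mtbf(t_r, tau, t_w, f_clk, f_data):
        """Textbook metastability estimate for one synchronizing flop:
        MTBF = exp(t_r / tau) / (t_w * f_clk * f_data),
        where t_r is the settling time allowed, tau and t_w are device
        constants, and f_clk, f_data are clock and data rates (Hz)."""
        return math.exp(t_r / tau) / (t_w * f_clk * f_data)

    # Illustrative numbers only: 100 MHz clock, 10 MHz data rate,
    # 5 ns settling time, tau = 200 ps, t_w = 100 ps.
    print(synchronizer_mtbf(5e-9, 200e-12, 100e-12, 100e6, 10e6))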
Abstract:
A method of arranging and operating a cache in a multi-processor computer system with N local processors, in which a requesting device can request a cycle to be issued; the method involves "posting" the cycles, storing the information needed to complete a cycle in a Queue, and issuing "termination" to the requesting device immediately, rather than making it wait for the cycle to reach its destination.
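The "posting" mechanism amounts to completing the requester's handshake at once and letting a queue carry the cycle toward its destination later. The Python sketch below shows this idea; the class and method names are illustrative assumptions, not terminology from the patent.

    from collections import deque

    class PostedCycleQueue:
        """Posts cycles: the requester receives termination immediately,
        while the information needed to complete the cycle is stored in a
        queue and drained toward the destination later."""

        def __init__(self):
            self.pending = deque()

        def issue(self, address, data):
            """Called by the requesting device; returns termination at once."""
            self.pending.append((address, data))  # store completion info
            return "terminated"                   # requester does not wait

        def drain_one(self, destination):
            """Called later, e.g. by bus logic, to move a posted cycle on."""
            if self.pending:
                address, data = self.pending.popleft()
                destination(address, data)

    # The requester gets "terminated" before the write reaches its target.
    q = PostedCycleQueue()
    status = q.issue(0x1000, 0xFF)
    q.drain_one(lambda a, d: print(hex(a), hex(d)))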
Abstract:
In a multiprocessor computer system where a number of "agents" can compete for access to a "resource", a method of ameliorating "Livelock" and preventing any such agent from being unduly frustrated in gaining such access, the method comprising: arranging the system to include an arbitrating unit and a common system bus connecting a number of processors, plus an Avoidance unit included in each processor and comprising a Random-Number generator and automatic "Random Backoff" that causes an agent that fails to secure access to wait for one or more given random time periods T_B before reattempting such access, with each said time period T_B being provided by the random number generator so that it is likely to differ from the periods chosen by competing agents.
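The abstract leaves the Random-Number generator itself unspecified; in hardware, a unit of this kind is commonly built around a linear-feedback shift register (LFSR). The sketch below is a generic 16-bit Fibonacci LFSR in Python, offered only as one plausible way each agent could produce its own T_B values; the taps, seed, and scaling are assumptions, not details from the patent.

    def lfsr16(seed, taps=(16, 14, 13, 11)):
        """Generator yielding pseudo-random 16-bit values from a Fibonacci
        LFSR with taps 16, 14, 13, 11 (a standard maximal-length tap set).
        Each agent would seed it differently, e.g. from its processor ID."""
        state = seed & 0xFFFF
        while True:
            bit = 0
            for t in taps:
                bit ^= (state >> (t - 1)) & 1
            state = ((state << 1) | bit) & 0xFFFF
            yield state

    # Example: derive a backoff period T_B (here in bus-clock units).
    gen = lfsr16(seed=0xACE1)      # any non-zero seed
    t_b = (next(gen) % 64) + 1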
Abstract:
A method and apparatus for accelerated shared data migration between cores. Using an Always Migrate protocol, when a migratory probe hits a directory entry in either modified or owned state, the entry is transitioned to an owned state, and a source done command is sent without sending cache block ownership or state information to the directory.
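The state transition the abstract describes can be made explicit. Below is a minimal Python sketch of the directory-side decision, assuming MOESI-style states; the names DirState and handle_migratory_probe are illustrative and not taken from the patent.

    from enum import Enum

    class DirState(Enum):
        INVALID = "I"
        SHARED = "S"
        OWNED = "O"
        MODIFIED = "M"

    def handle_migratory_probe(entry_state):
        """Always Migrate: when a migratory probe hits an entry in Modified
        or Owned state, leave the entry Owned and answer with a source done
        command that carries no cache block ownership or state information
        back to the directory (the data migrates directly to the requester)."""
        if entry_state in (DirState.MODIFIED, DirState.OWNED):
            return DirState.OWNED, "source_done"   # no state info sent back
        return entry_state, "normal_response"

    # A probe hitting a Modified entry leaves the directory entry Owned.
    print(handle_migratory_probe(DirState.MODIFIED))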
Abstract:
In an embodiment, a node comprises a packet scheduler configured to schedule packets to be transmitted on a link and an interface circuit coupled to the packet scheduler and configured to transmit the packets on the link. The interface circuit is configured to generate error detection data covering the packets, wherein the error detection data is transmitted between packets on the link. The interface circuit is configured to cover up to N packets with one transmission of error detection data, where N is an integer >=2. The number of packets covered with one transmission of error detection data is determined by the interface circuit dependent on an availability of packets to transmit. In another embodiment, the interface circuit is configured to dynamically vary a frequency of transmission of the error detection data on the link based on an amount of bandwidth being consumed on the link.
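The per-link error detection scheme can be sketched in software. The Python model below accumulates up to N packets and then emits one error-detection word covering them, emitting it earlier when the scheduler has no further packets available; CRC-32 and the class and method names are assumptions, since the abstract does not name the error-detection code.

    import zlib

    class LinkTransmitter:
        """Covers up to N packets with one transmission of error detection data."""

        def __init__(self, n_max=2):
            self.n_max = n_max      # N >= 2 in the abstract
            self.covered = []       # packets transmitted since the last CRC

        def send(self, packet, more_available):
            """Transmit a packet; append a CRC once N packets are covered,
            or immediately when no further packets are available."""
            out = [("packet", packet)]
            self.covered.append(packet)
            if len(self.covered) == self.n_max or not more_available:
                crc = zlib.crc32(b"".join(self.covered)) & 0xFFFFFFFF
                out.append(("crc", crc))
                self.covered = []   # the next CRC covers the next group
            return out

    # Two back-to-back packets share one CRC; a lone packet gets its own.
    tx = LinkTransmitter(n_max=2)
    tx.send(b"pkt0", more_available=True)    # no CRC yet
    tx.send(b"pkt1", more_available=True)    # CRC covering pkt0 and pkt1
    tx.send(b"pkt2", more_available=False)   # CRC emitted immediately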
Abstract:
Techniques for improving cache latency include distributing cache lines across regions of the cache having various latencies. The latencies of the regions may vary as a function of the distance between an individual region of the cache and a cache controller. The cache controller may predict an addressable unit of interest for a next access to a data line stored in a cache line. The predicted addressable unit of interest is stored in the region of the cache having the lowest latency as compared to other regions of the cache. The addressable unit of interest may be the most-recently used addressable unit, the addressable unit sequentially following the most-recently used addressable unit, or an addressable unit determined by some other criterion. The invention contemplates using at least one control bit to indicate which addressable unit is stored in the region having the lowest latency.
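A small model makes the placement policy concrete. The Python sketch below splits a cache line's addressable units across a low-latency and a higher-latency region and keeps a small control field recording which unit currently sits in the fast region; the names are illustrative, and the prediction shown is the simple most-recently-used rule mentioned in the abstract.

    class NonUniformCacheLine:
        """Cache line whose addressable units sit in regions of different
        latency; the predicted unit of interest is kept in the lowest-latency
        region, tracked by a small control field (the 'control bits')."""

        def __init__(self, units):
            self.units = list(units)   # addressable units (e.g. words)
            self.fast_index = 0        # control bits: which unit is in the fast region

        def access(self, index):
            latency = "fast" if index == self.fast_index else "slow"
            value = self.units[index]
            # Predict the next access as the most-recently-used unit; the
            # sequentially-following unit (index + 1) is another criterion.
            self.fast_index = index
            return value, latency

    line = NonUniformCacheLine([10, 20, 30, 40])
    print(line.access(2))   # (30, 'slow'); unit 2 is promoted to the fast region
    print(line.access(2))   # (30, 'fast'); now served from the low-latency region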