Abstract:
A system for time-ordered execution of load instructions. More specifically, the system enables just-in-time delivery of data requested by a load instruction. The system consists of a processor, an L1 data cache with a corresponding L1 cache controller, and an instruction processor. The instruction processor manipulates a plurality of architected time dependency fields of a load instruction to create a plurality of dependency fields. The dependency fields hold relative dependency values which are utilized to order the load instruction in a Relative Time-Ordered Queue (RTOQ) of the L1 cache controller. The load instruction is sent from the RTOQ to the L1 data cache at a particular time so that the requested data is loaded from the L1 data cache at the time specified by one of the dependency fields. The dependency fields are prioritized so that the cycle corresponding to the highest-priority available field is utilized.
Abstract:
A system for time-ordered execution of load instructions. More specifically, the system enables just-in-time delivery of data requested by a load instruction. The system consists of a processor, an L1 data cache with a corresponding L1 cache controller, and an instruction processor. The instruction processor manipulates an architected time dependency bit field of a load instruction to create a Distance of Dependency (DoD) bit field. The DoD bit field holds a relative dependency value which is utilized to order the load instruction in a Relative Time-Ordered Queue (RTOQ) of the L1 cache controller. The load instruction is sent from the RTOQ to the L1 data cache at a particular time so that the requested data is loaded from the L1 data cache at the time specified by the DoD bit field. In the preferred embodiment, an acknowledgement is sent to the processing unit when the specified time is available in the RTOQ.
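The queue mechanism in the two abstracts above can be sketched as a priority queue keyed on the cycle at which each load's data is needed. This is a minimal illustration, not the patent's hardware design; the class and method names (`RTOQ`, `enqueue`, `pop_ready`) are assumptions.

```python
import heapq

class RTOQ:
    """Sketch of a Relative Time-Ordered Queue: load requests are
    ordered by the cycle at which their data is needed (present
    cycle plus the relative dependency value) and released only
    when that cycle arrives."""

    def __init__(self):
        self._heap = []   # entries: (target_cycle, seq, request)
        self._seq = 0     # tie-breaker for requests due the same cycle

    def enqueue(self, current_cycle, dod_value, request):
        # The dependency field gives the distance, in cycles, to the
        # point where the downstream dependency needs the data.
        target = current_cycle + dod_value
        heapq.heappush(self._heap, (target, self._seq, request))
        self._seq += 1
        return target

    def pop_ready(self, current_cycle):
        # Issue every load whose target cycle has been reached.
        ready = []
        while self._heap and self._heap[0][0] <= current_cycle:
            ready.append(heapq.heappop(self._heap)[2])
        return ready

q = RTOQ()
q.enqueue(current_cycle=10, dod_value=5, request="load A")  # due at cycle 15
q.enqueue(current_cycle=10, dod_value=2, request="load B")  # due at cycle 12
```

Ordering by target cycle rather than arrival order is what lets the later-arriving "load B" issue first, matching the just-in-time delivery goal described above.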
Abstract:
A method for converting a distance of dependency (DoD) value to a cycle of dependency (CoD) value is disclosed. The method comprises the steps of (i) simulating a dependency system timer (DST) on a data processing system, the DST having a present time measured in cycles and a period, (ii) adding a DoD value of N bits to the present time and taking the least significant N bits of the sum as a resulting time, and (iii) creating the CoD value by appending the carry-over of the adding step to the resulting time, where, when the carry-over is nonzero, it signals a user of the CoD value to wait until the next period before applying the CoD value. In one embodiment, the carry-over value is added to the value corresponding to a respective alternating period to yield an even/odd bit which determines the period.
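Steps (ii) and (iii) above are ordinary modular arithmetic and can be sketched in a few lines. The function name and return convention are assumptions for illustration only.

```python
def dod_to_cod(present_time, dod, n_bits):
    """Sketch of the DoD-to-CoD conversion: add the N-bit DoD value
    to the DST's present time, keep the least significant N bits of
    the sum as the resulting time, and append the carry-over as the
    most significant bit of the CoD value."""
    total = present_time + dod
    mask = (1 << n_bits) - 1
    resulting_time = total & mask     # least significant N bits
    carry_over = total >> n_bits      # nonzero: wait for the next period
    cod = (carry_over << n_bits) | resulting_time
    return cod, carry_over

# With a 4-bit timer: present time 14 + DoD 3 = 17, so the resulting
# time wraps to 1 and the carry-over of 1 flags the next period.
cod, carry = dod_to_cod(present_time=14, dod=3, n_bits=4)
```

The carry-over bit is exactly the "wait until the next period" signal described in step (iii): a zero carry means the CoD falls within the current timer period.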
Abstract:
A method for ordering the time at which a load instruction is issued from a lower level (L2) cache controller to its L2 cache in a data processing system, to enable delivery of load data at the time it is required by its downstream dependency, is disclosed. The method comprises the steps of (i) determining a cycle of dependency (CoD) of the load data, where the CoD corresponds to an exact synchronized timer (ST) time, measured in cycles, at which said data is required by said downstream dependency from the L2 cache, and (ii) issuing the load instruction to said L2 cache at said time to synchronize the providing of said data to a pipeline of a system resource with the request by its downstream dependency. In the preferred embodiment of the invention, a distance of dependency (DoD) value is first appended to the load instruction. The DoD value is then converted to a CoD value when a miss occurs at the internal (L1) cache.
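Step (ii) above, issuing each load exactly when the synchronized timer reaches its CoD, can be sketched as a cycle-by-cycle walk. The function and its argument format are illustrative assumptions, not the patent's implementation.

```python
def issue_when_due(loads, st_start, st_end):
    """Sketch of CoD-timed issue: step through synchronized timer (ST)
    cycles and issue each pending load to the L2 cache exactly when
    the ST reaches that load's CoD value, so the data arrives when its
    downstream dependency needs it.  `loads` maps a load tag to its
    CoD, an absolute ST cycle."""
    issued = []
    pending = dict(loads)
    for cycle in range(st_start, st_end + 1):
        for tag, cod in list(pending.items()):
            if cod == cycle:
                issued.append((cycle, tag))   # issue at the exact ST time
                del pending[tag]
    return issued
```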
Abstract:
A system for time-ordered issuance of instruction fetch requests (IFRs). More specifically, the system enables just-in-time delivery of instructions requested by an IFR. The system consists of a processor, an L1 instruction cache with a corresponding L1 cache controller, and an instruction processor. The instruction processor manipulates an architected time dependency field of an IFR to create a Time of Dependency (ToD) field. The ToD field holds a time dependency value which is utilized to order the IFR in a Relative Time-Ordered Queue (RTOQ) of the L1 cache controller. The IFR is issued from the RTOQ to the L1 instruction cache so that the requested instruction is fetched from the L1 instruction cache at the time specified by the ToD value. In an alternate embodiment, the ToD is converted to a CoD and the instruction is fetched from a lower level cache at the CoD value.
Abstract:
A system which permits dynamic verification of the availability of a desired time at which to load data requested by a load instruction. The system comprises (i) means for appending a time dependency value to the load instruction, where the time dependency value corresponds to the desired time, (ii) means for verifying that said desired time is available for loading said data, and (iii) means for sending an acknowledgement (ACK) when the desired time is available, where a processor reserves the system resources for accepting the data at the desired time in response to the ACK.
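The verify-then-ACK handshake above amounts to checking whether a cycle slot is still free and reserving it on success. A minimal sketch, assuming one load can be serviced per cycle; the class and method names are invented for illustration.

```python
class SlotScheduler:
    """Sketch of dynamic availability verification: the cache
    controller acknowledges a request only if the desired cycle is
    still unreserved, and the reservation made on ACK is what lets
    the processor commit resources to accept the data at that time."""

    def __init__(self):
        self._reserved = set()   # cycles already promised to other loads

    def request(self, desired_cycle):
        # Returns True (ACK) and reserves the slot, or False (no ACK),
        # in which case the requester must try a different time.
        if desired_cycle in self._reserved:
            return False
        self._reserved.add(desired_cycle)
        return True
```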
Abstract:
In response to a need to initiate one or more global operations, a bus master within a multiprocessor system issues a combined token and operation request in a single bus transaction on a bus coupled to the bus master. The combined token and operation request solicits the single existing token required to complete global operations within the multiprocessor system and identifies the first of the global operations to be processed with the token, if granted. Once a bus master is granted the token, no other bus master will be granted the token until the current token owner explicitly requests release. The current token owner repeats the combined token and operation request for each global operation which needs to be initiated and, on the last global operation, issues a combined request with an explicit release. Acknowledgement of the combined request with release implies release of the token for use by other bus masters.
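The bus-master side of the protocol above reduces to a simple rule: every global operation goes out as a combined token-and-operation request, and only the last one carries the explicit release. A sketch under that reading; the message format is an assumption, not the patent's bus encoding.

```python
def combined_requests(master_id, operations):
    """Sketch of the token-owning master's request stream: one
    combined token-and-operation request per global operation, with
    the final request flagged to release the token so other masters
    may acquire it."""
    msgs = []
    for i, op in enumerate(operations):
        is_last = (i == len(operations) - 1)
        msgs.append({
            "master": master_id,
            "type": "token+op",
            "op": op,
            "release": is_last,   # explicit release rides the last op
        })
    return msgs
```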
Abstract:
Only a single snooper queue for global operations within a multiprocessor system is implemented within each bus snooper, controlled by a single token that allows completion of one operation. A bus snooper, upon detecting a combined token and operation request, begins speculatively processing the operation if the snooper is not already busy. The snooper then watches for a combined response acknowledging the combined request, or for a subsequent token request from the same processor, which indicates that the originating processor has been granted the sole token for completing global operations, before completing the operation. When processing an operation from a combined request and detecting an operation request (only) from a different processor, which indicates that the other processor has been granted the token, the snooper suspends processing of the current operation and begins processing the new operation. If the snooper is busy when a combined request is received, the snooper retries the operation portion of the combined request and, upon detecting a subsequent operation request (only) for the operation, begins processing the operation at that time if no longer busy. Snoop logic for large multiprocessor systems is thus simplified, with conflict reduced to situations in which multiple processors are competing for the token.
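The snooper behavior above is essentially a small state machine over two event types. A heavily simplified sketch, assuming invented event names and return values; real snoop logic would also track combined responses and completion.

```python
class Snooper:
    """Sketch of the single snoop queue: speculatively accept a
    combined request when idle, retry it when busy, and switch to a
    different master's operation-only request, since that request
    signals the other master now holds the sole token."""

    def __init__(self):
        self.current = None   # (master, op) being processed, or None

    def on_combined_request(self, master, op):
        if self.current is None:
            self.current = (master, op)   # begin speculative processing
            return "accept"
        return "retry"                    # busy: retry the op portion

    def on_operation_request(self, master, op):
        # An operation-only request means `master` has been granted
        # the token, so a speculatively accepted operation from a
        # different master must be suspended in its favor.
        if self.current is not None and self.current[0] != master:
            self.current = (master, op)
            return "switched"
        if self.current is None:
            self.current = (master, op)
            return "accept"
        return "continue"
```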
Abstract:
Logically in-line caches within a multilevel cache hierarchy are jointly controlled by a single cache controller. By combining the cache controller and snoop logic for different levels within the cache hierarchy, separate queues are not required for each level. During a cache access, the cache directories are looked up in parallel. Data is retrieved from the upper cache on a hit, or from the lower cache if the upper cache misses and the lower cache hits. LRU units may be updated in parallel based on cache directory hits. Alternatively, the lower cache LRU unit may be updated based on cache memory accesses rather than cache directory hits, or the cache hierarchy may be provided with user-selectable modes of operation for both LRU unit update schemes. The merged vertical cache controller mechanism does not require the lower cache memory to be inclusive of the upper cache memory. A novel deallocation scheme and update protocol may be implemented in conjunction with the merged vertical cache controller mechanism.
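The access path described above (parallel directory lookup, upper cache wins on a hit, lower cache serves an upper miss) can be sketched with dict-based stand-ins for the directories and cache memories. This is an illustrative model only, not the merged controller's hardware.

```python
def lookup(addr, upper_dir, lower_dir, upper_mem, lower_mem):
    """Sketch of the joint-controller access path: both cache
    directories are consulted for the address (in hardware, in
    parallel), and data is returned from the upper cache on a hit
    there, else from the lower cache on a lower-cache hit."""
    upper_hit = addr in upper_dir   # these two directory lookups
    lower_hit = addr in lower_dir   # happen in parallel in hardware
    if upper_hit:
        return "upper", upper_mem[addr]
    if lower_hit:
        return "lower", lower_mem[addr]
    return "miss", None
```

Note that the model does not force every upper-cache line into the lower directory, mirroring the abstract's point that the lower cache need not be inclusive of the upper one.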
Abstract:
Combined response logic for a bus receives a combined data access and cast-out/deallocate operation initiated by a storage device within a specific level of a storage hierarchy, with the coherency state and LRU position of the cast-out/deallocate victim appended. Snoopers on the bus drive snoop responses to the combined operation with the coherency state and/or LRU position of locally stored cache lines corresponding to the victim appended. The combined response logic determines, from the coherency state and LRU position information appended to the combined operation and the snoop responses, whether an update of the LRU position and/or coherency state of a cache line corresponding to the victim within one of the snoopers is required. If so, the combined response logic selects a snooper storage device to have at least the LRU position of its respective cache line corresponding to the victim updated, and appends an update command identifying the selected snooper to the combined response. The snooper selected to be updated may be randomly chosen, selected based on the LRU position of the cache line corresponding to the victim within its respective storage, or selected based on other criteria.
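One of the selection criteria the abstract lists, choosing the snooper by the LRU position of its copy of the victim line, can be sketched directly. The response format (a list of `(snooper_id, lru_position)` pairs) and the specific tie toward the most-nearly-evicted copy are assumptions for illustration.

```python
def select_snooper(snoop_responses):
    """Sketch of LRU-based snooper selection: given the LRU positions
    appended to the snoop responses, pick the snooper whose copy of
    the victim line is closest to eviction (largest LRU index), so the
    update command in the combined response refreshes that copy."""
    if not snoop_responses:
        return None   # no snooper holds the victim line
    return max(snoop_responses, key=lambda r: r[1])[0]
```

Random choice, the other criterion named above, would simply replace the `max` with a random pick over the responders.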