Abstract:
A computer-implemented method for collecting diagnostic data within a multiprocessor system includes capturing diagnostic data via a plurality of collection points disposed at a source location within the multiprocessor system, routing the captured diagnostic data to a data collection station at the source location, providing a plurality of buffers within the data collection station, temporarily storing the captured diagnostic data in at least one of the plurality of buffers, and transferring the captured diagnostic data to a target storage location on a same chip as the source location or to another storage location on a same node.
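The flow described in this abstract (capture at collection points, hold in buffers at a local collection station, then transfer to on-chip or on-node storage) can be sketched in software as follows. This is a minimal illustration only; the structure and function names (trace_buf, collection_station, capture, drain) are assumptions made for the sketch and do not come from the patent.

```c
#include <stdint.h>
#include <string.h>

#define NUM_BUFFERS  4
#define BUFFER_BYTES 64

/* One temporary buffer inside the data collection station. */
typedef struct {
    uint8_t data[BUFFER_BYTES];
    int     in_use;
} trace_buf;

/* The data collection station local to the source location. */
typedef struct {
    trace_buf bufs[NUM_BUFFERS];
} collection_station;

/* Capture diagnostic data arriving from a collection point into a free
 * buffer; returns the buffer index, or -1 if every buffer is occupied. */
static int capture(collection_station *st, const uint8_t *bytes, size_t n)
{
    if (n > BUFFER_BYTES)
        return -1;
    for (int i = 0; i < NUM_BUFFERS; i++) {
        if (!st->bufs[i].in_use) {
            memcpy(st->bufs[i].data, bytes, n);
            st->bufs[i].in_use = 1;
            return i;
        }
    }
    return -1;
}

/* Transfer a held buffer to the target storage location (same chip as
 * the source, or another location on the same node) and release it.
 * transfer_fn stands in for the on-chip or on-node move. */
static void drain(collection_station *st, int idx,
                  void (*transfer_fn)(const uint8_t *, size_t))
{
    if (idx >= 0 && idx < NUM_BUFFERS && st->bufs[idx].in_use) {
        transfer_fn(st->bufs[idx].data, BUFFER_BYTES);
        st->bufs[idx].in_use = 0;
    }
}
```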
Abstract:
Dynamic allocation of cache buffer slots includes receiving a request to perform an operation that requires a storage buffer slot, the storage buffer slot residing in a level of storage. The dynamic allocation of cache buffer slots also includes determining availability of the storage buffer slot for a cache index specified by the request. Upon determining that the storage buffer slot is not available, the dynamic allocation of cache buffer slots includes evicting data stored in the storage buffer slot and reserving the storage buffer slot for data associated with the request.
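The allocate-or-evict sequence (check availability of the slot for the requesting cache index, evict if occupied, then reserve) might be modeled as below. The slot_state enum and the evict/allocate_slot names are illustrative assumptions rather than the patent's terminology.

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { SLOT_FREE, SLOT_VALID, SLOT_RESERVED } slot_state;

typedef struct {
    slot_state state;
    uint32_t   owner_index;   /* cache index whose data occupies the slot */
} buffer_slot;

/* Write back or discard whatever the slot currently holds. In hardware
 * this would cast the data out to the next level of storage; here we
 * simply mark the slot free. */
static void evict(buffer_slot *slot)
{
    slot->state = SLOT_FREE;
}

/* Reserve the slot for the requesting cache index, evicting first if
 * the slot is not available. Returns true once the slot is reserved. */
static bool allocate_slot(buffer_slot *slot, uint32_t cache_index)
{
    if (slot->state != SLOT_FREE)
        evict(slot);               /* slot unavailable: evict its data  */

    slot->state = SLOT_RESERVED;   /* reserve for the incoming request  */
    slot->owner_index = cache_index;
    return true;
}
```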
Abstract:
A pipelined processing device includes: a device controller configured to receive a request to perform an operation; a plurality of subcontrollers configured to receive at least one instruction associated with the operation, each of the plurality of subcontrollers including a counter configured to generate an active time value indicating at least a portion of a time taken to process the at least one instruction; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor configured to receive the active time value; and a shared pipeline storage area configured to store the active time value for each of the plurality of subcontrollers.
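A rough software model of the per-subcontroller counter and the shared pipeline storage area that collects the active time values might look like the following. The identifiers (subcontroller, shared_times, tick, retire) and the one-counter-per-subcontroller layout are assumptions made for illustration.

```c
#include <stdint.h>

#define NUM_SUBCONTROLLERS 8

typedef struct {
    uint64_t counter;   /* accumulates cycles while the instruction is active */
    int      busy;
} subcontroller;

/* Shared pipeline storage area: one active-time value per subcontroller. */
static uint64_t shared_times[NUM_SUBCONTROLLERS];

/* Called once per cycle: any subcontroller still processing its
 * instruction accumulates active time. */
static void tick(subcontroller subs[NUM_SUBCONTROLLERS])
{
    for (int i = 0; i < NUM_SUBCONTROLLERS; i++)
        if (subs[i].busy)
            subs[i].counter++;
}

/* When a subcontroller finishes, its active time value is handed to the
 * pipeline and written into the shared storage area. */
static void retire(subcontroller subs[NUM_SUBCONTROLLERS], int id)
{
    subs[id].busy = 0;
    shared_times[id] = subs[id].counter;
    subs[id].counter = 0;
}
```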
Abstract:
A pipelined processing device includes: a processor configured to receive a request to perform an operation; a plurality of processing controllers configured to receive at least one instruction associated with the operation, each of the plurality of processing controllers including a memory to store at least one instruction therein; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor including shared error detection logic configured to detect a parity error in the at least one instruction as the at least one instruction is processed in a pipeline and generate an error signal; and a pipeline bus connected to each of the plurality of processing controllers and configured to communicate the error signal from the error detection logic.
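The shared parity check and the error signal broadcast over the pipeline bus could be sketched as below. The even-parity convention and all identifiers are assumptions; the actual error detection logic described in the abstract is hardware, not software.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_CONTROLLERS 8

/* Even parity over the instruction bits; parity_bit is the stored 0/1
 * parity. Returns true when a parity error is detected. */
static bool parity_error(uint32_t instr, uint32_t parity_bit)
{
    uint32_t ones = 0;
    for (int i = 0; i < 32; i++)
        ones += (instr >> i) & 1u;
    return ((ones & 1u) ^ parity_bit) != 0;
}

/* The pipeline bus: a single error line seen by every controller. */
static bool error_signal;

typedef struct { bool saw_error; } processing_controller;

/* Shared error detection logic invoked as the instruction passes through
 * the pipeline; the resulting signal is communicated to each of the
 * processing controllers over the common bus. */
static void check_and_broadcast(uint32_t instr, uint32_t parity_bit,
                                processing_controller ctrls[NUM_CONTROLLERS])
{
    error_signal = parity_error(instr, parity_bit);
    for (int i = 0; i < NUM_CONTROLLERS; i++)
        ctrls[i].saw_error = error_signal;
}
```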
Abstract:
A method of managing a temporary memory includes: receiving a request to transfer data from a source location to a destination location, the data transfer request associated with an operation to be performed, the operation being one of an input into an intermediate temporary memory and an output from the temporary memory; checking a two-state indicator associated with the temporary memory, the two-state indicator having a first state indicating that an immediately preceding operation on the temporary memory was an input to the temporary memory and a second state indicating that the immediately preceding operation was an output from the temporary memory; and performing the operation responsive to one of: the operation being an input operation and the two-state indicator being in the second state, indicating that the immediately preceding operation was an output; and the operation being an output operation and the two-state indicator being in the first state, indicating that the immediately preceding operation was an input.
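The two-state gating can be illustrated with a simple toggle: an input is admitted only when the indicator shows that the preceding operation was an output, and an output only when the preceding operation was an input. The enum values and the try_operation name are assumptions made for this sketch.

```c
#include <stdbool.h>

typedef enum { LAST_WAS_INPUT, LAST_WAS_OUTPUT } toggle_state;
typedef enum { OP_INPUT, OP_OUTPUT } op_kind;

/* Attempt the requested operation on the temporary memory; perform it
 * and flip the indicator only if the immediately preceding operation was
 * of the opposite kind. Initialize the indicator to LAST_WAS_OUTPUT so
 * that the very first input is admitted. */
static bool try_operation(toggle_state *indicator, op_kind op)
{
    if (op == OP_INPUT && *indicator == LAST_WAS_OUTPUT) {
        *indicator = LAST_WAS_INPUT;   /* safe to load the memory again */
        return true;
    }
    if (op == OP_OUTPUT && *indicator == LAST_WAS_INPUT) {
        *indicator = LAST_WAS_OUTPUT;  /* safe to drain the memory */
        return true;
    }
    return false;   /* request must wait: would repeat the previous kind */
}
```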
Abstract:
A method includes providing LRU selection logic which controllably passes requests for access to computer system resources to a shared resource via a first level and a second level, determining whether a request in a request group is active, presenting the request to the LRU selection logic at the first level when it is determined that the request is active, determining whether the request is an LRU request of the request group at the first level, forwarding the request to the second level when it is determined that the request is the LRU request of the request group, comparing the request to an LRU request from each of a plurality of request groups at the second level to determine whether the request is an LRU request of the plurality of request groups, and selecting the LRU request of the plurality of request groups to access the shared resource.
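A software approximation of the two-level arbitration (per-group LRU selection at the first level, then LRU selection among the group winners at the second level) is sketched below. The group sizes, the timestamp-based notion of "least recently used", and all names are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_GROUPS     4
#define REQS_PER_GROUP 4

typedef struct {
    bool     active;      /* request is currently pending             */
    uint64_t last_grant;  /* lower value = granted longer ago (more LRU) */
} request;

/* First level: pick the LRU active request within one group.
 * Returns the index within the group, or -1 if none is active. */
static int group_lru(const request g[REQS_PER_GROUP])
{
    int best = -1;
    for (int i = 0; i < REQS_PER_GROUP; i++)
        if (g[i].active && (best < 0 || g[i].last_grant < g[best].last_grant))
            best = i;
    return best;
}

/* Second level: compare the winner of every group and grant the shared
 * resource to the LRU request across all groups.
 * Returns group * REQS_PER_GROUP + index, or -1 if nothing is pending. */
static int select_request(const request reqs[NUM_GROUPS][REQS_PER_GROUP])
{
    int best_group = -1, best_idx = -1;
    for (int grp = 0; grp < NUM_GROUPS; grp++) {
        int idx = group_lru(reqs[grp]);
        if (idx < 0)
            continue;
        if (best_group < 0 ||
            reqs[grp][idx].last_grant < reqs[best_group][best_idx].last_grant) {
            best_group = grp;
            best_idx = idx;
        }
    }
    return (best_group < 0) ? -1 : best_group * REQS_PER_GROUP + best_idx;
}
```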