Abstract:
There is provided a method, system and computer program product for generating trace data related to a data processing system event. The method includes: receiving an instruction relating to the system event from a location in the system; determining a minimum number of trace segment records required to record instruction information; and creating a trace segment table that includes that number of trace segment records, the trace segment records including at least one instruction record.
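The allocation step described here (compute the minimum record count for the instruction information, then build a table whose records include an instruction record) can be illustrated with a small software sketch. This is a hypothetical model: the 16-byte record payload, the TraceSegmentRecord fields, and the build_trace_segment_table name are assumptions for illustration, not the format claimed by the abstract.

```python
# Minimal sketch of the trace-segment allocation idea described above.
# Record size, field names, and the TraceSegmentRecord layout are
# illustrative assumptions, not the patented format.
from dataclasses import dataclass
from math import ceil
from typing import List

RECORD_PAYLOAD_BYTES = 16  # assumed fixed payload per trace segment record

@dataclass
class TraceSegmentRecord:
    kind: str          # "instruction" for the record carrying the instruction info
    payload: bytes     # slice of the instruction information

def build_trace_segment_table(instruction_info: bytes) -> List[TraceSegmentRecord]:
    """Create a table with the minimum number of records needed to hold
    the instruction information; the first record is the instruction record."""
    count = max(1, ceil(len(instruction_info) / RECORD_PAYLOAD_BYTES))
    table = []
    for i in range(count):
        chunk = instruction_info[i * RECORD_PAYLOAD_BYTES:(i + 1) * RECORD_PAYLOAD_BYTES]
        table.append(TraceSegmentRecord(kind="instruction" if i == 0 else "data",
                                        payload=chunk))
    return table

if __name__ == "__main__":
    records = build_trace_segment_table(b"opcode+operands from system event")
    print(len(records), "trace segment records,", records[0].kind, "record first")
```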
Abstract:
A system, method and computer program product for sampling computer system performance data are provided. The system includes a sample buffer to store instrumentation data while capturing trace data in a trace array, where the instrumentation data enables measurement of computer system performance. The system further includes a sample interrupt generator to assert a sample interrupt indicating that the instrumentation data is available to read. The sample interrupt is asserted in response to storing the instrumentation data in the sample buffer.
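The buffer-then-interrupt ordering described here can be modeled behaviorally. The sketch below is a software stand-in under assumed names (SampleBuffer, on_sample_interrupt, dictionary-shaped samples), not the hardware interface claimed by the abstract; it only shows that the sample interrupt is asserted in response to storing instrumentation data while trace capture continues.

```python
# Behavioral sketch of the sample-buffer / sample-interrupt interplay
# described above; all names are illustrative assumptions.
from collections import deque
from typing import Callable, Deque

class SampleBuffer:
    def __init__(self, on_sample_interrupt: Callable[[], None]):
        self.samples: Deque[dict] = deque()
        self.trace_array: list = []
        self._raise_interrupt = on_sample_interrupt

    def capture(self, trace_entry: dict, instrumentation: dict) -> None:
        # Trace capture continues into the trace array...
        self.trace_array.append(trace_entry)
        # ...while the instrumentation data is stored in the sample buffer,
        # which asserts the sample interrupt to signal the data is ready to read.
        self.samples.append(instrumentation)
        self._raise_interrupt()

    def read_sample(self) -> dict:
        return self.samples.popleft()

if __name__ == "__main__":
    buf = SampleBuffer(on_sample_interrupt=lambda: print("sample interrupt asserted"))
    buf.capture({"pc": 0x1000}, {"cycles": 42, "cache_misses": 3})
    print("reader sees:", buf.read_sample())
```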
Abstract:
A method for handling cache coherency includes allocating a tag when a cache line is not exclusive in a data cache for a store operation, and sending the tag and an exclusive fetch for the line to coherency logic. An invalidation request is sent within a minimum amount of time to an I-cache, preferably only if it has fetched to the line and has not been invalidated since, which request includes an address to be invalidated, the tag, and an indicator specifying the line is for a PSC operation. The method further includes comparing the request address against stored addresses of prefetched instructions, and in response to a match, sending a match indicator and the tag to an LSU, within a maximum amount of time. The match indicator is timed, relative to exclusive data return, such that the LSU can discard prefetched instructions following execution of the store operation that stores to a line subject to an exclusive data return, and for which the match is indicated.
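The tag, invalidation-request, and match handshake reads more concretely as a toy model. The sketch below is a simplified software analogue under assumed names (InvalidationRequest, ICache, LSU, report_match); the minimum/maximum timing bounds in the abstract are not modeled.

```python
# Simplified software model of the tag / invalidation-request / match flow
# described above. Class and field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class InvalidationRequest:
    address: int      # cache line address to invalidate
    tag: int          # tag allocated by the data cache for the store
    is_psc: bool      # indicates the line is for a PSC operation

class ICache:
    def __init__(self, prefetched_line_addresses):
        self.prefetched = set(prefetched_line_addresses)

    def handle_invalidation(self, req: InvalidationRequest, lsu: "LSU") -> None:
        # Compare the request address against the stored addresses of
        # prefetched instructions; on a match, send the match and tag to the LSU.
        if req.address in self.prefetched:
            lsu.report_match(req.tag)

class LSU:
    def __init__(self):
        self.matched_tags = set()

    def report_match(self, tag: int) -> None:
        self.matched_tags.add(tag)

    def complete_store(self, tag: int) -> None:
        # After exclusive data return and execution of the store, discard
        # prefetched instructions if a match was indicated for this tag.
        if tag in self.matched_tags:
            print(f"tag {tag}: discarding prefetched instructions")

if __name__ == "__main__":
    lsu = LSU()
    icache = ICache(prefetched_line_addresses={0x2000})
    icache.handle_invalidation(InvalidationRequest(address=0x2000, tag=7, is_psc=True), lsu)
    lsu.complete_store(tag=7)
```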
Abstract:
Method and system for a multi-level virtual/real cache system with synonym resolution. An exemplary embodiment includes a multi-level cache hierarchy, including a set of L1 caches associated with one or more processor cores and a set of L2 caches, wherein the set of L1 caches are a subset of the set of L2 caches, wherein the set of L1 caches underneath a given L2 cache are associated with one or more of the processor cores.
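The synonym case (two virtual aliases of one real line landing at different L1 indices) can be shown with a toy directory kept by the L2. The index arithmetic, the page-offset assumption, and the class names below are illustrative, not the patented structure.

```python
# Toy model of synonym resolution in a virtually indexed L1 backed by a
# real-addressed L2, in the spirit of the abstract above.
L1_INDEX_BITS = 6   # assumed: L1 indexed with virtual-address bits that may
                    # differ between two virtual aliases of one real page

def l1_index(vaddr: int) -> int:
    return (vaddr >> 12) & ((1 << L1_INDEX_BITS) - 1)

class L2Cache:
    """Real-addressed L2 that records, per line, the L1 virtual index holding it."""
    def __init__(self):
        self.directory = {}   # real_addr -> virtual index of the L1 copy

    def access(self, real_addr: int, vaddr: int, l1: "L1Cache") -> None:
        idx = l1_index(vaddr)
        prev_idx = self.directory.get(real_addr)
        if prev_idx is not None and prev_idx != idx:
            # Synonym detected: the same real line is cached in L1 under a
            # different virtual index, so invalidate that copy first.
            l1.invalidate(prev_idx, real_addr)
        self.directory[real_addr] = idx
        l1.fill(idx, real_addr)

class L1Cache:
    def __init__(self):
        self.lines = {}       # virtual index -> real address

    def fill(self, idx: int, real_addr: int) -> None:
        self.lines[idx] = real_addr

    def invalidate(self, idx: int, real_addr: int) -> None:
        if self.lines.get(idx) == real_addr:
            print(f"synonym resolved: invalidating L1 index {idx:#x}")
            del self.lines[idx]

if __name__ == "__main__":
    l1, l2 = L1Cache(), L2Cache()
    l2.access(real_addr=0xABC000, vaddr=0x0040_0000, l1=l1)  # first alias
    l2.access(real_addr=0xABC000, vaddr=0x0081_0000, l1=l1)  # second alias -> synonym
```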
Abstract:
A pipelined microprocessor includes circuitry for store forwarding by performing: for each store request, and while a write to one of a cache and a memory is pending, obtaining the most recent value for at least one complete block of data; merging store data from the store request with the complete block of data, thus updating the block of data and forming a new most recent value and an updated complete block of data; and buffering the updated complete block of data into a store data queue; and, for each load request that may require at least one updated complete block of data: determining whether store forwarding is appropriate for the load request on a block-by-block basis; if store forwarding is appropriate, selecting an appropriate block of data from the store data queue on a block-by-block basis; and forwarding the selected block of data to the load request.
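A rough software model of this block-granular merge-and-forward path may help. The 8-byte block size, the StoreDataQueue name, and the dictionary-backed memory are assumptions; the sketch only shows the merge into a complete block while the write is pending and the block-by-block forwarding decision for a load.

```python
# Sketch of block-granular store forwarding to loads, as outlined above.
# Block size, queue layout, and names are illustrative assumptions.
BLOCK_BYTES = 8

class StoreDataQueue:
    def __init__(self, memory: dict):
        self.memory = memory            # block_address -> committed block (cache/memory)
        self.pending = {}               # block_address -> updated complete block

    def enqueue_store(self, addr: int, data: bytes) -> None:
        """Merge store data into the most recent value of its complete block
        and buffer the updated block while the cache/memory write is pending."""
        block_addr = addr & ~(BLOCK_BYTES - 1)
        offset = addr - block_addr
        recent = bytearray(self.pending.get(block_addr,
                                            self.memory.get(block_addr, bytes(BLOCK_BYTES))))
        recent[offset:offset + len(data)] = data
        self.pending[block_addr] = bytes(recent)

    def load_block(self, block_addr: int) -> bytes:
        """Decide block-by-block whether forwarding is appropriate; if so,
        forward the buffered block, otherwise read the cache/memory copy."""
        if block_addr in self.pending:
            return self.pending[block_addr]          # forward from the store data queue
        return self.memory.get(block_addr, bytes(BLOCK_BYTES))

if __name__ == "__main__":
    sdq = StoreDataQueue(memory={0x100: bytes(range(8))})
    sdq.enqueue_store(0x104, b"\xaa\xbb")            # store into the middle of the block
    print(sdq.load_block(0x100).hex())               # load sees the merged, forwarded block
```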
Abstract:
A pipelined processor includes circuitry adapted for store forwarding, including: for each store request, and while a write to one of a cache and a memory is pending, obtaining the most recent value for at least one block of data; merging store data from the store request with the block of data, thus updating the block of data and forming a new most recent value and an updated complete block of data; and buffering the updated block of data into a store data queue; and, for each additional store request that requires at least one updated block of data: determining whether store forwarding is appropriate for the additional store request on a block-by-block basis; if store forwarding is appropriate, selecting an appropriate block of data from the store data queue on a block-by-block basis; and forwarding the selected block of data to the additional store request.
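For this store-to-store variant, the same assumed model can be reused with the emphasis on an additional store: when a newer copy of its block is already buffered, that buffered block is forwarded as the merge base. The names and block size below repeat the illustrative assumptions from the previous sketch.

```python
# Companion sketch: an additional store that writes into a block already
# buffered in the store data queue gets the buffered (most recent) block
# forwarded as its merge base. Names are illustrative assumptions.
BLOCK_BYTES = 8

class StoreDataQueue:
    def __init__(self, memory: dict):
        self.memory = memory     # block_address -> committed block in cache/memory
        self.pending = {}        # block_address -> updated complete block

    def most_recent_block(self, block_addr: int) -> bytes:
        # Forwarding is appropriate for this block only if a newer copy is buffered.
        if block_addr in self.pending:
            return self.pending[block_addr]
        return self.memory.get(block_addr, bytes(BLOCK_BYTES))

    def enqueue_store(self, addr: int, data: bytes) -> None:
        block_addr = addr & ~(BLOCK_BYTES - 1)
        base = bytearray(self.most_recent_block(block_addr))   # forwarded block
        offset = addr - block_addr
        base[offset:offset + len(data)] = data                 # merge the new store data
        self.pending[block_addr] = bytes(base)                 # re-buffer updated block

if __name__ == "__main__":
    sdq = StoreDataQueue(memory={0x200: bytes(8)})
    sdq.enqueue_store(0x200, b"\x11\x22")        # first store, write still pending
    sdq.enqueue_store(0x203, b"\x33")            # additional store sees the forwarded block
    print(sdq.pending[0x200].hex())              # 1122003300000000
```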