Abstract:
In a cache which writes new data over less recently used data, methods and apparatus are described which dispense with the convention of marking new cache data as most recently used. Instead, non-referenced data is marked as less recently used when it is written into a cache, and referenced data is marked as more recently used when it is written into a cache. Referenced data may correspond to fetch data, and non-referenced data may correspond to prefetch data. Upon fetch of a data value from the cache, its use status may be updated to more recently used. The methods and apparatus have the effect of preserving (n−1)/n of a cache's entries for the storage of fetch data, while limiting the storage of prefetch data to 1/n of a cache's entries. Pollution which results from unneeded prefetch data is therefore limited to 1/n of the cache. In practice, however, pollution from unneeded prefetch data will be significantly less, as many prefetch data values will ultimately be fetched before being overwritten with new data, and upon their fetch, their use status can be upgraded to most recently used, thus ensuring their continued retention in the cache.
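The insertion policy can be sketched in software. Below is a minimal C++ sketch, not the patented implementation: the class and member names are illustrative, it models a single fully associative set, and a real n-way set-associative cache would apply the same rule per set. Prefetched (non-referenced) lines enter at the LRU end of the use-order list, demand-fetched (referenced) lines enter at the MRU end, and a hit promotes a line to MRU.

```cpp
#include <cstddef>
#include <cstdint>
#include <iterator>
#include <list>
#include <unordered_map>

// Sketch of the insertion policy: prefetch (non-referenced) data enters
// at the LRU end, fetch (referenced) data enters at the MRU end, and a
// hit promotes a line to MRU. Models one fully associative set.
class PollutionLimitedLru {
public:
    explicit PollutionLimitedLru(std::size_t capacity) : capacity_(capacity) {}

    // Write a line into the cache. `referenced` is true for demand-fetch
    // data and false for prefetch data.
    void insert(std::uint64_t tag, bool referenced) {
        if (auto it = map_.find(tag); it != map_.end()) {
            order_.erase(it->second);      // will re-insert below
        } else if (order_.size() == capacity_) {
            map_.erase(order_.back());     // evict the true LRU entry
            order_.pop_back();
        }
        if (referenced) {
            order_.push_front(tag);        // mark as most recently used
            map_[tag] = order_.begin();
        } else {
            order_.push_back(tag);         // mark as least recently used
            map_[tag] = std::prev(order_.end());
        }
    }

    // A fetch that hits upgrades the line's use status to MRU, which is
    // how a useful prefetch earns continued retention in the cache.
    bool lookup(std::uint64_t tag) {
        auto it = map_.find(tag);
        if (it == map_.end()) return false;
        order_.splice(order_.begin(), order_, it->second);  // promote to MRU
        return true;
    }

private:
    std::size_t capacity_;
    std::list<std::uint64_t> order_;  // front = MRU, back = LRU
    std::unordered_map<std::uint64_t, std::list<std::uint64_t>::iterator> map_;
};
```

Because a prefetched line sits at the LRU end until it is actually fetched, it is the first replacement candidate in its set, which is the source of the 1/n pollution bound cited above.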
Abstract:
An apparatus for and a method of decoupling at least two multi-stage pipelines are described. At least two data paths are provided through which data from the first pipeline is sent to the second pipeline. During a pipelined execution of a task in the at least two pipelines, the second pipeline may not require all of the data produced in the first pipeline in order to process at least some subset of the task, and the first pipeline may not be able to produce all of the data required by each of the stages of the second pipeline. One of the two data paths provides an early data path for a type of data that becomes available in a stage of the first pipeline and that may be processed in a stage of the second pipeline early in time. The other of the two data paths provides a late data path for a type of data that becomes available in a stage of the first pipeline and that may be processed in a stage of the second pipeline later in time. Each data path may comprise a buffer, e.g., a FIFO.
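As an illustration of this early/late decoupling, the following C++ sketch models the two data paths as independent FIFOs between the producer and consumer pipelines. The structure is assumed for illustration: `EarlyData`, `LateData`, and the producer/consumer hooks are hypothetical names, not taken from the source.

```cpp
#include <cstdint>
#include <optional>
#include <queue>

struct EarlyData { std::uint32_t value; };  // e.g., data available in an early stage
struct LateData  { std::uint32_t value; };  // e.g., data available in a late stage

// Two FIFOs decouple the pipelines: early-available data is forwarded as
// soon as a front-end stage of the first pipeline produces it, while
// late-available data is forwarded from a back-end stage and consumed by
// a later stage of the second pipeline.
struct PipelineCoupler {
    std::queue<EarlyData> early_fifo;  // early data path
    std::queue<LateData>  late_fifo;   // late data path

    // Called by an early stage of the first pipeline.
    void produce_early(EarlyData d) { early_fifo.push(d); }
    // Called by a later stage of the first pipeline.
    void produce_late(LateData d)   { late_fifo.push(d); }

    // Consumer stages pop independently; an empty FIFO stalls only the
    // stage that needs that data, not the whole second pipeline.
    std::optional<EarlyData> consume_early() {
        if (early_fifo.empty()) return std::nullopt;
        EarlyData d = early_fifo.front(); early_fifo.pop(); return d;
    }
    std::optional<LateData> consume_late() {
        if (late_fifo.empty()) return std::nullopt;
        LateData d = late_fifo.front(); late_fifo.pop(); return d;
    }
};
```

The design point the buffers capture is that neither pipeline needs to run in lockstep with the other: each FIFO absorbs the timing gap between when a stage of the first pipeline produces a value and when a stage of the second pipeline consumes it.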
Abstract:
In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing a speculative cache modification design. For example, in one embodiment, such means may include an integrated circuit having a data bus; a cache communicably interfaced with the data bus; a pipeline communicably interfaced with the data bus, in which the pipeline is to receive a store instruction corresponding to a cache line to be written to the cache; caching logic to perform a speculative cache write of the cache line into the cache before the store instruction retires from the pipeline; and cache line validation logic to determine whether the cache line written into the cache is valid or invalid, in which the cache line validation logic is to invalidate the cache line speculatively written into the cache when it is determined to be invalid, and further in which the store instruction is allowed to retire from the pipeline when the cache line is determined to be valid.
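A minimal software model of this flow might look as follows; the structure, names, and the external `line_is_valid` signal are assumptions for illustration, not the circuit described in the abstract. A store writes its line into the cache before retirement; validation later either confirms the line, letting the store retire, or invalidates the speculatively written line.

```cpp
#include <cstdint>
#include <unordered_map>

struct CacheLine {
    std::uint64_t data = 0;
    bool valid = false;
    bool speculative = false;  // written before the store retired
};

class SpeculativeCache {
public:
    // Caching logic: perform the cache write speculatively, before the
    // corresponding store instruction retires from the pipeline.
    void speculative_write(std::uint64_t addr, std::uint64_t data) {
        lines_[addr] = CacheLine{data, /*valid=*/true, /*speculative=*/true};
    }

    // Cache line validation logic: returns true if the store may retire,
    // false if the speculative line had to be invalidated.
    bool validate(std::uint64_t addr, bool line_is_valid) {
        auto it = lines_.find(addr);
        if (it == lines_.end()) return false;
        if (line_is_valid) {
            it->second.speculative = false;  // commit: store may now retire
            return true;
        }
        it->second.valid = false;            // squash: invalidate the line
        return false;
    }

private:
    std::unordered_map<std::uint64_t, CacheLine> lines_;
};
```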
Abstract:
Systems, methodologies, media, and other embodiments associated with acquiring instruction addresses associated with performance monitoring events are described. One exemplary system embodiment includes logic for recording instruction and state data associated with events countable by performance monitoring logic associated with a pipelined processor. The exemplary system embodiment may also include logic for traversing the instruction and state data on a cycle count basis. The exemplary system embodiment may also include logic for traversing the instruction and state data on a retirement count basis.
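To make the two traversal modes concrete, here is a minimal C++ sketch assuming a simple trace-record layout; `TraceEntry` and both walk functions are hypothetical names introduced for illustration. The same recorded instruction and state data can be walked backward either by elapsed cycles or by number of retirements.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Assumed record of instruction and state data captured alongside a
// performance-monitoring event; the trace is ordered by capture time.
struct TraceEntry {
    std::uint64_t instr_addr;  // address of the instruction
    std::uint32_t state;       // pipeline state bits at capture time
    std::uint64_t cycle;       // cycle count when recorded
    bool retired;              // whether this entry marks a retirement
};

// Traverse on a cycle count basis: step backward `cycles` cycles from
// entry `from`, returning the index of the earliest qualifying entry.
std::size_t walk_by_cycles(const std::vector<TraceEntry>& trace,
                           std::size_t from, std::uint64_t cycles) {
    std::uint64_t target = trace[from].cycle >= cycles
                         ? trace[from].cycle - cycles : 0;
    std::size_t i = from;
    while (i > 0 && trace[i - 1].cycle >= target) --i;
    return i;
}

// Traverse on a retirement count basis: step backward over `n` retired
// instructions from entry `from`.
std::size_t walk_by_retirements(const std::vector<TraceEntry>& trace,
                                std::size_t from, unsigned n) {
    std::size_t i = from;
    while (i > 0 && n > 0) {
        --i;
        if (trace[i].retired) --n;
    }
    return i;
}
```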