Abstract:
A method and apparatus for updating global branch history information are disclosed. A dynamic branch predictor within a data processing system includes a global branch history (GBH) buffer and a branch history table. The GBH buffer contains GBH information for a group of the most recent branch instructions. The branch history table includes multiple entries, each of which is associated with one or more branch instructions. The GBH information from the GBH buffer can be used to index into the branch history table to obtain a branch prediction signal. In response to a fetch group of instructions, a fixed number of GBH bits is shifted into the GBH buffer. The number of GBH bits is the same regardless of the number of branch instructions within the fetch group. In addition, a unique bit pattern is associated with the case of no taken branch in the fetch group, regardless of the number of not-taken branches or even whether the fetch group contains any branches at all.
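As an illustration of the update scheme described above, the following Python sketch shifts a fixed-width pattern into a GBH shift register once per fetch group and reserves one encoding for the no-taken-branch case. The field widths, the slot-plus-one encoding, and the XOR indexing into the branch history table are assumptions made for illustration, not details taken from the abstract.

```python
# Minimal model of fixed-width global branch history (GBH) updates.
# Widths and encodings below are illustrative assumptions.

GBH_LENGTH = 16            # total history bits kept in the GBH buffer
GBH_BITS_PER_GROUP = 3     # fixed number of bits shifted in per fetch group
NO_TAKEN_PATTERN = 0b000   # unique pattern reserved for "no taken branch"
BHT_ENTRIES = 1 << 14

class GlobalBranchHistory:
    def __init__(self):
        self.history = 0                      # GBH buffer modeled as a shift register
        self.bht = [1] * BHT_ENTRIES          # 2-bit saturating counters, weakly not-taken

    def update(self, taken_slot):
        """Shift a fixed number of bits per fetch group.

        taken_slot is the slot index (0..6) of the first taken branch in the
        fetch group, or None when no branch in the group was taken, including
        groups that contain no branches at all.
        """
        pattern = NO_TAKEN_PATTERN if taken_slot is None else taken_slot + 1
        mask = (1 << GBH_LENGTH) - 1
        self.history = ((self.history << GBH_BITS_PER_GROUP) | pattern) & mask

    def predict(self, fetch_addr):
        """Index the branch history table with the GBH hashed against the fetch address."""
        index = (self.history ^ (fetch_addr >> 2)) % BHT_ENTRIES
        return self.bht[index] >= 2           # predict taken if the counter is in the upper half
```

Because every fetch group contributes the same number of bits, the history always covers a fixed number of recent fetch groups, which keeps the indexing logic uniform regardless of how many branches each group contained.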
Abstract:
Page size prediction is used to predict a page size for a page of memory being accessed by a memory access instruction such that the predicted page size can be used to access an address translation data structure. By doing so, an address translation data structure may support multiple page sizes in an efficient manner and with little additional circuitry disposed in the critical path for address translation, thereby increasing performance.
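A minimal sketch of the idea follows, assuming a small predictor table indexed by instruction address and a fall-back-and-retrain policy on a size misprediction; the page sizes, the SimpleTLB stub, and the retraining step are illustrative assumptions rather than details from the abstract.

```python
# Hypothetical sketch of page size prediction in front of a multi-page-size TLB.

PAGE_SIZES = [4 * 1024, 64 * 1024, 16 * 1024 * 1024]   # 4 KB, 64 KB, 16 MB
PREDICTOR_ENTRIES = 256

class SimpleTLB:
    """Toy translation structure keyed by (virtual page number, page size)."""
    def __init__(self):
        self.entries = {}                       # (vpn, size) -> physical page base

    def insert(self, vpn, size, phys_base):
        self.entries[(vpn, size)] = phys_base

    def lookup(self, vpn, size):
        return self.entries.get((vpn, size))

class PageSizePredictor:
    def __init__(self):
        self.table = [PAGE_SIZES[0]] * PREDICTOR_ENTRIES   # predicted size per entry

    def predict(self, inst_addr):
        return self.table[(inst_addr >> 2) % PREDICTOR_ENTRIES]

    def update(self, inst_addr, actual_size):
        self.table[(inst_addr >> 2) % PREDICTOR_ENTRIES] = actual_size

def translate(tlb, predictor, inst_addr, virt_addr):
    """Probe the TLB with the predicted page size first; retry other sizes on a miss."""
    predicted = predictor.predict(inst_addr)
    for size in [predicted] + [s for s in PAGE_SIZES if s != predicted]:
        phys_base = tlb.lookup(virt_addr // size, size)
        if phys_base is not None:
            if size != predicted:
                predictor.update(inst_addr, size)   # retrain after a size misprediction
            return phys_base + (virt_addr % size)
    raise LookupError("TLB miss: fall back to a page table walk")
```

In the common case the first probe uses the correct size, so the extra work stays off the critical path; only a misprediction pays for additional probes and a predictor update.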
Abstract:
A method, apparatus and computer program product are provided for implementing polymorphic branch history table (BHT) reconfiguration. A BHT includes a plurality of predetermined configurations corresponding to predetermined operational modes. A first BHT configuration is provided, and a check identifies whether another BHT configuration would yield improved performance. The BHT is then reconfigured to provide improved performance based upon the current workload.
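One way to picture the check-and-reconfigure loop is sketched below, assuming two hypothetical configurations (more entries versus longer history), shadow misprediction counters for each mode, and a fixed switching margin; none of these specifics come from the abstract.

```python
# Hypothetical sketch of polymorphic BHT reconfiguration driven by a periodic
# performance check. Configurations and thresholds are illustrative assumptions.

BHT_CONFIGS = {
    "wide": {"entries": 4096,  "history_bits": 8},   # fewer entries, longer history
    "deep": {"entries": 16384, "history_bits": 4},   # more entries, shorter history
}
CHECK_INTERVAL = 100_000        # branches between configuration checks
SWITCH_MARGIN = 0.02            # require a 2% misprediction-rate gain before switching

class PolymorphicBHT:
    def __init__(self):
        self.mode = "deep"
        self.mispredicts = {m: 0 for m in BHT_CONFIGS}   # shadow counts per mode
        self.branches = 0

    def record(self, outcomes):
        """outcomes maps each mode to whether that mode's shadow predictor mispredicted."""
        self.branches += 1
        for mode, mispredicted in outcomes.items():
            self.mispredicts[mode] += int(mispredicted)
        if self.branches >= CHECK_INTERVAL:
            self.maybe_reconfigure()

    def maybe_reconfigure(self):
        rates = {m: self.mispredicts[m] / self.branches for m in BHT_CONFIGS}
        best = min(rates, key=rates.get)
        if best != self.mode and rates[self.mode] - rates[best] > SWITCH_MARGIN:
            self.mode = best                     # adopt the configuration tracking the workload better
        self.mispredicts = {m: 0 for m in BHT_CONFIGS}
        self.branches = 0
```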
Abstract:
In a method of using a cache in a computer, the computer is monitored to detect an event that indicates that the cache is to be reconfigured into a metadata state. When the event is detected, the cache is reconfigured so that a predetermined portion of the cache stores metadata. A computational circuit employed in association with a computer includes a cache, a cache event detector circuit, and a cache reconfiguration circuit. The cache event detector circuit detects an event relative to the cache. The cache reconfiguration circuit reconfigures the cache so that a predetermined portion of the cache stores metadata when the cache event detector circuit detects the event.
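The following sketch illustrates the detect-and-reconfigure flow under assumed specifics: the triggering event is a sustained miss-rate threshold and the predetermined portion is half the ways of each set, neither of which is stated in the abstract.

```python
# Hypothetical sketch of reconfiguring part of a set-associative cache to hold
# metadata when a triggering event is detected. Event and portion are assumptions.

NUM_SETS = 256
NUM_WAYS = 8
METADATA_WAYS = 4               # predetermined portion reserved for metadata
MISS_RATE_THRESHOLD = 0.20      # example event: sustained miss rate above 20%

class ReconfigurableCache:
    def __init__(self):
        self.metadata_mode = False
        self.accesses = 0
        self.misses = 0

    def detect_event(self):
        """Cache event detector: fires when the observed miss rate crosses a threshold."""
        if self.accesses < 10_000:
            return False
        return (self.misses / self.accesses) > MISS_RATE_THRESHOLD

    def reconfigure(self):
        """Reserve METADATA_WAYS ways of every set for metadata storage."""
        self.metadata_mode = True
        self.data_ways = range(0, NUM_WAYS - METADATA_WAYS)
        self.meta_ways = range(NUM_WAYS - METADATA_WAYS, NUM_WAYS)

    def on_access(self, hit):
        self.accesses += 1
        self.misses += int(not hit)
        if not self.metadata_mode and self.detect_event():
            self.reconfigure()
```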
Abstract:
An apparatus and method for handling data cache misses out of order for asynchronous pipelines are provided. The apparatus and method associate load tag (LTAG) identifiers with load instructions and use them to track each load instruction across multiple pipelines, serving as an index into a load table data structure of a load target buffer. The load table is used to manage cache “hits” and “misses” and to aid in the recycling of data from the L2 cache. On cache misses, the LTAG-indexed load table permits load data to recycle from the L2 cache in any order. When a load instruction issues and finds its corresponding entry in the load table marked as a “miss,” the effects of issuing the load instruction are canceled and the load instruction is held in the load table for reissue to the instruction pipeline when the required data is recycled.
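A rough model of the LTAG-indexed load table is sketched below; the state names, the dictionary-based entries, and the reissue queue are assumptions made for illustration, not details from the abstract.

```python
# Hypothetical sketch of an LTAG-indexed load table for handling L1 misses
# out of order, with L2 data allowed to return in any order.

from collections import deque

HIT, MISS, WAITING, READY = "hit", "miss", "waiting", "ready"

class LoadTable:
    def __init__(self, num_ltags=32):
        self.entries = [None] * num_ltags       # indexed directly by LTAG
        self.reissue_queue = deque()

    def allocate(self, ltag, load_inst, l1_hit):
        """Record the L1 lookup result when the load first accesses the cache."""
        self.entries[ltag] = {"inst": load_inst,
                              "state": HIT if l1_hit else MISS,
                              "data": None}

    def on_issue(self, ltag):
        """When the load issues and its entry is a miss, cancel it and park it for reissue."""
        entry = self.entries[ltag]
        if entry["state"] == MISS:
            entry["state"] = WAITING            # effects of issuance are cancelled upstream
            return None
        return entry                            # hit (or ready) data can flow down the pipeline

    def on_l2_recycle(self, ltag, data):
        """L2 data may return in any order; the LTAG identifies the owning entry."""
        entry = self.entries[ltag]
        entry["data"] = data
        entry["state"] = READY
        self.reissue_queue.append(ltag)         # schedule the parked load for reissue
```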
Abstract:
A method, apparatus and computer program product are provided for implementing polymorphic reconfiguration of a cache size. A cache includes a plurality of physical sub-banks. A first cache configuration is provided, and a check identifies whether another cache configuration would yield improved performance. The cache size is then reconfigured to provide improved performance based upon the current workload.
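A possible shape of the check-and-resize loop is sketched below, assuming sub-banks are powered up or down in powers of two based on a miss-rate heuristic; the thresholds and check interval are illustrative, not taken from the abstract.

```python
# Hypothetical sketch of resizing a cache by enabling or disabling physical
# sub-banks based on a periodic workload check.

TOTAL_SUBBANKS = 8
CHECK_INTERVAL = 1_000_000      # accesses between reconfiguration checks

class ResizableCache:
    def __init__(self):
        self.active_subbanks = TOTAL_SUBBANKS
        self.accesses = 0
        self.misses = 0

    def record_access(self, hit):
        self.accesses += 1
        self.misses += int(not hit)
        if self.accesses >= CHECK_INTERVAL:
            self.maybe_resize()

    def maybe_resize(self):
        miss_rate = self.misses / self.accesses
        if miss_rate < 0.01 and self.active_subbanks > 1:
            self.active_subbanks //= 2          # workload fits: power down half the sub-banks
        elif miss_rate > 0.05 and self.active_subbanks < TOTAL_SUBBANKS:
            self.active_subbanks *= 2           # workload thrashing: bring sub-banks back online
        self.accesses = 0
        self.misses = 0
```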
Abstract:
An apparatus and method utilize compressed cache lines that incorporate embedded prefetch history data associated with those cache lines. In particular, by compressing at least a portion of the data in a cache line, additional space may be freed up in the cache line to embed prefetch history data associated with the data in the line. By doing so, long-lived prefetch history data may essentially be embedded in a cache line and retrieved along with that cache line to initiate the prefetching of additional data that is likely to be accessed based upon historical data generated for that cache line, often with little or no additional storage overhead.
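The packing idea can be sketched as follows, assuming zlib compression and a small list of line-address deltas as the embedded prefetch history; the formats, sizes, and thresholds are illustrative assumptions, not the patent's encoding.

```python
# Hypothetical sketch of embedding prefetch history in the space freed by
# compressing a cache line.

import zlib

LINE_SIZE = 128                 # bytes per cache line
HISTORY_SLOTS = 4               # prefetch deltas embedded when space allows
HISTORY_BYTES = HISTORY_SLOTS * 2

def pack_line(data: bytes, deltas: list[int]) -> bytes:
    """Compress the line; embed prefetch deltas only if the savings make room."""
    compressed = zlib.compress(data)
    if len(compressed) + HISTORY_BYTES < LINE_SIZE:
        history = b"".join(d.to_bytes(2, "little", signed=True)
                           for d in deltas[:HISTORY_SLOTS])
        history = history.ljust(HISTORY_BYTES, b"\x00")
        return compressed + history             # prefetch history rides inside the line
    return data                                 # incompressible: store uncompressed, no history

def unpack_line(stored: bytes, line_addr: int):
    """Recover the data and turn any embedded deltas into prefetch addresses."""
    if len(stored) < LINE_SIZE:                 # compressed form carries embedded history
        compressed, history = stored[:-HISTORY_BYTES], stored[-HISTORY_BYTES:]
        data = zlib.decompress(compressed)
        deltas = [int.from_bytes(history[i:i + 2], "little", signed=True)
                  for i in range(0, HISTORY_BYTES, 2)]
        prefetches = [line_addr + d * LINE_SIZE for d in deltas if d != 0]
        return data, prefetches
    return stored, []
```

On a fill of a compressed line, the recovered deltas can be handed to the prefetch engine so that the history travels with the data it describes rather than occupying a separate prefetch table.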