Abstract:
A method, system, and computer program product for providing lines of data from shared resources to caching agents are provided. The method, system, and computer program product provide for receiving a request from a caching agent for a line of data stored in a shared resource, assigning one of a plurality of coherency states as an initial coherency state for the line of data, each of the plurality of coherency states being assignable as the initial coherency state for the line of data, and providing the line of data to the caching agent in the initial coherency state assigned to the line of data.
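As an illustration of the assignment step described in this abstract, here is a minimal behavioral sketch in Python; the MESI-like state set, the assignment policy, and all names are assumptions for illustration, not taken from the patent:

```python
from enum import Enum

class CoherencyState(Enum):
    MODIFIED = "M"
    EXCLUSIVE = "E"
    SHARED = "S"
    INVALID = "I"

class SharedResource:
    """Hypothetical shared resource that tracks which agents hold each line."""

    def __init__(self):
        self.lines = {}    # address -> data
        self.holders = {}  # address -> set of caching agents holding the line

    def request_line(self, agent, address, intent_to_modify=False):
        data = self.lines.get(address, 0)
        holders = self.holders.setdefault(address, set())
        # Any of the coherency states may be assigned as the initial state;
        # this particular policy is one illustrative choice, not the
        # patented method.
        if intent_to_modify:
            initial_state = CoherencyState.MODIFIED
        elif not holders:
            initial_state = CoherencyState.EXCLUSIVE
        else:
            initial_state = CoherencyState.SHARED
        holders.add(agent)
        return data, initial_state
```

Under this sketch, any of the states can be returned as the initial coherency state of the line, depending on the request and on which agents already hold it.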
Abstract:
A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design is provided. The design structure includes an apparatus for administering an access conflict in a cache. The apparatus includes the cache, a cache controller, and a superscalar computer processor. The cache controller is capable of: receiving a write address and write data from the superscalar computer processor's store memory instruction execution unit, and a read address for read data from the superscalar computer processor's load memory instruction execution unit, where the write data is to be written to and the read data is to be read from the same cache line in the cache simultaneously on a current clock cycle; storing the write data in that cache line on the current clock cycle; stalling, in the load memory instruction execution unit, the corresponding load microinstruction; and reading the read data from the read address on a subsequent clock cycle.
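A minimal cycle-level model of the conflict handling this abstract describes, sketched in Python (a behavioral illustration with hypothetical names, not RTL): the store is written on the current cycle, the conflicting load microinstruction stalls, and the read is serviced on the next cycle.

```python
class CacheController:
    """Behavioral sketch: a same-line write/read conflict is resolved by
    storing now, stalling the load, and reading one cycle later."""

    def __init__(self):
        self.cache = {}            # cache line address -> data
        self.pending_read = None   # read address stalled on the last cycle

    def clock(self, write=None, read=None):
        """write: (address, data) from the store unit; read: address from
        the load unit. Returns read data, or None if the load stalled."""
        if self.pending_read is not None:      # replay the stalled load first
            addr, self.pending_read = self.pending_read, None
            return self.cache.get(addr)
        if write is not None:
            addr, data = write
            self.cache[addr] = data            # write on the current cycle
            if read == addr:                   # same cache line: conflict
                self.pending_read = read       # stall the load microinstruction
                return None
        return self.cache.get(read) if read is not None else None

ctrl = CacheController()
assert ctrl.clock(write=(0x40, "new"), read=0x40) is None   # load stalls
assert ctrl.clock() == "new"                                # read next cycle
```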
Abstract:
A computer system is provided that has a main memory and a plurality of processor agents each having a last level cache and a hot cache. Each processor agent is configured to store cache lines in the last level cache and the hot cache. The hot cache is configured to store cache lines in the hot coherency state. Cache lines in the hot coherency state are cache lines that have been read and modified. The hot cache is smaller in size than the last level cache to facilitate fast access to the cache lines in the hot coherency state in response to a future request to read with intent to modify. A bus connects each of the plurality of processor agents to the main memory.
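A rough Python sketch of the two-level arrangement this abstract describes (the capacities and all names are illustrative assumptions): lines that have been read and modified enter a small hot cache that is checked before the larger last level cache.

```python
from collections import OrderedDict

class ProcessorAgent:
    def __init__(self, hot_capacity=4, llc_capacity=64):
        # The hot cache is deliberately much smaller than the last level
        # cache (LLC) so lines in the hot coherency state can be found
        # quickly on a future read-with-intent-to-modify.
        self.hot = OrderedDict()   # lines that have been read and modified
        self.llc = OrderedDict()
        self.hot_capacity = hot_capacity
        self.llc_capacity = llc_capacity

    def read_with_intent_to_modify(self, address, data):
        # A line that is read and modified enters the hot coherency state.
        self.hot[address] = data
        self.hot.move_to_end(address)
        if len(self.hot) > self.hot_capacity:
            evicted, value = self.hot.popitem(last=False)
            self._install_llc(evicted, value)   # demote the oldest hot line

    def lookup(self, address):
        if address in self.hot:        # fast path for hot-state lines
            return self.hot[address]
        return self.llc.get(address)

    def _install_llc(self, address, data):
        self.llc[address] = data
        if len(self.llc) > self.llc_capacity:
            self.llc.popitem(last=False)
```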
Abstract:
Administering an access conflict in a computer memory cache, including: receiving, in a memory cache controller, a write address and write data from a store memory instruction execution unit of a superscalar computer processor, and a read address for read data from a load memory instruction execution unit of the superscalar computer processor, where the write data is to be written to and the read data is to be read from the same cache line in the computer memory cache simultaneously on a current clock cycle; storing, by the memory cache controller, the write data in that cache line on the current clock cycle; stalling, by the memory cache controller in the load memory instruction execution unit, the corresponding load microinstruction; and reading, by the memory cache controller on a subsequent clock cycle, the read data from the read address.
Abstract:
Data processing workload administration in a cloud computing environment, including: distributing, by a workload policy manager, data processing jobs among a plurality of clouds, each cloud comprising a network-based, distributed data processing system that provides one or more cloud computing services; deploying, by a job placement engine in each cloud, the data processing jobs distributed to that cloud onto servers in the cloud according to a workload execution policy; determining, by each job placement engine during execution of each data processing job, whether the workload execution policy for each deployed job continues to be met by computing resources within the cloud where the job is deployed; and advising, by each job placement engine, the workload policy manager when the workload execution policy for a particular job can no longer be met by computing resources within the cloud where that job is deployed.
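To make the division of labor concrete, here is a hypothetical Python sketch of a per-cloud job placement engine; the capacity-based policy check and all names are assumptions, since the abstract does not specify what the workload execution policy tests.

```python
class JobPlacementEngine:
    """Hypothetical per-cloud engine: deploys jobs under a workload
    execution policy and advises the workload policy manager when the
    policy can no longer be met within this cloud."""

    def __init__(self, cloud_name, servers, advise):
        self.cloud_name = cloud_name
        self.servers = servers   # server name -> free capacity
        self.advise = advise     # callback into the workload policy manager

    def deploy(self, job):
        # Illustrative policy: place the job on any server with enough
        # free capacity; report failure if no server qualifies.
        for server, free in self.servers.items():
            if free >= job["required_capacity"]:
                self.servers[server] = free - job["required_capacity"]
                job["server"] = server
                return True
        return False

    def monitor(self, job, current_capacity):
        # During execution, re-check the policy; if it can no longer be
        # met in this cloud, advise the workload policy manager.
        if current_capacity < job["required_capacity"]:
            self.advise(self.cloud_name, job)
```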
Abstract:
A design structure embodied in a machine readable storage medium for designing, manufacturing, and testing a system for providing lines of data from shared resources to caching agents is provided. The system provides for receiving a request from a caching agent for a line of data stored in a shared resource, assigning one of a plurality of coherency states as an initial coherency state for the line of data, each of the plurality of coherency states being assignable as the initial coherency state for the line of data, and providing the line of data to the caching agent in the initial coherency state assigned to the line of data.
Abstract:
A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design for shared cache eviction in a multi-core processing environment having a cache shared by a plurality of processor cores is provided. The design structure includes means for receiving from a processor core a request to load a cache line in the shared cache; means for determining whether the shared cache is full; means for determining, if the shared cache is full, whether a cache line is stored in the shared cache that has been accessed by fewer than all the processor cores sharing the cache; and means for evicting such a cache line if one is stored in the shared cache.
Abstract:
A system and method for dynamically selecting the data fetch path improve data access latency by adjusting data fetch paths based on application data fetch characteristics, which are determined through the use of a hit/miss tracker. This reduces data access latency for applications with a low data reuse rate (streaming audio, video, multimedia, games, etc.), improving overall application performance. The selection is dynamic in the sense that whenever the cache hit rate again becomes reasonable (a defined parameter), normal cache lookup operations resume. The hit/miss tracker tracks hits and misses against a cache; if the miss rate surpasses a prespecified rate or matches an application profile, the tracker causes the cache to be bypassed and the data to be pulled from main memory or another cache.
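A minimal Python sketch of a hit/miss tracker of this kind (the threshold, sampling window, and all names are illustrative assumptions):

```python
class HitMissTracker:
    """Bypasses the cache while the observed miss rate exceeds a
    prespecified threshold; resumes normal lookups once the hit rate
    becomes reasonable again (the 'defined parameter' above)."""

    def __init__(self, miss_rate_threshold=0.9, window=100):
        self.threshold = miss_rate_threshold
        self.window = window       # accesses per sampling interval
        self.accesses = 0
        self.misses = 0
        self.bypassing = False

    def record(self, hit):
        self.accesses += 1
        if not hit:
            self.misses += 1
        if self.accesses >= self.window:
            # Re-evaluate the bypass decision once per window.
            self.bypassing = (self.misses / self.accesses) > self.threshold
            self.accesses = self.misses = 0

def fetch(address, cache, memory, tracker):
    hit = address in cache
    tracker.record(hit)               # always measure, even while bypassing
    if hit and not tracker.bypassing:
        return cache[address]         # normal cache lookup path
    data = memory[address]            # bypass: pull straight from memory
    cache[address] = data             # keep filling so the hit rate can recover
    return data
```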
Abstract:
An apparatus, system, and method are disclosed for improving cache coherency processing. The method includes determining that a first processor in a multiprocessor system receives a cache miss. The method also includes determining whether an application associated with the cache miss is running on a single processor core and/or whether the application is running on two or more processor cores that share a cache. A cache coherency algorithm is executed in response to determining that the application associated with the cache miss is running on two or more processor cores that do not share a cache, and is skipped in response to determining that the application is running on either a single processor core or on two or more processor cores that share a cache.
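A small Python sketch of the decision this abstract describes; the representation of the core/cache topology is an assumption for illustration:

```python
def handle_cache_miss(app_cores, coherency_algorithm):
    """app_cores: list of (core_id, shared_cache_id) pairs for the cores
    the application associated with the miss is running on."""
    caches = {cache_id for _core, cache_id in app_cores}
    if len(app_cores) == 1 or len(caches) == 1:
        # Single core, or all cores share one cache: no other cache can
        # hold a stale copy, so skip the coherency algorithm.
        return None
    # The application's cores span two or more caches: run the algorithm.
    return coherency_algorithm()
```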
Abstract:
Methods and systems for shared cache eviction in a multi-core processing environment having a cache shared by a plurality of processor cores are provided. Embodiments include receiving from a processor core a request to load a cache line in the shared cache; determining whether the shared cache is full; determining, if the shared cache is full, whether a cache line is stored in the shared cache that has been accessed by fewer than all the processor cores sharing the cache; and evicting such a cache line if one is stored in the shared cache.
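As a sketch of the eviction policy shared by this abstract and the design structure abstract above, here is a minimal Python illustration (the fallback victim choice when every line has been accessed by all cores is an assumption, since the abstracts do not specify it):

```python
def choose_victim(shared_cache, num_cores):
    """shared_cache: address -> set of core ids that have accessed the line.
    Prefer evicting a line that not every sharing core has accessed."""
    for address, accessors in shared_cache.items():
        if len(accessors) < num_cores:
            return address    # accessed by fewer than all the cores
    # Fallback if every line is fully shared: evict an arbitrary line.
    return next(iter(shared_cache))

def load_line(shared_cache, capacity, num_cores, address, core_id):
    # Evict only when the shared cache is full and the line is new.
    if address not in shared_cache and len(shared_cache) >= capacity:
        del shared_cache[choose_victim(shared_cache, num_cores)]
    shared_cache.setdefault(address, set()).add(core_id)
```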