DYNAMIC INITIAL CACHE LINE COHERENCY STATE ASSIGNMENT IN MULTI-PROCESSOR SYSTEMS
    11.
    Patent Application · In force

    Publication No.: US20090019230A1

    Publication Date: 2009-01-15

    Application No.: US11775085

    Filing Date: 2007-07-09

    IPC Class: G06F12/08

    CPC Class: G06F12/0815

    Abstract: A method, system, and computer program product for providing lines of data from shared resources to caching agents are provided. The method, system, and computer program product provide for receiving a request from a caching agent for a line of data stored in a shared resource, assigning one of a plurality of coherency states as an initial coherency state for the line of data, each of the plurality of coherency states being assignable as the initial coherency state for the line of data, and providing the line of data to the caching agent in the initial coherency state assigned to the line of data.

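    A minimal sketch of the idea described above, assuming a MESI-style state set and a simple sharer-count heuristic for choosing the initial state (neither detail is given in the abstract); the class and method names are invented for illustration only.

    from enum import Enum

    class State(Enum):
        MODIFIED = "M"
        EXCLUSIVE = "E"
        SHARED = "S"
        INVALID = "I"

    class SharedResourceController:
        """Serves cache lines and chooses an initial coherency state per request."""

        def __init__(self):
            # line address -> set of caching agents known to hold a copy
            self.holders = {}

        def request_line(self, agent, address, intent_to_modify=False):
            """Return (data, initial_state) for the requesting caching agent.

            Any of the states is assignable as the initial state; the choice here
            is a simple heuristic: exclusive when no other agent holds the line,
            shared otherwise, modified on a read-with-intent-to-modify.
            """
            current = self.holders.setdefault(address, set())
            if intent_to_modify:
                state = State.MODIFIED
                current.clear()              # other copies would be invalidated
            elif not current:
                state = State.EXCLUSIVE      # sole copy in the system
            else:
                state = State.SHARED         # at least one other holder
            current.add(agent)
            data = f"<line@{address:#x}>"    # placeholder for the actual data
            return data, state

    ctrl = SharedResourceController()
    print(ctrl.request_line("cpu0", 0x1000))                         # EXCLUSIVE
    print(ctrl.request_line("cpu1", 0x1000))                         # SHARED
    print(ctrl.request_line("cpu2", 0x2000, intent_to_modify=True))  # MODIFIED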

    STRUCTURE FOR ADMINISTERING AN ACCESS CONFLICT IN A COMPUTER MEMORY CACHE
    12.
    Patent Application · Pending, published

    Publication No.: US20080201531A1

    Publication Date: 2008-08-21

    Application No.: US12105806

    Filing Date: 2008-04-18

    IPC Class: G06F12/08

    CPC Class: G06F12/0857

    Abstract: A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design is provided. The design structure includes an apparatus for administering an access conflict in a cache. The apparatus includes the cache, a cache controller, and a superscalar computer processor. The cache controller is capable of receiving a write address and write data from the superscalar computer processor's store memory instruction execution unit, and a read address for read data from the superscalar computer processor's load memory instruction execution unit, when data is to be written to and read from the same cache line in the cache simultaneously on a current clock cycle; storing the write data in that cache line on the current clock cycle; stalling, in the load memory instruction execution unit, the corresponding load microinstruction; and reading, on a subsequent clock cycle, the read data from the read address in the cache.

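    A cycle-level sketch of the conflict handling described above: when the store unit and the load unit touch the same cache line on the same clock cycle, the write is committed, the load microinstruction is stalled, and the read completes on the next cycle. The dictionary-based cache and the class name are assumptions, not the design structure's actual interfaces.

    class CacheController:
        """Resolves a same-cycle write/read conflict on one cache line."""

        def __init__(self):
            self.cache = {}           # cache line address -> data
            self.stalled_load = None  # read address waiting for the next cycle

        def clock(self, write=None, read_addr=None):
            """Advance one clock cycle.

            write     : optional (address, data) from the store execution unit
            read_addr : optional address from the load execution unit
            Returns the read data available on this cycle, or None.
            """
            # First complete a load that was stalled on the previous cycle.
            result = None
            if self.stalled_load is not None:
                result = self.cache.get(self.stalled_load)
                self.stalled_load = None

            if write is not None:
                addr, data = write
                self.cache[addr] = data          # the write always proceeds
                if read_addr == addr:
                    # Same cache line touched by both units: stall the load
                    # microinstruction; it reads on the next cycle.
                    self.stalled_load = read_addr
                    return result
            if read_addr is not None:
                result = self.cache.get(read_addr)
            return result

    ctrl = CacheController()
    print(ctrl.clock(write=(0x40, "new"), read_addr=0x40))  # cycle 1: None (load stalled)
    print(ctrl.clock())                                     # cycle 2: "new"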

    IMPLEMENTING A HOT COHERENCY STATE TO A CACHE COHERENCY PROTOCOL IN A SYMMETRIC MULTI-PROCESSOR ENVIRONMENT
    13.
    Patent Application · Pending, published

    Publication No.: US20080140942A1

    Publication Date: 2008-06-12

    Application No.: US11609510

    Filing Date: 2006-12-12

    IPC Class: G06F12/08

    Abstract: A computer system is provided that has a main memory and a plurality of processor agents each having a last level cache and a hot cache. Each processor agent is configured to store cache lines in the last level cache and the hot cache. The hot cache is configured to store cache lines in the hot coherency state. Cache lines in the hot coherency state are cache lines that have been read and modified. The hot cache is smaller in size than the last level cache to facilitate fast access to the cache lines in the hot coherency state in response to a future request to read with intent to modify. A bus connects each of the plurality of processor agents to the main memory.

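    A sketch, under assumed names and a toy LRU policy, of how a processor agent could mirror lines in the hot coherency state (lines that have been read and then modified) in a small side cache, so that a later read-with-intent-to-modify from another agent is answered from the small structure instead of the full last level cache; the patent's actual protocol transitions are not reproduced here.

    from collections import OrderedDict

    class ProcessorAgent:
        """Keeps a large last level cache (LLC) plus a small 'hot' cache."""

        def __init__(self, hot_capacity=4):
            self.llc = {}                        # address -> (data, state)
            self.hot = OrderedDict()             # small cache of HOT lines only
            self.hot_capacity = hot_capacity     # much smaller than the LLC

        def read(self, addr, data_from_memory):
            self.llc[addr] = (data_from_memory, "SHARED")

        def modify(self, addr, new_data):
            # A line that was read and is now modified enters the hot state
            # and is mirrored in the hot cache.
            self.llc[addr] = (new_data, "HOT")
            self.hot[addr] = new_data
            self.hot.move_to_end(addr)
            if len(self.hot) > self.hot_capacity:
                self.hot.popitem(last=False)     # evict the oldest hot line

        def snoop_rwitm(self, addr):
            """Answer another agent's read-with-intent-to-modify request."""
            if addr in self.hot:                 # fast path: small hot cache
                data = self.hot.pop(addr)
                self.llc.pop(addr, None)         # give up our copy
                return data
            if addr in self.llc:                 # slow path: full LLC lookup
                data, _ = self.llc.pop(addr)
                return data
            return None

    agent = ProcessorAgent()
    agent.read(0x80, "old")
    agent.modify(0x80, "new")
    print(agent.snoop_rwitm(0x80))               # "new", served from the hot cache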

    Administering An Access Conflict In A Computer Memory Cache
    14.
    Patent Application · Pending, published

    Publication No.: US20080082755A1

    Publication Date: 2008-04-03

    Application No.: US11536798

    Filing Date: 2006-09-29

    IPC Class: G06F13/28

    CPC Class: G06F12/0855

    Abstract: Administering an access conflict in a computer memory cache, including: receiving, in a memory cache controller, a write address and write data from a store memory instruction execution unit of a superscalar computer processor and a read address for read data from a load memory instruction execution unit of the superscalar computer processor, where the write data is to be written to and the read data read from the same cache line in the computer memory cache simultaneously on a current clock cycle; storing, by the memory cache controller, the write data in that cache line on the current clock cycle; stalling, by the memory cache controller, the corresponding load microinstruction in the load memory instruction execution unit; and reading, by the memory cache controller on a subsequent clock cycle, the read data from the read address in the computer memory cache.


    Data processing workload administration in a cloud computing environment
    15.
    Granted Patent · In force

    Publication No.: US08656019B2

    Publication Date: 2014-02-18

    Application No.: US12640078

    Filing Date: 2009-12-17

    IPC Class: G06F15/173

    CPC Class: G06F9/5088

    Abstract: Data processing workload administration in a cloud computing environment, including: distributing data processing jobs among a plurality of clouds, each cloud comprising a network-based, distributed data processing system that provides one or more cloud computing services; deploying, by a job placement engine in each cloud, the data processing jobs distributed to that cloud onto servers in the cloud according to a workload execution policy; determining, by each job placement engine during execution of each data processing job, whether the workload execution policy for each deployed job continues to be met by computing resources within the cloud where the job is deployed; and advising, by each job placement engine, a workload policy manager when the workload execution policy for a particular job cannot continue to be met by computing resources within the cloud where that job is deployed.

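    A high-level sketch of the control flow in this abstract: a per-cloud job placement engine deploys jobs, re-checks the workload execution policy during execution, and advises a workload policy manager when its cloud can no longer satisfy the policy. The CPU-count "policy" and all class names here are assumptions made purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        required_cpus: int   # stand-in for a real workload execution policy

    class WorkloadPolicyManager:
        def advise(self, cloud_name, job):
            print(f"policy manager: {cloud_name} can no longer meet policy for {job.name}")

    class JobPlacementEngine:
        """Per-cloud engine that deploys jobs and monitors policy compliance."""

        def __init__(self, cloud_name, free_cpus, manager):
            self.cloud_name = cloud_name
            self.free_cpus = free_cpus
            self.manager = manager
            self.deployed = []

        def deploy(self, job):
            # Deploy a job distributed to this cloud onto its servers.
            self.free_cpus -= job.required_cpus
            self.deployed.append(job)

        def check_policies(self):
            # During execution, verify that this cloud's resources still meet
            # each deployed job's policy; advise the manager when they do not.
            overcommitted = self.free_cpus < 0
            for job in self.deployed:
                if overcommitted:
                    self.manager.advise(self.cloud_name, job)

    manager = WorkloadPolicyManager()
    engine = JobPlacementEngine("cloud-a", free_cpus=8, manager=manager)
    engine.deploy(Job("render", required_cpus=6))
    engine.deploy(Job("analytics", required_cpus=4))   # overcommits the cloud
    engine.check_policies()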

    Structure for dynamic initial cache line coherency state assignment in multi-processor systems
    16.
    Granted Patent · In force

    Publication No.: US08131943B2

    Publication Date: 2012-03-06

    Application No.: US12114788

    Filing Date: 2008-05-04

    IPC Class: G06F12/00

    CPC Class: G06F12/0822

    Abstract: A design structure embodied in a machine readable storage medium for designing, manufacturing, and testing a system for providing lines of data from shared resources to caching agents is provided. The system provides for receiving a request from a caching agent for a line of data stored in a shared resource, assigning one of a plurality of coherency states as an initial coherency state for the line of data, each of the plurality of coherency states being assignable as the initial coherency state for the line of data, and providing the line of data to the caching agent in the initial coherency state assigned to the line of data.


    Structure for shared cache eviction
    17.
    Granted Patent · In force

    Publication No.: US08065487B2

    Publication Date: 2011-11-22

    Application No.: US12113306

    Filing Date: 2008-05-01

    IPC Class: G06F12/00

    Abstract: A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing for shared cache eviction in a multi-core processing environment having a cache shared by a plurality of processor cores is provided. The design structure includes means for receiving from a processor core a request to load a cache line in the shared cache; means for determining whether the shared cache is full; means for determining, if the shared cache is full, whether a cache line that has been accessed by fewer than all the processor cores sharing the cache is stored in the shared cache; and means for evicting such a cache line if one is stored in the shared cache.

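    A minimal sketch of the eviction preference described here (and in the corresponding method entry below): when the shared cache is full, the victim is chosen from lines that have been accessed by fewer than all of the sharing cores. The dictionary-based cache and the oldest-line fallback are assumptions for illustration.

    from collections import OrderedDict

    class SharedCache:
        """Shared cache that prefers evicting lines not touched by every core."""

        def __init__(self, capacity, num_cores):
            self.capacity = capacity
            self.num_cores = num_cores
            self.lines = OrderedDict()   # address -> set of cores that accessed it

        def load(self, core, addr):
            """Handle a request from `core` to load the line at `addr`."""
            if addr in self.lines:
                self.lines[addr].add(core)
                self.lines.move_to_end(addr)
                return
            if len(self.lines) >= self.capacity:       # cache full: pick a victim
                victim = next(
                    (a for a, cores in self.lines.items()
                     if len(cores) < self.num_cores),  # fewer than all cores
                    next(iter(self.lines)),            # fallback: oldest line
                )
                del self.lines[victim]
            self.lines[addr] = {core}

    cache = SharedCache(capacity=2, num_cores=2)
    cache.load(0, 0xA)                          # line A accessed by core 0 only
    cache.load(0, 0xB); cache.load(1, 0xB)      # line B accessed by both cores
    cache.load(1, 0xC)                          # full: evict A, keep B
    print(sorted(hex(a) for a in cache.lines))  # ['0xb', '0xc']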

    System and method for dynamically selecting the fetch path of data for improving processor performance
    18.
    Granted Patent · Expired

    Publication No.: US07865669B2

    Publication Date: 2011-01-04

    Application No.: US11832803

    Filing Date: 2007-08-02

    IPC Class: G06F12/08

    CPC Class: G06F12/0888

    Abstract: A system and method for dynamically selecting the data fetch path improve processor performance by reducing data access latency: data fetch paths are adjusted dynamically based on the application's data fetch characteristics, which are determined with a hit/miss tracker. This reduces data access latency for applications with a low data reuse rate (streaming audio, video, multimedia, games, etc.), improving overall application performance. The approach is dynamic in the sense that whenever the cache hit rate again becomes reasonable (a defined parameter), normal cache lookup operations resume. The hit/miss tracker tracks hits and misses against a cache; if the miss rate surpasses a prespecified rate or matches an application profile, the tracker causes the cache to be bypassed and the data to be pulled from main memory or another cache, thereby improving overall application performance.

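    A sketch of the hit/miss tracker behaviour described above, with assumed thresholds and window size: once the observed miss rate passes a configured rate the cache lookup path is bypassed and data is pulled straight from the next level, and normal lookups resume when the miss rate falls back to a reasonable level.

    class HitMissTracker:
        """Tracks the cache miss rate and decides whether to bypass the cache."""

        def __init__(self, bypass_miss_rate=0.9, resume_miss_rate=0.5, window=100):
            self.bypass_miss_rate = bypass_miss_rate
            self.resume_miss_rate = resume_miss_rate
            self.window = window
            self.events = []            # True for a miss, False for a hit
            self.bypassing = False

        def record(self, miss):
            self.events.append(miss)
            if len(self.events) > self.window:
                self.events.pop(0)
            miss_rate = sum(self.events) / len(self.events)
            # Low-reuse workloads (streaming audio/video, games) drive the miss
            # rate up; bypass the cache until the rate becomes reasonable again.
            if not self.bypassing and miss_rate >= self.bypass_miss_rate:
                self.bypassing = True
            elif self.bypassing and miss_rate <= self.resume_miss_rate:
                self.bypassing = False

    def fetch(addr, cache, memory, tracker):
        """Fetch data, bypassing the cache when the tracker says so."""
        hit = addr in cache
        tracker.record(miss=not hit)     # keep tracking so lookups can resume
        if tracker.bypassing:
            return memory[addr]          # pull straight from the next level
        if hit:
            return cache[addr]
        data = memory[addr]
        cache[addr] = data               # normal fill on a miss
        return data

    tracker = HitMissTracker(bypass_miss_rate=0.8, window=10)
    cache, memory = {}, {a: f"data{a}" for a in range(1000)}
    for a in range(20):                  # streaming access pattern: no reuse
        fetch(a, cache, memory, tracker)
    print("bypassing cache:", tracker.bypassing)   # True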

    APPARATUS, SYSTEM, AND METHOD FOR CACHE COHERENCY ELIMINATION
    19.
    Patent Application · Pending, published

    Publication No.: US20100332763A1

    Publication Date: 2010-12-30

    Application No.: US12495176

    Filing Date: 2009-06-30

    IPC Class: G06F12/08 G06F12/00

    Abstract: An apparatus, system, and method are disclosed for improving cache coherency processing. The method includes determining that a first processor in a multiprocessor system receives a cache miss. The method also includes determining whether an application associated with the cache miss is running on a single processor core and/or whether the application is running on two or more processor cores that share a cache. A cache coherency algorithm is executed in response to determining that the application associated with the cache miss is running on two or more processor cores that do not share a cache, and is skipped in response to determining that the application associated with the cache miss is running on one of a single processor core and two or more processor cores that share a cache.

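    An illustrative sketch of the decision described above: on a cache miss, the coherency algorithm runs only when the application that owns the miss executes on two or more cores that do not share a cache. The core-set topology representation and the function names are assumptions.

    def needs_coherency(app_cores, shared_cache_groups):
        """Decide whether the coherency algorithm can be skipped for a miss.

        app_cores          : set of cores the application runs on
        shared_cache_groups: list of sets; cores in one set share a cache
        """
        if len(app_cores) <= 1:
            return False                 # single core: no other copies possible
        for group in shared_cache_groups:
            if app_cores <= group:
                return False             # all of the cores share one cache
        return True                      # cores with separate caches: run it

    def handle_cache_miss(app_cores, shared_cache_groups):
        if needs_coherency(app_cores, shared_cache_groups):
            print("running cache coherency algorithm")
        else:
            print("skipping cache coherency algorithm")

    groups = [{0, 1}, {2, 3}]            # two core pairs, each sharing a cache
    handle_cache_miss({0}, groups)       # single core -> skip
    handle_cache_miss({2, 3}, groups)    # cores share a cache -> skip
    handle_cache_miss({1, 2}, groups)    # separate caches -> run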

    Shared cache eviction
    20.
    Granted Patent · In force

    Publication No.: US07840759B2

    Publication Date: 2010-11-23

    Application No.: US11689265

    Filing Date: 2007-03-21

    IPC Class: G06F12/00

    CPC Class: G06F12/084 G06F12/121

    Abstract: Methods and systems for shared cache eviction in a multi-core processing environment having a cache shared by a plurality of processor cores are provided. Embodiments include receiving from a processor core a request to load a cache line in the shared cache; determining whether the shared cache is full; determining, if the shared cache is full, whether a cache line that has been accessed by fewer than all the processor cores sharing the cache is stored in the shared cache; and evicting such a cache line if one is stored in the shared cache.
