Super-coherent data mechanisms for shared caches in a multiprocessing system
    91.
    Invention grant (In force)

    Publication No.: US06658539B2

    Publication date: 2003-12-02

    Application No.: US09978353

    Filing date: 2001-10-16

    IPC classification: G06F12/00

    CPC classification: G06F12/0831 G06F12/084

    Abstract: A method for improving performance of a multiprocessor data processing system having processor groups with shared caches. When a processor within a processor group that shares a cache snoops a modification to a shared cache line in a cache of another processor that is not within the processor group, the coherency state of the shared cache line within the first cache is set to a first coherency state indicating that the cache line has been modified by a processor outside the processor group and has not yet been updated within the group's cache. When a request for the cache line is later issued by a processor, the request is issued to the system bus or interconnect. If a received response to the request indicates that the processor should utilize super-coherent data, the coherency state of the cache line is set to a processor-specific super-coherency state. This state indicates that subsequent requests for the cache line by the first processor should be provided with said super-coherent data, while a subsequent request for the cache line by a next processor in the processor group that has not yet issued a request for the cache line on the system bus may still go to the system bus to request the cache line. The individualized, processor-specific super-coherency states are set individually but are usually changed to another coherency state (e.g., Modified or Invalid) as a group.

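    Below is a minimal C sketch of the mechanism this abstract describes, written for illustration only; the state names, the group size, and the boolean bus-response encoding are assumptions, not the patented implementation.

```c
/* Illustrative sketch only (not the patented implementation): per-processor
 * tracking of the super-coherent state described in the abstract. */
#include <stdbool.h>
#include <stdio.h>

#define GROUP_SIZE 4   /* processors sharing one cache; assumed value */

typedef enum {
    LINE_SHARED,          /* normally coherent copy */
    LINE_MOD_ELSEWHERE,   /* modified outside the group, not yet refreshed */
    LINE_MODIFIED,
    LINE_INVALID
} line_state_t;

typedef struct {
    line_state_t state;
    bool super_coherent[GROUP_SIZE];  /* one flag per sharing processor */
} shared_line_t;

/* A processor outside the group was snooped modifying the line. */
static void snoop_external_modify(shared_line_t *l) {
    l->state = LINE_MOD_ELSEWHERE;
    for (int p = 0; p < GROUP_SIZE; p++) l->super_coherent[p] = false;
}

/* Processor p requests the line; bus_says_super models the response
 * received when the request goes out on the system bus. Returns true if
 * the request is satisfied from the shared cache without a bus request. */
static bool read_line(shared_line_t *l, int p, bool bus_says_super) {
    if (l->state == LINE_MOD_ELSEWHERE && !l->super_coherent[p]) {
        if (bus_says_super)
            l->super_coherent[p] = true;  /* later reads by p use local data */
        return false;                     /* this request went to the bus */
    }
    return true;
}

/* The per-processor states are cleared as a group on a transition to
 * another coherency state such as Modified or Invalid. */
static void leave_super_coherent(shared_line_t *l, line_state_t next) {
    l->state = next;
    for (int p = 0; p < GROUP_SIZE; p++) l->super_coherent[p] = false;
}

int main(void) {
    shared_line_t line = { LINE_SHARED, { false } };
    snoop_external_modify(&line);
    printf("P0 local hit? %d\n", read_line(&line, 0, true));   /* 0: bus */
    printf("P0 local hit? %d\n", read_line(&line, 0, true));   /* 1: super-coherent */
    printf("P1 local hit? %d\n", read_line(&line, 1, false));  /* 0: still goes to bus */
    leave_super_coherent(&line, LINE_INVALID);
    return 0;
}
```

    One processor can keep consuming its locally cached super-coherent copy while a group peer that has not yet gone to the bus still issues its own request, and the flags clear together when the line moves to Modified or Invalid.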

    Storage access authorization controls in a computer system using dynamic translation of large addresses
    92.
    Invention grant (Expired)

    Publication No.: US5577231A

    Publication date: 1996-11-19

    Application No.: US349771

    Filing date: 1994-12-06

    IPC classification: G06F9/455

    CPC classification: G06F9/45537

    Abstract: A method of using the DAT mechanism in a computer processor both 1) to extend the native storage access authorization architecture of the processor and 2) to enable the processor to execute programs designed to operate under different storage access architectures. An executing program (called a source program) uses "source effective addresses" (source EAs) to locate its instructions and storage operands while executing on the processor (called the target processor).

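    As an illustration of the kind of translation-plus-authorization check the abstract refers to, here is a hedged C sketch; the table layout, field names, and 4 KiB page size are assumptions rather than the patented DAT format.

```c
/* Illustrative sketch: a dynamic-address-translation entry that both
 * relocates a source effective address and carries authorization bits.
 * Field names and sizes are assumptions, not the patented format. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t source_page;   /* source EA page number */
    uint64_t target_frame;  /* target real frame number */
    bool     readable;
    bool     writable;
} dat_entry_t;

/* Translate a source EA; return false if unmapped or not authorized. */
static bool dat_translate(const dat_entry_t *tbl, int n, uint64_t src_ea,
                          bool is_store, uint64_t *target_addr) {
    uint64_t page = src_ea >> 12, offset = src_ea & 0xFFF;
    for (int i = 0; i < n; i++) {
        if (tbl[i].source_page == page) {
            if (is_store ? !tbl[i].writable : !tbl[i].readable)
                return false;  /* access-authorization exception */
            *target_addr = (tbl[i].target_frame << 12) | offset;
            return true;
        }
    }
    return false;              /* translation exception */
}

int main(void) {
    dat_entry_t tbl[] = { { 0x10, 0x99, true, false } };  /* read-only page */
    uint64_t ta;
    printf("load ok=%d\n",  dat_translate(tbl, 1, 0x10ABCULL, false, &ta));
    printf("store ok=%d\n", dat_translate(tbl, 1, 0x10ABCULL, true,  &ta));
    return 0;
}
```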

    Selective cache-to-cache lateral castouts
    93.
    Invention grant (In force)

    Publication No.: US09189403B2

    Publication date: 2015-11-17

    Application No.: US12650018

    Filing date: 2009-12-30

    IPC classification: G06F12/00 G06F12/08 G06F12/12

    CPC classification: G06F12/0811 G06F12/12

    Abstract: A data processing system includes first and second processing units and a system memory. The first processing unit has first upper and first lower level caches, and the second processing unit has second upper and lower level caches. In response to a data request, a victim cache line to be cast out from the first lower level cache is selected, and the first lower level cache selects between performing a lateral castout (LCO) of the victim cache line to the second lower level cache and performing a castout of the victim cache line to the system memory, based upon a confidence indicator associated with the victim cache line. In response to selecting an LCO, the first processing unit issues an LCO command on the interconnect fabric and removes the victim cache line from the first lower level cache, and the second lower level cache holds the victim cache line.

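    A small C sketch of the destination choice described above; the confidence encoding and the threshold are assumptions made for illustration.

```c
/* Sketch of the castout-destination choice: the victim line's confidence
 * indicator decides between a lateral castout (LCO) to a peer lower level
 * cache and a castout to system memory. Threshold value is assumed. */
#include <stdio.h>

typedef enum { DEST_PEER_L3, DEST_SYSTEM_MEMORY } castout_dest_t;

/* A higher confidence value models a line judged likely to be re-used. */
static castout_dest_t choose_castout(unsigned confidence, unsigned threshold) {
    return (confidence >= threshold) ? DEST_PEER_L3 : DEST_SYSTEM_MEMORY;
}

int main(void) {
    /* Likely-reused victim: issue an LCO command on the interconnect and
     * let the peer lower level cache hold the line. */
    printf("%s\n", choose_castout(3, 2) == DEST_PEER_L3
                       ? "LCO to peer lower level cache"
                       : "castout to system memory");
    /* Low-confidence victim goes to system memory instead. */
    printf("%s\n", choose_castout(0, 2) == DEST_PEER_L3
                       ? "LCO to peer lower level cache"
                       : "castout to system memory");
    return 0;
}
```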

    Data cache block deallocate requests in a multi-level cache hierarchy
    94.
    Invention grant (In force)

    Publication No.: US08874852B2

    Publication date: 2014-10-28

    Application No.: US13433048

    Filing date: 2012-03-28

    IPC classification: G06F12/08

    Abstract: In response to executing a deallocate instruction, a deallocation request specifying a target address of a target cache line is sent from a processor core to a lower level cache. In response, a determination is made as to whether the target address hits in the lower level cache. If so, the target cache line is retained in a data array of the lower level cache, and a replacement order field of the lower level cache is updated such that the target cache line is more likely to be evicted in response to a subsequent cache miss in a congruence class including the target cache line. In response to the subsequent cache miss, the target cache line is cast out to the lower level cache with an indication that the target cache line was a target of a previous deallocation request of the processor core.

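    The following C sketch illustrates the replacement-order update the abstract describes: on a hit, the target line's data is kept but its position is demoted to least recently used, and a per-way hint records that it was a deallocation target so a later castout can carry that indication. The 4-way set and the age encoding are assumptions.

```c
/* Sketch of the deallocation hint: on a hit, keep the data but make the
 * line the preferred eviction victim in its congruence class. */
#include <stdint.h>
#include <stdio.h>

#define WAYS 4

typedef struct {
    uint64_t tag[WAYS];
    int      age[WAYS];          /* 0 = most recently used, WAYS-1 = LRU */
    int      dealloc_hint[WAYS]; /* remembered so a later castout can say
                                    "this was a deallocation target" */
} cong_class_t;

static void handle_deallocate(cong_class_t *c, uint64_t tag) {
    for (int w = 0; w < WAYS; w++) {
        if (c->tag[w] == tag) {            /* hit: keep data, demote to LRU */
            int old = c->age[w];
            for (int v = 0; v < WAYS; v++)
                if (c->age[v] > old) c->age[v]--;   /* close the gap */
            c->age[w] = WAYS - 1;
            c->dealloc_hint[w] = 1;
            return;
        }
    }
    /* miss: nothing to do */
}

int main(void) {
    cong_class_t c = { {1, 2, 3, 4}, {0, 1, 2, 3}, {0} };
    handle_deallocate(&c, 2);              /* line with tag 2 becomes LRU */
    for (int w = 0; w < WAYS; w++)
        printf("tag %llu age %d hint %d\n",
               (unsigned long long)c.tag[w], c.age[w], c.dealloc_hint[w]);
    return 0;
}
```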

    Coordinated writeback of dirty cachelines
    95.
    Invention grant (In force)

    Publication No.: US08838901B2

    Publication date: 2014-09-16

    Application No.: US12775510

    Filing date: 2010-05-07

    IPC classification: G06F12/00 G06F12/08

    Abstract: A data processing system includes a processor core and a cache memory hierarchy coupled to the processor core. The cache memory hierarchy includes at least one upper level cache and a lowest level cache. A memory controller is coupled to the lowest level cache and to a system memory and includes a physical write queue from which the memory controller writes data to the system memory. The memory controller initiates accesses to the lowest level cache to place, into the physical write queue, selected cachelines having spatial locality with data already present in the physical write queue.

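    A hedged C sketch of the coordination step: for each address already in the physical write queue, additional dirty lines from the lowest level cache that fall in the same DRAM row are pulled into the queue. Row granularity and the fixed-size arrays are assumptions for illustration.

```c
/* Sketch of coordinated writeback: gather dirty cachelines from the lowest
 * level cache that share spatial locality (here, a DRAM row) with data
 * already in the memory controller's physical write queue. */
#include <stdint.h>
#include <stdio.h>

#define ROW_SHIFT 12   /* assume a 4 KiB DRAM row for illustration */

static int same_row(uint64_t a, uint64_t b) {
    return (a >> ROW_SHIFT) == (b >> ROW_SHIFT);
}

/* dirty[]: addresses of dirty lines in the lowest level cache.
 * queue[]: addresses already in the physical write queue.
 * Returns how many dirty lines were appended to the queue. */
static int gather_coordinated(const uint64_t *dirty, int ndirty,
                              uint64_t *queue, int nqueue, int qcap) {
    int added = 0;
    for (int d = 0; d < ndirty; d++)
        for (int q = 0; q < nqueue; q++)
            if (same_row(dirty[d], queue[q]) && nqueue + added < qcap) {
                queue[nqueue + added++] = dirty[d];
                break;
            }
    return added;
}

int main(void) {
    uint64_t dirty[] = { 0x1040, 0x2000, 0x10C0 };
    uint64_t queue[8] = { 0x1000 };
    int added = gather_coordinated(dirty, 3, queue, 1, 8);
    printf("pulled %d line(s) with spatial locality\n", added); /* prints 2 */
    return 0;
}
```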

    Aggregate data processing system having multiple overlapping synthetic computers
    96.
    Invention grant (In force)

    Publication No.: US08656128B2

    Publication date: 2014-02-18

    Application No.: US13599856

    Filing date: 2012-08-30

    IPC classification: G06F12/00

    Abstract: A first SMP computer has first and second processing units and a first system memory pool, a second SMP computer has third and fourth processing units and a second system memory pool, and a third SMP computer has at least fifth and sixth processing units and third, fourth and fifth system memory pools. The fourth system memory pool is inaccessible to the third, fourth and sixth processing units and accessible to at least the second and fifth processing units, and the fifth system memory pool is inaccessible to the first, second and sixth processing units and accessible to at least the fourth and fifth processing units. A first interconnect couples the second processing unit for load-store coherent, ordered access to the fourth system memory pool, and a second interconnect couples the fourth processing unit for load-store coherent, ordered access to the fifth system memory pool.

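    The access topology in the abstract can be summarized as a lookup table; the sketch below encodes only the accesses the abstract names explicitly (entries the abstract qualifies with "at least" are left at their stated minimum), so it is an illustration rather than the patented configuration.

```c
/* Sketch of the pool-accessibility topology described in the abstract.
 * Pools 4 and 5 are shared only by specific processing units across SMP
 * boundaries, forming the overlapping "synthetic" computers. Local access
 * for pools 1-3 is assumed. */
#include <stdbool.h>
#include <stdio.h>

enum { PU1, PU2, PU3, PU4, PU5, PU6, NUM_PU };
enum { POOL1, POOL2, POOL3, POOL4, POOL5, NUM_POOL };

static const bool accessible[NUM_POOL][NUM_PU] = {
    /* pool 1: first SMP  */ { 1, 1, 0, 0, 0, 0 },
    /* pool 2: second SMP */ { 0, 0, 1, 1, 0, 0 },
    /* pool 3: third SMP  */ { 0, 0, 0, 0, 1, 1 },
    /* pool 4 */             { 0, 1, 0, 0, 1, 0 },  /* PU2 and PU5 only */
    /* pool 5 */             { 0, 0, 0, 1, 1, 0 },  /* PU4 and PU5 only */
};

int main(void) {
    printf("PU2 -> pool4: %d (load-store coherent over first interconnect)\n",
           accessible[POOL4][PU2]);
    printf("PU6 -> pool4: %d\n", accessible[POOL4][PU6]);
    return 0;
}
```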

    DATA CACHE BLOCK DEALLOCATE REQUESTS
    97.
    Invention application (In force)

    Publication No.: US20130262777A1

    Publication date: 2013-10-03

    Application No.: US13433022

    Filing date: 2012-03-28

    IPC classification: G06F12/12

    Abstract: A data processing system includes a processor core supported by upper and lower level caches. In response to executing a deallocate instruction in the processor core, a deallocation request is sent from the processor core to the lower level cache, the deallocation request specifying a target address associated with a target cache line. In response to receipt of the deallocation request at the lower level cache, a determination is made as to whether the target address hits in the lower level cache. In response to determining that the target address hits in the lower level cache, the target cache line is retained in a data array of the lower level cache and a replacement order field in a directory of the lower level cache is updated such that the target cache line is more likely to be evicted from the lower level cache in response to a subsequent cache miss.


    Memory coherence directory supporting remotely sourced requests of nodal scope
    98.
    Invention grant (Expired)

    Publication No.: US08504779B2

    Publication date: 2013-08-06

    Application No.: US13445010

    Filing date: 2012-04-12

    IPC classification: G06F13/00 G06F13/28

    CPC classification: G06F12/0817

    Abstract: A data processing system includes at least first through third processing nodes coupled by an interconnect fabric. The first processing node includes a master, a plurality of snoopers capable of participating in interconnect operations, and a node interface that receives a request of the master and transmits the request of the master to the second processing node with a nodal scope of transmission limited to the second processing node. The second processing node includes a node interface having a directory. The node interface of the second processing node permits the request to proceed with the nodal scope of transmission if the directory does not indicate that a target memory block of the request is cached other than in the second processing node, and prevents the request from succeeding if the directory indicates that the target memory block of the request is cached other than in the second processing node.

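    A brief C sketch of the filtering decision at the second node's interface; the directory encoding (a single cached-outside-node bit per block) is an assumption for illustration.

```c
/* Sketch: the node interface's directory lets a remotely sourced,
 * node-scoped request proceed only if the target block is not cached
 * outside this node; otherwise the request is forced to fail. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned long block;
    bool cached_outside_node;   /* any copy held beyond this node? */
} dir_entry_t;

/* Returns true if the nodal-scope request may proceed, false if the node
 * interface must prevent it from succeeding. */
static bool permit_nodal_request(const dir_entry_t *dir, int n,
                                 unsigned long block) {
    for (int i = 0; i < n; i++)
        if (dir[i].block == block)
            return !dir[i].cached_outside_node;
    return true;   /* no record of an outside copy */
}

int main(void) {
    dir_entry_t dir[] = { { 0x100, false }, { 0x200, true } };
    printf("block 0x100: %s\n",
           permit_nodal_request(dir, 2, 0x100) ? "proceed" : "fail");
    printf("block 0x200: %s\n",
           permit_nodal_request(dir, 2, 0x200) ? "proceed" : "fail");
    return 0;
}
```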

    Handling castout cache lines in a victim cache
    99.
    Invention grant (Expired)

    Publication No.: US08499124B2

    Publication date: 2013-07-30

    Application No.: US12336048

    Filing date: 2008-12-16

    IPC classification: G06F12/00

    Abstract: A victim cache memory includes a cache array, a cache directory of contents of the cache array, and a cache controller that controls operation of the victim cache memory. The cache controller, responsive to receiving a castout command identifying a victim cache line cast out from another cache memory, causes the victim cache line to be held in the cache array. If the other cache memory is a higher level cache in the cache hierarchy of the processor core, the cache controller marks the victim cache line in the cache directory so that it is less likely to be evicted by a replacement policy of the victim cache, and otherwise marks the victim cache line in the cache directory so that it is more likely to be evicted by the replacement policy of the victim cache.

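    A minimal C sketch of the insertion policy the abstract describes; the 4-way victim cache and the age-based replacement-order encoding are assumptions.

```c
/* Sketch: a castout arriving from a higher level cache of the same core is
 * installed so the replacement policy keeps it longer (near MRU), while
 * other castouts are installed near LRU and are evicted sooner. */
#include <stdbool.h>
#include <stdio.h>

#define WAYS 4   /* 0 = MRU position, WAYS-1 = LRU position */

/* Returns the replacement-order position at which to install the line. */
static int install_position(bool from_higher_level_of_this_core) {
    return from_higher_level_of_this_core ? 0 : WAYS - 1;
}

int main(void) {
    printf("castout from own higher level cache -> position %d (kept longer)\n",
           install_position(true));
    printf("other castout -> position %d (evicted sooner)\n",
           install_position(false));
    return 0;
}
```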

    Synchronizing access to data in shared memory via upper level cache queuing
    100.
    Invention grant (Expired)

    Publication No.: US08327074B2

    Publication date: 2012-12-04

    Application No.: US13445080

    Filing date: 2012-04-12

    IPC classification: G06F12/00

    Abstract: A processing unit includes a store-in lower level cache having reservation logic that determines presence or absence of a reservation, and a processor core including a store-through upper level cache, an instruction execution unit, a load unit that, responsive to a hit in the upper level cache on a load-reserve operation generated through execution of a load-reserve instruction by the instruction execution unit, temporarily buffers a load target address of the load-reserve operation, and a flag indicating that the load-reserve operation bound to a value in the upper level cache. If a storage-modifying operation is received that conflicts with the load target address of the load-reserve operation, the processor core sets the flag to a particular state and, responsive to execution of a store-conditional instruction, transmits an associated store-conditional operation to the lower level cache with a fail indication if the flag is set to the particular state.

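    To illustrate the flag-based sequencing in the abstract, here is a hedged C sketch; the reservation-buffer structure and function names are assumptions, not the patented logic.

```c
/* Sketch: a load-reserve that hits in the store-through upper level cache
 * buffers its target address with a flag; a conflicting storage-modifying
 * operation sets the flag, and the later store-conditional is then sent to
 * the lower level cache with a fail indication. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    bool     valid;
    uint64_t addr;     /* load target address of the load-reserve */
    bool     conflict; /* set when a conflicting store is observed */
} reserve_buf_t;

static void load_reserve_hit(reserve_buf_t *b, uint64_t addr) {
    b->valid = true;  b->addr = addr;  b->conflict = false;
}

static void observe_store(reserve_buf_t *b, uint64_t addr) {
    if (b->valid && b->addr == addr)
        b->conflict = true;            /* reservation is now suspect */
}

/* Returns true if the store-conditional is transmitted to the lower level
 * cache with a fail indication (i.e. it cannot succeed). */
static bool store_conditional(reserve_buf_t *b) {
    bool fail = !b->valid || b->conflict;
    b->valid = false;
    return fail;
}

int main(void) {
    reserve_buf_t b = { false, 0, false };
    load_reserve_hit(&b, 0x80);
    observe_store(&b, 0x80);           /* conflicting update arrives */
    printf("store-conditional sent with fail indication: %d\n",
           store_conditional(&b));
    return 0;
}
```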