51. Protecting ownership transfer with non-uniform protection windows
    Invention Grant (In Force)

    Publication No.: US08205024B2

    Publication Date: 2012-06-19

    Application No.: US11560619

    Filing Date: 2006-11-16

    IPC Classes: G06F3/00 G06F13/00

    CPC Classes: G06F15/173

    Abstract: In a data processing system, a plurality of agents communicate operations therebetween. Each operation includes a request and a combined response representing a system-wide response to the request. Latencies of requests and combined responses between the plurality of agents are observed. Each of the plurality of agents is configured with a respective duration of a protection window extension by reference to the observed latencies. Each protection window extension is a period following receipt of a combined response during which an associated one of the plurality of agents protects transfer of coherency ownership of a data granule between agents. The plurality of agents employ protection window extensions in accordance with the configuration, and at least two of the agents have protection window extensions of differing durations.

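    The sketch below is an illustrative reading of the abstract, not the patented method: each agent derives its own protection window extension from observed combined-response latencies, so an agent that sees the combined response early keeps protecting ownership long enough to cover the slowest peer. The agent count and latency figures are made-up example data.

        /* Illustrative only: derive a per-agent protection window extension from
         * observed combined-response latencies.  Each agent extends its window far
         * enough to cover the worst-case lag between its own receipt of the combined
         * response and any peer's receipt, so the extensions are non-uniform. */
        #include <stdio.h>

        #define NUM_AGENTS 4

        int main(void) {
            /* Assumed example data: observed combined-response delivery latency,
             * in cycles, from the response point to each agent. */
            int cresp_latency[NUM_AGENTS] = {3, 7, 5, 12};

            for (int i = 0; i < NUM_AGENTS; i++) {
                int worst_lag = 0;
                for (int j = 0; j < NUM_AGENTS; j++) {
                    int lag = cresp_latency[j] - cresp_latency[i];
                    if (lag > worst_lag)
                        worst_lag = lag;
                }
                printf("agent %d: protection window extension = %d cycles\n",
                       i, worst_lag);
            }
            return 0;
        }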

52. FORMATION OF AN EXCLUSIVE OWNERSHIP COHERENCE STATE IN A LOWER LEVEL CACHE
    Invention Application (In Force)

    Publication No.: US20110161588A1

    Publication Date: 2011-06-30

    Application No.: US12649725

    Filing Date: 2009-12-30

    IPC Classes: G06F12/08 G06F12/00

    Abstract: In response to a memory access request of a processor core that targets a target cache line, the lower level cache of a vertical cache hierarchy associated with the processor core supplies a copy of the target cache line to an upper level cache in the vertical cache hierarchy and retains a copy in a shared coherence state. The upper level cache holds the copy of the target cache line in a private shared ownership coherence state indicating that each cached copy of the target memory block is cached within the vertical cache hierarchy associated with the processor core. In response to the upper level cache signaling replacement of the copy of the target cache line in the private shared ownership coherence state, the lower level cache updates its copy of the target cache line to the exclusive ownership coherence state without coherency messaging with other vertical cache hierarchies.

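    A minimal sketch of the transition described above, using assumed coherence state names rather than the patent's: the lower level cache keeps a shared copy while the upper level cache holds the line in a private shared ownership state, then silently upgrades to exclusive ownership when the upper level cache signals replacement.

        /* Illustrative state names, not the patent's.  The lower level cache keeps a
         * shared copy while handing the line to the upper level cache; when the upper
         * level cache replaces its privately held copy, the lower level cache upgrades
         * to exclusive ownership without any coherency messaging, since every cached
         * copy lived inside this vertical hierarchy. */
        #include <stdio.h>

        typedef enum { INVALID, SHARED, PRIVATE_SHARED_OWNED, EXCLUSIVE_OWNED } coh_state_t;

        typedef struct { coh_state_t upper, lower; } vertical_hierarchy_t;

        /* Lower level cache supplies the target line to the upper level cache. */
        static void supply_to_upper(vertical_hierarchy_t *h) {
            h->upper = PRIVATE_SHARED_OWNED; /* only copies are inside this hierarchy */
            h->lower = SHARED;               /* lower level retains a shared copy */
        }

        /* Upper level cache signals replacement (castout) of its copy. */
        static void upper_replacement(vertical_hierarchy_t *h) {
            if (h->upper == PRIVATE_SHARED_OWNED && h->lower == SHARED)
                h->lower = EXCLUSIVE_OWNED;  /* silent upgrade, no snoop traffic */
            h->upper = INVALID;
        }

        int main(void) {
            vertical_hierarchy_t h = { INVALID, SHARED };
            supply_to_upper(&h);
            upper_replacement(&h);
            printf("lower level state after replacement: %s\n",
                   h.lower == EXCLUSIVE_OWNED ? "EXCLUSIVE_OWNED" : "other");
            return 0;
        }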

53. Aggregate Data Processing System Having Multiple Overlapping Synthetic Computers
    Invention Application (In Force)

    Publication No.: US20110153943A1

    Publication Date: 2011-06-23

    Application No.: US12643800

    Filing Date: 2009-12-21

    IPC Classes: G06F12/00 G06F12/14 G06F12/08

    Abstract: A first SMP computer has first and second processing units and a first system memory pool, a second SMP computer has third and fourth processing units and a second system memory pool, and a third SMP computer has at least fifth and sixth processing units and third, fourth and fifth system memory pools. The fourth system memory pool is inaccessible to the third, fourth and sixth processing units and accessible to at least the second and fifth processing units, and the fifth system memory pool is inaccessible to the first, second and sixth processing units and accessible to at least the fourth and fifth processing units. A first interconnect couples the second processing unit for load-store coherent, ordered access to the fourth system memory pool, and a second interconnect couples the fourth processing unit for load-store coherent, ordered access to the fifth system memory pool.

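    The sketch below merely restates the pool-accessibility pattern from the abstract as a simple access matrix; entries the abstract leaves unspecified (for example whether the first processing unit can reach the fourth pool) are conservatively set to no access.

        /* Access matrix implied by the abstract: 1 = load-store coherent access.
         * Accesses the abstract leaves unspecified are set to 0 here. */
        #include <stdio.h>

        int main(void) {
            const char *pool_names[5] = { "pool1", "pool2", "pool3", "pool4", "pool5" };
            int access[6][5] = {
                /* pools: 1  2  3  4  5 */
                {         1, 0, 0, 0, 0 },  /* PU1, first SMP  */
                {         1, 0, 0, 1, 0 },  /* PU2, first SMP  */
                {         0, 1, 0, 0, 0 },  /* PU3, second SMP */
                {         0, 1, 0, 0, 1 },  /* PU4, second SMP */
                {         0, 0, 1, 1, 1 },  /* PU5, third SMP  */
                {         0, 0, 1, 0, 0 },  /* PU6, third SMP  */
            };

            for (int u = 0; u < 6; u++) {
                printf("PU%d:", u + 1);
                for (int p = 0; p < 5; p++)
                    if (access[u][p])
                        printf(" %s", pool_names[p]);
                printf("\n");
            }
            return 0;
        }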

54. Updating Partial Cache Lines in a Data Processing System
    Invention Application (In Force)

    Publication No.: US20100268884A1

    Publication Date: 2010-10-21

    Application No.: US12424434

    Filing Date: 2009-04-15

    IPC Classes: G06F12/08 G06F12/00

    Abstract: A processing unit for a data processing system includes a processor core having one or more execution units for processing instructions and a register file for storing data accessed in processing of the instructions. The processing unit also includes a multi-level cache hierarchy coupled to and supporting the processor core. The multi-level cache hierarchy includes at least one upper level of cache memory having a lower access latency and at least one lower level of cache memory having a higher access latency. The lower level of cache memory, responsive to receipt of a memory access request that hits only a partial cache line in the lower level cache memory, sources the partial cache line to the at least one upper level cache memory to service the memory access request. The at least one upper level cache memory services the memory access request without caching the partial cache line.

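    A rough sketch of the partial-line path described above, with invented helper and field names (sector_valid, cacheable_in_upper): when only part of the line is present in the lower level cache, the hit bytes are sourced upward with a do-not-cache hint so the upper level services the request without allocating the partial line.

        /* Sketch with invented names (sector_valid, cacheable_in_upper); it only
         * models the decision in the abstract, not the actual cache design. */
        #include <stdbool.h>
        #include <stdio.h>

        #define SECTORS 4
        #define SECTOR_BYTES 32

        typedef struct {
            bool valid;
            bool sector_valid[SECTORS];     /* which sectors of the line are present */
            unsigned char data[SECTORS * SECTOR_BYTES];
        } cache_line_t;

        typedef struct {
            const unsigned char *data;      /* bytes sourced to the upper level cache */
            int length;
            bool cacheable_in_upper;        /* false: service request, do not allocate */
        } supply_t;

        /* Lower level cache response to a demand request that hits this line. */
        static supply_t service_from_lower(const cache_line_t *line, int sector) {
            supply_t s = { NULL, 0, false };
            if (!line->valid || !line->sector_valid[sector])
                return s;                               /* miss: go to memory instead */
            s.data = &line->data[sector * SECTOR_BYTES];
            s.length = SECTOR_BYTES;
            bool full = true;                           /* is the whole line present? */
            for (int i = 0; i < SECTORS; i++)
                full = full && line->sector_valid[i];
            s.cacheable_in_upper = full;                /* partial line: do not cache */
            return s;
        }

        int main(void) {
            cache_line_t line = { .valid = true,
                                  .sector_valid = { true, false, true, false } };
            supply_t s = service_from_lower(&line, 0);
            printf("supplied %d bytes, cacheable in upper level: %s\n",
                   s.length, s.cacheable_in_upper ? "yes" : "no");
            return 0;
        }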

55. Reducing number of rejected snoop requests by extending time to respond to snoop request
    Invention Grant (Expired)

    Publication No.: US07818511B2

    Publication Date: 2010-10-19

    Application No.: US11847941

    Filing Date: 2007-08-30

    IPC Classes: G06F12/00

    CPC Classes: G06F12/0831

    Abstract: A cache, system and method for reducing the number of rejected snoop requests. A “stall/reorder unit” in a cache receives a snoop request from an interconnect. Information, such as the address, of the snoop request is stored in a queue of the stall/reorder unit. The stall/reorder unit forwards the snoop request to a selector which also receives a request from a processor. An arbitration mechanism selects either the snoop request or the request from the processor. If the snoop request is denied by the arbitration mechanism, information, e.g., address, about the snoop request may be maintained in the stall/reorder unit. The request may be later resent to the selector. This process may be repeated up to “n” clock cycles. By providing the snoop request additional opportunities (n clock cycles) to be accepted by the arbitration mechanism, fewer snoop requests may ultimately be denied.

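    A minimal sketch of the retry behavior described above (the same mechanism appears in entry 56), assuming a hypothetical arbiter_grants() stand-in for the arbitration between the snoop request and a processor request: a denied snoop stays in the stall/reorder unit and is re-presented for up to n cycles before it is finally rejected.

        /* arbiter_grants() is a hypothetical stand-in for the arbitration between the
         * snoop request and a processor request; only the retry loop is the point. */
        #include <stdbool.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define MAX_RETRY_CYCLES 8      /* the "n" clock cycles from the abstract */

        typedef struct { unsigned long address; } snoop_request_t;

        static bool arbiter_grants(const snoop_request_t *req) {
            (void)req;
            return (rand() & 3) == 0;   /* grant the snoop 25% of the time */
        }

        /* Stall/reorder behaviour: keep re-presenting a denied snoop request. */
        static bool present_snoop(const snoop_request_t *req) {
            for (int cycle = 0; cycle < MAX_RETRY_CYCLES; cycle++) {
                if (arbiter_grants(req)) {
                    printf("snoop 0x%lx accepted on cycle %d\n", req->address, cycle);
                    return true;
                }
            }
            printf("snoop 0x%lx rejected after %d cycles\n",
                   req->address, MAX_RETRY_CYCLES);
            return false;
        }

        int main(void) {
            snoop_request_t req = { 0x1000UL };
            present_snoop(&req);
            return 0;
        }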

56. Reducing number of rejected snoop requests by extending time to respond to snoop request
    Invention Grant (Expired)

    Publication No.: US07484046B2

    Publication Date: 2009-01-27

    Application No.: US11950717

    Filing Date: 2007-12-05

    IPC Classes: G06F12/00

    CPC Classes: G06F12/0831

    Abstract: A cache, system and method for reducing the number of rejected snoop requests. A “stall/reorder unit” in a cache receives a snoop request from an interconnect. Information, such as the address, of the snoop request is stored in a queue of the stall/reorder unit. The stall/reorder unit forwards the snoop request to a selector which also receives a request from a processor. An arbitration mechanism selects either the snoop request or the request from the processor. If the snoop request is denied by the arbitration mechanism, information, e.g., address, about the snoop request may be maintained in the stall/reorder unit. The request may be later resent to the selector. This process may be repeated up to “n” clock cycles. By providing the snoop request additional opportunities (n clock cycles) to be accepted by the arbitration mechanism, fewer snoop requests may ultimately be denied.


57. Data processing system, cache system and method for passively scrubbing a domain indication
    Invention Grant (Expired)

    Publication No.: US07478201B2

    Publication Date: 2009-01-13

    Application No.: US11136652

    Filing Date: 2005-05-24

    IPC Classes: G06F12/00

    CPC Classes: G06F12/0831

    Abstract: Scrubbing logic in a local coherency domain issues a domain query request to at least one cache hierarchy in a remote coherency domain. The domain query request is a non-destructive probe of a coherency state associated with a target memory block by the at least one cache hierarchy. A coherency response to the domain query request is received. In response to the coherency response indicating that the target memory block is not cached in the remote coherency domain, a domain indication in the local coherency domain is updated to indicate that the target memory block is cached, if at all, only within the local coherency domain.

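    A small sketch of the scrubbing rule described above, with an invented issue_domain_query() helper standing in for the coherency response: if the non-destructive probe reports that no remote cache hierarchy holds the block, the local domain indication is updated to local-only.

        /* issue_domain_query() is an invented helper standing in for the coherency
         * response; only the update rule from the abstract is modelled. */
        #include <stdbool.h>
        #include <stdio.h>

        typedef enum { GLOBAL, LOCAL_ONLY } domain_indication_t;

        /* Returns true if any remote cache hierarchy still holds the target block. */
        static bool issue_domain_query(unsigned long address) {
            (void)address;
            return false;   /* assume the remote domain no longer caches the block */
        }

        static void scrub(unsigned long address, domain_indication_t *indication) {
            if (*indication == GLOBAL && !issue_domain_query(address))
                *indication = LOCAL_ONLY;   /* cached, if at all, only locally */
        }

        int main(void) {
            domain_indication_t indication = GLOBAL;
            scrub(0x2000UL, &indication);
            printf("domain indication: %s\n",
                   indication == LOCAL_ONLY ? "local only" : "global");
            return 0;
        }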

58. Data processing system, processor and method of data processing in which local memory access requests are serviced by state machines with differing functionality
    Invention Grant (Expired)

    Publication No.: US07447845B2

    Publication Date: 2008-11-04

    Application No.: US11457333

    Filing Date: 2006-07-13

    IPC Classes: G06F12/00

    Abstract: A data processing system includes a local processor core and a cache memory coupled to the local processor core. The cache memory includes a data array, a directory of contents of the data array, at least one snoop machine that services memory access requests of a remote processor core, and multiple state machines that service memory access requests of the local processor core. The multiple state machines include a first state machine that has a first set of memory access requests of the local processor core that it is capable of servicing and a second state machine that has a different second set of memory access requests of the local processor core that it is capable of servicing.

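    An illustrative dispatch sketch; the particular split of request types between the two state machines is an assumption for the example, not taken from the claims. Each machine advertises the set of local memory access requests it can service, and the dispatcher picks an idle machine whose set covers the incoming request.

        /* The request-type split between the two machines is an assumption for the
         * example; the point is that each machine services a different set. */
        #include <stdio.h>

        typedef enum {
            REQ_LOAD = 1, REQ_STORE = 2, REQ_FLUSH = 4, REQ_PREFETCH = 8
        } req_type_t;

        typedef struct {
            const char *name;
            unsigned can_service;   /* bitmask of request types this machine handles */
            int busy;
        } state_machine_t;

        /* Dispatch a local core request to an idle machine able to service it. */
        static state_machine_t *dispatch(state_machine_t *m, int n, req_type_t t) {
            for (int i = 0; i < n; i++)
                if (!m[i].busy && (m[i].can_service & t))
                    return &m[i];
            return NULL;            /* no capable idle machine: retry later */
        }

        int main(void) {
            state_machine_t machines[2] = {
                { "full-function", REQ_LOAD | REQ_STORE | REQ_FLUSH | REQ_PREFETCH, 0 },
                { "load-store",    REQ_LOAD | REQ_STORE,                            0 },
            };
            state_machine_t *m = dispatch(machines, 2, REQ_FLUSH);
            printf("flush request dispatched to: %s\n", m ? m->name : "none");
            return 0;
        }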

59. Data Processing System, Processor and Method of Data Processing in which Local Memory Access Requests are Serviced on a Fixed Schedule
    Invention Application (Expired)

    Publication No.: US20080016278A1

    Publication Date: 2008-01-17

    Application No.: US11457322

    Filing Date: 2006-07-13

    IPC Classes: G06F12/00

    Abstract: A processing unit includes a local processor core and a cache memory coupled to the local processor core. The cache memory includes a data array and a directory of contents of the data array. The cache memory further includes one or more state machines that service a first set of memory access requests, an arbiter that directs servicing of a second set of memory access requests by reference to the data array and the directory on a fixed schedule, address collision logic that protects memory access requests in the second set by detecting and signaling address conflicts between active memory access requests in the second set and subsequent memory access requests, and dispatch logic coupled to the address collision logic. The dispatch logic dispatches memory access requests in the first set to the one or more state machines for servicing and signals the arbiter to direct servicing of memory access requests in the second set according to the fixed schedule.

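    A minimal sketch of the two servicing paths described above, with assumed names (fixed_slot_for, collides), an assumed four-cycle schedule, and an assumed 128-byte line size: second-set requests are granted the data array and directory on a fixed schedule, and address collision logic forces a retry of any subsequent request that conflicts with an active one.

        /* fixed_slot_for() and collides() are invented names; the four-cycle period
         * and the 128-byte line size are assumptions for the example. */
        #include <stdbool.h>
        #include <stdio.h>

        #define SCHEDULE_PERIOD 4   /* fixed-schedule requests get every 4th cycle */

        typedef struct { unsigned long address; bool active; } active_request_t;

        static bool fixed_slot_for(unsigned cycle) {
            return (cycle % SCHEDULE_PERIOD) == 0;
        }

        /* Address collision logic: same 128-byte line as an active request? */
        static bool collides(const active_request_t *a, unsigned long addr) {
            return a->active && (a->address >> 7) == (addr >> 7);
        }

        int main(void) {
            active_request_t active = { 0x1080UL, true };   /* in-flight, second set */
            unsigned long subsequent = 0x10C0UL;            /* same cache line */

            if (collides(&active, subsequent))
                printf("request 0x%lx retried: address collision\n", subsequent);

            for (unsigned cycle = 0; cycle < 6; cycle++)
                if (fixed_slot_for(cycle))
                    printf("cycle %u: fixed-schedule request serviced\n", cycle);
            return 0;
        }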

60. Selective cache-to-cache lateral castouts
    Invention Grant (In Force)

    Publication No.: US09189403B2

    Publication Date: 2015-11-17

    Application No.: US12650018

    Filing Date: 2009-12-30

    IPC Classes: G06F12/00 G06F12/08 G06F12/12

    CPC Classes: G06F12/0811 G06F12/12

    Abstract: A data processing system includes first and second processing units and a system memory. The first processing unit has first upper and first lower level caches, and the second processing unit has second upper and lower level caches. In response to a data request, a victim cache line to be castout from the first lower level cache is selected, and the first lower level cache selects between performing a lateral castout (LCO) of the victim cache line to the second lower level cache and a castout of the victim cache line to the system memory based upon a confidence indicator associated with the victim cache line. In response to selecting an LCO, the first processing unit issues an LCO command on the interconnect fabric and removes the victim cache line from the first lower level cache, and the second lower level cache holds the victim cache line.

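    An illustrative sketch of the castout decision; the confidence encoding and threshold are assumptions for the example. A victim line with a high confidence indicator is cast out laterally (LCO) to a peer lower level cache, otherwise it is written back to system memory.

        /* The confidence encoding and threshold are assumptions for the example. */
        #include <stdio.h>

        #define LCO_CONFIDENCE_THRESHOLD 2

        typedef enum { CASTOUT_TO_MEMORY, LATERAL_CASTOUT } castout_target_t;

        typedef struct { unsigned long address; int confidence; } victim_line_t;

        /* Decide where the selected victim line goes when it is cast out. */
        static castout_target_t choose_castout(const victim_line_t *v) {
            return (v->confidence >= LCO_CONFIDENCE_THRESHOLD)
                ? LATERAL_CASTOUT       /* issue an LCO command on the fabric */
                : CASTOUT_TO_MEMORY;    /* write back to system memory */
        }

        int main(void) {
            victim_line_t victim = { 0x4000UL, 3 };
            printf("victim 0x%lx -> %s\n", victim.address,
                   choose_castout(&victim) == LATERAL_CASTOUT
                       ? "lateral castout to a peer lower level cache"
                       : "castout to system memory");
            return 0;
        }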