Data cache block deallocate requests
    3.
    Granted patent
    Data cache block deallocate requests (In force)

    Publication No.: US08856455B2

    Publication Date: 2014-10-07

    Application No.: US13433022

    Filing Date: 2012-03-28

    IPC Classes: G06F12/02 G06F12/08

    Abstract: A data processing system includes a processor core supported by upper and lower level caches. In response to executing a deallocate instruction in the processor core, a deallocation request is sent from the processor core to the lower level cache, the deallocation request specifying a target address associated with a target cache line. In response to receipt of the deallocation request at the lower level cache, a determination is made whether the target address hits in the lower level cache. In response to determining that the target address hits in the lower level cache, the target cache line is retained in a data array of the lower level cache and a replacement order field in a directory of the lower level cache is updated such that the target cache line is more likely to be evicted from the lower level cache in response to a subsequent cache miss.

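    To make the mechanism in the abstract concrete, the following C++ sketch models a single congruence class of a lower level cache whose directory carries a per-way replacement order field. The names (DirectoryEntry, deallocateHint, lruRank) and the 8-way geometry are assumptions made for illustration, not the patented implementation: on a deallocation request that hits, the data array is left untouched and only the replacement order is updated so the line becomes the preferred victim on the next miss.

        #include <array>
        #include <cstdint>
        #include <iostream>

        constexpr int WAYS = 8;                 // assumed associativity

        struct DirectoryEntry {
            uint64_t tag = 0;
            bool     valid = false;
            int      lruRank = 0;               // 0 = most recently used, WAYS-1 = next victim
        };

        struct CongruenceClass {
            std::array<DirectoryEntry, WAYS> ways{};

            // Demote 'way' to the least-recently-used position; lines behind it move up one.
            void demoteToLru(int way) {
                int old = ways[way].lruRank;
                for (auto& e : ways)
                    if (e.lruRank > old) --e.lruRank;
                ways[way].lruRank = WAYS - 1;
            }

            // Deallocation request from the core: on a hit, retain the line's data but
            // update the replacement order field so the line is evicted preferentially.
            bool deallocateHint(uint64_t tag) {
                for (int w = 0; w < WAYS; ++w) {
                    if (ways[w].valid && ways[w].tag == tag) {
                        demoteToLru(w);         // directory update only; data array untouched
                        return true;            // hit
                    }
                }
                return false;                   // miss: the request is simply discarded
            }
        };

        int main() {
            CongruenceClass cc;
            for (int w = 0; w < WAYS; ++w)
                cc.ways[w] = {0x100u + w, true, w};
            cc.deallocateHint(0x100);           // line 0x100 becomes the next victim
            std::cout << "lruRank of 0x100: " << cc.ways[0].lruRank << "\n";   // prints 7
        }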

    Data cache block deallocate requests in a multi-level cache hierarchy
    4.
    Granted patent
    Data cache block deallocate requests in a multi-level cache hierarchy (In force)

    Publication No.: US08874852B2

    Publication Date: 2014-10-28

    Application No.: US13433048

    Filing Date: 2012-03-28

    IPC Classes: G06F12/08

    Abstract: In response to executing a deallocate instruction, a deallocation request specifying a target address of a target cache line is sent from a processor core to a lower level cache. In response, a determination is made whether the target address hits in the lower level cache. If so, the target cache line is retained in a data array of the lower level cache, and a replacement order field of the lower level cache is updated such that the target cache line is more likely to be evicted in response to a subsequent cache miss in a congruence class including the target cache line. In response to the subsequent cache miss, the target cache line is cast out to the lower level cache with an indication that the target cache line was a target of a previous deallocation request of the processor core.

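    The second half of this abstract, where the eventual castout carries a record of the earlier deallocation request down to the next cache level, can be sketched as below. The CastoutRequest and L3Set names, the deque-based LRU ordering, and the choice of pushing a flagged line straight to the victim position are assumptions made for illustration rather than the patent's actual structures.

        #include <cstdint>
        #include <deque>
        #include <iostream>

        struct CastoutRequest {
            uint64_t address;
            bool     dirty;
            bool     wasDeallocated;     // set if the core had earlier deallocated this line
        };

        // Toy lower-level (e.g. L3) congruence class modelled as an LRU queue:
        // the front of the queue is the next victim.
        class L3Set {
            std::deque<uint64_t> lruQueue;
        public:
            void installCastout(const CastoutRequest& co) {
                if (co.wasDeallocated)
                    lruQueue.push_front(co.address);   // install as the preferred victim
                else
                    lruQueue.push_back(co.address);    // normal most-recently-used insertion
            }
            uint64_t nextVictim() const { return lruQueue.front(); }
        };

        int main() {
            L3Set set;
            set.installCastout({0x2000, true, false});   // ordinary castout
            set.installCastout({0x3000, true, true});    // castout of a deallocated line
            std::cout << std::hex << "next victim: 0x" << set.nextVictim() << "\n";   // 0x3000
        }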

    DATA CACHE BLOCK DEALLOCATE REQUESTS
    5.
    Patent application
    DATA CACHE BLOCK DEALLOCATE REQUESTS (In force)

    Publication No.: US20130262777A1

    Publication Date: 2013-10-03

    Application No.: US13433022

    Filing Date: 2012-03-28

    IPC Classes: G06F12/12

    Abstract: A data processing system includes a processor core supported by upper and lower level caches. In response to executing a deallocate instruction in the processor core, a deallocation request is sent from the processor core to the lower level cache, the deallocation request specifying a target address associated with a target cache line. In response to receipt of the deallocation request at the lower level cache, a determination is made whether the target address hits in the lower level cache. In response to determining that the target address hits in the lower level cache, the target cache line is retained in a data array of the lower level cache and a replacement order field in a directory of the lower level cache is updated such that the target cache line is more likely to be evicted from the lower level cache in response to a subsequent cache miss.

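    Viewed from the software side, the deallocate instruction described here is a hint a program can issue once it has finished with a buffer. The sketch below assumes a hypothetical stand-in function dcb_deallocate_hint() and a 128-byte line size purely for illustration; neither the name nor the size is taken from the filing, and the stub body is a no-op so the example compiles.

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        constexpr std::size_t CACHE_LINE = 128;                // assumed line size

        // Hypothetical stand-in for the deallocate instruction; a real core would
        // emit the instruction here, sending a deallocation request to the lower
        // level cache for the line containing 'addr'.
        inline void dcb_deallocate_hint(const void* addr) { (void)addr; }

        // Hint every cache line of a buffer the program no longer needs, so the
        // cache can evict those lines early without a flush or invalidation.
        void releaseBuffer(const std::uint8_t* buf, std::size_t bytes) {
            for (std::size_t off = 0; off < bytes; off += CACHE_LINE)
                dcb_deallocate_hint(buf + off);
        }

        int main() {
            std::vector<std::uint8_t> scratch(4096);
            // ... use scratch ...
            releaseBuffer(scratch.data(), scratch.size());     // done with it
        }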

    DATA CACHE BLOCK DEALLOCATE REQUESTS IN A MULTI-LEVEL CACHE HIERARCHY
    6.
    Patent application
    DATA CACHE BLOCK DEALLOCATE REQUESTS IN A MULTI-LEVEL CACHE HIERARCHY (In force)

    Publication No.: US20130262778A1

    Publication Date: 2013-10-03

    Application No.: US13433048

    Filing Date: 2012-03-28

    IPC Classes: G06F12/12

    Abstract: In response to executing a deallocate instruction, a deallocation request specifying a target address of a target cache line is sent from a processor core to a lower level cache. In response, a determination is made whether the target address hits in the lower level cache. If so, the target cache line is retained in a data array of the lower level cache, and a replacement order field of the lower level cache is updated such that the target cache line is more likely to be evicted in response to a subsequent cache miss in a congruence class including the target cache line. In response to the subsequent cache miss, the target cache line is cast out to the lower level cache with an indication that the target cache line was a target of a previous deallocation request of the processor core.

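    Complementing the earlier sketches, the fragment below illustrates the victim-selection step implied by this abstract: on a miss in the congruence class, the line sitting at the tail of the replacement order is chosen, and if it had been marked by a prior deallocation request, that indication travels with the castout. The Line and Castout types, the deallocHinted flag, and the rank encoding are illustrative assumptions.

        #include <array>
        #include <cstdint>
        #include <iostream>
        #include <optional>

        constexpr int WAYS = 8;                  // assumed associativity

        struct Line {
            uint64_t tag = 0;
            bool valid = false;
            bool dirty = false;
            bool deallocHinted = false;          // set when a deallocation request hit this line
            int  lruRank = 0;                    // WAYS-1 = next victim
        };

        struct Castout {
            uint64_t tag;
            bool dirty;
            bool wasDeallocated;                 // forwarded to the next lower cache level
        };

        // On a miss, pick the victim way and build the castout message for it.
        std::optional<Castout> selectVictim(std::array<Line, WAYS>& set) {
            for (auto& line : set) {
                if (line.valid && line.lruRank == WAYS - 1) {
                    Castout co{line.tag, line.dirty, line.deallocHinted};
                    line.valid = false;          // the line leaves this cache level
                    return co;
                }
            }
            return std::nullopt;                 // victim position holds no valid line: no castout needed
        }

        int main() {
            std::array<Line, WAYS> set{};
            set[3] = {0x3000, true, true, true, WAYS - 1};   // previously deallocated line
            auto co = selectVictim(set);
            if (co)
                std::cout << std::boolalpha << "castout flagged: " << co->wasDeallocated << "\n";
        }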

    Lateral Castout (LCO) Of Victim Cache Line In Data-Invalid State
    7.
    Patent application
    Lateral Castout (LCO) Of Victim Cache Line In Data-Invalid State (In force)

    Publication No.: US20100235584A1

    Publication Date: 2010-09-16

    Application No.: US12402025

    Filing Date: 2009-03-11

    IPC Classes: G06F12/08 G06F12/00

    Abstract: A victim cache line having a data-invalid coherence state is selected for castout from a first lower level cache of a first processing unit. The first processing unit issues on an interconnect fabric a lateral castout (LCO) command identifying the victim cache line to be castout from the first lower level cache, indicating the data-invalid coherence state, and indicating that a lower level cache is an intended destination of the victim cache line. In response to a coherence response to the LCO command indicating success of the LCO command, the victim cache line is removed from the first lower level cache and held in a second lower level cache of a second processing unit in the data-invalid coherence state.

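    A rough software analogue of the lateral castout (LCO) described above is sketched below: the victim's coherence state is data-invalid (only the tag and state are meaningful), so no data moves at all, and the entry is transferred from one processing unit's L3 directory to another's only if the fabric's response indicates success. The CohState names, the unordered_map directory, and the room-based acceptance rule are assumptions for illustration.

        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        enum class CohState { Invalid, DataInvalid };     // DataInvalid: tag/state retained, no data

        struct L3Cache {
            int id;
            std::unordered_map<uint64_t, CohState> dir;   // address -> coherence state
        };

        // Stand-in for the interconnect fabric's combined response to the LCO command.
        bool fabricAcceptsLco(const L3Cache& dest) {
            return dest.dir.size() < 4;                   // assumed rule: accept while the destination has room
        }

        // Lateral castout of a data-invalid victim from 'src' to a peer L3 'dest'.
        bool lateralCastout(L3Cache& src, L3Cache& dest, uint64_t victim) {
            auto it = src.dir.find(victim);
            if (it == src.dir.end() || it->second != CohState::DataInvalid)
                return false;                             // only data-invalid victims in this sketch
            if (!fabricAcceptsLco(dest))
                return false;                             // coherence response was not success
            dest.dir[victim] = CohState::DataInvalid;     // held at the destination, still data-invalid
            src.dir.erase(it);                            // removed from the source L3
            return true;
        }

        int main() {
            L3Cache l3a{0, {{0x40, CohState::DataInvalid}}};
            L3Cache l3b{1, {}};
            std::cout << (lateralCastout(l3a, l3b, 0x40) ? "LCO succeeded\n" : "LCO failed\n");
        }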

    Lateral castout (LCO) of victim cache line in data-invalid state
    8.
    Granted patent
    Lateral castout (LCO) of victim cache line in data-invalid state (In force)

    Publication No.: US08949540B2

    Publication Date: 2015-02-03

    Application No.: US12402025

    Filing Date: 2009-03-11

    IPC Classes: G06F12/02 G06F12/08 G06F12/12

    Abstract: A victim cache line having a data-invalid coherence state is selected for castout from a first lower level cache of a first processing unit. The first processing unit issues on an interconnect fabric a lateral castout (LCO) command identifying the victim cache line to be castout from the first lower level cache, indicating the data-invalid coherence state, and indicating that a lower level cache is an intended destination of the victim cache line. In response to a coherence response to the LCO command indicating success of the LCO command, the victim cache line is removed from the first lower level cache and held in a second lower level cache of a second processing unit in the data-invalid coherence state.

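    The granted counterpart of the same disclosure lends itself to a second, protocol-level view: the issuing cache cannot drop its entry until the coherence response comes back, and an unsuccessful response here simply causes the LCO command to be reissued. The CohResponse and LcoCommand names, the std::function fabric stand-in, and the retry budget are all assumptions of this sketch, not details from the claims.

        #include <cstdint>
        #include <functional>
        #include <iostream>

        enum class CohResponse { Success, Retry };

        struct LcoCommand {
            uint64_t address;
            int      sourceCacheId;
            bool     dataInvalid;        // the victim carries no valid data, only tag/state
        };

        // Keep reissuing the LCO until the fabric reports success or the budget runs out;
        // only on success may the source directory entry actually be freed.
        bool issueLco(const LcoCommand& cmd,
                      const std::function<CohResponse(const LcoCommand&)>& fabric,
                      int maxRetries = 8) {
            for (int attempt = 0; attempt <= maxRetries; ++attempt)
                if (fabric(cmd) == CohResponse::Success)
                    return true;
            return false;                // give up; the victim is invalidated locally instead
        }

        int main() {
            int calls = 0;
            auto fabric = [&](const LcoCommand&) {       // toy fabric: succeeds on the third try
                return ++calls < 3 ? CohResponse::Retry : CohResponse::Success;
            };
            LcoCommand cmd{0x80, /*sourceCacheId=*/0, /*dataInvalid=*/true};
            std::cout << (issueLco(cmd, fabric) ? "LCO accepted\n" : "LCO abandoned\n");
        }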

    REDUCING NUMBER OF REJECTED SNOOP REQUESTS BY EXTENDING TIME TO RESPOND TO SNOOP REQUEST
    9.
    Patent application
    REDUCING NUMBER OF REJECTED SNOOP REQUESTS BY EXTENDING TIME TO RESPOND TO SNOOP REQUEST (In force)

    Publication No.: US20080201533A1

    Publication Date: 2008-08-21

    Application No.: US12114790

    Filing Date: 2008-05-04

    IPC Classes: G06F12/08

    CPC Classes: G06F12/0831

    Abstract: A cache, system and method for reducing the number of rejected snoop requests. An incoming snoop request is entered in the first available latch in a pipeline of latches in a stall/reorder unit if the stall/reorder unit is not full. The entered snoop request is dispatched to a selector upon entering a bottom latch in the pipeline. For several clock cycles after the dispatch occurs, the stall/reorder unit is not informed whether the dispatched snoop request has been accepted by an arbitration mechanism. Upon dispatching the snoop request, a copy of it is stored in a top latch of an overrun pipeline of latches in the stall/reorder unit. Because this information is retained, the snoop request may be dispatched to the selector again if the earlier dispatch was rejected, increasing the chance that the snoop request will ultimately be accepted.

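    The stall/reorder behaviour in this abstract can be mimicked in software as below: a bounded queue accepts incoming snoop requests, dispatches from its head to the arbiter while keeping a copy in an overrun buffer (because acceptance is only known some cycles later), and re-dispatches any request whose earlier dispatch turns out to have been rejected. The class and field names, the deque-based "latches", and the ttag matching are illustrative assumptions rather than the hardware design.

        #include <cstdint>
        #include <deque>
        #include <iostream>

        struct SnoopRequest { uint64_t address; int ttag; };

        class StallReorderUnit {
            std::deque<SnoopRequest> pipeline;   // latches: front = bottom latch
            std::deque<SnoopRequest> overrun;    // copies of dispatched, unresolved requests
            std::size_t capacity;
        public:
            explicit StallReorderUnit(std::size_t cap) : capacity(cap) {}

            bool accept(const SnoopRequest& req) {           // new snoop from the fabric
                if (pipeline.size() + overrun.size() >= capacity)
                    return false;                            // unit full: snoop is rejected
                pipeline.push_back(req);
                return true;
            }

            // Dispatch the bottom-latch request to the arbiter, keeping a copy.
            bool dispatch(SnoopRequest& out) {
                if (pipeline.empty()) return false;
                out = pipeline.front();
                pipeline.pop_front();
                overrun.push_back(out);                      // remember it until resolved
                return true;
            }

            // The arbiter's verdict arrives several cycles after the dispatch.
            void resolve(int ttag, bool accepted) {
                for (auto it = overrun.begin(); it != overrun.end(); ++it) {
                    if (it->ttag == ttag) {
                        if (!accepted) pipeline.push_front(*it);   // retry instead of rejecting
                        overrun.erase(it);
                        return;
                    }
                }
            }
        };

        int main() {
            StallReorderUnit unit(4);
            unit.accept({0x1000, 1});
            SnoopRequest r{};
            unit.dispatch(r);
            unit.resolve(1, /*accepted=*/false);   // rejected: will be dispatched again
            std::cout << (unit.dispatch(r) ? "re-dispatched\n" : "lost\n");
        }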

    Reducing number of rejected snoop requests by extending time to respond to snoop request
    10.
    Granted patent
    Reducing number of rejected snoop requests by extending time to respond to snoop request (In force)

    Publication No.: US07386682B2

    Publication Date: 2008-06-10

    Application No.: US11056764

    Filing Date: 2005-02-11

    IPC Classes: G06F12/00

    CPC Classes: G06F12/0831

    Abstract: A cache, system and method for reducing the number of rejected snoop requests. An incoming snoop request is entered in the first available latch in a pipeline of latches in a stall/reorder unit if the stall/reorder unit is not full. The entered snoop request is dispatched to a selector upon entering a bottom latch in the pipeline. For several clock cycles after the dispatch occurs, the stall/reorder unit is not informed whether the dispatched snoop request has been accepted by an arbitration mechanism. Upon dispatching the snoop request, a copy of it is stored in a top latch of an overrun pipeline of latches in the stall/reorder unit. Because this information is retained, the snoop request may be dispatched to the selector again if the earlier dispatch was rejected, increasing the chance that the snoop request will ultimately be accepted.

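    As a final illustration of why re-dispatching helps, the toy simulation below gives the arbiter a small pool of snoop machines; a dispatch is rejected whenever none is free, and retrying locally from the stall/reorder unit turns what would otherwise be a fabric-visible rejection into a little extra latency. The two-machine pool, the cycle model, and the freeing rule are invented for the example.

        #include <bitset>
        #include <iostream>

        constexpr int SNOOP_MACHINES = 2;       // assumed pool size

        class Arbiter {
            std::bitset<SNOOP_MACHINES> busy;
        public:
            bool tryAllocate() {                // one dispatch attempt from the stall/reorder unit
                for (int m = 0; m < SNOOP_MACHINES; ++m)
                    if (!busy.test(m)) { busy.set(m); return true; }
                return false;                   // all machines busy: this dispatch is rejected
            }
            void tick() { busy.reset(); }       // assume machines free up every few cycles
        };

        int main() {
            Arbiter arb;
            int pending = 5, served = 0, redispatches = 0, cycles = 0;
            while (pending > 0) {               // keep retrying instead of dropping the snoop
                if (arb.tryAllocate()) { --pending; ++served; }
                else                   { ++redispatches; }
                if (++cycles % 4 == 0) arb.tick();
            }
            std::cout << "snoops served: " << served
                      << ", local re-dispatches: " << redispatches
                      << ", cycles taken: " << cycles << "\n";
        }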