Data caching on bridge following disconnect
    1.
    Granted Patent
    Data caching on bridge following disconnect (In force)

    Publication No.: US06973528B2

    Publication Date: 2005-12-06

    Application No.: US10153041

    Filing Date: 2002-05-22

    IPC Class: G06F12/08 G06F13/40 G06F13/00

    Abstract: To prevent performance degradation when dealing with target devices that can transfer only a limited number of bytes before disconnecting, the invention implements a short-term data cache on the bridge. Using this feature, the bridge caches additional data beyond a predetermined quantity of data following a disconnect with the requesting device. The bridge may thus continue to prefetch additional data up to an amount specified by a prefetch read byte count and return that data should the requesting device request additional data resuming at the point of disconnect. However, the bridge discards the additional data when at least one of the following occurs: a) the requesting device disconnects the data transfer, or b) a further READ request that resumes at the point of disconnect is not received within a predetermined time.
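    A minimal C sketch of the caching policy described above. The structure and function names, the prefetch read byte count, and the tick-based timeout are illustrative assumptions, not values taken from the patent.

    /* Short-term bridge cache: keep extra prefetched data after a target
       disconnect, return it only to a READ that resumes at the disconnect
       point within the timeout, otherwise discard it. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define CACHE_BYTES   512        /* assumed prefetch read byte count      */
    #define TIMEOUT_TICKS 1000       /* discard window after a disconnect     */

    struct bridge_cache {
        uint8_t  data[CACHE_BYTES];  /* prefetched bytes beyond the request   */
        uint64_t resume_addr;        /* address at the point of disconnect    */
        uint32_t valid_len;          /* number of cached bytes                */
        uint64_t disconnect_tick;    /* time the transfer disconnected        */
        bool     valid;
    };

    /* Target disconnected mid-transfer: keep the extra prefetched data. */
    void cache_on_disconnect(struct bridge_cache *c, uint64_t resume_addr,
                             const uint8_t *extra, uint32_t len, uint64_t now)
    {
        if (len > CACHE_BYTES)
            len = CACHE_BYTES;
        memcpy(c->data, extra, len);
        c->resume_addr     = resume_addr;
        c->valid_len       = len;
        c->disconnect_tick = now;
        c->valid           = true;
    }

    /* New READ from the requester: return the number of cached bytes only if
       the request resumes exactly at the disconnect point and arrives within
       the timeout; otherwise discard the cached data. */
    int32_t cache_lookup(struct bridge_cache *c, uint64_t addr, uint64_t now)
    {
        if (!c->valid)
            return -1;
        if (now - c->disconnect_tick > TIMEOUT_TICKS || addr != c->resume_addr) {
            c->valid = false;        /* late or non-resuming request          */
            return -1;
        }
        return (int32_t)c->valid_len;
    }

    /* The requesting device itself disconnected: discard the cached data. */
    void cache_on_requester_disconnect(struct bridge_cache *c)
    {
        c->valid = false;
    }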


    Opaque memory region for I/O adapter transparent bridge
    2.
    Granted Patent
    Opaque memory region for I/O adapter transparent bridge (In force)

    Publication No.: US06968415B2

    Publication Date: 2005-11-22

    Application No.: US10113299

    Filing Date: 2002-03-29

    IPC Class: G06F13/36 G06F13/40

    CPC Class: G06F13/4059

    Abstract: An opaque memory region for a bridge of an I/O adapter. The opaque memory region is inaccessible to memory transactions that traverse the bridge in either direction, from the primary bus to the secondary bus or from the secondary bus to the primary bus. As a result, memory transactions that target the opaque memory region are ignored by the bridge, allowing the same address to exist on both sides of the bridge with different data stored on each side. The opaque memory region provides a means to complete memory transactions within I/O adapter subsystem memory, thereby relieving host computer system memory resources. In addition, a number of I/O adapters can be used in a host computer system in which the host and all of the I/O devices use some of the same memory addresses.
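    A minimal C sketch of the forwarding decision implied by the abstract; the struct layout and the function name are assumptions for illustration, not the patent's implementation.

    /* Decide whether a memory transaction crossing the bridge is forwarded. */
    #include <stdbool.h>
    #include <stdint.h>

    struct opaque_region {
        uint64_t base;    /* start of the opaque window  */
        uint64_t size;    /* length of the window, bytes */
    };

    /* A memory transaction crossing the bridge in either direction. */
    struct mem_txn {
        uint64_t addr;
        bool     primary_to_secondary;
    };

    /* Returns true if the bridge should forward the transaction to the other
       bus. Transactions that target the opaque region are ignored, so the
       same address can hold different data on each side of the bridge. */
    bool bridge_should_forward(const struct opaque_region *r,
                               const struct mem_txn *t)
    {
        bool in_opaque = t->addr >= r->base && t->addr < r->base + r->size;
        (void)t->primary_to_secondary;  /* the rule applies in both directions */
        return !in_opaque;
    }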


    Concurrent Refresh In Cache Memory
    5.
    Patent Application
    Concurrent Refresh In Cache Memory (Expired)

    Publication No.: US20110320700A1

    Publication Date: 2011-12-29

    Application No.: US12822364

    Filing Date: 2010-06-24

    IPC Class: G06F12/06

    CPC Class: G06F12/0846 G06F12/0893

    Abstract: Concurrent refresh in a cache memory includes calculating a refresh time interval at a centralized refresh controller that is common to all cache memory banks of the cache memory; transmitting a starting time of the refresh time interval to a bank controller that is local to, and associated with, only one cache memory bank; sampling a continuous refresh status indicative of the number of refreshes necessary to maintain data within the cache memory bank associated with the bank controller; requesting a gap in the processing pipeline of the cache memory to accommodate the necessary refreshes; receiving a refresh grant in response to the request; and transmitting an encoded refresh command to the bank controller, the encoded refresh command indicating the number of refresh operations granted to that cache memory bank.
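    A minimal C sketch of the handshake the abstract walks through (sample the bank's refresh status, request a pipeline gap, send an encoded grant); pipeline_request_gap and send_refresh_command are hypothetical stand-ins for hardware behavior.

    #include <stdint.h>

    struct bank_controller {
        uint32_t refreshes_needed;   /* continuous refresh status of the bank */
        uint32_t refreshes_granted;  /* last encoded refresh command received */
    };

    /* Placeholder pipeline model: grant every requested slot. Real hardware
       would wait for an idle gap in the cache's processing pipeline. */
    uint32_t pipeline_request_gap(uint32_t slots)
    {
        return slots;
    }

    /* Placeholder for the encoded refresh command sent to one bank controller. */
    void send_refresh_command(struct bank_controller *bank, uint32_t granted)
    {
        bank->refreshes_granted = granted;
        bank->refreshes_needed  = (granted >= bank->refreshes_needed)
                                      ? 0 : bank->refreshes_needed - granted;
    }

    /* One refresh interval handled by the centralized refresh controller on
       behalf of a single bank controller. */
    void central_refresh_tick(struct bank_controller *bank)
    {
        uint32_t needed = bank->refreshes_needed;         /* sample status     */
        if (needed == 0)
            return;
        uint32_t granted = pipeline_request_gap(needed);  /* request a gap     */
        if (granted > 0)
            send_refresh_command(bank, granted);          /* grant to the bank */
    }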


    System and method for interrupt command queuing and ordering
    6.
    Granted Patent
    System and method for interrupt command queuing and ordering (In force)

    Publication No.: US06442634B2

    Publication Date: 2002-08-27

    Application No.: US09860309

    Filing Date: 2001-05-18

    IPC Class: G06F13/00

    CPC Class: G06F13/4027

    Abstract: An input/output bus bridge and command queuing system includes an external interrupt router that receives interrupt commands from bus unit controllers (BUCs) and responds with end-of-interrupt (EOI), interrupt return (INR), and interrupt reissue (IRR) commands. The interrupt router includes a first command queue for ordering EOI commands and a second command queue for ordering INR and IRR commands. A first-in, first-out (FIFO) command queue orders bus memory-mapped input/output (MMIO) commands. EOI commands are directed from the first command queue to the input of the FIFO command queue. The EOI and MMIO commands are directed from the FIFO command queue to an input/output bus, while the INR and IRR commands are directed from the second command queue to the input/output bus. In this way, strict ordering of EOI commands relative to MMIO accesses is maintained while INR and IRR commands are allowed to bypass enqueued MMIO accesses.
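    A minimal C sketch of the two-path ordering described above. The queue depth and names are assumptions, and the separate first command queue for EOIs is collapsed into a direct push into the MMIO FIFO for brevity.

    enum cmd_type { CMD_MMIO, CMD_EOI, CMD_INR, CMD_IRR };

    #define QDEPTH 16

    struct queue {
        enum cmd_type buf[QDEPTH];
        int head, tail;
    };

    static void q_push(struct queue *q, enum cmd_type c)
    {
        q->buf[q->tail++ % QDEPTH] = c;  /* sketch: no overflow handling */
    }

    static int q_pop(struct queue *q, enum cmd_type *c)
    {
        if (q->head == q->tail)
            return 0;                    /* queue empty */
        *c = q->buf[q->head++ % QDEPTH];
        return 1;
    }

    struct queue mmio_fifo;              /* strict-order path: MMIO and EOI */
    struct queue bypass_queue;           /* fast path: INR and IRR          */

    /* Route an incoming command from a bus unit controller (BUC). */
    void route_command(enum cmd_type c)
    {
        if (c == CMD_INR || c == CMD_IRR)
            q_push(&bypass_queue, c);    /* may bypass enqueued MMIO accesses */
        else
            q_push(&mmio_fifo, c);       /* EOI stays ordered with MMIO       */
    }

    /* Issue one command to the input/output bus, preferring the bypass path. */
    int issue_to_bus(enum cmd_type *c)
    {
        if (q_pop(&bypass_queue, c))
            return 1;
        return q_pop(&mmio_fifo, c);
    }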


    Dynamic cache correction mechanism to allow constant access to addressable index
    7.
    Granted Patent
    Dynamic cache correction mechanism to allow constant access to addressable index (Expired)

    Publication No.: US08719618B2

    Publication Date: 2014-05-06

    Application No.: US13495174

    Filing Date: 2012-06-13

    IPC Class: G06F11/00

    Abstract: A technique is provided for a cache. A cache controller accesses a set in a congruence class and determines, based on an error being found, that the set contains corrupted data. The cache controller determines that a delete parameter for taking the set offline is met, but also that the number of currently offline sets in the congruence class is higher than an allowable offline number threshold. Based on that determination, the cache controller decides not to take the set in which the error was found offline.
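    A minimal C sketch of the offline-set decision; the set count, threshold value, and names are assumptions chosen only to illustrate the rule that a corrupted set stays addressable once too many sets in its congruence class are already offline.

    #include <stdbool.h>
    #include <stdint.h>

    #define SETS_PER_CLASS    8
    #define OFFLINE_THRESHOLD 2   /* allowable offline number threshold */

    struct congruence_class {
        bool set_offline[SETS_PER_CLASS];
    };

    static uint32_t offline_count(const struct congruence_class *cc)
    {
        uint32_t n = 0;
        for (int i = 0; i < SETS_PER_CLASS; i++)
            if (cc->set_offline[i])
                n++;
        return n;
    }

    /* Called when corrupted data is found in `set`. Returns true if the set
       was taken offline, false if it must remain addressable despite the
       error because too many sets in the class are already offline. */
    bool maybe_delete_set(struct congruence_class *cc, int set,
                          bool delete_parameter_met)
    {
        if (!delete_parameter_met)
            return false;
        if (offline_count(cc) > OFFLINE_THRESHOLD)
            return false;               /* keep the set online            */
        cc->set_offline[set] = true;    /* take the corrupted set offline */
        return true;
    }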


    Optimizing EDRAM refresh rates in a high performance cache architecture
    9.
    Granted Patent
    Optimizing EDRAM refresh rates in a high performance cache architecture (Expired)

    Publication No.: US08560767B2

    Publication Date: 2013-10-15

    Application No.: US13546687

    Filing Date: 2012-07-11

    IPC Class: G06F12/00

    CPC Class: G06F12/0897 G06F12/0855

    Abstract: Embodiments relate to embedded Dynamic Random Access Memory (eDRAM) refresh rates in a high-performance cache architecture. An aspect includes receiving a plurality of first signals. A refresh request is transmitted via a refresh requestor to a cache memory at a first refresh rate whose interval includes a subset of the first signals; the first refresh rate corresponds to a maximum refresh rate. A refresh counter is reset upon receiving a second signal and is incremented after each of a number of refresh requests is received. Upon receiving a third signal, the current count is transmitted from the refresh counter to the refresh requestor. The refresh request is then transmitted at a second refresh rate, lower than the first refresh rate, based on receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold.
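    A minimal C sketch of the rate-selection logic suggested by the abstract; the interval values, the threshold, and the signal handlers are illustrative assumptions rather than the patented circuit.

    #include <stdint.h>

    #define FIRST_RATE_INTERVAL   8   /* cycles between requests, maximum rate */
    #define SECOND_RATE_INTERVAL 32   /* slower second rate                    */
    #define REFRESH_THRESHOLD     4

    struct refresh_requestor {
        uint32_t interval;         /* cycles between refresh requests          */
        uint32_t refresh_counter;  /* incremented per refresh request received */
    };

    /* Second signal: reset the refresh counter. */
    void on_second_signal(struct refresh_requestor *r)
    {
        r->refresh_counter = 0;
    }

    /* A refresh request was received: increment the refresh counter. */
    void on_refresh_request(struct refresh_requestor *r)
    {
        r->refresh_counter++;
    }

    /* Third signal: the counter's current count is reported to the requestor,
       which drops to the slower second rate once the count passes the
       threshold, and otherwise stays at the maximum (first) rate. */
    void on_third_signal(struct refresh_requestor *r)
    {
        if (r->refresh_counter > REFRESH_THRESHOLD)
            r->interval = SECOND_RATE_INTERVAL;
        else
            r->interval = FIRST_RATE_INTERVAL;
    }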


    CENTRALIZED SERIALIZATION OF REQUESTS IN A MULTIPROCESSOR SYSTEM
    10.
    Patent Application
    CENTRALIZED SERIALIZATION OF REQUESTS IN A MULTIPROCESSOR SYSTEM (Expired)

    Publication No.: US20110320778A1

    Publication Date: 2011-12-29

    Application No.: US12821933

    Filing Date: 2010-06-23

    IPC Class: G06F9/30

    CPC Class: G06F9/526 G06F2209/522

    Abstract: Serializing instructions in a multiprocessor system includes receiving a plurality of processor requests at a central point in the multiprocessor system. Each of the processor requests includes a needs register having a requestor needs switch and a resource needs switch. The method also includes establishing a tail switch indicating the presence of the processor requests at the central point, establishing a sequential order of the processor requests, and processing the processor requests at the central point in that sequential order.
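    A minimal C sketch of the central-point queue the abstract outlines; the linked-list layout and field names (requestor_needs, resource_needs, tail_switch) are assumptions used only to illustrate arrival-order processing of requests that carry a needs register.

    #include <stdbool.h>
    #include <stddef.h>

    struct proc_request {
        bool requestor_needs;        /* requestor needs switch */
        bool resource_needs;         /* resource needs switch  */
        struct proc_request *next;   /* sequential-order link  */
    };

    struct central_point {
        struct proc_request *head;   /* oldest pending request          */
        struct proc_request *tail;   /* newest pending request          */
        bool tail_switch;            /* set while any request is queued */
    };

    /* Enqueue a request at the central point, preserving arrival order. */
    void central_enqueue(struct central_point *cp, struct proc_request *r)
    {
        r->next = NULL;
        if (cp->tail)
            cp->tail->next = r;
        else
            cp->head = r;
        cp->tail = r;
        cp->tail_switch = true;      /* requests are present */
    }

    /* Dequeue the next request in sequential order, or return NULL. */
    struct proc_request *central_dequeue(struct central_point *cp)
    {
        struct proc_request *r = cp->head;
        if (!r)
            return NULL;
        cp->head = r->next;
        if (!cp->head) {
            cp->tail = NULL;
            cp->tail_switch = false; /* queue drained */
        }
        return r;
    }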
