INPUT OUTPUT BRIDGING
    31.
    Invention Application
    INPUT OUTPUT BRIDGING (In Force)

    Publication No.: US20130103870A1

    Publication Date: 2013-04-25

    Application No.: US13280768

    Filing Date: 2011-10-25

    IPC Classification: G06F13/368

    CPC Classification: G06F13/1605 G06F13/1684

    Abstract: In one embodiment, a system comprises a memory, and a first bridge unit for processor access with the memory. The first bridge unit comprises a first arbitration unit that is coupled with an input-output bus, a memory free notification unit (“MFNU”), and the memory, and is configured to receive requests from the input-output bus and receive requests from the MFNU and choose among the requests to send to the memory on a first memory bus. The system further comprises a second bridge unit for packet data access with the memory that includes a second arbitration unit that is coupled with a packet input unit, a packet output unit, and the memory and is configured to receive requests from the packet input unit and receive requests from the packet output unit, and choose among the requests to send to the memory on a second memory bus.

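    The abstract describes an arbitration unit that receives requests from two sources (the input-output bus and the MFNU) and chooses which request to forward on the memory bus. A minimal Python sketch of that choice follows; the class and method names are illustrative, and the round-robin policy is an assumption, since the patent does not specify an arbitration policy here.

```python
# Sketch of the first bridge unit's arbitration: requests arrive from an
# input-output bus and from a memory free notification unit (MFNU), and the
# arbiter picks one pending request per grant to send on the memory bus.
# Round-robin between the two sources is an assumed policy for illustration.

from collections import deque

class BridgeArbiter:
    def __init__(self):
        self.io_bus = deque()      # requests from the input-output bus
        self.mfnu = deque()        # requests from the MFNU
        self._last = "mfnu"        # last source granted, for round-robin

    def submit_io(self, req):
        self.io_bus.append(req)

    def submit_mfnu(self, req):
        self.mfnu.append(req)

    def grant(self):
        """Choose among pending requests; return the one sent to the memory bus."""
        first, second = ("io", self.io_bus), ("mfnu", self.mfnu)
        if self._last == "io":
            first, second = second, first   # alternate which source goes first
        for name, queue in (first, second):
            if queue:
                self._last = name
                return queue.popleft()
        return None                          # no pending requests

arb = BridgeArbiter()
arb.submit_io("io-read-0x100")
arb.submit_mfnu("free-page-7")
granted_first = arb.grant()    # round-robin starts with the I/O bus
granted_second = arb.grant()   # then alternates to the MFNU
```

    The second bridge unit's arbitration between the packet input and packet output units would follow the same shape with different sources.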

    Speculative directory writes in a directory based cache coherent nonuniform memory access protocol
    32.
    Granted Patent
    Speculative directory writes in a directory based cache coherent nonuniform memory access protocol (Expired)

    Publication No.: US07099913B1

    Publication Date: 2006-08-29

    Application No.: US09652834

    Filing Date: 2000-08-31

    IPC Classification: G06F15/16 G06F12/00 G06F9/26

    Abstract: A system and method are disclosed that reduce the latency of directory updates in a directory based Distributed Shared Memory computer system by speculating the next directory state. The distributed multiprocessing computer system contains a number of processor nodes each connected to main memory. Each main memory may store data that is shared between the processor nodes. A Home processor node for a memory block includes the original data block and a coherence directory for the data block in its main memory. An Owner processor node includes a copy of the original data block in its associated main memory, the copy of the data block residing exclusively in the main memory of the Owner processor node. A Requestor processor node may encounter a read or write miss of the original data block and request the data block from the Home processor node. The Home processor node receives the request for the data block from the Requestor processor node, forwards the request to the Owner processor node for the data block and performs a speculative write of the next directory state to the coherence directory for the data block without waiting for the Owner processor node to respond to the request.

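    The key step in the abstract is that the Home node writes the predicted next directory state at the moment it forwards the request, rather than after the Owner responds. A small Python sketch of that ordering follows; the state names and the prediction rule are assumptions for illustration, not taken from the patent.

```python
# Illustrative sketch of a speculative directory write: the Home node updates
# the coherence directory to the predicted next state as soon as it forwards
# the request to the Owner, hiding the directory-update latency instead of
# waiting for the Owner's response. Transition rules here are assumed.

def next_state(request_type):
    # Assumed prediction for a block currently held exclusively by an Owner:
    # a read miss makes it shared, a write miss moves exclusivity to the requestor.
    return {"read": "shared", "write": "exclusive-requestor"}[request_type]

class HomeNode:
    def __init__(self):
        self.directory = {}    # block address -> directory state
        self.forwarded = []    # requests forwarded to the Owner node

    def handle_request(self, addr, request_type):
        # Forward to the Owner and speculatively write the next directory
        # state in the same step, before any Owner response arrives.
        self.forwarded.append((addr, request_type))
        self.directory[addr] = next_state(request_type)

home = HomeNode()
home.directory[0x40] = "exclusive-owner"
home.handle_request(0x40, "read")   # directory updated without awaiting the Owner
```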

    Mechanism to control the allocation of an N-source shared buffer
    34.
    Granted Patent
    Mechanism to control the allocation of an N-source shared buffer (In Force)

    Publication No.: US07213087B1

    Publication Date: 2007-05-01

    Application No.: US09651924

    Filing Date: 2000-08-31

    IPC Classification: G06F5/00

    CPC Classification: H04L47/39 H04L49/90

    Abstract: A method and apparatus for ensuring fair and efficient use of a shared memory buffer. A preferred embodiment comprises a shared memory buffer in a multi-processor computer system. Memory requests from a local processor are delivered to a local memory controller by a cache control unit and memory requests from other processors are delivered to the memory controller by an interprocessor router. The memory controller allocates the memory requests in a shared buffer using a credit-based allocation scheme. The cache control unit and the interprocessor router are each assigned a number of credits. Each must pay a credit to the memory controller when a request is allocated to the shared buffer. If the number of filled spaces in the shared buffer is below a threshold, the buffer immediately returns the credits to the source from which the credit and memory request arrived. If the number of filled spaces in the shared buffer is above a threshold, the buffer holds the credits and returns the credits in a round-robin manner only when a space in the shared buffer becomes free. The number of credits assigned to each source is sufficient to enable each source to deliver an uninterrupted burst of memory requests to the buffer without having to wait for credits to return from the buffer. The threshold is the point when the number of free spaces available in the buffer is equal to the total number of credits assigned to the cache control unit and the interprocessor router.

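    The credit scheme above can be sketched compactly: each source pays a credit per allocation, credits return immediately below the threshold, and are held and returned as spaces free up above it. The sketch below uses the abstract's threshold definition (free space equals the total credits of both sources); buffer sizes, credit counts, and source names are illustrative.

```python
# Sketch of the credit-based allocation scheme: two sources (cache control
# unit "ccu" and interprocessor router "router") each pay a credit per request.
# Below the threshold credits return immediately; above it they are held and
# returned in arrival (round-robin-like) order as buffer spaces become free.

class SharedBuffer:
    def __init__(self, size, credits_per_source):
        self.filled = 0
        # Threshold per the abstract: reached when free space equals the
        # total number of credits assigned to both sources.
        self.threshold = size - 2 * credits_per_source
        self.credits = {"ccu": credits_per_source, "router": credits_per_source}
        self.held = []    # credits held while above the threshold (FIFO)

    def allocate(self, source):
        assert self.credits[source] > 0, "source must hold a credit to allocate"
        self.credits[source] -= 1          # source pays a credit
        self.filled += 1
        if self.filled <= self.threshold:
            self.credits[source] += 1      # below threshold: returned immediately
        else:
            self.held.append(source)       # above threshold: credit is held

    def free(self):
        self.filled -= 1
        if self.held:
            self.credits[self.held.pop(0)] += 1   # return a held credit

buf = SharedBuffer(size=8, credits_per_source=2)   # threshold = 8 - 4 = 4
for _ in range(4):
    buf.allocate("ccu")                  # at or below threshold: credits bounce back
buf.allocate("router")                   # 5th fill is above threshold: credit held
router_credits_held = buf.credits["router"]
buf.free()                               # a space frees; the held credit returns
router_credits_after = buf.credits["router"]
```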

    Input output bridging
    35.
    Granted Patent
    Input output bridging (In Force)

    Publication No.: US08473658B2

    Publication Date: 2013-06-25

    Application No.: US13280768

    Filing Date: 2011-10-25

    IPC Classification: G06F13/00

    CPC Classification: G06F13/1605 G06F13/1684

    Abstract: In one embodiment, a system comprises a memory, and a first bridge unit for processor access with the memory. The first bridge unit comprises a first arbitration unit that is coupled with an input-output bus, a memory free notification unit (“MFNU”), and the memory, and is configured to receive requests from the input-output bus and receive requests from the MFNU and choose among the requests to send to the memory on a first memory bus. The system further comprises a second bridge unit for packet data access with the memory that includes a second arbitration unit that is coupled with a packet input unit, a packet output unit, and the memory and is configured to receive requests from the packet input unit and receive requests from the packet output unit, and choose among the requests to send to the memory on a second memory bus.


    Packet priority in a network processor
    36.
    Granted Patent
    Packet priority in a network processor (In Force)

    Publication No.: US08885480B2

    Publication Date: 2014-11-11

    Application No.: US13277613

    Filing Date: 2011-10-20

    Abstract: In a network processor, a “port-kind” identifier (ID) is assigned to each port. Parsing circuitry employs the port-kind ID to select the configuration information associated with a received packet. The port-kind ID can also be stored in a data structure presented to software, along with a larger port number (indicating an interface and/or channel). Based on the port-kind ID and extracted information about the packet, a backpressure ID is calculated for the packet. The backpressure ID is used to assign a priority to the packet, as well as to determine whether a traffic threshold is exceeded, thereby enabling a backpressure signal to limit packet traffic associated with the particular backpressure ID.

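    The flow in the abstract (port-kind ID selects parsing configuration, a backpressure ID is derived from it plus extracted packet fields, and a per-ID counter drives a backpressure decision) can be sketched as below. The derivation formula, field names, and thresholds are assumptions for illustration; the patent does not give them.

```python
# Hypothetical sketch: a port-kind ID selects per-port configuration, a
# backpressure ID is derived from the port-kind ID plus bits extracted from
# the packet, and a counter per backpressure ID is checked against a traffic
# threshold to decide whether to assert backpressure for that ID.

from collections import defaultdict

PORT_CONFIG = {0: {"parser": "ethernet"}, 1: {"parser": "pcie"}}  # illustrative

def backpressure_id(port_kind, priority_bits, num_bpids=64):
    # Assumed combination: port kind in the high bits, extracted priority low.
    return (port_kind * 8 + priority_bits) % num_bpids

class TrafficMonitor:
    def __init__(self, threshold):
        self.threshold = threshold
        self.pending = defaultdict(int)   # outstanding packets per backpressure ID

    def on_packet(self, port_kind, priority_bits):
        bpid = backpressure_id(port_kind, priority_bits)
        self.pending[bpid] += 1
        # Backpressure is asserted only for traffic sharing this backpressure ID.
        assert_bp = self.pending[bpid] > self.threshold
        return bpid, assert_bp

mon = TrafficMonitor(threshold=2)
results = [mon.on_packet(port_kind=1, priority_bits=3) for _ in range(3)]
```

    Keying the counter on the backpressure ID rather than the raw port number lets one signal throttle exactly the traffic class that exceeded its threshold.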

    Apparatus and method for data deskew
    37.
    Granted Patent
    Apparatus and method for data deskew (In Force)

    Publication No.: US07209531B1

    Publication Date: 2007-04-24

    Application No.: US10397083

    Filing Date: 2003-03-26

    IPC Classification: H04L7/00

    Abstract: A deskew circuit utilizing a coarse delay adjustment and a fine delay adjustment centers the received data in a proper data window and aligns the data for proper sampling. In one scheme, bit-state transitions of a training sequence for the SPI-4 protocol are used to adjust delays to align the transition points.

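    A rough software model of the idea above: oversample a known training pattern, locate the bit-state transition points, and pick a sampling delay midway between transitions, i.e. the center of the data eye. The oversampling factor and the pattern are assumptions; in hardware the chosen delay would be applied as a coarse step plus a fine step rather than computed over a sample list.

```python
# Model of transition-based deskew: find where an oversampled training signal
# changes state, then choose a sample delay centered between two transitions
# so the data is sampled in the middle of its window.

def transition_points(samples):
    """Indices where the oversampled signal changes bit state."""
    return [i for i in range(1, len(samples)) if samples[i] != samples[i - 1]]

def centered_delay(samples):
    """Pick a sampling delay midway between the first two transitions."""
    edges = transition_points(samples)
    return (edges[0] + edges[1]) // 2

# A 4x-oversampled alternating training pattern, skewed by one sample.
skewed = [0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1]
delay = centered_delay(skewed)   # lands in the middle of a stable bit window
```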

    PACKET PRIORITY IN A NETWORK PROCESSOR
    38.
    Invention Application
    PACKET PRIORITY IN A NETWORK PROCESSOR (In Force)

    Publication No.: US20130100812A1

    Publication Date: 2013-04-25

    Application No.: US13277613

    Filing Date: 2011-10-20

    IPC Classification: H04L12/26

    Abstract: In a network processor, a “port-kind” identifier (ID) is assigned to each port. Parsing circuitry employs the port-kind ID to select the configuration information associated with a received packet. The port-kind ID can also be stored in a data structure presented to software, along with a larger port number (indicating an interface and/or channel). Based on the port-kind ID and extracted information about the packet, a backpressure ID is calculated for the packet. The backpressure ID is used to assign a priority to the packet, as well as to determine whether a traffic threshold is exceeded, thereby enabling a backpressure signal to limit packet traffic associated with the particular backpressure ID.


    Packet traffic control in a network processor

    Publication No.: US09906468B2

    Publication Date: 2018-02-27

    Application No.: US13283252

    Filing Date: 2011-10-27

    IPC Classification: H04L12/933 H04L12/873

    CPC Classification: H04L49/15 H04L47/52

    Abstract: A network processor controls packet traffic in a network by maintaining a count of pending packets. In the network processor, a pipe identifier (ID) is assigned to each of a number of paths connecting a packet output to respective network interfaces receiving those packets. A corresponding pipe ID is attached to each packet as it is transmitted. A counter employs the pipe ID to maintain a count of packets to be transmitted by a network interface. As a result, the network processor manages traffic on a per-pipe ID basis to ensure that traffic thresholds are not exceeded.
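    The per-pipe accounting described above can be sketched as a counter keyed by pipe ID: the ID travels with each packet, the count of in-flight packets per pipe gates further sends, and a transmit notification decrements the count. Names and the threshold policy below are illustrative assumptions.

```python
# Sketch of per-pipe traffic control: each path from the packet output to a
# network interface gets a pipe ID; a counter per pipe ID tracks packets in
# flight so a per-pipe threshold can be enforced before sending more.

from collections import defaultdict

class PipeTracker:
    def __init__(self, limit):
        self.limit = limit
        self.in_flight = defaultdict(int)    # pending packets per pipe ID

    def can_send(self, pipe_id):
        return self.in_flight[pipe_id] < self.limit

    def send(self, pipe_id):
        assert self.can_send(pipe_id), "per-pipe threshold exceeded"
        self.in_flight[pipe_id] += 1
        return {"pipe_id": pipe_id}          # pipe ID attached to the packet

    def on_transmitted(self, packet):
        # The network interface reports transmission; the pipe's count drops.
        self.in_flight[packet["pipe_id"]] -= 1

tracker = PipeTracker(limit=2)
p1 = tracker.send(pipe_id=5)
p2 = tracker.send(pipe_id=5)
blocked = not tracker.can_send(pipe_id=5)    # a third packet would exceed the limit
tracker.on_transmitted(p1)
unblocked = tracker.can_send(pipe_id=5)      # a slot freed on transmission
```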

    Scalable efficient I/O port protocol
    40.
    Granted Patent
    Scalable efficient I/O port protocol (In Force)

    Publication No.: US08364851B2

    Publication Date: 2013-01-29

    Application No.: US10677583

    Filing Date: 2003-10-02

    IPC Classification: G06F3/00

    Abstract: A system that supports a high performance, scalable, and efficient I/O port protocol to connect to I/O devices is disclosed. A distributed multiprocessing computer system contains a number of processors each coupled to an I/O bridge ASIC implementing the I/O port protocol. One or more I/O devices are coupled to the I/O bridge ASIC, each I/O device capable of accessing machine resources in the computer system by transmitting and receiving message packets. Machine resources in the computer system include data blocks, registers and interrupt queues. Each processor in the computer system is coupled to a memory module capable of storing data blocks shared between the processors. Coherence of the shared data blocks in this shared memory system is maintained using a directory based coherence protocol. Coherence of data blocks transferred during I/O device read and write accesses is maintained using the same coherence protocol as for the memory system. Data blocks transferred during an I/O device read or write access may be buffered in a cache by the I/O bridge ASIC only if the I/O bridge ASIC has exclusive copies of the data blocks. The I/O bridge ASIC includes a DMA device that supports both in-order and out-of-order DMA read and write streams of data blocks. An in-order stream of reads of data blocks performed by the DMA device always results in the DMA device receiving coherent data blocks that do not have to be written back to the memory module.
