Input output bridging
    71.
    Granted Patent
    Input output bridging (In Force)

    Publication No.: US08473658B2

    Publication Date: 2013-06-25

    Application No.: US13280768

    Filing Date: 2011-10-25

    IPC Class: G06F13/00

    CPC Class: G06F13/1605 G06F13/1684

    Abstract: In one embodiment, a system comprises a memory, and a first bridge unit for processor access with the memory. The first bridge unit comprises a first arbitration unit that is coupled with an input-output bus, a memory free notification unit (“MFNU”), and the memory, and is configured to receive requests from the input-output bus and receive requests from the MFNU and choose among the requests to send to the memory on a first memory bus. The system further comprises a second bridge unit for packet data access with the memory that includes a second arbitration unit that is coupled with a packet input unit, a packet output unit, and the memory and is configured to receive requests from the packet input unit and receive requests from the packet output unit, and choose among the requests to send to the memory on a second memory bus.
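The arbitration described above can be sketched in software. A minimal model, assuming a simple round-robin policy between the two request sources (the abstract does not specify the arbitration policy; the source names and request strings are illustrative):

```python
from collections import deque

class BridgeArbiter:
    """Chooses one pending request per cycle from several sources.

    Round-robin is an assumed policy; the patent abstract only says the
    arbiter "chooses among the requests" without naming a scheme.
    """
    def __init__(self, sources):
        self.queues = {name: deque() for name in sources}
        self.order = list(sources)  # round-robin pointer over sources

    def submit(self, source, request):
        self.queues[source].append(request)

    def grant(self):
        """Return the next (source, request) to send on the memory bus, or None."""
        for _ in range(len(self.order)):
            src = self.order.pop(0)
            self.order.append(src)  # rotate: granted source goes to the back
            if self.queues[src]:
                return src, self.queues[src].popleft()
        return None

# First bridge unit: arbitrates between the I/O bus and the MFNU
first = BridgeArbiter(["io_bus", "mfnu"])
first.submit("io_bus", "read A")
first.submit("mfnu", "free B")
print(first.grant())  # ('io_bus', 'read A')
print(first.grant())  # ('mfnu', 'free B')
```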


    MULTI-CORE INTERCONNECT IN A NETWORK PROCESSOR
    72.
    Patent Application
    MULTI-CORE INTERCONNECT IN A NETWORK PROCESSOR (In Force)

    Publication No.: US20130111141A1

    Publication Date: 2013-05-02

    Application No.: US13285629

    Filing Date: 2011-10-31

    IPC Class: G06F12/08

    CPC Class: G06F12/0813 G06F12/08

    Abstract: A network processor includes multiple processor cores for processing packet data. In order to provide the processor cores with access to a memory subsystem, an interconnect circuit directs communications between the processor cores and the L2 Cache and other memory devices. The processor cores are divided into several groups, each group sharing an individual bus, and the L2 Cache is divided into a number of banks, each bank having access to a separate bus. The interconnect circuit processes requests to store and retrieve data from the processor cores across multiple buses, and processes responses to return data from the cache banks. As a result, the network processor provides high-bandwidth memory access for multiple processor cores.
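The banked-cache arrangement can be illustrated with a toy address-to-bank mapping. A sketch assuming cache-line interleaving across banks; the bank count, line size, and interleaving scheme are assumptions, not taken from the abstract:

```python
NUM_BANKS = 4      # illustrative; the abstract does not give a bank count
CACHE_LINE = 128   # bytes per line, an assumed value

def l2_bank(address):
    """Select which L2 bank (and hence which bus) serves an address.

    Interleaving by cache line means consecutive lines hit different
    banks, spreading requests across the separate bank buses.
    """
    return (address // CACHE_LINE) % NUM_BANKS

# Consecutive cache lines map to different banks
print([l2_bank(a) for a in range(0, 512, 128)])  # [0, 1, 2, 3]
```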


    PACKET TRAFFIC CONTROL IN A NETWORK PROCESSOR
    73.
    Patent Application
    PACKET TRAFFIC CONTROL IN A NETWORK PROCESSOR (In Force)

    Publication No.: US20130107711A1

    Publication Date: 2013-05-02

    Application No.: US13283252

    Filing Date: 2011-10-27

    IPC Class: H04L12/26

    CPC Class: H04L49/15 H04L47/52

    Abstract: A network processor controls packet traffic in a network by maintaining a count of pending packets. In the network processor, a pipe identifier (ID) is assigned to each of a number of paths connecting a packet output to respective network interfaces receiving those packets. A corresponding pipe ID is attached to each packet as it is transmitted. A counter employs the pipe ID to maintain a count of packets to be transmitted by a network interface. As a result, the network processor manages traffic on a per-pipe ID basis to ensure that traffic thresholds are not exceeded.
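The per-pipe accounting can be sketched as a counter table. A minimal model; the limit value, method names, and the error-on-overflow behavior are illustrative:

```python
class PipeCounters:
    """Per-pipe in-flight packet counts checked against a threshold."""

    def __init__(self, limit):
        self.limit = limit
        self.pending = {}  # pipe ID -> packets attached but not yet sent out

    def can_send(self, pipe_id):
        """True while this pipe's pending count is under the threshold."""
        return self.pending.get(pipe_id, 0) < self.limit

    def on_transmit(self, pipe_id):
        """Packet output attaches the pipe ID and bumps its counter."""
        if not self.can_send(pipe_id):
            raise RuntimeError(f"pipe {pipe_id} over threshold")
        self.pending[pipe_id] = self.pending.get(pipe_id, 0) + 1

    def on_interface_done(self, pipe_id):
        """Network interface finished sending; decrement the pipe's count."""
        self.pending[pipe_id] -= 1
```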


    PACKET PRIORITY IN A NETWORK PROCESSOR
    74.
    Patent Application
    PACKET PRIORITY IN A NETWORK PROCESSOR (In Force)

    Publication No.: US20130100812A1

    Publication Date: 2013-04-25

    Application No.: US13277613

    Filing Date: 2011-10-20

    IPC Class: H04L12/26

    Abstract: In a network processor, a “port-kind” identifier (ID) is assigned to each port. Parsing circuitry employs the port-kind ID to select the configuration information associated with a received packet. The port-kind ID can also be stored in a data structure presented to software, along with a larger port number (indicating an interface and/or channel). Based on the port-kind ID and extracted information about the packet, a backpressure ID is calculated for the packet. The backpressure ID is used to assign a priority to the packet and to determine whether a traffic threshold is exceeded, thereby enabling a backpressure signal to limit packet traffic associated with that particular backpressure ID.
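One way to picture the backpressure-ID calculation is as a table lookup keyed on the port-kind ID plus a field extracted from the packet. The actual calculation, the table contents, and the threshold are not given in the abstract; everything below is a hypothetical illustration:

```python
# Hypothetical mapping; the abstract does not give the actual formula.
BPID_TABLE = {
    # (port_kind ID, extracted priority field) -> backpressure ID
    (0, 0): 0, (0, 1): 1,
    (1, 0): 2, (1, 1): 3,
}

THRESHOLD = 3  # illustrative per-backpressure-ID traffic threshold
inflight = {bpid: 0 for bpid in set(BPID_TABLE.values())}

def admit(port_kind, priority):
    """Compute the packet's backpressure ID and check its threshold.

    Returns (backpressure_id, backpressure_asserted): the second value
    models the backpressure signal that limits traffic for this ID.
    """
    bpid = BPID_TABLE[(port_kind, priority)]
    inflight[bpid] += 1
    return bpid, inflight[bpid] > THRESHOLD
```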


    Mechanism to control the allocation of an N-source shared buffer
    77.
    Granted Patent
    Mechanism to control the allocation of an N-source shared buffer (In Force)

    Publication No.: US07213087B1

    Publication Date: 2007-05-01

    Application No.: US09651924

    Filing Date: 2000-08-31

    IPC Class: G06F5/00

    CPC Class: H04L47/39 H04L49/90

    Abstract: A method and apparatus for ensuring fair and efficient use of a shared memory buffer. A preferred embodiment comprises a shared memory buffer in a multi-processor computer system. Memory requests from a local processor are delivered to a local memory controller by a cache control unit, and memory requests from other processors are delivered to the memory controller by an interprocessor router. The memory controller allocates the memory requests in a shared buffer using a credit-based allocation scheme. The cache control unit and the interprocessor router are each assigned a number of credits, and each must pay a credit to the memory controller when a request is allocated to the shared buffer. If the number of filled spaces in the shared buffer is below a threshold, the buffer immediately returns the credit to the source from which the credit and memory request arrived. If the number of filled spaces in the shared buffer is above the threshold, the buffer holds the credits and returns them in a round-robin manner only when a space in the shared buffer becomes free. The number of credits assigned to each source is sufficient to enable each source to deliver an uninterrupted burst of memory requests to the buffer without having to wait for credits to return from the buffer. The threshold is the point at which the number of free spaces available in the buffer equals the total number of credits assigned to the cache control unit and the interprocessor router.
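The credit scheme lends itself to a compact software model. A sketch following the abstract's rules, with the threshold defined so that free slots equal total assigned credits; the buffer size, credit counts, and source names are illustrative:

```python
class CreditBuffer:
    """Shared buffer with credit-based allocation from N sources.

    Below the threshold, a paid credit returns immediately to its
    source; above it, credits are held and returned round-robin as
    entries drain.
    """

    def __init__(self, size, credits_per_source, sources):
        self.size = size
        self.used = 0
        self.credits = {s: credits_per_source for s in sources}
        self.held = []            # sources owed a credit, in arrival order
        self.rr = list(sources)   # round-robin order for credit returns
        # Threshold: filled count at which free slots == total credits.
        self.threshold = size - credits_per_source * len(sources)

    def allocate(self, source):
        """Source pays one credit to place a request in the buffer."""
        if self.credits[source] == 0 or self.used == self.size:
            return False               # must wait for a credit to return
        self.credits[source] -= 1
        self.used += 1
        if self.used <= self.threshold:
            self.credits[source] += 1  # below threshold: instant refund
        else:
            self.held.append(source)   # above threshold: buffer holds it
        return True

    def free(self):
        """A buffer entry drains; return one held credit round-robin."""
        self.used -= 1
        for s in list(self.rr):
            if s in self.held:
                self.held.remove(s)
                self.credits[s] += 1
                self.rr.remove(s)
                self.rr.append(s)      # rotate so s is served last next time
                break
```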


    Fault containment and error recovery in a scalable multiprocessor
    78.
    Granted Patent
    Fault containment and error recovery in a scalable multiprocessor (In Force)

    Publication No.: US07152191B2

    Publication Date: 2006-12-19

    Application No.: US10691744

    Filing Date: 2003-10-23

    IPC Class: G06F11/00

    Abstract: A multi-processor computer system permits various types of partitions to be implemented to contain and isolate hardware failures. The various types of partitions include hard, semi-hard, firm, and soft partitions. Each partition can include one or more processors. Upon detecting a failure associated with a processor, the connection to adjacent processors in the system can be severed, thereby precluding corrupted data from contaminating the rest of the system. If an inter-processor connection is severed, message traffic in the system can become congested as messages back up in other processors. Accordingly, each processor includes various timers to monitor for traffic congestion that may be due to a severed connection. Rather than letting the processor continue to wait to be able to transmit its messages, the timers expire at preprogrammed time periods and the processor takes appropriate action, such as simply dropping queued messages, to keep the system from locking up.
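The timer-based drop behavior can be modeled as an outbound queue that expires stale entries. A sketch; the timeout value and the software-style clock are illustrative stand-ins for the hardware timers:

```python
import time
from collections import deque

class MessageQueue:
    """Outbound queue that drops messages stuck past a timeout,
    so a severed link cannot wedge the whole system."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.q = deque()  # (enqueue_time, message), oldest first

    def enqueue(self, msg, now=None):
        """Queue a message; `now` lets tests inject a clock value."""
        self.q.append((now if now is not None else time.monotonic(), msg))

    def expire(self, now=None):
        """Drop queued messages older than the timeout; return the drops."""
        now = now if now is not None else time.monotonic()
        dropped = []
        while self.q and now - self.q[0][0] > self.timeout_s:
            dropped.append(self.q.popleft()[1])
        return dropped
```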


    Method and apparatus for managing timestamps when storing data

    Publication No.: US07031869B2

    Publication Date: 2006-04-18

    Application No.: US10034462

    Filing Date: 2001-12-28

    IPC Class: G06F19/00

    Abstract: A system is disclosed in which an on-chip logic analyzer (OCLA) includes timestamp logic capable of providing clock-cycle resolution of data entries using a relatively small number of bits. The timestamp logic includes a counter that is reset each time a store operation occurs. The counter counts the number of clock cycles since the previous store operation and, if enabled by the user, provides a binary signal to the memory that indicates the number of clock cycles since the previous store operation, which the memory stores with the state data. If the counter overflows before a store operation is requested, the timestamp logic may force a store operation so that the time between stores can be determined.
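The counter behavior can be modeled directly: the count resets on every store, and hitting the maximum forces a store so no inter-store interval is lost. A sketch; the counter width and trace representation are illustrative:

```python
class TimestampLogic:
    """Counts clock cycles since the last store; on counter overflow,
    forces a store so the time between stores stays recoverable."""

    def __init__(self, bits=8):
        self.max = (1 << bits) - 1  # counter saturates here
        self.count = 0
        self.trace = []             # (cycles_since_last_store, data) entries

    def tick(self):
        """One clock cycle with no requested store operation."""
        if self.count == self.max:
            self.store(None)        # forced store on overflow
        else:
            self.count += 1

    def store(self, data):
        """Record an entry with its cycle delta; counter resets."""
        self.trace.append((self.count, data))
        self.count = 0
```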

    Priority rules for reducing network message routing latency
    80.
    Granted Patent
    Priority rules for reducing network message routing latency (Lapsed)

    Publication No.: US06961781B1

    Publication Date: 2005-11-01

    Application No.: US09652322

    Filing Date: 2000-08-31

    Abstract: A system and method are disclosed for reducing network message-passing latency in a distributed multiprocessing computer system that contains a plurality of microprocessors in a computer network, each microprocessor including router logic that routes message packets prioritized by message packet type, age, and source. The microprocessors each include a plurality of network input ports connected to corresponding local arbiters in the router. Each local arbiter is able to select a message packet from among the message packets waiting at the associated network input port. Microprocessor input ports and microprocessor output ports in the microprocessor allow the exchange of message packets between hardware functional units in the microprocessor and between the microprocessors. The microprocessor input ports are similarly each coupled to corresponding local arbiters in the router, and each of these local arbiters is able to select a message packet from among the message packets waiting at the microprocessor input port. Global arbiters in the router, connected to the network output ports and microprocessor output ports, select a message packet from the message packets nominated by the local arbiters of the network input ports and microprocessor input ports. When a message packet is ready to be dispatched, the local arbiter connected to each network input port or microprocessor input port requests service for it from an output-port global arbiter based on the message packet type.
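The two-level arbitration can be sketched as a local pass per input port followed by a global pass per output port. A minimal model; the specific priority values and the tie-break on age are illustrative assumptions, since the abstract names the criteria (type, age, source) without ordering rules:

```python
# Smaller value = more urgent; the actual ordering is an assumption.
TYPE_PRIORITY = {"response": 0, "request": 1}

def local_arbitrate(port_queue):
    """Local arbiter: nominate one packet from an input port's queue,
    preferring urgent types, then older packets (lower age value)."""
    if not port_queue:
        return None
    return min(port_queue, key=lambda p: (TYPE_PRIORITY[p["type"]], p["age"]))

def global_arbitrate(nominations):
    """Global arbiter for one output port: pick among the packets
    nominated by the local arbiters, using the same ordering."""
    live = [p for p in nominations if p is not None]
    return local_arbitrate(live)

ports = [
    [{"type": "request", "age": 5}, {"type": "response", "age": 9}],
    [{"type": "request", "age": 1}],
]
winner = global_arbitrate([local_arbitrate(q) for q in ports])
print(winner)  # {'type': 'response', 'age': 9}
```

The response beats the older request because type outranks age in this assumed ordering; a real router would also fold in the packet's source, which this sketch omits.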
