Abstract:
Systems and methods for providing RDMA (remote direct memory access) read requests as a restricted feature in a high performance computing environment. An exemplary method can provide, at one or more microprocessors, a first subnet, the first subnet comprising a plurality of switches; a plurality of host channel adapters, wherein each of the host channel adapters comprises at least one host channel adapter port, and wherein the plurality of host channel adapters are interconnected via the plurality of switches; and a plurality of end nodes, including a plurality of virtual machines. The method can associate a host channel adapter with a selective RDMA restriction. The method can host a virtual machine of the plurality of virtual machines at the host channel adapter that comprises the selective RDMA restriction.
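To make the restriction concrete, the following is a minimal C sketch, assuming hypothetical types and names (hca_t, vm_t, rdma_op_t, host_vm, op_allowed) and a simple boolean flag standing in for the selective RDMA restriction; the actual host channel adapter and subnet management interfaces are not shown.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical types for illustration only; not the claimed interfaces. */
typedef enum { RDMA_WRITE, RDMA_READ, RDMA_SEND } rdma_op_t;

typedef struct {
    int  port_count;          /* each HCA has at least one port            */
    bool restrict_rdma_read;  /* selective RDMA restriction on this HCA    */
} hca_t;

typedef struct {
    const char *name;
    hca_t      *host_hca;     /* the HCA that hosts this virtual machine   */
} vm_t;

/* Host a VM on an HCA that carries the selective restriction. */
static void host_vm(vm_t *vm, hca_t *hca)
{
    vm->host_hca = hca;
}

/* Gate an operation on the hosting HCA's restriction. */
static bool op_allowed(const vm_t *vm, rdma_op_t op)
{
    if (op == RDMA_READ && vm->host_hca->restrict_rdma_read)
        return false;         /* RDMA reads treated as a restricted feature */
    return true;
}

int main(void)
{
    hca_t hca = { .port_count = 1, .restrict_rdma_read = true };
    vm_t  vm  = { .name = "vm0" };

    host_vm(&vm, &hca);
    printf("RDMA read allowed for %s: %s\n",
           vm.name, op_allowed(&vm, RDMA_READ) ? "yes" : "no");
    return 0;
}
```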
Abstract:
Systems and methods for using InfiniBand routing algorithms for Ethernet fabrics in a high performance computing environment. The method can provide, at a computer comprising one or more microprocessors, a plurality of switches, a plurality of hosts, a topology provider (TP) module, a routing engine (RE) module, and a switch initializer (SI) module. The method can perform a discovery sweep, by the TP module, of the plurality of hosts and the plurality of switches and assign an address to each of the plurality of hosts and the plurality of switches. The method can calculate, by the RE module, a routing map, based upon a routing scheme, for the plurality of hosts and the plurality of switches, the routing map comprising a plurality of forwarding tables. The method can configure, by the SI module, each of the plurality of switches with a forwarding table of the plurality of forwarding tables calculated by the RE module.
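The division of labor among the TP, RE, and SI modules can be illustrated with a minimal C sketch; the fabric graph, the sequential address assignment, and the breadth-first shortest-path route selection below are illustrative assumptions, not the particular routing scheme of the method.

```c
#include <stdio.h>

/* Illustrative fabric: nodes 0-2 are switches, nodes 3-4 are hosts. */
#define N 5

static const int adj[N][N] = {    /* 1 = link between the two nodes */
    {0,1,1,1,0},
    {1,0,1,0,1},
    {1,1,0,0,0},
    {1,0,0,0,0},
    {0,1,0,0,0},
};

static int addr[N];               /* TP: one address per discovered node */
static int next_hop[N][N];        /* RE: per-node next hop toward each destination */

/* Breadth-first search from each destination fills the next-hop entries. */
static void compute_routes(void)
{
    for (int dst = 0; dst < N; dst++) {
        int dist[N], queue[N], head = 0, tail = 0;
        for (int i = 0; i < N; i++) { dist[i] = -1; next_hop[i][dst] = -1; }
        dist[dst] = 0;
        queue[tail++] = dst;
        while (head < tail) {
            int u = queue[head++];
            for (int v = 0; v < N; v++) {
                if (adj[u][v] && dist[v] < 0) {
                    dist[v] = dist[u] + 1;
                    next_hop[v][dst] = u;   /* from v, first hop toward dst is u */
                    queue[tail++] = v;
                }
            }
        }
    }
}

int main(void)
{
    for (int i = 0; i < N; i++)   /* TP: discovery sweep assigns addresses   */
        addr[i] = 100 + i;

    compute_routes();             /* RE: routing map = set of forwarding tables */

    for (int sw = 0; sw < 3; sw++) {   /* SI: configure one table per switch */
        printf("switch %d forwarding table:\n", sw);
        for (int dst = 0; dst < N; dst++)
            if (dst != sw)
                printf("  dst addr %d -> next hop node %d\n",
                       addr[dst], next_hop[sw][dst]);
    }
    return 0;
}
```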
Abstract:
A system and method can support efficient packet processing in a network environment. The system can comprise a direct memory access (DMA) resource pool that comprises one or more DMA resources. Furthermore, the system can use a plurality of packet buffers in a memory, wherein each said DMA resource can point to a chain of packet buffers in the memory. Here, the chain of packet buffers can be implemented based on either a linked list data structure or a linear array data structure. Additionally, each said DMA resource allows a packet processing thread to access the chain of packet buffers using a pre-assigned thread key.
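A minimal C sketch of one DMA resource pointing to a linked-list chain of packet buffers, gated by a pre-assigned thread key, is shown below; the names (pkt_buf_t, dma_resource_t, dma_get_chain) are hypothetical, and a linear array of buffers could replace the linked list.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical buffer and resource layouts for illustration only. */
typedef struct pkt_buf {
    char            data[2048];   /* packet payload storage               */
    size_t          len;
    struct pkt_buf *next;         /* linked-list chain of packet buffers  */
} pkt_buf_t;

typedef struct {
    uint32_t   owner_key;         /* pre-assigned thread key for this resource     */
    pkt_buf_t *chain;             /* head of the chain this DMA resource points to */
} dma_resource_t;

/* A packet processing thread presents its key to reach the chain. */
static pkt_buf_t *dma_get_chain(dma_resource_t *res, uint32_t thread_key)
{
    return (res->owner_key == thread_key) ? res->chain : NULL;
}

int main(void)
{
    pkt_buf_t b2 = { .len = 64,  .next = NULL };
    pkt_buf_t b1 = { .len = 128, .next = &b2 };
    dma_resource_t res = { .owner_key = 7, .chain = &b1 };

    for (pkt_buf_t *p = dma_get_chain(&res, 7); p; p = p->next)
        printf("buffer of %zu bytes\n", p->len);
    return 0;
}
```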
Abstract:
A system and method can support efficient packet processing in a network environment. The system can comprise a thread scheduling engine that operates to assign a thread key to each software thread in a plurality of software threads. Furthermore, the system can comprise a pool of direct memory access (DMA) resources that can be used to process packets in the network environment. Additionally, each said software thread operates to request access to a DMA resource in the pool of DMA resources by presenting an assigned thread key, and a single software thread is allowed to access multiple DMA resources using the same thread key.
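A minimal C sketch of the key-based arbitration described above, assuming hypothetical names (assign_thread_key, acquire) and a fixed-size pool; it shows a single software thread holding multiple DMA resources under the same thread key.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Illustrative fixed-size DMA resource pool; not the actual hardware layout. */
#define POOL_SIZE 4

typedef struct {
    bool     in_use;
    uint32_t holder_key;          /* thread key of the current holder */
} dma_slot_t;

static dma_slot_t pool[POOL_SIZE];
static uint32_t   next_key = 1;

/* Thread scheduling engine: hand each software thread its own key. */
static uint32_t assign_thread_key(void)
{
    return next_key++;
}

/* A thread presents its key; one key may hold several resources at once. */
static int acquire(uint32_t key)
{
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!pool[i].in_use) {
            pool[i].in_use = true;
            pool[i].holder_key = key;
            return i;
        }
    }
    return -1;                    /* pool exhausted */
}

int main(void)
{
    uint32_t key = assign_thread_key();
    int a = acquire(key);         /* same thread, same key ...           */
    int b = acquire(key);         /* ... may hold multiple DMA resources */
    printf("thread key %u holds DMA resources %d and %d\n", key, a, b);
    return 0;
}
```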