Abstract:
In one embodiment, a method includes receiving a key associated with a portion of a data packet, comparing the key to a first range extreme, selecting a second range extreme, and comparing the key with the second range extreme. The first range extreme is associated with a first range and the second range extreme is associated with a second range. The second range extreme is selected based on comparing the key to the first range extreme. The method includes producing a policy vector associated with the first range or the second range.
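A minimal C sketch of the range comparison described above: a packet-derived key is narrowed across sorted range extremes until the containing range, and with it a policy vector, is found. The array names, range boundaries, and policy bits are placeholder assumptions, not details of the embodiment.

/* Illustrative only: select a policy vector by comparing a packet-derived
 * key against sorted range extremes (placeholder data and names). */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define NUM_RANGES 4

/* Upper extreme of each contiguous range, sorted ascending (assumed layout). */
static const uint32_t range_extreme[NUM_RANGES] = { 0x00FF, 0x0FFF, 0x3FFF, 0xFFFF };

/* One policy vector per range; the bit patterns are placeholders. */
static const uint64_t policy_vector[NUM_RANGES] = { 0x1, 0x2, 0x4, 0x8 };

/* Compare the key to a first range extreme, then to a second range extreme
 * chosen by that comparison, narrowing until one range remains. */
static uint64_t lookup_policy(uint32_t key)
{
    size_t lo = 0, hi = NUM_RANGES - 1;

    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;   /* current range extreme to test */
        if (key <= range_extreme[mid])
            hi = mid;                      /* key is at or below this extreme */
        else
            lo = mid + 1;                  /* select the next range extreme */
    }
    return policy_vector[lo];
}

int main(void)
{
    uint32_t key = 0x0ABC;                 /* e.g., a port number from a packet */
    printf("policy vector: 0x%llx\n", (unsigned long long)lookup_policy(key));
    return 0;
}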
Abstract:
In one embodiment, an apparatus comprises a range selection module, a first stage of Bloom filters, a second stage of Bloom filters, and a hashing module. The range selection module is configured to define a set of hash key vectors based on a set of range values associated with at least a portion of an address value from a data packet received at a multi-stage switch. The first stage of Bloom filters and the second stage of Bloom filters are collectively configured to determine that at least a portion of a hash key vector from the set of hash key vectors has a probability of being included in a hash table. The hashing module is configured to produce a hash value based on the hash key vector such that a first policy vector is selected based on the hash value, and the first policy vector is decompressed to produce a second policy vector associated with the data packet.
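The gating role of the two filter stages in front of the exact hash-table probe can be pictured with the C sketch below. The hash functions, filter sizes, and function names are invented for the sketch and are not the switch's hardware design.

/* Sketch of a two-stage Bloom-filter pre-check gating an exact hash-table
 * probe; hash functions and sizes are simple placeholders. */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define BLOOM_BITS 1024
static uint8_t stage1[BLOOM_BITS / 8];
static uint8_t stage2[BLOOM_BITS / 8];

/* Two independent placeholder hash functions (FNV-1a plus a multiplier). */
static uint32_t hash_a(const uint8_t *key, size_t len)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++) { h ^= key[i]; h *= 16777619u; }
    return h;
}
static uint32_t hash_b(const uint8_t *key, size_t len)
{
    return hash_a(key, len) * 2654435761u;
}

static void bloom_set(uint8_t *filter, uint32_t h)
{
    uint32_t bit = h % BLOOM_BITS;
    filter[bit / 8] |= (uint8_t)(1u << (bit % 8));
}

static bool bloom_test(const uint8_t *filter, uint32_t h)
{
    uint32_t bit = h % BLOOM_BITS;
    return (filter[bit / 8] >> (bit % 8)) & 1u;
}

/* Both stages must report "possibly present" before the more expensive
 * exact hash-table lookup is attempted. */
static bool maybe_in_table(const uint8_t *key, size_t len)
{
    return bloom_test(stage1, hash_a(key, len)) &&
           bloom_test(stage2, hash_b(key, len));
}

int main(void)
{
    const uint8_t key[4] = { 10, 0, 0, 1 };   /* e.g., part of an IPv4 address */
    bloom_set(stage1, hash_a(key, sizeof key));
    bloom_set(stage2, hash_b(key, sizeof key));
    return maybe_in_table(key, sizeof key) ? 0 : 1;
}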
Abstract:
In one embodiment, a method includes accessing a condition test vector, selecting a key from a plurality of keys, and determining whether the key selected and a condition value satisfy a condition relation. The accessing is based on an index value. The condition test vector includes a first plurality of bit values defining the condition relation, a second plurality of bit values defining a key selector, and a third plurality of bit values defining the condition value. The selecting is based on the second plurality of bit values. Each key from the plurality of keys includes a combination of bit values representing a portion of a data packet. A result is defined based on the determining.
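One plausible software decoding of such a condition test vector is sketched below; the field widths, bit positions, and relation encoding are assumptions chosen for illustration.

/* Assumed layout of the vector: [relation:2][key selector:6][condition value:16]. */
#include <stdint.h>
#include <stdbool.h>

enum relation { REL_EQ = 0, REL_NE = 1, REL_LT = 2, REL_GT = 3 };

static bool evaluate(uint32_t test_vector, const uint16_t keys[64])
{
    enum relation rel   = (test_vector >> 22) & 0x3;   /* first plurality: relation */
    unsigned key_sel    = (test_vector >> 16) & 0x3F;  /* second plurality: key selector */
    uint16_t cond_value = test_vector & 0xFFFF;        /* third plurality: condition value */
    uint16_t key        = keys[key_sel];               /* key selected from the plurality */

    switch (rel) {
    case REL_EQ: return key == cond_value;
    case REL_NE: return key != cond_value;
    case REL_LT: return key <  cond_value;
    case REL_GT: return key >  cond_value;
    }
    return false;
}

int main(void)
{
    uint16_t keys[64] = { 0 };
    keys[5] = 443;                                     /* key 5: e.g., a destination port */
    uint32_t tv = ((uint32_t)REL_EQ << 22) | (5u << 16) | 443u;
    return evaluate(tv, keys) ? 0 : 1;                 /* result defined by the determination */
}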
Abstract:
In one embodiment, a method includes producing a first policy vector based on a first portion of a data packet received at a multi-stage switch. The method also includes producing a second policy vector based on a second portion of the data packet different from the first portion of the data packet. A third policy vector is produced based on a combination of at least the first policy vector and at least the second policy vector. The third policy vector includes a combination of bit values configured to trigger an element at the multi-stage switch to process the data packet.
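The combination step might, for example, be a bitwise AND of the per-portion vectors, as in the sketch below; the classifiers and the choice of AND are assumptions of the sketch, not details of the embodiment.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t policy_vector_t;            /* one bit per candidate policy */

/* Placeholder classifiers for two different portions of the packet. */
static policy_vector_t match_src_addr(uint32_t src_addr) { (void)src_addr; return 0x00000000000000F0ULL; }
static policy_vector_t match_dst_port(uint16_t dst_port) { (void)dst_port; return 0x000000000000003CULL; }

int main(void)
{
    policy_vector_t first  = match_src_addr(0x0A000001u);  /* 10.0.0.1 */
    policy_vector_t second = match_dst_port(443);
    policy_vector_t third  = first & second;               /* policies satisfied by both portions */
    printf("combined policy vector: 0x%016llx\n", (unsigned long long)third);
    return 0;
}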
Abstract:
A network content service apparatus includes a set of compute elements adapted to perform a set of network services; and a switching fabric coupling compute elements in said set of compute elements. The set of network services includes firewall protection, Network Address Translation, Internet Protocol forwarding, bandwidth management, Secure Sockets Layer operations, Web caching, Web switching, and virtual private networking. Code operable on the compute elements enables the network services, and the compute elements are provided on blades which further include at least one input/output port.
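A schematic data model for such an apparatus is sketched below; the structure and field names, the blade count, and the bitmask encoding of services are invented for the sketch.

#include <stdint.h>

/* Network services from the set named above, encoded as flags (sketch only). */
enum service {
    SVC_FIREWALL   = 1 << 0,
    SVC_NAT        = 1 << 1,
    SVC_IP_FORWARD = 1 << 2,
    SVC_BANDWIDTH  = 1 << 3,
    SVC_SSL        = 1 << 4,
    SVC_WEB_CACHE  = 1 << 5,
    SVC_WEB_SWITCH = 1 << 6,
    SVC_VPN        = 1 << 7
};

struct compute_blade {
    int      blade_id;
    int      io_port_count;          /* at least one input/output port per blade */
    uint32_t services;               /* bitwise OR of enum service flags */
};

struct content_service_apparatus {
    struct compute_blade blades[16]; /* compute elements coupled by the switching fabric */
    int                  blade_count;
};

int main(void)
{
    struct content_service_apparatus box = { .blade_count = 1 };
    box.blades[0] = (struct compute_blade){ .blade_id = 0, .io_port_count = 2,
                                            .services = SVC_FIREWALL | SVC_NAT | SVC_VPN };
    return box.blades[0].services != 0 ? 0 : 1;
}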
Abstract:
A system and method allocate memory in an off-chip DRAM for a network processor system. Upon initiation, an on-chip DRAM controller module creates a software structure that allocates blocks of memory locations in the DRAM as packet memory blocks. As a CPU, input/output module, and intrusion detection circuit read and write packets from the DRAM across a common bus, the DRAM controller module facilitates the rapid flow of packets in and out of the DRAM. FreeLists of packet buffer blocks are maintained by both the DRAM controller and the CPU for quick access in directing the flow of packets to available packet buffer blocks.
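The free-list bookkeeping can be pictured with the software sketch below; the block size, block count, and the single shared list are assumptions for illustration (the abstract describes lists kept by both the DRAM controller and the CPU).

#include <stdint.h>
#include <stddef.h>

#define PKT_BLOCK_SIZE  2048
#define PKT_BLOCK_COUNT 1024

static uint8_t  dram_pool[PKT_BLOCK_COUNT][PKT_BLOCK_SIZE]; /* stands in for off-chip DRAM */
static uint16_t free_list[PKT_BLOCK_COUNT];                 /* indices of free packet buffer blocks */
static size_t   free_top;

static void freelist_init(void)
{
    for (size_t i = 0; i < PKT_BLOCK_COUNT; i++)
        free_list[i] = (uint16_t)i;
    free_top = PKT_BLOCK_COUNT;
}

/* Hand out a packet buffer block, or NULL when none are free. */
static uint8_t *pkt_block_alloc(void)
{
    return free_top ? dram_pool[free_list[--free_top]] : NULL;
}

/* Return a block's index to the free list. */
static void pkt_block_free(uint8_t *block)
{
    free_list[free_top++] = (uint16_t)((block - &dram_pool[0][0]) / PKT_BLOCK_SIZE);
}

int main(void)
{
    freelist_init();
    uint8_t *pkt = pkt_block_alloc();   /* e.g., a buffer for an incoming packet */
    pkt_block_free(pkt);
    return 0;
}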
Abstract:
A compute engine's central processing unit is coupled to a coprocessor that includes application engines. The central processing unit initializes the coprocessor to perform an application, and the coprocessor initializes an application engine to perform the application. The application engine responds by carrying out the application. In performing some applications, the application engine accesses cache memory—obtaining a physical memory address that corresponds to a virtual address and providing the physical address to the cache memory. In some instances, the coprocessor employs multiple application engines to carry out an application. In one implementation, the application engines facilitate different network services, including but not limited to: 1) virtual private networking; 2) secure sockets layer processing; 3) web caching; 4) hypertext mark-up language compression; 5) virus checking; 6) firewall support; and 7) web switching.
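The translation step the application engine performs before handing an address to the cache can be sketched as follows; the single-level page table, the page size, and the stub cache are assumptions made for the example.

#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  256

/* Hypothetical page table: virtual page number -> physical frame number. */
static uint32_t page_table[NUM_PAGES];

/* Obtain the physical address that corresponds to a virtual address. */
static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & (PAGE_SIZE - 1);
    return (page_table[vpn % NUM_PAGES] << PAGE_SHIFT) | offset;
}

/* Stub standing in for the cache; the engine provides it a physical address. */
static uint32_t cache_read(uint32_t paddr) { return paddr; }

static uint32_t engine_load(uint32_t vaddr)
{
    return cache_read(translate(vaddr));
}

int main(void)
{
    page_table[3] = 42;                 /* map virtual page 3 to physical frame 42 */
    return engine_load((3u << PAGE_SHIFT) | 0x10) == ((42u << PAGE_SHIFT) | 0x10) ? 0 : 1;
}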
Abstract:
A system includes a plurality of processing clusters and a snoop controller adapted to service memory requests. The snoop controller and each processing cluster are coupled to a snoop ring. A first processing cluster forwards a memory request to the snoop controller for access to a memory location. In response to the memory request, the snoop controller places a snoop request on the snoop ring—calling for a change in ownership of the requested memory location. A second processing cluster receives the snoop request on the snoop ring. The second processing cluster generates a response to the snoop request. If the second processing cluster owns the requested memory location, the second processing cluster modifies ownership status of the requested memory location.
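A rough software model of the snoop exchange is given below; the structure names and the single owned line per cluster are simplifying assumptions, not the hardware's design.

#include <stdint.h>
#include <stdbool.h>

struct snoop_request {
    uint64_t address;       /* memory location whose ownership should change */
    int      requester_id;  /* cluster that forwarded the request to the snoop controller */
};

struct cluster {
    int      id;
    uint64_t owned_line;    /* a single owned line, for simplicity */
    bool     owns_line;
};

/* Each cluster on the ring inspects the request; if it owns the line, it
 * gives up ownership and reports that in its response. */
static bool handle_snoop(struct cluster *c, const struct snoop_request *req)
{
    if (c->owns_line && c->owned_line == req->address) {
        c->owns_line = false;          /* ownership passes to the requester */
        return true;                   /* response: this cluster owned the line */
    }
    return false;                      /* response: not owned here */
}

int main(void)
{
    struct cluster second = { .id = 2, .owned_line = 0x1000, .owns_line = true };
    struct snoop_request req = { .address = 0x1000, .requester_id = 1 };
    return handle_snoop(&second, &req) ? 0 : 1;   /* the second cluster relinquishes the line */
}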
Abstract:
A method and apparatus for dynamically reconfiguring a processor involve placing the processor in a first configuration having a first number (m) of strands while the coded instructions comprise instructions from a number (m) of threads. The instructions in each of the m threads are executed on one of the m strands using execution resources, at least some of which are shared among the m strands. While the coded instructions comprise instructions from a number (n) of threads, the processor is placed in a second configuration having a second number (n) of strands. The instructions are executed in each of the n strands using execution resources, at least some of which are shared among the n strands.
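A toy illustration of matching strand count to thread count follows; the evenly divided resource pool is an assumption of the sketch rather than the processor's actual mechanism.

#include <stdio.h>

#define TOTAL_RESOURCES 64   /* e.g., shared scheduler or register-file entries */

struct processor_config {
    int strands;                 /* active hardware strands */
    int resources_per_strand;    /* shared pool divided among the strands */
};

/* Place the processor in a configuration with one strand per runnable thread. */
static struct processor_config reconfigure(int thread_count)
{
    struct processor_config cfg;
    cfg.strands = thread_count > 0 ? thread_count : 1;
    cfg.resources_per_strand = TOTAL_RESOURCES / cfg.strands;
    return cfg;
}

int main(void)
{
    struct processor_config m_cfg = reconfigure(2);   /* m threads -> m strands */
    struct processor_config n_cfg = reconfigure(4);   /* n threads -> n strands */
    printf("m: %d strands x %d resources, n: %d strands x %d resources\n",
           m_cfg.strands, m_cfg.resources_per_strand,
           n_cfg.strands, n_cfg.resources_per_strand);
    return 0;
}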