Abstract:
A content addressable memory (CAM) includes a linked list structure for a pending queue that orders memory commands to maximize memory channel bandwidth by minimizing read/write stalls caused by read-modify-write commands.
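A minimal software sketch of that idea, assuming the pending queue is a linked list in which a read-modify-write (RMW) stalls only while awaiting its read data, so younger reads and writes to other addresses may bypass it; the types, names, and the dequeue_ready ordering rule are illustrative, not taken from the patent:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef enum { CMD_READ, CMD_WRITE, CMD_RMW } cmd_type;

    typedef struct cmd_node {
        cmd_type type;
        uint64_t addr;
        bool stalled;            /* true for an RMW still awaiting its read data */
        struct cmd_node *next;
    } cmd_node;

    typedef struct { cmd_node *head, *tail; } pending_queue;

    void enqueue(pending_queue *q, cmd_type type, uint64_t addr, bool stalled) {
        cmd_node *n = malloc(sizeof *n);
        n->type = type; n->addr = addr; n->stalled = stalled; n->next = NULL;
        if (q->tail) q->tail->next = n; else q->head = n;
        q->tail = n;
    }

    /* Unlink and return the oldest command that is not itself stalled and
     * does not hit the same address as an older stalled RMW; independent
     * reads and writes thus bypass the RMW instead of stalling behind it. */
    cmd_node *dequeue_ready(pending_queue *q) {
        cmd_node *prev = NULL;
        for (cmd_node *cur = q->head; cur; prev = cur, cur = cur->next) {
            if (cur->stalled)
                continue;                  /* RMW still waiting on its read */
            bool blocked = false;
            for (cmd_node *older = q->head; older != cur; older = older->next)
                if (older->type == CMD_RMW && older->stalled &&
                    older->addr == cur->addr)
                    blocked = true;        /* must stay ordered behind the RMW */
            if (!blocked) {
                if (prev) prev->next = cur->next; else q->head = cur->next;
                if (q->tail == cur) q->tail = prev;
                return cur;                /* caller issues and frees it */
            }
        }
        return NULL;                       /* every pending command is blocked */
    }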
Abstract:
A wall mounted rail system for providing additional air flow and electrical outlets to a room. The system includes a first rail removably securable to a second rail, wherein each rail includes a front surface and a rear surface. The rear surface can secure to a wall, and a decorative molding, such as chair rail molding, is secured to the front surface. A plurality of ports are disposed on a bottom face of the front surface and can supply power to a device connected to each port. Further, the ports are electrically connected to one another. Tubing extends through an interior volume of each rail, and a plurality of apertures are disposed on a top face of the front surface thereof. Each aperture is in fluid communication with the tubing to allow air to flow from the tubing, through the apertures, and into the room.
Abstract:
Technologies for identifying a cache line of a network packet for eviction from an on-processor cache of a network device communicatively coupled to a network controller. The network device is configured to determine whether a cache line of the cache corresponding to the network packet is to be evicted from the cache, based on a determination that the network packet will not be needed after it has been processed, and to provide an indication that the cache line is to be evicted from the cache based on an eviction policy received from the network controller.
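A small software model of that decision, with hypothetical types standing in for the controller-supplied policy and the per-packet state; evict_hint is a placeholder for whatever cache-line demote/flush mechanism the hardware exposes:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { bool evict_if_done; } eviction_policy; /* from the controller */
    typedef struct { uint8_t *data; bool needed_later; } packet;

    /* Placeholder for a platform-specific cache-line eviction hint. */
    static void evict_hint(const void *line) { (void)line; }

    void process_packet(packet *pkt, const eviction_policy *pol) {
        /* ...packet processing happens here... */
        if (pol->evict_if_done && !pkt->needed_later)
            evict_hint(pkt->data);   /* line can be reclaimed early */
    }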
Abstract:
Apparatuses and methods to perform pattern matching are presented. In one embodiment, an apparatus comprises a memory to store a first pattern table comprising information indicative of whether a byte of input data matches a pattern and whether to ignore other matches of the pattern that occur in the remaining bytes of the input data. The apparatus further comprises one-byte match logic, coupled to the memory, to determine, based on the information in the first pattern table, a one-byte match event with respect to the input data. The apparatus further comprises a control unit to filter the other matches of the pattern based on the information of the first pattern table.
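A minimal sketch of the one-byte match logic, assuming the pattern table is indexed by byte value; the field names (match, ignore_rest) and the output format are illustrative:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        bool match;        /* does this byte value match the pattern? */
        bool ignore_rest;  /* suppress matches in the remaining bytes */
    } pattern_entry;

    /* Scan the input and record match offsets; once a matching entry with
     * ignore_rest set is seen, matches in the remaining bytes are filtered
     * out, mirroring the control unit described above. Returns the count. */
    size_t match_one_byte(const pattern_entry table[256],
                          const uint8_t *input, size_t len,
                          size_t *offsets, size_t max_out) {
        size_t n = 0;
        for (size_t i = 0; i < len && n < max_out; i++) {
            const pattern_entry *e = &table[input[i]];
            if (e->match) {
                offsets[n++] = i;      /* one-byte match event */
                if (e->ignore_rest)
                    break;             /* ignore the rest of the input */
            }
        }
        return n;
    }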
Abstract:
A method and apparatus for scheduling packets using a pre-sort scheduling array having one or more smoothing registers. The scheduling array includes a number of round buffers, each round buffer having an associated smoothing register. To schedule a packet for transmission, the packet's transmission round and relative position within that round are determined, and an identifier for the packet is placed at the appropriate position within the scheduling array. A bit of the associated smoothing register is set, the set bit corresponding to the entry receiving the packet identifier. During transmission, the set bits of the smoothing register associated with a current round buffer are read to identify packets that are to be dequeued.
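A minimal sketch, assuming 64-slot round buffers so each smoothing register fits in one 64-bit word; the sizes and names are illustrative, and the dequeue uses the GCC/Clang builtin __builtin_ctzll to find the lowest set bit:

    #include <stdint.h>

    #define NUM_ROUNDS      8
    #define SLOTS_PER_ROUND 64

    typedef struct {
        uint32_t entries[NUM_ROUNDS][SLOTS_PER_ROUND]; /* packet identifiers */
        uint64_t smoothing[NUM_ROUNDS];                /* one bit per entry  */
        unsigned current_round;
    } sched_array;

    /* Place a packet identifier at its computed round and slot, setting
     * the corresponding bit of that round's smoothing register. */
    void schedule_packet(sched_array *s, uint32_t pkt_id,
                         unsigned round, unsigned slot) {
        s->entries[round % NUM_ROUNDS][slot] = pkt_id;
        s->smoothing[round % NUM_ROUNDS] |= 1ull << slot;
    }

    /* Dequeue the next packet of the current round by reading the set
     * bits of its smoothing register; returns -1 if the round is empty. */
    int64_t dequeue_next(sched_array *s) {
        uint64_t bits = s->smoothing[s->current_round];
        if (bits == 0)
            return -1;
        unsigned slot = (unsigned)__builtin_ctzll(bits);  /* lowest set bit */
        s->smoothing[s->current_round] &= bits - 1;       /* clear that bit */
        return s->entries[s->current_round][slot];
    }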
Abstract:
Method and apparatus to enable slower memory, such as dynamic random access memory (DRAM)-based memory, to support low-latency access using vertical caching. Related function metadata used for packet-processing functions, including metering and flow statistics, is stored in an external DRAM-based store. In one embodiment, the DRAM comprises double data-rate (DDR) DRAM. A network processor architecture is disclosed including a DDR assist with data cache coupled to a DRAM controller. The architecture further includes multiple compute engines used to execute various packet-processing functions. One such function is a DDR assist function that is used to pre-fetch a set of function metadata for a current packet and store the function metadata in the data cache. Subsequently, one or more packet-processing functions may operate on the function metadata by accessing it from the cache. After the functions are completed, the function metadata are written back to the DRAM-based store. The scheme provides similar performance to SRAM-based schemes, but uses much cheaper DRAM-type memory.
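A small software model of the vertical-caching flow, with hypothetical structures standing in for the DDR assist, its data cache, and the DRAM-based store; for brevity the sketch omits writing back a line displaced on a miss, which a real design would handle:

    #include <stdint.h>

    typedef struct { uint64_t meter_tokens, pkt_count, byte_count; } flow_meta;

    #define NUM_FLOWS   1024
    #define CACHE_LINES 16

    static flow_meta dram_store[NUM_FLOWS];  /* stand-in for the DRAM-based store */

    typedef struct {
        uint32_t  tag[CACHE_LINES];
        flow_meta line[CACHE_LINES];
    } meta_cache;

    /* DDR-assist step: pre-fetch the flow's function metadata into the
     * data cache before the packet-processing functions run. */
    flow_meta *prefetch_meta(meta_cache *c, uint32_t flow_id) {
        unsigned idx = flow_id % CACHE_LINES;
        if (c->tag[idx] != flow_id) {         /* miss: fill from DRAM */
            c->line[idx] = dram_store[flow_id];
            c->tag[idx]  = flow_id;
        }
        return &c->line[idx];
    }

    /* Packet-processing functions (here, flow statistics) operate on the
     * cached copy... */
    void account_packet(flow_meta *m, uint32_t pkt_len) {
        m->pkt_count  += 1;
        m->byte_count += pkt_len;
    }

    /* ...and the metadata is written back to DRAM once they complete. */
    void writeback_meta(meta_cache *c, uint32_t flow_id) {
        unsigned idx = flow_id % CACHE_LINES;
        if (c->tag[idx] == flow_id)
            dram_store[flow_id] = c->line[idx];
    }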
Abstract:
In general, in one aspect, the disclosure describes a method that includes, at a first packet processing thread executing at a first core, performing a memory read of data shared between packet processing threads, including the first thread. The method also includes, at the first packet processing thread, determining whether the data returned by the memory read has been changed by a packet processing thread operating on another core before the first packet processing thread performs an exclusive operation on the shared data.
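A minimal sketch of that check using a C11 compare-and-swap as a stand-in for whatever mechanism the disclosure's hardware provides; the exclusive update proceeds only if the shared data is unchanged since the thread's memory read:

    #include <stdatomic.h>
    #include <stdint.h>

    /* Shared packet-processing state visible to threads on all cores. */
    static _Atomic uint64_t shared_data;

    /* Read the shared data, then perform the exclusive update only if no
     * thread on another core changed it in between; otherwise take the
     * changed value and retry. Returns the value this thread installed. */
    uint64_t update_shared(uint64_t delta) {
        uint64_t observed = atomic_load(&shared_data);  /* the memory read */
        for (;;) {
            uint64_t desired = observed + delta;
            /* Fails, reloading `observed`, if another core changed the data. */
            if (atomic_compare_exchange_weak(&shared_data, &observed, desired))
                return desired;
        }
    }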
Abstract:
Systems and methods for dynamically changing ring size in network processing are disclosed. In one embodiment, a method generally includes requesting a free memory block from a free block pool manager by a ring manager for a corresponding ring when a first memory block is filled, receiving an address of a free memory block from the free block pool manager in response to the request from the ring manager, storing the address of the free memory block in the first memory block by the ring manager, the storing linking the free memory block to the first memory block as the next linked memory block, and repeating the requesting, receiving, and storing for each additional linked memory block. An external service thread may be assigned to fulfill block fill-up requests from the free block pool manager.
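A minimal sketch of the linked-block growth, with the free block pool manager reduced to a plain allocator and all names illustrative:

    #include <stdint.h>
    #include <stdlib.h>

    #define BLOCK_ENTRIES 64

    typedef struct ring_block {
        uint32_t entries[BLOCK_ENTRIES];
        unsigned count;
        struct ring_block *next;  /* address of the next linked block */
    } ring_block;

    /* Free block pool manager: hands out an empty block on request. */
    ring_block *alloc_free_block(void) {
        return calloc(1, sizeof(ring_block));
    }

    typedef struct { ring_block *head, *tail; } ring;

    /* Ring manager: append an entry, requesting a free block and storing
     * its address in the filled block when the current one is full. */
    int ring_put(ring *r, uint32_t item) {
        if (!r->tail || r->tail->count == BLOCK_ENTRIES) {
            ring_block *blk = alloc_free_block();  /* request from the pool */
            if (!blk) return -1;
            if (r->tail) r->tail->next = blk;      /* link to the filled block */
            else r->head = blk;
            r->tail = blk;
        }
        r->tail->entries[r->tail->count++] = item;
        return 0;
    }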