Abstract:
In some embodiments, an apparatus comprises a core network node and a control module within an enterprise network architecture. The core network node is configured to be operatively coupled to a set of wired network nodes and a set of wireless network nodes. The core network node is configured to receive a first tunneled packet associated with a first session from a wired network node from the set of wired network nodes. The core network node is further configured to receive a second tunneled packet associated with a second session from a wireless network node from the set of wireless network nodes through intervening wired network nodes from the set of wired network nodes. The control module is operatively coupled to the core network node. The control module is configured to manage the first session and the second session.
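A minimal sketch of the described arrangement, with hypothetical class and field names (TunneledPacket, ControlModule, and CoreNetworkNode are illustrative, not from the source): tunneled packets arriving from wired and wireless access converge at the core node, which hands both sessions to a single control module.

```python
from dataclasses import dataclass, field

@dataclass
class TunneledPacket:
    session_id: str       # session the packet belongs to
    origin: str           # "wired" or "wireless" access
    payload: bytes

@dataclass
class ControlModule:
    """Manages sessions regardless of which access type they entered on."""
    sessions: dict = field(default_factory=dict)

    def manage(self, pkt: TunneledPacket) -> None:
        state = self.sessions.setdefault(pkt.session_id,
                                         {"origin": pkt.origin, "packets": 0})
        state["packets"] += 1

class CoreNetworkNode:
    """Receives tunneled packets from wired and wireless nodes alike."""
    def __init__(self, control: ControlModule):
        self.control = control

    def receive_tunneled(self, pkt: TunneledPacket) -> None:
        # Wired- and wireless-originated sessions are handed to the same
        # control module for management.
        self.control.manage(pkt)

core = CoreNetworkNode(ControlModule())
core.receive_tunneled(TunneledPacket("session-1", "wired", b"..."))
core.receive_tunneled(TunneledPacket("session-2", "wireless", b"..."))
```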
Abstract:
In some embodiments, an apparatus comprises a core network node configured to be operatively coupled to a set of network nodes. The core network node is configured to define configuration information for a network node from the set of network nodes based on a template, where the configuration information excludes virtual local area network (VLAN) information or IP subnet information. The core network node is further configured to send the configuration information to the network node.
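A minimal sketch, using hypothetical template field names, of deriving a node's configuration from a shared template while leaving out the VLAN and IP-subnet fields:

```python
# Hypothetical template; the field names are illustrative, not from the source.
TEMPLATE = {
    "ntp_server": "10.0.0.1",
    "syslog_server": "10.0.0.2",
    "vlan_id": 100,               # excluded below
    "ip_subnet": "10.1.0.0/16",   # excluded below
}

EXCLUDED_KEYS = {"vlan_id", "ip_subnet"}

def define_configuration(template: dict, node_name: str) -> dict:
    """Build one node's configuration from the template, omitting VLAN/subnet info."""
    config = {k: v for k, v in template.items() if k not in EXCLUDED_KEYS}
    config["hostname"] = node_name
    return config

# The core node would then send this configuration to the network node.
print(define_configuration(TEMPLATE, "access-node-1"))
```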
Abstract:
A system that processes single stream multicast data includes multiple queues, a dequeue engine, and/or a queue control engine. The queues temporarily store data. At least one of the queues stores single stream multicast data. A multicast count is associated with the single stream multicast data and corresponds to a number of destinations to which the single stream multicast data is to be sent. The dequeue engine dequeues data from the queues. If the data corresponds to the single stream multicast data, the dequeue engine examines the multicast count associated with the single stream multicast data and dequeues the single stream multicast data based on the multicast count. The queue control engine examines one of the queues to determine whether to drop data from the queue and marks the data based on a result of the determination.
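A minimal sketch of count-based dequeuing for single-stream multicast: the head entry carries a multicast count equal to the number of destinations still to be served, and the entry is removed from the queue only when that count reaches zero. Class and method names are illustrative.

```python
from collections import deque

class MulticastQueue:
    """Queue whose entries carry a multicast count; an entry is freed only
    after its last copy has been dequeued."""

    def __init__(self):
        self._q = deque()            # each item: [data, remaining_destinations]

    def enqueue(self, data, destinations: int) -> None:
        self._q.append([data, destinations])

    def dequeue(self):
        if not self._q:
            return None
        entry = self._q[0]
        entry[1] -= 1                # one destination served
        if entry[1] == 0:
            self._q.popleft()        # last copy sent; release the slot
        return entry[0]

q = MulticastQueue()
q.enqueue(b"frame", destinations=3)
assert [q.dequeue() for _ in range(3)] == [b"frame"] * 3
assert q.dequeue() is None
```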
Abstract:
A system provides congestion control and includes multiple queues that temporarily store data and a drop engine. The system associates a value with each of the queues, where each of the values relates to an amount of memory associated with the queue. The drop engine compares the value associated with a particular one of the queues to one or more programmable thresholds and selectively performs explicit congestion notification or packet dropping on data in the particular queue based on a result of the comparison.
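A minimal sketch of the threshold comparison using one possible two-threshold arrangement (an assumption; the abstract only says one or more programmable thresholds): below the lower threshold data is forwarded untouched, between the thresholds it is ECN-marked, and above the upper threshold it is dropped.

```python
def congestion_action(memory_used: int, ecn_threshold: int, drop_threshold: int) -> str:
    """Return the action for data on a queue given the queue's memory footprint."""
    if memory_used >= drop_threshold:
        return "drop"                # severe congestion: discard the packet
    if memory_used >= ecn_threshold:
        return "ecn_mark"            # moderate congestion: signal it explicitly
    return "forward"                 # no congestion action

assert congestion_action(10,  ecn_threshold=64, drop_threshold=128) == "forward"
assert congestion_action(80,  ecn_threshold=64, drop_threshold=128) == "ecn_mark"
assert congestion_action(200, ecn_threshold=64, drop_threshold=128) == "drop"
```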
Abstract:
A system determines bandwidth use by queues in a network device. To do this, the system determines an instantaneous amount of bandwidth used by each of the queues and an average amount of bandwidth used by each of the queues. The system then identifies bandwidth use by each of the queues based on the instantaneous bandwidth used and the average bandwidth used by each of the queues.
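A minimal sketch of tracking both measures per queue; the exponentially weighted average and the way the two measures are combined are assumptions for illustration, not taken from the source.

```python
class QueueBandwidthMonitor:
    """Tracks instantaneous and average bandwidth for a single queue."""

    def __init__(self, alpha: float = 0.25):
        self.alpha = alpha            # weight given to the newest sample
        self.instantaneous = 0.0      # bytes/sec over the latest interval
        self.average = 0.0            # smoothed bytes/sec

    def sample(self, bytes_sent: int, interval_s: float) -> None:
        self.instantaneous = bytes_sent / interval_s
        self.average = (1 - self.alpha) * self.average + self.alpha * self.instantaneous

    def bandwidth_use(self) -> float:
        # Combine both measures: react to bursts while staying anchored to the trend.
        return max(self.instantaneous, self.average)

mon = QueueBandwidthMonitor()
for burst in (1000, 4000, 500):
    mon.sample(burst, interval_s=1.0)
print(mon.instantaneous, mon.average, mon.bandwidth_use())
```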
Abstract:
A host includes a bus cache, an L1 cache, and an enhanced snoop logic circuit to increase the bandwidth of a peripheral bus during a memory access transaction. When a device connected to the peripheral bus starts a memory read transaction, the host converts the virtual address of the memory read transaction to a physical address. The snoop logic circuit checks whether the physical address is in the bus cache and, if so, whether the data in the bus cache corresponding to the address is valid. If there is a bus cache hit, the corresponding data is accessed from the bus cache and output onto the peripheral bus. However, if the snoop logic circuit does not find the physical address in the bus cache or finds that the data is invalid, the snoop logic circuit causes (1) the peripheral bus interface unit to perform a retry operation on the peripheral bus and (2) the cache controller to process a memory request to retrieve the requested data from the L1 cache, the L2 cache (if any), or main memory and store the requested data into the bus cache. When the device then retries the memory read request, the bus cache holds the requested data, so the data can be provided to the peripheral bus immediately. Thus, in memory read transactions longer than a cache line, the data is provided on the peripheral bus in a pseudo-packet-switched manner.
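A minimal sketch of the hit/miss decision: on a bus-cache hit the data is returned for the peripheral bus, while on a miss or invalid entry the read is answered with a retry and the line is fetched into the bus cache so the retried read can be served immediately. Class and method names are illustrative.

```python
class BusCacheSnoop:
    """Decides whether a peripheral-bus read can be served from the bus cache."""

    def __init__(self):
        self.bus_cache = {}    # physical address -> (data, valid flag)
        self.memory = {}       # stand-in for L1/L2 cache or main memory

    def read(self, phys_addr):
        entry = self.bus_cache.get(phys_addr)
        if entry and entry[1]:
            return ("data", entry[0])          # hit: drive data onto the bus
        # Miss or invalid entry: answer with a retry and fill the bus cache
        # from the memory hierarchy in the meantime.
        data = self.memory.get(phys_addr, b"\x00")
        self.bus_cache[phys_addr] = (data, True)
        return ("retry", None)

snoop = BusCacheSnoop()
snoop.memory[0x1000] = b"\xab"
assert snoop.read(0x1000) == ("retry", None)    # first attempt: miss, retry issued
assert snoop.read(0x1000) == ("data", b"\xab")  # retried read: served from bus cache
```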
Abstract:
In some embodiments, an apparatus includes a network node operatively coupled within a network. The network node is configured to send a first authentication message upon boot-up, and receive, in response to the first authentication message, a second authentication message configured to be used to authenticate the network node. The network node is configured to send a first discovery message, and receive, based on the first discovery message, a second discovery message configured to be used by the network node to identify an address of the network node and an address of a core network node within the network. The network node is configured to set up a control-plane tunnel to the core network node based on the address of the network node and the address of the core network node, and to receive configuration information from the core network node through the control-plane tunnel.
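A minimal sketch of that boot-time sequence: authenticate, discover addresses, bring up the control-plane tunnel, then receive configuration. The message shapes and the send/receive interface are assumptions for illustration.

```python
def node_boot_sequence(send, receive):
    """Drive the four boot-time steps using caller-supplied send/receive
    callables for the underlying transport (assumed interface)."""
    # 1. Authentication exchange.
    send({"type": "auth_request"})
    auth_reply = receive()                      # second authentication message

    # 2. Discovery: learn this node's address and the core node's address.
    send({"type": "discovery_request"})
    discovery = receive()
    node_addr, core_addr = discovery["node_addr"], discovery["core_addr"]

    # 3. Control-plane tunnel to the core network node.
    send({"type": "tunnel_setup", "src": node_addr, "dst": core_addr})

    # 4. Configuration information arrives through the tunnel.
    config = receive()
    return auth_reply, config

# Scripted transport for illustration only.
replies = iter([{"type": "auth_reply"},
                {"node_addr": "10.0.0.5", "core_addr": "10.0.0.1"},
                {"type": "config", "body": {}}])
auth, cfg = node_boot_sequence(send=lambda msg: None, receive=lambda: next(replies))
```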
Abstract:
A system selectively drops data from a queue. The system includes queues that temporarily store data, a dequeue engine that dequeues data from the queues, and a drop engine that operates independently from the dequeue engine. The drop engine selects one of the queues to examine, determines whether to drop data from a head of the examined queue, and marks the data based on a result of the determination.
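A minimal sketch of a drop engine that walks the queues independently of any dequeue engine, decides whether to drop the head of the examined queue, and marks the head with the result. The round-robin queue selection and the probabilistic drop criterion here are assumptions.

```python
import random
from collections import deque

class DropEngine:
    """Examines queues round-robin and marks head entries for dropping."""

    def __init__(self, queues, drop_probability: float = 0.1):
        self.queues = queues                  # list of deques holding dict entries
        self.drop_probability = drop_probability
        self._next = 0

    def examine_next(self) -> None:
        if not self.queues:
            return
        q = self.queues[self._next]
        self._next = (self._next + 1) % len(self.queues)
        if q:
            head = q[0]
            # Mark the head with the drop decision; actual dequeuing is done
            # elsewhere, independently of this engine.
            head["drop"] = random.random() < self.drop_probability

queues = [deque([{"data": b"a"}]), deque([{"data": b"b"}])]
engine = DropEngine(queues)
engine.examine_next()
print(queues[0][0])
```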