Abstract:
A communication protocol in a layer two (L2) network switch comprises, in response to a service request by a source node, registering the source node for packet communication service. The protocol further comprises forwarding one or more packets from the registered source node to one or more destination nodes. The protocol further comprises receiving packets from one or more destination nodes and forwarding each received packet to a corresponding registered node.
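The register-then-forward flow described above can be pictured with a minimal sketch. The class and method names below (`L2Switch`, `register`, `forward_from_source`) are illustrative assumptions and not part of the claimed protocol.

```python
# Minimal sketch of the register/forward flow described in the abstract.
# Class and method names are illustrative assumptions only.

class L2Switch:
    def __init__(self):
        self.registered = set()   # nodes registered for packet communication service
        self.inbox = {}           # registered node -> packets received for it

    def register(self, source_node):
        """Register a source node in response to its service request."""
        self.registered.add(source_node)
        self.inbox.setdefault(source_node, [])

    def forward_from_source(self, source_node, packet, destinations):
        """Forward a packet from a registered source node to destination nodes."""
        if source_node not in self.registered:
            raise PermissionError(f"{source_node} is not registered")
        return {dest: packet for dest in destinations}

    def receive_from_destination(self, packet, registered_node):
        """Forward a packet received from a destination node to its registered node."""
        if registered_node in self.registered:
            self.inbox[registered_node].append(packet)


# Example usage
switch = L2Switch()
switch.register("node-A")
switch.forward_from_source("node-A", b"hello", ["node-B", "node-C"])
switch.receive_from_destination(b"reply", "node-A")
```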
Abstract:
A switching network has a plurality of switches including at least a switch and a managing master switch. At the managing master switch, a first capability vector (CV) is received from the switch. The managing master switch determines whether the first CV is compatible with at least a second CV in a network membership data structure that records CVs of multiple switches in the switching network. In response to detecting an incompatibility, the managing master switch initiates an image update to an image of the switch. In response to a failure of the image update at the switch, the switch boots utilizing a mini-DC module that reestablishes communication between the switch and the managing master switch and retries the image update.
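One way to picture the master-switch flow is the sketch below. The compatibility rule (exact match), the `update_image` and `mini_dc_boot` hooks, and the retry bound are assumptions used only to make the example concrete; they are not taken from the abstract.

```python
# Illustrative sketch of the CV check / image update / mini-DC retry flow.
# The compatibility rule and the retry bound are assumptions.

def compatible(cv, membership):
    """Assume CVs are compatible when the received CV matches every recorded CV."""
    return all(cv == recorded for recorded in membership.values())

def manage_member_switch(switch_id, received_cv, membership,
                         update_image, mini_dc_boot, max_retries=3):
    if compatible(received_cv, membership):
        membership[switch_id] = received_cv
        return "joined"
    # Incompatibility detected: initiate an image update on the member switch.
    for _ in range(max_retries):
        if update_image(switch_id):
            membership[switch_id] = received_cv
            return "updated"
        # Update failed: the member boots its mini-DC module to restore
        # communication with the managing master switch, then the update is retried.
        mini_dc_boot(switch_id)
    return "update-failed"


# Example usage with stand-in hooks
membership = {"master": ("v7", "features-A")}
result = manage_member_switch(
    "switch-2", ("v6", "features-A"), membership,
    update_image=lambda sid: True,      # pretend the image update succeeds
    mini_dc_boot=lambda sid: None)
print(result)                           # -> "updated"
```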
Abstract:
A technique for executing normally interruptible threads of a process in a non-preemptive manner includes, in response to a first entry associated with a first message for a first thread reaching a head of a run queue, receiving, by the first thread, a first wake-up signal. In response to receiving the first wake-up signal, the first thread waits for a global lock. In response to the first thread receiving the global lock, the first thread retrieves the first message from an associated message queue and processes the retrieved first message. In response to completing the processing of the first message, the first thread transmits a second wake-up signal to a second thread whose associated entry is next in the run queue. Finally, following the transmitting of the second wake-up signal, the first thread releases the global lock.
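The hand-off described here resembles a ticket scheme built from per-thread wake-up signals and a single global lock. The sketch below uses Python's `threading` primitives; the run-queue and message-queue structures are an interpretation of the abstract, not the patented implementation.

```python
# Sketch of the wake-up / global-lock hand-off using Python threading primitives.
# The concrete data structures below are illustrative assumptions.
import threading
from collections import deque

global_lock = threading.Lock()
run_queue = deque()       # thread ids, in the order their messages arrived
wakeups = {}              # thread id -> Event used as its wake-up signal
message_queues = {}       # thread id -> deque of pending messages

def worker(tid):
    wakeups[tid].wait()                           # 1. wait for the wake-up signal
    with global_lock:                             # 2. wait for, then hold, the global lock
        msg = message_queues[tid].popleft()       # 3. retrieve the message...
        print(f"thread {tid} processed {msg!r}")  #    ...and process it
        run_queue.popleft()                       # 4. done: pop own entry, then wake
        if run_queue:                             #    the thread whose entry is next
            wakeups[run_queue[0]].set()
    # 5. the global lock is released on leaving the `with` block

threads = []
for tid, msg in [("T1", "msg-1"), ("T2", "msg-2"), ("T3", "msg-3")]:
    wakeups[tid] = threading.Event()
    message_queues[tid] = deque([msg])
    run_queue.append(tid)
    t = threading.Thread(target=worker, args=(tid,))
    threads.append(t)
    t.start()

wakeups[run_queue[0]].set()   # wake the thread whose entry is at the head of the run queue
for t in threads:
    t.join()
```

Because each thread only wakes its successor after finishing its own message and just before releasing the global lock, the threads run to completion one at a time even though they are normally interruptible.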
Abstract:
A distributed fabric system has distributed line card (DLC) chassis and scaled-out fabric coupler (SFC) chassis. Each DLC chassis includes a network processor and fabric ports. Each network processor of each DLC chassis includes a fabric interface in communication with the DLC fabric ports of that DLC chassis. Each SFC chassis includes a fabric element and fabric ports. A communication link connects each SFC fabric port to one DLC fabric port. Each communication link includes cell-carrying lanes. Each fabric element of each SFC chassis collects per-lane statistics for each SFC fabric port of that SFC chassis. Each SFC chassis includes program code that obtains the per-lane statistics collected by the fabric element of that SFC chassis. A network element includes program code that gathers the per-lane statistics collected by each fabric element of each SFC chassis and integrates the statistics into a topology of the entire distributed fabric system.
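A rough data-flow sketch of the statistics gathering follows. The dictionary layouts and the function names (`collect_sfc_stats`, `gather_topology_stats`) are assumptions about how per-lane counters might be merged into a topology view, not details from the abstract.

```python
# Illustrative sketch: gathering per-lane statistics from each SFC chassis and
# attaching them to the links of a distributed-fabric topology.
# All structure and function names here are assumptions for illustration.

def collect_sfc_stats(sfc):
    """Stand-in for the per-SFC program code that reads fabric-element counters."""
    return {
        port: {lane: {"cells": 0, "errors": 0} for lane in range(sfc["lanes_per_port"])}
        for port in range(sfc["fabric_ports"])
    }

def gather_topology_stats(sfcs, topology):
    """Stand-in for the network-element program code that integrates the
    per-lane statistics into the topology of the distributed fabric system."""
    for sfc_id, sfc in sfcs.items():
        stats = collect_sfc_stats(sfc)
        for (s_id, sfc_port, dlc_id, dlc_port), link in topology.items():
            if s_id == sfc_id:
                link["per_lane_stats"] = stats[sfc_port]
    return topology

# Example: one SFC fabric port wired to one DLC fabric port over a 12-lane link.
sfcs = {"sfc-0": {"fabric_ports": 1, "lanes_per_port": 12}}
topology = {("sfc-0", 0, "dlc-0", 0): {}}
print(gather_topology_stats(sfcs, topology))
```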
Abstract:
In a switching network, each of a plurality of lower tier entities is coupled to each of multiple master switches at an upper tier by a respective one of multiple links. At each of the multiple master switches, a plurality of virtual ports each corresponding to a respective one of a plurality of remote physical interfaces (RPIs) at the lower tier are implemented on each of a plurality of ports. Each of the plurality of lower tier entities implements a respective egress port mapping indicating which of its plurality of RPIs transmits egress data traffic through each of its multiple links to the multiple master switches. In response to failure of one of the multiple links coupling a particular lower tier entity to a particular master switch, the particular lower tier entity updates its egress port mapping to redirect egress data traffic to another of the multiple master switches without packet dropping.
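The failover behavior can be pictured as a remapping step performed on the lower tier entity. The hash-style spread of RPIs over links in the sketch below is an assumption used only to keep the example concrete; the abstract does not specify the mapping policy.

```python
# Sketch of a lower tier entity updating its egress port mapping when a link
# to one master switch fails. The round-robin spread of RPIs over surviving
# links is an illustrative assumption, not the claimed mapping policy.

def build_egress_map(rpis, uplinks):
    """Map each RPI to one uplink (link to a master switch) for egress traffic."""
    return {rpi: uplinks[i % len(uplinks)] for i, rpi in enumerate(rpis)}

def handle_link_failure(egress_map, failed_uplink, surviving_uplinks):
    """Redirect RPIs that egressed via the failed link to surviving links,
    leaving the rest of the mapping untouched so traffic is not dropped."""
    updated = dict(egress_map)
    for i, (rpi, uplink) in enumerate(sorted(egress_map.items())):
        if uplink == failed_uplink:
            updated[rpi] = surviving_uplinks[i % len(surviving_uplinks)]
    return updated

# Example: eight RPIs split across two master-switch uplinks, then one uplink fails.
rpis = [f"rpi-{n}" for n in range(8)]
egress_map = build_egress_map(rpis, ["to-master-1", "to-master-2"])
egress_map = handle_link_failure(egress_map, "to-master-1", ["to-master-2"])
print(egress_map)
```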
Abstract:
A switching network includes an upper tier and a lower tier including a plurality of lower tier entities. A master switch in the upper tier, which has a plurality of ports each coupled to a respective lower tier entity, implements on each of the ports a plurality of virtual ports each corresponding to a respective one of a plurality of remote physical interfaces (RPIs) at the lower tier entity coupled to that port. Data traffic communicated between the master switch and RPIs is queued within virtual ports that correspond to the RPIs on lower tier entities with which the data traffic is communicated. The master switch enforces priority-based flow control (PFC) on data traffic of a given virtual port by transmitting, to a lower tier entity on which a corresponding RPI resides, a PFC data frame specifying priorities for at least two different classes of data traffic communicated by the particular RPI.
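A compact sketch of the per-virtual-port enforcement follows. The virtual-port bookkeeping, the congestion threshold, and the simplified frame representation are illustrative assumptions; an actual PFC frame follows the IEEE 802.1Qbb encoding rather than the dictionary used here.

```python
# Sketch of a master switch issuing a PFC frame for a congested virtual port.
# The data structures below are assumptions made only for illustration.

def build_pfc_frame(priorities_to_pause, pause_quanta):
    """Represent a PFC frame that pauses the given traffic classes (0-7)."""
    return {
        "priority_enable_vector": [p in priorities_to_pause for p in range(8)],
        "pause_times": [pause_quanta if p in priorities_to_pause else 0
                        for p in range(8)],
    }

def enforce_pfc(virtual_port, lower_tier_entity, send_frame):
    """When a virtual port's queue is congested, send a PFC frame to the lower
    tier entity hosting the corresponding RPI, covering that RPI's priorities."""
    if virtual_port["queue_depth"] > virtual_port["threshold"]:
        frame = build_pfc_frame(virtual_port["priorities"], pause_quanta=0xFFFF)
        send_frame(lower_tier_entity, frame)

# Example: the virtual port for rpi-3 is congested on traffic classes 3 and 5.
vport = {"rpi": "rpi-3", "priorities": {3, 5}, "queue_depth": 900, "threshold": 512}
enforce_pfc(vport, "lower-tier-entity-1", lambda dst, frame: print(dst, frame))
```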