Abstract:
Embodiments of the present disclosure provide a network congestion control method. The method includes: sending, by a transmit end device, forward probe packets to a destination device at a preset initial rate, where some or all of the forward probe packets arrive at the destination device after being forwarded, at a limited rate, through a reserved bandwidth of an egress port of an intermediate device in the network; receiving return probe packets returned by the destination device, where the return probe packets correspond to the forward probe packets received by the destination device; and determining a data packet sending rate based on the received return probe packets, and sending data packets at the data packet sending rate.
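A minimal sketch of the rate-determination step, assuming (as an illustration only) that each return probe stands for one forward probe that fit through the reserved, rate-limited bandwidth, so the return-probe rate approximates the bandwidth available to the flow; the function names, window size, and packet sizes are hypothetical:

```python
# Minimal sketch of probe-based rate control; all names and constants are
# illustrative assumptions, not the patented implementation.

def returned_probe_rate(return_timestamps, window_s):
    """Rate (probes/s) at which return probes arrived within the last window."""
    if not return_timestamps:
        return 0.0
    latest = max(return_timestamps)
    recent = [t for t in return_timestamps if latest - t <= window_s]
    return len(recent) / window_s

def data_sending_rate(return_timestamps, probe_size_bytes, data_size_bytes,
                      window_s=0.1):
    """Derive a data-packet sending rate from the observed return-probe rate.

    Assumption: every return probe corresponds to one forward probe that
    passed through the reserved (rate-limited) bandwidth, so the return rate
    approximates the bandwidth currently available to this flow.
    """
    probes_per_s = returned_probe_rate(return_timestamps, window_s)
    available_bytes_per_s = probes_per_s * probe_size_bytes
    # Express the same byte budget in data packets per second.
    return available_bytes_per_s / data_size_bytes

if __name__ == "__main__":
    # 50 return probes observed over the last 100 ms -> 500 probes/s.
    ts = [i * 0.002 for i in range(50)]
    print(data_sending_rate(ts, probe_size_bytes=64, data_size_bytes=1500))
```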
Abstract:
The technology of this application relates to a communication method and apparatus. In the communication method, a data sending device obtains to-be-sent data of a locally installed first application, where a destination address of the to-be-sent data is an address of a data receiving device. The data sending device determines addresses of one or more network adapters of the data receiving device based on the address of the data receiving device, encapsulates the to-be-sent data based on the network adapter addresses to obtain encapsulated data, and sends the encapsulated data to the data receiving device through a plurality of links. A destination address of each link is one of the network adapter addresses of the data receiving device, and a source address of each link is one of the network adapter addresses of the data sending device.
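A toy sketch of the address lookup, encapsulation, and multi-link distribution described above; the ADAPTER_TABLE, LOCAL_ADAPTERS, and dictionary packet format are illustrative assumptions rather than the claimed implementation:

```python
# Illustrative sketch of spreading one application's data over several
# adapter-to-adapter links; the lookup table and packet format are assumed.

from itertools import cycle

# Hypothetical mapping from a receiving device's address to the addresses of
# its network adapters (e.g. learned via configuration or a control channel).
ADAPTER_TABLE = {
    "10.0.0.100": ["192.168.1.10", "192.168.2.10"],
}

LOCAL_ADAPTERS = ["192.168.1.1", "192.168.2.1"]

def encapsulate(payload: bytes, src: str, dst: str) -> dict:
    """Wrap the application payload with the per-link outer addresses."""
    return {"outer_src": src, "outer_dst": dst, "payload": payload}

def send_over_links(device_addr: str, chunks: list) -> list:
    """Distribute chunks of to-be-sent data across all available links."""
    remote_adapters = ADAPTER_TABLE[device_addr]
    links = cycle([(s, d) for s in LOCAL_ADAPTERS for d in remote_adapters])
    return [encapsulate(chunk, *next(links)) for chunk in chunks]

if __name__ == "__main__":
    for pkt in send_over_links("10.0.0.100", [b"part-1", b"part-2", b"part-3"]):
        print(pkt["outer_src"], "->", pkt["outer_dst"], pkt["payload"])
```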
Abstract:
Embodiments of the present disclosure provide a packet forwarding method and a physical host. The physical host includes a first virtual switch and at least two virtual machines. Each of the at least two virtual machines has a shared memory area that can also be accessed by the physical host, each shared memory area has a first memory pool, each first memory pool has at least one memory block, and each memory block in a first memory pool has an index field that identifies the virtual machine to which the memory block belongs. A first shared memory area corresponding to a first virtual machine in the at least two virtual machines is prohibited from being accessed by any other virtual machine in the at least two virtual machines.
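A toy model of the per-virtual-machine shared memory areas, first memory pools, and index fields; the class names and the convention-only access restriction are assumptions for illustration:

```python
# Toy model of the per-VM shared memory pools described above; class and field
# names are illustrative, not the actual data structures.

from dataclasses import dataclass, field

@dataclass
class MemoryBlock:
    owner_vm: str          # index field: which VM this block belongs to
    data: bytes = b""

@dataclass
class SharedMemoryArea:
    vm_id: str
    pool: list = field(default_factory=list)   # "first memory pool"

class VirtualSwitch:
    """Host-side switch that can read every VM's shared area; the VMs cannot
    read each other's areas (enforced here only by convention)."""

    def __init__(self, areas):
        self.areas = {a.vm_id: a for a in areas}

    def forward(self, block: MemoryBlock, dst_vm: str):
        # The index field tells the switch which VM produced the block.
        print(f"forwarding block from {block.owner_vm} to {dst_vm}")
        self.areas[dst_vm].pool.append(MemoryBlock(owner_vm=dst_vm,
                                                   data=block.data))

if __name__ == "__main__":
    vm1 = SharedMemoryArea("vm1", [MemoryBlock("vm1", b"packet")])
    vm2 = SharedMemoryArea("vm2")
    VirtualSwitch([vm1, vm2]).forward(vm1.pool[0], "vm2")
    print(vm2.pool)
```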
Abstract:
The present invention provides an Ethernet performance detection method and system and an optical network terminal (ONT). The method includes: receiving, by the ONT, a detection configuration instruction, and configuring a maintenance end point (MEP) according to the detection configuration instruction; configuring a performance detection path from the MEP to a virtual maintenance end point (VMEP) according to states of the virtual MEPs configured on a main node and a backup node, where the VMEP includes two virtual MEPs whose IDs are identical; and transmitting, at any given moment, a message to the node corresponding to the virtual MEP in the main state. When the node states of the main node and the backup node are switched, the optical network terminal may automatically switch the performance detection path to the node corresponding to the virtual MEP that is now in the main state, so as to continue the network performance detection.
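A minimal sketch of selecting the detection path by virtual MEP state and re-selecting it after a main/backup switchover; the node table, state names, and print-based send stub are illustrative assumptions:

```python
# Sketch of main/backup virtual-MEP selection; node names, states, and the
# send stub are assumed for illustration only.

NODES = {
    "main_node":   {"vmep_id": 7, "state": "main"},
    "backup_node": {"vmep_id": 7, "state": "backup"},  # same MEP ID on both
}

def active_node(nodes: dict) -> str:
    """The detection path always points at the node whose virtual MEP is main."""
    return next(name for name, n in nodes.items() if n["state"] == "main")

def send_detection_message(nodes: dict, payload: str):
    target = active_node(nodes)
    print(f"MEP -> VMEP {nodes[target]['vmep_id']} on {target}: {payload}")

if __name__ == "__main__":
    send_detection_message(NODES, "loss-measurement frame")
    # Main/backup switchover: the path is re-selected automatically.
    NODES["main_node"]["state"], NODES["backup_node"]["state"] = "backup", "main"
    send_detection_message(NODES, "loss-measurement frame")
```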
Abstract:
Embodiments of this application disclose a data transmission method and a communication apparatus. The method may be applied to a data transmission or data distribution scenario in a peer-to-peer (P2P) network. The method includes: A first end node sends delayed transmission information to a second end node, where the delayed transmission information indicates time information according to which the first end node delays sending first data to the second end node. When determining that the second end node accepts the delayed transmission information, the first end node delays sending the first data to the second end node, thereby alleviating the load on the first end node, so that a network congestion problem caused by large-scale concurrent data requests can be resolved.
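A minimal sketch of the delayed-transmission exchange, assuming a simple "delay in seconds" field and an acceptance threshold on the second end node; both are hypothetical details, not the claimed message format:

```python
# Minimal sketch of the delayed-transmission exchange; message fields and the
# acceptance rule are assumptions for illustration only.

import time

def make_delay_info(delay_s: float) -> dict:
    """Delayed transmission information sent by the first end node."""
    return {"type": "delay_info", "delay_s": delay_s}

def second_node_accepts(info: dict, max_tolerated_delay_s: float = 5.0) -> bool:
    """The second end node accepts the delay if it can tolerate the wait."""
    return info["delay_s"] <= max_tolerated_delay_s

def first_node_send(first_data: bytes, delay_s: float):
    info = make_delay_info(delay_s)
    if second_node_accepts(info):
        # Deferring the send spreads out concurrent requests at the first node.
        time.sleep(info["delay_s"])
    print(f"sending {len(first_data)} bytes after a {delay_s:.1f}s delay")

if __name__ == "__main__":
    first_node_send(b"x" * 1024, delay_s=0.2)
```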
Abstract:
A method for implementing a GRE tunnel is provided. An access device obtains an address of an aggregation gateway group that includes at least one aggregation gateway. The access device sends a tunnel setup request, in which the address of the access device is encapsulated, by using the address of the aggregation gateway group as a destination address. The tunnel setup request is used to request setup of a GRE tunnel. The access device receives a tunnel setup accept response sent back by an aggregation gateway that belongs to the aggregation gateway group, and obtains an address of that aggregation gateway from the response. The access device configures the address of the aggregation gateway as the network-side destination address of the GRE tunnel. In this way, dynamic setup of a GRE tunnel is implemented on an access network that uses an aggregation technology.
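A sketch of the setup exchange, assuming the request and accept messages can be modeled as simple dictionaries; the addresses and field names are placeholders, not the actual GRE control signaling:

```python
# Sketch of the tunnel setup exchange; message contents and addresses are
# placeholder assumptions, not the real control-plane format.

AGGREGATION_GATEWAY_GROUP = "203.0.113.1"      # group (shared) address
ACCESS_DEVICE_ADDRESS = "198.51.100.2"

def build_setup_request() -> dict:
    """Tunnel setup request carrying the access device's own address."""
    return {"dst": AGGREGATION_GATEWAY_GROUP,
            "access_device_address": ACCESS_DEVICE_ADDRESS,
            "type": "tunnel_setup_request"}

def handle_accept(response: dict) -> dict:
    """Configure the answering gateway as the GRE tunnel's network-side end."""
    return {"local": ACCESS_DEVICE_ADDRESS,
            "remote": response["aggregation_gateway_address"]}

if __name__ == "__main__":
    request = build_setup_request()
    # One member of the group answers with its own (unicast) address.
    accept = {"type": "tunnel_setup_accept",
              "aggregation_gateway_address": "203.0.113.7"}
    print(handle_accept(accept))
```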
Abstract:
In an embodiment, a source node assigns, in a sending sequence, a first number to each probe data packet in a probe data flow, where the first number is used to select a transmission path for the probe data packet. The source node sends the probe data packets in the probe data flow to a destination node at a first sending rate. Each time the source node receives a probe data packet returned by the destination node, the source node sends a service data packet in a service data flow, where the service data packet is assigned a second number corresponding to the returned probe data packet, and the second number is used to select a transmission path for the service data packet.
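A sketch of number-based path selection combined with "one service packet per returned probe", assuming a simple modulo mapping from number to path; the path count and message shapes are illustrative only:

```python
# Sketch of numbering-based path selection plus per-returned-probe sending;
# the modulo path choice and data shapes are assumptions.

from collections import deque

NUM_PATHS = 4

def path_for(number: int) -> int:
    """Probe and service packets carrying the same number pick the same path."""
    return number % NUM_PATHS

def run(probe_count: int, service_payloads: deque):
    # The source assigns first numbers to probes in sending order.
    probes = [{"first_number": n} for n in range(probe_count)]
    for p in probes:
        print(f"probe {p['first_number']} -> path {path_for(p['first_number'])}")

    # Each returned probe releases one service packet carrying a matching
    # (second) number, so it follows the path the probe just traversed.
    for p in probes:                       # pretend every probe came back
        if not service_payloads:
            break
        payload = service_payloads.popleft()
        n = p["first_number"]
        print(f"service packet ({payload}) number {n} -> path {path_for(n)}")

if __name__ == "__main__":
    run(probe_count=3, service_payloads=deque(["a", "b"]))
```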
Abstract:
Embodiments of the present invention provide a packet processing method and a device, where the packet processing method includes: receiving, by an aggregation gateway, a first tunnel establishment request message sent by a home gateway, and sending a first tunnel establishment success message to the home gateway; receiving, by the aggregation gateway, a second tunnel establishment request message sent by the home gateway, and sending a second tunnel establishment success message to the home gateway; associating, by the aggregation gateway, a first tunnel with a second tunnel according to an identifier of the home gateway; and sending a downlink packet to the home gateway by using the first tunnel and/or the second tunnel. The embodiments of the present invention may increase bandwidth.
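A sketch of associating two tunnels under one home gateway identifier and distributing downlink packets over them, assuming an in-memory table and a round-robin policy; both are illustrative choices, not the claimed behavior:

```python
# Sketch of tunnel association on the aggregation gateway; the in-memory
# table and the round-robin downlink policy are illustrative assumptions.

from itertools import cycle

class AggregationGateway:
    def __init__(self):
        self.tunnels_by_hgw = {}        # home gateway id -> list of tunnels

    def on_tunnel_request(self, hgw_id: str, tunnel_name: str) -> str:
        # Tunnels established by the same home gateway are associated via its id.
        self.tunnels_by_hgw.setdefault(hgw_id, []).append(tunnel_name)
        return f"tunnel_establishment_success({tunnel_name})"

    def send_downlink(self, hgw_id: str, packets: list):
        # Downlink traffic may use the first tunnel, the second, or both.
        tunnels = cycle(self.tunnels_by_hgw[hgw_id])
        for pkt in packets:
            print(f"{pkt} -> {next(tunnels)}")

if __name__ == "__main__":
    agw = AggregationGateway()
    print(agw.on_tunnel_request("hgw-42", "first_tunnel"))
    print(agw.on_tunnel_request("hgw-42", "second_tunnel"))
    agw.send_downlink("hgw-42", ["pkt1", "pkt2", "pkt3"])
```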