Abstract:
A network virtualization configuration method, a network system, and a device, where the method includes creating a switch virtual machine (VM), where the switch VM is configured to run a virtual switch, responding to Peripheral Component Interconnect (PCI) scanning by the switch VM, configuring, using a physical function (PF) driver, a PCI Express (PCIE) device to allocate a corresponding network resource to the switch VM, and initializing the PCIE device using the PF driver, where a default forwarding rule of the initialized PCIE device includes setting a default forwarding port of the PCIE device to a virtual function (VF) receiving queue (VF 0) corresponding to the switch VM. Hence, a cross-platform virtual switch solution can be implemented, thereby improving flexibility of deploying a virtual switch and implementing compatibility with different hypervisors/VM monitors (VMMs).
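The configuration flow described in this abstract can be illustrated with a minimal Python sketch. All class and function names (PcieDevice, PfDriver, create_switch_vm) are hypothetical stand-ins for the hypervisor- and device-specific interfaces the abstract leaves unspecified; this is a sketch of the sequence of steps, not an actual driver.

```python
"""Illustrative sketch only: models the abstract's configuration flow with
invented names; not a real PF driver or hypervisor API."""

from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class PcieDevice:
    """Toy model of an SR-IOV-capable PCIe network device."""
    num_vfs: int
    default_forwarding_port: Optional[str] = None
    allocated_vfs: Dict[int, str] = field(default_factory=dict)  # VF index -> VM name


class PfDriver:
    """Hypothetical physical function (PF) driver running on the host."""

    def __init__(self, device: PcieDevice):
        self.device = device

    def on_pci_scan(self, vm_name: str) -> int:
        """Respond to the switch VM's PCI scan by allocating it a VF (here VF 0)."""
        vf_index = 0
        self.device.allocated_vfs[vf_index] = vm_name
        return vf_index

    def initialize_device(self, switch_vf_index: int) -> None:
        """Apply the default forwarding rule: traffic with no more specific
        rule goes to the switch VM's VF receiving queue (VF 0)."""
        self.device.default_forwarding_port = f"VF{switch_vf_index}"


def create_switch_vm(name: str = "switch-vm") -> str:
    """Stand-in for hypervisor-specific VM creation; returns the VM's name."""
    return name


if __name__ == "__main__":
    nic = PcieDevice(num_vfs=8)
    pf_driver = PfDriver(nic)
    switch_vm = create_switch_vm()
    vf0 = pf_driver.on_pci_scan(switch_vm)   # PCI scan by the switch VM
    pf_driver.initialize_device(vf0)         # default forwarding port -> VF 0
    print(nic.default_forwarding_port)       # prints "VF0"
```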
Abstract:
A method and an apparatus for processing a data packet based on parallel protocol stack instances, where the method includes determining a distribution policy of a data packet according to distribution policy information of a network adapter, determining a first protocol stack instance according to the distribution policy of the data packet, and creating a target socket in the first protocol stack instance such that, when the data packet is distributed to the first protocol stack instance, the first protocol stack instance performs protocol processing on the data packet using the target socket. Hence, a case in which a protocol stack instance that is specified for the target socket by an application conflicts with a protocol stack instance specified by the network adapter is avoided, and the technical problem that protocol processing cannot be performed on the data packet is resolved.
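The following short sketch illustrates the idea of creating the socket in the same instance the network adapter would distribute the flow to. The hash-based distribution policy and all names (nic_distribution_policy, ProtocolStackInstance) are assumptions made for illustration; the actual policy is whatever the adapter's distribution policy information specifies.

```python
"""Illustrative sketch: choose the protocol stack instance implied by the
NIC's distribution policy, then create the target socket in that instance.
The CRC-based policy is an assumption, not the patented method."""

import zlib


def nic_distribution_policy(local_ip: str, local_port: int, num_instances: int) -> int:
    """Assumed distribution policy: hash of the local endpoint modulo the
    number of parallel protocol stack instances."""
    key = f"{local_ip}:{local_port}".encode()
    return zlib.crc32(key) % num_instances


class ProtocolStackInstance:
    """Toy stand-in for one parallel protocol stack instance."""

    def __init__(self, index: int):
        self.index = index
        self.sockets = {}

    def create_socket(self, local_ip: str, local_port: int):
        """Create the target socket inside this instance, so packets the NIC
        distributes here are processed by a socket owned by the same instance."""
        self.sockets[(local_ip, local_port)] = object()  # placeholder socket
        return self.sockets[(local_ip, local_port)]


if __name__ == "__main__":
    instances = [ProtocolStackInstance(i) for i in range(4)]
    ip, port = "192.0.2.10", 8080
    idx = nic_distribution_policy(ip, port, len(instances))  # determine instance
    sock = instances[idx].create_socket(ip, port)            # create target socket there
    print(f"target socket created in instance {idx}")
```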
Abstract:
An address acquiring method includes receiving an address resolution request packet sent by a source host, where the address resolution request packet includes an Internet Protocol (IP) address of a destination host; determining another network virtualization edge (NVE) device, where the other NVE device stores a correspondence between the IP address of the destination host and a Media Access Control (MAC) address of the destination host and a correspondence between the IP address of the destination host and an IP address of a destination NVE device corresponding to the destination host; and obtaining, from the other NVE device according to the IP address of the destination host, the MAC address of the destination host and the IP address of the destination NVE device corresponding to the destination host. The technical solutions provided in the present disclosure are intended to reduce processing pressure on a physical network.
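A minimal sketch of the lookup described above follows. The class names (DirectoryNve, handle_arp_request) and the flat dictionaries are assumptions used only to show the two correspondences being queried instead of flooding the physical network.

```python
"""Illustrative sketch: an NVE device answers an address resolution request
for a destination host IP by consulting another NVE device's mapping tables.
All names and data structures are assumptions."""


class DirectoryNve:
    """The 'other NVE device' holding IP-to-MAC and IP-to-destination-NVE maps."""

    def __init__(self):
        self.ip_to_mac = {}   # destination host IP -> destination host MAC
        self.ip_to_nve = {}   # destination host IP -> destination NVE IP

    def learn(self, host_ip, host_mac, nve_ip):
        self.ip_to_mac[host_ip] = host_mac
        self.ip_to_nve[host_ip] = nve_ip

    def resolve(self, host_ip):
        """Return (MAC of the destination host, IP of its NVE), if known."""
        return self.ip_to_mac.get(host_ip), self.ip_to_nve.get(host_ip)


def handle_arp_request(directory: DirectoryNve, dest_host_ip: str):
    """Local NVE behaviour: query the directory NVE for both mappings
    rather than broadcasting the request across the physical network."""
    mac, nve_ip = directory.resolve(dest_host_ip)
    return mac, nve_ip


if __name__ == "__main__":
    directory = DirectoryNve()
    directory.learn("10.0.0.5", "52:54:00:aa:bb:cc", "192.0.2.2")
    print(handle_arp_request(directory, "10.0.0.5"))
```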
Abstract:
Embodiments of the present invention disclose a method, an apparatus and a system for joint optimization. The method for joint optimization includes: decomposing the joint optimization of an entire network into the joint optimization performed in each sub-network, regarding a bandwidth requirement for a server off the sub-network as a bandwidth requirement for a virtual server on a port, iteratively performing the joint optimization in each sub-network, and applying results of the joint optimization in the network. In the embodiments of the present invention, the bandwidth requirement for the server off the sub-network is regarded as the bandwidth requirement for the virtual server on the port, the joint optimization is iteratively performed in each sub-network, and the results of the joint optimization are applied in the network, so that the joint optimization of the entire network is performed in parallel.
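The iteration structure described in this abstract is sketched below. Only the structure is shown: the per-sub-network solver is a trivial placeholder, and the function names (fold_external_demand, optimize_subnetwork, joint_optimize) are invented for illustration, not the patented optimization.

```python
"""Illustrative sketch of the iteration structure only: off-sub-network
bandwidth demand is folded into a 'virtual server' on the border port,
each sub-network is optimized, and the results are re-applied iteratively.
The per-sub-network solver is a placeholder."""


def fold_external_demand(subnet, external_demand):
    """Regard bandwidth demanded by servers outside the sub-network as the
    demand of a virtual server attached to the sub-network's border port."""
    subnet = dict(subnet)
    subnet["virtual_server_demand"] = sum(external_demand.values())
    return subnet


def optimize_subnetwork(subnet):
    """Placeholder local joint optimization: split the total demand evenly
    over the sub-network's links."""
    total = subnet["local_demand"] + subnet["virtual_server_demand"]
    return {link: total / len(subnet["links"]) for link in subnet["links"]}


def joint_optimize(subnetworks, external_demands, rounds=3):
    """Iteratively optimize every sub-network (the per-sub-network steps are
    independent, so they could run in parallel) and apply the results back
    to the network between rounds."""
    results = {}
    for _ in range(rounds):
        for name, subnet in subnetworks.items():
            folded = fold_external_demand(subnet, external_demands[name])
            results[name] = optimize_subnetwork(folded)
        # In a real system the new allocations would update external_demands here.
    return results


if __name__ == "__main__":
    subnets = {"A": {"local_demand": 10.0, "links": ["a1", "a2"]}}
    print(joint_optimize(subnets, {"A": {"B": 4.0}}))
```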
Abstract:
A method and device for processing an input/output (I/O) request in a network file system (NFS) include sending, by an NFS server, when an NFS file handle (NFS FH) in an I/O request cannot be identified, a request for parsing the unidentifiable NFS FH to a centralized controller, receiving, by the NFS server, from the centralized controller according to the parsing request, a file identifier that corresponds to the unidentifiable NFS FH, where the file identifier is determined according to a pre-stored correspondence between NFS FHs and file identifiers, and processing, by the NFS server, the I/O request according to the file identifier.
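A minimal sketch of this interaction is shown below. The class names (CentralizedController, NfsServer) and the in-memory mapping are assumptions; they only illustrate the "unidentifiable FH, ask the controller, continue with the returned file identifier" flow.

```python
"""Illustrative sketch: when an NFS server cannot identify a file handle in
an I/O request, it asks a centralized controller, which resolves the handle
from a pre-stored FH-to-file-identifier correspondence. Names are assumptions."""


class CentralizedController:
    """Holds the pre-stored correspondence between NFS FHs and file identifiers."""

    def __init__(self, fh_to_file_id):
        self.fh_to_file_id = dict(fh_to_file_id)

    def parse(self, nfs_fh: bytes):
        """Answer a parsing request: return the file identifier for the FH."""
        return self.fh_to_file_id.get(nfs_fh)


class NfsServer:
    def __init__(self, controller: CentralizedController):
        self.controller = controller
        self.known_fhs = {}  # FHs this server can identify on its own

    def handle_io_request(self, nfs_fh: bytes, operation: str):
        file_id = self.known_fhs.get(nfs_fh)
        if file_id is None:
            # Unidentifiable FH: send a parsing request to the controller.
            file_id = self.controller.parse(nfs_fh)
        if file_id is None:
            raise ValueError("stale file handle")
        return f"{operation} on file {file_id}"


if __name__ == "__main__":
    controller = CentralizedController({b"\x01\x02": "volA/inode-42"})
    server = NfsServer(controller)
    print(server.handle_io_request(b"\x01\x02", "READ"))
```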
Abstract:
A task processing method and virtual machine are disclosed. The method includes selecting an idle resource for a task; creating a global variable snapshot for a global variable; executing the task in private memory space in the selected idle resource; after the execution of the task is complete, acquiring a new global variable snapshot corresponding to the global variable, and acquiring an updated global variable according to a local global variable snapshot and the new global variable snapshot; and determining whether a synchronization variable of a to-be-executed task in a task synchronization waiting queue includes the current updated global variable, and if the synchronization variable of the to-be-executed task in the task synchronization waiting queue includes the current updated global variable, putting the to-be-executed task into a task execution waiting queue.
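The snapshot/diff flow in this abstract is sketched below. The data structures and the run_task helper are assumptions for illustration: the task runs against a copied snapshot (standing in for private memory space), the copy is diffed against the globals to find updates, and waiting tasks whose synchronization variables were updated are moved to the execution queue.

```python
"""Illustrative sketch of the snapshot-and-diff flow; all names and
structures are assumptions, not the patented implementation."""

from collections import deque
import copy


def run_task(task, globals_dict, sync_waiting, exec_waiting):
    # 1. Create a global variable snapshot before executing the task.
    local_snapshot = copy.deepcopy(globals_dict)

    # 2. Execute the task against its private copy of the globals.
    task["fn"](local_snapshot)

    # 3. Acquire the new snapshot and determine which globals were updated.
    new_snapshot = local_snapshot
    updated = {k for k in new_snapshot
               if new_snapshot.get(k) != globals_dict.get(k)}
    globals_dict.update({k: new_snapshot[k] for k in updated})

    # 4. Move waiting tasks whose synchronization variables were updated
    #    into the task execution waiting queue.
    still_waiting = deque()
    while sync_waiting:
        waiting = sync_waiting.popleft()
        if updated & set(waiting["sync_vars"]):
            exec_waiting.append(waiting)
        else:
            still_waiting.append(waiting)
    sync_waiting.extend(still_waiting)


if __name__ == "__main__":
    globals_dict = {"counter": 0}
    sync_q = deque([{"name": "t2", "sync_vars": ["counter"], "fn": lambda g: None}])
    exec_q = deque()
    t1 = {"name": "t1", "fn": lambda g: g.update(counter=g["counter"] + 1)}
    run_task(t1, globals_dict, sync_q, exec_q)
    print([t["name"] for t in exec_q])  # ['t2']
```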
Abstract:
A method and an apparatus for processing a data packet based on parallel protocol stack instances, where lower-layer protocol processing is performed using a first protocol stack instance. After the target socket that is needed to perform upper-layer protocol processing on the data packet is determined, an associated second protocol stack instance is determined using the target socket, and the upper-layer protocol processing is performed using the target socket and the second protocol stack instance. Because the second protocol stack instance that performs the upper-layer protocol processing is determined using the target socket, the technical problem that protocol processing cannot be performed on a data packet because a protocol stack instance specified by an application (APP) conflicts with a protocol stack instance specified by a network adapter is resolved.
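Unlike the earlier variant, which places the socket in the NIC-chosen instance, this abstract hands the packet over after lower-layer processing. The sketch below illustrates that hand-off; the StackInstance class, the socket table, and the owner_instance field are assumptions made for illustration.

```python
"""Illustrative sketch: lower-layer processing runs in the instance the NIC
delivered the packet to; the target socket then identifies the second
instance that performs upper-layer processing. Names are assumptions."""


class StackInstance:
    def __init__(self, index):
        self.index = index

    def lower_layer_process(self, packet):
        """E.g. IP-level processing; returns the (ip, port) the packet targets."""
        return packet["dst_ip"], packet["dst_port"]

    def upper_layer_process(self, packet, socket):
        return f"instance {self.index} delivered payload to socket {socket['id']}"


def handle_packet(packet, first_instance, socket_table):
    # Lower-layer protocol processing in the instance chosen by the NIC.
    endpoint = first_instance.lower_layer_process(packet)

    # The target socket records which instance owns it (the second instance).
    socket = socket_table[endpoint]
    second_instance = socket["owner_instance"]

    # Upper-layer protocol processing in the socket's own instance, so the
    # application-specified and NIC-specified instances no longer conflict.
    return second_instance.upper_layer_process(packet, socket)


if __name__ == "__main__":
    inst0, inst1 = StackInstance(0), StackInstance(1)
    sockets = {("10.0.0.1", 80): {"id": "s1", "owner_instance": inst1}}
    pkt = {"dst_ip": "10.0.0.1", "dst_port": 80, "payload": b"hi"}
    print(handle_packet(pkt, inst0, sockets))
```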
Abstract:
A parameter inference method is provided to solve the problem that precision of a Latent Dirichlet Allocation model is poor. The method includes: calculating a Latent Dirichlet Allocation model according to a preset initial first hyperparameter, a preset initial second hyperparameter, a preset initial number of topics, a preset initial count matrix of documents and topics, and a preset initial count matrix of topics and words to obtain probability distributions; obtaining the number of topics, a first hyperparameter, and a second hyperparameter that maximize log likelihood functions of the probability distributions; and determining whether the number of topics, the first hyperparameter, and the second hyperparameter converge, and if not, putting the number of topics, the first hyperparameter, and the second hyperparameter back into the Latent Dirichlet Allocation model and repeating the foregoing steps until an optimal number of topics, an optimal first hyperparameter, and an optimal second hyperparameter that maximize the log likelihood functions of the probability distributions are obtained.
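For orientation, the sketch below computes the standard Dirichlet-multinomial log likelihood of an LDA model from a document-topic count matrix and a topic-word count matrix as a function of the two hyperparameters. The count matrices here are random toy data, and the grid search is only a placeholder for the convergence loop described in the abstract.

```python
"""Illustrative sketch: the standard LDA log likelihood
log p(z | alpha) + log p(w | z, beta) computed from count matrices.
Toy data and a placeholder grid search stand in for the abstract's procedure."""

import numpy as np
from scipy.special import gammaln


def lda_log_likelihood(doc_topic_counts, topic_word_counts, alpha, beta):
    """Dirichlet-multinomial log likelihood given the count matrices."""
    D, K = doc_topic_counts.shape
    _, V = topic_word_counts.shape

    # log p(z | alpha): one Dirichlet-multinomial term per document.
    ll = D * (gammaln(K * alpha) - K * gammaln(alpha))
    ll += gammaln(doc_topic_counts + alpha).sum()
    ll -= gammaln(doc_topic_counts.sum(axis=1) + K * alpha).sum()

    # log p(w | z, beta): one Dirichlet-multinomial term per topic.
    ll += K * (gammaln(V * beta) - V * gammaln(beta))
    ll += gammaln(topic_word_counts + beta).sum()
    ll -= gammaln(topic_word_counts.sum(axis=1) + V * beta).sum()
    return ll


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_dk = rng.integers(0, 5, size=(4, 3))   # toy document-topic count matrix
    n_kw = rng.integers(0, 5, size=(3, 10))  # toy topic-word count matrix

    # Placeholder hyperparameter search: pick (alpha, beta) maximizing the
    # log likelihood; the abstract iterates this together with the model.
    grid = [0.01, 0.1, 0.5, 1.0]
    best = max(((a, b, lda_log_likelihood(n_dk, n_kw, a, b))
                for a in grid for b in grid), key=lambda t: t[2])
    print(best)
```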
Abstract:
An Ethernet multicast method and device are provided, which relate to the field of communications technology and improve the capability of distributing multicast data in Ethernet. The method includes: receiving a request from a host/multicast source, wherein the request carries a multicast Media Access Control (MAC) address of a destination multicast group; selecting, according to the request, a switch satisfying a particular optimization condition as a multicast root node corresponding to the multicast MAC address of the destination multicast group when it is determined that the multicast MAC address of the destination multicast group does not have a corresponding multicast root node; and transmitting an identifier of the multicast root node to the host/multicast source. The embodiments of the present invention are mainly applied to the process of multicast data distribution in Ethernet.
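The control flow of this abstract is sketched below. The MulticastController class and the "smallest average distance to hosts" selection metric are assumptions; the abstract only requires that some switch satisfying a particular optimization condition be chosen when the group has no root node yet.

```python
"""Illustrative sketch: when a request carries a multicast MAC address with
no root node yet, choose a switch by an example optimization condition and
return its identifier. Names and the metric are assumptions."""


class MulticastController:
    def __init__(self, switch_distances):
        # switch id -> average distance to hosts (stand-in for the real metric)
        self.switch_distances = dict(switch_distances)
        self.mac_to_root = {}  # multicast MAC -> root node switch id

    def handle_request(self, multicast_mac: str) -> str:
        """Return the root node for the destination multicast group,
        selecting one if the group does not yet have a root node."""
        root = self.mac_to_root.get(multicast_mac)
        if root is None:
            root = min(self.switch_distances, key=self.switch_distances.get)
            self.mac_to_root[multicast_mac] = root
        return root  # identifier sent back to the host/multicast source


if __name__ == "__main__":
    controller = MulticastController({"sw1": 2.5, "sw2": 1.8, "sw3": 3.1})
    print(controller.handle_request("01:00:5e:00:00:01"))  # "sw2"
```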