Abstract:
Embodiments of the present application relate to a method for implementing Turbo equalization compensation. The equalizer divides a first data block into n data segments, where D bits in two adjacent data segments in the n data segments overlap; performs recursive processing on each of the n data segments; merges, after the recursive processing, the n data segments to obtain a second data block; and performs iterative decoding on the second data block to output a third data block, where the data lengths of the first data block, the second data block, and the third data block are all 1/T of a code length of an LDPC convolutional code.
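For illustration only, the following Python sketch shows one way the overlapped segmentation and merging described above could look; the segment count n, the overlap D, and the per-segment recursive processing and iterative decoding steps are placeholders, not the patented implementation.

```python
def split_with_overlap(block, n, D):
    """Divide `block` into n segments; adjacent segments share D bits (assumed even split)."""
    core = (len(block) - D) // n
    return [block[i * core:i * core + core + D] for i in range(n)]

def merge_segments(segments, D):
    """Merge processed segments, keeping each D-bit overlap only once."""
    merged = list(segments[0])
    for seg in segments[1:]:
        merged.extend(seg[D:])   # drop the D bits already contributed by the previous segment
    return merged

block = [0, 1] * 64                       # toy "first data block"
segs = split_with_overlap(block, n=4, D=8)
processed = [seg for seg in segs]         # placeholder for per-segment recursive processing
second_block = merge_segments(processed, D=8)
assert second_block == block              # merging undoes the overlapped split in this toy case
# iterative decoding of `second_block` into the third data block would follow here
```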
Abstract:
Embodiments relate to the communications field, and provide an adaptive modulation and coding method, apparatus, and system. The method includes: obtaining to-be-processed data; obtaining channel information corresponding to the to-be-processed data; and determining a modulation mode according to the channel information. The method also includes: determining first data and second data from the to-be-processed data according to the modulation mode; and performing soft-decision forward error correction (FEC) coding on the first data to obtain a first bit stream. The method further includes: obtaining a second bit stream according to the second data; modulating the first bit stream and the second bit stream according to a constellation mapping rule; and sending the modulated data.
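The following is a minimal, non-authoritative Python sketch of the described flow; the SNR thresholds, the split rule, the FEC encoder, and the constellation mapping are all assumed placeholders.

```python
def choose_modulation(snr_db):
    # hypothetical SNR thresholds -> (modulation mode, bits per symbol)
    if snr_db > 20:
        return "16QAM", 4
    if snr_db > 10:
        return "QPSK", 2
    return "BPSK", 1

def soft_fec_encode(bits):
    # placeholder: a real soft-decision FEC encoder (e.g., LDPC) would add parity bits
    return bits + bits[:len(bits) // 2]

def transmit(data_bits, snr_db):
    modulation, bits_per_symbol = choose_modulation(snr_db)
    split = len(data_bits) // 2                    # assumed split rule driven by the modulation mode
    first_data, second_data = data_bits[:split], data_bits[split:]
    first_stream = soft_fec_encode(first_data)     # FEC-protected first bit stream
    second_stream = second_data                    # second bit stream obtained directly here
    stream = first_stream + second_stream
    symbols = [tuple(stream[i:i + bits_per_symbol])          # stand-in for constellation mapping
               for i in range(0, len(stream) - bits_per_symbol + 1, bits_per_symbol)]
    return modulation, symbols

print(transmit([1, 0, 1, 1, 0, 0, 1, 0] * 4, snr_db=15))
```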
Abstract:
This application discloses a modulation/demodulation and encoding/decoding method and belongs to the field of communication technologies. The modulation and encoding method includes: grading to-be-transmitted bits into a plurality of levels; encoding the plurality of levels of bits obtained through grading to obtain a plurality of levels of codewords; and mapping the plurality of levels of codewords to symbols in a staggered manner, where the plurality of levels of codewords include a first codeword, the first codeword is located at a Yth level of the plurality of levels of codewords, and the first codeword overlaps at least one codeword at a level other than the Yth level. In this way, codewords at different levels are associated by using a symbol to which they are mapped, and the overlapping part between a plurality of codewords can assist in demodulating the codewords.
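As a rough illustration only, the sketch below maps bits from two levels into symbols with the level-1 codeword boundaries offset from the level-0 boundaries, so that a codeword at one level shares symbols with codewords at the other level; the codeword length, number of levels, offset, and per-level encoder are invented for the example.

```python
CODEWORD_LEN = 4
NUM_LEVELS = 2
OFFSET = 2                       # illustrative stagger between level-0 and level-1 codeword boundaries

def encode_level(bits):
    # placeholder per-level encoder; a real scheme would add redundancy at each level
    return list(bits)

def staggered_map(level_streams):
    """level_streams[l] is the concatenated codeword stream of level l; symbol k carries bit k of every level."""
    n_symbols = min(len(s) for s in level_streams)
    # because level 1's codeword boundaries are shifted by OFFSET bits, a level-0
    # codeword overlaps two consecutive level-1 codewords within the shared symbols
    return [tuple(level_streams[l][k] for l in range(NUM_LEVELS)) for k in range(n_symbols)]

level0 = encode_level([1, 0, 1, 1, 0, 0, 1, 0])            # two level-0 codewords of length 4
level1 = [0] * OFFSET + encode_level([1, 1, 0, 0, 1, 0])   # level-1 stream shifted by OFFSET bits
print(staggered_map([level0, level1]))
```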
Abstract:
The present disclosure relates to parameter updating methods. In one example method, a parameter in a neural network model is updated for a plurality of times through a plurality of iterations. The plurality of iterations include a first iteration period and a second iteration period. In the first iteration period, an inverse matrix of an additional matrix of the neural network model is updated once based on a quantity of iterations indicated by a first update stride. In the second iteration period, the inverse matrix of the additional matrix of the neural network model is updated once based on a quantity of iterations indicated by a second update stride, where the first iteration of the second iteration period is after the last iteration of the first iteration period in an iteration sequence, and the second update stride is greater than the first update stride.
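A minimal sketch of the update schedule alone is shown below (not of any particular optimizer): the cached inverse is refreshed only when the number of iterations since the last refresh reaches the current stride, and the stride used in the second period is larger than in the first; the period lengths, strides, and matrices are illustrative.

```python
import numpy as np

def run(total_iters=60, period1_end=30, stride1=5, stride2=15, dim=4):
    rng = np.random.default_rng(0)
    inverse = np.eye(dim)                  # cached inverse of the additional matrix
    last_refresh = None
    for it in range(total_iters):
        stride = stride1 if it < period1_end else stride2   # second period uses the larger stride
        if last_refresh is None or it - last_refresh >= stride:
            # recompute the inverse once per stride; illustrative "additional matrix"
            additional = np.eye(dim) + 0.1 * rng.standard_normal((dim, dim))
            inverse = np.linalg.inv(additional)
            last_refresh = it
            print(f"iteration {it}: inverse refreshed (stride={stride})")
        gradient = rng.standard_normal(dim)
        step = inverse @ gradient          # parameter update reuses the cached inverse in between
        # ... apply `step` to the model parameters ...

run()
```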
Abstract:
A service data forwarding method is provided. The method includes: forwarding, based on path sets of nodes in a network, service data between a source node and a sink node in the network, where the path sets of the nodes in the network are determined by iteratively performing the following path set determining step: for each link in the network, obtaining a path set of a start node of the link, and determining N shortest paths from an end node of the link to the sink node; and for each path included in the path set of the start node, determining, according to the N shortest paths, the path, and the link, whether to add a new path formed by the path and the link into a path set of the end node.
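Since the abstract does not spell out the admission rule derived from the N shortest paths, the sketch below uses an illustrative criterion (loop-free paths whose cost plus the remaining shortest-path cost to the sink stays within a stretch bound, with N = 1); the graph, weights, and bound are invented for the example.

```python
import heapq

graph = {                         # directed links with costs
    "S": {"A": 1, "B": 2},
    "A": {"B": 1, "T": 4},
    "B": {"T": 1},
    "T": {},
}

def shortest_cost_to_sink(sink):
    """Dijkstra on the reversed graph: cost from every node to `sink`."""
    rev = {u: {} for u in graph}
    for u, nbrs in graph.items():
        for v, w in nbrs.items():
            rev[v][u] = w
    dist, heap = {sink: 0}, [(0, sink)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in rev[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def build_path_sets(source, sink, stretch=1.5):
    to_sink = shortest_cost_to_sink(sink)
    budget = stretch * to_sink[source]                  # illustrative admission bound
    path_sets = {u: set() for u in graph}
    path_sets[source].add((source,))
    changed = True
    while changed:                                      # iterate the path set determining step
        changed = False
        for u, nbrs in graph.items():
            for v, w in nbrs.items():                   # each link (u, v)
                for path in list(path_sets[u]):
                    if v in path:
                        continue                        # keep paths loop-free
                    cost = sum(graph[path[i]][path[i + 1]] for i in range(len(path) - 1)) + w
                    if cost + to_sink.get(v, float("inf")) <= budget:
                        new_path = path + (v,)
                        if new_path not in path_sets[v]:
                            path_sets[v].add(new_path)
                            changed = True
    return path_sets

print(build_path_sets("S", "T")["T"])
```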
Abstract:
Embodiments of this application relate to an intra-cluster node troubleshooting method and device. The method includes: obtaining fault detection topology information of a cluster, where the fault detection topology information includes fault detection relationships among the nodes in the cluster; obtaining a fault indication message, where the fault indication message indicates that a detected node is unreachable from a detection node; determining sub-clusters of the cluster based on the fault detection topology information and the fault indication message, where nodes that belong to different sub-clusters are unreachable to each other; and determining a working cluster based on the sub-clusters of the cluster. According to the embodiments of this application, available nodes in the cluster can be retained to a maximum extent at relatively low costs. In this way, a quantity of available nodes in the cluster is increased, and high availability is ensured.
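A toy Python sketch of one possible reading of these steps follows: the detection topology is treated as an undirected graph, each fault indication removes the corresponding detection edge, the sub-clusters are the resulting connected components, and the working cluster is chosen as the largest component; the selection rule and node names are assumptions.

```python
from collections import defaultdict

def sub_clusters(nodes, detection_edges, fault_indications):
    """Split the cluster into components after removing edges reported unreachable."""
    faulty = {frozenset(e) for e in fault_indications}
    adj = defaultdict(set)
    for a, b in detection_edges:
        if frozenset((a, b)) not in faulty:
            adj[a].add(b)
            adj[b].add(a)
    seen, components = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:                       # depth-first traversal of one component
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(adj[cur] - comp)
        seen |= comp
        components.append(comp)
    return components

nodes = ["n1", "n2", "n3", "n4", "n5"]
detection_edges = [("n1", "n2"), ("n2", "n3"), ("n3", "n4"), ("n4", "n5"), ("n5", "n1")]
fault_indications = [("n5", "n1"), ("n4", "n5")]          # n5 is unreachable from its detectors
parts = sub_clusters(nodes, detection_edges, fault_indications)
working = max(parts, key=len)                             # illustrative rule: keep the largest sub-cluster
print(parts, working)
```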
Abstract:
A method for estimating a traffic rate between a virtual machine pair, and a related device, are provided. When the rate of traffic sent by a virtual machine vm-x1 to a virtual machine vm-y1 is estimated, reference is made to at least: rates of traffic sent by N21 virtual machines that are deployed in N2 physical hosts and that include the virtual machine vm-x1; rates of traffic sent by N1 switching devices to N4 switching devices; rates of traffic received by N31 virtual machines that are deployed in N3 physical hosts and that include the virtual machine vm-y1; and rates of outgoing traffic of the N4 switching devices. This facilitates relatively accurate estimation of the traffic rate between the virtual machine pair.
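The abstract does not give the estimator itself, so the following is only a gravity-model-style sketch of how such rates might be combined: an aggregate rate observed between the switching devices is apportioned to the vm-x1 to vm-y1 pair in proportion to vm-x1's share of the senders' traffic and vm-y1's share of the receivers' traffic; all rates and the apportioning rule are illustrative.

```python
def estimate_pair_rate(send_rates, recv_rates, aggregate_rate, sender, receiver):
    """send_rates / recv_rates: per-VM rates (e.g., Mbit/s);
    aggregate_rate: traffic measured between the relevant switching devices."""
    total_send = sum(send_rates.values())
    total_recv = sum(recv_rates.values())
    if total_send == 0 or total_recv == 0:
        return 0.0
    share = (send_rates[sender] / total_send) * (recv_rates[receiver] / total_recv)
    return aggregate_rate * share

send_rates = {"vm-x1": 40.0, "vm-x2": 60.0}      # VMs behind the sending hosts
recv_rates = {"vm-y1": 30.0, "vm-y2": 70.0}      # VMs behind the receiving hosts
print(estimate_pair_rate(send_rates, recv_rates, aggregate_rate=80.0,
                         sender="vm-x1", receiver="vm-y1"))   # 80 * 0.4 * 0.3 = 9.6
```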
Abstract:
Embodiments of the present invention relate to a virtual machine integration technology, and in particular, to a method, an apparatus, and a system for virtual cluster integration. The method includes: performing a calculation by using a search algorithm to obtain the minimum number of physical machines capable of accommodating all virtual machines in a virtual cluster, and obtaining all virtual machine integration solutions that satisfy the minimum number of physical machines; calculating the CPU energy consumption of each integration solution, and selecting the solution with the lowest CPU energy consumption from these solutions; and formulating a virtual machine integration and migration policy according to the integration solution with the lowest CPU energy consumption. Therefore, through the embodiments of the present invention, an integration solution with lower CPU energy consumption can be obtained, thereby greatly improving the energy saving and emission reduction effect of virtual cluster integration.
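For illustration, the toy sketch below exhaustively searches assignments of virtual machines to hosts, keeps only those that respect CPU capacity and use the minimum number of hosts, and picks the assignment with the lowest estimated CPU energy; the quadratic energy model, the capacities, and the exhaustive search are assumptions standing in for the patent's search algorithm.

```python
from itertools import product

vm_cpu = {"vm1": 30, "vm2": 20, "vm3": 40, "vm4": 10}     # CPU demand per VM (%)
host_capacity = 100
num_hosts = 3

def energy(loads):
    # assumed model: per-host energy grows super-linearly with CPU load
    return sum(10 + 0.02 * load ** 2 for load in loads if load > 0)

best = None
for assignment in product(range(num_hosts), repeat=len(vm_cpu)):
    loads = [0] * num_hosts
    for vm, host in zip(vm_cpu, assignment):
        loads[host] += vm_cpu[vm]
    if any(load > host_capacity for load in loads):
        continue                                           # violates CPU capacity
    used = sum(1 for load in loads if load > 0)
    key = (used, energy(loads))                            # fewest hosts first, then lowest energy
    if best is None or key < best[0]:
        best = (key, dict(zip(vm_cpu, assignment)))

print(best)   # ((hosts_used, energy), {vm: host}) -- the basis for a migration policy
```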
Abstract:
The present invention discloses a coding and decoding method, apparatus, and system for forward error correction, and pertains to the field of communications. The method includes: determining check matrix parameters of a time-varying periodic LDPC convolutional code according to performance of a transmission system, complexity of the transmission system, and a synchronization manner for code word alignment; constructing a QC-LDPC check matrix according to the determined check matrix parameters, and obtaining a check matrix (Hc) of the time-varying periodic LDPC convolutional code according to the QC-LDPC check matrix; dividing, according to requirements of the Hc, to-be-coded data into sub-blocks, and coding the data of each sub-block according to the Hc, so as to obtain multiple code words of the LDPC convolutional code; and adding the multiple code words of the LDPC convolutional code into a data frame and sending the data frame.
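Only the QC-LDPC expansion step is sketched below, as a hedged illustration: a small base matrix of circulant shift values (with -1 denoting an all-zero block) is expanded into a binary parity-check matrix by replacing each entry with a cyclically shifted Z x Z identity; the base matrix, the lifting size Z, and the subsequent construction of the time-varying convolutional check matrix Hc are not taken from the patent.

```python
import numpy as np

def expand_qc_ldpc(base_matrix, Z):
    """Expand a base matrix of shift values into a binary QC-LDPC parity-check matrix."""
    rows, cols = len(base_matrix), len(base_matrix[0])
    H = np.zeros((rows * Z, cols * Z), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            shift = base_matrix[r][c]
            if shift >= 0:
                # cyclically shifted identity block (circulant permutation matrix)
                H[r * Z:(r + 1) * Z, c * Z:(c + 1) * Z] = np.roll(np.eye(Z, dtype=np.uint8), shift, axis=1)
    return H

base = [[0, 1, -1],
        [2, -1, 0]]             # illustrative shift values only
H = expand_qc_ldpc(base, Z=4)
print(H.shape)                   # (8, 12)
```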