Abstract:
A data processing apparatus and a data processing method are provided. The apparatus includes M protocol stacks and at least one distribution service module. The M protocol stacks run on different logical cores of a processor and independently perform protocol processing on data packets to be processed. The distribution service module receives an input data packet from a network interface and sends it to one of the M protocol stacks for protocol processing; it also receives data packets processed by the M protocol stacks and sends them out through the network interface. By using the parallel processing capability of a multi-core system, the present disclosure enables multiple processes in user space of the operating system to perform protocol processing in parallel, thereby reducing the resource consumption caused by data packet copying.
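The following is a minimal sketch of the dispatch pattern described in this abstract: a distribution service receives packets and hands each one to one of M independent protocol-stack workers. The hash-based distribution policy and all names (ProtocolStackWorker, dispatch_packet, flow_hash) are illustrative assumptions, not details from the disclosure.

```python
# Sketch: a distribution service feeding M independent protocol-stack workers,
# each conceptually pinned to its own logical core. All names and the
# hash-based dispatch policy are illustrative, not from the original disclosure.
import multiprocessing as mp

M = 4  # number of protocol-stack workers (one per logical core)

def flow_hash(packet: bytes) -> int:
    # Toy stand-in for a real flow hash (e.g. over the 5-tuple): packets of
    # the same flow should always reach the same stack instance.
    return hash(packet[:16])

def protocol_stack_worker(stack_id: int, in_q: mp.Queue, out_q: mp.Queue) -> None:
    # Each worker performs protocol processing independently, so no locking
    # or packet copying is needed between stacks.
    while True:
        packet = in_q.get()
        if packet is None:                              # shutdown sentinel
            break
        processed = packet + b"|stack%d" % stack_id     # placeholder "protocol processing"
        out_q.put(processed)

def dispatch_packet(packet: bytes, in_queues: list) -> None:
    # Distribution service: pick a stack by flow hash and enqueue the packet.
    in_queues[flow_hash(packet) % M].put(packet)

if __name__ == "__main__":
    in_queues = [mp.Queue() for _ in range(M)]
    out_q = mp.Queue()
    workers = [mp.Process(target=protocol_stack_worker, args=(i, in_queues[i], out_q))
               for i in range(M)]
    for w in workers:
        w.start()

    for pkt in (b"pkt-a", b"pkt-b", b"pkt-c"):
        dispatch_packet(pkt, in_queues)

    for _ in range(3):
        print(out_q.get())       # processed packets, ready to send out the network interface

    for q in in_queues:
        q.put(None)
    for w in workers:
        w.join()
```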
Abstract:
Embodiments of the present invention disclose a cooperative caching method and apparatus, relating to the field of network technologies, to improve the local hit ratio without increasing local server costs. The technical solution provided in the present invention includes: obtaining, according to cache information, the end-to-end delay between a local server and a neighbor server, and the popularity recorded in a cache list, a consolidated gain value of a cached video segment and a consolidated gain value of a candidate video segment in the local server; and replacing the cached video segment with the candidate video segment when the two consolidated gain values meet a replacement condition.
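The sketch below illustrates the replacement decision described in this abstract. The specific consolidated-gain formula used here (popularity weighted by the delay saved when serving the segment locally instead of from a neighbor) and the replacement condition are assumptions made for demonstration; the disclosure only states that the gain is derived from end-to-end delay and popularity in the cache list.

```python
# Sketch of a consolidated-gain comparison for cooperative caching.
# The gain formula and replacement condition are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class VideoSegment:
    segment_id: str
    popularity: float          # request popularity from the cache list
    neighbor_delay_ms: float   # end-to-end delay to the neighbor server holding it
    local_delay_ms: float      # delay when served by the local server

def consolidated_gain(seg: VideoSegment) -> float:
    # Assumed gain: expected delay saved per request by caching the segment locally.
    return seg.popularity * max(seg.neighbor_delay_ms - seg.local_delay_ms, 0.0)

def maybe_replace(cached: VideoSegment, candidate: VideoSegment) -> VideoSegment:
    # Assumed replacement condition: the candidate's consolidated gain
    # exceeds that of the currently cached segment.
    if consolidated_gain(candidate) > consolidated_gain(cached):
        return candidate
    return cached

if __name__ == "__main__":
    cached = VideoSegment("seg-old", popularity=0.2, neighbor_delay_ms=40.0, local_delay_ms=5.0)
    candidate = VideoSegment("seg-new", popularity=0.6, neighbor_delay_ms=35.0, local_delay_ms=5.0)
    kept = maybe_replace(cached, candidate)
    print("segment kept in local cache:", kept.segment_id)   # prints seg-new here
```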
Abstract:
A parameter inference method is provided to solve the problem of poor precision in a Latent Dirichlet Allocation model. The method includes: calculating a Latent Dirichlet Allocation model according to a preset initial first hyperparameter, a preset initial second hyperparameter, a preset initial number of topics, a preset initial count matrix of documents and topics, and a preset initial count matrix of topics and words to obtain probability distributions; obtaining the number of topics, a first hyperparameter, and a second hyperparameter that maximize log likelihood functions of the probability distributions; and determining whether the number of topics, the first hyperparameter, and the second hyperparameter converge, and if not, feeding the number of topics, the first hyperparameter, and the second hyperparameter back into the Latent Dirichlet Allocation model and repeating the foregoing steps until an optimal number of topics, an optimal first hyperparameter, and an optimal second hyperparameter that maximize the log likelihood functions of the probability distributions are obtained.
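The sketch below illustrates the iterative scheme described in this abstract, using scikit-learn's LatentDirichletAllocation as a stand-in for the LDA inference step. The candidate grids, the coordinate-wise search, and the use of model.score() (approximate log likelihood) as the objective are assumptions made for illustration, not details of the original disclosure.

```python
# Sketch: iterate LDA fitting and hyperparameter selection until the number
# of topics and the two hyperparameters stop changing. Grids, the coordinate
# search, and the scoring objective are illustrative assumptions.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 50))          # toy document-word count matrix

grids = {
    "k": [5, 10, 15],            # number of topics
    "alpha": [0.1, 0.5, 1.0],    # first hyperparameter (document-topic prior)
    "eta": [0.01, 0.1],          # second hyperparameter (topic-word prior)
}
params = {"k": 5, "alpha": 0.1, "eta": 0.01}     # preset initial values

def log_likelihood(k: int, alpha: float, eta: float) -> float:
    # Step 1: compute the LDA model and its probability distributions, then
    # score them (approximate log likelihood of the data).
    model = LatentDirichletAllocation(n_components=k, doc_topic_prior=alpha,
                                      topic_word_prior=eta, max_iter=10,
                                      random_state=0)
    model.fit(X)
    return model.score(X)

for iteration in range(20):
    previous = dict(params)
    # Step 2: for each quantity in turn, pick the candidate value that
    # maximizes the log likelihood with the other two held fixed.
    for name, grid in grids.items():
        trial = dict(params)
        scored = []
        for value in grid:
            trial[name] = value
            scored.append((log_likelihood(trial["k"], trial["alpha"], trial["eta"]), value))
        params[name] = max(scored)[1]
    # Step 3: convergence test; once nothing changes, the current values are
    # taken as the optimal number of topics and hyperparameters.
    if params == previous:
        break

print("optimal parameters:", params)
```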