Abstract:
A hardware acceleration method includes obtaining compilation policy information and source code, where the compilation policy information indicates that a first code type matches a first processor and a second code type matches a second processor; analyzing a code segment in the source code according to the compilation policy information; determining a first code segment belonging to the first code type or a second code segment belonging to the second code type; compiling the first code segment into first executable code; sending the first executable code to the first processor; compiling the second code segment into second executable code; and sending the second executable code to the second processor.
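The classify-compile-dispatch loop described above can be sketched as follows. This is a toy model, not the patented method: the policy table, segment tags, and processor names are invented, and real compilation is replaced by a string tag.

```python
from dataclasses import dataclass, field

@dataclass
class Processor:
    """Stand-in for a target processor that receives executable code."""
    name: str
    queue: list = field(default_factory=list)

    def receive(self, executable):
        self.queue.append(executable)

def dispatch(policy, segments, processors):
    """policy: code type -> processor name; segments: (code type, text) pairs."""
    for code_type, text in segments:
        executable = f"compiled({text})"        # stand-in for real compilation
        processors[policy[code_type]].receive(executable)

cpu, fpga = Processor("cpu"), Processor("fpga")
dispatch({"serial": "cpu", "parallel": "fpga"},
         [("serial", "init()"), ("parallel", "matmul()")],
         {"cpu": cpu, "fpga": fpga})
print(cpu.queue)   # ['compiled(init())']
print(fpga.queue)  # ['compiled(matmul())']
```

Each segment type routes to exactly one processor, so adding a third code type only requires extending the policy table.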
Abstract:
A sample data annotation system includes an edge node and a central node. The edge node obtains a key feature of sample data and determines, based on the key feature, whether the sample data is unknown sample data; when the sample data is unknown sample data, the edge node performs annotation processing on the sample data to obtain a first annotation result and sends the first annotation result to the central node. The central node receives the first annotation result and determines whether the first annotation result indicates successful annotation. When the first annotation result indicates that the unknown sample data is successfully annotated, the central node performs consistency processing on the first annotation result to obtain a second annotation result; when the first annotation result indicates that the unknown sample data fails to be annotated, the central node performs annotation processing on the unknown sample data to obtain a third annotation result.
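The edge/central split can be sketched as two functions. All names, the known-feature set, and the "normalize the label" consistency step are illustrative assumptions; the abstract does not specify how annotation or consistency processing is implemented.

```python
KNOWN_FEATURES = {"cat", "dog"}   # key features the edge node already recognizes

def edge_annotate(sample, key_feature, annotator):
    """Return None for known samples; otherwise a first annotation result."""
    if key_feature in KNOWN_FEATURES:
        return None                       # known sample: nothing to send
    label = annotator(sample)             # edge-side annotation; may fail (None)
    return {"sample": sample, "label": label, "ok": label is not None}

def central_process(first_result, central_annotator):
    if first_result["ok"]:
        # consistency processing: here, just normalize the edge label
        return {"sample": first_result["sample"],
                "label": first_result["label"].lower()}
    # edge annotation failed: the central node annotates the sample itself
    return {"sample": first_result["sample"],
            "label": central_annotator(first_result["sample"])}

first = edge_annotate("img1", "ferret", lambda s: "Ferret")
second = central_process(first, lambda s: "central-label")
print(second)  # {'sample': 'img1', 'label': 'ferret'}
```

Known samples return `None` at the edge and never reach the central node, which is the bandwidth-saving point of filtering on the key feature.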
Abstract:
A method and an apparatus for updating an application identification model, and a storage medium are provided. A client device may determine a plurality of training samples based on identification results of a plurality of pieces of data traffic, and train an application identification model using the training samples. Then, the client device may upload model data of the trained application identification model to a server, and the server performs joint update based on the model data uploaded by a plurality of client devices. Then, the client device may obtain a jointly updated application identification model based on jointly updated model data delivered by the server.
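The joint-update step can be sketched as averaging the weights uploaded by the clients. This is an assumption for illustration: the abstract does not name the aggregation rule, and a one-parameter model stands in for a real application identification model.

```python
def local_update(w, samples, lr=0.1):
    """One pass of gradient steps for a 1-parameter model y ~ w * x."""
    for x, y in samples:
        w -= lr * (w * x - y) * x        # squared-error gradient step
    return w

def joint_update(client_weights):
    """Server-side joint update: average the uploaded model weights."""
    return sum(client_weights) / len(client_weights)

clients = [[(1.0, 2.0)], [(1.0, 4.0)]]           # one training sample per client
uploaded = [local_update(0.0, s) for s in clients]
global_w = joint_update(uploaded)                 # delivered back to the clients
print(uploaded, round(global_w, 3))  # [0.2, 0.4] 0.3
```

Only model data (here, a single weight) travels between client and server; the traffic samples themselves stay on the client devices.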
Abstract:
An accelerator loading apparatus obtains an acceleration requirement, where the acceleration requirement includes an acceleration function and acceleration performance of a to-be-created virtual machine; determines an image that meets the acceleration function and the acceleration performance; determines a target host in which an available accelerator that can load the image is located; and then sends an image loading command to the target host. The image loading command includes a descriptor of the image and is used to enable the target host to load the image for the available accelerator. In this way, a target host that can create the virtual machine is determined based on the acceleration function and the acceleration performance of the to-be-created virtual machine, and an image used for acceleration is loaded onto an available accelerator of the target host, implementing dynamic accelerator loading and deployment.
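The selection flow can be sketched as two lookups followed by a command. The image/host records, field names, and command format are invented for the example; only the ordering (image first, then a host with a free accelerator that can load it) follows the abstract.

```python
IMAGES = [
    {"id": "img-crypto-fast", "function": "crypto", "performance": 100},
    {"id": "img-crypto-slow", "function": "crypto", "performance": 10},
]
HOSTS = [
    {"name": "host-a", "free_accelerators": ["fpga0"], "loadable": {"img-crypto-slow"}},
    {"name": "host-b", "free_accelerators": ["fpga1"], "loadable": {"img-crypto-fast"}},
]

def select_and_load(function, performance):
    # 1. find an image meeting the acceleration function and performance
    image = next(i for i in IMAGES
                 if i["function"] == function and i["performance"] >= performance)
    # 2. find a host with an available accelerator that can load that image
    host = next(h for h in HOSTS
                if h["free_accelerators"] and image["id"] in h["loadable"])
    # 3. the image loading command carries a descriptor of the image
    return {"to": host["name"], "load": image["id"],
            "accelerator": host["free_accelerators"][0]}

cmd = select_and_load("crypto", 50)
print(cmd)  # {'to': 'host-b', 'load': 'img-crypto-fast', 'accelerator': 'fpga1'}
```

Host `host-a` is skipped even though it has a free accelerator, because that accelerator cannot load the image that meets the performance requirement.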
Abstract:
An accelerator loading apparatus obtains an acceleration requirement, where the acceleration requirement includes an acceleration function and acceleration performance of a to-be-created virtual machine. The apparatus determines a target accelerator that meets the acceleration function and the acceleration performance of the to-be-created virtual machine, determines an image corresponding to the target accelerator, and sends an image loading command to a target host in which the target accelerator is located, where the image loading command is used to enable the target host to load the image for the target accelerator.
Abstract:
A method for implementing fault detection includes: instructing, by a detection device, a detected device to configure a detected path and a return path, where the detected path is a path from a first physical port of the detected device to a second physical port of the detected device via a target unit of the detected device, the return path is a path from the second physical port to the detection device, and the target unit is a virtualized network function (VNF) or an accelerator; sending a detection packet to the detected device through the first physical port; and when receiving the detection packet transmitted through the detected path and the return path, determining that the detected path is not faulty. According to the method, it can further be determined that the path that passes through the VNF or the accelerator is not faulty.
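The loopback check reduces to comparing what was sent with what comes back. The sketch below is a toy model of that comparison only; the packet format and the fault behavior of the target unit are assumptions.

```python
def detected_path(packet, target_unit_ok=True):
    """First physical port -> target unit (VNF or accelerator) -> second port."""
    return packet if target_unit_ok else None   # a faulty unit drops the packet

def probe(detection_packet, target_unit_ok):
    """Send a detection packet; the return path carries it back to the detector."""
    received = detected_path(detection_packet, target_unit_ok)
    return received == detection_packet         # True: detected path not faulty

print(probe("probe-1", target_unit_ok=True))    # True
print(probe("probe-1", target_unit_ok=False))   # False
```

Because the probe traverses the target unit itself, a passing result covers the VNF/accelerator hop, not just the physical ports.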
Abstract:
A hardware acceleration method, a compiler, and a device are provided to improve code execution efficiency and implement hardware acceleration. The method includes: obtaining, by a compiler, compilation policy information and source code, where the compilation policy information indicates that a first code type matches a first processor and a second code type matches a second processor; analyzing, by the compiler, a code segment in the source code according to the compilation policy information, and determining a first code segment belonging to the first code type or a second code segment belonging to the second code type; and compiling, by the compiler, the first code segment into first executable code, and sending the first executable code to the first processor; and compiling the second code segment into second executable code, and sending the second executable code to the second processor.
Abstract:
The present invention discloses a method and an apparatus for implementing acceleration processing on a VNF. In the present invention, an acceleration request for performing acceleration processing on a virtualized network function (VNF) is received; a hardware acceleration device capable of performing acceleration processing on the VNF is determined according to the acceleration request; and an acceleration resource of the hardware acceleration device is allocated to the VNF, so as to perform acceleration processing on the VNF. According to the present invention, a corresponding hardware acceleration device can be dynamically selected for and allocated to a VNF, implementing virtualized management of the hardware acceleration device and improving resource utilization.
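The select-and-allocate step can be sketched as a capability/capacity match over a device inventory. The device records, capability names, and resource units are invented for illustration.

```python
DEVICES = [
    {"id": "hw0", "capabilities": {"ipsec"}, "free_units": 0},
    {"id": "hw1", "capabilities": {"ipsec", "dpi"}, "free_units": 4},
]

def allocate(vnf, function, units):
    """Pick a device that supports the function and has enough free resources."""
    device = next(d for d in DEVICES
                  if function in d["capabilities"] and d["free_units"] >= units)
    device["free_units"] -= units           # reserve the acceleration resource
    return {"vnf": vnf, "device": device["id"], "units": units}

grant = allocate("vFW-1", "ipsec", 2)
print(grant)  # {'vnf': 'vFW-1', 'device': 'hw1', 'units': 2}
```

Device `hw0` supports `ipsec` but has no free units, so the allocation lands on `hw1`; tracking `free_units` per device is what lets one physical accelerator serve several VNFs.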
Abstract:
The present disclosure provides a parameter acquisition method and device for general protocol parsing, and a general protocol parsing method and device. The method includes: acquiring a message to be parsed; performing, according to a preset state transition table, regular expression matching on the message to be parsed, and acquiring a state number and location information of a character corresponding to a matched rule; and acquiring the matching rule corresponding to the state number according to a preset rule matching table, and outputting a required field according to the matching rule, the location information, and the buffered message to be parsed, where the matching rule is an initial point sub-rule or an end point sub-rule. Embodiments of the present disclosure may implement general parsing of protocols.
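Table-driven matching of this kind can be sketched with a tiny invented pattern, `field=<value>;`: a state transition table drives the character scan, a rule matching table maps accepting state numbers to start/end sub-rules, and the recorded positions plus the buffered message yield the output field. Both tables and the pattern are assumptions for the example.

```python
# State transition table: (state, char) -> next state;
# any unmatched character resets the scan to state 0.
TRANSITIONS = {
    (0, 'f'): 1, (1, 'i'): 2, (2, 'e'): 3,
    (3, 'l'): 4, (4, 'd'): 5, (5, '='): 6,
}
RULES = {6: "start", 7: "end"}   # rule matching table: state number -> sub-rule

def parse(message):
    state, start = 0, None
    for pos, ch in enumerate(message):
        prev = state
        if state == 6:                            # inside the field value
            state = 7 if ch == ';' else 6
        else:
            state = TRANSITIONS.get((state, ch), 0)
        rule = RULES.get(state)
        if rule == "start" and prev != 6:
            start = pos + 1                       # initial point sub-rule matched
        elif rule == "end" and start is not None:
            return message[start:pos]             # field from the buffered message
    return None

print(parse("xxfield=value;yy"))  # value
```

Because only the tables encode the protocol, supporting a new protocol means swapping tables rather than rewriting the scanner, which is the "general parsing" point of the abstract.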
Abstract:
A data stream classification method includes obtaining, based on a packet feature of a current data stream and a behavior classification model, at least one first confidence that is of the current data stream and that corresponds to at least one data stream class, where the behavior classification model is based on a plurality of data stream samples; obtaining, based on feature information of the current data stream and a content classification model, at least one second confidence that is of the current data stream and that corresponds to the at least one data stream class, where the content classification model is based on one or more historical data streams; and determining a data stream class of the current data stream based on the at least one first confidence and the at least one second confidence.
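The final decision step can be sketched as fusing the two per-class confidence sets. Averaging is an assumed combination rule, since the abstract does not specify how the first and second confidences are combined.

```python
def classify(first_conf, second_conf):
    """first_conf / second_conf: dicts mapping data-stream class -> confidence."""
    combined = {c: (first_conf[c] + second_conf[c]) / 2 for c in first_conf}
    return max(combined, key=combined.get)   # class with highest fused confidence

behavior = {"video": 0.7, "web": 0.3}   # from the behavior classification model
content  = {"video": 0.4, "web": 0.6}   # from the content classification model
print(classify(behavior, content))  # video
```

Here the two models disagree (`web` wins on content alone), and fusion resolves the conflict in favor of the class with the higher combined confidence.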