Abstract:
A “rounding to zero” method preserves the exact motion vector and can also be implemented without division, so as to improve the precision of motion vector calculation, reflect the motion of objects in the video more faithfully, and obtain a more accurate motion vector prediction. Combining forward prediction coding and backward prediction coding, the present invention realizes a new prediction coding mode, which guarantees high coding efficiency in direct mode, is convenient for hardware implementation, and achieves the same effect as conventional B-frame coding.
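As an illustration of the two ideas named above, the sketch below scales a reference motion vector with rounding toward zero, first with an explicit truncating division and then with a division-free variant that uses a precomputed fixed-point reciprocal table and an arithmetic shift. The bit precision, the distance range of the table, and the example values are assumptions for illustration, not the patent's exact formulas.

```python
SHIFT = 14  # fixed-point precision of the reciprocal table (assumed)

def scale_mv_round_to_zero(mv_ref, dist_cur, dist_ref):
    """Scale a reference MV by dist_cur / dist_ref, rounding toward zero."""
    num = mv_ref * dist_cur
    q = abs(num) // dist_ref           # truncate the magnitude
    return q if num >= 0 else -q       # restore the sign -> rounding toward zero

# Precompute fixed-point reciprocals so per-block scaling needs no division.
RECIP = {d: (1 << SHIFT) // d for d in range(1, 65)}   # distances 1..64 (assumed)

def scale_mv_no_division(mv_ref, dist_cur, dist_ref):
    """Division-free approximation of the same rounding-toward-zero scaling."""
    num = mv_ref * dist_cur
    q = (abs(num) * RECIP[dist_ref]) >> SHIFT
    return q if num >= 0 else -q

# Example: deriving forward/backward MVs of a direct-mode block from one reference MV.
mv_fwd = scale_mv_no_division(13, dist_cur=2, dist_ref=3)    # ~  13 * 2 / 3 ->  8
mv_bwd = -scale_mv_no_division(13, dist_cur=1, dist_ref=3)   # ~ -13 * 1 / 3 -> -4
```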
Abstract:
Disclosed are a weight data storage method and a convolution computation method that may be implemented in a neural network. The weight data storage method comprises searching for effective weights in a weight convolution kernel matrix and acquiring an index of the effective weights. The effective weights are non-zero weights, and the index of the effective weights marks the positions of the effective weights in the weight convolution kernel matrix. The weight data storage method further comprises storing the effective weights and the index of the effective weights. According to the weight data storage method and the convolution computation method of the present disclosure, storage space can be saved and computation efficiency can be improved.
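A minimal sketch of the storage scheme as described: only the non-zero (effective) weights of a 2-D kernel are kept together with an index of their positions, and the convolution loops only over those stored entries. The NumPy representation and the "valid" convolution variant are assumptions made for illustration, not the disclosed data format.

```python
import numpy as np

def store_effective_weights(kernel):
    """Return (values, index) for the non-zero entries of a 2-D kernel."""
    index = np.argwhere(kernel != 0)            # positions of the effective weights
    values = kernel[kernel != 0]                # the effective weights themselves
    return values, index

def sparse_conv2d_valid(feature, values, index, kshape):
    """'valid' convolution that multiplies only the stored effective weights."""
    kh, kw = kshape
    oh, ow = feature.shape[0] - kh + 1, feature.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for w, (r, c) in zip(values, index):
        out += w * feature[r:r + oh, c:c + ow]  # zero weights are skipped entirely
    return out

kernel = np.array([[0, 1, 0], [2, 0, 0], [0, 0, 3]], dtype=float)
vals, idx = store_effective_weights(kernel)     # 3 weights stored instead of 9
feature = np.arange(25, dtype=float).reshape(5, 5)
out = sparse_conv2d_valid(feature, vals, idx, kernel.shape)
```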
Abstract:
The present invention provides a fractal-tree-structure-based data transmission device and method, a control device, and an intelligent chip. The device comprises: a central node serving as the communication data center of a network-on-chip and used for broadcasting or multicasting communication data to a plurality of leaf nodes; the plurality of leaf nodes serving as communication data nodes of the network-on-chip and used for transmitting the communication data to the central node; and forwarder modules for connecting the central node with the plurality of leaf nodes and forwarding the communication data. The central node, the forwarder modules, and the plurality of leaf nodes are connected in a fractal tree network structure; the central node is directly connected to M forwarder modules and/or leaf nodes, and each forwarder module is directly connected to M next-level forwarder modules and/or leaf nodes.
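A toy software analogue (an assumption, not the hardware design) of the described topology: a central node fans out through levels of forwarder modules to leaf nodes, each node having at most M children, and a broadcast walks the tree so that every leaf, or a selected multicast subset, receives the payload. The class names, M, and the payload type are illustrative.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.children = []      # forwarders or leaves, at most M per node
        self.received = None    # last payload seen by a leaf

def build_fractal_tree(depth, M, prefix="c"):
    """Each node fans out to M next-level forwarders or leaves."""
    node = Node(prefix)
    if depth == 0:
        return node                              # leaf node
    node.children = [build_fractal_tree(depth - 1, M, f"{prefix}.{i}")
                     for i in range(M)]
    return node

def broadcast(node, payload, targets=None):
    """Forwarders copy the payload downward; leaves store it.
    If `targets` is given, only the matching leaves keep it (multicast)."""
    if not node.children:
        if targets is None or node.name in targets:
            node.received = payload
        return
    for child in node.children:
        broadcast(child, payload, targets)

central = build_fractal_tree(depth=2, M=4)       # 16 leaves under 4 forwarders
broadcast(central, payload=[1, 2, 3])            # every leaf receives the data
```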
Abstract:
One example of a device comprises: a central node serving as the communication data center of a network-on-chip; a plurality of leaf nodes serving as communication data nodes of the network-on-chip and used for transmitting communication data to the central node; and forwarder modules for connecting the central node with the plurality of leaf nodes and forwarding the communication data. The plurality of leaf nodes are divided into N groups, each group having the same number of leaf nodes; the central node is individually in communication connection with each group of leaf nodes by means of the forwarder modules; the communication structure constituted by each group of leaf nodes has self-similarity; and the plurality of leaf nodes are in communication connection with the central node through multiple levels of forwarder modules arranged as a complete multi-way tree.
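The sketch below only illustrates the index arithmetic of a complete M-ary tree and how the leaves fall into equal, self-similar groups, one per top-level forwarder; M, the number of levels, and the array layout are assumptions, not the disclosed hardware.

```python
M = 4          # fan-out of the central node and of each forwarder (assumed)
LEVELS = 2     # forwarder levels between the central node and the leaves (assumed)

def parent(i):
    return (i - 1) // M

def children(i, total):
    return [c for c in range(M * i + 1, M * i + M + 1) if c < total]

total = sum(M ** k for k in range(LEVELS + 1))          # 1 + 4 + 16 = 21 nodes
leaves = [i for i in range(total) if not children(i, total)]

def subtree_root(leaf):
    """The top-level forwarder (a child of central node 0) above this leaf."""
    node = leaf
    while parent(node) != 0:
        node = parent(node)
    return node

def path_to_central(leaf):
    """Hops a leaf's data takes upward through forwarders to the central node."""
    path = [leaf]
    while path[-1] != 0:
        path.append(parent(path[-1]))
    return path

groups = {}
for leaf in leaves:                    # equal-sized, self-similar groups of leaves
    groups.setdefault(subtree_root(leaf), []).append(leaf)
```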
Abstract:
The present invention discloses a method for encoding a flag of a picture while encoding an I frame: first, setting a start code of the I frame picture to be coded, for marking the start of the I frame; setting a flag indicating whether to encode an identification field; and judging the set flag: if the flag indicates that the identification field of the time and control code of a video tape recorder is to be encoded, encoding that identification field, otherwise not encoding it. In the present invention, the start code is added into the prediction picture header for marking the start of one frame of picture data, and the flag information of the time_code identification field indicates whether the time_code identification field is present in the picture. This achieves the objective of identifying the time_code identification field while avoiding the encoding of additional identification information, thereby improving coding efficiency, and it can be applied to a variety of video/audio technical standards.
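A hedged sketch of the conditional encoding step: write the I-frame start code, write a one-bit flag, and write the time_code identification field only when the flag is set. The start-code value, the field widths, and the BitWriter helper are illustrative assumptions rather than the exact syntax of any standard.

```python
class BitWriter:
    def __init__(self):
        self.bits = []
    def write(self, value, nbits):
        self.bits.extend((value >> (nbits - 1 - k)) & 1 for k in range(nbits))

I_FRAME_START_CODE = 0x000001B3          # assumed 32-bit start code

def encode_i_frame_header(bw, time_code_flag, time_code=0):
    bw.write(I_FRAME_START_CODE, 32)     # marks the start of the I-frame picture
    bw.write(time_code_flag, 1)          # flag: is the time_code field present?
    if time_code_flag:
        bw.write(time_code, 24)          # VTR time-and-control-code field (assumed width)
    # ... remaining picture-header syntax elements would follow here

bw = BitWriter()
encode_i_frame_header(bw, time_code_flag=1, time_code=0x123456)
```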
Abstract:
A content lock firewall method based on a white list includes performing semantic parsing on the payload of a data packet received by a website to obtain parsed texts of the received data packet, and matching the parsed texts of the received data packet against a text pattern library to decide whether to forward or intercept the received data packet, the text pattern library comprising a plurality of text patterns, each of which includes a sequence of keywords and a value range for each keyword. For a website with a relatively fixed function, deploying the firewall can effectively defend against known and new network attacks, and the website can keep running even with vulnerabilities while its normal functions are ensured, without expensive upgrades.
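A simplified sketch of the whitelist decision: each text pattern lists an expected sequence of keywords and a permitted value range (here expressed as a regular expression) for each keyword, and a parsed payload is forwarded only if it matches some pattern. The pattern contents and the query-string parsing stand in for the semantic parsing described above and are assumptions.

```python
import re
from urllib.parse import parse_qsl

TEXT_PATTERNS = [
    {"keywords": ["user", "page"],
     "ranges": {"user": r"^[A-Za-z0-9_]{1,16}$", "page": r"^\d{1,4}$"}},
    {"keywords": ["q"],
     "ranges": {"q": r"^[\w\s]{1,64}$"}},
]

def parse_payload(payload: bytes):
    """Very small stand-in for semantic parsing: decode a query-string body."""
    return parse_qsl(payload.decode("utf-8", errors="replace"))

def matches(pairs, pattern):
    keys = [k for k, _ in pairs]
    if keys != pattern["keywords"]:                  # keyword sequence must match
        return False
    return all(re.fullmatch(pattern["ranges"][k], v) for k, v in pairs)

def decide(payload: bytes) -> str:
    pairs = parse_payload(payload)
    return "forward" if any(matches(pairs, p) for p in TEXT_PATTERNS) else "intercept"

decide(b"user=alice&page=3")        # -> "forward"
decide(b"user=alice'--&page=3")     # -> "intercept" (value outside the allowed range)
```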
Abstract:
The embodiments of the present disclosure provide an information processing method, an information processing apparatus, an electronic device, and a storage medium. The method is applied to a key-value storage system with key-value separation, in which a storage unit includes a key partition used for storing LSMT information and a plurality of storage partitions used for storing key-value information. The method includes: selecting, according to the LSMT information in the key partition, a target storage partition with the highest invalid information rate from the storage partitions; detecting validity information corresponding to each piece of key-value information in the target storage partition, and selecting the valid key-value information according to that validity information; and transferring and storing the valid key-value information to a first storage partition other than the target storage partition, and erasing the key-value information stored in the target storage partition.
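A schematic sketch (not the actual storage engine) of the garbage-collection flow described above: pick the storage partition with the highest invalid-information rate according to the index in the key partition, keep only the key-value entries the index still points at, move them to another partition, and erase the source partition. The in-memory data structures are assumptions.

```python
def gc_step(lsmt_index, partitions):
    """lsmt_index: key -> (partition_id, slot) of the *current* value.
    partitions: partition_id -> list of (key, value) entries (stale ones included).
    Assumes at least two partitions exist."""

    def invalid_rate(pid):
        entries = partitions[pid]
        valid = sum(1 for slot, (k, _) in enumerate(entries)
                    if lsmt_index.get(k) == (pid, slot))
        return 1 - valid / len(entries) if entries else 0.0

    # 1. Select the target partition with the highest invalid-information rate.
    target = max(partitions, key=invalid_rate)

    # 2. Keep only the valid key-value entries of the target partition.
    valid_entries = [(k, v) for slot, (k, v) in enumerate(partitions[target])
                     if lsmt_index.get(k) == (target, slot)]

    # 3. Transfer them to a different partition and update the index.
    dest = next(pid for pid in partitions if pid != target)
    for k, v in valid_entries:
        partitions[dest].append((k, v))
        lsmt_index[k] = (dest, len(partitions[dest]) - 1)

    # 4. Erase the target partition.
    partitions[target] = []
```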
Abstract:
Disclosed embodiments relate to a convolutional neural network computing method and system based on weight kneading, comprising: arranging original weights in a computation sequence and aligning them by bit to obtain a weight matrix; removing slack bits from the weight matrix, allowing the essential bits in each column of the weight matrix to fill the vacancies according to the computation sequence to obtain an intermediate matrix, and removing null rows from the intermediate matrix to obtain a kneading matrix, wherein each row of the kneading matrix serves as a kneading weight; obtaining positional information of the activation corresponding to each bit of the kneading weight; dividing the kneading weight by bit into multiple weight segments; processing the summation of the weight segments and the corresponding activations according to the positional information; and sending a processing result to an adder tree to obtain an output feature map by executing shift-and-add on the processing result.
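A rough sketch of the kneading procedure as described: write each weight as a row of bits, collect the essential (non-zero) bits of every column in computation order, compact them into kneaded rows with null rows removed, and record for each surviving bit which activation it multiplies; a reference dot product then reproduces the result by shift-and-add. The bit width and data layout are assumptions for illustration.

```python
def knead(weights, bits=8):
    """Return (kneading_rows, position_rows).

    kneading_rows[r][b] = 1 if kneaded weight r has an essential bit in column b.
    position_rows[r][b] = index of the activation that bit multiplies, or None.
    """
    # per-column lists of the original-weight indices holding an essential bit
    columns = [[i for i, w in enumerate(weights) if (w >> (bits - 1 - b)) & 1]
               for b in range(bits)]
    height = max((len(c) for c in columns), default=0)   # null rows removed
    kneading_rows = [[1 if r < len(columns[b]) else 0 for b in range(bits)]
                     for r in range(height)]
    position_rows = [[columns[b][r] if r < len(columns[b]) else None
                      for b in range(bits)]
                     for r in range(height)]
    return kneading_rows, position_rows

def dot_with_kneaded(weights, activations, bits=8):
    """Reference dot product computed from the kneaded form by shift-and-add."""
    kneaded, positions = knead(weights, bits)
    total = 0
    for krow, prow in zip(kneaded, positions):
        for b, (bit, act_idx) in enumerate(zip(krow, prow)):
            if bit:  # each essential bit contributes activation * 2^(bit position)
                total += activations[act_idx] << (bits - 1 - b)
    return total

weights = [0b00000101, 0b00000011]        # mostly slack (zero) bits
acts = [2, 3]
assert dot_with_kneaded(weights, acts) == sum(w * a for w, a in zip(weights, acts))
```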
Abstract:
The present disclosure provides a neural network processing system that comprises a multi-core processing module composed of a plurality of core processing modules and used for executing vector multiplication and addition operations in a neural network operation, an on-chip storage medium, an on-chip address index module, and an ALU module for executing, on input data acquired from the multi-core processing module or the on-chip storage medium, non-linear operations that cannot be completed by the multi-core processing module, wherein the plurality of core processing modules share an on-chip storage medium and an ALU module, or the plurality of core processing modules each have an independent on-chip storage medium and ALU module. The present disclosure improves the operating speed of the neural network processing system, such that the neural network processing system achieves higher performance and greater efficiency.
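An illustrative software analogue (an assumption, not the hardware) of the division of labour described above: several core workers share the vector multiply-and-add part of a layer, and a separate ALU step applies a non-linear operation (ReLU here) that the cores do not perform.

```python
import numpy as np

NUM_CORES = 4   # assumed number of core processing modules

def core_partial(weights_part, inputs):
    """Vector multiplication and addition handled by one core."""
    return weights_part @ inputs

def alu_nonlinear(x):
    """Non-linear operation (here ReLU) applied outside the cores."""
    return np.maximum(x, 0)

def layer_forward(weights, inputs):
    # split the output neurons across the core processing modules
    parts = np.array_split(weights, NUM_CORES, axis=0)
    partials = [core_partial(p, inputs) for p in parts]       # work done on the cores
    return alu_nonlinear(np.concatenate(partials))            # ALU-module step

W = np.random.randn(8, 16)
x = np.random.randn(16)
y = layer_forward(W, x)
```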
Abstract:
The present disclosure discloses an adder device, a data accumulation method, and a data processing device. The adder device comprises: a first adder module provided with an adder tree unit composed of a multi-stage adder array and a first control unit, wherein the adder tree unit accumulates data step by step based on a control signal of the first control unit; a second adder module comprising a two-input addition/subtraction operation unit and a second control unit, and used for performing an addition or subtraction operation on input data; a shift operation module for performing a left-shift operation on output data of the first adder module; an AND operation module for performing an AND operation on output data of the shift operation module and output data of the second adder module; and a controller module.
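A behavioural sketch (an assumption) of the four datapaths named above: an adder tree that accumulates a block of inputs stage by stage, a two-input add/subtract unit, a left shift applied to the adder-tree output, and a final AND of the shifted value with the add/subtract result. The bit width and the way control is passed in are illustrative.

```python
MASK = (1 << 32) - 1   # assumed 32-bit datapath

def adder_tree(values):
    """Accumulate step by step: each stage adds neighbouring pairs."""
    stage = list(values)
    while len(stage) > 1:
        if len(stage) % 2:
            stage.append(0)
        stage = [(stage[i] + stage[i + 1]) & MASK for i in range(0, len(stage), 2)]
    return stage[0] if stage else 0

def add_sub(a, b, subtract=False):
    """Two-input addition/subtraction operation unit."""
    return (a - b if subtract else a + b) & MASK

def device(values, a, b, shift, subtract=False):
    tree_out = adder_tree(values)              # first adder module
    two_out = add_sub(a, b, subtract)          # second adder module
    shifted = (tree_out << shift) & MASK       # shift operation module
    return shifted & two_out                   # AND operation module

result = device([1, 2, 3, 4, 5], a=10, b=3, shift=2, subtract=True)
```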