Method for obtaining image reference block in a code of mode of fixed reference frame number
    Invention Grant (In Force)

    Publication No.: US07974344B2

    Publication Date: 2011-07-05

    Application No.: US10584777

    Application Date: 2004-07-08

    CPC classification number: H04N19/577

    Abstract: A “rounding toward zero” method preserves the exact motion vector and can be implemented without division, which improves the precision of motion vector calculation, reflects the motion of objects in the video more faithfully, and yields a more accurate motion vector prediction. Combined with forward prediction coding and backward prediction coding, the present invention realizes a new prediction coding mode that guarantees high coding efficiency in direct mode, is convenient for hardware implementation, and achieves the same effect as conventional B-frame coding.
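
    The abstract above describes deriving direct-mode motion vectors with a division-free “rounding toward zero” scaling. The sketch below is a minimal illustration of that idea using a precomputed reciprocal table and shifts; the table size, the 2^9 fixed-point scale, and the function names are assumptions for illustration, not the patent's exact formula.

```python
# Hedged sketch: temporal scaling of a co-located motion vector for direct-mode
# prediction using "round toward zero" and no division. The reciprocal-table
# trick (multiply by 512 // d, then shift) is a common division-free idiom and
# only approximates the exact ratio for large distances.

MULT_SHIFT = 9  # fixed-point scale 2**9 = 512 (assumed)
RECIPROCAL = [0] + [(1 << MULT_SHIFT) // d for d in range(1, 65)]  # ~1/d in fixed point

def scale_mv_direct(mv_colocated, dist_current, dist_colocated):
    """Scale one MV component by dist_current/dist_colocated without division.

    Rounding toward zero keeps small motion vectors from drifting away from
    zero, which the abstract argues better reflects the true object motion.
    """
    scaled = mv_colocated * dist_current * RECIPROCAL[dist_colocated]
    if scaled >= 0:
        return scaled >> MULT_SHIFT          # truncation == rounding toward zero
    return -((-scaled) >> MULT_SHIFT)        # mirror the truncation for negatives

if __name__ == "__main__":
    # forward MV derived from the co-located MV (-7, 13), distance ratio 2/3
    print(scale_mv_direct(-7, 2, 3), scale_mv_direct(13, 2, 3))  # -> -4 8
```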


    Weight data storage method and neural network processor based on the method

    Publication No.: US11531889B2

    Publication Date: 2022-12-20

    Application No.: US16762810

    Application Date: 2018-02-28

    Abstract: Disclosed are a weight data storage method and a convolution computation method that may be implemented in a neural network. The weight data storage method comprises searching for effective weights in a weight convolution kernel matrix and acquiring an index of effective weights. The effective weights are non-zero weights, and the index of effective weights is used to mark the position of the effective weights in the weight convolution kernel matrix. The weight data storage method further comprises storing the effective weights and the index of effective weights. According to the weight data storage method and the convolution computation method of the present disclosure, storage space can be saved, and computation efficiency can be improved.
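
    As a rough illustration of the storage scheme described above, the sketch below keeps only the non-zero (“effective”) weights of a kernel together with an index of their positions, and uses that index to skip zero multiplications during a convolution. The (row, col) index layout and function names are assumptions, not the patent's encoding.

```python
import numpy as np

def store_effective_weights(kernel):
    """Return (values, index): the non-zero weights and their (row, col) positions."""
    index = np.argwhere(kernel != 0)       # index of effective weights
    values = kernel[kernel != 0]           # the effective (non-zero) weights themselves
    return values, index

def sparse_conv2d_valid(image, values, index, ksize):
    """'Valid' CNN-style correlation that multiplies only the effective weights."""
    kh, kw = ksize
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for (r, c), w in zip(index, values):   # zero weights are never touched
        out += w * image[r:r + oh, c:c + ow]
    return out

if __name__ == "__main__":
    kernel = np.array([[0., 1., 0.], [2., 0., 0.], [0., 0., 3.]])
    image = np.arange(25, dtype=float).reshape(5, 5)
    values, index = store_effective_weights(kernel)
    print(index.tolist())                  # stored positions of the effective weights
    print(sparse_conv2d_valid(image, values, index, kernel.shape))
```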

    FRACTAL TREE STRUCTURE-BASED DATA TRANSMIT DEVICE AND METHOD, CONTROL DEVICE, AND INTELLIGENT CHIP

    Publication No.: US20210075639A1

    Publication Date: 2021-03-11

    Application No.: US17100570

    Application Date: 2020-11-20

    Abstract: The present invention provides a fractal tree structure-based data transmit device and method, a control device, and an intelligent chip. The device comprises: a central node that serves as the communication data center of a network-on-chip and broadcasts or multicasts communication data to a plurality of leaf nodes; the plurality of leaf nodes, which serve as communication data nodes of the network-on-chip and transmit the communication data to the central node; and forwarder modules that connect the central node with the plurality of leaf nodes and forward the communication data. The central node, the forwarder modules, and the plurality of leaf nodes are connected in the fractal tree network structure; the central node is directly connected to M forwarder modules and/or leaf nodes, and any forwarder module is directly connected to M next-level forwarder modules and/or leaf nodes.
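
    A minimal sketch of the interconnect described above, assuming a software model in which every node has exactly M children and multicast is implemented by filtering a full broadcast; real hardware would prune branches instead. All class and function names are illustrative.

```python
# Hedged sketch of the fractal (self-similar) tree: a central node at the root,
# forwarder modules at the inner levels, leaf nodes at the bottom, each node
# directly connected to M children.

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []                  # M forwarders and/or leaves

    def broadcast(self, data):
        """Push data down through every forwarder to all leaves."""
        received = []
        for child in self.children:
            received += child.broadcast(data)
        return received or [(self.name, data)]   # a leaf records the delivery

    def multicast(self, data, targets):
        """Deliver data only to the named leaves (filtering a broadcast for simplicity)."""
        return [(name, payload) for name, payload in self.broadcast(data)
                if name in targets]

def build_fractal_tree(prefix, levels, m):
    """Every node has exactly M children; each subtree repeats the same shape."""
    node = Node(prefix)
    if levels > 0:
        node.children = [build_fractal_tree(f"{prefix}.{i}", levels - 1, m)
                         for i in range(m)]
    return node

if __name__ == "__main__":
    central = build_fractal_tree("hub", levels=2, m=4)     # 4-ary tree, 16 leaves
    print(len(central.broadcast("cfg")))                   # -> 16 deliveries
    print(central.multicast("w", {"hub.0.1", "hub.3.2"}))  # -> 2 deliveries
```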

    Fractal tree structure-based data transmit device and method, control device, and intelligent chip

    Publication No.: US10904034B2

    Publication Date: 2021-01-26

    Application No.: US15781608

    Application Date: 2016-06-17

    Abstract: One example of a device comprises: a central node that serves as the communication data center of a network-on-chip; a plurality of leaf nodes that serve as communication data nodes of the network-on-chip and transmit the communication data to the central node; and forwarder modules that connect the central node with the plurality of leaf nodes and forward the communication data. The plurality of leaf nodes are divided into N groups, each group having the same number of leaf nodes; the central node is individually in communication connection with each group of leaf nodes by means of a forwarder module; the communication structure constituted by each group of leaf nodes has self-similarity; and the plurality of leaf nodes are in communication connection with the central node in a complete multi-way tree approach by means of multiple levels of forwarder modules.
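
    The sketch below illustrates only the grouping aspect of this abstract: the leaf nodes are divided into N equally sized, self-similar groups, each reached through its own forwarder, and their data is gathered to the central node. It is a schematic software model under those assumptions, not the device itself.

```python
def build_groups(leaf_values, n_groups):
    """Divide the leaves into N groups of equal size (self-similar subtrees)."""
    assert len(leaf_values) % n_groups == 0, "groups must have the same number of leaves"
    size = len(leaf_values) // n_groups
    return [leaf_values[i * size:(i + 1) * size] for i in range(n_groups)]

def forwarder_gather(group):
    """A forwarder module collects the data of its own group of leaf nodes."""
    return list(group)

def central_gather(groups):
    """The central node receives every group's data through that group's forwarder."""
    collected = []
    for group in groups:
        collected += forwarder_gather(group)
    return collected

if __name__ == "__main__":
    leaves = [f"leaf{i}" for i in range(16)]
    groups = build_groups(leaves, n_groups=4)   # complete 4-way tree, two levels
    print([len(g) for g in groups])             # -> [4, 4, 4, 4]
    print(central_gather(groups))               # all 16 leaves reach the central node
```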

    Method for encoding the flag of the image
    Invention Grant (In Force)

    Publication No.: US07941034B2

    Publication Date: 2011-05-10

    Application No.: US10572007

    Application Date: 2004-07-05

    CPC classification number: H04N9/8205 H04N9/8042

    Abstract: The present invention discloses a method for encoding a flag of an image while encoding an I frame: first, setting a start code of the I-frame picture to be coded, to mark the start of the I frame; setting a flag indicating whether to encode an identification field; and then judging the set flag: if the flag indicates that the identification field of the time and control code of a video tape recorder is to be encoded, encoding that identification field; otherwise, not encoding it. In the present invention, the start code is added to the prediction picture header to mark the start of one frame of picture data, and the flag information of the time_code identification field indicates whether the time_code identification field is present in the picture. This achieves the objective of identifying the time_code identification field while avoiding the encoding of additional identification information, which improves coding efficiency and can be applied to all kinds of video/audio technical standards.
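
    A minimal sketch of the header layout the abstract describes: an I-frame start code, a one-bit flag stating whether a time_code identification field follows, and the optional field itself. The start-code value, the byte-aligned flag, and the ASCII time code are placeholders, not values from any particular standard.

```python
I_FRAME_START_CODE = b"\x00\x00\x01\xb3"      # placeholder 4-byte start code (assumed)

def encode_picture_header(time_code=None):
    """Emit the I-frame start code, the time_code flag, and optionally the field."""
    header = bytearray(I_FRAME_START_CODE)
    if time_code is None:
        header.append(0)                      # flag = 0: no time_code field coded
    else:
        header.append(1)                      # flag = 1: time_code field follows
        header += time_code.encode("ascii")   # e.g. "12:34:56:07" (HH:MM:SS:FF)
    return bytes(header)

def decode_picture_header(data):
    """Recover the flag and, if present, the time_code identification field."""
    assert data.startswith(I_FRAME_START_CODE), "not an I-frame picture header"
    flag = data[len(I_FRAME_START_CODE)]
    body = data[len(I_FRAME_START_CODE) + 1:]
    return flag, (body.decode("ascii") if flag else None)

if __name__ == "__main__":
    print(decode_picture_header(encode_picture_header("12:34:56:07")))  # (1, '12:34:56:07')
    print(decode_picture_header(encode_picture_header()))               # (0, None)
```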


    WHITE LIST-BASED CONTENT LOCK FIREWALL METHOD AND SYSTEM

    Publication No.: US20250133060A1

    Publication Date: 2025-04-24

    Application No.: US17773166

    Application Date: 2021-06-04

    Abstract: A content lock firewall method based on a white list performs semantic parsing on the payload of a data packet received by a website to obtain parsed texts of the received data packet, and matches the parsed texts against a text pattern library to decide whether to forward or intercept the received data packet. The text pattern library comprises a plurality of text patterns, and each text pattern includes a sequence of keywords and a value range for each keyword. For a website with a relatively fixed function, deploying the firewall can effectively defend against both known and new network attacks, allowing the website to keep running with known vulnerabilities while its normal functions are preserved, without expensive upgrading.
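
    The sketch below illustrates the matching step under stated assumptions: each white-list text pattern is an ordered keyword sequence with an allowed value set or range per keyword, and a payload is forwarded only if it matches some pattern. The pattern syntax, the toy "semantic parsing", and the sample patterns are invented for illustration.

```python
TEXT_PATTERNS = [
    # keyword sequence, with the allowed value set or numeric range for each keyword
    [("action", {"view", "list"}), ("page", range(1, 1000))],
    [("action", {"download"}), ("file_id", range(1, 100000))],
]

def parse_payload(payload):
    """Tiny stand-in for semantic parsing: split 'k=v&k=v' into (key, value) pairs."""
    return [tuple(field.split("=", 1)) for field in payload.split("&") if "=" in field]

def matches(parsed, pattern):
    """The keyword sequence must match in order and every value must be in range."""
    if len(parsed) != len(pattern):
        return False
    for (key, value), (want_key, allowed) in zip(parsed, pattern):
        if key != want_key:
            return False
        if isinstance(allowed, range):
            if not value.isdigit() or int(value) not in allowed:
                return False
        elif value not in allowed:
            return False
    return True

def firewall_decision(payload):
    """Forward only payloads that match a white-listed pattern; intercept the rest."""
    parsed = parse_payload(payload)
    return "forward" if any(matches(parsed, p) for p in TEXT_PATTERNS) else "intercept"

if __name__ == "__main__":
    print(firewall_decision("action=view&page=3"))               # forward
    print(firewall_decision("action=view&page=3;DROP TABLE x"))  # intercept
```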

    INFORMATION PROCESSING METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM

    Publication No.: US20250124012A1

    Publication Date: 2025-04-17

    Application No.: US18793745

    Application Date: 2024-08-02

    Abstract: The embodiments of the present disclosure provide an information processing method, an information processing apparatus, an electronic device, and a storage medium. The method is applied to a key-value storage system with key-value separation, in which a storage unit includes a key partition used for storing LSMT information and a plurality of storage partitions used for storing key-value information. The method includes: selecting, according to the LSMT information in the key partition, a target storage partition with the highest invalid information rate from the storage partitions; detecting validity information corresponding to each piece of key-value information in the target storage partition, and screening out valid key-value information according to that validity information; and transferring the valid key-value information to a first storage partition other than the target storage partition, and erasing the key-value information stored in the target storage partition.
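
    A schematic sketch of the garbage-collection flow described above: pick the storage partition with the highest invalid-information rate, keep only the key-value pairs the LSMT index still points at, move them to another partition, and erase the victim. The in-memory dictionary standing in for the LSMT index is an assumption for illustration.

```python
def pick_target_partition(partitions, lsmt_index):
    """Choose the storage partition whose stored entries are most often stale."""
    def invalid_rate(pid):
        entries = partitions[pid]
        stale = sum(1 for key, loc in entries if lsmt_index.get(key) != (pid, loc))
        return stale / max(len(entries), 1)
    return max(partitions, key=invalid_rate)

def collect_partition(partitions, lsmt_index, target, destination):
    """Move still-valid entries out of `target`, update the index, erase the rest."""
    for key, old_loc in list(partitions[target]):
        if lsmt_index.get(key) == (target, old_loc):        # validity check
            new_loc = len(partitions[destination])
            partitions[destination].append((key, new_loc))
            lsmt_index[key] = (destination, new_loc)         # point the index at the new home
    partitions[target] = []                                  # erase the victim partition

if __name__ == "__main__":
    # partition -> list of (key, location); the LSMT index maps key -> live location
    partitions = {"p0": [("a", 0), ("b", 1), ("a", 2)], "p1": []}
    lsmt_index = {"a": ("p0", 2), "b": ("p0", 1)}
    victim = pick_target_partition(partitions, lsmt_index)   # "p0" (one stale entry)
    collect_partition(partitions, lsmt_index, victim, "p1")
    print(victim, partitions, lsmt_index)
```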

    Convolutional neural network computing method and system based on weight kneading

    Publication No.: US12271807B2

    Publication Date: 2025-04-08

    Application No.: US17250892

    Application Date: 2019-05-21

    Abstract: Disclosed embodiments relate to a convolutional neural network computing method and system based on weight kneading, comprising: arranging original weights in a computation sequence and aligning them by bit to obtain a weight matrix; removing slack bits in the weight matrix and letting the essential bits in each column of the weight matrix fill the vacancies according to the computation sequence to obtain an intermediate matrix; removing null rows in the intermediate matrix to obtain a kneading matrix, wherein each row of the kneading matrix serves as a kneading weight; obtaining positional information of the activation corresponding to each bit of the kneading weight; dividing the kneading weight by bit into multiple weight segments; processing the summation of the weight segments and the corresponding activations according to the positional information; and sending the processing result to an adder tree to obtain an output feature map by executing shift-and-add on the processing result.
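
    The sketch below is a simplified model of weight kneading under assumed 8-bit unsigned weights: the essential (non-zero) bits of each bit column are packed upward, the positional information records which activation each surviving bit scales, and the dot product is finished by shift-and-add. It reproduces the ordinary dot product but does not model the adder-tree hardware.

```python
BITS = 8  # assumed unsigned 8-bit weights, purely for illustration

def knead(weights):
    """Pack the essential bits of every bit column upward; track their activation indices."""
    # columns[b] lists the indices of weights whose bit b is essential (non-zero)
    columns = [[i for i, w in enumerate(weights) if (w >> b) & 1] for b in range(BITS)]
    depth = max((len(col) for col in columns), default=0)   # rows beyond this would be null
    kneading = [[1 if row < len(col) else 0 for col in columns] for row in range(depth)]
    positions = [[col[row] if row < len(col) else None for col in columns]
                 for row in range(depth)]
    return kneading, positions

def kneaded_dot(weights, activations):
    """Shift-and-add over the kneaded rows; equals the ordinary dot product."""
    kneading, positions = knead(weights)
    total = 0
    for bits, pos in zip(kneading, positions):
        for b, (bit, p) in enumerate(zip(bits, pos)):
            if bit:
                total += activations[p] << b   # bit b of some weight scales its activation
    return total

if __name__ == "__main__":
    w, a = [0, 5, 0, 12, 3], [7, 2, 9, 4, 1]
    print(kneaded_dot(w, a), sum(wi * ai for wi, ai in zip(w, a)))   # -> 61 61
```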

    Method and system for processing neural network

    Publication No.: US11580367B2

    Publication Date: 2023-02-14

    Application No.: US16079525

    Application Date: 2016-08-09

    Abstract: The present disclosure provides a neural network processing system that comprises a multi-core processing module composed of a plurality of core processing modules and used for executing the vector multiplication and addition operations in a neural network operation, an on-chip storage medium, an on-chip address index module, and an ALU module for executing non-linear operations that the multi-core processing module cannot complete, according to input data acquired from the multi-core processing module or the on-chip storage medium. The plurality of core processing modules either share an on-chip storage medium and an ALU module or have independent on-chip storage media and ALU modules. The present disclosure improves the operating speed of the neural network processing system, making it more performant and efficient.
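
    As a software analogy of the architecture described above, the sketch below splits the vector multiply-and-add work of one layer across several "core" workers and applies the non-linear operation in a separate ALU-like stage. The four-core split, the thread pool, and the ReLU non-linearity are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

NUM_CORES = 4  # assumed number of core processing modules

def core_mac(weight_rows, inputs):
    """One core's share of the vector multiplication and addition."""
    return weight_rows @ inputs

def alu_nonlinear(x):
    """The ALU-like stage applies the non-linear operation (assumed ReLU here)."""
    return np.maximum(x, 0.0)

def multicore_layer(weights, inputs):
    """Split the output rows across cores, then finish on the shared ALU stage."""
    chunks = np.array_split(weights, NUM_CORES)            # one chunk of rows per core
    with ThreadPoolExecutor(max_workers=NUM_CORES) as pool:
        partials = list(pool.map(lambda rows: core_mac(rows, inputs), chunks))
    return alu_nonlinear(np.concatenate(partials))          # non-linear stage

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W, x = rng.standard_normal((8, 16)), rng.standard_normal(16)
    print(np.allclose(multicore_layer(W, x), np.maximum(W @ x, 0)))   # -> True
```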
