DATA-FAST-DISTRIBUTION METHOD AND DEVICE
    1.
    Invention Publication
    DATA-FAST-DISTRIBUTION METHOD AND DEVICE (Granted)
    Method and device for rapid data distribution

    Publication No.: EP2755363A4

    Publication Date: 2014-12-31

    Application No.: EP11867401

    Filing Date: 2011-10-27

    Inventors: FANG FAN; CHEN KEPING

    Abstract: The present invention provides a method and an apparatus for rapid data distribution, to reduce utilization of a central processing unit and a memory, and improve system performance. The method includes: sending, by a central processing unit, data description information to a rapid forwarding module, where the data description information includes an address and length information of data requested by a user; reading, by the rapid forwarding module according to the data description information, the data requested by the user, and forwarding the data requested by the user to a network interface controller; and sending, by the network interface controller, the data requested by the user to the user. Compared with the prior art, with the method provided in the embodiments of the present invention, when services grow (for example, when the existing network adapter bandwidth and storage capacity need to be expanded), only devices such as the network interface controller and a storage device need to be added, and the cost of memory and the central processing unit does not need to increase. On the one hand, utilization of the memory and the central processing unit can be reduced and performance of the entire system improved; on the other hand, the memory wall problem is avoided.
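    The flow described in the abstract — the CPU hands a small descriptor (address and length) to a forwarding module, which reads the bulk data and passes it to the NIC — can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the names `DataDescriptor`, `fast_forward`, `nic_send`, and the in-memory `STORAGE` stand-in are all assumptions introduced for the example.

    ```python
    from dataclasses import dataclass

    # Hypothetical in-memory stand-in for the storage device that the
    # fast-forwarding module reads from directly.
    STORAGE = bytearray(b"The quick brown fox jumps over the lazy dog")

    @dataclass
    class DataDescriptor:
        """Data description information: address (offset) and length of the requested data."""
        address: int
        length: int

    def fast_forward(desc: DataDescriptor) -> bytes:
        """Fast-forwarding module: reads the requested data from storage using
        only the descriptor, so the bulk data never passes through the CPU path."""
        return bytes(STORAGE[desc.address:desc.address + desc.length])

    def nic_send(payload: bytes) -> bytes:
        """Stand-in for the network interface controller sending data to the user."""
        return payload

    # The CPU only builds and sends the small descriptor; the payload bypasses it.
    desc = DataDescriptor(address=4, length=5)
    sent = nic_send(fast_forward(desc))
    ```

    The point of the design is that the CPU's work is proportional to the size of the descriptor, not the size of the payload, which is what reduces CPU and memory utilization as traffic grows.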

    THREAD CREATION METHOD, SERVICE REQUEST PROCESSING METHOD AND RELATED DEVICE
    3.
    Invention Publication
    THREAD CREATION METHOD, SERVICE REQUEST PROCESSING METHOD AND RELATED DEVICE (Under Examination, Published)
    Thread creation method, service request processing method, and related device

    Publication No.: EP3073374A4

    Publication Date: 2016-12-07

    Application No.: EP14873894

    Filing Date: 2014-12-18

    IPC Classification: G06F9/46 G06F9/48 G06F11/30

    Abstract: The present invention discloses a thread creation method, a service request processing method, and a related device. The method includes: acquiring the quantity of network interface card queues of a multi-queue network interface card of a server; creating a quantity of processes equal to the quantity of network interface card queues; creating one listener thread and multiple worker threads in each process; and binding each created listener thread to a different network interface card queue. The solutions provided in the embodiments of the present invention make the creation of processes and threads more appropriate and improve the efficiency with which a server processes parallel service requests.
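    The structure in the abstract — one listener per NIC queue, each listener backed by a pool of worker threads — can be sketched as below. This is a simplified model under stated assumptions: plain `queue.Queue` objects stand in for NIC queues, plain threads stand in for the per-queue processes, and the request handling (`req.upper()`) is a placeholder; none of these names come from the patent.

    ```python
    import queue
    import threading

    NUM_NIC_QUEUES = 2        # on a real server, read from the multi-queue NIC
    WORKERS_PER_LISTENER = 3  # assumed worker-thread count per "process"

    def start_unit(nic_queue: queue.Queue, results: list) -> list:
        """Sketch of one 'process': one listener thread bound to a single NIC
        queue plus several worker threads that share a task queue."""
        tasks: queue.Queue = queue.Queue()

        def worker():
            while True:
                req = tasks.get()
                if req is None:          # shutdown signal
                    break
                results.append(req.upper())  # placeholder request handling

        def listener():
            while True:
                req = nic_queue.get()
                if req is None:          # shutdown: forward the signal to every worker
                    for _ in range(WORKERS_PER_LISTENER):
                        tasks.put(None)
                    break
                tasks.put(req)           # dispatch the request to the worker pool

        threads = [threading.Thread(target=listener)]
        threads += [threading.Thread(target=worker) for _ in range(WORKERS_PER_LISTENER)]
        for t in threads:
            t.start()
        return threads

    # One listener per NIC queue, mirroring the binding described in the abstract.
    nic_queues = [queue.Queue() for _ in range(NUM_NIC_QUEUES)]
    results: list = []
    all_threads: list = []
    for q in nic_queues:
        all_threads += start_unit(q, results)

    nic_queues[0].put("request-a")
    nic_queues[1].put("request-b")
    for q in nic_queues:
        q.put(None)
    for t in all_threads:
        t.join()
    ```

    Binding exactly one listener to each NIC queue means no two listeners ever contend for the same queue, which is the property that lets parallel service requests be processed without cross-queue locking.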

    DATA SPLITTING METHOD AND SPLITTER
    4.
    Invention Publication
    DATA SPLITTING METHOD AND SPLITTER (Granted)
    Data splitting method and splitter

    Publication No.: EP3079313A4

    Publication Date: 2016-11-30

    Application No.: EP14875040

    Filing Date: 2014-12-18

    Abstract: The present invention relates to a data distribution method and a splitter. The data distribution method includes: parsing, by the splitter, a received data packet to determine the transport layer communications protocol to which the data packet belongs; acquiring, by the splitter from the data packet, identification information of the data stream to which the data packet that corresponds to the determined transport layer communications protocol belongs; acquiring, by the splitter from a memory according to a correspondence between transport layer communications protocols and distribution tables, the distribution table corresponding to the transport layer communications protocol to which the data packet belongs; determining, by the splitter according to a correspondence between identification information of a data stream and a thread in the distribution table corresponding to the transport layer communications protocol to which the data packet belongs, the thread corresponding to the data stream to which the data packet belongs; and sending, by the splitter, the data packet to a cache queue of the thread corresponding to the data stream, so that the thread corresponding to the data stream acquires the data packet from the cache queue.
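    The lookup chain the abstract describes — protocol → distribution table → stream id → thread → per-thread cache queue — can be sketched with plain dictionaries. This is an illustrative model only: the table contents, the choice of (destination IP, destination port) as the stream identifier, and the names `DISTRIBUTION_TABLES`, `cache_queues`, and `split` are assumptions made for the example, not details from the patent.

    ```python
    from collections import defaultdict

    # Assumed per-protocol distribution tables: stream id -> owning thread.
    DISTRIBUTION_TABLES = {
        "tcp": {("10.0.0.1", 80): "thread-0", ("10.0.0.2", 443): "thread-1"},
        "udp": {("10.0.0.3", 53): "thread-2"},
    }

    # One cache queue per thread; each thread later drains only its own queue.
    cache_queues: dict = defaultdict(list)

    def split(packet: dict) -> str:
        """Splitter: parse the packet, select the protocol's distribution table,
        map the stream id to a thread, and enqueue the packet for that thread."""
        proto = packet["proto"]                             # parsed transport-layer protocol
        stream_id = (packet["dst_ip"], packet["dst_port"])  # stream identification info
        table = DISTRIBUTION_TABLES[proto]                  # table for this protocol
        thread = table[stream_id]                           # thread owning this stream
        cache_queues[thread].append(packet)                 # hand off via the cache queue
        return thread

    # All packets of one stream land in the same thread's queue.
    pkt = {"proto": "tcp", "dst_ip": "10.0.0.1", "dst_port": 80}
    owner = split(pkt)
    ```

    Keeping a separate distribution table per transport protocol keeps each lookup small, and routing every packet of a stream to a single thread's queue preserves in-stream ordering without locks between threads.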