PacketUsher: Accelerating Computer-Intensive Packet Processing

    Publication Number: US20170163538A1

    Publication Date: 2017-06-08

    Application Number: US14972062

    Application Date: 2015-12-16

    CPC classification number: H04L47/125 H04L45/24 H04L47/2483 H04L69/22

    Abstract: Compute-intensive packet processing (CIPP) in a computer system comprising a programmable computing platform is accelerated by using a packet I/O engine, implemented on the platform, to perform packet I/O functions, where the packet I/O engine is configured to achieve direct access to a network interface card (NIC) from a user application. For a Linux-based computer system, standard I/O mechanisms of Linux are bypassed and only the packet I/O engine is used in performing the I/O functions. Furthermore, the computer system is configured to: process a batch of packets, instead of packet by packet, in every function call; and when moving a packet between a buffer of an individual user application and a queue of the packet I/O engine, copy a packet descriptor of the packet instead of the entire packet. In addition, workflows across different working threads are balanced and parallelism is exploited to fully utilize resources of the platform.
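
    The batching and descriptor-copy ideas in the abstract can be pictured with a short, self-contained C sketch. Everything below (fake_rx_burst, pkt_desc, BURST_SIZE, the static packet pool) is a hypothetical stand-in chosen for illustration, not the patented implementation: one function call fetches a whole batch of packets, and the hand-off to the application copies only small descriptors rather than the packet payloads.

    /* Illustrative sketch only: batch-per-call receive plus
     * descriptor-copy hand-off. All names here are invented. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BURST_SIZE 32
    #define PKT_MAX    1518

    /* Payload buffers live in one shared pool; only descriptors move. */
    static uint8_t pkt_pool[BURST_SIZE][PKT_MAX];

    struct pkt_desc {
        uint8_t *buf;   /* pointer into pkt_pool; the payload is never copied */
        uint16_t len;   /* packet length in bytes */
    };

    /* Stand-in for an engine RX call that fills a whole batch per call. */
    static size_t fake_rx_burst(struct pkt_desc *descs, size_t max)
    {
        for (size_t i = 0; i < max; i++) {
            descs[i].buf = pkt_pool[i];
            descs[i].len = 64;          /* pretend 64-byte frames arrived */
        }
        return max;
    }

    int main(void)
    {
        struct pkt_desc batch[BURST_SIZE];

        /* One call fetches BURST_SIZE packets, amortizing per-call overhead
         * that packet-by-packet I/O would pay BURST_SIZE times. */
        size_t n = fake_rx_burst(batch, BURST_SIZE);

        /* Hand-off to the application copies small descriptors,
         * not full-size payloads. */
        struct pkt_desc app_buf[BURST_SIZE];
        memcpy(app_buf, batch, n * sizeof(struct pkt_desc));

        printf("moved %zu packets by copying %zu descriptor bytes\n",
               n, n * sizeof(struct pkt_desc));
        return 0;
    }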

    PacketUsher: accelerating computer-intensive packet processing

    Publication Number: US09961002B2

    Publication Date: 2018-05-01

    Application Number: US14972062

    Application Date: 2015-12-16

    CPC classification number: H04L47/125 H04L45/24 H04L47/2483 H04L69/22

    Abstract: Compute-intensive packet processing (CIPP) in a computer system comprising a programmable computing platform is accelerated by using a packet I/O engine, implemented on the platform, to perform packet I/O functions, where the packet I/O engine is configured to achieve direct access to a network interface card (NIC) from a user application. For a Linux-based computer system, standard I/O mechanisms of Linux are bypassed and only the packet I/O engine is used in performing the I/O functions. Furthermore, the computer system is configured to: process a batch of packets, instead of packet by packet, in every function call; and when moving a packet between a buffer of an individual user application and a queue of the packet I/O engine, copy a packet descriptor of the packet instead of the entire packet. In addition, workflows across different working threads are balanced and parallelism is exploited to fully utilize resources of the platform.
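
    The abstract's last sentence, on balancing workflows across working threads and exploiting parallelism, can be pictured as a dispatcher that spreads each received batch round-robin over per-worker descriptor rings. The C sketch below uses invented names (desc_ring, dispatch_batch, NB_WORKERS) and a deliberately simplified ring; it is one plausible illustration, not the patent's actual load-balancing scheme.

    /* Illustrative sketch only: round-robin dispatch of a packet batch
     * to per-worker descriptor rings. All names are invented. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NB_WORKERS 4
    #define RING_SIZE  1024

    struct pkt_desc { void *buf; uint16_t len; };

    /* One ring per worker keeps the I/O-thread-to-worker hand-off simple. */
    struct desc_ring {
        struct pkt_desc slots[RING_SIZE];
        size_t head, tail;
    };

    static struct desc_ring worker_ring[NB_WORKERS];

    static int ring_push(struct desc_ring *r, struct pkt_desc d)
    {
        size_t next = (r->head + 1) % RING_SIZE;
        if (next == r->tail)
            return -1;              /* ring full: caller may drop or retry */
        r->slots[r->head] = d;
        r->head = next;
        return 0;
    }

    /* Spread a batch evenly over the workers so parallel cores stay busy
     * on the compute-intensive stage. */
    static void dispatch_batch(struct pkt_desc *batch, size_t n)
    {
        static size_t next_worker;

        for (size_t i = 0; i < n; i++) {
            ring_push(&worker_ring[next_worker], batch[i]);
            next_worker = (next_worker + 1) % NB_WORKERS;
        }
    }

    int main(void)
    {
        struct pkt_desc batch[8] = { { 0 } };
        dispatch_batch(batch, 8);
        for (int w = 0; w < NB_WORKERS; w++)
            printf("worker %d got %zu descriptors\n", w, worker_ring[w].head);
        return 0;
    }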

    High-efficient packet I/O engine for commodity PC

    Publication Number: US10001930B2

    Publication Date: 2018-06-19

    Application Number: US14972047

    Application Date: 2015-12-16

    Abstract: A method for implementing a packet I/O engine on a programmable computing platform is provided, where the engine performs I/O functions for plural threads generated by a plurality of user applications. In the method, the platform is configured such that only one thread is permitted to initialize and configure the resources. Furthermore, I/O-device queues each for buffering packets either transmitted to or received from an individual external I/O device are set up. For a plurality of unsafe I/O-device queues determined, among the I/O-device queues, to be multi-thread unsafe, a plurality of multi-producer, multi-consumer software queues for buffering packets delivered between the plurality of the unsafe I/O-device queues and the plurality of user applications is set up. In particular, the plurality of multi-producer, multi-consumer software queues is configured such that the unsafe I/O-device queues are collectively synchronized to maintain data consistency in packet delivery in the presence of multiple threads.
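
    A minimal sketch of the software-queue idea, assuming a lock-protected queue is an acceptable stand-in for a multi-producer, multi-consumer queue: many application threads enqueue descriptors safely, and a single drain path is the only code that ever touches the multi-thread-unsafe device queue. All identifiers (mpmc_swq, swq_enqueue, swq_drain, tx_swq) are invented for this example; a production engine would more likely use a lock-free ring, but the synchronization role is the same.

    /* Illustrative sketch only: a lock-protected MPMC software queue
     * fronting a thread-unsafe device queue. Names are invented. */
    #include <pthread.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SWQ_SIZE 1024

    struct pkt_desc { void *buf; uint16_t len; };

    struct mpmc_swq {
        struct pkt_desc slots[SWQ_SIZE];
        size_t          head, tail;
        pthread_mutex_t lock;      /* serializes producers and consumers */
    };

    static struct mpmc_swq tx_swq = { .lock = PTHREAD_MUTEX_INITIALIZER };

    /* Any application thread may call this concurrently. */
    static int swq_enqueue(struct mpmc_swq *q, struct pkt_desc d)
    {
        int ok = -1;
        pthread_mutex_lock(&q->lock);
        size_t next = (q->head + 1) % SWQ_SIZE;
        if (next != q->tail) {
            q->slots[q->head] = d;
            q->head = next;
            ok = 0;
        }
        pthread_mutex_unlock(&q->lock);
        return ok;
    }

    /* A single drain thread moves descriptors from the safe software queue
     * into the unsafe device queue, so the device queue is never touched
     * by more than one thread at a time. */
    static size_t swq_drain(struct mpmc_swq *q, struct pkt_desc *out, size_t max)
    {
        size_t n = 0;
        pthread_mutex_lock(&q->lock);
        while (n < max && q->tail != q->head) {
            out[n++] = q->slots[q->tail];
            q->tail = (q->tail + 1) % SWQ_SIZE;
        }
        pthread_mutex_unlock(&q->lock);
        return n;
    }

    int main(void)
    {
        struct pkt_desc d = { .buf = NULL, .len = 64 };
        struct pkt_desc out[4];

        swq_enqueue(&tx_swq, d);
        size_t n = swq_drain(&tx_swq, out, 4);
        printf("drained %zu descriptor(s) for the single device-queue writer\n", n);
        return 0;
    }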

    High-Efficient Packet I/O Engine for Commodity PC

    Publication Number: US20170160954A1

    Publication Date: 2017-06-08

    Application Number: US14972047

    Application Date: 2015-12-16

    Abstract: A method for implementing a packet I/O engine on a programmable computing platform is provided, where the engine performs I/O functions for plural threads generated by a plurality of user applications. In the method, the platform is configured such that only one thread is permitted to initialize and configure the resources. Furthermore, I/O-device queues each for buffering packets either transmitted to or received from an individual external I/O device are set up. For a plurality of unsafe I/O-device queues determined, among the I/O-device queues, to be multi-thread unsafe, a plurality of multi-producer, multi-consumer software queues for buffering packets delivered between the plurality of the unsafe I/O-device queues and the plurality of user applications is set up. In particular, the plurality of multi-producer, multi-consumer software queues is configured such that the unsafe I/O-device queues are collectively synchronized to maintain data consistency in packet delivery in the presence of multiple threads.
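
    The rule that only one thread is permitted to initialize and configure the resources can be approximated with the standard pthread_once primitive, as in the hedged sketch below; engine_thread_attach and engine_global_init are hypothetical names, and the real configuration steps are elided.

    /* Illustrative sketch only: single-initializer guarantee via
     * pthread_once. Function names other than pthread_once are invented. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_once_t engine_once = PTHREAD_ONCE_INIT;

    static void engine_global_init(void)
    {
        /* In a real engine this step would bind the NIC, allocate
         * descriptor pools, and create the device queues; elided here. */
        printf("packet I/O engine resources configured exactly once\n");
    }

    /* Every thread calls this before doing packet I/O; pthread_once makes
     * sure only the first caller actually runs the configuration step. */
    void engine_thread_attach(void)
    {
        pthread_once(&engine_once, engine_global_init);
    }

    int main(void)
    {
        /* Even with many concurrent callers, the initializer runs once. */
        engine_thread_attach();
        engine_thread_attach();
        return 0;
    }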
