Queue management method and apparatus

    Publication No.: US10951551B2

    Publication Date: 2021-03-16

    Application No.: US16233894

    Filing Date: 2018-12-27

    Abstract: A queue management method and apparatus are disclosed. The queue management method includes: storing a first packet to a first buffer cell included in a first macrocell, where the first macrocell is enqueued to a first entity queue, the first macrocell includes N consecutive buffer cells, and the first buffer cell belongs to the N buffer cells; correcting, based on a packet length of the first packet, an average packet length in the first macrocell that is obtained before the first packet is stored, to obtain a current average packet length in the first macrocell; and generating, based on the first macrocell and the first entity queue, queue information corresponding to the first entity queue, including a location of the first macrocell in the first entity queue, a head pointer in the first macrocell, a tail pointer in the first macrocell, and the current average packet length in the first macrocell.
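    The average-length correction described here is an incremental mean update. Below is a minimal sketch in C, not the patented implementation; the names (macrocell_store, avg_pkt_len) and the fixed cell count are assumptions made for illustration. A macrocell of N consecutive buffer cells keeps head and tail pointers and corrects its running average packet length each time a packet is stored.

        /* Sketch of a macrocell with an incrementally corrected average
         * packet length. All names and sizes are illustrative assumptions. */
        #include <stdio.h>

        #define N_CELLS 8               /* N consecutive buffer cells */

        struct macrocell {
            int head;                   /* head pointer into the macrocell */
            int tail;                   /* tail pointer: next free cell    */
            int pkt_count;              /* packets stored so far           */
            double avg_pkt_len;         /* current average packet length   */
        };

        /* Store one packet, correcting the pre-store average with its length. */
        static int macrocell_store(struct macrocell *mc, int pkt_len)
        {
            if (mc->tail >= N_CELLS)
                return -1;              /* macrocell full */
            mc->tail++;
            mc->pkt_count++;
            mc->avg_pkt_len += ((double)pkt_len - mc->avg_pkt_len) / mc->pkt_count;
            return 0;
        }

        int main(void)
        {
            struct macrocell mc = { 0, 0, 0, 0.0 };
            int lens[] = { 64, 1500, 256 };
            for (int i = 0; i < 3; i++)
                macrocell_store(&mc, lens[i]);
            printf("avg=%.1f head=%d tail=%d\n", mc.avg_pkt_len, mc.head, mc.tail);
            return 0;                   /* prints avg=606.7 head=0 tail=3 */
        }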

    PACKET PROCESSING METHOD AND RELATED DEVICE

    Publication No.: US20210051118A1

    Publication Date: 2021-02-18

    Application No.: US17087087

    Filing Date: 2020-11-02

    Abstract: A packet processing method and device are provided, to save the CPU resources consumed in parsing packets. The method includes: parsing, by an intelligent network interface card, a received first packet to obtain an identifier of the first packet; updating, by the intelligent network interface card, a control field of a first memory buffer (mbuf) based on the identifier of the first packet; storing, by the intelligent network interface card, a payload of the first packet, or a packet header and a payload of the first packet, into a first address space through DMA based on an aggregation position of the first packet, the first address space being indicated by first address information; aggregating, by a host, the first address information and at least one piece of second address information based on the updated control field in the first mbuf; and reading, by a virtual machine, the aggregated address information to obtain the data in the address spaces that it indicates.
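    As a rough illustration of the control-field flow, here is a minimal C sketch; nic_store, host_aggregate, and both struct layouts are invented for this example and are not the patent's interfaces. The NIC side records each payload's address information at its aggregation position and advances a segment count in the control field; the host side walks that control field to aggregate the address entries a virtual machine would then read.

        /* Sketch: NIC fills per-flow address info, host aggregates it.
         * All names and layouts are illustrative assumptions. */
        #include <stdio.h>

        #define MAX_SEGS 4

        struct addr_info { unsigned long base; unsigned len; };

        struct mbuf {
            unsigned flow_id;               /* identifier parsed from the packet */
            int nsegs;                      /* control field: segments collected */
            struct addr_info seg[MAX_SEGS]; /* address info for DMA'd payloads   */
        };

        /* NIC side: record a payload's address info at its aggregation position. */
        static void nic_store(struct mbuf *m, unsigned flow,
                              unsigned long base, unsigned len)
        {
            if (m->nsegs < MAX_SEGS) {
                m->flow_id = flow;
                m->seg[m->nsegs].base = base;
                m->seg[m->nsegs].len = len;
                m->nsegs++;                 /* update the control field */
            }
        }

        /* Host side: aggregate the address information for the consumer. */
        static void host_aggregate(const struct mbuf *m)
        {
            for (int i = 0; i < m->nsegs; i++)
                printf("flow %u seg %d: base=0x%lx len=%u\n",
                       m->flow_id, i, m->seg[i].base, m->seg[i].len);
        }

        int main(void)
        {
            struct mbuf m = { 0 };
            nic_store(&m, 7, 0x1000, 1448); /* first packet's payload  */
            nic_store(&m, 7, 0x15a8, 1448); /* second packet's payload */
            host_aggregate(&m);
            return 0;
        }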

    TECHNOLOGIES FOR JITTER-ADAPTIVE LOW-LATENCY, LOW POWER DATA STREAMING BETWEEN DEVICE COMPONENTS

    Publication No.: US20200092185A1

    Publication Date: 2020-03-19

    Application No.: US16682291

    Filing Date: 2019-11-13

    Applicant: Intel Corporation

    Abstract: Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
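    The switching policy fits in a few lines. Below is a minimal C sketch, not Intel's implementation; the buffer capacities and the low watermark are assumed values. The producer falls back to the remote buffer when the local one fills, and returns to the local buffer once the remote fill level drops below the low threshold.

        /* Sketch of threshold-driven producer mode switching between a
         * small local buffer and a larger remote buffer. Sizes assumed. */
        #include <stdio.h>

        #define LOCAL_CAP   4
        #define REMOTE_CAP 16
        #define LOW_WM      2   /* remote below this: return to local mode */

        enum mode { LOCAL, REMOTE };

        struct stream {
            int local_fill, remote_fill;
            enum mode producer_mode;
        };

        static void produce(struct stream *s, int item)
        {
            if (s->producer_mode == LOCAL && s->local_fill == LOCAL_CAP)
                s->producer_mode = REMOTE;  /* local buffer is full        */
            else if (s->producer_mode == REMOTE && s->remote_fill < LOW_WM
                     && s->local_fill < LOCAL_CAP)
                s->producer_mode = LOCAL;   /* remote drained below low WM */

            if (s->producer_mode == LOCAL)
                s->local_fill++;
            else if (s->remote_fill < REMOTE_CAP)
                s->remote_fill++;
            printf("item %d -> %s buffer\n", item,
                   s->producer_mode == LOCAL ? "local" : "remote");
        }

        int main(void)
        {
            struct stream s = { 0, 0, LOCAL };
            for (int i = 0; i < 8; i++)     /* watch the mode flip at item 4 */
                produce(&s, i);
            return 0;
        }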

    Method and apparatus for using multiple linked memory lists

    Publication No.: US10484311B2

    Publication Date: 2019-11-19

    Application No.: US14675450

    Filing Date: 2015-03-31

    Applicant: CAVIUM, LLC

    Abstract: An apparatus and method for queuing data to a memory buffer. The method includes: selecting a queue from a plurality of queues; receiving a token of data from the selected queue; and requesting, by a queue module, addresses and pointers from a buffer manager for storing the token of data. The buffer manager then accesses a memory list, one of a plurality of linked memory lists available for address allocation, and generates pointers to the addresses it allocates in that list. The method further includes: writing into the accessed memory list the pointers that link the allocated addresses together; migrating to other memory lists for additional address allocations as subsequent tokens of data arrive from the queue; and generating additional pointers linking together the addresses allocated in the other memory lists.
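    A minimal C sketch of the linking-and-migration idea follows. The handle encoding (list * LIST_SIZE + addr) and the name alloc_and_link are assumptions for this example, not the patent's design: each memory list holds next-pointers, and when the current list is exhausted the buffer manager migrates to the next one while the generated pointers keep all allocated addresses chained in order.

        /* Sketch: allocations chained across multiple linked memory lists. */
        #include <stdio.h>

        #define NUM_LISTS 2
        #define LIST_SIZE 4
        #define NIL (-1)

        /* next_ptr[l][a] links entry (list l, addr a) to its successor,
         * encoded as the handle l * LIST_SIZE + a. */
        static int next_ptr[NUM_LISTS][LIST_SIZE];
        static int fill[NUM_LISTS];
        static int cur_list;

        /* Allocate an address for one token and link it after prev. */
        static int alloc_and_link(int prev)
        {
            if (fill[cur_list] == LIST_SIZE) {
                if (cur_list + 1 == NUM_LISTS)
                    return NIL;         /* every memory list is exhausted  */
                cur_list++;             /* migrate to the next memory list */
            }
            int addr = fill[cur_list]++;
            int handle = cur_list * LIST_SIZE + addr;
            next_ptr[cur_list][addr] = NIL;
            if (prev != NIL)            /* write pointer linking prev->new */
                next_ptr[prev / LIST_SIZE][prev % LIST_SIZE] = handle;
            return handle;
        }

        int main(void)
        {
            int head = NIL, tail = NIL;
            for (int token = 0; token < 6; token++) {  /* six tokens of data */
                int h = alloc_and_link(tail);
                if (head == NIL) head = h;
                tail = h;
            }
            /* Walk the chain: four addresses in list 0, then two in list 1. */
            for (int h = head; h != NIL;
                 h = next_ptr[h / LIST_SIZE][h % LIST_SIZE])
                printf("list %d addr %d\n", h / LIST_SIZE, h % LIST_SIZE);
            return 0;
        }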

    Data enqueuing method, data dequeuing method, and queue management circuit

    Publication No.: US10326713B2

    Publication Date: 2019-06-18

    Application No.: US15883465

    Filing Date: 2018-01-30

    Inventor: Yalin Bao

    Abstract: The disclosure describes a data enqueuing method. The method may include: receiving a to-be-enqueued data packet, dividing the data packet into several slices to obtain slice information for each slice, and marking the tail slice of the data packet with a tail slice identifier; enqueuing the corresponding slice information in the order of the slices within the data packet and, during enqueuing, when a slice is marked with the tail slice identifier, determining that it is the tail slice of the data packet and generating a first-type node; and determining whether the target queue is empty and, if it is, writing the slice information of the tail slice into the target queue and updating the head pointer of a queue head list according to the first-type node.
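    The slicing step is easy to illustrate. This minimal C sketch uses an assumed 64-byte slice size and invented names (slice_info, enqueue_packet): each slice records its offset and length, the final slice carries the tail slice identifier, and seeing that identifier during enqueuing is what triggers generation of the first-type node.

        /* Sketch: cut a packet into slices and flag the tail slice. */
        #include <stdio.h>

        #define SLICE_BYTES 64          /* assumed slice size */

        struct slice_info {
            int offset;                 /* byte offset within the packet */
            int len;                    /* slice length in bytes         */
            int is_tail;                /* tail slice identifier         */
        };

        static void enqueue_packet(int pkt_len)
        {
            for (int off = 0; off < pkt_len; off += SLICE_BYTES) {
                struct slice_info si;
                si.offset = off;
                si.len = (pkt_len - off < SLICE_BYTES) ? pkt_len - off
                                                       : SLICE_BYTES;
                si.is_tail = (off + si.len == pkt_len);
                printf("enqueue slice off=%d len=%d tail=%d\n",
                       si.offset, si.len, si.is_tail);
                if (si.is_tail)         /* tail seen: per-packet bookkeeping */
                    printf("  tail slice -> generate first-type node\n");
            }
        }

        int main(void)
        {
            enqueue_packet(150);        /* 64 + 64 + 22: last slice is tail */
            return 0;
        }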

    PACKET DESCRIPTOR STORAGE IN PACKET MEMORY WITH CACHE

    Publication No.: US20190173809A1

    Publication Date: 2019-06-06

    Application No.: US16266968

    Filing Date: 2019-02-04

    Abstract: A first memory device stores (i) a head part of a FIFO queue structured as a linked list (LL) of LL elements arranged in the order in which they were added to the FIFO queue and (ii) a tail part of the FIFO queue. A second memory device stores a middle part of the FIFO queue, the middle part comprising LL elements following, in the order, the head part and preceding, in the order, the tail part. A queue controller retrieves LL elements in the head part from the first memory device, moves LL elements in the middle part from the second memory device to the head part in the first memory device before the head part becomes empty, and updates LL parameters corresponding to the moved LL elements to indicate that their storage has changed from the second memory device to the first memory device.
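    This minimal C sketch shows the refill discipline with assumed capacities and names; plain arrays and element shifting stand in for the real linked-list pointer updates. Enqueued elements land in the larger, slower middle memory, dequeues are always served from the small fast head memory, and the controller moves middle-part elements into the head part before the head runs empty.

        /* Sketch: FIFO head kept in fast memory, middle in slow memory. */
        #include <stdio.h>

        #define HEAD_CAP 2              /* fast (e.g. on-chip) memory  */
        #define MID_CAP  8              /* slow (e.g. external) memory */

        static int head_mem[HEAD_CAP], head_n;
        static int mid_mem[MID_CAP], mid_n;

        /* Refill the head part from the middle part before it empties. */
        static void refill_head(void)
        {
            while (head_n < HEAD_CAP && mid_n > 0) {
                head_mem[head_n++] = mid_mem[0];
                for (int i = 1; i < mid_n; i++)  /* shift models pointer moves */
                    mid_mem[i - 1] = mid_mem[i];
                mid_n--;
            }
        }

        static void push(int v)         /* new elements enter slow memory */
        {
            if (mid_n < MID_CAP)
                mid_mem[mid_n++] = v;
        }

        static int pop(void)            /* dequeues come from fast memory */
        {
            refill_head();
            if (head_n == 0)
                return -1;
            int v = head_mem[0];
            head_mem[0] = head_mem[1];
            head_n--;
            return v;
        }

        int main(void)
        {
            for (int i = 1; i <= 5; i++) push(i);
            for (int i = 0; i < 5; i++) printf("%d ", pop());
            printf("\n");               /* prints: 1 2 3 4 5 */
            return 0;
        }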

    QUEUE MANAGEMENT METHOD AND APPARATUS
    Invention Application

    Publication No.: US20190132262A1

    Publication Date: 2019-05-02

    Application No.: US16233894

    Filing Date: 2018-12-27

    Abstract: A queue management method and apparatus are disclosed. The queue management method includes: storing a first packet to a first buffer cell included in a first macrocell, where the first macrocell is enqueued to a first entity queue, the first macrocell includes N consecutive buffer cells, and the first buffer cell belongs to the N buffer cells; correcting, based on a packet length of the first packet, an average packet length in the first macrocell that is obtained before the first packet is stored, to obtain a current average packet length in the first macrocell; and generating, based on the first macrocell and the first entity queue, queue information corresponding to the first entity queue, including a location of the first macrocell in the first entity queue, a head pointer in the first macrocell, a tail pointer in the first macrocell, and the current average packet length in the first macrocell.

    REORDERING OF DATA FOR PARALLEL PROCESSING
    Invention Application

    Publication No.: US20190097951A1

    Publication Date: 2019-03-28

    Application No.: US15719081

    Filing Date: 2017-09-28

    Applicant: Intel Corporation

    IPC Classes: H04L12/861 H04L12/883

    Abstract: A network interface device, including: an ingress interface; a host platform interface to communicatively couple to a host platform; and a packet preprocessor including logic to: receive, via the ingress interface, a data sequence including a plurality of discrete data units; identify the data sequence as data for a parallel processing operation; reorder the discrete data units into a reordered data frame, the reordered data frame ordering the discrete data units for consumption by the parallel processing operation; and send the reordered data frame to the host platform via the host platform interface.
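    A minimal C sketch of the reordering itself; the assumption that each discrete data unit carries a lane index naming its slot in the reordered frame is invented for this example. Units arrive out of order on the ingress side and are scattered into a frame laid out for one-pass parallel consumption.

        /* Sketch: scatter arriving data units into their parallel lanes. */
        #include <stdio.h>

        #define LANES 4

        struct unit { int lane; int value; };

        int main(void)
        {
            /* data units as received on the ingress interface, out of order */
            struct unit rx[LANES] = { {2, 20}, {0, 0}, {3, 30}, {1, 10} };
            int frame[LANES];           /* the reordered data frame */

            for (int i = 0; i < LANES; i++)
                frame[rx[i].lane] = rx[i].value;  /* place unit in its slot */

            /* frame is now ordered for the parallel processing operation */
            for (int i = 0; i < LANES; i++)
                printf("lane %d: %d\n", i, frame[i]);
            return 0;
        }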

    METHOD FOR DISPLAYING AN ANIMATION DURING THE STARTING PHASE OF AN ELECTRONIC DEVICE AND ASSOCIATED ELECTRONIC DEVICE

    Publication No.: US20190087200A1

    Publication Date: 2019-03-21

    Application No.: US16080394

    Filing Date: 2017-02-27

    Inventor: Julien BELLANGER

    Abstract: A method for displaying an animation by a display chip of an electronic device that includes a non-volatile memory and a random-access memory. The display chip includes a video output register and a display register. The method includes a first, static programming phase comprising: configuring the video output register; writing n images into the memory, n being an integer greater than or equal to two; writing a plurality of nodes into the memory, such that each node includes the memory address of at least one portion of an image as well as the memory address of the following node, the last node including the random-access-memory address of the first node; and configuring the display register. The method also includes a second phase in which the n images are read by the display chip via the display register, to display the animation.
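    The node layout amounts to a circular singly linked list in memory. Here is a minimal C sketch with assumed field names (image, next): the last node points back to the first, so a display engine walking the list can loop over the n images and play the animation without further CPU programming.

        /* Sketch: circular node list, each node naming an image and its
         * successor; field names are illustrative assumptions. */
        #include <stdio.h>

        struct node {
            const unsigned char *image; /* address of (part of) an image */
            struct node *next;          /* address of the following node */
        };

        int main(void)
        {
            static const unsigned char img0[4] = { 0 }, img1[4] = { 1 };

            /* n = 2 images, written during the static programming phase */
            struct node n1 = { img1, NULL };
            struct node n0 = { img0, &n1 };
            n1.next = &n0;              /* last node loops back to the first */

            /* the display chip would walk this list; take a few steps here */
            const struct node *p = &n0;
            for (int step = 0; step < 4; step++, p = p->next)
                printf("step %d: image byte %u\n", step,
                       (unsigned)p->image[0]);
            return 0;
        }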

    Node device used in disruption/delay/disconnect tolerant network and communication method

    Publication No.: US09985751B2

    Publication Date: 2018-05-29

    Application No.: US15109482

    Filing Date: 2015-01-27

    Applicant: NEC Corporation

    Abstract: A node device (1A) receives a second ACK list in communication with an adjacent node (1B) and updates a first ACK list and a first summary vector on the basis of the second ACK list. The first summary vector, which indicates the messages stored in a message buffer of the node device (1A), is transmitted to the adjacent node (1B) prior to transmission of those messages in the communication with the adjacent node (1B). The first ACK list indicates the ACK messages recognized by the node device (1A); the second ACK list indicates the ACK messages recognized by the adjacent node (1B). Each ACK message represents a message that has arrived at its ultimate destination node via a DTN (100). In this way, for example, copies of a message that has already arrived at its ultimate destination node can be prevented from being scattered through the Disruption Tolerant Network (DTN).
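    The bookkeeping can be shown with bitmaps; the one-bit-per-message layout in this minimal C sketch is an assumption, not something the patent specifies. Merging the neighbour's ACK list into the local one, then clearing those bits from the summary vector, is what keeps already-delivered messages from being offered, and therefore copied, again.

        /* Sketch: ACK-list merge pruning a DTN node's summary vector. */
        #include <stdio.h>

        int main(void)
        {
            unsigned local_summary = 0x0F; /* node buffers messages 0..3 */
            unsigned local_acks    = 0x01; /* message 0 known delivered  */
            unsigned peer_acks     = 0x06; /* peer: 1 and 2 delivered    */

            local_acks |= peer_acks;       /* update the first ACK list  */
            local_summary &= ~local_acks;  /* stop offering delivered
                                              messages to neighbours     */

            printf("acks=0x%02X summary=0x%02X\n", local_acks, local_summary);
            /* prints acks=0x07 summary=0x08: only message 3 still offered */
            return 0;
        }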