Managing congestion in a network
    Invention Grant

    Publication Number: US10944660B2

    Publication Date: 2021-03-09

    Application Number: US16517408

    Filing Date: 2019-07-19

    Abstract: Examples described herein include configuration of a transmitting network device to identify a source queue-pair identifier in at least some of the packets that are transmitted to an endpoint destination. A network device that receives packets and experiences congestion can determine if a congestion causing packet includes a source queue-pair identifier. If the congestion causing packet includes a source queue-pair identifier, the network device can form and transmit a congestion notification message with a copy of the source queue-pair identifier to the transmitting network device. The transmitting network device can access a context for the congestion causing packet using the source queue-pair identifier without having to perform a lookup to identify the context.
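The flow described above can be sketched as follows. This is an illustrative model only; the names (`Packet`, `qp_contexts`, `on_congestion`, `on_cnm`) are assumptions for the example, not identifiers from the patent.

```python
# Hypothetical sketch of the congestion-notification flow: the transmitter
# stamps its source queue-pair (QP) id into packets, the congested device
# echoes that id back, and the transmitter indexes its context table directly.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    dst: str
    payload: bytes
    src_qp_id: Optional[int] = None  # source queue-pair identifier, if configured

# Transmitter keeps per-queue-pair contexts indexed by the same identifier
# it stamps into outgoing packets.
qp_contexts = {7: {"rate_limit": None, "dst": "endpoint-a"}}

def make_packet(qp_id: int, dst: str, payload: bytes) -> Packet:
    """Transmitting device stamps its source QP id into the packet."""
    return Packet(dst=dst, payload=payload, src_qp_id=qp_id)

def on_congestion(congesting: Packet) -> Optional[Packet]:
    """Congested device: if the packet carries a source QP id, form a
    congestion notification message echoing it; otherwise form none."""
    if congesting.src_qp_id is None:
        return None
    return Packet(dst="transmitter", payload=b"CNM", src_qp_id=congesting.src_qp_id)

def on_cnm(cnm: Packet) -> dict:
    """Transmitter: direct index into the context table -- no lookup over
    flows is needed, because the notification echoes our own QP id."""
    return qp_contexts[cnm.src_qp_id]

pkt = make_packet(7, "endpoint-a", b"data")
cnm = on_congestion(pkt)
ctx = on_cnm(cnm)
```

The key point the sketch illustrates is the last step: the echoed identifier serves as a direct key into the transmitter's context table, avoiding a per-packet lookup.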

    Systems and methods for multi-architecture computing

    Publication Number: US10713213B2

    Publication Date: 2020-07-14

    Application Number: US15386919

    Filing Date: 2016-12-21

    Abstract: Systems and methods for multi-architecture computing. Some computing devices may include: a processor system including at least one first processing core having a first instruction set architecture (ISA), and at least one second processing core having a second ISA different from the first ISA; and a memory device coupled to the processor system, wherein the memory device has stored thereon a first binary representation of a program for the first ISA and a second binary representation of the program for the second ISA, and the memory device has stored thereon data for the program having an in-memory representation compatible with both the first ISA and the second ISA.
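A minimal model of the storage arrangement above can be sketched as follows. The ISA names, placeholder binaries, and struct layout are assumptions for illustration; the point is one program stored as two binaries plus data in a layout both ISAs interpret identically.

```python
# Illustrative sketch: one program, two binary representations (one per ISA),
# operating on shared data with a fixed in-memory representation.
import struct

# Same program, keyed by ISA; the machine code here is placeholder bytes.
program_binaries = {
    "isa_a": b"\x90\x90",   # binary representation for the first ISA
    "isa_b": b"\xde\xad",   # binary representation for the second ISA
}

# Shared in-memory representation: fixed-width little-endian fields with no
# padding, so cores of either ISA interpret the bytes the same way.
SHARED_LAYOUT = "<iq"  # 32-bit int + 64-bit int

def write_shared(counter: int, timestamp: int) -> bytes:
    return struct.pack(SHARED_LAYOUT, counter, timestamp)

def read_shared(blob: bytes):
    return struct.unpack(SHARED_LAYOUT, blob)

blob = write_shared(3, 1_000_000)
```

Pinning endianness and field widths (rather than relying on each ISA's native struct padding) is what makes the data representation compatible across both cores in this sketch.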

    Techniques for forwarding or receiving data segments associated with a large data packet

    Publication Number: US10341230B2

    Publication Date: 2019-07-02

    Application Number: US15626644

    Filing Date: 2017-06-19

    Abstract: Examples are disclosed for forwarding or receiving data segments associated with large data packets. In some examples, a large data packet may be segmented into a number of data segments having separate headers that include identifiers to associate the data segments with the large data packet. The data segments with separate headers may then be forwarded from a network node via a communication channel. In other examples, the data segments with separate headers may be received at another network node and then recombined to form the large data packet at the other network node. Other examples are described and claimed.
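The segment-and-recombine scheme can be sketched as below. The header format and field sizes are invented for the example; the essential idea is that each segment's own header carries an identifier tying it back to the original large packet.

```python
# Minimal sketch: segment a large packet into MTU-sized pieces, each with a
# header (packet id, segment index, total count), then recombine them.
import struct

HEADER = "<IHH"  # packet id, segment index, total segment count

def segment(packet_id: int, data: bytes, mtu: int) -> list:
    chunks = [data[i:i + mtu] for i in range(0, len(data), mtu)] or [b""]
    total = len(chunks)
    return [struct.pack(HEADER, packet_id, i, total) + c
            for i, c in enumerate(chunks)]

def recombine(segments: list) -> bytes:
    hdr_len = struct.calcsize(HEADER)
    # Sort by (packet id, segment index) so out-of-order arrival is handled.
    parsed = sorted(
        (struct.unpack(HEADER, s[:hdr_len]), s[hdr_len:]) for s in segments)
    _, _, total = parsed[0][0]
    assert len(parsed) == total, "missing segments"
    return b"".join(body for _, body in parsed)

big = bytes(range(10)) * 30          # a 300-byte "large" packet
segs = segment(42, big, mtu=128)     # splits into 3 segments
```

Because every segment header repeats the packet identifier and total count, the receiving node can detect missing segments and reassemble regardless of arrival order.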

    Technologies for providing FPGA infrastructure-as-a-service computing capabilities

    Publication Number: US10275558B2

    Publication Date: 2019-04-30

    Application Number: US15344923

    Filing Date: 2016-11-07

    Abstract: Technologies for providing FPGA infrastructure-as-a-service include a computing device having an FPGA, scheduler logic, and design loader logic. The scheduler logic selects an FPGA application for execution and the design loader logic loads a design image into the FPGA. The scheduler logic receives a ready signal from the FPGA in response to loading the design and sends a start signal to the FPGA application. The FPGA executes the FPGA application in response to the start signal. The scheduler logic may time-share the FPGA among multiple FPGA applications. The computing device may include signaling logic to manage signals between a user process and the FPGA application and DMA logic to manage bulk data transfer between the user process and the FPGA application. The computing device may include a user process linked to an FPGA library executed by a processor of the computing device. Other embodiments are described and claimed.
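The load/ready/start handshake and time-sharing loop can be sketched as follows. `MockFPGA`, `schedule`, and the string signals are assumptions standing in for the hardware and scheduler logic, not APIs from the patent.

```python
# Hedged sketch of the scheduler/loader handshake: load a design image,
# wait for the FPGA's ready signal, send start, then move to the next
# application (time-sharing the single FPGA).
class MockFPGA:
    def __init__(self):
        self.loaded = None
        self.started = False

    def load(self, image: bytes) -> str:
        self.loaded = image
        return "ready"            # FPGA signals ready once the design is loaded

    def start(self):
        self.started = True

def schedule(fpga: MockFPGA, apps: list, log: list):
    """Time-share the FPGA among applications, one design image at a time."""
    for name, image in apps:
        if fpga.load(image) == "ready":   # design loader: wait for ready
            fpga.start()                  # scheduler: send start signal
            log.append(name)
            fpga.started = False          # application finishes; slot freed

log = []
schedule(MockFPGA(), [("app1", b"\x01"), ("app2", b"\x02")], log)
```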

    Technologies for flexible virtual function queue assignment

    Publication Number: US20190102317A1

    Publication Date: 2019-04-04

    Application Number: US15720954

    Filing Date: 2017-09-29

    Abstract: Technologies for I/O device virtualization include a computing device with an I/O device that includes a physical function, multiple virtual functions, and multiple assignable resources, such as I/O queues. The physical function assigns an assignable resource to a virtual function. The computing device configures a page table mapping from a virtual function memory page located in a configuration space of the virtual function to a physical function memory page located in a configuration space of the physical function. The virtual function memory page includes a control register for the assignable resource, and the physical function memory page includes another control register for the assignable resource. A value may be written to the control register in the virtual function memory page. A processor of the computing device translates the virtual function memory page to the physical function memory page using the page mapping. Other embodiments are described and claimed.
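The page-table indirection described above can be modeled as below. The page numbers, register offset, and dictionary-based "memory" are invented for the example; the point is that a write to a control register in the virtual function's page is translated so it lands on the physical function's register for the same assignable resource.

```python
# Illustrative model: a page-table mapping from a virtual function (VF)
# configuration-space page to the physical function (PF) page, so a VF
# write to an I/O queue's control register reaches the PF's register.
vf_to_pf_page = {0x1000: 0x8000}   # page table: VF page -> PF page

pf_memory = {}                     # PF configuration space: (page, offset) -> value

def vf_write(vf_page: int, offset: int, value: int):
    """Processor translates the VF page to the PF page via the mapping,
    so the write lands on the PF's control register for the resource."""
    pf_page = vf_to_pf_page[vf_page]
    pf_memory[(pf_page, offset)] = value

vf_write(0x1000, 0x10, 0xABCD)     # VF writes its queue's control register
```

Because the translation happens in the page mapping, the physical function can reassign resources to virtual functions simply by changing which PF page a VF page maps to, without the VF's view changing.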

    Simultaneous multithreading with context associations

    Publication Number: US20190050270A1

    Publication Date: 2019-02-14

    Application Number: US16007330

    Filing Date: 2018-06-13

    Abstract: Disclosed herein are systems, devices, and methods for simultaneous multithreading (SMT) with context associations. For example, in some embodiments, a computing device may include: one or more physical cores; and SMT logic to manage multiple logical cores per physical core such that operations of a first computing context are to be executed by a first logical core associated with the first computing context and operations of a second computing context are to be executed by a second logical core associated with the second computing context, wherein the first logical core and the second logical core share a common physical core.
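The association between logical cores and computing contexts can be sketched as below. The class and method names are illustrative; the sketch shows two logical cores sharing one physical core, with each operation routed to the logical core associated with its context.

```python
# Minimal sketch of SMT with context associations: operations of a context
# execute only on the logical core associated with that context, while both
# logical cores share the same physical core.
from collections import defaultdict

class PhysicalCore:
    def __init__(self):
        self.context_of = {}               # logical core id -> associated context
        self.executed = defaultdict(list)  # logical core id -> ops it ran

    def associate(self, logical_id: int, context: str):
        self.context_of[logical_id] = context

    def execute(self, context: str, op: str):
        # Route the operation to the logical core bound to its context.
        logical = next(l for l, c in self.context_of.items() if c == context)
        self.executed[logical].append(op)

core = PhysicalCore()          # one shared physical core
core.associate(0, "ctx_a")     # first logical core <-> first context
core.associate(1, "ctx_b")     # second logical core <-> second context
core.execute("ctx_a", "op1")
core.execute("ctx_b", "op2")
```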
