Enforcing transaction order in peer-to-peer interactions

    Publication number: US10248610B2

    Publication date: 2019-04-02

    Application number: US15177348

    Filing date: 2016-06-09

    Abstract: A method for computing includes submitting a first command from a central processing unit (CPU) to a first peripheral device in a computer to write data in a first bus transaction over a peripheral component bus in the computer to a second peripheral device in the computer. A second command is submitted from the CPU to one of the first and second peripheral devices to execute a second bus transaction, subsequent to the first bus transaction, that will flush the data from the peripheral component bus to the second peripheral device. The first and second bus transactions are executed in response to the first and second commands. Following completion of the second bus transaction, the second peripheral device processes the written data.
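The ordering mechanism this abstract describes — a posted write followed by a second transaction that flushes it to the target — can be pictured with a toy model. All names here are invented for illustration; this is a sketch of the general posted-write/flush idea, not the patented implementation:

```python
from collections import deque

class Device:
    """Stand-in peripheral with a small local memory."""
    def __init__(self):
        self.memory = {}

class PeripheralBus:
    """Toy bus in which writes are posted (buffered) and a later
    flushing transaction forces them to reach their target."""
    def __init__(self):
        self.posted = deque()

    def post_write(self, target, addr, data):
        # First command: the write completes at the sender before the
        # data has actually arrived at the target device.
        self.posted.append((target, addr, data))

    def flush(self):
        # Second command: drain every earlier posted write, so that
        # afterwards the target is guaranteed to hold the data.
        while self.posted:
            target, addr, data = self.posted.popleft()
            target.memory[addr] = data
```

Only after `flush()` returns may the second device safely process the written data; processing between `post_write()` and `flush()` could observe stale memory.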

    Low-latency processing in a network node

    Publication number: US10218645B2

    Publication date: 2019-02-26

    Application number: US14247255

    Filing date: 2014-04-08

    Abstract: A method in a network node that includes a host and an accelerator includes holding a work queue that stores work elements, a notifications queue that stores notifications of the work elements, and control indices for adding and removing the work elements and the notifications to and from the work queue and the notifications queue, respectively. The notifications queue resides on the accelerator, and at least some of the control indices reside on the host. Messages are exchanged between a network and the network node using the work queue, the notifications queue and the control indices.
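The queue arrangement above can be sketched as a ring driven by producer/consumer indices. In the patent, the notifications queue resides on the accelerator and at least some indices on the host; the single-process model below (with invented names and sizes) only illustrates the index mechanics:

```python
class SplitQueues:
    """Toy model: a work queue plus a notifications queue, both
    advanced by shared producer/consumer control indices."""
    def __init__(self, size=8):
        self.size = size
        self.work = [None] * size     # work elements
        self.notif = [None] * size    # notifications of those elements
        self.prod = 0                 # producer index (host side)
        self.cons = 0                 # consumer index

    def post(self, element):
        # Add a work element and its notification at the producer index.
        if self.prod - self.cons == self.size:
            raise RuntimeError("queue full")
        slot = self.prod % self.size
        self.work[slot] = element
        self.notif[slot] = ("new", slot)
        self.prod += 1

    def poll(self):
        # Remove the next notification/element pair at the consumer index.
        if self.cons == self.prod:
            return None
        slot = self.cons % self.size
        self.cons += 1
        note, element = self.notif[slot], self.work[slot]
        self.work[slot] = self.notif[slot] = None
        return note, element
```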

    Enforcing transaction order in peer-to-peer interactions (invention application, published and under examination)

    Publication number: US20160378709A1

    Publication date: 2016-12-29

    Application number: US15177348

    Filing date: 2016-06-09

    Abstract: A method for computing includes submitting a first command from a central processing unit (CPU) to a first peripheral device in a computer to write data in a first bus transaction over a peripheral component bus in the computer to a second peripheral device in the computer. A second command is submitted from the CPU to one of the first and second peripheral devices to execute a second bus transaction, subsequent to the first bus transaction, that will flush the data from the peripheral component bus to the second peripheral device. The first and second bus transactions are executed in response to the first and second commands. Following completion of the second bus transaction, the second peripheral device processes the written data.


    Network-based computational accelerator (invention application, published and under examination)

    Publication number: US20160330112A1

    Publication date: 2016-11-10

    Application number: US15145983

    Filing date: 2016-05-04

    Abstract: A data processing device includes a first packet communication interface for communication with at least one host processor via a network interface controller (NIC) and a second packet communication interface for communication with a packet data network. A memory holds a flow state table containing context information with respect to multiple packet flows conveyed between the host processor and the network via the first and second packet communication interfaces. Acceleration logic, coupled between the first and second packet communication interfaces, performs computational operations on payloads of packets in the multiple packet flows using the context information in the flow state table.
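The flow state table can be pictured as a mapping from a flow identifier (for instance a 5-tuple) to per-flow context that the acceleration logic consults on every packet. The sketch below uses invented field names and a stand-in operation, purely to illustrate the lookup-then-compute pattern:

```python
class FlowAccelerator:
    """Toy acceleration logic keyed by a per-flow context table."""
    def __init__(self):
        self.flow_state = {}               # flow id -> context dict

    def process(self, flow_id, payload):
        # Look up (or create) the context for this flow, update it, and
        # apply a stand-in "computational operation" to the payload.
        ctx = self.flow_state.setdefault(flow_id, {"bytes": 0, "pkts": 0})
        ctx["bytes"] += len(payload)
        ctx["pkts"] += 1
        return payload.upper()             # placeholder transformation
```

Keeping the context in a table indexed by flow is what lets a single accelerator interleave packets from many flows without losing per-flow state.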


    Collaborative hardware interaction by multiple entities using a shared queue

    Publication number: US10331595B2

    Publication date: 2019-06-25

    Application number: US14918599

    Filing date: 2015-10-21

    Abstract: A method for interaction by a central processing unit (CPU) and peripheral devices in a computer includes allocating, in a memory, a work queue for controlling a first peripheral device of the computer. The CPU prepares a work request for insertion in the allocated work queue, the work request specifying an operation for execution by the first peripheral device. A second peripheral device of the computer submits an instruction to the first peripheral device to execute the work request that was prepared by the CPU and thereby to perform the operation specified by the work request.
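The division of labor in this abstract — the CPU prepares the work request, while a different peripheral triggers its execution — can be sketched as follows. All class and function names are invented; this is an illustrative model, not the patented mechanism:

```python
class SharedWorkQueue:
    """Work queue allocated in memory for the first peripheral device."""
    def __init__(self):
        self.entries = []

class FirstDevice:
    """Executes queued work requests, but only when instructed."""
    def __init__(self, queue):
        self.queue = queue
        self.executed = []

    def doorbell(self):
        # Triggered by another peripheral, not by the CPU.
        while self.queue.entries:
            self.executed.append(self.queue.entries.pop(0))

def cpu_prepare(queue, work_request):
    # The CPU only inserts the work request; it never rings the doorbell.
    queue.entries.append(work_request)

def second_device_submit(first_device):
    # The second peripheral instructs the first to execute the queued work.
    first_device.doorbell()
```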

    Host bus access by add-on devices via a network interface controller

    Publication number: US10152441B2

    Publication date: 2018-12-11

    Application number: US15154945

    Filing date: 2016-05-14

    Abstract: Peripheral apparatus for use with a host computer includes an add-on device, which includes a first network port coupled to one end of a packet communication link and add-on logic, which is configured to receive and transmit packets containing data over the packet communication link and to perform computational operations on the data. A network interface controller (NIC) includes a host bus interface, configured for connection to the host bus of the host computer and a second network port, coupled to the other end of the packet communication link. Packet processing logic in the NIC is coupled between the host bus interface and the second network port, and is configured to translate between the packets transmitted and received over the packet communication link and transactions executed on the host bus so as to provide access between the add-on device and the resources of the host computer.
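The NIC's packet-processing logic described above translates between link packets and host-bus transactions. A minimal sketch of that translation, assuming an invented packet format (`op`/`addr`/`data` fields) and a simplified host bus:

```python
class HostBus:
    """Stand-in for the host bus and the memory resources behind it."""
    def __init__(self):
        self.memory = {}

    def write(self, addr, data):
        self.memory[addr] = data

    def read(self, addr):
        return self.memory.get(addr)

class NICTranslator:
    """Toy packet-processing logic: turns packets received over the
    packet communication link into transactions on the host bus."""
    def __init__(self, bus):
        self.bus = bus

    def handle_packet(self, packet):
        # `packet` uses an invented dict format for illustration.
        if packet["op"] == "write":
            self.bus.write(packet["addr"], packet["data"])
            return None
        if packet["op"] == "read":
            # The result travels back to the add-on device as a packet.
            return {"op": "read-response",
                    "data": self.bus.read(packet["addr"])}
        raise ValueError("unknown op")
```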

    Network-based computational accelerator

    Publication number: US10135739B2

    Publication date: 2018-11-20

    Application number: US15145983

    Filing date: 2016-05-04

    Abstract: A data processing device includes a first packet communication interface for communication with at least one host processor via a network interface controller (NIC) and a second packet communication interface for communication with a packet data network. A memory holds a flow state table containing context information with respect to multiple packet flows conveyed between the host processor and the network via the first and second packet communication interfaces. Acceleration logic, coupled between the first and second packet communication interfaces, performs computational operations on payloads of packets in the multiple packet flows using the context information in the flow state table.

    Direct access to local memory in a PCI-E device

    Publication number: US10120832B2

    Publication date: 2018-11-06

    Application number: US14721009

    Filing date: 2015-05-26

    Abstract: A method includes communicating between at least first and second devices over a bus in accordance with a bus address space, including providing direct access over the bus to a local address space of the first device by mapping at least some of the addresses of the local address space to the bus address space. In response to an indication, by the first device or the second device, that the second device needs to access a local address in the local address space that is not currently mapped to the bus address space, the local address is mapped to the bus address space and is then accessed directly by the second device using the mapping.
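The on-demand mapping step can be sketched as follows: a local address that is not yet visible in the bus address space gets a bus address assigned first, after which the peer accesses it directly through that mapping. Names and address constants are invented for illustration:

```python
class MappedDevice:
    """Toy model of a device exposing local memory through bus mappings."""
    def __init__(self):
        self.local = {}            # local address -> data
        self.bus_map = {}          # local address -> bus address
        self.next_bus_addr = 0x1000

    def map_address(self, local_addr):
        # Map a local address into the bus address space on demand;
        # an existing mapping is simply reused.
        if local_addr not in self.bus_map:
            self.bus_map[local_addr] = self.next_bus_addr
            self.next_bus_addr += 8
        return self.bus_map[local_addr]

def peer_access(device, local_addr):
    # If the address is not yet mapped, establish the mapping first,
    # then access the local memory directly through it.
    bus_addr = device.map_address(local_addr)
    return bus_addr, device.local.get(local_addr)
```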
