Caching policy in a multicore system on a chip (SOC)

    Publication No.: US10789175B2

    Publication Date: 2020-09-29

    Application No.: US15610823

    Filing Date: 2017-06-01

    Abstract: A computing system comprises one or more cores. Each core comprises a processor and a switch, with each processor coupled to a communication network among the cores. Also disclosed are techniques for implementing an adaptive allocation policy in a last level cache of a multicore system: receiving one or more new blocks to allocate for storage in the cache; accessing a selected profile, from plural profiles that define allocation actions, according to a least recently used (LRU) type of allocation and based on a cache action, a state bit, and a traffic pattern type for the new blocks of data; and handling the new block according to the selected profile for a selected LRU position in the cache.
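    The profile-driven allocation the abstract describes can be sketched roughly as follows (a minimal illustration; the class, the profile keys, and the position encoding are assumptions, not taken from the patent):

```python
class ProfiledLRUCache:
    """One cache set with LRU order; profiles choose the insertion position."""

    def __init__(self, ways, profiles):
        self.ways = ways
        self.order = []           # index 0 = most recently used
        self.profiles = profiles  # (traffic_pattern, state_bit) -> insert position

    def allocate(self, block, pattern, state_bit):
        """Place a new block at the LRU position its profile selects."""
        pos = self.profiles.get((pattern, state_bit), 0)
        if len(self.order) >= self.ways:
            self.order.pop()      # evict the LRU victim
        self.order.insert(min(pos, len(self.order)), block)

    def touch(self, block):
        """Promote a hit block to the MRU position."""
        if block in self.order:
            self.order.remove(block)
            self.order.insert(0, block)
```

    A streaming (low-reuse) profile can insert near the LRU end so one-shot blocks are evicted quickly, while a high-reuse profile inserts at the MRU end.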

    NIC with switching functionality between network ports

    Publication No.: US10454991B2

    Publication Date: 2019-10-22

    Application No.: US14658260

    Filing Date: 2015-03-16

    Abstract: A network interface device includes a host interface for connection to a host processor and a network interface, which is configured to transmit and receive data packets over a network, and which comprises multiple distinct physical ports configured for connection to the network. Processing circuitry is configured to receive, via one of the physical ports, a data packet from the network and to decide, responsively to a destination identifier in the packet, whether to deliver a payload of the data packet to the host processor via the host interface or to forward the data packet to the network via another one of the physical ports.
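    The receive-path decision in this abstract amounts to a destination lookup; a minimal sketch, assuming a dictionary-based packet representation and a two-port NIC (all names invented for illustration):

```python
def handle_packet(packet, local_ids, ports, rx_port):
    """Deliver the payload to the host if the destination is local,
    otherwise switch the packet out through another physical port."""
    if packet["dst"] in local_ids:
        return ("host", packet["payload"])
    out_port = next(p for p in ports if p != rx_port)
    return ("forward", out_port)
```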

    25. Host bus access by add-on devices via a network interface controller (invention application, pending/published)

    Publication No.: US20160342547A1

    Publication Date: 2016-11-24

    Application No.: US15154945

    Filing Date: 2016-05-14

    Abstract: Peripheral apparatus for use with a host computer includes an add-on device, which includes a first network port coupled to one end of a packet communication link and add-on logic, which is configured to receive and transmit packets containing data over the packet communication link and to perform computational operations on the data. A network interface controller (NIC) includes a host bus interface, configured for connection to the host bus of the host computer and a second network port, coupled to the other end of the packet communication link. Packet processing logic in the NIC is coupled between the host bus interface and the second network port, and is configured to translate between the packets transmitted and received over the packet communication link and transactions executed on the host bus so as to provide access between the add-on device and the resources of the host computer.
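    The translation between link packets and host-bus transactions might be sketched as below; the packet tuple layout and transaction fields are assumptions for illustration, not the patented format:

```python
def packet_to_transaction(pkt):
    """Map a link packet (op, addr, data_or_len) to a host-bus transaction."""
    op, addr, data = pkt
    if op == "WR":
        return {"type": "write", "addr": addr, "data": data}
    return {"type": "read", "addr": addr, "len": data}

def transaction_to_packet(txn):
    """Map a completed bus transaction back to a link packet for the add-on."""
    if txn["type"] == "write":
        return ("WR_ACK", txn["addr"], None)
    return ("RD_DATA", txn["addr"], txn["data"])
```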

    26. Efficient transport flow processing on an accelerator (invention application, pending/published)

    Publication No.: US20160330301A1

    Publication Date: 2016-11-10

    Application No.: US15146013

    Filing Date: 2016-05-04

    Abstract: Data processing apparatus includes a host processor and a network interface controller (NIC), which is configured to couple the host processor to a packet data network. A memory holds a flow state table containing context information with respect to computational operations to be performed on multiple packet flows conveyed between the host processor and the network. Acceleration logic is coupled to perform the computational operations on payloads of packets in the multiple packet flows using the context information in the flow state table.
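    A flow state table of this kind can be illustrated with a small sketch, assuming flows are keyed by the usual 5-tuple and each entry carries mutable per-flow context for the acceleration logic (names are invented):

```python
flow_table = {}

def flow_key(pkt):
    """Identify a flow by its 5-tuple."""
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])

def accelerate(pkt, operation):
    """Look up (or create) the flow's context, update it, and apply the
    per-flow computational operation to the packet payload."""
    ctx = flow_table.setdefault(flow_key(pkt), {"bytes": 0})
    ctx["bytes"] += len(pkt["payload"])
    return operation(pkt["payload"], ctx)
```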

    27. ADDRESS TRANSLATION SERVICES FOR DIRECT ACCESSING OF LOCAL MEMORY OVER A NETWORK FABRIC (invention application, granted)

    Publication No.: US20160077976A1

    Publication Date: 2016-03-17

    Application No.: US14953462

    Filing Date: 2015-11-30

    Abstract: A method in a system that includes first and second devices that communicate with one another over a fabric that operates in accordance with a fabric address space, and in which the second device accesses a local memory via a local connection and not over the fabric, includes sending from the first device to a translation agent (TA) a translation request that specifies an untranslated address in an address space according to which the first device operates, for directly accessing the local memory of the second device. A translation response that specifies a respective translated address in the fabric address space, which the first device is to use instead of the untranslated address is received by the first device. The local memory of the second device is directly accessed by the first device over the fabric by converting the untranslated address to the translated address.
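    The request/response translation flow can be sketched as follows, assuming a dictionary-backed translation agent and memory (a hypothetical illustration, not the patented implementation):

```python
class TranslationAgent:
    """Maps untranslated device addresses to fabric addresses."""

    def __init__(self, mapping):
        self.mapping = mapping  # untranslated address -> fabric address

    def translate(self, untranslated):
        return self.mapping[untranslated]

def direct_access(ta, local_memory, untranslated):
    """First device: obtain the fabric address from the TA, then access
    the second device's local memory directly over the fabric."""
    fabric_addr = ta.translate(untranslated)  # translation request/response
    return local_memory[fabric_addr]          # direct access using the translated address
```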

    Efficient management of network traffic in a multi-CPU server

    Publication No.: US10164905B2

    Publication Date: 2018-12-25

    Application No.: US14608265

    Filing Date: 2015-01-29

    Abstract: A Network Interface Controller (NIC) includes a network interface, a peer interface and steering logic. The network interface is configured to receive incoming packets from a communication network. The peer interface is configured to communicate with a peer NIC not via the communication network. The steering logic is configured to classify the packets received over the network interface into first incoming packets that are destined to a local Central Processing Unit (CPU) served by the NIC, and second incoming packets that are destined to a remote CPU served by the peer NIC, to forward the first incoming packets to the local CPU, and to forward the second incoming packets to the peer NIC over the peer interface not via the communication network.
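    The steering classification reduces to a destination test; a minimal sketch under assumed names, where packets for a remote CPU go to the peer NIC over the direct peer interface rather than back onto the network:

```python
def steer(pkt, local_cpu_ids):
    """Classify an incoming packet: local CPU served by this NIC,
    or remote CPU reached via the peer NIC over the peer interface."""
    if pkt["dst"] in local_cpu_ids:
        return "local_cpu"
    return "peer_nic"
```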
