TECHNOLOGIES FOR SIDEBAND PERFORMANCE TRACING OF NETWORK TRAFFIC
    1.
    Invention application
    TECHNOLOGIES FOR SIDEBAND PERFORMANCE TRACING OF NETWORK TRAFFIC (Pending, published)

    Publication No.: WO2017112260A1

    Publication Date: 2017-06-29

    Application No.: PCT/US2016/063342

    Filing Date: 2016-11-22

    Abstract: Technologies for tracing network performance include a network computing device configured to receive a network packet from a source endpoint node, process the received network packet, capture trace data corresponding to the network packet as it is processed by the network computing device, and transmit the received network packet to a target endpoint node. The network computing device is further configured to generate a trace data network packet that includes at least a portion of the captured trace data and transmit the trace data network packet to the target endpoint node. The target endpoint node is configured to monitor performance of the network by reconstructing a trace of the network packet based on the trace data of the trace data network packet. Other embodiments are described herein.
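
    A minimal sketch of the idea described above, using hypothetical names (ForwardingDevice, Endpoint, the record fields) rather than the patented design: a forwarding device records sideband trace data while it processes each packet, emits those records in a separate trace-data packet, and the receiving endpoint reconstructs the packet's per-hop trace from them.

        import time

        class ForwardingDevice:
            def __init__(self, name):
                self.name = name
                self.records = []          # trace data captured as packets are processed

            def process(self, pkt):
                ingress = time.monotonic()
                # ... normal packet processing / forwarding would happen here ...
                egress = time.monotonic()
                self.records.append({"hop": self.name, "flow": pkt["flow"],
                                     "ingress": ingress, "egress": egress})
                return pkt                 # the original packet is forwarded unchanged

            def trace_data_packet(self):
                # Sideband: trace data travels in its own packet, not in the traffic itself.
                records, self.records = self.records, []
                return {"type": "trace-data", "records": records}

        class Endpoint:
            def __init__(self):
                self.by_flow = {}

            def receive_trace(self, trace_pkt):
                for rec in trace_pkt["records"]:
                    self.by_flow.setdefault(rec["flow"], []).append(rec)

            def reconstruct(self, flow):
                # Rebuild the per-hop trace of a packet, ordered by ingress time.
                return sorted(self.by_flow.get(flow, []), key=lambda r: r["ingress"])

        dev, dst = ForwardingDevice("switch-0"), Endpoint()
        dev.process({"flow": 7, "payload": b"hello"})
        dst.receive_trace(dev.trace_data_packet())
        print(dst.reconstruct(7))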


    TECHNOLOGIES FOR RECEIVE SIDE MESSAGE INSPECTION AND FILTERING
    2.
    Invention application
    TECHNOLOGIES FOR RECEIVE SIDE MESSAGE INSPECTION AND FILTERING (Pending, published)

    Publication No.: WO2017052975A1

    Publication Date: 2017-03-30

    Application No.: PCT/US2016/048686

    Filing Date: 2016-08-25

    CPC classification number: H04L51/12

    Abstract: Technologies for filtering a received message include a receiving computing device to receive messages and a sender computing device to send messages. The receiving computing device is configured to retrieve a descriptor from a received message and retrieve another descriptor from an inspection entry of a network port entry, which the receiving computing device selects from a network port entry table based on the logical network port on which the message was received. The receiving computing device is further configured to compare the descriptors to determine whether they match. Upon finding a match, the receiving computing device is configured to perform an operation corresponding to the inspection entry whose descriptor matches the descriptor of the message. Other embodiments are described and claimed.
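
    A minimal sketch of the receive-side flow, with all names (port_entry_table, the descriptor values, the deliver/drop operations) invented for illustration: the receiver selects the network port entry for the logical port the message arrived on, compares the message's descriptor against each inspection entry's descriptor, and performs the matching entry's operation.

        def deliver(msg): print("delivered:", msg["body"])
        def drop(msg):    print("dropped:", msg["body"])

        # Per logical network port: inspection entries mapping a descriptor to an operation.
        port_entry_table = {
            0: {"inspection_entries": [
                    {"descriptor": "telemetry", "operation": deliver},
                    {"descriptor": "debug",     "operation": drop},
                ]},
        }

        def receive(msg, logical_port):
            port_entry = port_entry_table.get(logical_port)
            if port_entry is None:
                return drop(msg)                      # no entry for this port: default policy
            for entry in port_entry["inspection_entries"]:
                if entry["descriptor"] == msg["descriptor"]:   # descriptors match
                    return entry["operation"](msg)
            return deliver(msg)                       # no match: pass the message through

        receive({"descriptor": "debug", "body": "verbose log"}, logical_port=0)
        receive({"descriptor": "telemetry", "body": "cpu=42%"}, logical_port=0)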


    TECHNOLOGIES FOR DYNAMIC WORK QUEUE MANAGEMENT
    3.
    Invention application
    TECHNOLOGIES FOR DYNAMIC WORK QUEUE MANAGEMENT (Pending, published)

    Publication No.: WO2017172216A1

    Publication Date: 2017-10-05

    Application No.: PCT/US2017/020229

    Filing Date: 2017-03-01

    CPC classification number: H04L67/1008

    Abstract: Technologies for dynamic work queue management include a producer computing device communicatively coupled to a consumer computing device. The consumer computing device is configured to transmit a pop request (e.g., a one-sided pull request) that includes consumption constraints indicating an amount of work to pull from the producer computing device (e.g., an acceptable range for the fraction of work elements to return from the producer computing device's work queue). The producer computing device is configured to determine whether the pop request can be satisfied and to generate a response that includes an indication of the result of that determination and one or more producer metrics usable by the consumer computing device, upon receipt of the response message, to determine a subsequent action to perform. Other embodiments are described and claimed herein.
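
    A minimal sketch of the pop-request exchange, using invented names and thresholds: the consumer asks to pull between 10% and 50% of the producer's queued work, and the producer's response carries both the pulled work elements (if the request can be satisfied) and a queue-depth metric the consumer can use to decide its next action.

        from collections import deque

        class Producer:
            def __init__(self, work):
                self.queue = deque(work)

            def handle_pop(self, min_frac, max_frac):
                depth = len(self.queue)
                want = int(depth * max_frac)
                satisfiable = depth > 0 and want >= max(1, int(depth * min_frac))
                items = [self.queue.popleft() for _ in range(want)] if satisfiable else []
                return {"satisfied": satisfiable,
                        "work": items,
                        "metrics": {"queue_depth": len(self.queue)}}   # producer metrics

        class Consumer:
            def pull_from(self, producer):
                resp = producer.handle_pop(min_frac=0.1, max_frac=0.5)
                if resp["satisfied"]:
                    print("pulled", len(resp["work"]), "items;",
                          resp["metrics"]["queue_depth"], "left at the producer")
                elif resp["metrics"]["queue_depth"] == 0:
                    print("producer is idle; try another producer")    # subsequent action
                return resp

        Consumer().pull_from(Producer(work=list(range(8))))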


    TECHNOLOGIES FOR INTEGRATED THREAD SCHEDULING
    4.
    Invention application
    TECHNOLOGIES FOR INTEGRATED THREAD SCHEDULING (Pending, published)

    Publication No.: WO2017105573A2

    Publication Date: 2017-06-22

    Application No.: PCT/US2016/052616

    Filing Date: 2016-09-20

    Abstract: Technologies for integrated thread scheduling include a computing device having a network interface controller (NIC). The NIC is configured to detect and suspend a thread that is being blocked by one or more communication operations. A thread scheduling engine of the NIC is configured to move the suspended thread from a running queue of the system thread scheduler to a pending queue of the thread scheduling engine. The thread scheduling engine is further configured to move the suspended thread from the pending queue to a ready queue of the thread scheduling engine upon determining that any dependencies and/or blocking communication operations have completed. Other embodiments are described and claimed.
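
    A minimal sketch of the queue movement described above, with invented names (ThreadSchedulingEngine, the thread dictionaries) standing in for the NIC-integrated scheduler: a thread blocked on a communication operation is parked in the engine's pending queue and moved to its ready queue once that operation completes.

        from collections import deque

        class ThreadSchedulingEngine:
            def __init__(self):
                self.running = deque()   # stands in for the system scheduler's run queue
                self.pending = deque()   # threads blocked on communication operations
                self.ready = deque()     # unblocked threads, ready to be rescheduled

            def suspend_blocked(self, thread, waiting_on):
                self.running.remove(thread)
                thread["waiting_on"] = waiting_on
                self.pending.append(thread)

            def on_completion(self, op):
                # Called when a blocking communication operation completes.
                for thread in list(self.pending):
                    if thread["waiting_on"] == op:
                        self.pending.remove(thread)
                        self.ready.append(thread)

        engine = ThreadSchedulingEngine()
        worker = {"name": "worker-0", "waiting_on": None}
        engine.running.append(worker)
        engine.suspend_blocked(worker, waiting_on="recv#17")   # blocked by a receive
        engine.on_completion("recv#17")                        # the receive finished
        print([t["name"] for t in engine.ready])               # ['worker-0']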


    TECHNOLOGIES FOR AGGREGATION-BASED MESSAGE SYNCHRONIZATION
    5.
    Invention application
    TECHNOLOGIES FOR AGGREGATION-BASED MESSAGE SYNCHRONIZATION (Pending, published)

    Publication No.: WO2017105558A2

    Publication Date: 2017-06-22

    Application No.: PCT/US2016/048162

    Filing Date: 2016-08-23

    CPC classification number: H04L47/62 H04L47/30 H04L47/41 Y02D50/30

    Abstract: Technologies for aggregation-based message processing include multiple computing nodes in communication over a network. A computing node receives a message from a remote computing node, increments an event counter in response to receiving the message, determines whether an event trigger is satisfied in response to incrementing the counter, and writes a completion event to an event queue if the event trigger is satisfied. An application of the computing node monitors the event queue for the completion event. The application may be executed by a processor core of the computing node, and the other operations may be performed by a host fabric interface of the computing node. The computing node may be a target node and count one-sided messages received from an initiator node, or the computing node may be an initiator node and count acknowledgement messages received from a target node. Other embodiments are described and claimed.
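
    A minimal sketch of the counter-and-trigger mechanism, with invented names (HostFabricInterface, trigger_count): the interface increments a counter per received message, checks the event trigger on each increment, and posts one completion event to an event queue when the threshold is reached; the application polls only that event queue.

        from collections import deque

        class HostFabricInterface:
            def __init__(self, trigger_count):
                self.counter = 0
                self.trigger_count = trigger_count
                self.event_queue = deque()

            def on_message(self, msg):
                self.counter += 1                         # increment per received message
                if self.counter == self.trigger_count:    # event trigger satisfied?
                    self.event_queue.append({"event": "completion",
                                             "messages": self.counter})

        def application_poll(hfi):
            # The application core monitors the event queue, not individual messages.
            return hfi.event_queue.popleft() if hfi.event_queue else None

        hfi = HostFabricInterface(trigger_count=4)
        for seq in range(4):
            hfi.on_message({"seq": seq})
        print(application_poll(hfi))    # {'event': 'completion', 'messages': 4}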


    FABRIC-INTEGRATED DATA PULLING ENGINE
    6.
    Invention application
    FABRIC-INTEGRATED DATA PULLING ENGINE (Pending, published)

    Publication No.: WO2017112346A1

    Publication Date: 2017-06-29

    Application No.: PCT/US2016/063795

    Filing Date: 2016-11-26

    CPC classification number: H04L49/35 G06F15/17331

    Abstract: In an example, there is disclosed a compute node, comprising: first one or more logic elements comprising a data producer engine to produce a datum; and a host fabric interface to communicatively couple the compute node to a fabric, the host fabric interface comprising second one or more logic elements comprising a data pulling engine, the data pulling engine to: publish the datum as available; receive a pull request for the datum, the pull request comprising a node identifier for a data consumer; and send the datum to the data consumer via the fabric. There is also disclosed a method of providing a data pulling engine.
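
    A minimal sketch of the pull flow, with invented names (Fabric, DataPullingEngine) rather than the disclosed hardware: the producing node publishes a datum as available, accepts a pull request carrying the consumer's node identifier, and sends the datum to that consumer over the fabric.

        class Fabric:
            def __init__(self):
                self.inboxes = {}

            def send(self, node_id, datum):
                self.inboxes.setdefault(node_id, []).append(datum)

        class DataPullingEngine:
            def __init__(self, fabric):
                self.fabric = fabric
                self.available = {}

            def publish(self, key, datum):
                self.available[key] = datum          # publish the datum as available

            def on_pull_request(self, key, consumer_node_id):
                datum = self.available.get(key)
                if datum is None:
                    return False                     # not (yet) published
                self.fabric.send(consumer_node_id, datum)   # ship it to the requester
                return True

        fabric = Fabric()
        engine = DataPullingEngine(fabric)
        engine.publish("result:42", b"partial sums")
        engine.on_pull_request("result:42", consumer_node_id="node-3")
        print(fabric.inboxes["node-3"])              # [b'partial sums']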


    TECHNOLOGIES FOR AGGREGATION-BASED MESSAGE SYNCHRONIZATION
    7.

    Publication No.: WO2017105558A3

    Publication Date: 2017-06-22

    Application No.: PCT/US2016/048162

    Filing Date: 2016-08-23

    Abstract: Technologies for aggregation-based message processing include multiple computing nodes in communication over a network. A computing node receives a message from a remote computing node, increments an event counter in response to receiving the message, determines whether an event trigger is satisfied in response to incrementing the counter, and writes a completion event to an event queue if the event trigger is satisfied. An application of the computing node monitors the event queue for the completion event. The application may be executed by a processor core of the computing node, and the other operations may be performed by a host fabric interface of the computing node. The computing node may be a target node and count one-sided messages received from an initiator node, or the computing node may be an initiator node and count acknowledgement messages received from a target node. Other embodiments are described and claimed.

    TECHNOLOGIES FOR AUTOMATIC PROCESSOR CORE ASSOCIATION MANAGEMENT AND COMMUNICATION USING DIRECT DATA PLACEMENT IN PRIVATE CACHES
    8.
    Invention application
    TECHNOLOGIES FOR AUTOMATIC PROCESSOR CORE ASSOCIATION MANAGEMENT AND COMMUNICATION USING DIRECT DATA PLACEMENT IN PRIVATE CACHES (Pending, published)

    Publication No.: WO2017091257A3

    Publication Date: 2017-06-01

    Application No.: PCT/US2016/048415

    Filing Date: 2016-08-24

    Abstract: Technologies for communication with direct data placement include a number of computing nodes in communication over a network. Each computing node includes a many-core processor having an integrated host fabric interface (HFI) that maintains an association table (AT). In response to receiving a message from a remote device, the HFI determines whether the AT includes an entry associating one or more parameters of the message to a destination processor core. If so, the HFI causes a data transfer agent (DTA) of the destination core to receive the message data. The DTA may place the message data in a private cache of the destination core. Message parameters may include a destination process identifier or other network address and a virtual memory address range. The HFI may automatically update the AT based on communication operations generated by software executed by the processor cores. Other embodiments are described and claimed.
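
    A minimal sketch of the association-table lookup, with invented names and a plain list standing in for a core's private cache: the host fabric interface matches a message's parameters (process id and virtual address range) against its association table and, on a hit, hands the payload to the destination core's data transfer agent.

        class DataTransferAgent:
            def __init__(self, core_id):
                self.core_id = core_id
                self.private_cache = []      # placeholder for the core's private cache

            def receive(self, payload):
                self.private_cache.append(payload)

        class HostFabricInterface:
            def __init__(self):
                self.association_table = []  # entries: (pid, (lo, hi) address range, DTA)
                self.fallback = DataTransferAgent(core_id="unassociated")

            def associate(self, pid, addr_range, dta):
                self.association_table.append((pid, addr_range, dta))

            def on_message(self, pid, addr, payload):
                for entry_pid, (lo, hi), dta in self.association_table:
                    if pid == entry_pid and lo <= addr < hi:   # message parameters match
                        dta.receive(payload)                   # direct placement at that core
                        return dta.core_id
                self.fallback.receive(payload)                 # no association found
                return self.fallback.core_id

        hfi = HostFabricInterface()
        core3 = DataTransferAgent(core_id=3)
        hfi.associate(pid=100, addr_range=(0x1000, 0x2000), dta=core3)
        print(hfi.on_message(pid=100, addr=0x1800, payload=b"halo exchange"))   # 3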


    TECHNOLOGIES FOR AUTOMATIC PROCESSOR CORE ASSOCIATION MANAGEMENT AND COMMUNICATION USING DIRECT DATA PLACEMENT IN PRIVATE CACHES
    9.

    Publication No.: WO2017091257A2

    Publication Date: 2017-06-01

    Application No.: PCT/US2016/048415

    Filing Date: 2016-08-24

    CPC classification number: H04L67/2852 G06F9/46

    Abstract: Technologies for communication with direct data placement include a number of computing nodes in communication over a network. Each computing node includes a many-core processor having an integrated host fabric interface (HFI) that maintains an association table (AT). In response to receiving a message from a remote device, the HFI determines whether the AT includes an entry associating one or more parameters of the message to a destination processor core. If so, the HFI causes a data transfer agent (DTA) of the destination core to receive the message data. The DTA may place the message data in a private cache of the destination core. Message parameters may include a destination process identifier or other network address and a virtual memory address range. The HFI may automatically update the AT based on communication operations generated by software executed by the processor cores. Other embodiments are described and claimed.

    TECHNOLOGIES FOR PROXY-BASED MULTI-THREADED MESSAGE PASSING COMMUNICATION
    10.
    Invention application
    TECHNOLOGIES FOR PROXY-BASED MULTI-THREADED MESSAGE PASSING COMMUNICATION (Pending, published)

    Publication No.: WO2016039897A1

    Publication Date: 2016-03-17

    Application No.: PCT/US2015/043936

    Filing Date: 2015-08-06

    Abstract: Technologies for proxy-based multithreaded message passing include a number of computing nodes in communication over a network. Each computing node establishes a number of message passing interface (MPI) endpoints associated with threads executed within a host process. The threads generate MPI operations that are forwarded to a number of proxy processes. Each proxy process performs the MPI operation using an instance of a system MPI library. The threads may communicate with the proxy processes using a shared-memory communication method. Each thread may be assigned to a particular proxy process. Each proxy process may be assigned dedicated networking resources. MPI operations may include sending or receiving a message, collective operations, and one-sided operations. Other embodiments are described and claimed.
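
    A minimal sketch of the proxy arrangement, not real MPI: a multiprocessing queue stands in for the shared-memory channel between an application thread and its assigned proxy process, and execute_mpi_op() is a stub where the proxy would call into its own instance of the system MPI library.

        import multiprocessing as mp
        import threading

        def execute_mpi_op(op):
            # Placeholder for the proxy's call into its private MPI library instance.
            print(f"proxy executing {op['kind']} of {op['size']} bytes to rank {op['dest']}")

        def proxy_process(op_queue):
            while True:
                op = op_queue.get()
                if op is None:            # shutdown sentinel
                    break
                execute_mpi_op(op)

        def app_thread(thread_id, op_queue):
            # Each application thread forwards its MPI operations to its assigned proxy.
            op_queue.put({"kind": "send", "dest": thread_id, "size": 4096})

        if __name__ == "__main__":
            queue = mp.Queue()            # stands in for a shared-memory channel
            proxy = mp.Process(target=proxy_process, args=(queue,))
            proxy.start()
            threads = [threading.Thread(target=app_thread, args=(i, queue)) for i in range(2)]
            for t in threads: t.start()
            for t in threads: t.join()
            queue.put(None)
            proxy.join()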

