METHOD AND APPARATUS TO SECURELY MEASURE QUALITY OF SERVICE END TO END IN A NETWORK

    Publication No.: US20170093677A1

    Publication Date: 2017-03-30

    Application No.: US14865136

    Filing Date: 2015-09-25

    CPC classification number: H04L43/12 H04L43/04 H04L43/087 H04L43/106 H04L49/70

    Abstract: Methods and apparatus to securely measure quality of service end to end in a network. First and second endpoints are configured to detect packets marked for QoS measurements, associate a timestamp using a secure clock with such marked packets, and report the timestamp along with packet identifying metadata to an external monitor. The external monitor uses the packet identifying metadata to match up timestamps and calculates a QoS measurement corresponding to the latency incurred by the packet when traversing a packet-processing path between the first and second endpoints. The endpoints may be implemented in physical devices, such as Ethernet controllers and physical switches, as well as virtual, software-defined components including virtual switches.
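
The monitor-side matching described in the abstract (pair up per-packet timestamp reports from two endpoints and compute the path latency) can be sketched in Python. The class and method names are illustrative, not taken from the patent:

```python
class QoSMonitor:
    """Hypothetical external monitor: endpoints report (packet_id, timestamp)
    for packets marked for QoS measurement; the monitor matches the two
    reports by packet-identifying metadata and computes the latency."""

    def __init__(self):
        self.pending = {}  # packet_id -> timestamp from the first endpoint

    def report(self, packet_id, timestamp):
        """Called by an endpoint; returns the latency once both reports arrive."""
        if packet_id in self.pending:
            # Second report for this packet: latency across the path.
            return timestamp - self.pending.pop(packet_id)
        self.pending[packet_id] = timestamp
        return None

monitor = QoSMonitor()
monitor.report("pkt-42", 100.000)            # first endpoint (ingress)
latency = monitor.report("pkt-42", 100.004)  # second endpoint (egress)
```

In practice the timestamps would come from a secure clock on each endpoint, as the abstract requires; the sketch only shows the matching step.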

    LOADING DATA USING SUB-THREAD INFORMATION IN A PROCESSOR
    Invention application (pending, published)

    Publication No.: US20170039144A1

    Publication Date: 2017-02-09

    Application No.: US14820802

    Filing Date: 2015-08-07

    Abstract: In one embodiment, a processor includes a core to execute instructions, a cache memory coupled to the core, and a cache controller coupled to the cache memory. The cache controller, responsive to a first load request having a first priority level, is to insert data of the first load request into a first entry of the cache memory and set an age indicator of a metadata field of the first entry to a first age level, the first age level greater than a default age level of a cache insertion policy for load requests, and responsive to a second load request having a second priority level to insert data of the second load request into a second entry of the cache memory and to set an age indicator of a metadata field of the second entry to the default age level, the first and second load requests of a first thread. Other embodiments are described and claimed.
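
The insertion policy in the abstract (a higher-priority load gets an age indicator above the default, so it survives longer under age-based eviction) can be sketched as follows. The age values and class names are assumptions for illustration only:

```python
DEFAULT_AGE = 1        # default insertion age of the cache policy
HIGH_PRIORITY_AGE = 3  # assumed: any value greater than DEFAULT_AGE

class CacheController:
    """Hypothetical sketch of the controller's insertion behavior:
    the age stored in an entry's metadata field depends on the
    priority level of the load request that filled it."""

    def __init__(self):
        self.entries = {}  # address -> {"data": ..., "age": ...}

    def insert_load(self, address, data, high_priority=False):
        age = HIGH_PRIORITY_AGE if high_priority else DEFAULT_AGE
        self.entries[address] = {"data": data, "age": age}
        return age

ctrl = CacheController()
ctrl.insert_load(0x100, "A", high_priority=True)  # first load request
ctrl.insert_load(0x200, "B")                      # second load request
```

An age-based replacement policy would then prefer to evict the entry inserted at the default age first.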

    Channel aware job scheduling
    Invention grant (in force)

    Publication No.: US09253793B2

    Publication Date: 2016-02-02

    Application No.: US13720169

    Filing Date: 2012-12-19

    CPC classification number: H04W72/1226

    Abstract: Methods and systems may provide for determining quality of service (QoS) information for a job associated with an application, and determining a condition prediction for a wireless channel of a mobile platform. Additionally, the job may be scheduled for communication over the wireless channel based at least in part on the QoS information and the condition prediction. In one example, scheduling the job includes imposing a delay in the communication if the condition prediction indicates that a throughput of the wireless channel is below a threshold and the delay complies with a latency constraint of the QoS information.
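
The scheduling rule in the abstract's example (delay the job only when predicted throughput is below a threshold and the delay still fits the QoS latency constraint) reduces to a small decision function. The function name and units are illustrative assumptions:

```python
def schedule_delay(predicted_throughput, threshold, candidate_delay, latency_limit):
    """Return the delay to impose before communicating the job.
    Defer only when the channel is predicted to be poor AND the
    delay complies with the job's latency constraint from its QoS info."""
    if predicted_throughput < threshold and candidate_delay <= latency_limit:
        return candidate_delay
    return 0  # otherwise send immediately

# Poor channel and the delay fits the latency budget: defer the job.
poor_channel = schedule_delay(2.0, 5.0, 50, 100)
# Good channel: no delay.
good_channel = schedule_delay(8.0, 5.0, 50, 100)
# Poor channel but the delay would violate the constraint: no delay.
too_slow = schedule_delay(2.0, 5.0, 150, 100)
```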

    TECHNOLOGIES FOR DYNAMIC BATCH SIZE MANAGEMENT

    Publication No.: US20230379271A1

    Publication Date: 2023-11-23

    Application No.: US18228420

    Filing Date: 2023-07-31

    CPC classification number: H04L49/9068 H04L47/365 H04L49/9005

    Abstract: Technologies for dynamically managing a batch size of packets include a network device. The network device is to receive, into a queue, packets from a remote node to be processed by the network device, determine a throughput provided by the network device while the packets are processed, determine whether the determined throughput satisfies a predefined condition, and adjust a batch size of packets in response to a determination that the determined throughput satisfies a predefined condition. The batch size is indicative of a threshold number of queued packets required to be present in the queue before the queued packets in the queue can be processed by the network device.
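
A minimal sketch of the queue-and-threshold behavior described above: packets accumulate until the batch size is reached, and the batch size is adjusted when observed throughput meets a condition. The adjustment direction and parameters here are hypothetical, since the abstract leaves the predefined condition open:

```python
class BatchQueue:
    """Sketch of a network-device queue whose batch size (the number of
    queued packets required before the queue is processed) is adjusted
    dynamically based on observed throughput."""

    def __init__(self, batch_size=32):
        self.batch_size = batch_size
        self.queue = []

    def enqueue(self, packet):
        self.queue.append(packet)
        if len(self.queue) >= self.batch_size:
            batch, self.queue = list(self.queue), []
            return batch   # threshold reached: release the batch for processing
        return None        # keep buffering

    def adjust(self, throughput, low_water, step=8, min_size=1, max_size=256):
        # Hypothetical condition: shrink batches when throughput is low
        # (reduces queueing delay), grow them when throughput is healthy.
        if throughput < low_water:
            self.batch_size = max(min_size, self.batch_size - step)
        else:
            self.batch_size = min(max_size, self.batch_size + step)

q = BatchQueue(batch_size=4)
first = q.enqueue("p1")                 # below threshold: buffered
q.adjust(throughput=10, low_water=100)  # low throughput: batch size shrinks
second = q.enqueue("p2")                # threshold now met: batch released
```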

    Offload of data lookup operations
    Invention grant

    Publication No.: US11698929B2

    Publication Date: 2023-07-11

    Application No.: US16207065

    Filing Date: 2018-11-30

    CPC classification number: G06F16/9017 G06F16/906 G06F16/90335

    Abstract: A central processing unit can offload table lookup or tree traversal to an offload engine. The offload engine can provide hardware accelerated operations such as instruction queueing, bit masking, hashing functions, data comparisons, a results queue, and a progress tracking. The offload engine can be associated with a last level cache. In the case of a hash table lookup, the offload engine can apply a hashing function to a key to generate a signature, apply a comparator to compare signatures against the generated signature, retrieve a key associated with the signature, and apply the comparator to compare the key against the retrieved key. Accordingly, a data pointer associated with the key can be provided in the result queue. Acceleration of operations in tree traversal and tuple search can also occur.
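
The hash-table lookup path in the abstract (hash the key to a short signature, compare signatures first, then confirm with a full key comparison, and place the data pointer in a result queue) can be sketched in software. The signature width and class names are illustrative assumptions; the patent describes a hardware engine:

```python
def signature(key):
    """Assumed short signature derived from the key by a hashing function."""
    return hash(key) & 0xFF

class OffloadEngine:
    """Sketch of the lookup flow: signature compare, then key compare,
    with matching data pointers delivered to a results queue."""

    def __init__(self):
        self.slots = []        # (signature, key, data_pointer) triples
        self.result_queue = []

    def insert(self, key, data_pointer):
        self.slots.append((signature(key), key, data_pointer))

    def lookup(self, key):
        sig = signature(key)
        for s, k, ptr in self.slots:
            # Cheap signature comparison first; full key compare only on a hit.
            if s == sig and k == key:
                self.result_queue.append(ptr)
                return ptr
        return None

eng = OffloadEngine()
eng.insert("flow-a", 0x1000)
found = eng.lookup("flow-a")
missing = eng.lookup("flow-b")
```

The signature filter is what makes hardware acceleration attractive: most non-matching slots are rejected without touching the full key.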

    HARDWARE ASSISTED EFFICIENT MEMORY MANAGEMENT FOR DISTRIBUTED APPLICATIONS WITH REMOTE MEMORY ACCESSES

    Publication No.: US20230114263A1

    Publication Date: 2023-04-13

    Application No.: US18065241

    Filing Date: 2022-12-13

    Abstract: Systems, apparatuses and methods may provide for technology that uses centralized hardware to detect a local allocation request associated with a local thread, detect a remote allocation request associated with a remote thread, wherein the remote allocation request bypasses a remote operating system, and process the local allocation request and the remote allocation request with respect to central heap, wherein the central heap is shared by the local thread and the remote thread. The local allocation request and the remote allocation request may include one or more of a first request to allocate a memory block of a specified size, a second request to allocate multiple memory blocks of a same size, a third request to resize a previously allocated memory block, or a fourth request to deallocate the previously allocated memory block.
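
The four request types the abstract lists (allocate one block, allocate multiple same-size blocks, resize, deallocate) against a central heap shared by local and remote threads can be sketched as follows. Method names are illustrative; in the patented design, remote requests reach the heap through centralized hardware, bypassing the remote OS:

```python
class CentralHeap:
    """Sketch of a central heap serving both local and remote
    allocation requests through the same allocator state."""

    def __init__(self, capacity):
        self.free = capacity
        self.blocks = {}   # block id -> allocated size
        self._next = 0

    def alloc(self, size):                # request 1: one block of a given size
        if size > self.free:
            return None
        self.free -= size
        bid, self._next = self._next, self._next + 1
        self.blocks[bid] = size
        return bid

    def alloc_many(self, count, size):    # request 2: multiple same-size blocks
        return [self.alloc(size) for _ in range(count)]

    def resize(self, bid, new_size):      # request 3: resize an existing block
        delta = new_size - self.blocks[bid]
        if delta > self.free:
            return False
        self.free -= delta
        self.blocks[bid] = new_size
        return True

    def dealloc(self, bid):               # request 4: deallocate
        self.free += self.blocks.pop(bid)

heap = CentralHeap(capacity=1024)
local_blk = heap.alloc(128)    # request from the local thread
remote_blk = heap.alloc(256)   # request from a remote thread, same heap
```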

    Technologies for flow rule aware exact match cache compression

    Publication No.: US11201940B2

    Publication Date: 2021-12-14

    Application No.: US15862311

    Filing Date: 2018-01-04

    Abstract: Technologies for flow rule aware exact match cache compression include multiple computing devices in communication over a network. A computing device reads a network packet from a network port and extracts one or more key fields from the packet to generate a lookup key. The key fields are identified by a key field specification of an exact match flow cache. The computing device may dynamically configure the key field specification based on an active flow rule set. The computing device may compress the key field specification to match a union of non-wildcard fields of the active flow rule set. The computing device may expand the key field specification in response to insertion of a new flow rule. The computing device looks up the lookup key in the exact match flow cache and, if a match is found, applies the corresponding action. Other embodiments are described and claimed.
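
The compression step described above (shrink the key field specification to the union of non-wildcard fields across the active flow rules, then build lookup keys from only those fields) can be sketched in Python. Representing rules as dicts with `None` for wildcard fields is an assumption for illustration:

```python
def compress_key_spec(flow_rules):
    """Compress the exact-match key field specification to the union of
    non-wildcard fields in the active rule set (None marks a wildcard)."""
    spec = set()
    for rule in flow_rules:
        spec |= {field for field, value in rule.items() if value is not None}
    return frozenset(spec)

def lookup_key(packet, spec):
    """Extract only the fields named by the specification to form the
    lookup key for the exact match flow cache."""
    return tuple(sorted((f, packet[f]) for f in spec))

rules = [
    {"src_ip": "10.0.0.1", "dst_ip": None,       "dst_port": 80},
    {"src_ip": None,       "dst_ip": "10.0.0.2", "dst_port": None},
]
spec = compress_key_spec(rules)  # only fields some rule actually matches on
packet = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "dst_port": 80, "ttl": 64}
key = lookup_key(packet, spec)   # ttl is wildcarded everywhere, so it is dropped
```

Inserting a new rule that matches on a previously wildcarded field would require recomputing the specification, which is the expansion case the abstract mentions.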
