ADDRESS TRANSLATION TECHNOLOGIES

    Publication Number: US20210149821A1

    Publication Date: 2021-05-20

    Application Number: US17133503

    Filing Date: 2020-12-23

    Abstract: Examples described herein relate to an apparatus comprising at least one processor that, when operational, is to perform a command to submit a work descriptor to a device, where submission of the work descriptor causes an attempt to substitute an address in the work descriptor before the work descriptor is submitted to the device. In some examples, the address comprises a guest virtual address (GVA) and the substitution comprises replacement of the GVA with a host physical address (HPA) corresponding to the GVA. In some examples, the at least one processor is to substitute the address in the work descriptor with an address translation of that address if the translation is available for access by a processor that performs the command.
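    A minimal sketch of the substitution idea in the abstract, using a hypothetical software model (the translation cache, page mask, and descriptor fields below are illustrative, not the patent's actual structures): the work descriptor's GVA is replaced with an HPA only when a cached translation is available to the submitting processor; otherwise the descriptor is submitted unchanged.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class WorkDescriptor:
    opcode: str
    address: int          # GVA as prepared by the guest
    translated: bool = False

# Hypothetical per-processor translation cache: page-aligned GVA -> HPA.
TRANSLATION_CACHE = {0x0000_7F00_0000_0000: 0x0000_0001_2340_0000}
PAGE_MASK = ~0xFFF

def try_substitute(desc: WorkDescriptor) -> WorkDescriptor:
    """Replace the GVA with its HPA if a translation is cached; otherwise
    leave the descriptor untouched so the device can translate it later."""
    hpa_page = TRANSLATION_CACHE.get(desc.address & PAGE_MASK)
    if hpa_page is None:
        return desc  # no cached translation: submit the GVA as-is
    offset = desc.address & 0xFFF
    return replace(desc, address=hpa_page | offset, translated=True)

def submit(desc: WorkDescriptor) -> None:
    desc = try_substitute(desc)   # attempt substitution before submission
    print(f"submitting {desc.opcode}: addr={desc.address:#x} "
          f"({'HPA' if desc.translated else 'GVA'})")

if __name__ == "__main__":
    submit(WorkDescriptor("copy", 0x0000_7F00_0000_0123))  # cache hit -> HPA
    submit(WorkDescriptor("copy", 0x0000_7E00_0000_0456))  # cache miss -> GVA
```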

    GENERATING, AT LEAST IN PART, AND/OR RECEIVING, AT LEAST IN PART, AT LEAST ONE REQUEST

    Publication Number: US20210099398A1

    Publication Date: 2021-04-01

    Application Number: US17067564

    Filing Date: 2020-10-09

    Abstract: In an embodiment, an apparatus is provided that may include circuitry to generate, at least in part, and/or receive, at least in part, at least one request that at least one network node generate, at least in part, information. The information may be to permit selection, at least in part, of (1) at least one power consumption state of the at least one network node, and (2) at least one time period. The at least one time period may be to elapse, after receipt by at least one other network node of at least one packet, prior to requesting at least one change in the at least one power consumption state. The at least one packet may be to be transmitted to the at least one network node. Of course, many alternatives, modifications, and variations are possible without departing from this embodiment.
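    One plausible reading of the exchange above, sketched with hypothetical message names and fields: a node requests power information from a peer, then uses it to pick (1) a power consumption state and (2) a period to let elapse after a packet is received before requesting a state change. This is only an illustrative interpretation of the abstract, not its claimed mechanism.

```python
from dataclasses import dataclass

@dataclass
class PowerInfoRequest:
    requester: str                       # node asking for the information

@dataclass
class PowerInfo:
    node: str
    power_states: list[str]              # states the node can enter, shallow to deep
    resume_latency_us: dict[str, int]    # time needed to leave each state

def handle_request(req: PowerInfoRequest) -> PowerInfo:
    # The queried node generates, at least in part, the requested information.
    return PowerInfo(node="node-B",
                     power_states=["L0", "L1", "L2"],
                     resume_latency_us={"L0": 0, "L1": 20, "L2": 500})

def select_state_and_delay(info: PowerInfo, idle_budget_us: int):
    """Pick the deepest state whose resume latency fits the idle budget and use
    that latency as the period to wait after packet receipt before requesting
    a power-state change."""
    ok = [s for s in info.power_states if info.resume_latency_us[s] <= idle_budget_us]
    state = ok[-1]                        # deepest acceptable state
    return state, info.resume_latency_us[state]

if __name__ == "__main__":
    info = handle_request(PowerInfoRequest(requester="node-A"))
    print(select_state_and_delay(info, idle_budget_us=100))  # ('L1', 20)
```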

    EFFICIENTLY MERGING NON-IDENTICAL PAGES IN KERNEL SAME-PAGE MERGING (KSM) FOR EFFICIENT AND IMPROVED MEMORY DEDUPLICATION AND SECURITY

    Publication Number: US20240004797A1

    Publication Date: 2024-01-04

    Application Number: US18369090

    Filing Date: 2023-09-15

    CPC classification number: G06F12/0882 G06F12/0842

    Abstract: Methods and apparatus for efficiently merging non-identical pages in Kernel Same-page Merging (KSM) for efficient and improved memory deduplication and security. The methods and apparatus identify memory pages with similar data and selectively merge those pages based on criteria such as a threshold. Memory pages in memory for a computing platform are scanned to identify pages storing similar but not identical data. A delta record between the similar memory pages is created, and it is determined whether the size of the delta (i.e., the amount of content that differs) is less than a threshold. If so, the delta record is used to merge the pages. In one aspect, the operations for creating delta records and merging the content of memory pages using delta records are offloaded from a platform's CPU. Support for memory reads and memory writes is provided utilizing delta records, including merging and unmerging pages under applicable conditions.
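    A minimal sketch of the delta-based merge decision, under assumed conventions (the page size, threshold, and delta representation below are hypothetical): a delta record lists the byte positions where a candidate page differs from a base page, the pages are merged only when the delta is smaller than a threshold, and a read of the merged page is served by applying the delta to the base.

```python
PAGE_SIZE = 4096
DELTA_THRESHOLD = 64          # assumed: maximum number of differing bytes worth merging

def make_delta(base: bytes, page: bytes):
    """Record only the byte positions where `page` differs from `base`."""
    return [(i, page[i]) for i in range(PAGE_SIZE) if page[i] != base[i]]

def try_merge(base: bytes, page: bytes):
    """Return a delta record if the pages are similar enough to merge,
    otherwise None (the page stays unmerged)."""
    delta = make_delta(base, page)
    return delta if len(delta) < DELTA_THRESHOLD else None

def read_merged(base: bytes, delta) -> bytes:
    """Reconstruct (unmerge) the original page from the base plus delta."""
    buf = bytearray(base)
    for offset, value in delta:
        buf[offset] = value
    return bytes(buf)

if __name__ == "__main__":
    base = bytes(PAGE_SIZE)                      # an all-zero page
    similar = bytearray(base); similar[10] = 7   # differs in a single byte
    delta = try_merge(base, bytes(similar))
    assert delta == [(10, 7)]
    assert read_merged(base, delta) == bytes(similar)
    print("merged page reconstructed from base + delta of", len(delta), "byte(s)")
```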

    DATA CONSISTENCY AND DURABILITY OVER DISTRIBUTED PERSISTENT MEMORY SYSTEMS

    Publication Number: US20200371914A1

    Publication Date: 2020-11-26

    Application Number: US16986094

    Filing Date: 2020-08-05

    Abstract: Examples described herein relate to a network interface apparatus that includes packet processing circuitry and a bus interface. In some examples, the packet processing circuitry is to process a received packet that includes data, a request to perform a write operation to write the data to a cache, and an indicator that the data is to be durable and, based at least on the received packet including the request and the indicator, cause the data to be written to the cache and to non-volatile memory. In some examples, the packet processing circuitry is to issue a command to an input output (IO) controller to cause the IO controller to write the data to the cache and the non-volatile memory. In some examples, the cache comprises one or more of: a level-0 (L0), level-1 (L1), level-2 (L2), or last level cache (LLC), and the non-volatile memory comprises one or more of: volatile memory that is part of an Asynchronous DRAM Refresh (ADR) domain, persistent memory, battery-backed memory, or a memory device whose state is determinate even if power is interrupted to the memory device. In some examples, based on receipt of a second received packet that includes a request to persist data, the packet processing circuitry is to request that data stored in a memory buffer be copied to the non-volatile memory.
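    A minimal sketch of the durability handling described above, with hypothetical packet fields and memory targets: a packet carrying both a write request and a "durable" indicator is written to the cache and to non-volatile memory, while a later "persist" packet flushes any buffered, not-yet-durable data to non-volatile memory.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    data: bytes = b""
    write_request: bool = False
    durable: bool = False
    persist_request: bool = False

@dataclass
class Platform:
    cache: list = field(default_factory=list)           # stands in for L0/L1/L2/LLC
    nvm: list = field(default_factory=list)             # ADR domain / persistent memory
    memory_buffer: list = field(default_factory=list)   # buffered, not yet durable

def process_packet(pkt: Packet, plat: Platform) -> None:
    if pkt.write_request:
        plat.cache.append(pkt.data)                      # write to cache
        if pkt.durable:
            plat.nvm.append(pkt.data)                    # also make the data durable
        else:
            plat.memory_buffer.append(pkt.data)          # durability deferred
    if pkt.persist_request:
        plat.nvm.extend(plat.memory_buffer)              # flush buffered data to NVM
        plat.memory_buffer.clear()

if __name__ == "__main__":
    plat = Platform()
    process_packet(Packet(data=b"a", write_request=True, durable=True), plat)
    process_packet(Packet(data=b"b", write_request=True, durable=False), plat)
    process_packet(Packet(persist_request=True), plat)
    print(plat.cache, plat.nvm)   # [b'a', b'b'] [b'a', b'b']
```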

    DATA PLANE SEMANTICS FOR SOFTWARE VIRTUAL SWITCHES

    Publication Number: US20200097269A1

    Publication Date: 2020-03-26

    Application Number: US16142401

    Filing Date: 2018-09-26

    Abstract: Examples may include a method of compiling a declarative language program for a virtual switch. The method includes parsing the declarative language program, the program defining a plurality of match-action tables (MATs), translating the plurality of MATs into intermediate code, and parsing a core identifier (ID) assigned to each one of the plurality of MATs. When the core IDs of the plurality of MATs are the same, the method includes connecting intermediate code of the plurality of MATs using function calls, and translating the intermediate code of the plurality of MATs into machine code to be executed by a core identified by the core IDs.
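    A minimal sketch of one step of that flow, using hypothetical data structures (the MAT class and its callable "intermediate code" stand-in are illustrative): match-action tables that share a core ID are chained with direct function calls, producing one pipeline per core that would then be lowered to machine code for that core.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MAT:
    name: str
    core_id: int
    action: Callable[[dict], dict]   # stand-in for the table's intermediate code

def connect_same_core(mats: list[MAT]) -> dict[int, Callable[[dict], dict]]:
    """Group MATs by core ID and chain each group's actions with function
    calls, yielding one pipeline callable per core."""
    by_core: dict[int, list[MAT]] = {}
    for mat in mats:
        by_core.setdefault(mat.core_id, []).append(mat)

    pipelines = {}
    for core_id, group in by_core.items():
        def pipeline(pkt: dict, group=group) -> dict:
            for mat in group:         # "function call" chaining of same-core MATs
                pkt = mat.action(pkt)
            return pkt
        pipelines[core_id] = pipeline
    return pipelines

if __name__ == "__main__":
    mats = [
        MAT("l2_lookup", core_id=0, action=lambda p: {**p, "out_port": 1}),
        MAT("acl",       core_id=0, action=lambda p: {**p, "allowed": True}),
        MAT("meter",     core_id=1, action=lambda p: {**p, "metered": True}),
    ]
    pipelines = connect_same_core(mats)
    print(pipelines[0]({"dst": "aa:bb"}))   # runs l2_lookup then acl for core 0
```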

    TECHNOLOGIES FOR PROVIDING INFORMATION TO A USER WHILE TRAVELING

    Publication Number: US20160282129A1

    Publication Date: 2016-09-29

    Application Number: US14368350

    Filing Date: 2013-12-19

    Abstract: Technologies for providing information to a user while traveling include a mobile computing device to determine network condition information associated with a route segment. The route segment may be one of a number of route segments defining at least one route from a starting location to a destination. The mobile computing device may determine a route from the starting location to the destination based on the network condition information. The mobile computing device may upload the network condition information to a crowdsourcing server. A mobile computing device may predict a future location of the device based on device context, determine a safety level for the predicted location, and notify the user if the safety level is below a threshold safety level. The device context may include location, time of day, and other data. The safety level may be determined based on predefined crime data. Other embodiments are described and claimed.
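    A minimal sketch of the two decisions mentioned above, with hypothetical inputs and scoring (the threshold, score scale, and data below are illustrative): a route is chosen by the network-condition scores of its segments, and the user is notified when the safety level for a predicted location falls below a threshold.

```python
SAFETY_THRESHOLD = 0.5   # assumed scale: 0.0 (unsafe) .. 1.0 (safe)

def best_route(routes: dict[str, list[float]]) -> str:
    """Pick the route whose segments have the best average network-condition score."""
    return max(routes, key=lambda r: sum(routes[r]) / len(routes[r]))

def maybe_notify(predicted_location: str, safety_by_location: dict[str, float]):
    """Notify if the safety level for the predicted location is below the threshold."""
    safety = safety_by_location.get(predicted_location, 1.0)
    if safety < SAFETY_THRESHOLD:
        return f"warning: low safety level ({safety:.2f}) near {predicted_location}"
    return None

if __name__ == "__main__":
    routes = {"highway": [0.9, 0.4, 0.8], "downtown": [0.6, 0.7, 0.7]}
    print(best_route(routes))                                   # "highway"
    print(maybe_notify("5th & Main", {"5th & Main": 0.3}))      # warning issued
```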


    PACKET PROCESSING LOAD BALANCER

    Publication Number: US20230082780A1

    Publication Date: 2023-03-16

    Application Number: US17471889

    Filing Date: 2021-09-10

    Abstract: Examples described herein include a device interface; a first set of one or more processing units; and a second set of one or more processing units. In some examples, the first set of one or more processing units are to perform heavy flow detection for packets of a flow and the second set of one or more processing units are to perform processing of packets of a heavy flow. In some examples, the first set of one or more processing units and the second set of one or more processing units are different. In some examples, the first set of one or more processing units is to allocate pointers to packets associated with the heavy flow to a first set of one or more queues of a load balancer, and the load balancer is to allocate the packets associated with the heavy flow to one or more processing units of the second set of one or more processing units based, at least in part, on a packet receive rate of the packets associated with the heavy flow.
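    A minimal sketch of the split described above, under assumed names and a simplified policy: a front-end unit counts packets per flow to detect heavy flows, and once a flow crosses a threshold its packets are spread over a set of worker units. Round-robin dispatch stands in here for the receive-rate-aware allocation in the abstract.

```python
from collections import Counter
from itertools import cycle

HEAVY_THRESHOLD = 3      # assumed: packet count at which a flow is treated as heavy

def detect_and_dispatch(packets, worker_ids):
    flow_counts = Counter()
    workers = cycle(worker_ids)          # load balancer across worker processing units
    assignments = []                     # (flow, worker) for heavy-flow packets
    for flow in packets:                 # each packet is identified here by its flow id
        flow_counts[flow] += 1
        if flow_counts[flow] >= HEAVY_THRESHOLD:
            assignments.append((flow, next(workers)))
        # light-flow packets stay on the front-end unit (not shown)
    return assignments

if __name__ == "__main__":
    pkts = ["f1", "f2", "f1", "f1", "f1", "f2", "f1"]
    print(detect_and_dispatch(pkts, worker_ids=["core2", "core3"]))
    # f1 becomes heavy at its 3rd packet; its later packets rotate over core2/core3
```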

    METHOD AND APPARATUS FOR SCHEDULING ACCESS TO MULTIPLE ACCELERATORS

    Publication Number: US20240403107A1

    Publication Date: 2024-12-05

    Application Number: US18795445

    Filing Date: 2024-08-06

    Inventors: Ren WANG; Yifan YUAN

    Abstract: Methods, apparatus, and computer programs are disclosed to schedule access to multiple accelerators. In one embodiment, a method is disclosed to perform: receiving a first request to process data for a first application by a first accelerator of a plurality of accelerators of a computing system, an accelerator of the plurality of accelerators being dedicated to one or more respective specialized computations of the computing system for data processing; scheduling resources for the first request based on the first request and a second request to process data for a second application by a second accelerator of the plurality of accelerators, the first and second requests having one or more priority indications indicating priority between the first and second requests; and processing the data for the first application using the resources as scheduled responsive to the first request.
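    A minimal sketch of priority-based scheduling across accelerators, with hypothetical request fields and policy: each request names the accelerator it wants and carries a priority indication, and resources are granted in priority order with one outstanding grant per accelerator per scheduling pass.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Request:
    priority: int                          # lower value = higher priority
    app: str = field(compare=False)
    accelerator: str = field(compare=False)

def schedule(requests: list[Request]) -> list[tuple[str, str]]:
    """Grant accelerators to requests in priority order; a request whose
    accelerator is already busy waits for a later scheduling pass."""
    heap = list(requests)
    heapq.heapify(heap)                    # orders requests by priority
    busy: set[str] = set()
    grants = []
    while heap:
        req = heapq.heappop(heap)
        if req.accelerator in busy:
            continue                       # deferred to a later pass (not shown)
        busy.add(req.accelerator)
        grants.append((req.app, req.accelerator))
    return grants

if __name__ == "__main__":
    reqs = [
        Request(priority=1, app="app-A", accelerator="acc0"),
        Request(priority=0, app="app-B", accelerator="acc1"),
        Request(priority=2, app="app-C", accelerator="acc0"),
    ]
    print(schedule(reqs))   # app-B gets acc1, app-A gets acc0; app-C waits
```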
