MITIGATING POOLED MEMORY CACHE MISS LATENCY WITH CACHE MISS FAULTS AND TRANSACTION ABORTS

    Publication No.: US20210318961A1

    Publication Date: 2021-10-14

    Application No.: US17356335

    Filing Date: 2021-06-23

    Abstract: Methods and apparatus for mitigating pooled memory cache miss latency with cache miss faults and transaction aborts. A compute platform coupled to one or more tiers of memory, such as remote pooled memory in a disaggregated environment, executes memory transactions to access objects that are stored in the one or more tiers. A determination is made as to whether a copy of the object is in a local cache on the platform; if it is, the object is accessed from the local cache. If the object is not in the local cache, a transaction abort may be generated if enabled for the transactions. Optionally, a cache miss page fault is generated if the object is in a cacheable region of a memory tier and the transaction abort is not enabled. Various mechanisms are provided to determine what to do in response to a cache miss page fault, such as determining addresses for cache lines to prefetch from a memory tier storing the object(s), determining how much data to prefetch, and determining whether to perform a bulk transfer.
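The miss-handling decision the abstract describes can be sketched as a small decision function. This is a minimal model, not the patented implementation; the action names, address ranges, and cache representation are all hypothetical.

```python
from enum import Enum, auto

class MissAction(Enum):
    HIT = auto()              # object served from the local cache
    TXN_ABORT = auto()        # transaction abort generated on the miss
    CACHE_MISS_FAULT = auto() # page fault handler may prefetch from the tier
    DIRECT_ACCESS = auto()    # non-cacheable region: access the memory tier directly

def handle_access(obj_addr, local_cache, abort_enabled, cacheable_regions):
    """Decide how to service one memory-transaction access (hypothetical model)."""
    if obj_addr in local_cache:
        return MissAction.HIT
    if abort_enabled:
        return MissAction.TXN_ABORT
    # Only addresses inside a cacheable region raise a cache miss page fault.
    for lo, hi in cacheable_regions:
        if lo <= obj_addr < hi:
            return MissAction.CACHE_MISS_FAULT
    return MissAction.DIRECT_ACCESS
```

The fault handler would then choose prefetch addresses, prefetch size, and whether to do a bulk transfer, which this sketch leaves out.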

    APPLICATION AWARE MEMORY PATROL SCRUBBING TECHNIQUES

    Publication No.: US20210318929A1

    Publication Date: 2021-10-14

    Application No.: US17356338

    Filing Date: 2021-06-23

    Abstract: Methods and apparatus for application aware memory patrol scrubbing techniques. The method may be performed on a computing system including one or more memory devices and running multiple applications with associated processes. The computing system may be implemented in a multi-tenant environment, where virtual instances of physical resources provided by the system are allocated to separate tenants, such as through virtualization schemes employing virtual machines or containers. Quality of Service (QoS) scrubbing logic and novel interfaces are provided to enable memory scrubbing QoS policies to be applied at the tenant, application, and/or process level. These QoS policies may include memory ranges for which specific policies are applied, as well as bandwidth allocations for performing scrubbing operations. A pattern generator is also provided for generating scrubbing patterns based on observed or predicted memory access patterns and/or predefined patterns.
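A per-tenant scrubbing policy of the kind described (memory ranges plus a bandwidth allocation) can be modeled as a simple record; the field names and the patrol-time estimate below are illustrative assumptions, not the patent's interface.

```python
from dataclasses import dataclass

@dataclass
class ScrubPolicy:
    tenant: str
    ranges: list               # [(start_addr, end_addr)] covered by this policy
    bandwidth_bytes_per_s: int # scrub bandwidth allocated to the tenant

def scrub_time_seconds(policy):
    """Estimate how long one patrol pass over the tenant's ranges takes
    at the allocated bandwidth (hypothetical calculation)."""
    total_bytes = sum(end - start for start, end in policy.ranges)
    return total_bytes / policy.bandwidth_bytes_per_s
```

A scheduler could use such estimates to verify that every tenant's ranges are scrubbed within its required interval before admitting a new policy.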

    HIGH-AVAILABILITY MEMORY REPLICATION IN ONE OR MORE NETWORK DEVICES

    Publication No.: US20210294702A1

    Publication Date: 2021-09-23

    Application No.: US17339164

    Filing Date: 2021-06-04

    Abstract: Examples described herein relate to a switch device. The switch device can perform replication of content stored in a source memory region to two or more memory regions available from two or more nodes, wherein the two or more memory regions available from the two or more nodes are identified to the circuitry for use to store replicated content. The two or more nodes can be on different racks than that of a memory device that stores the source memory region. The switch device can select the two or more memory regions available from the two or more nodes based, at least in part, on resiliency criteria associated with the two or more nodes.
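The selection step can be sketched as ranking candidate regions by a resiliency score while excluding nodes on the source's rack. The candidate schema and scoring are assumptions for illustration only.

```python
def select_replica_regions(candidates, source_rack, count=2):
    """Pick target memory regions for replication, preferring nodes on racks
    other than the source's and with higher resiliency scores (hypothetical
    criteria; each candidate is {"node", "rack", "resiliency"})."""
    eligible = [c for c in candidates if c["rack"] != source_rack]
    ranked = sorted(eligible, key=lambda c: c["resiliency"], reverse=True)
    return [c["node"] for c in ranked[:count]]
```

In practice the switch would weigh additional criteria (capacity, link health) alongside rack diversity; this sketch keeps only the two the abstract names.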

    RESOURCE SELECTION BASED IN PART ON WORKLOAD

    Publication No.: US20210271517A1

    Publication Date: 2021-09-02

    Application No.: US17324525

    Filing Date: 2021-05-19

    Abstract: Examples described herein relate to a system comprising at least one processor and circuitry to: determine multiple configurations of hardware resources to perform a workload associated with a workload request in a subsequent stage based on a pre-processing operation associated with the workload request and at least one service level agreement (SLA) parameter associated with the workload request. In some examples, an executable binary is associated with the workload request and execution of the executable binary performs the pre-processing operation. In some examples, the circuitry is to store the multiple configurations of hardware resources to perform a workload associated with the workload request in a subsequent stage, wherein the multiple configurations of hardware resources are available for access by one or more accelerator devices to perform the workload.
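One way to picture the configuration step is filtering candidate hardware configurations against an SLA latency bound using a work estimate produced by the pre-processing stage. The throughput field, SLA parameter, and cost model here are all hypothetical stand-ins.

```python
def select_configs(configurations, sla_latency_ms, estimated_work_units):
    """Return names of hardware configurations predicted to finish the
    workload within the SLA latency (hypothetical model: each configuration
    is {"name", "throughput_units_per_ms"})."""
    feasible = []
    for cfg in configurations:
        predicted_ms = estimated_work_units / cfg["throughput_units_per_ms"]
        if predicted_ms <= sla_latency_ms:
            feasible.append(cfg["name"])
    return feasible
```

The resulting set could then be stored, as the abstract describes, for accelerator devices to consult when the subsequent stage runs.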

    PRIORITY-BASED BATTERY ALLOCATION FOR RESOURCES DURING POWER OUTAGE

    Publication No.: US20210034130A1

    Publication Date: 2021-02-04

    Application No.: US16524868

    Filing Date: 2019-07-29

    Abstract: Examples described herein relate to management of battery-use by one or more computing resources in the event of a power outage. Data used by one or more computing resources can be backed-up using battery power. Battery power is allocated to data back-up operations based at least on one or more of: criticality level of data, priority of an application that processes the data, or priority level of resource. The computing resource can back-up data to a persistent storage media. The computing resource can store a log of data that is backed-up or not backed-up. The log can be used by the computing resource to access the backed-up data for continuing to process the data and to determine what data is not available for processing.
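The allocation and logging behavior can be sketched as a greedy pass over back-up jobs in priority order, recording which data was and was not backed up. Job fields and the lower-number-is-higher-priority convention are assumptions for the sketch.

```python
def allocate_battery(budget_wh, backup_jobs):
    """Grant battery energy to back-up jobs in priority order (priority 1 is
    highest in this hypothetical scheme); jobs that do not fit the remaining
    budget are logged as skipped rather than partially run."""
    log = {"backed_up": [], "skipped": []}
    for job in sorted(backup_jobs, key=lambda j: j["priority"]):
        if job["energy_wh"] <= budget_wh:
            budget_wh -= job["energy_wh"]
            log["backed_up"].append(job["name"])
        else:
            log["skipped"].append(job["name"])
    return log
```

After power is restored, a resource could consult such a log, as the abstract notes, to find which data is available to resume processing and which is not.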

    PERFORMANCE MONITORING FOR SHORT-LIVED FUNCTIONS

    Publication No.: US20200241999A1

    Publication Date: 2020-07-30

    Application No.: US16829935

    Filing Date: 2020-03-25

    Abstract: Examples described herein relate to an apparatus that includes a memory and at least one processor where the at least one processor is to receive configuration to gather performance data for a function from one or more platforms and during execution of the function, collect performance data for the function and store the performance data after termination of execution of the function. Some examples include an interface coupled to the at least one processor and the interface is to receive one or more of: an identifier of a function, resources to be tracked as part of function execution, list of devices to be tracked as part of function execution, type of monitoring of function execution, or meta-data to identify when the function is complete. Performance data can be accessed to determine performance of multiple executions of the short-lived function.
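The key property described (performance data collected during execution and retained after the short-lived function terminates) can be modeled with a monitor whose records outlive each invocation. The class and its fields are illustrative, not the patented apparatus.

```python
import time

class FunctionMonitor:
    """Collect performance data while a short-lived function runs and keep it
    after the function terminates (simplified, hypothetical model)."""
    def __init__(self):
        self.records = {}  # persists across function invocations

    def run(self, func_id, fn, *args):
        start = time.perf_counter()
        result = fn(*args)          # the short-lived function executes here
        self.records[func_id] = {
            "duration_s": time.perf_counter() - start,
            "completed": True,      # metadata identifying completion
        }
        return result
```

Because the records dictionary survives each call, the stored data can later be aggregated across multiple executions of the same function, as the abstract describes.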

    SWITCH-BASED DATA ANONYMIZATION
    Invention Application

    Publication No.: US20200213280A1

    Publication Date: 2020-07-02

    Application No.: US16815389

    Filing Date: 2020-03-11

    Abstract: Examples may include a packet processor (such as a switch) including accelerator circuitry, such as at least one field programmable gate array (FPGA) or artificial intelligence (AI) core, and a data anonymizer. The data anonymizer is configured to identify a type of a packet received by the packet processor; get a tenant key based at least in part on the packet type or a tenant identifier (ID); decrypt the packet data using the tenant key; provide the decrypted packet data to a selected bitstream programmed into the accelerator circuitry; execute the selected bitstream in the accelerator circuitry to anonymize the packet data; encrypt the anonymized packet data using the tenant key; and transmit the packet including the anonymized packet data according to a mask.
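The decrypt-anonymize-re-encrypt flow can be sketched end to end in software. Everything here is a stand-in: a trivial XOR takes the place of the real cipher, and a per-type transform function takes the place of the FPGA bitstream.

```python
def process_packet(packet, tenant_keys, anonymizers):
    """Decrypt payload with the tenant key, anonymize it via a transform
    selected by packet type (stand-in for the selected FPGA bitstream), then
    re-encrypt with the same key. XOR is a placeholder cipher only."""
    key = tenant_keys[packet["tenant_id"]]
    xor = lambda data: bytes(b ^ key for b in data)  # decrypt == encrypt for XOR
    plain = xor(packet["payload"])                   # decrypt
    anonymized = anonymizers[packet["type"]](plain)  # type-selected transform
    return {**packet, "payload": xor(anonymized)}    # re-encrypt
```

A real deployment would use an authenticated cipher keyed per tenant and dispatch to hardware bitstreams; only the pipeline shape is carried over from the abstract.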
