-
Publication No.: US20210318961A1
Publication Date: 2021-10-14
Application No.: US17356335
Filing Date: 2021-06-23
Applicant: Intel Corporation
Inventor: Scott D. PETERSON , Sujoy SEN , Francesc GUIM BERNAT
IPC: G06F12/0842 , G06F12/0862
Abstract: Methods and apparatus for mitigating pooled memory cache miss latency with cache miss faults and transaction aborts. A compute platform coupled to one or more tiers of memory, such as remote pooled memory in a disaggregated environment, executes memory transactions to access objects that are stored in the one or more tiers. A determination is made as to whether a copy of the object is in a local cache on the platform; if it is, the object is accessed from the local cache. If the object is not in the local cache, a transaction abort may be generated if enabled for the transactions. Optionally, a cache miss page fault is generated if the object is in a cacheable region of a memory tier and the transaction abort is not enabled. Various mechanisms are provided to determine what to do in response to a cache miss page fault, such as determining addresses for cache lines to prefetch from a memory tier storing the object(s), determining how much data to prefetch, and determining whether to perform a bulk transfer.
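The decision flow the abstract describes (cache hit, transaction abort, or cache-miss page fault with a prefetch plan) can be sketched as follows. This is a minimal illustration; the function, the 4-line prefetch window, and the data structures are assumptions, not the patented implementation.

```python
# Illustrative sketch of the cache-miss handling flow; all names and the
# fixed 4-line prefetch depth are assumptions for illustration.
CACHE_LINE = 64

def access_object(addr, local_cache, cacheable_regions, abort_enabled):
    """Return (action, payload) for one memory-transaction access."""
    if addr in local_cache:
        return ("cache_hit", local_cache[addr])       # serve from local cache
    if abort_enabled:
        return ("transaction_abort", None)            # abort the transaction
    if any(lo <= addr < hi for lo, hi in cacheable_regions):
        # Cache-miss page fault: the handler chooses cache lines to prefetch
        # from the memory tier that stores the object.
        base = addr - (addr % CACHE_LINE)
        prefetch = [base + i * CACHE_LINE for i in range(4)]
        return ("cache_miss_fault", prefetch)
    return ("uncacheable", None)
```

A fault handler receiving the `cache_miss_fault` result could then decide between line-granular prefetch and a bulk transfer based on the object's size.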
-
Publication No.: US20210318929A1
Publication Date: 2021-10-14
Application No.: US17356338
Filing Date: 2021-06-23
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR , Mark A. SCHMISSEUR , Thomas WILLHALM , Marcos E. CARRANZA
Abstract: Methods and apparatus for application-aware memory patrol scrubbing techniques. The method may be performed on a computing system including one or more memory devices and running multiple applications with associated processes. The computing system may be implemented in a multi-tenant environment, where virtual instances of physical resources provided by the system are allocated to separate tenants, such as through virtualization schemes employing virtual machines or containers. Quality of Service (QoS) scrubbing logic and novel interfaces are provided to enable memory scrubbing QoS policies to be applied at the tenant, application, and/or process level. These QoS policies may include memory ranges for which specific policies are applied, as well as bandwidth allocations for performing scrubbing operations. A pattern generator is also provided for generating scrubbing patterns based on observed or predicted memory access patterns and/or predefined patterns.
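A per-tenant scrub-bandwidth policy of the kind the abstract mentions (a memory range plus a bandwidth allocation) could be scheduled per interval roughly as below. The policy tuple shape and line size are assumptions for illustration, not the patent's interface.

```python
# Hedged sketch of QoS-governed patrol scrubbing: each tenant policy is an
# assumed (range_start, range_end, bytes_per_interval_budget) tuple.
def plan_scrub_interval(policies, line_size=64):
    """For each tenant policy, return the line addresses scrubbed this interval."""
    plan = {}
    for tenant, (start, end, budget) in policies.items():
        # Scrub no more lines than either the bandwidth budget or the
        # tenant's memory range allows.
        lines = min(budget // line_size, (end - start) // line_size)
        plan[tenant] = [start + i * line_size for i in range(lines)]
    return plan
```

A real scheduler would also rotate the start offset each interval so the whole range is eventually covered.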
-
Publication No.: US20210294702A1
Publication Date: 2021-09-23
Application No.: US17339164
Filing Date: 2021-06-04
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR
IPC: G06F11/14 , G06F15/173 , G06F3/06
Abstract: Examples described herein relate to a switch device. The switch device can perform replication of content stored in a source memory region to two or more memory regions available from two or more nodes, wherein the two or more memory regions available from two or more nodes are identified to the circuitry for use to store replicated content. The two or more nodes can be on different racks than that of a memory device that stores the source memory region. The switch device can select the two or more memory regions available from two or more nodes based, at least in part, on resiliency criteria associated with the two or more nodes.
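One plausible reading of the resiliency-aware selection is: prefer candidate regions on racks other than the source's, then rank by a resiliency score. A minimal sketch under those assumptions (the field names and scoring are illustrative, not from the patent):

```python
# Illustrative resiliency-aware replica-target selection; 'rack' and
# 'resiliency' fields are assumed candidate attributes.
def select_replica_regions(candidates, source_rack, count=2):
    """candidates: list of dicts with 'node', 'rack', 'resiliency' keys."""
    ranked = sorted(
        candidates,
        key=lambda c: (c["rack"] != source_rack, c["resiliency"]),
        reverse=True,  # off-rack nodes first, then highest resiliency
    )
    return [c["node"] for c in ranked[:count]]
```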
-
Publication No.: US20210271517A1
Publication Date: 2021-09-02
Application No.: US17324525
Filing Date: 2021-05-19
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT
Abstract: Examples described herein relate to a system comprising at least one processor and circuitry to: determine multiple configurations of hardware resources to perform a workload associated with a workload request in a subsequent stage based on a pre-processing operation associated with the workload request and at least one service level agreement (SLA) parameter associated with the workload request. In some examples, an executable binary is associated with the workload request and execution of the executable binary performs the pre-processing operation. In some examples, the circuitry is to store the multiple configurations of hardware resources to perform a workload associated with the workload request in a subsequent stage, wherein the multiple configurations of hardware resources are available for access by one or more accelerator devices to perform the workload.
-
Publication No.: US20210209469A1
Publication Date: 2021-07-08
Application No.: US17208861
Filing Date: 2021-03-22
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Suraj PRABHAKARAN , Kshitij A. DOSHI , Da-Ming CHIANG
Abstract: Examples include techniques to manage training or trained models for deep learning applications. Examples include routing commands to configure a training model to be implemented by a training module or configure a trained model to be implemented by an inference module. The commands are routed via an out-of-band (OOB) link, while training data for the training models or input data for the trained models is routed via in-band links.
-
Publication No.: US20210034130A1
Publication Date: 2021-02-04
Application No.: US16524868
Filing Date: 2019-07-29
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Suraj PRABHAKARAN , Karthik KUMAR , Uzair QURESHI , Timothy VERRALL
Abstract: Examples described herein relate to management of battery use by one or more computing resources in the event of a power outage. Data used by one or more computing resources can be backed up using battery power. Battery power is allocated to data back-up operations based at least on one or more of: criticality level of data, priority of an application that processes the data, or priority level of the resource. The computing resource can back up data to a persistent storage media. The computing resource can store a log of data that is backed up or not backed up. The log can be used by the computing resource to access the backed-up data for continuing to process the data and to determine what data is not available for processing.
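The priority-ordered allocation of a finite battery budget, plus the backed-up/not-backed-up log, can be sketched as below. Treating battery capacity as a single energy budget and each back-up as a (name, criticality, cost) task is an assumption for illustration.

```python
# Minimal sketch: spend a battery energy budget on back-up tasks in
# descending criticality order, logging what was and was not backed up.
def run_backups(tasks, battery_budget):
    """tasks: list of (name, criticality, energy_cost) tuples."""
    log = {"backed_up": [], "skipped": []}
    for name, _, cost in sorted(tasks, key=lambda t: -t[1]):
        if cost <= battery_budget:
            battery_budget -= cost
            log["backed_up"].append(name)   # persisted before power loss
        else:
            log["skipped"].append(name)     # unavailable after restart
    return log
```

On recovery, the log tells the resource which data it can resume processing and which is gone.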
-
Publication No.: US20210004685A1
Publication Date: 2021-01-07
Application No.: US17025643
Filing Date: 2020-09-18
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Suraj PRABHAKARAN , Kshitij A. DOSHI , Da-Ming CHIANG
Abstract: Examples include techniques to manage training or trained models for deep learning applications. Examples include routing commands to configure a training model to be implemented by a training module or configure a trained model to be implemented by an inference module. The commands are routed via an out-of-band (OOB) link, while training data for the training models or input data for the trained models is routed via in-band links.
-
Publication No.: US20200241999A1
Publication Date: 2020-07-30
Application No.: US16829935
Filing Date: 2020-03-25
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Steven BRISCOE , Karthik KUMAR , Alexander BACHMUTSKY , Timothy VERRALL
IPC: G06F11/34
Abstract: Examples described herein relate to an apparatus that includes a memory and at least one processor, where the at least one processor is to receive a configuration to gather performance data for a function from one or more platforms and, during execution of the function, collect performance data for the function and store the performance data after termination of execution of the function. Some examples include an interface coupled to the at least one processor, where the interface is to receive one or more of: an identifier of a function, resources to be tracked as part of function execution, a list of devices to be tracked as part of function execution, a type of monitoring of function execution, or metadata to identify when the function is complete. Performance data can be accessed to determine performance of multiple executions of the short-lived function.
-
Publication No.: US20200213280A1
Publication Date: 2020-07-02
Application No.: US16815389
Filing Date: 2020-03-11
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR , Alexander BACHMUTSKY
Abstract: Examples may include a packet processor (such as a switch) including accelerator circuitry, such as at least one field programmable gate array (FPGA) or artificial intelligence (AI) core, and a data anonymizer. The data anonymizer is configured to identify a type of a packet received by the packet processor; get a tenant key based at least in part on the packet type or a tenant identifier (ID); decrypt the packet data using the tenant key; provide the decrypted packet data to a selected bitstream programmed into the accelerator circuitry; execute the selected bitstream in the accelerator circuitry to anonymize the packet data; encrypt the anonymized packet data using the tenant key; and transmit the packet including the anonymized packet data according to a mask.
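The decrypt / anonymize / re-encrypt pipeline can be illustrated with stand-ins: XOR in place of tenant-key encryption and a byte mask in place of the FPGA bitstream's anonymization logic. Everything here is a toy assumption, not Intel's implementation.

```python
# Toy pipeline sketch: XOR stands in for tenant-key crypto and a keep/zero
# byte mask stands in for the accelerator bitstream's anonymization.
def xor_crypt(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

def anonymize_packet(encrypted: bytes, tenant_keys: dict, tenant_id: str,
                     mask: bytes) -> bytes:
    key = tenant_keys[tenant_id]            # tenant key lookup by tenant ID
    clear = xor_crypt(encrypted, key)       # decrypt with tenant key
    anon = bytes(c if m else 0 for c, m in zip(clear, mask))  # anonymize
    return xor_crypt(anon, key)             # re-encrypt before transmit
```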
-
Publication No.: US20200004558A1
Publication Date: 2020-01-02
Application No.: US16465386
Filing Date: 2017-06-28
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Kshitij A. DOSHI , Daniel RIVAS BARRAGAN , Alejandro DURAN GONZALEZ
Abstract: Examples may include techniques for collective operations in a distributed architecture. A collective operation request message from a computing node causes collective operations at one or more target computing nodes communicatively coupled with the computing node through a network switch. The collective operation request message also causes the network switch to perform collective operations on collective operation results received from the one or more target computing nodes.
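The pattern the abstract describes (a request fanning out to target nodes, with the switch combining their partial results) resembles an in-network reduction; a minimal sketch under that assumption, with all names illustrative:

```python
# Hedged sketch of a switch-side collective: fan the request out to target
# nodes (modeled as callables), then combine partial results at the "switch".
import operator

def switch_collective(request_value, target_fns, combine=operator.add):
    partials = [fn(request_value) for fn in target_fns]  # per-node operation
    result = partials[0]
    for p in partials[1:]:
        result = combine(result, p)  # switch-side reduction of partials
    return result
```

Combining at the switch means the initiating node receives one reduced result instead of one message per target.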