-
1.
Publication number: US20190042488A1
Publication date: 2019-02-07
Application number: US15857337
Application date: 2017-12-28
Applicant: Intel Corporation
Inventor: FRANCESC GUIM BERNAT , MARK A. SCHMISSEUR , KARTHIK KUMAR , THOMAS WILLHALM
Abstract: Technology for a memory controller is described. The memory controller can receive a request from a data consumer node in a data center for training data. The training data indicated in the request can correspond to a model identifier (ID) of a model that runs on the data consumer node. The memory controller can identify a data provider node in the data center that stores the training data that is requested by the data consumer node. The data provider node can be identified using a tracking table that is maintained at the memory controller. The memory controller can send an instruction to the data provider node that instructs the data provider node to send the training data to the data consumer node to enable training of the model that runs on the data consumer node.
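As a rough sketch of the tracking-table routing described in this abstract, the Python model below keeps a per-model-ID table of provider nodes and forwards each consumer request accordingly; the class and method names (MemoryController, ProviderNode, send_training_data, and so on) are illustrative assumptions, not the patent's implementation.

```python
# Behavioral sketch only: a tracking table keyed by model ID routes training
# data requests from a consumer node to the provider node that stores the data.

class MemoryController:
    def __init__(self):
        # tracking table: model ID -> data provider node holding the training data
        self.tracking_table = {}

    def register_provider(self, model_id, provider_node):
        self.tracking_table[model_id] = provider_node

    def handle_request(self, model_id, consumer_node):
        # identify the data provider node for the requested model ID
        provider_node = self.tracking_table.get(model_id)
        if provider_node is None:
            raise KeyError(f"no provider tracked for model {model_id}")
        # instruct the provider to send the training data to the consumer
        provider_node.send_training_data(model_id, consumer_node)


class ProviderNode:
    def __init__(self, data):
        self.data = data  # model ID -> training data

    def send_training_data(self, model_id, consumer_node):
        consumer_node.receive(model_id, self.data[model_id])


class ConsumerNode:
    def receive(self, model_id, training_data):
        print(f"training model {model_id} on {training_data}")


if __name__ == "__main__":
    controller = MemoryController()
    provider = ProviderNode({"resnet": [1, 2, 3]})
    controller.register_provider("resnet", provider)
    controller.handle_request("resnet", ConsumerNode())
```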
-
2.
Publication number: US20190034763A1
Publication date: 2019-01-31
Application number: US15855891
Application date: 2017-12-27
Applicant: Intel Corporation
Inventor: FRANCESC GUIM BERNAT , KARTHIK KUMAR , MARK A. SCHMISSEUR , THOMAS WILLHALM
Abstract: Technology for a memory controller is described. The memory controller can receive a request to store training data. The request can include a model identifier (ID) that identifies a model that is associated with the training data. The memory controller can send a write request to store the training data associated with the model ID in a memory region in a pooled memory that is allocated for the model ID. The training data that is stored in the memory region in the pooled memory can be addressable based on the model ID.
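A minimal sketch of the model-ID-addressable pooled memory follows, assuming a simple dictionary of per-model regions; PooledMemory and the region layout are hypothetical names chosen for illustration.

```python
# Behavioral sketch: pooled memory whose regions are allocated per model ID,
# so stored training data is addressed by that ID rather than by raw offsets.

class PooledMemory:
    def __init__(self):
        self.regions = {}  # model ID -> list standing in for the allocated region

    def write(self, model_id, training_data):
        # allocate a region for the model ID on first use, then append to it
        region = self.regions.setdefault(model_id, [])
        region.append(training_data)

    def read(self, model_id):
        # the training data is addressable based on the model ID
        return self.regions.get(model_id, [])


if __name__ == "__main__":
    pool = PooledMemory()
    pool.write("model-42", b"\x01\x02")
    pool.write("model-42", b"\x03\x04")
    print(pool.read("model-42"))
```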
-
3.
Publication number: US20200285420A1
Publication date: 2020-09-10
Application number: US16882833
Application date: 2020-05-26
Applicant: Intel Corporation
Inventor: FRANCESC GUIM BERNAT , KARTHIK KUMAR , DONALD FAW , THOMAS WILLHALM
Abstract: In one embodiment, an apparatus includes: a first queue to store requests that are guaranteed to be delivered to a persistent memory; a second queue to store requests that are not guaranteed to be delivered to the persistent memory; a control circuit to receive the requests and to direct the requests to the first queue or the second queue; and an egress circuit coupled to the first queue to deliver the requests stored in the first queue to the persistent memory even when a power failure occurs. Other embodiments are described and claimed.
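The two-queue split can be sketched as below, assuming the guaranteed queue is the only one drained when power fails; the class and queue names are hypothetical, and the energy-backed flush is only modeled, not implemented.

```python
# Behavioral sketch: requests marked as guaranteed go to a queue that is still
# drained to persistent memory on power failure; all others are best effort.

from collections import deque

class PersistenceController:
    def __init__(self):
        self.guaranteed_queue = deque()   # drained even on power failure
        self.best_effort_queue = deque()  # may be lost on power failure

    def submit(self, request, guaranteed):
        # control logic directs each request to the appropriate queue
        (self.guaranteed_queue if guaranteed else self.best_effort_queue).append(request)

    def on_power_failure(self, persistent_memory):
        # egress path: only the guaranteed queue is flushed with reserved energy
        while self.guaranteed_queue:
            persistent_memory.append(self.guaranteed_queue.popleft())


if __name__ == "__main__":
    pmem = []
    ctrl = PersistenceController()
    ctrl.submit("write A", guaranteed=True)
    ctrl.submit("write B", guaranteed=False)
    ctrl.on_power_failure(pmem)
    print(pmem)  # ['write A'] -- only the guaranteed request reached persistence
```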
-
4.
Publication number: US20190042138A1
Publication date: 2019-02-07
Application number: US15921346
Application date: 2018-03-14
Applicant: INTEL CORPORATION
Inventor: FRANCESC GUIM BERNAT , KARTHIK KUMAR , THOMAS WILLHALM , MARK A. SCHMISSEUR
Abstract: Devices and systems for distributing data across disaggregated memory resources are disclosed and described. An acceleration controller device can include an adaptive data migration engine (ADME) configured to communicatively couple to a fabric interconnect, and further configured to monitor application data performance metrics at the plurality of disaggregated memory pools for a plurality of applications executing on the plurality of compute resources, select a current application having a current application data performance metric, determine an alternate memory pool from the plurality of disaggregated memory pools estimated to increase application data performance relative to the current application data performance metric, and migrate the data from the current memory pool to the alternate memory pool.
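A hedged sketch of the migration decision follows, assuming per-pool latency estimates stand in for the application data performance metric; the pool names, scoring rule, and migrate helper are illustrative assumptions rather than the ADME's actual logic.

```python
# Behavioral sketch: pick the memory pool estimated to improve data performance
# for an application, then migrate its data from the current pool to that one.

def pick_alternate_pool(current_pool, current_latency_ns, estimated_latency):
    # choose the pool with the lowest estimated latency, if it beats the current pool
    best = min(estimated_latency, key=lambda p: estimated_latency[p])
    if best != current_pool and estimated_latency[best] < current_latency_ns:
        return best
    return None

def migrate(data_by_pool, app, src, dst):
    data_by_pool[dst][app] = data_by_pool[src].pop(app)

if __name__ == "__main__":
    # illustrative per-pool latency estimates (ns) for one application
    estimated_latency = {"local-DDR": 90, "remote-pool-1": 250, "remote-pool-2": 140}
    data_by_pool = {"local-DDR": {}, "remote-pool-1": {"app-7": "tensor shards"}, "remote-pool-2": {}}

    target = pick_alternate_pool("remote-pool-1", 250, estimated_latency)
    if target:
        migrate(data_by_pool, "app-7", "remote-pool-1", target)
    print(data_by_pool)
```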
-
5.
Publication number: US20190034829A1
Publication date: 2019-01-31
Application number: US15857313
Application date: 2017-12-28
Applicant: Intel Corporation
Inventor: FRANCESC GUIM BERNAT , MARK A. SCHMISSEUR , KARTHIK KUMAR , THOMAS WILLHALM
Abstract: Technology for a data filter device operable to filter training data is described. The data filter device can receive training data from a data provider. The training data can be provided with corresponding metadata that indicates a model stored in a data store that is associated with the training data. The data filter device can identify a filter that is associated with the model stored in the data store. The data filter device can apply the filter to the training data received from the data provider to obtain filtered training data. The data filter device can provide the filtered training data to the model stored in the data store, wherein the filtered training data is used to train the model.
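The filter lookup and application can be sketched as follows, assuming filters are simple predicates registered per model ID; DataFilterDevice, register_filter, and ingest are hypothetical names and not taken from the patent.

```python
# Behavioral sketch: metadata names the model, the device resolves the filter
# registered for that model, applies it, and forwards the filtered training
# data to the model stored in the data store.

class DataFilterDevice:
    def __init__(self):
        self.filters = {}  # model ID -> filter predicate

    def register_filter(self, model_id, filter_fn):
        self.filters[model_id] = filter_fn

    def ingest(self, training_data, metadata, model_store):
        model_id = metadata["model"]
        # keep every sample if no filter is registered for this model
        filter_fn = self.filters.get(model_id, lambda sample: True)
        filtered = [x for x in training_data if filter_fn(x)]
        model_store[model_id].train(filtered)


class Model:
    def train(self, batch):
        print("training on", batch)


if __name__ == "__main__":
    device = DataFilterDevice()
    device.register_filter("m1", lambda sample: sample >= 0)  # drop negative samples
    device.ingest([-3, 1, 4, -1], {"model": "m1"}, {"m1": Model()})
```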
-
6.
Publication number: US20180373671A1
Publication date: 2018-12-27
Application number: US15634128
Application date: 2017-06-27
Applicant: INTEL CORPORATION
Inventor: FRANCESC GUIM BERNAT , KARTHIK KUMAR , NICOLAE POPOVICI , THOMAS WILLHALM
IPC: G06F15/173 , G06F12/0831 , G06F9/52 , G06F12/0813
Abstract: Various embodiments are generally directed to an apparatus, method and other techniques to receive a transaction request to perform a transaction with the memory, the transaction request including a synchronization indication to indicate utilization of transaction synchronization to perform the transaction. Embodiments may include sending a request to a caching agent to perform the transaction, receiving a response from the caching agent, the response to indicate whether the transaction conflicts or does not conflict with another transaction, and performing the transaction if the response indicates the transaction does not conflict with the other transaction, or delaying the transaction for a period of time if the response indicates the transaction does conflict with the other transaction.
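A behavioral sketch of the conflict check is given below, assuming the caching agent tracks the addresses claimed by in-flight transactions and that a conflicting transaction is simply retried after a fixed delay; the names and the retry policy are assumptions for illustration.

```python
# Behavioral sketch: a caching agent answers whether a synchronized transaction
# conflicts with another; on conflict the transaction is delayed and retried.

import time

class CachingAgent:
    def __init__(self):
        self.in_flight = set()  # addresses owned by active transactions

    def try_claim(self, addresses):
        if self.in_flight & addresses:
            return False          # conflicts with another transaction
        self.in_flight |= addresses
        return True

    def release(self, addresses):
        self.in_flight -= addresses


def perform_transaction(agent, addresses, op, delay_s=0.001, retries=3):
    for _ in range(retries):
        if agent.try_claim(addresses):
            try:
                return op()       # no conflict: perform the transaction
            finally:
                agent.release(addresses)
        time.sleep(delay_s)       # conflict: delay for a period of time, then retry
    raise RuntimeError("transaction kept conflicting")


if __name__ == "__main__":
    agent = CachingAgent()
    print(perform_transaction(agent, {0x1000, 0x1040}, lambda: "committed"))
```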
-
7.
Publication number: US20170123980A1
Publication date: 2017-05-04
Application number: US15408324
Application date: 2017-01-17
Applicant: Intel Corporation
Inventor: THOMAS WILLHALM
IPC: G06F12/0804 , G06F12/0815 , G06F12/0875
CPC classification number: G06F12/0804 , G06F9/467 , G06F12/0811 , G06F12/0815 , G06F12/0875 , G06F2212/202 , G06F2212/452 , G06F2212/60 , G06F2212/621
Abstract: A processor is described having an interface to non-volatile random access memory and logic circuitry. The logic circuitry is to identify cache lines modified by a transaction which views the non-volatile random access memory as the transaction's persistence storage. The logic circuitry is also to identify cache lines modified by a software process other than a transaction that also views said non-volatile random access memory as persistence storage.
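One way to picture the cache-line tracking is the sketch below, which tags each dirty line with whether it was written inside a transaction or by ordinary code; the 64-byte line size, DirtyLineTracker, and record_write are illustrative assumptions, not the described logic circuitry.

```python
# Behavioral sketch: track which cache lines were modified, and whether the
# modification came from a transaction or from a non-transactional process,
# both of which treat NVRAM as their persistence storage.

CACHE_LINE = 64

def line_of(addr):
    # round an address down to the start of its cache line
    return addr & ~(CACHE_LINE - 1)

class DirtyLineTracker:
    def __init__(self):
        self.lines = {}  # cache-line address -> "transaction" or "non-transaction"

    def record_write(self, addr, in_transaction):
        self.lines[line_of(addr)] = "transaction" if in_transaction else "non-transaction"

    def lines_modified_by_transactions(self):
        return [l for l, tag in self.lines.items() if tag == "transaction"]

if __name__ == "__main__":
    t = DirtyLineTracker()
    t.record_write(0x2008, in_transaction=True)
    t.record_write(0x3010, in_transaction=False)
    print([hex(l) for l in t.lines_modified_by_transactions()])  # ['0x2000']
```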
-
8.
Publication number: US20190102315A1
Publication date: 2019-04-04
Application number: US15719618
Application date: 2017-09-29
Applicant: INTEL CORPORATION
Inventor: FRANCESC GUIM BERNAT , KARTHIK KUMAR , MARK SCHMISSEUR , THOMAS WILLHALM
IPC: G06F12/10
CPC classification number: G06F12/10 , G06F9/3004 , G06F12/02 , G06F12/0207 , G06F12/0223 , G06F2212/1016 , G06F2212/657
Abstract: Various embodiments are generally directed to an apparatus, method and other techniques to receive a request from a core, the request associated with a memory operation to read or write data, and the request comprising a first address and an offset, the first address to identify a memory location of a memory. Embodiments include performing a first iteration of a memory indirection operation comprising reading the memory at the memory location to determine a second address based on the first address, and determining a memory resource based on the second address and the offset, the memory resource to perform the memory operation for the computing resource or perform a second iteration of the memory indirection operation.
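The indirection walk can be modeled as in the sketch below, where memory is a dictionary, integer entries are treated as pointers, and anything else terminates the chain; the resolve function and its termination rule are assumptions made purely for illustration.

```python
# Behavioral sketch: the first address names a location holding a second
# address; adding the request's offset yields either the final memory resource
# or the starting point of another iteration of the indirection operation.

def resolve(memory, first_address, offset, max_iterations=8):
    address = first_address
    for _ in range(max_iterations):
        second_address = memory[address]      # read memory at the current location
        target = second_address + offset      # resource derived from address + offset
        if isinstance(memory.get(target), int):
            address = target                  # resource is itself a pointer: iterate again
        else:
            return target                     # final resource where the read/write happens
    raise RuntimeError("indirection chain too deep")

if __name__ == "__main__":
    # 0x100 points at 0x200; slot 0x208 points at 0x300; slot 0x308 holds the data
    memory = {0x100: 0x200, 0x208: 0x300, 0x308: "payload"}
    print(hex(resolve(memory, 0x100, 8)))  # 0x308
```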
-
9.
Publication number: US20190065200A1
Publication date: 2019-02-28
Application number: US16051316
Application date: 2018-07-31
Applicant: INTEL CORPORATION
Inventor: ELMOUSTAPHA OULD-AHMED-VALL , THOMAS WILLHALM , TRACY GARRETT DRYSDALE
IPC: G06F9/30
CPC classification number: G06F9/30145 , G06F9/3001 , G06F9/30014 , G06F9/30018 , G06F9/30036 , G06F9/30105 , G06F9/30109 , G06F9/30112 , G06F9/3013 , H04N19/42
Abstract: Systems, apparatuses, and methods for performing delta decoding on packed data elements of a source and storing the results in packed data elements of a destination using a single packed delta decode instruction are described. A processor may include a decoder to decode an instruction, and an execution unit to execute the decoded instruction to calculate for each packed data element position of a source operand, other than a first packed data element position, a value that comprises a packed data element of that packed data element position and all packed data elements of packed data element positions that are of lesser significance, store a first packed data element from the first packed data element position of the source operand into a corresponding first packed data element position of a destination operand, and for each calculated value, store the value into a corresponding packed data element position of the destination operand.
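Functionally, the packed delta decode amounts to a prefix sum over the source elements; the scalar Python sketch below models that behavior only and says nothing about the actual instruction encoding or a SIMD implementation.

```python
# Behavioral sketch: element 0 is copied through, and every later destination
# element is the sum of the source element at that position and all source
# elements of lesser significance (i.e. an inclusive prefix sum).

def delta_decode(src):
    dst = []
    running = 0
    for element in src:
        running += element
        dst.append(running)
    return dst

if __name__ == "__main__":
    deltas = [10, 2, 3, -1]          # delta-encoded packed elements
    print(delta_decode(deltas))      # [10, 12, 15, 14] -- reconstructed values
```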
-
10.
Publication number: US20170139715A1
Publication date: 2017-05-18
Application number: US15419888
Application date: 2017-01-30
Applicant: INTEL CORPORATION
Inventor: ELMOUSTAPHA OULD-AHMED-VALL , THOMAS WILLHALM , TRACY GARRETT DRYSDALE
IPC: G06F9/30
CPC classification number: G06F9/30145 , G06F9/3001 , G06F9/30014 , G06F9/30018 , G06F9/30036 , G06F9/30105 , G06F9/30109 , G06F9/30112 , G06F9/3013 , H04N19/42
Abstract: Systems, apparatuses, and methods for performing delta decoding on packed data elements of a source and storing the results in packed data elements of a destination using a single packed delta decode instruction are described. A processor may include a decoder to decode an instruction, and an execution unit to execute the decoded instruction to calculate for each packed data element position of a source operand, other than a first packed data element position, a value that comprises a packed data element of that packed data element position and all packed data elements of packed data element positions that are of lesser significance, store a first packed data element from the first packed data element position of the source operand into a corresponding first packed data element position of a destination operand, and for each calculated value, store the value into a corresponding packed data element position of the destination operand.