-
Publication No.: US20190102403A1
Publication Date: 2019-04-04
Application No.: US15719639
Filing Date: 2017-09-29
Applicant: INTEL CORPORATION
Inventor: Mark A. Schmisseur , Thomas Willhalm , Francesc Guim Bernat , Karthik Kumar
IPC: G06F17/30
Abstract: Techniques and apparatus for providing access to data in a plurality of storage formats are described. In one embodiment, for example, an apparatus may include at least one memory and logic, at least a portion of which is comprised in hardware coupled to the at least one memory, to determine a first storage format of a database operation on a database having a second storage format, and perform a format conversion process responsive to the first storage format being different than the second storage format, the format conversion process to translate a virtual address of the database operation to a physical address, and determine a converted physical address comprising a memory address according to the first storage format. Other embodiments are described and claimed.
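The abstract leaves the conversion mechanics open; as a purely hypothetical illustration, the sketch below maps an element's offset in a row-major (row-store) layout to its offset in a column-major (column-store) layout, which is one way a storage-format conversion could redirect an address. The table shape, function names, and the conversion rule are assumptions, not details from the patent.

```python
# Hypothetical sketch: translating a row-store element offset into the
# column-store offset the same element would occupy after format conversion.
# Table dimensions and the mapping rule are illustrative assumptions.

def row_major_offset(row: int, col: int, num_cols: int) -> int:
    """Offset of element (row, col) in a row-store layout."""
    return row * num_cols + col

def column_major_offset(row: int, col: int, num_rows: int) -> int:
    """Offset of element (row, col) in a column-store layout."""
    return col * num_rows + row

def convert_offset(offset: int, num_rows: int, num_cols: int) -> int:
    """Map a row-store offset to the column-store offset of the same element."""
    row, col = divmod(offset, num_cols)
    return column_major_offset(row, col, num_rows)

if __name__ == "__main__":
    # In a 4x3 table, element (2, 1) sits at offset 7 in row-major order
    # and at offset 6 in column-major order.
    assert row_major_offset(2, 1, num_cols=3) == 7
    assert convert_offset(7, num_rows=4, num_cols=3) == 6
```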
-
Publication No.: US20190102147A1
Publication Date: 2019-04-04
Application No.: US15719853
Filing Date: 2017-09-29
Applicant: INTEL CORPORATION
Inventor: Karthik Kumar , Francesc Guim Bernat , Thomas Willhalm , Mark A. Schmisseur
Abstract: Examples may include a data center in which memory sleds are provided with logic to filter data stored on the memory sled responsive to filtering requests from a compute sled. Memory sleds may include memory filtering logic arranged to receive filtering requests, filter data stored on the memory sled, and provide filtering results to the requesting entity. Additionally, a data center is provided in which the fabric interconnect protocols by which sleds in the data center communicate include filtering instructions, such that compute sleds can request filtering on memory sleds.
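As a rough, assumed illustration of near-memory filtering, the sketch below models a memory sled that evaluates a predicate where the data resides and returns only the matching values to the requesting compute sled. The MemorySled and FilterRequest names and the request fields are invented for the example; the patent describes the idea at the fabric-protocol level, not this API.

```python
# Illustrative software stand-in for a memory sled that filters in place and
# ships only results across the fabric. All names and fields are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class FilterRequest:
    region_id: int                      # which stored region to scan
    predicate: Callable[[int], bool]    # condition evaluated on the memory sled

class MemorySled:
    def __init__(self) -> None:
        self.regions: Dict[int, List[int]] = {}

    def store(self, region_id: int, values: List[int]) -> None:
        self.regions[region_id] = values

    def handle_filter(self, request: FilterRequest) -> List[int]:
        # Filtering happens where the data lives, so only matches return.
        return [v for v in self.regions[request.region_id] if request.predicate(v)]

if __name__ == "__main__":
    sled = MemorySled()
    sled.store(region_id=1, values=list(range(1_000)))
    hits = sled.handle_filter(FilterRequest(region_id=1, predicate=lambda v: v % 100 == 0))
    print(hits)   # only 10 values cross back to the compute sled, not 1,000
```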
-
Publication No.: US20190095329A1
Publication Date: 2019-03-28
Application No.: US15717825
Filing Date: 2017-09-27
Applicant: Intel Corporation
Inventor: Karthik Kumar , Benjamin A. Graniello
IPC: G06F12/0831 , G06F12/02 , G06F12/1009
Abstract: Technology for a system operable to allocate physical pages of memory is described. The system can include a memory side cache, a memory side cache monitoring unit coupled to the memory side cache, and an operating system (OS) page allocator. The OS page allocator can receive feedback from the memory side cache monitoring unit. The OS page allocator can adjust a page allocation policy that defines the physical pages allocated by the OS page allocator based on the feedback received from the memory side cache monitoring unit.
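A minimal, hypothetical model of that feedback loop follows: a memory-side-cache monitor reports per-region miss rates and the page allocator steers new physical pages toward the least-contended region. The CacheMonitor and PageAllocator names, the miss-rate metric, and the steering rule are illustrative assumptions rather than the patent's policy.

```python
# Toy feedback loop: cache monitoring feedback adjusts where the OS page
# allocator places new physical pages. Policy details are assumptions.

from typing import Dict, List, Tuple

class CacheMonitor:
    def __init__(self) -> None:
        self.miss_rate: Dict[int, float] = {}   # region id -> observed miss rate

    def report(self, region: int, miss_rate: float) -> None:
        self.miss_rate[region] = miss_rate

class PageAllocator:
    def __init__(self, monitor: CacheMonitor, regions: List[int]) -> None:
        self.monitor = monitor
        self.regions = regions
        self.next_page: Dict[int, int] = {r: 0 for r in regions}

    def allocate_page(self) -> Tuple[int, int]:
        # Policy adjustment: prefer the region the monitor says behaves best.
        best = min(self.regions, key=lambda r: self.monitor.miss_rate.get(r, 0.0))
        page = (best, self.next_page[best])
        self.next_page[best] += 1
        return page

if __name__ == "__main__":
    monitor = CacheMonitor()
    allocator = PageAllocator(monitor, regions=[0, 1])
    monitor.report(0, miss_rate=0.40)
    monitor.report(1, miss_rate=0.05)
    print(allocator.allocate_page())   # (1, 0): feedback steers allocation to region 1
```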
-
Publication No.: US20180351836A1
Publication Date: 2018-12-06
Application No.: US15613944
Filing Date: 2017-06-05
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Susanne M. Balle , Daniel Rivas Barragan , Rahul Khanna
Abstract: Particular embodiments described herein provide for a network element that can be configured to receive a request related to one or more disaggregated resources, link the one or more disaggregated resources to a local counter, receive performance related data from each of the one or more disaggregated resources, and store the performance related data in the local counter. In an example, the one or more disaggregated resources comprise a software defined infrastructure composite node.
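The sketch below is a hypothetical software analogue of that bookkeeping: a network element links each disaggregated resource of a composite node to a local counter and accumulates the performance data the resources report. The class name, resource identifiers, and the accumulation rule are invented for illustration.

```python
# Hypothetical model: per-resource local counters maintained by a network element.

from collections import defaultdict
from typing import Dict

class NetworkElement:
    def __init__(self) -> None:
        # resource id -> local counter holding accumulated performance data
        self.counters: Dict[str, int] = defaultdict(int)

    def link(self, resource_id: str) -> None:
        self.counters[resource_id] = 0       # create the local counter for the resource

    def on_perf_data(self, resource_id: str, value: int) -> None:
        self.counters[resource_id] += value  # store the reported performance data

if __name__ == "__main__":
    ne = NetworkElement()
    for res in ("remote-memory-0", "accelerator-3"):   # hypothetical resource ids
        ne.link(res)
    ne.on_perf_data("remote-memory-0", 128)
    ne.on_perf_data("accelerator-3", 42)
    print(dict(ne.counters))
```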
-
Publication No.: US20180285009A1
Publication Date: 2018-10-04
Application No.: US15474005
Filing Date: 2017-03-30
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , John Chun Kwok Leung , Mark Schmisseur , Thomas Willhalm
IPC: G06F3/06
CPC classification number: G06F3/0631 , G06F3/0604 , G06F3/0673 , G06F9/5044 , G06F9/5061
Abstract: The present disclosure relates to a dynamically composable computing system comprising a computing fabric with a plurality of different disaggregated computing hardware resources having respective hardware characteristics. A resource manager has access to the respective hardware characteristics of the different disaggregated computing hardware resources and is configured to assemble a composite computing node by selecting one or more disaggregated computing hardware resources with respective hardware characteristics meeting requirements of an application to be executed on the composite computing node. An orchestrator is configured to schedule the application using the assembled composite computing node.
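As one assumed matching policy (the abstract does not prescribe how resources are selected), the sketch below has a resource manager pick, for each resource type an application requires, the first pooled resource whose capacity meets the requirement. The Resource fields and the requirements format are invented for the example.

```python
# Illustrative composite-node assembly from a pool of disaggregated resources.
# Selection rule and data model are assumptions, not the patent's algorithm.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Resource:
    name: str
    kind: str                 # e.g. "cpu", "memory", "fpga"
    capacity: int

def assemble_composite_node(pool: List[Resource],
                            requirements: Dict[str, int]) -> List[Resource]:
    node: List[Resource] = []
    for kind, needed in requirements.items():
        match = next((r for r in pool
                      if r.kind == kind and r.capacity >= needed and r not in node), None)
        if match is None:
            raise RuntimeError(f"no {kind} resource with capacity >= {needed}")
        node.append(match)
    return node

if __name__ == "__main__":
    pool = [Resource("cpu-a", "cpu", 16), Resource("mem-a", "memory", 256),
            Resource("mem-b", "memory", 1024)]
    print(assemble_composite_node(pool, {"cpu": 8, "memory": 512}))
    # -> cpu-a plus mem-b, the only memory resource large enough
```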
-
Publication No.: US20180165196A1
Publication Date: 2018-06-14
Application No.: US15375675
Filing Date: 2016-12-12
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Mohan J. Kumar , Thomas Willhalm , Robert G. Blankenship
IPC: G06F12/0808 , G06F13/16 , G06F3/06 , G06F12/128
CPC classification number: G06F12/0808 , G06F12/0831 , G06F12/0868 , G06F12/12 , G06F12/128 , G06F13/1663 , G06F2212/1024 , G06F2212/621
Abstract: Embodiments provide for a processor including a cache, a caching agent, and a processing node to decode an instruction including at least one operand specifying an address range within a distributed shared memory (DSM) and perform a flush to a first of a plurality of memory devices in the DSM at the specified address range.
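The sketch below is a rough software analogue, not the ISA encoding: flushing an address range in a distributed shared memory writes back every cached line in the range to whichever memory device owns that address. The static interleaving, line size, and data structures are assumptions for illustration.

```python
# Toy range flush over a distributed shared memory. Device boundaries,
# line size, and the cache representation are illustrative assumptions.

from typing import Dict, List, Tuple

LINE = 64

def owner(addr: int, device_span: int = 1 << 20) -> int:
    """Assumed static interleaving: which DSM memory device holds this address."""
    return addr // device_span

def flush_range(dirty_lines: Dict[int, bytes], start: int, length: int
                ) -> Dict[int, List[Tuple[int, bytes]]]:
    """Group writebacks per owning device; drop flushed lines from the cache."""
    writebacks: Dict[int, List[Tuple[int, bytes]]] = {}
    for addr in list(dirty_lines):
        if start <= addr < start + length:
            writebacks.setdefault(owner(addr), []).append((addr, dirty_lines.pop(addr)))
    return writebacks

if __name__ == "__main__":
    cache = {0x100000: b"a" * LINE, 0x200040: b"b" * LINE}
    print(flush_range(cache, start=0x100000, length=0x200000))
    print(cache)   # both dirty lines written back to their owning devices
```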
-
Publication No.: US09983996B2
Publication Date: 2018-05-29
Application No.: US14965487
Filing Date: 2015-12-10
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Brian Slechta
IPC: G06F12/123 , G06F12/0831 , G06F12/128 , G06F12/084 , G06F12/12
CPC classification number: G06F12/0833 , G06F12/0813 , G06F12/0815 , G06F12/084 , G06F12/12 , G06F12/123 , G06F12/128 , G06F2212/1021 , G06F2212/2542 , G06F2212/314 , G06F2212/621
Abstract: Technologies for managing cache memory of a processor in a distributed shared memory system includes managing a distance value and an age value associated with each cache line of the cache memory. The distance value is indicative of a distance of a memory resource, relative to the processor, from which data stored in the corresponding cache line originates. The age value is based on the distance value and the number of times for which the corresponding cache line has been considered for eviction since a previous eviction of the corresponding cache line. Initially, the age value is set to the distance value. Additionally, every time a cache line is accessed, the age value associated with the accessed cache line is reset to the corresponding distance value. During a cache eviction operation, the cache line for eviction is selected based on the age value associated with each cache line. The age values of cache lines not selected for eviction are subsequently decremented such that even cache lines associated with remote memory resources will eventually be considered for eviction if not recently accessed.
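The abstract states the policy concretely enough to sketch: age starts at the distance, is reset to the distance on access, the line with the smallest age is evicted, and surviving lines are decremented. The toy cache below follows those rules; its class names and data structures are illustrative only, not the hardware implementation.

```python
# Sketch of distance-aware cache eviction as described in the abstract.

from dataclasses import dataclass
from typing import Dict

@dataclass
class Line:
    distance: int   # larger = data originated from a more remote memory resource
    age: int

class DistanceAwareCache:
    def __init__(self) -> None:
        self.lines: Dict[int, Line] = {}

    def fill(self, addr: int, distance: int) -> None:
        self.lines[addr] = Line(distance=distance, age=distance)   # age starts at distance

    def access(self, addr: int) -> None:
        line = self.lines[addr]
        line.age = line.distance            # reset age to distance on every access

    def evict_one(self) -> int:
        victim = min(self.lines, key=lambda a: self.lines[a].age)
        del self.lines[victim]
        for line in self.lines.values():    # survivors age toward eventual eviction
            line.age = max(line.age - 1, 0)
        return victim

if __name__ == "__main__":
    cache = DistanceAwareCache()
    cache.fill(0x10, distance=1)   # line backed by local memory
    cache.fill(0x20, distance=4)   # line backed by remote memory
    print(hex(cache.evict_one()))  # 0x10: remote data is initially kept longer
```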
-
Publication No.: US20180101482A1
Publication Date: 2018-04-12
Application No.: US15784625
Filing Date: 2017-10-16
Applicant: Intel Corporation
Inventor: Karthik Kumar , Martin P. Dimitrov , Thomas Willhalm
IPC: G06F12/1027 , G06F12/0862
CPC classification number: G06F12/1027 , G06F9/00 , G06F12/0862 , G06F13/16 , G06F2212/1024 , G06F2212/221
Abstract: A processor or system may include a memory controller to store, in a pre-allocated portion of bit-addressable, random access persistent memory (PM), a relationship between a group of addresses being stored in the PM according to a set of instructions when executed. The memory controller is further to retrieve the relationship when accessing an address from the group of addresses.
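As a hypothetical software analogue (the persistence and bit-addressability of the real PM are not modeled, and the names below are invented), the sketch records which addresses a program uses together in a reserved metadata region and looks that group up again when any member is accessed, for example to fetch the rest ahead of time.

```python
# Illustrative relationship store: groups of related addresses kept in a
# pre-allocated metadata region and retrieved on access. Purely a sketch.

from typing import Dict, List, Tuple

class RelationshipStore:
    def __init__(self) -> None:
        # pre-allocated metadata region: address -> the group it belongs to
        self.metadata: Dict[int, Tuple[int, ...]] = {}

    def record_group(self, addresses: List[int]) -> None:
        group = tuple(sorted(addresses))
        for addr in addresses:
            self.metadata[addr] = group

    def related(self, addr: int) -> Tuple[int, ...]:
        # retrieved when the address is accessed, so related addresses
        # can be fetched ahead of demand
        return self.metadata.get(addr, (addr,))

if __name__ == "__main__":
    store = RelationshipStore()
    store.record_group([0x1000, 0x8000, 0x20000])   # addresses touched together
    print([hex(a) for a in store.related(0x8000)])
```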
-
Publication No.: US20180077270A1
Publication Date: 2018-03-15
Application No.: US15260613
Filing Date: 2016-09-09
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Raj K. Ramanujan , Daniel Rivas Barragan
CPC classification number: H04L69/324 , G06F15/173 , H04L1/1642 , H04L12/50 , H04L49/00
Abstract: Technologies for using fabric supported sequencers in fabric architectures includes a network switch communicatively coupled to a plurality of computing nodes. The network switch is configured to receive a sequencer access message from one of the plurality of computing nodes that includes an identifier of a sequencing counter corresponding to a sequencer session and one or more operation parameters. The network switch is additionally configured to perform an operation on a value associated with the identifier of the sequencing counter as a function of the one or more operation parameters, increment the identifier of the sequencing counter, and associate a result of the operation with the incremented identifier of the sequencing counter. The network switch is further configured to transmit an acknowledgment of successful access to the computing node that includes the result of the operation and the incremented identifier of the sequencing counter. Other embodiments are described herein.
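A toy model of the switch-side behaviour follows: a sequencer access message names a counter, the switch applies the requested operation to the value bound to that counter, increments the counter identifier, re-binds the result, and acknowledges with both. The message layout, class name, and operations are assumptions made only for illustration.

```python
# Sketch of a fabric-supported sequencer hosted in a network switch.

from typing import Callable, Dict, Tuple

class FabricSequencer:
    def __init__(self) -> None:
        self.values: Dict[int, int] = {}     # counter identifier -> associated value

    def create(self, counter_id: int, initial: int = 0) -> None:
        self.values[counter_id] = initial

    def access(self, counter_id: int,
               op: Callable[[int, int], int], operand: int) -> Tuple[int, int]:
        result = op(self.values.pop(counter_id), operand)
        new_id = counter_id + 1              # increment the sequencing counter identifier
        self.values[new_id] = result         # associate the result with the new identifier
        return result, new_id                # acknowledgment returned to the computing node

if __name__ == "__main__":
    switch = FabricSequencer()
    switch.create(counter_id=7, initial=100)
    print(switch.access(7, op=lambda v, x: v + x, operand=5))   # (105, 8)
```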
-
Publication No.: US20180027062A1
Publication Date: 2018-01-25
Application No.: US15638855
Filing Date: 2017-06-30
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Susanne M. Balle , Rahul Khanna , Sujoy Sen , Karthik Kumar
IPC: H04L29/08 , H04L12/26 , H04L12/919
CPC classification number: H04Q11/0005 , B25J15/0014 , B65G1/0492 , G02B6/3882 , G02B6/3893 , G02B6/3897 , G02B6/4292 , G02B6/4452 , G05D23/1921 , G05D23/2039 , G06F1/183 , G06F3/061 , G06F3/0611 , G06F3/0613 , G06F3/0616 , G06F3/0619 , G06F3/0625 , G06F3/0631 , G06F3/0638 , G06F3/064 , G06F3/0647 , G06F3/0653 , G06F3/0655 , G06F3/0658 , G06F3/0659 , G06F3/0664 , G06F3/0665 , G06F3/067 , G06F3/0673 , G06F3/0679 , G06F3/0683 , G06F3/0688 , G06F3/0689 , G06F8/65 , G06F9/30036 , G06F9/3887 , G06F9/4401 , G06F9/5016 , G06F9/5044 , G06F9/505 , G06F9/5072 , G06F9/5077 , G06F9/544 , G06F11/141 , G06F11/3414 , G06F12/0862 , G06F12/0893 , G06F12/10 , G06F12/109 , G06F12/1408 , G06F13/161 , G06F13/1668 , G06F13/1694 , G06F13/4022 , G06F13/4068 , G06F13/409 , G06F13/42 , G06F13/4282 , G06F15/8061 , G06F16/9014 , G06F2209/5019 , G06F2209/5022 , G06F2212/1008 , G06F2212/1024 , G06F2212/1041 , G06F2212/1044 , G06F2212/152 , G06F2212/202 , G06F2212/401 , G06F2212/402 , G06F2212/7207 , G06Q10/06 , G06Q10/06314 , G06Q10/087 , G06Q10/20 , G06Q50/04 , G07C5/008 , G08C17/02 , G08C2200/00 , G11C5/02 , G11C5/06 , G11C7/1072 , G11C11/56 , G11C14/0009 , H03M7/30 , H03M7/3084 , H03M7/3086 , H03M7/40 , H03M7/4031 , H03M7/4056 , H03M7/4081 , H03M7/6005 , H03M7/6023 , H04B10/25 , H04B10/2504 , H04L9/0643 , H04L9/14 , H04L9/3247 , H04L9/3263 , H04L12/2809 , H04L29/12009 , H04L41/024 , H04L41/046 , H04L41/0813 , H04L41/082 , H04L41/0896 , H04L41/12 , H04L41/145 , H04L41/147 , H04L41/5019 , H04L43/065 , H04L43/08 , H04L43/0817 , H04L43/0876 , H04L43/0894 , H04L43/16 , H04L45/02 , H04L45/52 , H04L47/24 , H04L47/38 , H04L47/765 , H04L47/782 , H04L47/805 , H04L47/82 , H04L47/823 , H04L49/00 , H04L49/15 , H04L49/25 , H04L49/357 , H04L49/45 , H04L49/555 , H04L67/02 , H04L67/10 , H04L67/1004 , H04L67/1008 , H04L67/1012 , H04L67/1014 , H04L67/1029 , H04L67/1034 , H04L67/1097 , H04L67/12 , H04L67/16 , H04L67/306 , H04L67/34 , H04L69/04 , H04L69/329 , H04Q1/04 , H04Q11/00 , H04Q11/0003 , H04Q11/0062 , H04Q11/0071 , H04Q2011/0037 , H04Q2011/0041 , H04Q2011/0052 , H04Q2011/0073 , H04Q2011/0079 , H04Q2011/0086 , H04Q2213/13523 , H04Q2213/13527 , H04W4/023 , H04W4/80 , H05K1/0203 , H05K1/181 , H05K5/0204 , H05K7/1418 , H05K7/1421 , H05K7/1422 , H05K7/1442 , H05K7/1447 , H05K7/1461 , H05K7/1485 , H05K7/1487 , H05K7/1489 , H05K7/1491 , H05K7/1492 , H05K7/1498 , H05K7/2039 , H05K7/20709 , H05K7/20727 , H05K7/20736 , H05K7/20745 , H05K7/20836 , H05K13/0486 , H05K2201/066 , H05K2201/10121 , H05K2201/10159 , H05K2201/10189 , Y02D10/14 , Y02D10/151 , Y02P90/30 , Y04S10/54 , Y10S901/01
Abstract: Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by a logic portion in the execution of the workload, limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
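As a simplified illustration of that control loop, the sketch below caps a logic portion's use of a shared resource at a utilization threshold, throttles requests beyond the cap, and lets the threshold be revised while the workload runs. The class name, the numbers, and the adjustment rule are invented; the accelerator's real allocation logic is not described at this level in the abstract.

```python
# Toy shared-resource governor: enforce a utilization threshold and allow it
# to be adjusted as the workload executes. All parameters are illustrative.

class SharedResourceGovernor:
    def __init__(self, threshold: float) -> None:
        self.threshold = threshold      # allowed fraction of the shared resource
        self.used = 0.0

    def request(self, amount: float) -> float:
        # limit utilization to the current threshold
        granted = min(amount, max(self.threshold - self.used, 0.0))
        self.used += granted
        return granted

    def adjust(self, new_threshold: float) -> None:
        # the threshold is revisited while the workload runs
        self.threshold = new_threshold

if __name__ == "__main__":
    gov = SharedResourceGovernor(threshold=0.5)
    print(gov.request(0.4))   # 0.4 granted
    print(gov.request(0.3))   # ~0.1: only the remainder under the cap is granted
    gov.adjust(0.8)           # allocation logic raises the cap mid-run
    print(gov.request(0.3))   # 0.3 granted under the new threshold
```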
-