-
Publication number: US11343177B2
Publication date: 2022-05-24
Application number: US17086320
Filing date: 2020-10-30
IPC classification: H04L12/725, H04L45/302, H04L47/125, H04L49/10, H04L47/26, H04L49/20
Abstract: Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
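As a rough illustration of the throttling flow described above, here is a minimal Python sketch of an HFI-like component that compares monitored QoS levels against thresholds, generates throttling messages, and applies a throttling action when one is received. The class names, threshold scheme, and message fields are assumptions chosen for illustration, not taken from the patent.

# Illustrative sketch only -- names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class ThrottleMessage:
    source_node: int
    resource: str          # e.g. "egress_bandwidth"
    severity: float        # 0.0 (no throttling) .. 1.0 (fully throttled)

class HostFabricInterface:
    def __init__(self, node_id, qos_thresholds):
        self.node_id = node_id
        self.qos_thresholds = qos_thresholds  # resource -> max acceptable load
        self.rate_limits = {}                 # resource -> applied rate limit

    def detect_throttling_condition(self, qos_levels):
        # A throttling condition exists for any resource over its threshold.
        return [(r, load) for r, load in qos_levels.items()
                if load > self.qos_thresholds.get(r, 1.0)]

    def make_throttling_messages(self, qos_levels):
        # Generate one message per resource whose QoS level exceeds its threshold.
        return [ThrottleMessage(self.node_id, r, min(1.0, load))
                for r, load in self.detect_throttling_condition(qos_levels)]

    def on_throttling_message(self, msg):
        # Throttling action: scale back the named resource.
        self.rate_limits[msg.resource] = 1.0 - msg.severity

hfi = HostFabricInterface(node_id=3, qos_thresholds={"egress_bandwidth": 0.8})
msgs = hfi.make_throttling_messages({"egress_bandwidth": 0.95})
hfi.on_throttling_message(msgs[0])  # a peer node would normally receive this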
-
Publication number: US20210051096A1
Publication date: 2021-02-18
Application number: US17086320
Filing date: 2020-10-30
IPC classification: H04L12/725, H04L12/803, H04L12/933, H04L12/825, H04L12/931
Abstract: Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
-
Publication number: US20210011864A1
Publication date: 2021-01-14
Application number: US17032056
Filing date: 2020-09-25
Abstract: In one embodiment, an apparatus includes: a table to store a plurality of entries, each entry to identify a memory domain of a system and a coherency status of the memory domain; and a control circuit coupled to the table. The control circuit may be configured to receive a request to change a coherency status of a first memory domain of the system, and dynamically update a first entry of the table for the first memory domain to change the coherency status between a coherent memory domain and a non-coherent memory domain, in response to the request. Other embodiments are described and claimed.
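A minimal sketch of the kind of coherency-status table the abstract describes, assuming a simple dict-backed table keyed by domain identifier; the field names and API below are illustrative, not the claimed structure.

# Minimal sketch -- field names and API are assumptions.
from enum import Enum

class Coherency(Enum):
    COHERENT = "coherent"
    NON_COHERENT = "non_coherent"

class CoherencyTable:
    def __init__(self):
        self._entries = {}  # domain_id -> Coherency status

    def add_domain(self, domain_id, status=Coherency.COHERENT):
        self._entries[domain_id] = status

    def request_change(self, domain_id, new_status):
        # Dynamically flip a domain between coherent and non-coherent.
        if domain_id not in self._entries:
            raise KeyError(f"unknown memory domain {domain_id}")
        self._entries[domain_id] = new_status

table = CoherencyTable()
table.add_domain("dom0", Coherency.COHERENT)
table.request_change("dom0", Coherency.NON_COHERENT)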
-
Publication number: US20200218669A1
Publication date: 2020-07-09
Application number: US16820630
Filing date: 2020-03-16
Applicants: Ginger H. Gilsdorf, Karthik Kumar, Mark A. Schmisseur, Thomas Willhalm, Francesc Guim Bernat
Inventors: Ginger H. Gilsdorf, Karthik Kumar, Mark A. Schmisseur, Thomas Willhalm, Francesc Guim Bernat
IPC classification: G06F12/123, G06F12/0891, G06F12/02, G06F1/14, G11C7/22
Abstract: An apparatus and/or system is described including a memory device including a memory range and a temporal data management unit (TDMU) coupled to the memory device to receive, from an interface, the memory range and a temporal range corresponding to the validity of data in the memory range, check the temporal range against a time and/or date value provided by a timer or clock to identify the data in the memory range as expired, and invalidate the data that is expired in the memory device. In some embodiments, the TDMU includes hardware logic that resides on a memory module with the memory device and is coupled to invalidate expired data when the memory module is decoupled from the interface. Other embodiments may be disclosed and claimed.
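To make the expiry check concrete, here is a hedged software sketch of a TDMU-like component that tracks (memory range, temporal range) pairs and invalidates ranges whose validity has passed; the patent describes hardware logic on the memory module, and the names below are assumptions.

# Software-only sketch; the real TDMU is hardware logic on the module.
import time

class TemporalDataManagementUnit:
    def __init__(self):
        # Each entry: (start_addr, end_addr, valid_until_epoch_seconds)
        self.ranges = []

    def register(self, start, end, valid_until):
        self.ranges.append((start, end, valid_until))

    def invalidate_expired(self, invalidate_fn, now=None):
        # Invalidate every range whose temporal validity has passed.
        now = time.time() if now is None else now
        still_valid = []
        for start, end, valid_until in self.ranges:
            if valid_until <= now:
                invalidate_fn(start, end)  # e.g. zero or unmap the range
            else:
                still_valid.append((start, end, valid_until))
        self.ranges = still_valid

tdmu = TemporalDataManagementUnit()
tdmu.register(0x1000, 0x2000, valid_until=time.time() - 1)  # already expired
tdmu.invalidate_expired(lambda s, e: print(f"invalidating {hex(s)}-{hex(e)}"))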
-
Publication number: US20180089044A1
Publication date: 2018-03-29
Application number: US15277522
Filing date: 2016-09-27
Abstract: Technologies for providing network interface support for remote memory and storage failover protection include a compute node. The compute node includes a memory to store one or more protected resources and a network interface. The network interface is to receive, from a requestor node in communication with the compute node, a request to access one of the protected resources. The request identifies the protected resource by a memory address. Additionally, the network interface is to determine an identity of the requestor node and determine, as a function of the identity and permissions data associated with the memory address, whether the requestor node has permission to access the protected resource. Other embodiments are described and claimed.
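The permission check can be pictured with the short sketch below, which keys a per-address allow list by requestor node identity; the allow-list model is an assumption used only to make the check concrete.

# Illustrative sketch; the per-address allow-list model is an assumption.
class ProtectedMemoryNIC:
    def __init__(self):
        self.permissions = {}  # memory address -> set of node IDs allowed access

    def grant(self, address, node_id):
        self.permissions.setdefault(address, set()).add(node_id)

    def handle_request(self, requestor_node_id, address):
        # Allow access only if the requestor's identity appears in the
        # permissions data associated with the memory address.
        return requestor_node_id in self.permissions.get(address, set())

nic = ProtectedMemoryNIC()
nic.grant(0xDEAD0000, node_id=7)
assert nic.handle_request(7, 0xDEAD0000) is True
assert nic.handle_request(9, 0xDEAD0000) is False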
-
Publication number: US20210064531A1
Publication date: 2021-03-04
Application number: US17092803
Filing date: 2020-11-09
IPC classification: G06F12/0817
Abstract: Methods and apparatus for software-defined coherent caching of pooled memory. The pooled memory is implemented in an environment having a disaggregated architecture where compute resources such as compute platforms are connected to disaggregated memory via a network or fabric. Software-defined caching policies are implemented in hardware in a processor SoC or a discrete device such as a Network Interface Controller (NIC) by programming logic in an FPGA or accelerator on the SoC or discrete device. The programmed logic is configured to implement software-defined caching policies in hardware, effecting disaggregated memory (DM) caching in an associated DM cache of at least a portion of an address space allocated for a software application in the disaggregated memory. In connection with DM cache operations, such as cache lines evicted from a CPU, logic implemented in hardware determines whether a cache line in a DM cache is to be evicted and implements the software-defined caching policy for the DM cache, including associated memory coherency operations.
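A minimal sketch of a software-defined policy for a disaggregated-memory (DM) cache is given below; the LRU eviction rule and the write-back step standing in for the coherency operation are assumptions chosen for illustration, not the patent's specific policy.

# Sketch only -- LRU policy and write-back step are assumptions.
from collections import OrderedDict

class DMCache:
    def __init__(self, capacity, writeback_fn):
        self.capacity = capacity
        self.writeback_fn = writeback_fn  # pushes dirty lines back to pooled memory
        self.lines = OrderedDict()        # address -> (data, dirty)

    def access(self, address, data=None):
        # Read or write one cache line, applying the eviction policy.
        dirty = data is not None
        if address in self.lines:
            old_data, old_dirty = self.lines.pop(address)
            data = data if dirty else old_data
            dirty = dirty or old_dirty
        elif len(self.lines) >= self.capacity:
            victim_addr, (victim_data, victim_dirty) = self.lines.popitem(last=False)
            if victim_dirty:  # coherency action: write back before dropping
                self.writeback_fn(victim_addr, victim_data)
        self.lines[address] = (data, dirty)  # most-recently-used position
        return data

cache = DMCache(capacity=2, writeback_fn=lambda a, d: print(f"writeback {hex(a)}"))
cache.access(0x100, b"a")
cache.access(0x200, b"b")
cache.access(0x300, b"c")  # evicts 0x100 and writes it back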
-
Publication number: US20200026575A1
Publication date: 2020-01-23
Application number: US16586576
Filing date: 2019-09-27
IPC classification: G06F9/50
Abstract: Methods, apparatus, systems, and machine-readable storage media are disclosed for an edge computing device that is enabled to access and select between local and remote acceleration resources for edge computing processing. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry information that indicates availability of a remote acceleration resource to execute the function. An estimated time (and cost, or other identifiable or estimable considerations) to execute the function at each location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement.
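The selection step can be illustrated with the following sketch, which estimates completion time locally and remotely and prefers whichever option meets an SLA deadline; the telemetry fields and the linear cost model are assumptions for illustration.

# Hedged sketch; telemetry fields and cost model are assumptions.
def estimate_seconds(queue_depth, per_op_latency_s, ops):
    return (queue_depth + ops) * per_op_latency_s

def choose_accelerator(local_telemetry, remote_telemetry, ops, sla_deadline_s):
    # Pick local or remote acceleration based on estimated completion time,
    # preferring an option that meets the SLA (and the faster one if both do).
    t_local = estimate_seconds(local_telemetry["queue_depth"],
                               local_telemetry["per_op_latency_s"], ops)
    t_remote = (remote_telemetry["network_rtt_s"] +
                estimate_seconds(remote_telemetry["queue_depth"],
                                 remote_telemetry["per_op_latency_s"], ops))
    candidates = [("local", t_local), ("remote", t_remote)]
    feasible = [c for c in candidates if c[1] <= sla_deadline_s]
    return min(feasible or candidates, key=lambda c: c[1])

print(choose_accelerator({"queue_depth": 20, "per_op_latency_s": 1e-4},
                         {"queue_depth": 2, "per_op_latency_s": 2e-5,
                          "network_rtt_s": 5e-4},
                         ops=10_000, sla_deadline_s=0.5))  # -> ('remote', ...)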
-
Publication number: US20200319696A1
Publication date: 2020-10-08
Application number: US16907264
Filing date: 2020-06-21
IPC classification: G06F1/3234, G06F12/0804, G06N3/08, G06N3/04, G06F1/3287
Abstract: Methods and apparatus for platform ambient data management schemes for tiered architectures. A platform including one or more CPUs coupled to multiple tiers of memory comprising various types of DIMMs (e.g., DRAM, hybrid, DCPMM) is powered by a battery subsystem receiving input energy harvested from one or more green energy sources. Energy threshold conditions are detected, and associated memory reconfiguration is performed. The memory reconfiguration may include, but is not limited to, copying data between DIMMs (or memory ranks on the DIMMs) in the same tier, copying data from a first type of memory to a second type of memory on a hybrid DIMM, and flushing dirty lines in a DIMM in a first memory tier being used as a cache for a second memory tier. Following data copy and flushing operations, the DIMMs and/or their memory devices are powered down and/or deactivated. In one aspect, machine learning models trained on historical data are employed to project harvested energy levels that are used in detecting energy threshold conditions.
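As a loose illustration of the reconfiguration step, the sketch below reacts to a projected energy shortfall by flushing data off all but one DIMM and deactivating the rest; the threshold rule and data layout are assumptions, and the machine-learning energy projection is not modeled.

# Loose sketch -- threshold rule and DIMM layout are assumptions.
def reconfigure_on_energy_threshold(projected_wh, threshold_wh, dimms):
    # dimms: list of dicts like {"id": ..., "active": True, "dirty_lines": [...]}.
    # When projected harvested energy drops below the threshold, copy/flush data
    # to the first DIMM and power down the others.
    if projected_wh >= threshold_wh:
        return []  # enough energy, no reconfiguration
    survivor, *others = dimms
    powered_down = []
    for dimm in others:
        survivor.setdefault("data", []).extend(dimm.pop("dirty_lines", []))  # copy/flush
        dimm["active"] = False  # power down / deactivate
        powered_down.append(dimm["id"])
    return powered_down

dimms = [{"id": "dimm0", "active": True, "dirty_lines": []},
         {"id": "dimm1", "active": True, "dirty_lines": ["lineA", "lineB"]}]
print(reconfigure_on_energy_threshold(projected_wh=3.0, threshold_wh=5.0, dimms=dimms))
# -> ['dimm1']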
-
Publication number: US20170185351A1
Publication date: 2017-06-29
Application number: US14998085
Filing date: 2015-12-24
IPC classification: G06F3/06
CPC classification: G06F12/0893, G06F2212/1024, G06F2212/603, G06F2212/6042
Abstract: Systems, apparatuses and methods may provide for detecting an issued request in a queue that is shared by a plurality of domains in a memory architecture, wherein the plurality of domains are associated with non-uniform access latencies. Additionally, a destination domain associated with the issued request may be determined. Moreover, a first set of additional requests may be prevented from being issued to the queue if the issued request satisfies an overrepresentation condition with respect to the destination domain and the first set of additional requests are associated with the destination domain. In one example, a second set of additional requests are permitted to be issued to the queue while the first set of additional requests are prevented from being issued to the queue, wherein the second set of additional requests are associated with one or more remaining domains in the plurality of domains.
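The overrepresentation check can be made concrete with the sketch below, where one destination domain may hold at most a fixed share of queue slots while requests to other domains continue to issue; the per-domain share rule is an assumption standing in for the patent's overrepresentation condition.

# Sketch -- the fixed per-domain share is an assumed overrepresentation rule.
from collections import Counter, deque

class DomainFairQueue:
    def __init__(self, capacity, max_share=0.5):
        self.capacity = capacity
        self.max_share = max_share  # fraction of slots one domain may occupy
        self.queue = deque()
        self.per_domain = Counter()

    def try_issue(self, request, destination_domain):
        # Prevent the request if its destination domain is overrepresented;
        # requests bound for other domains are still permitted.
        limit = int(self.capacity * self.max_share)
        if self.per_domain[destination_domain] >= limit or len(self.queue) >= self.capacity:
            return False
        self.queue.append((request, destination_domain))
        self.per_domain[destination_domain] += 1
        return True

q = DomainFairQueue(capacity=4, max_share=0.5)
print([q.try_issue(i, "near_memory") for i in range(3)])  # [True, True, False]
print(q.try_issue(3, "far_memory"))                       # True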
-
Publication number: US20190235773A1
Publication date: 2019-08-01
Application number: US16378828
Filing date: 2019-04-09
CPC classification: G06F3/0622, G06F3/0604, G06F3/0658, G06F3/0659, G06F3/0685, G06F21/6218
Abstract: Examples relate to a memory controller or memory controller device for a memory pool of a computer system, to a management apparatus or management device for the computer system, to an apparatus or device for a compute node of the computer system, and to corresponding methods and computer programs. The memory pool comprises computer memory that is accessible to a plurality of compute nodes of the computer system via the memory controller. The memory controller comprises interface circuitry for communicating with the plurality of compute nodes. The memory controller comprises control circuitry configured to obtain an access control instruction via the interface circuitry. The access control instruction indicates that access to a portion of the computer memory of the memory pool is to be granted to one or more processes being executed by the plurality of compute nodes of the computer system. The access control instruction comprises information related to a node identifier and a process identifier for each of the one or more processes. The control circuitry is configured to provide access to the portion of the computer memory of the memory pool to the one or more processes based on the access control instruction.
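To picture the access control instruction, here is a short sketch of a pool-memory controller that records grants keyed by (node identifier, process identifier) and answers access checks against the granted address range; the instruction format and range-based grants are assumptions for illustration.

# Illustrative sketch; the instruction format is an assumption.
class PoolMemoryController:
    def __init__(self):
        # (node_id, process_id) -> list of (start, end) granted ranges
        self.grants = {}

    def apply_access_control(self, start, end, principals):
        # principals: iterable of (node_id, process_id) pairs named by the
        # access control instruction.
        for node_id, process_id in principals:
            self.grants.setdefault((node_id, process_id), []).append((start, end))

    def may_access(self, node_id, process_id, address):
        return any(start <= address < end
                   for start, end in self.grants.get((node_id, process_id), []))

ctrl = PoolMemoryController()
ctrl.apply_access_control(0x0, 0x1000, [(1, 4242), (2, 7)])
assert ctrl.may_access(1, 4242, 0x800)
assert not ctrl.may_access(3, 4242, 0x800)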