-
1.
Publication No.: US10915461B2
Publication Date: 2021-02-09
Application No.: US16292762
Filing Date: 2019-03-05
Inventors: Ekaterina M. Ambroladze , Robert J. Sonnelitter, III , Matthias Klein , Craig Walters , Kevin Lopes , Michael A. Blake , Tim Bronson , Kenneth Klapproth , Vesselina Papazova , Hieu T Huynh
IPC Classes: G06F12/126 , G06F12/084 , G06F12/0811
Abstract: Embodiments of the present invention are directed to a computer-implemented method for cache eviction. The method includes detecting a first data in a shared cache and a first cache in response to a request by a first processor. The first data is determined to have a mid-level cache eviction priority. A request is detected from a second processor for the same first data as requested by the first processor. However, in this instance, the second processor has indicated that the same first data has a low-level cache eviction priority. The first data is duplicated and loaded to a second cache; however, the data has a low-level cache eviction priority at the second cache.
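As a rough illustration of the per-copy eviction-priority idea in this abstract, the C++ sketch below models two private caches that hold duplicates of the same line with different eviction priorities; the cache structure, priority levels, and victim-selection rule are assumptions made for the example, not the patented design.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

enum class EvictPriority { Low, Mid, High };  // Low = keep longest, High = evict first

struct CacheLine {
    uint64_t addr;
    EvictPriority prio;
};

// Toy private cache: on overflow, evicts the line with the highest eviction priority.
struct PrivateCache {
    std::vector<CacheLine> lines;
    size_t capacity = 2;

    void install(uint64_t addr, EvictPriority prio) {
        if (lines.size() == capacity) {
            auto victim = lines.begin();
            for (auto it = lines.begin(); it != lines.end(); ++it)
                if (it->prio > victim->prio) victim = it;
            lines.erase(victim);
        }
        lines.push_back({addr, prio});
    }

    bool contains(uint64_t addr) const {
        for (const auto& l : lines)
            if (l.addr == addr) return true;
        return false;
    }
};

int main() {
    PrivateCache cache0, cache1;
    const uint64_t addr = 0x1000;

    // Processor 0 requests the line; its copy is installed with a mid-level priority.
    cache0.install(addr, EvictPriority::Mid);
    // Processor 1 requests the same line but hints a low-level (stickier) priority,
    // so the duplicate copy in its cache is installed with that lower priority.
    cache1.install(addr, EvictPriority::Low);

    // Later traffic fills both caches and forces an eviction.
    cache0.install(0x2000, EvictPriority::Mid);
    cache1.install(0x2000, EvictPriority::Mid);
    cache0.install(0x3000, EvictPriority::Low);
    cache1.install(0x3000, EvictPriority::Low);

    std::cout << "cache0 still holds line: " << cache0.contains(addr) << "\n"   // 0 (evicted)
              << "cache1 still holds line: " << cache1.contains(addr) << "\n";  // 1 (retained)
    return 0;
}
```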
-
2.
Publication No.: US20200379760A1
Publication Date: 2020-12-03
Application No.: US16423713
Filing Date: 2019-05-28
Inventors: Christian Jacobi , Matthias Klein , Martin Recktenwald , Anthony Saporito , Robert J. Sonnelitter, III
IPC Classes: G06F9/30 , G06F12/0815
Abstract: In one example implementation according to aspects of the present disclosure, a computer-implemented method for executing a load instruction with a timeout includes receiving, by a processing device, the load instruction. The method further includes attempting, by the processing device, to load a lock on a cache line of a memory. The method further includes determining, by the processing device, whether the timeout has expired prior to a successful loading of the lock on the cache line. The method further includes, responsive to determining that the timeout has expired, executing, by the processing device, another instruction instead of loading the lock on the cache line.
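The load-with-timeout behavior can be approximated in software with a bounded lock attempt: try to take the lock until the timeout expires, then do other work instead. The sketch below uses std::timed_mutex as a stand-in for the hardware lock on a cache line; this is purely an analogy, not the instruction described in the abstract.

```cpp
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

std::timed_mutex cache_line_lock;  // stands in for the lock on a cache line

int main() {
    using namespace std::chrono_literals;

    // Another thread holds the "cache line" lock for a while.
    std::thread owner([] {
        std::lock_guard<std::timed_mutex> g(cache_line_lock);
        std::this_thread::sleep_for(100ms);
    });
    std::this_thread::sleep_for(10ms);  // let the owner grab the lock first

    // The load attempts to obtain the lock, but only until the timeout expires.
    if (cache_line_lock.try_lock_for(20ms)) {
        std::cout << "lock acquired, perform the load\n";
        cache_line_lock.unlock();
    } else {
        // Timeout expired before the lock was obtained: execute another
        // instruction instead of waiting indefinitely.
        std::cout << "timeout expired, doing other work instead\n";
    }

    owner.join();
    return 0;
}
```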
-
3.
Publication No.: US20190018775A1
Publication Date: 2019-01-17
Application No.: US15651543
Filing Date: 2017-07-17
Inventors: Ekaterina M. Ambroladze , Timothy C. Bronson , Matthias Klein , Pak-kin Mak , Vesselina K. Papazova , Robert J. Sonnelitter, III , Lahiruka S. Winter
IPC Classes: G06F12/0831 , G06F13/16
CPC Classes: G06F12/0831 , G06F13/1615 , G06F2212/60 , G06F2212/621
Abstract: Embodiments include methods, systems, and computer program products for maintaining ordered memory access with parallel access data streams associated with a distributed shared memory system. The computer-implemented method includes performing, by a first cache, a key check, the key check being associated with a first ordered data store. A first memory node signals to an input/output (I/O) controller that the first memory node is ready to begin pipelining of a second ordered data store into the first memory node. A second cache returns a key response to the first cache indicating that the pipelining of the second ordered data store can proceed. The first memory node sends a ready signal to the I/O controller indicating that the first memory node is ready to continue pipelining of the second ordered data store into the first memory node, wherein the ready signal is triggered by receipt of the key response.
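The sequence of key check, ready-to-begin, key response, and ready-to-continue can be read as a four-step handshake. The short C++ sketch below simply replays that order with log messages; the component names mirror the abstract, but everything else is an illustrative assumption.

```cpp
#include <iostream>
#include <string>

static void log(const std::string& who, const std::string& what) {
    std::cout << who << ": " << what << "\n";
}

int main() {
    bool key_response_received = false;

    // 1. The first cache performs a key check for the first ordered data store.
    log("first cache", "key check for first ordered data store");

    // 2. The first memory node tells the I/O controller it is ready to BEGIN
    //    pipelining the second ordered data store into the memory node.
    log("memory node", "ready-to-begin -> I/O controller");

    // 3. The second cache returns a key response to the first cache, indicating
    //    that pipelining of the second ordered store can proceed.
    log("second cache", "key response -> first cache");
    key_response_received = true;

    // 4. Receipt of the key response triggers the ready-to-continue signal,
    //    so the second store never overtakes the ordering point of the first.
    if (key_response_received)
        log("memory node", "ready-to-continue -> I/O controller");

    return 0;
}
```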
-
4.
Publication No.: US20180374522A1
Publication Date: 2018-12-27
Application No.: US15629923
Filing Date: 2017-06-22
Inventors: Ekaterina M. Ambroladze , Sascha Junghans , Matthias Klein , Pak-Kin Mak , Robert J. Sonnelitter, III , Chad G. Wilson
Abstract: A system and method to transfer an ordered partial store of data from a controller to a memory subsystem receives the ordered partial store of data into a buffer of the controller. The method also includes issuing a preinstall command to the memory subsystem, wherein the preinstall command indicates that data from a number of addresses of memory corresponding with a target memory location be obtained in local memory of the memory subsystem, along with ownership of the data, for subsequent use. A query command is issued to the memory subsystem. The query command requests an indication from the memory subsystem that the memory subsystem is ready to receive and correctly serialize the ordered partial store of data. The ordered partial store of data is transferred from the controller to the memory subsystem.
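A simplified way to picture the flow is: preinstall (fetch and take ownership of every target line), query (check readiness), then transfer the buffered partial store. The sketch below models that with a toy MemorySubsystem class; the addresses, method names, and data structures are hypothetical.

```cpp
#include <cstdint>
#include <iostream>
#include <set>
#include <vector>

struct MemorySubsystem {
    std::set<uint64_t> owned_lines;  // lines fetched into local memory with ownership

    // Preinstall: obtain the data (and ownership) for every line that the
    // ordered partial store will touch.
    void preinstall(const std::vector<uint64_t>& addrs) {
        for (uint64_t a : addrs) owned_lines.insert(a);
    }

    // Query: ready to receive and correctly serialize the ordered partial
    // store once every target line is locally owned.
    bool ready(const std::vector<uint64_t>& addrs) const {
        for (uint64_t a : addrs)
            if (!owned_lines.count(a)) return false;
        return true;
    }

    void receive_store(const std::vector<uint64_t>& addrs) {
        for (uint64_t a : addrs)
            std::cout << "serialized store to 0x" << std::hex << a << std::dec << "\n";
    }
};

int main() {
    MemorySubsystem mem;
    std::vector<uint64_t> target = {0x1000, 0x1040, 0x1080};  // buffered partial store

    mem.preinstall(target);           // 1. preinstall command
    if (mem.ready(target))            // 2. query command
        mem.receive_store(target);    // 3. transfer the ordered partial store
    return 0;
}
```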
-
5.
Publication No.: US20180341422A1
Publication Date: 2018-11-29
Application No.: US15603728
Filing Date: 2017-05-24
Inventors: Deanna P. Berger , Michael A. Blake , Ashraf Elsharif , Kenneth D. Klapproth , Pak-kin Mak , Robert J. Sonnelitter, III , Guy G. Tracy
IPC Classes: G06F3/06 , G06F12/0893 , G06F12/0842
CPC Classes: G06F12/0842 , G06F12/0893 , G06F2212/62
Abstract: An aspect includes interlocking operations in an address-sliced cache system. A computer-implemented method includes determining whether a dynamic memory relocation operation is in process in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is in process, a key operation is serialized to maintain a sequenced order of completion of the key operation across a plurality of slices and pipes in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is not in process, a plurality of key operation requests is allowed to launch across two or more of the slices and pipes in parallel in the address-sliced cache system, while ensuring that only one instance of the key operations is in process across all of the slices and pipes at the same time.
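One way to think about the interlock is a shared "one key operation in flight" lock plus a relocation flag that decides whether requests are issued strictly in order or launched across slices in parallel. The C++17 sketch below implements that reading; the flag, the lock, and the slice numbering are assumptions for illustration, not the patented hardware mechanism.

```cpp
#include <atomic>
#include <iostream>
#include <mutex>
#include <thread>
#include <utility>
#include <vector>

std::atomic<bool> relocation_in_progress{false};
std::mutex key_op_in_flight;  // at most one key operation across all slices and pipes

void key_operation(int id, int slice) {
    std::lock_guard<std::mutex> g(key_op_in_flight);
    std::cout << "key op " << id << " completed on slice " << slice << "\n";
}

void issue_key_operations(const std::vector<std::pair<int, int>>& ops) {
    if (relocation_in_progress) {
        // Dynamic memory relocation is in process: serialize, preserving the
        // sequenced order of completion across slices and pipes.
        for (const auto& [id, slice] : ops) key_operation(id, slice);
    } else {
        // No relocation: launch across slices/pipes in parallel, while the
        // shared lock still ensures only one key op is in process at a time.
        std::vector<std::thread> launched;
        for (const auto& [id, slice] : ops) launched.emplace_back(key_operation, id, slice);
        for (auto& t : launched) t.join();
    }
}

int main() {
    relocation_in_progress = true;
    issue_key_operations({{0, 0}, {1, 1}, {2, 0}});  // serialized path

    relocation_in_progress = false;
    issue_key_operations({{3, 0}, {4, 1}, {5, 2}});  // parallel-launch path
    return 0;
}
```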
-
6.
Publication No.: US20180307628A1
Publication Date: 2018-10-25
Application No.: US15496525
Filing Date: 2017-04-25
Inventors: Michael A. Blake , Pak-kin Mak , Robert J. Sonnelitter, III , Timothy W. Steele , Gary E. Strait , Poornima P. Sulibele , Guy G. Tracy
IPC Classes: G06F12/14 , G06F12/0891 , G06F13/40
CPC Classes: G06F12/1466 , G06F12/0891 , G06F13/4036 , G06F2212/1052
Abstract: A computer-implemented method for avoiding false activation of hang avoidance mechanisms of a system is provided. The computer-implemented method includes receiving, by a nest of the system, rejects from a processor core of the system. The rejects are issued based on a cache line being locked by the processor core. The computer-implemented method includes accumulating the rejects by the nest. The computer-implemented method includes determining, by the nest, when an amount of the rejects accumulated by the nest has met or exceeded a programmable threshold. The computer-implemented method also includes triggering, by the nest, a global reset of counters of the hang avoidance mechanisms of the system in response to the amount meeting or exceeding the programmable threshold.
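The accumulate-and-reset behavior can be sketched as a counter in the nest that, on reaching a programmable threshold of lock-related rejects, clears the hang-avoidance counters rather than letting them fire. The example below is a minimal model with made-up threshold and counter values.

```cpp
#include <array>
#include <iostream>

struct Nest {
    unsigned rejects_accumulated = 0;
    unsigned programmable_threshold = 4;
    std::array<unsigned, 3> hang_counters{};  // per-mechanism hang-avoidance counters

    // Called whenever the processor core rejects a request because it holds
    // a lock on the targeted cache line.
    void on_reject() {
        if (++rejects_accumulated >= programmable_threshold) {
            // The rejects are expected (the line is legitimately locked), so
            // reset the hang-avoidance counters instead of letting them fire.
            hang_counters.fill(0);
            rejects_accumulated = 0;
            std::cout << "global reset of hang-avoidance counters\n";
        }
    }
};

int main() {
    Nest nest;
    nest.hang_counters = {10, 7, 3};  // counters already ticking toward a hang action
    for (int i = 0; i < 5; ++i) nest.on_reject();
    std::cout << "counter[0] after reset: " << nest.hang_counters[0] << "\n";
    return 0;
}
```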
-
7.
Publication No.: US09734110B2
Publication Date: 2017-08-15
Application No.: US14621467
Filing Date: 2015-02-13
CPC Classes: G06F13/4068 , G06F13/4221
Abstract: In one embodiment, a computer-implemented method includes instructing two or more processors that are operating in a normal state of a symmetric multiprocessing (SMP) network to transition from the normal state to a slow state. The two or more processors reduce their frequencies to respective target frequencies in a transitional state when transitioning from the normal state to the slow state. It is determined that the two or more processors have achieved their respective target frequencies for the slow state. The slow state is entered responsive to this determination. Responsive to entering the slow state, a first processor of the two or more processors is instructed to send empty packets across an interconnect to compensate for a first greatest potential rate differential between the first processor and a remainder of the two or more processors during the slow state.
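A toy model of the transition: drop each processor to its target frequency, enter the slow state only once all have arrived, then pad the interconnect with empty packets sized to the largest remaining rate gap. The sketch below uses invented frequencies and an arbitrary packets-per-MHz scaling purely to show the shape of the logic.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

struct Proc {
    int freq_mhz;
    int target_mhz;
    bool at_target() const { return freq_mhz == target_mhz; }
};

int main() {
    std::vector<Proc> procs = {{5000, 3900}, {5200, 4000}};  // normal-state frequencies

    // Transitional state: each processor steps its frequency down to its target.
    for (auto& p : procs) p.freq_mhz = p.target_mhz;

    // Enter the slow state only once every processor reports its target frequency.
    bool slow_state = std::all_of(procs.begin(), procs.end(),
                                  [](const Proc& p) { return p.at_target(); });

    if (slow_state) {
        // Compensate for the greatest potential rate differential between the
        // first processor and the rest by padding the link with empty packets.
        int fastest_other = 0;
        for (std::size_t i = 1; i < procs.size(); ++i)
            fastest_other = std::max(fastest_other, procs[i].freq_mhz);
        int differential = std::max(0, fastest_other - procs[0].freq_mhz);
        int empty_packets = differential / 100;  // arbitrary scaling for the sketch
        std::cout << "slow state entered; sending " << empty_packets
                  << " empty packets per interval\n";
    }
    return 0;
}
```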
-
8.
Publication No.: US09189415B2
Publication Date: 2015-11-17
Application No.: US13657100
Filing Date: 2012-10-22
IPC Classes: G06F13/14 , G06F12/02 , G06F12/08 , G06F13/16 , G11C11/406
CPC Classes: G06F12/0893 , G06F13/14 , G06F13/16 , G06F13/1605 , G06F13/1636 , G11C11/406 , G11C11/40603 , G11C11/40618 , G11C2207/104
Abstract: A method for implementing embedded dynamic random access memory (eDRAM) refreshing in a high-performance cache architecture. The method includes receiving, via a cache controller, a memory access request from a memory refresh requestor, the memory access request being for a memory address range in a cache memory. The method also includes detecting that the cache memory located at the memory address range is available to receive the memory access request and sending the memory access request to a memory request interpreter. The method further includes receiving the memory access request from the cache controller, determining that the memory access request is a request to refresh contents of the memory address range in the cache memory, and refreshing data in the memory address range.
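Treating a refresh as just another memory access lets it ride the normal request path: the cache controller forwards it when the addressed range is free, and the memory request interpreter recognizes it as a refresh. The sketch below assumes hypothetical CacheController and MemoryRequestInterpreter classes and a made-up request format.

```cpp
#include <cstdint>
#include <iostream>

enum class RequestType { Read, Write, Refresh };

struct MemoryRequest {
    RequestType type;
    uint64_t start_addr;
    uint64_t end_addr;
};

struct MemoryRequestInterpreter {
    void handle(const MemoryRequest& req) {
        if (req.type == RequestType::Refresh) {
            // Determined to be a refresh request: refresh the eDRAM contents
            // of the whole address range.
            std::cout << "refreshing 0x" << std::hex << req.start_addr
                      << "-0x" << req.end_addr << std::dec << "\n";
        }
    }
};

struct CacheController {
    MemoryRequestInterpreter& interp;

    bool range_available(const MemoryRequest&) const { return true; }  // no conflicting access

    void receive(const MemoryRequest& req) {
        // Forward the request only when the addressed part of the cache is free
        // to accept it, so refresh shares the normal request path.
        if (range_available(req)) interp.handle(req);
    }
};

int main() {
    MemoryRequestInterpreter interp;
    CacheController controller{interp};
    controller.receive({RequestType::Refresh, 0x4000, 0x4FFF});  // from the refresh requestor
    return 0;
}
```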
-
9.
Publication No.: US20130046926A1
Publication Date: 2013-02-21
Application No.: US13657100
Filing Date: 2012-10-22
CPC Classes: G06F12/0893 , G06F13/14 , G06F13/16 , G06F13/1605 , G06F13/1636 , G11C11/406 , G11C11/40603 , G11C11/40618 , G11C2207/104
Abstract: A method for implementing embedded dynamic random access memory (eDRAM) refreshing in a high-performance cache architecture. The method includes receiving, via a cache controller, a memory access request from a memory refresh requestor, the memory access request being for a memory address range in a cache memory. The method also includes detecting that the cache memory located at the memory address range is available to receive the memory access request and sending the memory access request to a memory request interpreter. The method further includes receiving the memory access request from the cache controller, determining that the memory access request is a request to refresh contents of the memory address range in the cache memory, and refreshing data in the memory address range.
-
10.
Publication No.: US10795824B2
Publication Date: 2020-10-06
Application No.: US16197669
Filing Date: 2018-11-21
IPC Classes: G06F12/0891 , G06F12/084
Abstract: Speculative data return in parallel with an exclusive invalidate request. A requesting processor requests data from a shared cache. The data is owned by another processor. Based on the request, an invalidate request is sent to the other processor requesting the other processor to release ownership of the data. Concurrently with the invalidate request being sent to the other processor, the data is speculatively provided to the requesting processor.
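The key point is that the data return and the exclusive invalidate travel in parallel, with the returned value treated as speculative until ownership is released. The sketch below imitates that with std::async standing in for the coherence fabric; the latency and structure are invented for the example.

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

struct Owner {
    bool release_ownership() {
        std::this_thread::sleep_for(std::chrono::milliseconds(5));  // coherence latency
        std::cout << "owner: ownership released\n";
        return true;
    }
};

int main() {
    Owner owner;
    int cached_value = 42;  // copy of the line held in the shared cache

    // Send the exclusive invalidate to the owning processor asynchronously...
    std::future<bool> invalidate = std::async(std::launch::async,
                                              [&owner] { return owner.release_ownership(); });

    // ...and, concurrently, hand the data to the requesting processor. Its use
    // is speculative until the invalidate is confirmed.
    std::cout << "requestor: speculative data = " << cached_value << "\n";

    if (invalidate.get())
        std::cout << "requestor: invalidate acknowledged, result committed\n";
    return 0;
}
```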