-
Publication No.: US20190179757A1
Publication Date: 2019-06-13
Application No.: US15838809
Filing Date: 2017-12-12
Applicant: ADVANCED MICRO DEVICES, INC.
Inventor: William L. WALKER , William E. JONES
IPC: G06F12/0862 , G06F12/0811 , G06F13/16 , G06F11/30
Abstract: A processing system includes an interconnect fabric coupleable to a local memory and at least one compute cluster coupled to the interconnect fabric. The compute cluster includes a processor core and a cache hierarchy. The cache hierarchy has a plurality of caches and a throttle controller configured to throttle a rate of memory requests issuable by the processor core based on at least one of an access latency metric and a prefetch accuracy metric. The access latency metric represents an average access latency for memory requests for the processor core and the prefetch accuracy metric represents an accuracy of a prefetcher of a cache of the cache hierarchy.
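The throttling decision described above can be sketched in software. This is a minimal illustrative model, not the patent's implementation: the class name, the latency and accuracy thresholds, and the specific back-off divisors are all assumptions chosen to show how the two metrics could jointly gate the request rate.

```python
class ThrottleController:
    """Hypothetical sketch of a throttle controller that limits memory
    requests based on an average-access-latency metric and a
    prefetch-accuracy metric. Thresholds are illustrative assumptions."""

    LATENCY_THRESHOLD_CYCLES = 200  # assumed congestion threshold
    ACCURACY_THRESHOLD = 0.5        # assumed minimum useful prefetch accuracy

    def __init__(self):
        self.latencies = []
        self.prefetches_issued = 0
        self.prefetches_used = 0

    def record_access(self, latency_cycles):
        self.latencies.append(latency_cycles)

    def record_prefetch(self, was_used):
        self.prefetches_issued += 1
        if was_used:
            self.prefetches_used += 1

    def avg_latency(self):
        if not self.latencies:
            return 0.0
        return sum(self.latencies) / len(self.latencies)

    def prefetch_accuracy(self):
        if self.prefetches_issued == 0:
            return 1.0  # no prefetches yet: assume accurate
        return self.prefetches_used / self.prefetches_issued

    def request_rate_limit(self, max_outstanding):
        # High average latency suggests fabric congestion; throttle harder
        # when the prefetcher is also inaccurate, since its requests add
        # traffic without benefit.
        if self.avg_latency() > self.LATENCY_THRESHOLD_CYCLES:
            if self.prefetch_accuracy() < self.ACCURACY_THRESHOLD:
                return max_outstanding // 4
            return max_outstanding // 2
        return max_outstanding
```

The key design point the abstract implies is that the two metrics interact: latency alone triggers throttling, while poor prefetch accuracy deepens it.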
-
Publication No.: US20190188055A1
Publication Date: 2019-06-20
Application No.: US15847006
Filing Date: 2017-12-19
Applicant: ADVANCED MICRO DEVICES, INC.
Inventor: Douglas Benson HUNT , William E. JONES
CPC classification number: G06F9/528 , G06F12/1425 , G06F13/1663
Abstract: A method in which one or more cores of a multi-core processor monitor speculative instructions that store data to a shared memory location, where a semaphore associated with the memory location specifies whether the location is available to store data. A speculative instruction is flushed when the semaphore specifies that the memory location is unavailable. When the count of flushed speculative instructions exceeds a specified threshold, issuance of further speculative instructions is suppressed. When the semaphore specifies that the memory location is available, the speculative instructions are executed and the data is stored to the memory location.
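The flush-and-suppress policy above can be modeled as a small state machine. This is a hypothetical software sketch of a hardware mechanism: the class name, the return values, and the threshold value are assumptions introduced for illustration.

```python
class SpeculativeStoreMonitor:
    """Sketch of the policy in the abstract: flush speculative stores while
    the semaphore marks the location unavailable, and suppress further
    speculation once the flush count exceeds a threshold (value assumed)."""

    def __init__(self, flush_threshold=3):
        self.flush_threshold = flush_threshold
        self.flush_count = 0
        self.suppressed = False

    def try_speculative_store(self, semaphore_available, memory, addr, value):
        if self.suppressed:
            # Past the threshold: no further speculative issue is allowed.
            return "suppressed"
        if not semaphore_available:
            # Location unavailable: flush this speculative store.
            self.flush_count += 1
            if self.flush_count > self.flush_threshold:
                self.suppressed = True
            return "flushed"
        # Semaphore free: execute the store and commit the data.
        memory[addr] = value
        return "stored"
```

The suppression step is the interesting part: repeated flushes indicate contention on the location, so the core stops paying the cost of speculation entirely rather than flushing again and again.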
-
Publication No.: US20190179770A1
Publication Date: 2019-06-13
Application No.: US15839089
Filing Date: 2017-12-12
Applicant: ADVANCED MICRO DEVICES, INC.
Inventor: William L. WALKER , William E. JONES
IPC: G06F12/12 , G06F12/0882
Abstract: A processing system rinses, from a cache, those cache lines that share the same memory page as a cache line identified for eviction. A cache controller of the processing system identifies a cache line as scheduled for eviction. In response, the cache controller identifies additional "dirty victim" cache lines (cache lines that have been modified at the cache and not yet written back to memory) that are associated with the same memory page, and writes each of the identified cache lines to the same memory page. By writing each of the dirty victim cache lines associated with the memory page to memory, the processing system reduces memory overhead and improves processing efficiency.
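The page-grouped rinse can be sketched as follows. This is an illustrative model only: the flat dict cache, the 4 KiB page size, and the function names are assumptions, and a real controller would operate on tag/state arrays rather than address maps.

```python
PAGE_SIZE = 4096  # assumed page size for illustration

def page_of(addr):
    """Memory page containing the given byte address."""
    return addr // PAGE_SIZE

def rinse_on_eviction(cache, victim_addr, memory):
    """On evicting the line at `victim_addr`, also write back every dirty
    line in `cache` that maps to the same memory page, so all writes to
    that page are batched together.

    `cache` maps line_address -> (data, dirty_flag); `memory` maps
    line_address -> data. Returns the addresses written back."""
    target_page = page_of(victim_addr)
    written = []
    for line_addr, (data, dirty) in list(cache.items()):
        if dirty and page_of(line_addr) == target_page:
            memory[line_addr] = data
            cache[line_addr] = (data, False)  # line is now clean
            written.append(line_addr)
    return written
```

Batching the write-backs is where the stated efficiency gain comes from: the memory controller services several writes against one already-open page instead of reopening that page each time another dirty victim of it is evicted later.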
-