-
Publication No.: US20220374367A1
Publication Date: 2022-11-24
Application No.: US17719304
Filing Date: 2022-04-12
Inventors: Juan Fang, Huijing Yang, Ziyi Teng, Min Cai, Xuan Wang
Abstract: The invention discloses a prefetch-adaptive intelligent cache replacement policy for high performance. In the presence of hardware prefetching, prefetch requests and demand requests are distinguished: a prefetch predictor based on an ISVM (Integer Support Vector Machine) performs re-reference interval prediction on cache lines loaded by prefetch accesses, and an ISVM-based demand predictor performs re-reference interval prediction on cache lines loaded by demand accesses. The PC of the current load instruction and the PCs of past load instructions in the access history serve as inputs, and separate ISVM predictors are designed for prefetch and demand requests, so that reuse prediction is performed on loaded cache lines at request-type granularity. This improves the accuracy of cache-line reuse prediction in the presence of prefetching and better combines the performance benefits of hardware prefetching and cache replacement.
-
Publication No.: US12093188B2
Publication Date: 2024-09-17
Application No.: US17719304
Filing Date: 2022-04-12
Inventors: Juan Fang, Huijing Yang, Ziyi Teng, Min Cai, Xuan Wang
CPC Classification: G06F12/12, G06F9/30043, G06F9/3816, G06F9/5044
Abstract: The invention discloses a prefetch-adaptive intelligent cache replacement policy for high performance. In the presence of hardware prefetching, prefetch requests and demand requests are distinguished: a prefetch predictor based on an ISVM (Integer Support Vector Machine) performs re-reference interval prediction on cache lines loaded by prefetch accesses, and an ISVM-based demand predictor performs re-reference interval prediction on cache lines loaded by demand accesses. The PC of the current load instruction and the PCs of past load instructions in the access history serve as inputs, and separate ISVM predictors are designed for prefetch and demand requests, so that reuse prediction is performed on loaded cache lines at request-type granularity. This improves the accuracy of cache-line reuse prediction in the presence of prefetching and better combines the performance benefits of hardware prefetching and cache replacement.
-
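The following is a minimal, hypothetical C++ sketch of the kind of mechanism the abstracts above describe: one integer-weight SVM predictor per request type (demand vs. prefetch) that takes the hashed PC of the current load plus the PCs of recent loads as features and predicts whether a filled cache line will be re-referenced soon. All names, table sizes, hash functions, and training thresholds here are illustrative assumptions, not details taken from the patent.

// Illustrative sketch only; not the patented implementation.
#include <array>
#include <cstdint>
#include <cstdlib>
#include <deque>
#include <unordered_map>

constexpr int HISTORY_LEN  = 5;   // number of past load PCs used as features (assumed)
constexpr int WEIGHT_MAX   = 31;  // saturating integer weight range (assumed)
constexpr int WEIGHT_MIN   = -32;
constexpr int TRAIN_MARGIN = 30;  // skip training once |sum| exceeds this margin (assumed)

// One integer SVM: a table of integer weights, one per hashed feature value,
// trained with perceptron-style saturating updates.
struct ISVM {
    std::unordered_map<uint32_t, int> weights;

    int dot(const std::deque<uint32_t>& features) const {
        int sum = 0;
        for (uint32_t f : features) {
            auto it = weights.find(f);
            if (it != weights.end()) sum += it->second;
        }
        return sum;
    }

    void train(const std::deque<uint32_t>& features, bool reused) {
        if (std::abs(dot(features)) > TRAIN_MARGIN) return;  // already confident
        for (uint32_t f : features) {
            int& w = weights[f];
            w += reused ? 1 : -1;
            if (w > WEIGHT_MAX) w = WEIGHT_MAX;
            if (w < WEIGHT_MIN) w = WEIGHT_MIN;
        }
    }
};

// Prefetch-adaptive predictor: separate ISVMs for demand fills and prefetch fills.
class ReRefPredictor {
    ISVM demand_isvm;
    ISVM prefetch_isvm;
    std::deque<uint32_t> pc_history;  // hashed PCs of recent load instructions

    static uint32_t hash_pc(uint64_t pc) { return static_cast<uint32_t>(pc ^ (pc >> 13)); }

public:
    // Record the PC of each load instruction as it is observed.
    void record_load(uint64_t pc) {
        pc_history.push_back(hash_pc(pc));
        if (pc_history.size() > HISTORY_LEN) pc_history.pop_front();
    }

    // true -> predict the line will be re-referenced soon (insert with high retention priority).
    bool predict(bool is_prefetch) const {
        const ISVM& m = is_prefetch ? prefetch_isvm : demand_isvm;
        return m.dot(pc_history) > 0;
    }

    // Called once the outcome is known (line was reused before eviction, or not).
    // A fuller implementation would snapshot the feature vector at fill time
    // alongside the cache line instead of reusing the current history.
    void update(bool is_prefetch, bool reused) {
        ISVM& m = is_prefetch ? prefetch_isvm : demand_isvm;
        m.train(pc_history, reused);
    }
};

Keeping two weight tables lets prefetch-filled lines be judged against their own reuse behavior, which is the request-type granularity the abstract refers to; a real replacement policy would presumably map the binary prediction onto RRIP-style insertion and eviction priorities, a detail this sketch omits.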