Prefetch-adaptive intelligent cache replacement policy for high performance

    Publication No.: US12093188B2

    Publication Date: 2024-09-17

    Application No.: US17719304

    Application Date: 2022-04-12

    CPC classification number: G06F12/12 G06F9/30043 G06F9/3816 G06F9/5044

    Abstract: The invention discloses a prefetch-adaptive intelligent cache replacement policy for high performance. In the presence of hardware prefetching, prefetch requests and demand requests are distinguished: a prefetch predictor based on an ISVM (Integer Support Vector Machine) performs re-reference interval prediction on cache lines loaded by prefetch accesses, while an ISVM-based demand predictor performs re-reference interval prediction on cache lines loaded by demand accesses. The PC of the current load instruction and the PCs of past load instructions in the access history are used as input. By designing separate ISVM predictors for prefetch and demand requests and performing reuse prediction on loaded cache lines at request-type granularity, the policy improves the accuracy of cache-line reuse prediction in the presence of prefetching and better combines the performance benefits of hardware prefetching and cache replacement.
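The request-type-aware prediction described above can be sketched as two integer-weight predictors indexed by hashes of the current and past load PCs, one serving prefetch fills and one serving demand fills. This is a minimal illustrative sketch, not the patented implementation: the history length, table size, threshold, and weight range below are assumed parameters, and the PC-hashing scheme is a placeholder.

```python
# Minimal sketch of request-type-granularity reuse prediction with
# integer-weight (ISVM-style) predictors. All parameters are assumptions
# for illustration, not values from the patent.

HISTORY_LEN = 5          # past load PCs kept in the access history (assumed)
TABLE_SIZE = 256         # weight-table entries per predictor (assumed)
TAU = 0                  # decision threshold (assumed)
W_MAX, W_MIN = 15, -16   # saturating integer weight range (assumed)


class ISVMPredictor:
    """One integer-weight predictor. Separate instances serve prefetch
    and demand requests, giving request-type granularity."""

    def __init__(self):
        self.weights = [0] * TABLE_SIZE

    def _features(self, pc, history):
        # Hash the current load PC with each past PC into table indices
        # (illustrative hash; real designs pick indices more carefully).
        return [(pc ^ h) % TABLE_SIZE for h in history]

    def predict(self, pc, history):
        # A positive weight sum predicts a near re-reference interval,
        # i.e. the filled cache line is worth keeping.
        s = sum(self.weights[i] for i in self._features(pc, history))
        return s > TAU

    def train(self, pc, history, reused):
        # Saturating integer update toward the observed reuse outcome.
        delta = 1 if reused else -1
        for i in self._features(pc, history):
            self.weights[i] = max(W_MIN, min(W_MAX, self.weights[i] + delta))


# One predictor per request type, as the abstract describes.
demand_predictor = ISVMPredictor()
prefetch_predictor = ISVMPredictor()


def on_cache_fill(pc, history, is_prefetch):
    """Route the fill to the matching predictor by request type."""
    pred = prefetch_predictor if is_prefetch else demand_predictor
    return pred.predict(pc, history)
```

Keeping separate weight tables per request type is the key point: a PC whose demand loads are reused but whose prefetches are not can receive opposite predictions instead of one blended, less accurate one.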

    METHOD FOR FEDERATED-LEARNING-BASED MOBILE EDGE CACHE OPTIMIZATION

    Publication No.: US20240427700A1

    Publication Date: 2024-12-26

    Application No.: US18702683

    Application Date: 2022-05-13

    Abstract: The invention relates to an optimization method for mobile edge caching based on federated learning (FL), and belongs to the field of the Internet of Things and artificial intelligence. The method accounts for continuously changing user mobility and content popularity within the range of a single base station, and increases the cache hit rate by predicting content popularity and placing the requested content in an edge cache in advance. Specifically, the method comprises: obtaining a user movement trajectory table that simulates users' moving paths with an RWP (Random Waypoint) model; selecting the users who participate in FL local training through a combination of clustering and thresholding, taking local training cost into account; performing global model aggregation with an attention mechanism that controls each model's weight; and performing global prediction with the resulting global model. The predicted request content is cached at the server in advance to improve the cache hit rate. By using federated learning and optimizing user selection and weight aggregation, the method reduces local training cost and increases the cache hit rate.
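The attention-controlled aggregation step mentioned above can be illustrated as a server-side routine that replaces plain FedAvg's uniform weights with softmax attention scores over the uploaded client models. This is a hedged sketch under stated assumptions: the function name, the use of negative L2 distance to the previous global model as the attention score, and the plain-list model representation are all illustrative choices, not details from the patent.

```python
# Sketch of attention-weighted global aggregation in federated learning.
# Assumption: each client uploads its local model as a flat list of
# floats, and attention scores are derived from (negative) L2 distance
# to the previous global model. Both choices are illustrative.
import math


def attention_aggregate(global_model, client_models):
    """Aggregate client models into a new global model, weighting each
    client by softmax attention instead of a uniform average."""

    def l2_dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Clients closer to the current global model score higher (assumed
    # similarity measure); softmax turns scores into attention weights.
    scores = [-l2_dist(global_model, w) for w in client_models]
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    attn = [e / total for e in exps]

    # Attention-weighted average of the client models.
    return [
        sum(a * w[j] for a, w in zip(attn, client_models))
        for j in range(len(global_model))
    ]
```

With uniform clients this reduces to ordinary averaging; with one outlying client (for example, one trained on atypical local request data), its attention weight shrinks, which is the intended effect of letting the attention mechanism control per-model weight during aggregation.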

    Prefetch-Adaptive Intelligent Cache Replacement Policy for High Performance

    Publication No.: US20220374367A1

    Publication Date: 2022-11-24

    Application No.: US17719304

    Application Date: 2022-04-12

    Abstract: The invention discloses a prefetch-adaptive intelligent cache replacement policy for high performance. In the presence of hardware prefetching, prefetch requests and demand requests are distinguished: a prefetch predictor based on an ISVM (Integer Support Vector Machine) performs re-reference interval prediction on cache lines loaded by prefetch accesses, while an ISVM-based demand predictor performs re-reference interval prediction on cache lines loaded by demand accesses. The PC of the current load instruction and the PCs of past load instructions in the access history are used as input. By designing separate ISVM predictors for prefetch and demand requests and performing reuse prediction on loaded cache lines at request-type granularity, the policy improves the accuracy of cache-line reuse prediction in the presence of prefetching and better combines the performance benefits of hardware prefetching and cache replacement.
