MAINTENANCE METHOD FOR TRANSLATION LOOKASIDE BUFFER, AND RELATED DEVICE

    Publication No.: EP4428696A1

    Publication Date: 2024-09-11

    Application No.: EP22897463.0

    Filing Date: 2022-10-18

    CPC classification number: Y02D10/00 G06F12/1009 G06F12/0888 G06F12/1027

    Abstract: Embodiments of this application disclose a translation lookaside buffer (TLB) maintenance method and a related device. The method is applied to an electronic device that includes a plurality of physical central processing units (CPUs). A first process runs on the electronic device and currently includes M first threads, which are currently being run on M physical CPUs of the plurality of physical CPUs, where M is an integer greater than or equal to 1. The method includes: determining a physical CPU range S1 currently corresponding to the first process, where the physical CPU range S1 includes the M physical CPUs on which the first threads of the first process are currently being run; and updating, based on page table information maintained by the first process, the TLB information maintained by all physical CPUs in the physical CPU range S1. According to embodiments of this application, the TLB maintenance delay can be reduced.
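    The range-scoped maintenance described in the abstract can be sketched in a few lines: track which physical CPUs currently run the process's threads, and send TLB updates only to that set instead of broadcasting to every CPU. This is an illustrative software model only; the `Process`, `cpu_range`, and `update_tlb` names and the callback interface are assumptions, not the patented implementation.

```python
class Process:
    """Illustrative process model: maps each running thread to a physical CPU."""
    def __init__(self, pid):
        self.pid = pid
        self.thread_to_cpu = {}   # thread id -> physical CPU id

    def cpu_range(self):
        """Physical CPU range S1: CPUs currently running this process's threads."""
        return set(self.thread_to_cpu.values())

def update_tlb(process, invalidate_on_cpu):
    """Send a TLB maintenance request only to the CPUs in range S1,
    instead of broadcasting to every physical CPU in the system."""
    targets = process.cpu_range()
    for cpu in sorted(targets):
        invalidate_on_cpu(cpu, process.pid)
    return targets
```

    Restricting the update to S1 avoids interrupting CPUs that hold no translations for the process, which is where the claimed delay reduction comes from.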

    AGENTLESS REMOTE IO CACHING PREDICTION AND RECOMMENDATION

    Publication No.: EP3323049A1

    Publication Date: 2018-05-23

    Application No.: EP16825102.3

    Filing Date: 2016-07-13

    Abstract: A host device includes a controller configured to receive input/output (IO) access information associated with an IO workload, the IO access information identifying at least one of a read action and a write action associated with the IO workload over a period of time. Based upon the received IO access information associated with the storage element, the controller is configured to derive a predicted cache access ratio associated with the IO workload, relating a predicted number of cache accesses associated with the IO workload to at least one of a total number of read actions and a total number of write actions associated with the IO workload. When the predicted cache access ratio reaches a threshold cache access ratio value, the controller is configured to identify the IO workload as a candidate for caching by the host device.
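    The candidate test described in the abstract reduces to a simple ratio comparison. A minimal sketch, assuming the predicted cache accesses are compared against the combined read and write counts; the function names and the 0.5 threshold are illustrative, not values from the patent:

```python
def predicted_cache_access_ratio(predicted_cache_accesses, reads, writes):
    """Relate predicted cache accesses to the workload's total IO actions."""
    total_actions = reads + writes
    if total_actions == 0:
        return 0.0
    return predicted_cache_accesses / total_actions

def is_caching_candidate(ratio, threshold=0.5):
    """Flag the IO workload as a caching candidate once the ratio
    reaches the threshold cache access ratio value."""
    return ratio >= threshold
```

    A workload with 80 predicted cache accesses over 100 IO actions yields a ratio of 0.8 and would be flagged under this assumed threshold.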

    Cache memory apparatus, cache control method, and microprocessor system
    8.
    Invention Grant
    Cache memory apparatus, cache control method, and microprocessor system (In Force)

    Publication No.: EP2590082B1

    Publication Date: 2018-01-17

    Application No.: EP12190816.4

    Filing Date: 2012-10-31

    CPC classification number: G06F12/0888 G06F9/3804 G06F12/0875

    Abstract: A cache memory apparatus according to the present invention includes a cache memory that caches instruction codes corresponding to fetch addresses and a cache control circuit that controls which instruction codes are cached in the cache memory. The cache control circuit caches the instruction codes of a subroutine when the fetch address indicates a branch into the subroutine, and disables further caching when the number of cached instruction codes exceeds a previously set maximum number.
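    The control policy in the abstract can be modelled as a small state machine: caching turns on at a branch into a subroutine and turns off once the configured maximum is exceeded. A sketch under those assumptions (the class, method names, and fetch interface are illustrative, not the patented circuit):

```python
class SubroutineCache:
    """Software model of the cache control circuit's policy."""
    def __init__(self, max_codes):
        self.max_codes = max_codes        # previously set maximum number
        self.cache = {}                   # fetch address -> instruction code
        self.caching_enabled = False
        self.cached_count = 0

    def on_fetch(self, addr, code, is_subroutine_branch):
        if is_subroutine_branch:
            # a branch into a subroutine starts caching its codes
            self.caching_enabled = True
            self.cached_count = 0
        if self.caching_enabled:
            if self.cached_count < self.max_codes:
                self.cache[addr] = code
                self.cached_count += 1
            else:
                # limit exceeded: disable caching of further codes
                self.caching_enabled = False
```

    Capping the number of cached codes keeps a long subroutine from evicting the rest of the cache's contents.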

    PROCESSOR AND METHOD USING CREDIT-BASED FLOW CONTROL
    9.
    Invention Publication
    PROCESSOR AND METHOD USING CREDIT-BASED FLOW CONTROL (Pending, Published)

    Publication No.: EP3182293A1

    Publication Date: 2017-06-21

    Application No.: EP16198478.6

    Filing Date: 2016-11-11

    Inventor: WOONG, Seo

    CPC classification number: G06F12/0895 G06F12/0888 G06F12/0891 G06F2212/604

    Abstract: Provided is a processor including a plurality of devices. The processor includes a source processing device configured to identify data to request from another device, and a destination processing device configured to, in response to a request for the identified data from the source processing device using credit-based flow control, transmit the identified data to the source processing device using the credit-based flow control. The source processing device includes a credit buffer used for the credit-based flow control, the credit buffer being allocable to include a cache region configured to cache the transmitted identified data received by the source processing device.
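    The abstract's key idea is that the source device's credit buffer does double duty: part of it is allocated as a cache region, so a repeated request can be served locally without spending a credit or crossing to the destination. A simplified software model (all names, the slot accounting, and the synchronous credit return are assumptions made for illustration):

```python
class CreditBuffer:
    """Source-side buffer: some slots back flow-control credits, and a
    region is allocated to cache data already received from the destination."""
    def __init__(self, total_slots, cache_slots):
        self.credits = total_slots - cache_slots  # slots left for flow control
        self.cache_slots = cache_slots            # slots in the cache region
        self.cache = {}                           # cached transmitted data

def request_data(src, dst_lookup, key):
    """Request identified data; a hit in the cache region costs no credit."""
    if key in src.cache:
        return src.cache[key]                 # served locally, no transfer
    if src.credits == 0:
        raise RuntimeError("no credits available: request must wait")
    src.credits -= 1                          # spend a credit for the request
    data = dst_lookup(key)                    # destination transmits the data
    src.credits += 1                          # credit returned once slot drains
    if len(src.cache) < src.cache_slots:
        src.cache[key] = data                 # keep a copy in the cache region
    return data
```

    The design trade-off is between fewer credits in flight (smaller window) and fewer repeated transfers (hits in the cache region).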


    INTELLIGENT CACHING
    10.
    Invention Publication
    INTELLIGENT CACHING (Pending, Published)

    Publication No.: EP3073386A1

    Publication Date: 2016-09-28

    Application No.: EP15275088.1

    Filing Date: 2015-03-27

    CPC classification number: G06F12/0888 G06F12/122 H04L67/2842

    Abstract: Examples of the invention present an optimised method of managing a cache, where a data item, such as a video clip, is moved up one rank in the cache whenever it is requested, swapping rank with the data item above it. New data items not present in the cache are obtained and stored in the cache at a rank below the lowest-ranked data item. If the cache is full, the lowest-ranked data item is replaced by the newly requested data item. Data items that receive increasing numbers of requests therefore move up the cache and gain more protection from deletion.
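    This rank-swap policy differs from plain LRU: a hit promotes an item by only one position rather than moving it to the front, so an item must be requested repeatedly to climb out of eviction range. A minimal sketch, assuming `fetch` obtains a missing item from the backing store (the class and method names are illustrative):

```python
class RankSwapCache:
    """Cache keys ranked by list position: index 0 is the highest rank,
    the last index is the lowest rank and the eviction point."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def request(self, key, fetch):
        if key in self.items:
            i = self.items.index(key)
            if i > 0:
                # hit: swap rank with the data item directly above
                self.items[i - 1], self.items[i] = self.items[i], self.items[i - 1]
            return
        fetch(key)                        # miss: obtain the new data item
        if len(self.items) >= self.capacity:
            self.items.pop()              # full: replace the lowest-ranked item
        self.items.append(key)            # store at the lowest rank
```

    Because a new item always enters at the bottom, a burst of one-off requests cannot displace items that have earned a high rank through repeated use.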

