CACHE MANAGEMENT METHOD AND APPARATUS
    91.
    Invention Publication
    CACHE MANAGEMENT METHOD AND APPARATUS (Pending, Published)
    缓存管理方法和设备 (Cache Management Method and Apparatus)

    Publication No.: EP3188028A1

    Publication Date: 2017-07-05

    Application No.: EP16207047.8

    Filing Date: 2016-12-28

    Abstract: This application relates to a cache management method and apparatus intended to improve cache efficiency and reduce waste of cache resources. The method includes: after receiving a to-be-processed command, determining the number of cache units the command needs; if the command needs one cache unit, searching a first state table of cache-unit pairs for a pair in which only one unit is idle, and allocating that idle unit to the command; and if the command needs two cache units, searching a second state table of cache-unit pairs within one clock cycle for a pair in which both units are idle, and allocating that pair to the command.
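    The pairing scheme in the abstract can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the class name, the list-based state tables, and the preference order for single-unit commands are all assumptions; only the two-table lookup follows the abstract.

    ```python
    class CachePairAllocator:
        """Toy model of pair-based cache-unit allocation.

        Cache units are grouped in pairs. The "first state table" lists pairs
        with exactly one idle unit; the "second state table" lists pairs with
        both units idle (table names follow the abstract).
        """

        def __init__(self, num_pairs):
            # idle[p] holds two booleans: True means the unit is idle.
            self.idle = [[True, True] for _ in range(num_pairs)]

        def _first_state_table(self):
            # Pairs in which exactly one unit is idle.
            return [p for p, pair in enumerate(self.idle) if pair.count(True) == 1]

        def _second_state_table(self):
            # Pairs in which both units are idle.
            return [p for p, pair in enumerate(self.idle) if all(pair)]

        def allocate(self, units_needed):
            """Return (pair index, allocated unit indices), or None if nothing fits."""
            if units_needed == 1:
                # Prefer half-used pairs so fully idle pairs stay available for
                # two-unit commands -- the efficiency gain the abstract claims.
                table = self._first_state_table() or self._second_state_table()
                if not table:
                    return None
                p = table[0]
                u = self.idle[p].index(True)
                self.idle[p][u] = False
                return (p, [u])
            if units_needed == 2:
                table = self._second_state_table()
                if not table:
                    return None
                p = table[0]
                self.idle[p] = [False, False]
                return (p, [0, 1])
            raise ValueError("a command needs one or two cache units")
    ```

    With two pairs, single-unit commands fill pair 0 completely before touching pair 1, leaving pair 1 free for a two-unit command.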


    MEMORY NODE WITH CACHE FOR EMULATED SHARED MEMORY COMPUTERS
    92.
    Invention Publication
    MEMORY NODE WITH CACHE FOR EMULATED SHARED MEMORY COMPUTERS (Pending, Published)
    记忆节点与缓存共享内存计算机的缓存 (Memory Node with Cache for Emulated Shared Memory Computers)

    Publication No.: EP3188025A1

    Publication Date: 2017-07-05

    Application No.: EP15202883.3

    Filing Date: 2015-12-29

    Inventor: FORSELL, Martti

    CPC classification number: G06F12/084 G06F12/0842 G06F12/0846 G06F12/0853

    Abstract: A data memory node (400) for ESM (Emulated Shared Memory) architectures (100, 200) comprises: a data memory module (402) containing data memory for storing input data and retrieving stored data in response to predetermined control signals; a multi-port cache (404) for the data memory, provided with at least one read port (404A, 404B) and at least one write port (404C, 404D, 404E) and configured to hold recently and/or frequently used data stored in the data memory (402); and an active memory unit (406) at least functionally connected to a plurality of processors via an interconnection network (108). The active memory unit (406) is configured to operate the cache (404) upon receiving, from a number of processors of said plurality, a multioperation reference (410) incorporating a memory reference to the data memory of the data memory module. Responsive to receipt of the multioperation reference, the active memory unit (406) processes it according to the type of multioperation indicated in the reference, utilizing cached data in accordance with the memory reference and the data provided in the multioperation reference. A method to be performed by the memory node is also presented.
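    The flow of a multioperation through the node can be sketched as below. This is a behavioral toy model under stated assumptions: the operation names ("madd", "mmax"), the dict-backed cache, and the write-through policy are all illustrative, not taken from the patent; only the structure (data memory, cache, active unit dispatching on multioperation type) follows the abstract.

    ```python
    from dataclasses import dataclass

    @dataclass
    class MultioperationRef:
        """Illustrative multioperation reference: type, target address, operand."""
        op: str    # e.g. "madd" (multi-add), "mmax" (multi-max) -- assumed names
        addr: int
        value: int

    class MemoryNode:
        """Toy memory node: data memory module + cache + active memory unit."""

        def __init__(self, size):
            self.memory = [0] * size   # data memory module
            self.cache = {}            # multi-port cache, modelled as a dict

        def _load(self, addr):
            # Serve from the cache when possible; fill it on a miss.
            if addr not in self.cache:
                self.cache[addr] = self.memory[addr]
            return self.cache[addr]

        def _store(self, addr, value):
            self.cache[addr] = value
            self.memory[addr] = value  # write-through, an assumption

        def process(self, ref):
            """Active memory unit: handle ref according to its multioperation type."""
            current = self._load(ref.addr)
            if ref.op == "madd":
                result = current + ref.value
            elif ref.op == "mmax":
                result = max(current, ref.value)
            else:
                raise ValueError(f"unknown multioperation {ref.op!r}")
            self._store(ref.addr, result)
            return result
    ```

    Successive references from different processors to the same address then combine in the cache rather than each round-tripping to the data memory.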


    MULTI-CORE PROCESSOR MANAGEMENT METHOD AND DEVICE
    93.
    Invention Publication
    MULTI-CORE PROCESSOR MANAGEMENT METHOD AND DEVICE (Granted)
    VERFAHREN UND VORRICHTUNG ZUR VERWALTUNG EINES MULTIKERNPROZESSORS (Method and Apparatus for Managing a Multi-Core Processor)

    Publication No.: EP3121684A4

    Publication Date: 2017-06-07

    Application No.: EP14897637

    Filing Date: 2014-07-14

    Abstract: The present invention discloses a method for managing a multi-core processor. If the current working mode of the multi-core processor is the asymmetric multiprocessing (ASMP) mode, the working frequency of at least one processor other than the processor that requests data is less than a first frequency, and the difference between the cache hit ratio of the requesting processor and that of the at least one other processor is greater than or equal to a first threshold, the working mode is switched to the symmetric multiprocessing (SMP) mode. Conversely, if the current working mode is the SMP mode, the cache hit ratio of the requesting processor is greater than or equal to a second threshold, usage rates are unbalanced between processors in the multi-core processor, and the usage rates of N processors are greater than a first usage threshold, the working mode is switched to the ASMP mode, where N is greater than or equal to 1 and less than or equal to the number of processors in the multi-core processor minus one.
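    The two switching conditions can be sketched as a single decision function. This is an illustrative sketch only: the abstract does not define how "unbalanced" usage is measured, so a simple max-min spread check is assumed here, and all parameter names are invented for the example.

    ```python
    def next_mode(mode, freqs, hit_ratios, usage, requester,
                  first_freq, first_thresh, second_thresh, usage_thresh):
        """Decide the working mode per the two conditions in the abstract.

        mode: current mode, "ASMP" or "SMP".
        freqs / hit_ratios / usage: per-processor working frequency, cache hit
        ratio, and usage rate. requester: index of the processor requesting data.
        """
        others = [i for i in range(len(freqs)) if i != requester]
        if mode == "ASMP":
            slow_other = any(freqs[i] < first_freq for i in others)
            hit_gap = any(hit_ratios[requester] - hit_ratios[i] >= first_thresh
                          for i in others)
            if slow_other and hit_gap:
                return "SMP"
        elif mode == "SMP":
            unbalanced = max(usage) - min(usage) > 0.2  # assumed definition
            n_busy = sum(1 for u in usage if u > usage_thresh)
            # N must be >= 1 and <= (number of processors - 1).
            if (hit_ratios[requester] >= second_thresh and unbalanced
                    and 1 <= n_busy <= len(usage) - 1):
                return "ASMP"
        return mode
    ```

    For example, in ASMP mode a slow second core (0.8 GHz against a 1.0 GHz floor) combined with a large hit-ratio gap triggers the switch to SMP; in SMP mode, one busy core with a high hit ratio and unbalanced usage triggers the switch back.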


    CACHE POOLING FOR COMPUTING SYSTEMS
    94.
    Invention Publication
    CACHE POOLING FOR COMPUTING SYSTEMS (Pending, Published)
    CACHE-POOLING FÜR DATENVERARBEITUNGSANLAGEN (Cache Pooling for Data Processing Systems)

    Publication No.: EP3109765A1

    Publication Date: 2016-12-28

    Application No.: EP16178779.1

    Filing Date: 2009-02-03

    Abstract: In a computing system, a method and apparatus for cache pooling are introduced. Threads are assigned priorities based on the criticality of their tasks. The most critical threads are assigned to main memory locations such that they are subject to limited or no cache contention. Less critical threads are assigned to main memory locations such that their cache contention with critical threads is minimized or eliminated. Overall system performance thus improves, as critical threads execute in a substantially predictable manner.
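    One common way to realize this placement idea is page coloring, sketched below under stated assumptions: the abstract does not name a mechanism, so the color-based pools, the one-color-per-critical-thread default, and all names here are illustrative, not the patent's.

    ```python
    def pool_cache(threads, num_colors, reserved_per_critical=1):
        """Illustrative page-coloring sketch of cache pooling.

        threads: list of (name, critical: bool), highest priority first.
        Each cache "color" (the group of cache sets reachable from pages
        sharing the same index bits) acts as a pool. Critical threads get
        private colors, so no other thread can evict their cache lines;
        less-critical threads share whatever colors remain.
        """
        assignment = {}
        next_color = 0
        for name, critical in threads:
            if critical and next_color + reserved_per_critical <= num_colors:
                # Private pool: limited or no cache contention.
                assignment[name] = list(range(next_color,
                                              next_color + reserved_per_critical))
                next_color += reserved_per_critical
            else:
                # Shared pool for less-critical threads.
                assignment[name] = list(range(next_color, num_colors))
        return assignment
    ```

    A critical control thread then keeps its cache sets to itself, while background threads contend only with each other, which is what makes the critical thread's timing substantially predictable.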


    MEMORY LATENCY MANAGEMENT
    95.
    Invention Publication
    MEMORY LATENCY MANAGEMENT (Pending, Published)
    SPEICHERLATENZMANAGEMENT (Memory Latency Management)

    Publication No.: EP2972916A1

    Publication Date: 2016-01-20

    Application No.: EP14778162.9

    Filing Date: 2014-02-26

    Abstract: Apparatus, systems, and methods to manage memory latency operations are described. In one embodiment, an electronic device comprises a processor and memory control logic to receive data from a remote memory device, store the data in a local cache memory, receive an error correction code indicator associated with the data, and implement a data management policy in response to the error correction code indicator. Other embodiments are also disclosed and claimed.
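    The "policy in response to the indicator" step can be sketched as a dispatch function. The indicator values and the three policies below are assumptions for the sketch; the abstract only states that some policy is implemented in response to the error correction code indicator received with the data.

    ```python
    def handle_ecc_indicator(indicator, data, cache):
        """Illustrative policy dispatch on an ECC indicator (assumed values).

        data: {"addr": ..., "payload": ...} received from the remote memory
        device; cache: the local cache memory, modelled as a dict.
        """
        if indicator == "clean":
            cache[data["addr"]] = data["payload"]   # cache the data as-is
            return "cached"
        if indicator == "corrected":
            cache[data["addr"]] = data["payload"]   # usable; flag for scrubbing
            return "cached-scrub-scheduled"
        if indicator == "uncorrectable":
            cache.pop(data["addr"], None)           # never serve bad data
            return "refetch-requested"
        raise ValueError(f"unknown ECC indicator {indicator!r}")
    ```

    Keeping the policy decision local to the cache controller is what lets the device hide remote-memory latency: clean and corrected data are served immediately, and only uncorrectable data forces a slow refetch.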

