MULTI-LEVEL SYSTEM MEMORY WITH NEAR MEMORY SCRUBBING BASED ON PREDICTED FAR MEMORY IDLE TIME
    1.
    Invention application
    Status: Pending (published)

    Publication No.: WO2018004801A1

    Publication date: 2018-01-04

    Application No.: PCT/US2017/029175

    Filing date: 2017-04-24

    Abstract: An apparatus is described that includes a memory controller to interface to a multi-level system memory. The memory controller includes least recently used (LRU) circuitry to keep track of least recently used cache lines kept in a higher level of the multi-level system memory. The memory controller also includes idle time predictor circuitry to predict idle times of a lower level of the multi-level system memory. The memory controller is to write one or more lesser used cache lines from the higher level to the lower level in response to the idle time predictor circuitry indicating that an observed idle time of the lower level is expected to be long enough to accommodate the write of those cache lines.

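The scheme in this abstract (an LRU list over near-memory cache lines plus an idle-time predictor for far memory) can be sketched as a minimal Python illustration. The mean-of-recent-gaps predictor, the `write_cost` parameter, and all class and method names are assumptions for illustration, not details from the patent:

```python
from collections import OrderedDict

class ScrubController:
    """Sketch: write back least recently used near-memory lines to far
    memory only when the predicted far-memory idle window is long
    enough to accommodate the writes (assumed predictor and names)."""

    def __init__(self, write_cost, history=4):
        self.lru = OrderedDict()      # line address -> data, oldest first
        self.write_cost = write_cost  # assumed time units per line write
        self.idle_samples = []        # recent observed idle durations
        self.history = history

    def access(self, addr, data):
        # Touching a line makes it most recently used.
        self.lru.pop(addr, None)
        self.lru[addr] = data

    def observe_idle(self, duration):
        # Record a far-memory idle interval for the predictor.
        self.idle_samples = (self.idle_samples + [duration])[-self.history:]

    def predicted_idle(self):
        # Assumed simple predictor: mean of recent idle intervals.
        if not self.idle_samples:
            return 0.0
        return sum(self.idle_samples) / len(self.idle_samples)

    def scrub(self, far_memory):
        # Write back as many LRU lines as fit in the predicted window.
        budget = int(self.predicted_idle() // self.write_cost)
        evicted = []
        for _ in range(min(budget, len(self.lru))):
            addr, data = self.lru.popitem(last=False)  # least recently used
            far_memory[addr] = data
            evicted.append(addr)
        return evicted
```

Scrubbing only up to the predicted idle budget keeps writebacks from colliding with demand traffic to the slower far memory.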

    PRE-LOADING PAGE TABLE CACHE LINES OF A VIRTUAL MACHINE
    3.
    Invention application
    Status: Pending (published)

    Publication No.: WO2017072610A1

    Publication date: 2017-05-04

    Application No.: PCT/IB2016/055772

    Filing date: 2016-09-27

    Inventor: KAPOOR, Shakti

    Abstract: Embodiments herein pre-load memory translations used to perform virtual to physical memory translations in a computing system that switches between virtual machines (VMs). Before a processor switches from executing the current VM to the new VM, a hypervisor may retrieve previously saved memory translations for the new VM and load them into cache or main memory. Thus, when the new VM begins to execute, the corresponding memory translations are in cache rather than in storage, and when these memory translations are needed to perform virtual to physical address translations, the processor does not have to wait to pull them from slow storage devices (e.g., a hard disk drive).

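The save-and-preload sequence described in this abstract can be sketched as a toy hypervisor. The dictionary-based translation cache and all names are illustrative assumptions, not the patent's implementation:

```python
class Hypervisor:
    """Sketch: save each VM's warm address translations on switch-out
    and pre-load them before the VM runs again (assumed structure)."""

    def __init__(self):
        self.saved = {}        # vm_id -> {virtual page: physical frame}
        self.cache = {}        # translations currently warm in cache
        self.current_vm = None

    def switch_to(self, vm_id):
        # Save the outgoing VM's warm translations.
        if self.current_vm is not None:
            self.saved[self.current_vm] = dict(self.cache)
        # Pre-load the incoming VM's previously saved translations,
        # so its first accesses hit cache instead of slow storage.
        self.cache = dict(self.saved.get(vm_id, {}))
        self.current_vm = vm_id

    def translate(self, vpage, page_table):
        # Hit the warm cache first; fall back to the (slow) page table.
        if vpage in self.cache:
            return self.cache[vpage]
        frame = page_table[vpage]
        self.cache[vpage] = frame
        return frame
```

On the second switch to a VM, its translations are already in `cache`, which models the abstract's claim that the processor avoids waiting on slow storage.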

    METHOD FOR MANAGING A DISTRIBUTED CACHE
    4.
    Invention application
    Status: Pending (published)

    Publication No.: WO2017005761A1

    Publication date: 2017-01-12

    Application No.: PCT/EP2016/065897

    Filing date: 2016-07-06

    Applicant: ALCATEL LUCENT

    Abstract: A method for managing a multi-level cache of a host comprising a primary cache, a volatile memory such as DRAM, and a secondary cache, a non-volatile memory such as an SSD. If segment identification data has been computed in the segment hash table, a corresponding processing core checks whether a corresponding packet is stored in a first portion of the primary cache or in a second portion of the secondary cache. If the packet is stored in the first portion, the packet is sent back to the requester, a request counter is incremented, and the DRAM segment map pointer entered in a DRAM-LRU linked list is prioritized by being moved to the top of that list. If the packet is stored in the second portion, the packet is passed to an SSD core so as to copy the entire given segment from the secondary cache to the primary cache; the request is then passed back to the corresponding processing core to create a DRAM segment map pointer pointing to the first portion storing the packet, which is entered in the DRAM-LRU linked list; the SSD segment map pointer is also entered in the SSD-LRU linked list, and both pointers are prioritized by being moved to the top of their respective lists; the packet is then sent back to the requester.

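A minimal sketch of the two LRU lists and the SSD-to-DRAM promotion path described in this abstract, with segment granularity kept from the text but the capacity model, eviction details, and names assumed:

```python
from collections import OrderedDict

class TwoLevelCache:
    """Sketch: DRAM primary cache and SSD secondary cache, each with
    its own LRU list; a hit in SSD copies the whole segment up to
    DRAM, and every hit moves the entry to the front of its list."""

    def __init__(self, dram_size):
        self.dram = OrderedDict()   # segment id -> packets, MRU at end
        self.ssd = OrderedDict()
        self.dram_size = dram_size  # assumed capacity in segments
        self.requests = 0           # models the request counter

    def lookup(self, seg_id):
        self.requests += 1
        if seg_id in self.dram:
            self.dram.move_to_end(seg_id)   # prioritize in DRAM-LRU
            return self.dram[seg_id]
        if seg_id in self.ssd:
            self.ssd.move_to_end(seg_id)    # prioritize in SSD-LRU
            self._promote(seg_id, self.ssd[seg_id])
            return self.dram[seg_id]
        return None                         # not cached at either level

    def _promote(self, seg_id, segment):
        # Copy the entire segment from SSD into DRAM, evicting the
        # least recently used DRAM entries to make room.
        while len(self.dram) >= self.dram_size:
            self.dram.popitem(last=False)
        self.dram[seg_id] = segment
```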

    IMPROVING STORAGE CACHE PERFORMANCE BY USING COMPRESSIBILITY OF THE DATA AS A CRITERIA FOR CACHE INSERTION
    5.
    Invention application
    Status: Pending (published)

    Publication No.: WO2016160164A1

    Publication date: 2016-10-06

    Application No.: PCT/US2016/018517

    Filing date: 2016-02-18

    Abstract: Methods and apparatus related to improving storage cache performance by using compressibility of the data as a criterion for cache insertion or allocation and deletion are described. In one embodiment, memory stores one or more cache lines corresponding to a compressed version of data (e.g., in response to a determination that the data is compressible). It is determined whether the one or more cache lines are to be retained or inserted in the memory based at least in part on an indication of compressibility of the data. Other embodiments are also disclosed and claimed.

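The admission test can be illustrated with a short function. Here `zlib` and the `min_ratio` threshold are stand-ins for whatever compressor and policy an actual implementation would use; neither is specified by the patent:

```python
import zlib

def should_cache(data: bytes, min_ratio: float = 2.0) -> bool:
    """Sketch: admit data into the compressed cache only if it
    compresses well, so the cache holds more useful lines per byte.
    min_ratio is an assumed tunable threshold."""
    compressed = zlib.compress(data)
    ratio = len(data) / len(compressed)
    return ratio >= min_ratio
```

Poorly compressible data is bypassed rather than inserted, which is the insertion criterion the abstract describes.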

    RESOURCE SCHEDULING METHOD AND RELATED APPARATUS
    6.
    Invention application

    Publication No.: WO2016101115A1

    Publication date: 2016-06-30

    Application No.: PCT/CN2014/094581

    Filing date: 2014-12-23

    Abstract: Embodiments of the present invention provide a resource scheduling method for improving data I/O efficiency. The method comprises: determining a current task queue, the task queue comprising a plurality of application tasks to be executed; determining, among the data blocks on disk to be accessed by the application tasks, the number of times each data block is to be accessed by the application tasks; determining hot data blocks according to the number of times each data block is to be accessed; and sending a move-in instruction to the local node of a hot data block, the move-in instruction indicating that the hot data block is to be moved into memory, so that the hot data block can be accessed in memory. Embodiments of the present invention further provide a corresponding resource scheduling apparatus.
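The hot-block selection step in this abstract translates almost directly into code. The task representation and the threshold parameter are assumptions for illustration:

```python
from collections import Counter

def find_hot_blocks(task_queue, threshold):
    """Sketch: count how many pending application tasks will access
    each disk data block, and mark blocks at or above the threshold
    as hot, to be moved into memory before the tasks execute."""
    counts = Counter()
    for task in task_queue:
        counts.update(task["blocks"])   # blocks this task will access
    return {blk for blk, n in counts.items() if n >= threshold}
```

A scheduler would then issue a move-in instruction to each hot block's local node, so the repeated accesses hit memory instead of disk.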

    HIERARCHICAL CACHING FOR ONLINE MEDIA
    7.
    Invention application
    Status: Pending (published)

    Publication No.: WO2016059469A1

    Publication date: 2016-04-21

    Application No.: PCT/IB2015/001980

    Filing date: 2015-10-06

    Applicant: ALCATEL LUCENT

    Abstract: A method includes receiving, at a first cache device (135A-D), a request to send a first asset to a second device (110, 135A-D); determining whether the first asset is stored at the first cache device; when the determination indicates that the first asset is not stored at the first cache device, obtaining the first asset at the first cache device, performing a comparison based on the average inter-arrival time of the first asset with respect to the first cache device and the characteristic time of the first cache device, the characteristic time being the average period of time that assets cached at the first cache device remain cached before being evicted, and determining whether or not to cache the obtained first asset at the first cache device based on the comparison; and sending the obtained first asset to the second device.


    METHOD, APPARATUS AND SYSTEM TO CACHE SETS OF TAGS OF AN OFF-DIE CACHE MEMORY
    8.
    Invention application
    Status: Pending (published)

    Publication No.: WO2015148026A1

    Publication date: 2015-10-01

    Application No.: PCT/US2015/017125

    Filing date: 2015-02-23

    Abstract: Techniques and mechanisms to provide a cache of cache tags for use in determining an access to cached data. In an embodiment, a tag storage stores a first set including tags associated with respective data locations of a cache memory. A cache of cache tags stores a subset of the tags stored by the tag storage. Where a tag of the first set is to be stored to the cache of cache tags, all tags of the first set are stored to the first portion. In another embodiment, any storage of tags of the first set to the cache of cache tags includes storage of the tags of the first set to only a first portion of the cache of cache tags. A replacement table is maintained for use in evicting or replacing cached tags based on an indicated level of activity for a set of the cache of cache tags.

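The cache-of-cache-tags lookup might look like the sketch below. Set-granular installation (all tags of a set stored together) follows the abstract, while the minimum-activity victim policy is an assumed simplification of the replacement table it mentions:

```python
class TagCache:
    """Sketch: a small on-die cache of tag *sets* from the off-die
    tag storage; a whole set's tags are installed together, and a
    per-set activity count guides replacement (assumed policy)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.sets = {}       # set index -> list of tags (whole set)
        self.activity = {}   # set index -> access count

    def lookup(self, set_idx, tag, tag_storage):
        # Consult the cached tags first; fill from off-die tag storage
        # on a miss, installing every tag of the set at once.
        if set_idx not in self.sets:
            self._install(set_idx, tag_storage[set_idx])
        self.activity[set_idx] += 1
        return tag in self.sets[set_idx]

    def _install(self, set_idx, tags):
        if len(self.sets) >= self.capacity:
            # Replace the least active cached set.
            victim = min(self.activity, key=self.activity.get)
            del self.sets[victim]
            del self.activity[victim]
        self.sets[set_idx] = list(tags)   # all tags of the set together
        self.activity[set_idx] = 0
```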

    MAGNETORESISTIVE RANDOM-ACCESS MEMORY CACHE WRITE MANAGEMENT
    9.
    Invention application
    Status: Pending (published)

    Publication No.: WO2015147868A1

    Publication date: 2015-10-01

    Application No.: PCT/US2014/032215

    Filing date: 2014-03-28

    Inventor: SOLIHIN, Yan

    Abstract: Technologies are generally described to manage MRAM cache writes in processors. In some examples, when a write request is received with data to be stored in an MRAM cache, the data may be evaluated to determine whether the data is to be further processed. In response to a determination that the data is to be further processed, the data may be stored in a write cache associated with the MRAM cache. In response to a determination that the data is not to be further processed, the data may be stored in the MRAM cache.

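The write-routing decision can be sketched as follows. The `will_be_processed` predicate stands in for the patent's evaluation step, and the write counter is added only to make the benefit visible; all names are assumptions:

```python
class MramCacheManager:
    """Sketch: divert writes whose data will be processed again to a
    cheap, rewritable write cache, and send final data straight to
    the MRAM cache, reducing costly MRAM write operations."""

    def __init__(self, will_be_processed):
        self.will_be_processed = will_be_processed  # assumed predicate
        self.write_cache = {}
        self.mram = {}
        self.mram_writes = 0

    def write(self, addr, data):
        if self.will_be_processed(addr, data):
            self.write_cache[addr] = data   # cheap, rewritable buffer
        else:
            self.mram[addr] = data          # expensive MRAM write
            self.mram_writes += 1

    def flush(self, addr):
        # When processing finishes, commit the final value to MRAM once.
        if addr in self.write_cache:
            self.mram[addr] = self.write_cache.pop(addr)
            self.mram_writes += 1
```

Intermediate values churn in the write cache and reach MRAM only once, which is the write-reduction effect the abstract targets.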

    COUNTERING ATTACKS ON CACHE
    10.
    Invention application
    Status: Pending (published)

    Publication No.: WO2015139195A1

    Publication date: 2015-09-24

    Application No.: PCT/CN2014/073584

    Filing date: 2014-03-18

    Inventor: WANG, Xingyuan

    Abstract: In some examples of a virtual computing environment, multiple virtual machines may execute on a physical computing device while sharing the hardware components corresponding to the physical computing device. A hypervisor corresponding to the physical computing device may be configured to designate a portion of a cache to one of the virtual machines for storing data. The hypervisor may be further configured to identify hostile activities executed in the designated portion of cache and, further still, to implement security measures on those virtual machines on which the identified hostile activities are executed.

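The designate/detect/respond loop in this abstract can be sketched as below. The eviction-rate heuristic is an assumed stand-in for whatever hostile-activity detection (e.g., of prime+probe-style cache attacks) an implementation would use, and quarantining is one possible security measure:

```python
class SecureHypervisor:
    """Sketch: give each VM its own designated cache partition, flag
    VMs whose activity in that partition looks hostile (assumed
    simple eviction-rate threshold), and quarantine them."""

    def __init__(self, eviction_threshold):
        self.partitions = {}   # vm_id -> set of designated cache lines
        self.evictions = {}    # vm_id -> evictions observed
        self.quarantined = set()
        self.threshold = eviction_threshold

    def designate(self, vm_id, lines):
        # Designate a portion of the cache to this VM for storing data.
        self.partitions[vm_id] = set(lines)
        self.evictions[vm_id] = 0

    def record_eviction(self, vm_id):
        self.evictions[vm_id] += 1
        # An abnormal eviction rate in the designated portion suggests
        # a cache-probing attack; apply a security measure.
        if self.evictions[vm_id] > self.threshold:
            self.quarantined.add(vm_id)

    def is_quarantined(self, vm_id):
        return vm_id in self.quarantined
```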
