MECHANISMS TO BOUND THE PRESENCE OF CACHE BLOCKS WITH SPECIFIC PROPERTIES IN CACHES
    31.
    Invention application (in force)

    Publication number: US20140181412A1

    Publication date: 2014-06-26

    Application number: US13725011

    Filing date: 2012-12-21

    CPC classification number: G06F12/0871 G06F12/0848

    Abstract: A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache and one or more sources for memory requests. In response to receiving a request to allocate data of a first type, a cache controller allocates the data in the cache responsive to determining a limit of an amount of data of the first type permitted in the cache is not reached. The controller maintains an amount and location information of the data of the first type stored in the cache. Additionally, the cache may be partitioned with each partition designated for storing data of a given type. Allocation of data of the first type is dependent at least upon the availability of a first partition and a limit of an amount of data of the first type in a second partition.
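The allocation policy in the abstract can be illustrated with a small sketch. All names here (the class, `type_limits`, the type tags) are assumptions for illustration; the patent does not define an interface:

```python
class BoundedTypeCache:
    """Sketch of a cache controller that caps how many blocks of a
    given type the cache may hold at once. 'type_limits' maps a data
    type tag to the maximum number of blocks of that type permitted."""

    def __init__(self, capacity, type_limits):
        self.capacity = capacity
        self.type_limits = type_limits      # e.g. {"prefetch": 2}
        self.blocks = {}                    # address -> type tag (location info)
        self.type_counts = {}               # type tag -> current amount

    def allocate(self, address, data_type):
        """Allocate a block; refuse if the per-type limit is reached."""
        if len(self.blocks) >= self.capacity:
            return False
        limit = self.type_limits.get(data_type)
        if limit is not None and self.type_counts.get(data_type, 0) >= limit:
            return False                    # limit for this type reached
        self.blocks[address] = data_type
        self.type_counts[data_type] = self.type_counts.get(data_type, 0) + 1
        return True
```

The controller keeps both the amount (`type_counts`) and the location information (`blocks`) for each data type, as the abstract describes.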


    MANAGEMENT OF CACHE SIZE
    32.
    Invention application (in force)

    Publication number: US20140181410A1

    Publication date: 2014-06-26

    Application number: US13723093

    Filing date: 2012-12-20

    Abstract: In response to a processor core exiting a low-power state, a cache is set to a minimum size so that fewer than all of the cache's entries are available to store data, thus reducing the cache's power consumption. Over time, the size of the cache can be increased to account for heightened processor activity, thus ensuring that processing efficiency is not significantly impacted by a reduced cache size. In some embodiments, the cache size is increased based on a measured processor performance metric, such as an eviction rate of the cache. In some embodiments, the cache size is increased at regular intervals until a maximum size is reached.
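The sizing policy can be sketched as follows. The way-based granularity, threshold, and growth step are illustrative assumptions, not details from the patent:

```python
class ResizableCache:
    """Sketch of the policy above: on exit from a low-power state the
    cache is set to its minimum size, then grown at intervals while a
    measured metric (here, eviction rate) indicates pressure."""

    def __init__(self, min_ways, max_ways, evict_threshold=0.5):
        self.min_ways = min_ways
        self.max_ways = max_ways
        self.evict_threshold = evict_threshold
        self.active_ways = max_ways

    def on_low_power_exit(self):
        # Enable only the minimum number of ways, reducing power.
        self.active_ways = self.min_ways

    def on_interval(self, evictions, accesses):
        # Grow one way at a time while eviction pressure is high,
        # until the maximum size is reached.
        if accesses and evictions / accesses > self.evict_threshold:
            self.active_ways = min(self.active_ways + 1, self.max_ways)
```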


    DIRTY CACHELINE DUPLICATION
    33.
    Invention application

    Publication number: US20140173379A1

    Publication date: 2014-06-19

    Application number: US13720536

    Filing date: 2012-12-19

    CPC classification number: G06F11/1064 G06F12/0893

    Abstract: A method of managing memory includes installing a first cacheline at a first location in a cache memory and receiving a write request. In response to the write request, the first cacheline is modified in accordance with the write request and marked as dirty. Also in response to the write request, a second cacheline is installed that duplicates the first cacheline, as modified in accordance with the write request, at a second location in the cache memory.
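A minimal sketch of the write path described above, with dict-based storage and method names chosen purely for illustration:

```python
class DuplicatingCache:
    """Sketch of dirty-cacheline duplication: a write modifies the
    installed line, marks it dirty, and installs a duplicate of the
    modified line at a second location in the cache."""

    def __init__(self):
        self.lines = {}    # location -> (address, data, dirty)
        self.next_loc = 0

    def install(self, address, data, dirty=False):
        loc = self.next_loc
        self.lines[loc] = (address, data, dirty)
        self.next_loc += 1
        return loc

    def write(self, loc, new_data):
        address, _, _ = self.lines[loc]
        # Modify the first cacheline per the write request; mark it dirty.
        self.lines[loc] = (address, new_data, True)
        # Also install a second cacheline duplicating the modified data
        # at a second location.
        return self.install(address, new_data, dirty=True)
```

Keeping two copies of dirty (not-yet-written-back) data gives it redundancy until it reaches backing memory.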


    Memory hierarchy using page-based compression

    Publication number: US11132300B2

    Publication date: 2021-09-28

    Application number: US13939380

    Filing date: 2013-07-11

    Abstract: A system includes a device coupleable to a first memory. The device includes a second memory to cache data from the first memory. The second memory is to store a set of compressed pages of the first memory and a set of page descriptors. Each compressed page includes a set of compressed data blocks. Each page descriptor represents a corresponding page and includes a set of location identifiers that identify the locations of the compressed data blocks of the corresponding page in the second memory. The device further includes compression logic to compress data blocks of a page to be stored to the second memory and decompression logic to decompress compressed data blocks of a page accessed from the second memory.
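The page-descriptor scheme can be sketched in software. Here `zlib` stands in for the device's compression and decompression logic, and the block size and data structures are illustrative assumptions:

```python
import zlib

BLOCK_SIZE = 64  # illustrative compressed-block granularity in bytes

class CompressedPageStore:
    """Sketch of the scheme above: each page is split into blocks, each
    block is compressed into a byte store (the 'second memory'), and a
    per-page descriptor records the location identifier (offset, length)
    of every compressed block belonging to that page."""

    def __init__(self):
        self.store = bytearray()   # the second memory
        self.descriptors = {}      # page id -> list of (offset, length)

    def store_page(self, page_id, page_bytes):
        locations = []
        for i in range(0, len(page_bytes), BLOCK_SIZE):
            compressed = zlib.compress(page_bytes[i:i + BLOCK_SIZE])
            locations.append((len(self.store), len(compressed)))
            self.store.extend(compressed)
        self.descriptors[page_id] = locations

    def load_page(self, page_id):
        out = bytearray()
        for offset, length in self.descriptors[page_id]:
            chunk = bytes(self.store[offset:offset + length])
            out.extend(zlib.decompress(chunk))
        return bytes(out)
```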

    Data distribution among multiple managed memories

    Publication number: US09875195B2

    Publication date: 2018-01-23

    Application number: US14459958

    Filing date: 2014-08-14

    CPC classification number: G06F13/1657 G06F13/1647

    Abstract: A system and method are disclosed for managing memory interleaving patterns in a system with multiple memory devices. The system includes a processor configured to access multiple memory devices. The method includes receiving a first plurality of data blocks, and then storing the first plurality of data blocks using an interleaving pattern in which successive blocks of the first plurality of data blocks are stored in each of the memory devices. The method also includes receiving a second plurality of data blocks, and then storing successive blocks of the second plurality of data blocks in a first memory device of the multiple memory devices.
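The two storage patterns in the abstract can be sketched directly; the function names and list-based "memory devices" are illustrative:

```python
def store_interleaved(blocks, memories):
    """First pattern: successive blocks are distributed round-robin so
    each memory device receives a share of the plurality."""
    for i, block in enumerate(blocks):
        memories[i % len(memories)].append(block)

def store_linear(blocks, memories):
    """Second pattern: successive blocks all go to a single (the first)
    memory device."""
    for block in blocks:
        memories[0].append(block)
```

Interleaving spreads bandwidth demand across devices, while the linear pattern keeps a group of blocks together on one device.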

    Scheduling memory accesses using an efficient row burst value
    38.
    Invention grant (in force)

    Publication number: US09489321B2

    Publication date: 2016-11-08

    Application number: US13917033

    Filing date: 2013-06-13

    CPC classification number: G06F13/1626 G06F13/161 G06F13/1694

    Abstract: A memory accessing agent includes a memory access generating circuit and a memory controller. The memory access generating circuit is adapted to generate multiple memory accesses in a first ordered arrangement. The memory controller is coupled to the memory access generating circuit and has an output port, for providing the multiple memory accesses to the output port in a second ordered arrangement based on the memory accesses and characteristics of an external memory. The memory controller determines the second ordered arrangement by calculating an efficient row burst value and interrupting multiple row-hit requests to schedule a row-miss request based on the efficient row burst value.
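The reordering idea can be sketched as a scheduling loop. Here `burst_limit` stands in for the computed efficient row burst value, requests are simple `(row, tag)` pairs, and the whole structure is an illustrative assumption rather than the patented circuit:

```python
def schedule(requests, open_row, burst_limit):
    """Sketch of the second ordered arrangement: serve row-hit requests
    first, but after burst_limit consecutive hits, interrupt the hit
    stream to schedule a pending row-miss request."""
    pending = list(requests)
    order = []
    hits_in_burst = 0
    while pending:
        hit = next((r for r in pending if r[0] == open_row), None)
        if hit is not None and hits_in_burst < burst_limit:
            pending.remove(hit)
            order.append(hit)
            hits_in_burst += 1
        else:
            miss = next((r for r in pending if r[0] != open_row), None)
            if miss is None:        # only row hits remain; keep serving them
                hits_in_burst = 0
                continue
            pending.remove(miss)
            order.append(miss)
            open_row = miss[0]      # the miss opens a new row
            hits_in_burst = 0
    return order
```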


    DATA DISTRIBUTION AMONG MULTIPLE MANAGED MEMORIES
    39.
    Invention application (in force)

    Publication number: US20160048327A1

    Publication date: 2016-02-18

    Application number: US14459958

    Filing date: 2014-08-14

    CPC classification number: G06F13/1657 G06F13/1647

    Abstract: A system and method are disclosed for managing memory interleaving patterns in a system with multiple memory devices. The system includes a processor configured to access multiple memory devices. The method includes receiving a first plurality of data blocks, and then storing the first plurality of data blocks using an interleaving pattern in which successive blocks of the first plurality of data blocks are stored in each of the memory devices. The method also includes receiving a second plurality of data blocks, and then storing successive blocks of the second plurality of data blocks in a first memory device of the multiple memory devices.


    Processing engine for complex atomic operations
    40.
    Invention grant (in force)

    Publication number: US09218204B2

    Publication date: 2015-12-22

    Application number: US13725724

    Filing date: 2012-12-21

    CPC classification number: G06F9/50 G06F9/526 G06F2209/521 G06F2209/522

    Abstract: A system includes an atomic processing engine (APE) coupled to an interconnect. The interconnect is to couple to one or more processor cores. The APE receives a plurality of commands from the one or more processor cores through the interconnect. In response to a first command, the APE performs a first plurality of operations associated with the first command. The first plurality of operations references multiple memory locations, at least one of which is shared between two or more threads executed by the one or more processor cores.
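The serialization benefit of an APE can be sketched with a queue standing in for the interconnect and a single engine thread standing in for the APE. The `"transfer"` command format is a hypothetical example of a complex atomic operation touching two shared memory locations:

```python
import queue
import threading

class AtomicProcessingEngine:
    """Sketch: cores submit commands over an 'interconnect' (a queue),
    and one engine thread executes each command's multi-location
    operation in order, so operations on shared locations never
    interleave with one another."""

    def __init__(self, memory):
        self.memory = memory
        self.commands = queue.Queue()
        self.thread = threading.Thread(target=self._run)
        self.thread.start()

    def submit(self, op, *locations):
        self.commands.put((op, locations))

    def _run(self):
        while True:
            cmd = self.commands.get()
            if cmd is None:             # shutdown sentinel
                break
            op, locations = cmd
            if op == "transfer":        # move one unit from src to dst
                src, dst = locations
                self.memory[src] -= 1
                self.memory[dst] += 1

    def shutdown(self):
        self.commands.put(None)
        self.thread.join()
```

Because all commands funnel through one engine, no locking is needed at the submitting cores even when their operations reference the same locations.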

