Data distribution among multiple managed memories

    Publication number: US09875195B2

    Publication date: 2018-01-23

    Application number: US14459958

    Application date: 2014-08-14

    CPC classification number: G06F13/1657 G06F13/1647

    Abstract: A system and method are disclosed for managing memory interleaving patterns in a system with multiple memory devices. The system includes a processor configured to access multiple memory devices. The method includes receiving a first plurality of data blocks, and then storing the first plurality of data blocks using an interleaving pattern in which successive blocks of the first plurality of data blocks are stored in each of the memory devices. The method also includes receiving a second plurality of data blocks, and then storing successive blocks of the second plurality of data blocks in a first memory device of the multiple memory devices.
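The two placement policies described in the abstract can be sketched as a short simulation. The helper names (`interleave_store`, `linear_store`) and the list-of-lists model of the memory devices are illustrative assumptions, not taken from the patent:

```python
# Minimal sketch of the two storage modes from the abstract.
# `devices` models multiple memory devices as lists of stored blocks.

def interleave_store(devices, blocks):
    """Store successive blocks round-robin across all memory devices."""
    for i, block in enumerate(blocks):
        devices[i % len(devices)].append(block)

def linear_store(devices, blocks, target=0):
    """Store successive blocks in a single (first) memory device."""
    for block in blocks:
        devices[target].append(block)

devices = [[], [], [], []]           # four memory devices
interleave_store(devices, range(8))  # first plurality: interleaved
linear_store(devices, range(8, 12))  # second plurality: one device
print(devices)
```

Interleaving spreads bandwidth demand across devices, while the linear mode concentrates a working set on one device (useful, for example, when the other devices are to be powered down).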

    DATA DISTRIBUTION AMONG MULTIPLE MANAGED MEMORIES
    13.
    Invention application
    DATA DISTRIBUTION AMONG MULTIPLE MANAGED MEMORIES (Active)

    Publication number: US20160048327A1

    Publication date: 2016-02-18

    Application number: US14459958

    Application date: 2014-08-14

    CPC classification number: G06F13/1657 G06F13/1647

    Abstract: A system and method are disclosed for managing memory interleaving patterns in a system with multiple memory devices. The system includes a processor configured to access multiple memory devices. The method includes receiving a first plurality of data blocks, and then storing the first plurality of data blocks using an interleaving pattern in which successive blocks of the first plurality of data blocks are stored in each of the memory devices. The method also includes receiving a second plurality of data blocks, and then storing successive blocks of the second plurality of data blocks in a first memory device of the multiple memory devices.


    CACHE COHERENCY USING DIE-STACKED MEMORY DEVICE WITH LOGIC DIE
    14.
    Invention application
    CACHE COHERENCY USING DIE-STACKED MEMORY DEVICE WITH LOGIC DIE (Active)

    Publication number: US20140181417A1

    Publication date: 2014-06-26

    Application number: US13726146

    Application date: 2012-12-23

    Abstract: A die-stacked memory device implements an integrated coherency manager to offload cache coherency protocol operations for the devices of a processing system. The die-stacked memory device includes a set of one or more stacked memory dies and a set of one or more logic dies. The one or more logic dies implement hardware logic providing a memory interface and the coherency manager. The memory interface operates to perform memory accesses in response to memory access requests from the coherency manager and the one or more external devices. The coherency manager comprises logic to perform coherency operations for shared data stored at the stacked memory dies. Due to the integration of the logic dies and the memory dies, the coherency manager can access shared data stored in the memory dies and perform related coherency operations with higher bandwidth and lower latency and power consumption compared to the external devices.
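As an illustration of the bookkeeping such an integrated coherency manager performs, here is a minimal directory-style sketch. The patent abstract does not specify a protocol, so the class name, the invalidate-on-write policy, and the device identifiers are all assumptions:

```python
# Toy directory-based coherency manager: tracks, per memory block,
# which external devices hold a cached copy, and invalidates other
# sharers when one device wants exclusive (write) access.

class CoherencyManager:
    def __init__(self):
        self.sharers = {}  # block address -> set of device ids

    def read(self, device, addr):
        """A device reads a block: record it as a sharer."""
        self.sharers.setdefault(addr, set()).add(device)

    def write(self, device, addr):
        """A device writes a block: invalidate all other sharers."""
        invalidated = self.sharers.get(addr, set()) - {device}
        self.sharers[addr] = {device}
        return invalidated  # copies that must be invalidated

mgr = CoherencyManager()
mgr.read("cpu0", 0x1000)
mgr.read("gpu0", 0x1000)
print(mgr.write("cpu0", 0x1000))  # {'gpu0'}
```

Because this logic sits on the logic die next to the stacked memory dies, the directory lookups and data accesses avoid the off-chip round trips an external coherency agent would incur.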


    Management of caches
    17.
    Invention grant

    Publication number: US09251081B2

    Publication date: 2016-02-02

    Application number: US13957105

    Application date: 2013-08-01

    CPC classification number: G06F12/0848 G06F12/122 Y02D10/13

    Abstract: A system and method for efficiently powering down banks in a cache memory to reduce power consumption. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, each comprising multiple cache sets. In response to a request to power down a first bank of the multiple banks, the cache controller selects a cache line of a given type in the first bank and determines whether the selected cache line's locality of reference exceeds a threshold. If the threshold is exceeded, the selected cache line is migrated to a second bank in the cache array; otherwise, it is written back to lower-level memory.
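The migrate-or-write-back decision can be sketched as follows. The function name, the dict-based cache-line model, and the `locality` field (standing in for whatever locality-of-reference metric the controller tracks, e.g. a recent access count) are illustrative assumptions:

```python
# Sketch of the power-down policy: for each cache line in the bank
# being powered down, migrate lines with high locality of reference
# to a live bank, and write low-locality lines back to lower-level
# memory so their data is not lost when power is removed.

def power_down_bank(bank, other_bank, lower_memory, threshold):
    for line in bank:
        if line["locality"] > threshold:
            other_bank.append(line)                   # migrate
        else:
            lower_memory[line["tag"]] = line["data"]  # write back
    bank.clear()  # the bank is now empty and can be powered down

bank = [{"tag": "A", "data": 1, "locality": 9},
        {"tag": "B", "data": 2, "locality": 1}]
other_bank, lower_memory = [], {}
power_down_bank(bank, other_bank, lower_memory, threshold=5)
print(other_bank, lower_memory)  # line A migrated, line B written back
```

The threshold trades capacity pressure on the surviving banks against the miss cost of re-fetching evicted lines from lower-level memory.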

    Selecting a Resource from a Set of Resources for Performing an Operation
    18.
    Invention application
    Selecting a Resource from a Set of Resources for Performing an Operation (Active)

    Publication number: US20140223445A1

    Publication date: 2014-08-07

    Application number: US13761985

    Application date: 2013-02-07

    CPC classification number: G06F9/5016 G06F9/5011 G06F12/0875 G06F2212/45

    Abstract: The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism performs a lookup in a table, selected from a set of tables, to identify a resource from the set of resources. If the identified resource is not available for performing the operation, the selection mechanism identifies the next resource in the table and selects it when it is available, repeating this until a resource has been selected for performing the operation.
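The lookup-then-walk-forward behavior can be sketched in a few lines. The function name, the wrap-around walk, and the `available` predicate are assumptions for illustration; the abstract only specifies advancing to the next table entry until an available resource is found:

```python
# Sketch of the selection mechanism: look up a starting resource in
# the table, then walk to successive entries until one is available.

def select_resource(table, start_index, available):
    """Return the first available resource at or after start_index,
    wrapping around the table; None if no resource is available."""
    n = len(table)
    for offset in range(n):
        resource = table[(start_index + offset) % n]
        if available(resource):
            return resource
    return None

table = ["r0", "r1", "r2", "r3"]
busy = {"r1", "r2"}
print(select_resource(table, 1, lambda r: r not in busy))  # r3
```

In the patent's setting the table itself is first chosen from a set of tables (e.g. per requester or per operation type), which this sketch leaves out.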


    QUALITY OF SERVICE SUPPORT USING STACKED MEMORY DEVICE WITH LOGIC DIE
    19.
    Invention application
    QUALITY OF SERVICE SUPPORT USING STACKED MEMORY DEVICE WITH LOGIC DIE (Active)

    Publication number: US20140181428A1

    Publication date: 2014-06-26

    Application number: US13726144

    Application date: 2012-12-23

    Abstract: A die-stacked memory device implements an integrated QoS manager to provide centralized QoS functionality in furtherance of one or more specified QoS objectives for the sharing of the memory resources by other components of the processing system. The die-stacked memory device includes a set of one or more stacked memory dies and one or more logic dies. The logic dies implement hardware logic for a memory controller and the QoS manager. The memory controller is coupleable to one or more devices external to the set of one or more stacked memory dies and operates to service memory access requests from the one or more external devices. The QoS manager comprises logic to perform operations in furtherance of one or more QoS objectives, which may be specified by a user, by an operating system, hypervisor, job management software, or other application being executed, or specified via hardcoded logic or firmware.
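One simple policy such a QoS manager could enforce is priority-ordered servicing of pending memory requests. This sketch is purely illustrative: the class, the numeric priorities, and the tie-breaking sequence counter are assumptions, since the abstract leaves the QoS objectives and their enforcement open:

```python
# Sketch of one possible centralized QoS policy: service pending
# memory requests in priority order, where priorities encode QoS
# objectives set by software or fixed in firmware.

import heapq

class QoSManager:
    def __init__(self):
        self.queue = []  # (priority, seq, request); lower = more urgent
        self.seq = 0     # FIFO tie-breaker among equal priorities

    def submit(self, request, priority):
        heapq.heappush(self.queue, (priority, self.seq, request))
        self.seq += 1

    def next_request(self):
        """Pop the most urgent pending request, or None if idle."""
        return heapq.heappop(self.queue)[2] if self.queue else None

mgr = QoSManager()
mgr.submit("gpu read", priority=2)
mgr.submit("cpu read", priority=0)  # latency-sensitive requester
print(mgr.next_request())  # 'cpu read' is serviced first
```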


    REDUCING COLD TLB MISSES IN A HETEROGENEOUS COMPUTING SYSTEM
    20.
    Invention application
    REDUCING COLD TLB MISSES IN A HETEROGENEOUS COMPUTING SYSTEM (Pending, published)

    Publication number: US20140101405A1

    Publication date: 2014-04-10

    Application number: US13645685

    Application date: 2012-10-05

    Abstract: Methods and apparatuses are provided for avoiding cold translation lookaside buffer (TLB) misses in a computer system. A typical system is configured as a heterogeneous computing system having at least one central processing unit (CPU) and one or more graphic processing units (GPUs) that share a common memory address space. Each processing unit (CPU and GPU) has an independent TLB. When offloading a task from a particular CPU to a particular GPU, translation information is sent along with the task assignment. The translation information allows the GPU to load the address translation data into the TLB associated with the one or more GPUs prior to executing the task. Preloading the TLB of the GPUs reduces or avoids cold TLB misses that could otherwise occur without the benefits offered by the present disclosure.
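The preload-on-offload idea can be sketched as follows. The `TLB` class, the page-table dict, and the `offload_task` helper are hypothetical names modeling the mechanism, not the patent's actual interfaces:

```python
# Sketch of offloading a task with its translation information so the
# GPU can preload its TLB before execution, avoiding cold misses.

class TLB:
    def __init__(self):
        self.entries = {}  # virtual page -> physical frame

    def preload(self, translations):
        self.entries.update(translations)

    def lookup(self, vpage):
        return self.entries.get(vpage)  # None models a TLB miss

def offload_task(gpu_tlb, task_pages, page_table):
    """Send the task's address translations along with the task
    assignment, so the GPU's TLB is warm before execution starts."""
    translations = {vp: page_table[vp] for vp in task_pages}
    gpu_tlb.preload(translations)

page_table = {0x10: 0xA0, 0x11: 0xA1}
gpu_tlb = TLB()
offload_task(gpu_tlb, [0x10, 0x11], page_table)
print(gpu_tlb.lookup(0x10))  # hit: no cold miss on first access
```

Without the preload step, the GPU's first access to each page of the offloaded task would miss in its TLB and pay a page-table walk.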

