Specialized memory disambiguation mechanisms for different memory read access types
    131.
    Granted Patent
    Specialized memory disambiguation mechanisms for different memory read access types (In force)

    Publication Number: US09524164B2

    Publication Date: 2016-12-20

    Application Number: US14015282

    Filing Date: 2013-08-30

    Abstract: A system and method for efficient prediction and processing of memory access dependencies. A computing system includes control logic that marks a detected load instruction as a first type responsive to predicting the load instruction has high locality and is a candidate for store-to-load (STL) data forwarding. The control logic marks the detected load instruction as a second type responsive to predicting the load instruction has low locality and is not a candidate for STL data forwarding. The control logic processes a load instruction marked as the first type as if the load instruction is dependent on an older store operation. The control logic processes a load instruction marked as the second type as if the load instruction is independent of any older store operation.

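As a rough illustration of the marking logic described in the abstract (not the patented implementation), the two-type prediction can be sketched as a per-load saturating counter that tracks whether past executions of a load hit a recent store. All names and thresholds here are hypothetical:

```python
# Hypothetical sketch: a locality predictor keyed by load PC. A load whose
# counter is above a threshold is marked as an STL-forwarding candidate
# (first type); otherwise it is marked as independent (second type).

FORWARD = "first_type"      # treat as dependent on older stores
NO_FORWARD = "second_type"  # treat as independent of older stores

class LoadMarker:
    def __init__(self, threshold=2, max_count=3):
        self.counters = {}          # load PC -> saturating counter
        self.threshold = threshold
        self.max_count = max_count

    def mark(self, load_pc):
        """Predict the load's type from its locality history."""
        count = self.counters.get(load_pc, 0)
        return FORWARD if count >= self.threshold else NO_FORWARD

    def train(self, load_pc, forwarded):
        """Update the counter after the load resolves."""
        count = self.counters.get(load_pc, 0)
        if forwarded:
            count = min(count + 1, self.max_count)
        else:
            count = max(count - 1, 0)
        self.counters[load_pc] = count
```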

    CONTROL OF THERMAL ENERGY TRANSFER FOR PHASE CHANGE MATERIAL IN DATA CENTER
    132.
    Patent Application
    CONTROL OF THERMAL ENERGY TRANSFER FOR PHASE CHANGE MATERIAL IN DATA CENTER (Pending, published)

    Publication Number: US20160338230A1

    Publication Date: 2016-11-17

    Application Number: US14709655

    Filing Date: 2015-05-12

    CPC classification number: H05K7/20809 H05K5/0213

    Abstract: A cooling system controller for a set of computing resources of a data center includes a first interface to couple to a first flow controller that controls a rate of thermal energy transfer to a PCM store from the set of computing resources, a second interface to couple to a second flow controller that controls a rate of thermal energy transfer from the PCM store to a cooling system, and a controller to determine a current set of operational parameters for the data center and to manipulate the first and second flow controllers via the first and second interfaces to control a net thermal energy transfer to and from the PCM store based on the current set of parameters.

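The two-valve arrangement above can be sketched as a simple control policy: divert heat into the PCM store when load exceeds cooling capacity, and discharge stored heat with spare capacity otherwise. The function name, abstract heat units, and the policy itself are illustrative assumptions, not the patented controller:

```python
# Hypothetical sketch of the two-flow-controller PCM policy: rate_in is the
# heat flow from the compute racks into the PCM store, rate_out is the heat
# flow from the PCM store to the cooling system (same abstract units).

def control_pcm(pcm_stored, pcm_capacity, load_heat, cooling_capacity):
    """Return (rate_in, rate_out) setpoints for the two flow controllers."""
    if load_heat > cooling_capacity:
        # Heat spike: charge the PCM with the excess, up to its headroom.
        rate_in = min(load_heat - cooling_capacity, pcm_capacity - pcm_stored)
        rate_out = 0.0
    else:
        # Spare cooling capacity: discharge stored heat.
        rate_in = 0.0
        rate_out = min(cooling_capacity - load_heat, pcm_stored)
    return rate_in, rate_out
```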

    Ring networks for intra- and inter-memory I/O including 3D-stacked memories
    133.
    Granted Patent
    Ring networks for intra- and inter-memory I/O including 3D-stacked memories (In force)

    Publication Number: US09443561B1

    Publication Date: 2016-09-13

    Application Number: US14719200

    Filing Date: 2015-05-21

    Abstract: Embodiments are described for a communications interconnect scheme for 3D stacked memory devices. A ring network design is used for networks of memory chips organized as individual devices with multiple dies or wafers. The design comprises a three-tier ring network where each ring serves a different set of memory blocks. One ring or set of rings interconnects memory within a die (inter-bank), a second ring or set of rings interconnects memory across die in a stack (inter-die), and the third ring or set of rings interconnects memory across stacks or chip packages (inter-stack).

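The three-tier hierarchy above can be illustrated with a small routing rule: address a memory block as (stack, die, bank) and carry a transfer on the ring of the outermost coordinate on which source and destination differ. The coordinate naming is an assumption for illustration:

```python
# Illustrative sketch of tier selection in the three-tier ring network.

def ring_tier(src, dst):
    """Return which ring tier a transfer between two (stack, die, bank)
    addresses travels on, or None for a same-bank access."""
    src_stack, src_die, src_bank = src
    dst_stack, dst_die, dst_bank = dst
    if src_stack != dst_stack:
        return "inter-stack"   # third tier: rings across chip packages
    if src_die != dst_die:
        return "inter-die"     # second tier: rings across dies in a stack
    if src_bank != dst_bank:
        return "inter-bank"    # first tier: rings within a die
    return None
```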

    Using a linear prediction to configure an idle state of an entity in a computing device
    134.
    Granted Patent
    Using a linear prediction to configure an idle state of an entity in a computing device (In force)

    Publication Number: US09442557B2

    Publication Date: 2016-09-13

    Application Number: US14075645

    Filing Date: 2013-11-08

    CPC classification number: G06F1/3234 G06F1/206

    Abstract: The described embodiments include a computing device with one or more entities (processor cores, processors, etc.). In some embodiments, during operation, a thermal power management unit in the computing device uses a linear prediction to compute a predicted duration of a next idle period for an entity based on the duration of one or more previous idle periods for the entity. Based on the predicted duration of the next idle period, the thermal power management unit configures the entity to operate in a corresponding idle state.

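One simple form of the linear prediction described above is to extrapolate the next idle duration from the last two observed durations, then choose the deepest idle state whose entry/exit overhead the prediction covers. The extrapolation formula, state names, and break-even numbers here are illustrative assumptions, not the patent's:

```python
# Hypothetical sketch: linear extrapolation of the next idle period and
# idle-state selection by break-even duration.

IDLE_STATES = [          # (state name, break-even duration in microseconds)
    ("deep", 500.0),     # illustrative numbers
    ("shallow", 50.0),
    ("clock-gated", 0.0),
]

def predict_next_idle(history):
    """Linear extrapolation: next = last + (last - previous)."""
    if len(history) < 2:
        return history[-1] if history else 0.0
    return max(0.0, 2 * history[-1] - history[-2])

def choose_idle_state(history):
    """Pick the deepest state whose break-even time the prediction covers."""
    predicted = predict_next_idle(history)
    for state, break_even in IDLE_STATES:
        if predicted >= break_even:
            return state
    return IDLE_STATES[-1][0]
```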

    Early write-back of modified data in a cache memory
    135.
    Granted Patent
    Early write-back of modified data in a cache memory (In force)

    Publication Number: US09378153B2

    Publication Date: 2016-06-28

    Application Number: US14011616

    Filing Date: 2013-08-27

    CPC classification number: G06F12/127 G06F12/0804 G06F12/123 Y02D10/13

    Abstract: A level of cache memory receives modified data from a higher level of cache memory. A set of cache lines with an index associated with the modified data is identified. The modified data is stored in a cache line of the set whose eviction priority is at least as high as the eviction priority, before the modified data is stored, of the unmodified cache line with the highest eviction priority among the set's unmodified cache lines.

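One simple reading of the placement rule above: put the incoming modified line where the highest-priority unmodified (clean) line sits, since displacing a clean line avoids a write-back, so the dirty data ages out of the cache early. The data layout and tie-breaking here are assumptions for illustration:

```python
# Hypothetical sketch of the placement rule: higher priority = evicted sooner.

def pick_victim_way(set_lines):
    """set_lines: list of dicts with 'dirty' (bool) and 'priority' (int).
    Return the index of the way to receive the incoming modified data."""
    clean_ways = [i for i, line in enumerate(set_lines) if not line["dirty"]]
    if clean_ways:
        # Highest-priority clean line: cheapest to displace (no write-back).
        threshold = max(set_lines[i]["priority"] for i in clean_ways)
        for i in clean_ways:
            if set_lines[i]["priority"] == threshold:
                return i
    # No clean line: fall back to the overall highest-priority way.
    return max(range(len(set_lines)), key=lambda i: set_lines[i]["priority"])
```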

    SCHEDULING APPLICATIONS IN PROCESSING DEVICES BASED ON PREDICTED THERMAL IMPACT
    136.
    Patent Application
    SCHEDULING APPLICATIONS IN PROCESSING DEVICES BASED ON PREDICTED THERMAL IMPACT (Pending, published)

    Publication Number: US20160085219A1

    Publication Date: 2016-03-24

    Application Number: US14493189

    Filing Date: 2014-09-22

    CPC classification number: G06N5/04 G06F1/206 G06F1/329 G06F9/4893 Y02D10/24

    Abstract: A processing device includes a plurality of components and a system management unit to selectively schedule an application phase to one of the plurality of components based on one or more comparisons of predictions of a plurality of thermal impacts of executing the application phase on each of the plurality of components. The predictions may be generated based on a thermal history associated with the application phase, thermal sensitivities of the plurality of components, or a layout of the plurality of components in the processing device.

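The selection step above amounts to comparing per-component thermal predictions and scheduling the phase on the component with the lowest one. The linear impact model below (phase heat scaled by per-component sensitivity, offset by current temperature) is an illustrative assumption, not the patent's model:

```python
# Hypothetical sketch: schedule an application phase on the component with
# the lowest predicted thermal impact.

def predict_impact(phase_heat, component):
    """Toy prediction: sensitivity-scaled heat on top of current temperature."""
    return phase_heat * component["thermal_sensitivity"] + component["current_temp"]

def schedule_phase(phase_heat, components):
    """Return the name of the component with the lowest predicted impact."""
    return min(components, key=lambda c: predict_impact(phase_heat, c))["name"]
```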

    VIRTUAL MEMORY MAPPING FOR IMPROVED DRAM PAGE LOCALITY
    137.
    Patent Application
    VIRTUAL MEMORY MAPPING FOR IMPROVED DRAM PAGE LOCALITY (In force)

    Publication Number: US20160049181A1

    Publication Date: 2016-02-18

    Application Number: US14460550

    Filing Date: 2014-08-15

    Abstract: Embodiments are described for methods and systems for mapping virtual memory pages to physical memory pages by analyzing a sequence of memory-bound accesses to the virtual memory pages, determining a degree of contiguity between the accessed virtual memory pages, and mapping sets of the accessed virtual memory pages to respective single physical memory pages. Embodiments are also described for a method for increasing locality of memory accesses to DRAM in virtual memory systems by analyzing a pattern of virtual memory accesses to identify contiguity of accessed virtual memory pages, predicting contiguity of the accessed virtual memory pages based on the pattern, and mapping the identified and predicted contiguous virtual memory pages to respective single physical memory pages.

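The contiguity analysis above can be sketched as grouping accessed virtual page numbers into contiguous runs and assigning each run to a single physical page. The run-grouping policy and group size are assumptions for illustration:

```python
# Hypothetical sketch: find contiguous runs of accessed virtual page numbers
# and map each run to one physical page id to improve DRAM row locality.

def group_contiguous_pages(accessed_vpns, group_size=4):
    """Group sorted virtual page numbers into contiguous runs of up to
    group_size pages; each run maps to one (hypothetical) physical page."""
    runs, current = [], []
    for vpn in sorted(set(accessed_vpns)):
        if current and (vpn != current[-1] + 1 or len(current) == group_size):
            runs.append(current)
            current = []
        current.append(vpn)
    if current:
        runs.append(current)
    return {tuple(run): ppn for ppn, run in enumerate(runs)}
```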

    DYNAMIC CACHE PREFETCHING BASED ON POWER GATING AND PREFETCHING POLICIES
    138.
    Patent Application
    DYNAMIC CACHE PREFETCHING BASED ON POWER GATING AND PREFETCHING POLICIES (Pending, published)

    Publication Number: US20160034023A1

    Publication Date: 2016-02-04

    Application Number: US14448096

    Filing Date: 2014-07-31

    Abstract: A system may determine that a processor has powered up. The system may determine a first prefetching policy based on determining that the processor has powered up. The system may fetch information, from a main memory and for storage by a cache associated with the processor, using the first prefetching policy. The system may determine, after fetching information using the first prefetching policy, to apply a second prefetching policy that is different than the first prefetching policy. The system may fetch information, from the main memory and for storage by the cache, using the second prefetching policy.

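The two-policy behavior above can be sketched as a prefetcher that starts aggressive after power-up (the cache is cold) and switches to a conservative policy once warmed up. The class, warm-up trigger, and prefetch degrees are illustrative assumptions:

```python
# Hypothetical sketch of the power-up / steady-state prefetch policies.

class DynamicPrefetcher:
    def __init__(self, warmup_fetches=1000):
        self.fetches = 0
        self.warmup_fetches = warmup_fetches  # assumed warm-up length

    def prefetch_degree(self):
        """How many extra lines to fetch alongside each demand miss."""
        if self.fetches < self.warmup_fetches:
            return 8   # first policy: aggressive, fill the cold cache
        return 2       # second policy: conservative steady state

    def record_fetch(self):
        self.fetches += 1
```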

    Stack cache management and coherence techniques
    139.
    Granted Patent
    Stack cache management and coherence techniques (In force)

    Publication Number: US09189399B2

    Publication Date: 2015-11-17

    Application Number: US13887196

    Filing Date: 2013-05-03

    Abstract: A processor system presented here has a plurality of execution cores and a plurality of stack caches, wherein each of the stack caches is associated with a different one of the execution cores. A method of managing stack data for the processor system is presented here. The method maintains a stack cache manager for the plurality of execution cores. The stack cache manager includes entries for stack data accessed by the plurality of execution cores. The method processes, for a requesting execution core of the plurality of execution cores, a virtual address for requested stack data. The method continues by accessing the stack cache manager to search for an entry of the stack cache manager that includes the virtual address for requested stack data, and using information in the entry to retrieve the requested stack data.

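The shared stack cache manager above can be sketched as a table mapping virtual addresses of stack data to the core whose private stack cache holds them; a lookup tells a requesting core whether to read locally, from a peer's stack cache, or from memory. The structure and names are assumptions for illustration:

```python
# Hypothetical sketch of the per-system stack cache manager.

class StackCacheManager:
    def __init__(self):
        self.entries = {}   # virtual address -> owning core id

    def record_access(self, vaddr, core_id):
        """Note that core_id's stack cache now holds the data at vaddr."""
        self.entries[vaddr] = core_id

    def lookup(self, vaddr, requesting_core):
        """Return (source, owner) telling the requesting core where the
        stack data lives: its own cache, a peer's stack cache, or memory."""
        owner = self.entries.get(vaddr)
        if owner is None:
            return ("memory", None)
        if owner == requesting_core:
            return ("local", owner)
        return ("remote", owner)
```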

    POWER GATING BASED ON CACHE DIRTINESS
    140.
    Patent Application
    POWER GATING BASED ON CACHE DIRTINESS (In force)

    Publication Number: US20150185801A1

    Publication Date: 2015-07-02

    Application Number: US14146591

    Filing Date: 2014-01-02

    CPC classification number: G06F1/3287 G06F1/3225 Y02D10/171 Y02D50/20

    Abstract: Power gating decisions can be made based on measures of cache dirtiness. Analyzer logic can selectively power gate a component of a processor system based on a cache dirtiness of one or more caches associated with the component. The analyzer logic may power gate the component when the cache dirtiness exceeds a threshold and may maintain the component in an idle state when the cache dirtiness does not exceed the threshold. Idle time prediction logic may be used to predict a duration of an idle time of the component. The analyzer logic may then selectively power gate the component based on the cache dirtiness and the predicted idle time.

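Following the abstract's rule directly, the decision combines a dirtiness threshold with an idle-time break-even test. The threshold values and function name below are illustrative assumptions:

```python
# Hypothetical sketch of the power-gating decision in the abstract: gate when
# cache dirtiness exceeds the threshold AND the predicted idle time covers the
# gating break-even point; otherwise keep the component in a lighter idle state.

def should_power_gate(dirty_lines, predicted_idle_us,
                      dirtiness_threshold=64, break_even_us=200.0):
    if predicted_idle_us < break_even_us:
        return False   # too short an idle period to amortize gating costs
    return dirty_lines > dirtiness_threshold
```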
