Parity data management for a memory architecture
    41.
    Granted Invention Patent (In Force)

    Publication Number: US09106260B2

    Publication Date: 2015-08-11

    Application Number: US13720504

    Application Date: 2012-12-19

    CPC classification number: H03M13/11 G06F11/00 G06F11/1048 H03M13/05

    Abstract: A processor system as presented herein includes a processor core, cache memory coupled to the processor core, a memory controller coupled to the cache memory, and a system memory component coupled to the memory controller. The system memory component includes a plurality of independent memory channels configured to store data blocks, wherein the memory controller controls the storing of parity bits in at least one of the plurality of independent memory channels. In some implementations, the system memory is realized as a die-stacked memory component.
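
    A minimal sketch of the channel-level parity idea described in the abstract, assuming a simple RAID-style layout in which one of several independent channels is reserved for XOR parity. The channel count, row addressing, and XOR scheme below are illustrative assumptions, not the claimed memory-controller implementation.

        # Illustrative only: stripe data blocks across data channels and keep
        # XOR parity in a reserved channel, so a block lost from one channel
        # can be rebuilt from the surviving channels.
        from functools import reduce

        NUM_CHANNELS = 4          # assumed: 3 data channels + 1 parity channel
        PARITY_CHANNEL = 3

        def write_blocks(channels, row, data_blocks):
            """Store data blocks in the data channels and their XOR in the parity channel."""
            assert len(data_blocks) == NUM_CHANNELS - 1
            for ch, block in enumerate(data_blocks):
                channels[ch][row] = block
            channels[PARITY_CHANNEL][row] = reduce(lambda a, b: a ^ b, data_blocks)

        def recover_block(channels, row, failed_channel):
            """Reconstruct the block held by a failed channel from the surviving channels."""
            surviving = [channels[ch][row] for ch in range(NUM_CHANNELS) if ch != failed_channel]
            return reduce(lambda a, b: a ^ b, surviving)

        channels = [dict() for _ in range(NUM_CHANNELS)]
        write_blocks(channels, row=0, data_blocks=[0b1010, 0b0110, 0b1111])
        assert recover_block(channels, row=0, failed_channel=1) == 0b0110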

    PARTITIONABLE DATA BUS
    42.
    Invention Patent Application (In Force)

    Publication Number: US20150026511A1

    Publication Date: 2015-01-22

    Application Number: US14016610

    Application Date: 2013-09-03

    CPC classification number: G06F11/0727 G06F11/2007 G06F2201/85

    Abstract: A method and a system are provided for partitioning a system data bus. The method can include partitioning off a portion of a system data bus that includes one or more faulty bits to form a partitioned data bus. Further, the method includes transferring data over the partitioned data bus to compensate for data loss due to the one or more faulty bits in the system data bus.
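
    A rough sketch of the compensation step: once faulty bit lanes are partitioned off, a full-width word is sent over the surviving lanes in extra beats. The bus width, fault map, and beat-splitting scheme are assumptions used only to illustrate the idea, not the claimed method.

        # Illustrative only: send a word over the non-faulty lanes of a bus in
        # multiple beats, then reassemble it on the receiving side.
        BUS_WIDTH = 8

        def good_lanes(faulty_lanes):
            return [i for i in range(BUS_WIDTH) if i not in faulty_lanes]

        def send_over_partitioned_bus(word, word_width, faulty_lanes):
            """Split the word into chunks sized to the surviving lanes ('beats')."""
            chunk = len(good_lanes(faulty_lanes))
            beats = [(word >> shift) & ((1 << chunk) - 1)
                     for shift in range(0, word_width, chunk)]
            return beats, chunk

        def receive(beats, chunk):
            """Reassemble the original word from the received beats."""
            word = 0
            for i, beat in enumerate(beats):
                word |= beat << (i * chunk)
            return word

        beats, chunk = send_over_partitioned_bus(0xA5, word_width=8, faulty_lanes={2, 5})
        assert receive(beats, chunk) == 0xA5   # no data loss, at the cost of extra beats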

    MEMORY HIERARCHY USING ROW-BASED COMPRESSION
    43.
    Invention Patent Application (In Force)

    Publication Number: US20150019813A1

    Publication Date: 2015-01-15

    Application Number: US13939377

    Application Date: 2013-07-11

    Abstract: A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
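
    A toy sketch of a cache row that holds compressed data blocks of non-uniform sizes alongside per-block tags, in the spirit of the abstract. zlib stands in for the device's compression and decompression logic, and the tag format (the block address) is an assumption.

        # Illustrative only: each row keeps one tag per compressed block; blocks
        # compress to different sizes, so the row's payload is non-uniform.
        import zlib

        class CacheRow:
            def __init__(self):
                self.tags = []      # one tag per block (here: the block's address)
                self.blocks = []    # compressed payloads of non-uniform size

            def store(self, addr, data):
                """Compress a block on the way into the row (the 'compression logic')."""
                self.tags.append(addr)
                self.blocks.append(zlib.compress(data))

            def load(self, addr):
                """Decompress on access (the 'decompression logic'); None means a miss."""
                for tag, blob in zip(self.tags, self.blocks):
                    if tag == addr:
                        return zlib.decompress(blob)
                return None         # would fall back to the first (backing) memory

        row = CacheRow()
        row.store(0x1000, b"A" * 64)          # highly compressible block
        row.store(0x2000, bytes(range(64)))   # less compressible block
        assert row.load(0x1000) == b"A" * 64
        print([len(b) for b in row.blocks])   # non-uniform compressed sizes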

    PARITY DATA MANAGEMENT FOR A MEMORY ARCHITECTURE
    44.
    Invention Patent Application (In Force)

    Publication Number: US20140173378A1

    Publication Date: 2014-06-19

    Application Number: US13720504

    Application Date: 2012-12-19

    CPC classification number: H03M13/11 G06F11/00 G06F11/1048 H03M13/05

    Abstract: A processor system as presented herein includes a processor core, cache memory coupled to the processor core, a memory controller coupled to the cache memory, and a system memory component coupled to the memory controller. The system memory component includes a plurality of independent memory channels configured to store data blocks, wherein the memory controller controls the storing of parity bits in at least one of the plurality of independent memory channels. In some implementations, the system memory is realized as a die-stacked memory component.

    Tracking Non-Native Content in Caches
    45.
    Invention Patent Application (Pending, Published)

    Publication Number: US20140156941A1

    Publication Date: 2014-06-05

    Application Number: US13691375

    Application Date: 2012-11-30

    Abstract: The described embodiments include a cache with a plurality of banks that includes a cache controller. In these embodiments, the cache controller determines a value representing non-native cache blocks stored in at least one bank in the cache, wherein a cache block is non-native to a bank when a home for the cache block is in a predetermined location relative to the bank. Then, based on the value representing non-native cache blocks stored in the at least one bank, the cache controller determines at least one bank in the cache to be transitioned from a first power mode to a second power mode. Next, the cache controller transitions the determined at least one bank in the cache from the first power mode to the second power mode.
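
    A hedged sketch of the decision flow in the abstract: count the cache blocks in a bank whose home is elsewhere, then pick a power mode for the bank from that count. The home mapping (address modulo bank count), the threshold, and the direction of the power-mode policy are assumptions made purely for illustration.

        # Illustrative only: a per-bank count of non-native blocks drives a
        # transition between two power modes.
        NUM_BANKS = 4
        THRESHOLD = 2    # assumed policy knob

        def home_bank(block_addr):
            return block_addr % NUM_BANKS    # assumed home mapping

        def non_native_count(bank_id, resident_blocks):
            return sum(1 for addr in resident_blocks if home_bank(addr) != bank_id)

        def choose_power_mode(bank_id, resident_blocks):
            """Assumed policy: drop to a low-power mode when few non-native blocks live here."""
            count = non_native_count(bank_id, resident_blocks)
            return "low_power" if count < THRESHOLD else "full_power"

        banks = {0: [0, 4, 8], 1: [1, 2, 7], 2: [6], 3: [3, 5]}
        for bank_id, blocks in banks.items():
            print(bank_id, non_native_count(bank_id, blocks), choose_power_mode(bank_id, blocks))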

    Using Predictions for Store-to-Load Forwarding
    47.
    Invention Patent Application (In Force)

    Publication Number: US20140143492A1

    Publication Date: 2014-05-22

    Application Number: US14018562

    Application Date: 2013-09-05

    Abstract: The described embodiments include a core that uses predictions for store-to-load forwarding. In the described embodiments, the core comprises a load-store unit, a store buffer, and a prediction mechanism. During operation, the prediction mechanism generates a prediction that a load will be satisfied using data forwarded from the store buffer because the load loads data from a memory location in a stack. Based on the prediction, the load-store unit first sends a request for the data to the store buffer in an attempt to satisfy the load using data forwarded from the store buffer. If data is returned from the store buffer, the load is satisfied using the data. However, if the attempt to satisfy the load using data forwarded from the store buffer is unsuccessful, the load-store unit then separately sends a request for the data to a cache to satisfy the load.
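
    A simplified model of the predict-then-forward flow: loads predicted to be satisfied by the store buffer (here, any load to an assumed stack region) query the store buffer first and fall back to the cache only if forwarding fails. The stack-address test and the dictionary-based structures are stand-ins, not the described hardware.

        # Illustrative only: the prediction decides whether to try store-to-load
        # forwarding before issuing a cache request.
        STACK_BASE = 0x7FFF_0000    # assumed stack region used by the predictor

        def predict_forwarding(load_addr):
            return load_addr >= STACK_BASE        # "the load targets the stack"

        def execute_load(load_addr, store_buffer, cache):
            if predict_forwarding(load_addr):
                if load_addr in store_buffer:      # attempt store-to-load forwarding
                    return store_buffer[load_addr], "store_buffer"
            return cache.get(load_addr), "cache"   # separate request sent to the cache

        store_buffer = {0x7FFF_0010: 42}           # a store not yet written back
        cache = {0x1000: 7, 0x7FFF_0020: 9}

        print(execute_load(0x7FFF_0010, store_buffer, cache))  # forwarded from the store buffer
        print(execute_load(0x7FFF_0020, store_buffer, cache))  # prediction fails, cache satisfies it
        print(execute_load(0x1000, store_buffer, cache))       # not predicted, cache satisfies it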

    Page migration in a 3D stacked hybrid memory
    49.
    Granted Invention Patent (In Force)

    Publication Number: US09535831B2

    Publication Date: 2017-01-03

    Application Number: US14152003

    Application Date: 2014-01-10

    Abstract: A die-stacked hybrid memory device implements a first set of one or more memory dies implementing first memory cell circuitry of a first memory architecture type and a second set of one or more memory dies implementing second memory cell circuitry of a second memory architecture type different than the first memory architecture type. The die-stacked hybrid memory device further includes a set of one or more logic dies electrically coupled to the first and second sets of one or more memory dies, the set of one or more logic dies comprising a memory interface and a page migration manager, the memory interface coupleable to a device external to the die-stacked hybrid memory device, and the page migration manager to transfer memory pages between the first set of one or more memory dies and the second set of one or more memory dies.
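
    A high-level sketch of a page migration manager moving pages between two sets of memory dies of different architecture types, for example a faster set and a denser set. The access-count trigger and the hot-page threshold are assumptions used only to give the migration a concrete direction; they are not part of the described device.

        # Illustrative only: pages that become "hot" in the denser dies are
        # migrated into the faster dies by the page migration manager.
        HOT_THRESHOLD = 8    # assumed trigger

        class HybridMemory:
            def __init__(self):
                self.fast = {}        # first set of memory dies (assumed faster type)
                self.dense = {}       # second set of memory dies (assumed denser type)
                self.access_count = {}

            def access(self, page):
                self.access_count[page] = self.access_count.get(page, 0) + 1
                self._migrate_if_hot(page)

            def _migrate_if_hot(self, page):
                """Move a hot page from the dense dies into the fast dies."""
                if self.access_count[page] >= HOT_THRESHOLD and page in self.dense:
                    self.fast[page] = self.dense.pop(page)

        mem = HybridMemory()
        mem.dense["page_A"] = b"payload"
        for _ in range(HOT_THRESHOLD):
            mem.access("page_A")
        assert "page_A" in mem.fast and "page_A" not in mem.dense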

    Partitionable data bus
    50.
    Granted Invention Patent (In Force)

    Publication Number: US09454419B2

    Publication Date: 2016-09-27

    Application Number: US14016610

    Application Date: 2013-09-03

    CPC classification number: G06F11/0727 G06F11/2007 G06F2201/85

    Abstract: A method and a system are provided for partitioning a system data bus. The method can include partitioning off a portion of a system data bus that includes one or more faulty bits to form a partitioned data bus. Further, the method includes transferring data over the partitioned data bus to compensate for data loss due to the one or more faulty bits in the system data bus.
