TRANSACTION ELIMINATION USING METADATA

    Publication No.: US20180253258A1

    Publication Date: 2018-09-06

    Application No.: US15448203

    Filing Date: 2017-03-02

    Abstract: Various aspects are described herein. In some aspects, the present disclosure provides a method of communicating data between an electronic unit of a system-on-chip (SoC) and a dynamic random access memory (DRAM). The method includes initiating a memory transaction corresponding to first data. The method includes determining a non-unique first signature and a unique second signature associated with the first data based on content of the first data. The method includes determining if the non-unique first signature is stored in at least one of a local buffer on the SoC separate from the DRAM or the DRAM. The method includes determining if the unique second signature is stored in at least one of the local buffer or the DRAM based on determining the non-unique first signature is stored. The method includes eliminating the memory transaction with respect to the DRAM based on determining the unique second signature is stored.
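
    For illustration only, the two-tier signature check described above can be sketched in a few lines of Python; the specific hash choices (CRC-32 as the small non-unique signature, SHA-256 treated as effectively unique) and the table layout are assumptions, not details taken from the application.

        import hashlib
        import zlib

        # Assumed stand-ins: CRC-32 as the small, non-unique signature and
        # SHA-256 as the "unique" signature (collisions treated as negligible).
        def signatures(data: bytes):
            return zlib.crc32(data), hashlib.sha256(data).digest()

        # local_buffer and dram_table model the on-SoC buffer and the DRAM-side
        # signature store as maps from the non-unique signature to the set of
        # unique signatures already written.
        def write_transaction(data, local_buffer, dram_table):
            weak, strong = signatures(data)
            for table in (local_buffer, dram_table):
                if weak in table and strong in table[weak]:
                    return "eliminated"          # identical content already stored
            local_buffer.setdefault(weak, set()).add(strong)
            return "forwarded to DRAM"

        local_buffer, dram_table = {}, {}
        print(write_transaction(b"frame-0", local_buffer, dram_table))  # forwarded to DRAM
        print(write_transaction(b"frame-0", local_buffer, dram_table))  # eliminated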

    CONFIGURABLE SPREADING FUNCTION FOR MEMORY INTERLEAVING
    Status: Granted

    Publication No.: US20150095595A1

    Publication Date: 2015-04-02

    Application No.: US14251626

    Filing Date: 2014-04-13

    CPC classification number: G06F12/0607 G06F2212/1016

    Abstract: A method of interleaving a memory by mapping address bits of the memory to a number N of memory channels iteratively in successive rounds, wherein in each round except the last round: selecting a unique subset of address bits, determining a maximum number (L) of unique combinations possible based on the selected subset of address bits, mapping combinations to the N memory channels a maximum number of times (F) possible such that each of the N memory channels gets mapped to an equal number of combinations, and if and when a number of combinations (K, which is less than N) remain that cannot be mapped one to each of the N memory channels, entering a next round. In the last round, mapping remaining most significant address bits, not used in the subsets in prior rounds, to each of the N memory channels.
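
    One plausible reading of the round structure is sketched below. The per-round quantities follow the abstract (L combinations, F full passes over the N channels, K leftovers carried forward), while the particular bit subsets, the carry handling, and the last-round tie-break are illustrative assumptions.

        def extract_bits(addr, positions):
            """Gather the given address bits (LSB-first) into a small integer."""
            value = 0
            for i, pos in enumerate(positions):
                value |= ((addr >> pos) & 1) << i
            return value

        def channel_for_address(addr, n_channels, round_subsets, last_round_bits):
            """Round-based spreading: each round maps as many bit combinations as
            possible evenly across the N channels; the K leftover combinations
            (K < N) are carried into the next round."""
            carry, n_carry = 0, 1                      # leftover combination and count
            for subset in round_subsets:
                combo = carry * (1 << len(subset)) + extract_bits(addr, subset)
                L = n_carry * (1 << len(subset))       # unique combinations this round
                F = L // n_channels                    # full passes over the channels
                K = L % n_channels                     # combinations left over (K < N)
                if combo < F * n_channels:
                    return combo % n_channels          # mapped in this round
                carry, n_carry = combo - F * n_channels, K   # enter the next round
            # Last round: remaining most significant bits pick among the channels.
            msb = extract_bits(addr, last_round_bits)
            return (carry * (1 << len(last_round_bits)) + msb) % n_channels

        # Example: 3 channels, two intermediate rounds over address bits [6,7] and
        # [8,9], with bits [10,11] as the final most-significant tie-break.
        print(channel_for_address(0x2C0, 3, [[6, 7], [8, 9]], [10, 11]))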

    PROCESSOR SYSTEM INCLUDING TABLE-BASED MEMORY PROTECTION FOR IMPROVED PERFORMANCE FOR SHARED MEMORY

    Publication No.: US20240273030A1

    Publication Date: 2024-08-15

    Application No.: US18399525

    Filing Date: 2023-12-28

    CPC classification number: G06F12/0877 G06F12/0822

    Abstract: A processor system, comprising: a first memory comprising a memory protection table including memory access permission information associated with a set of one or more worlds; and a processor comprising: an execution core configured to run a first world; and a table-based memory protection (TMP) unit configured to: receive a first request to access memory content at a first target address from the first world; access the memory access permission information from the memory protection table based on the first target address; and determine whether the first world is allowed to access the memory content at the first target address based on the accessed memory access permission information.
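
    A minimal software model of the described lookup path is sketched below; the table layout (one permission word per address range per world) and the permission encoding are assumptions chosen for illustration.

        # Assumed permission bits for the model.
        READ, WRITE = 0x1, 0x2

        # memory_protection_table: list of (start, end, {world_id: permission_bits}).
        memory_protection_table = [
            (0x0000_0000, 0x0FFF_FFFF, {0: READ | WRITE, 1: READ}),   # shared region
            (0x1000_0000, 0x1FFF_FFFF, {1: READ | WRITE}),            # world 1 private
        ]

        def tmp_check(world_id, target_addr, requested):
            """Model of the table-based memory protection check: find the entry
            covering the target address and test the requesting world's bits."""
            for start, end, perms in memory_protection_table:
                if start <= target_addr <= end:
                    return (perms.get(world_id, 0) & requested) == requested
            return False    # no entry covers the address: deny by default

        print(tmp_check(0, 0x0800_0000, READ))    # True: world 0 may read the shared region
        print(tmp_check(0, 0x1800_0000, WRITE))   # False: world 1's private region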

    MATRIX MULTIPLIER IMPLEMENTED TO PERFORM CONCURRENT STORE AND MULTIPLY-ACCUMULATE (MAC) OPERATIONS

    Publication No.: US20240169018A1

    Publication Date: 2024-05-23

    Application No.: US17989448

    Filing Date: 2022-11-17

    CPC classification number: G06F17/16 G06F7/5443

    Abstract: An apparatus, including: a memory; a matrix multiplier engine, comprising: an array of multiplier-accumulate units (MAUs) comprising: a set of multipliers; a first set of accumulators; and a second set of accumulators; and a controller configured to concurrently: cause a first set of resultant values in the first set of accumulators to be transferred to the memory pursuant to a first set of store instructions, wherein the first set of resultant values was generated pursuant to a first set of multiply-accumulate (MAC) operations performed by the set of multipliers and the first set of accumulators; and cause the set of multipliers and the second set of accumulators to perform a second set of MAC operations.
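
    The double-banked accumulator arrangement can be modelled in software as sketched below; the tile size, the ping-pong scheduling, and the purely sequential simulation of what the hardware does concurrently are assumptions for illustration.

        import numpy as np

        def tiled_matmul(A, B, tile=2):
            """Two accumulator banks alternate roles: while one bank's finished
            tile is written back to memory, the other bank performs the next
            tile's multiply-accumulate (MAC) operations. The concurrency is only
            modelled here by pairing the two steps inside one loop iteration."""
            m, _ = A.shape
            _, n = B.shape
            C = np.zeros((m, n))
            banks = [np.zeros((tile, n)), np.zeros((tile, n))]
            pending = None                          # (bank_index, row_offset) awaiting store
            for t, row in enumerate(range(0, m, tile)):
                active = t % 2
                # MAC phase into the active bank ...
                banks[active][:] = A[row:row + tile, :] @ B
                # ... paired with the store of the previous bank's results.
                if pending is not None:
                    idx, off = pending
                    C[off:off + tile, :] = banks[idx]
                pending = (active, row)
            idx, off = pending                      # drain the last bank
            C[off:off + tile, :] = banks[idx]
            return C

        A, B = np.arange(16).reshape(4, 4), np.eye(4)
        print(np.allclose(tiled_matmul(A, B), A @ B))   # True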

    DATA RE-ENCODING FOR ENERGY-EFFICIENT DATA TRANSFER IN A COMPUTING DEVICE

    Publication No.: US20230031310A1

    Publication Date: 2023-02-02

    Application No.: US17390215

    Filing Date: 2021-07-30

    Abstract: The energy consumed by data transfer in a computing device may be reduced by transferring data that has been encoded in a manner that reduces the number of one “1” data values, the number of signal level transitions, or both. A data destination component of the computing device may receive data encoded in such a manner from a data source component of the computing device over a data communication interconnect, such as an off-chip interconnect. The data may be encoded using minimum Hamming weight encoding, which reduces the number of one “1” data values. The received data may be decoded using minimum Hamming weight decoding. For other computing devices, the data may be encoded using maximum Hamming weight encoding, which increases the number of one “1” data values while reducing the number of zero “0” values, if reducing the number of zero values reduces energy consumption.
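
    A common way to bound the number of one-bits is the invert-and-flag scheme sketched below; reading the abstract's minimum Hamming weight encoding as this particular scheme, and the 8-bit word size, are assumptions for illustration.

        def encode_min_weight(word, width=8):
            """If more than half of the bits are '1', transmit the bitwise
            complement plus an inversion flag, so the transmitted word never
            carries more than width // 2 one-bits (plus the flag)."""
            ones = bin(word).count("1")
            if ones > width // 2:
                return (~word) & ((1 << width) - 1), 1   # inverted payload, flag set
            return word, 0

        def decode_min_weight(payload, flag, width=8):
            return (~payload) & ((1 << width) - 1) if flag else payload

        raw = 0b11101101                       # six one-bits
        payload, flag = encode_min_weight(raw)
        print(f"{payload:08b}", flag)          # 00010010 1 -> only two one-bits
        print(decode_min_weight(payload, flag) == raw)   # True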

    SYSTEM AND METHOD FOR IMPROVED MEMORY PERFORMANCE USING CACHE LEVEL HASHING

    Publication No.: US20170249249A1

    Publication Date: 2017-08-31

    Application No.: US15054295

    Filing Date: 2016-02-26

    Abstract: Various embodiments of methods and systems for cache-level memory management in a system on a chip ("SoC") are disclosed. Memory utilization is optimized in certain embodiments through application of customized hashing algorithms at the lower level cache of individual application clients. Advantageously, application clients that do not require or benefit from hashing of transaction traffic have their transactions left unhashed. Each application client that does benefit from hashing transaction traffic, in order to minimize page conflicts at a double data rate ("DDR") memory device, benefits from a customized, and thus optimized, hashing algorithm. Because transaction streams arrive at the memory controller already hashed, or purposefully unhashed, the need for validating clients during a development phase is minimized.
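
    A per-client hash applied ahead of the memory controller can be modelled as sketched below; the XOR-folding hash, the bank-bit positions, and the example client names are illustrative assumptions.

        def xor_fold_bank(addr, bank_bits=3, fold_shift=13):
            """Spread DRAM bank selection by XOR-ing higher address bits into the
            bank index, so strided streams do not all land on the same bank."""
            mask = (1 << bank_bits) - 1
            return ((addr >> 6) ^ (addr >> fold_shift)) & mask

        # Per-client configuration: clients that benefit get a customized hash,
        # others are deliberately left unhashed.
        client_hash = {
            "gpu":     lambda a: xor_fold_bank(a, bank_bits=3, fold_shift=13),
            "display": lambda a: xor_fold_bank(a, bank_bits=3, fold_shift=16),
            "modem":   None,                       # purposefully unhashed traffic
        }

        def bank_for(client, addr):
            h = client_hash.get(client)
            return h(addr) if h else (addr >> 6) & 0x7   # default: plain bank bits

        # An 8 KiB-strided stream maps to one bank unhashed, but spreads when hashed.
        stream = [base * 8192 for base in range(8)]
        print({bank_for("modem", a) for a in stream})    # {0}: every access hits bank 0
        print({bank_for("gpu", a) for a in stream})      # multiple banks after hashing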

    SYSTEMS AND METHODS OF MEMORY BIT FLIP IDENTIFICATION FOR DEBUGGING AND POWER MANAGEMENT
    Status: Granted

    Publication No.: US20170046219A1

    Publication Date: 2017-02-16

    Application No.: US14823879

    Filing Date: 2015-08-11

    Abstract: Various embodiments of methods and systems for bit flip identification for debugging and/or power management in a system on a chip ("SoC") are disclosed. Exemplary embodiments seek to identify bit flips close in time to their occurrence by checking parity values of data blocks as the data blocks are written into a memory component. In this way, bit flips occurring in association with a write transaction may be differentiated from bit flips occurring in association with a read transaction. The distinction, taken in conjunction with various parameter levels captured at the time a bit flip is recognized, may be useful for debugging a memory component or, in a runtime environment, for adjusting thermal and power policies that may be contributing to bit flip occurrences.
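
    The write-time parity check can be sketched as below; the per-block even parity, the immediate read-back, and the logged context fields are assumptions used to illustrate the idea rather than the disclosed circuit.

        def even_parity(block: bytes) -> int:
            """Even parity over all bits of the block: 0 if the count of
            one-bits is even, 1 otherwise."""
            ones = sum(bin(b).count("1") for b in block)
            return ones & 1

        def write_block(memory, addr, block, context):
            """Store the block with its parity, then re-read and re-check so a
            flip on the write path is caught near the time it happened."""
            memory[addr] = (bytes(block), even_parity(block))
            stored, parity = memory[addr]
            if even_parity(stored) != parity:
                # In hardware this would also latch voltage/temperature levels.
                print("write-path bit flip at", hex(addr), "context:", context)

        def read_block(memory, addr, context):
            stored, parity = memory[addr]
            if even_parity(stored) != parity:
                print("read-path (retention) bit flip at", hex(addr), "context:", context)
            return stored

        memory = {}
        write_block(memory, 0x1000, b"\x0f\xf0", {"temp_C": 45, "rail_mV": 750})
        print(read_block(memory, 0x1000, {"temp_C": 45, "rail_mV": 750}))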
