Multicore Bus Architecture With Wire Reduction and Physical Congestion Minimization Via Shared Transaction Channels
    Invention Application, Under Examination (Published)

    Publication Number: US20160124890A1

    Publication Date: 2016-05-05

    Application Number: US14530266

    Filing Date: 2014-10-31

    CPC classification number: G06F13/4252 G06F13/362

    Abstract: The Multicore Bus Architecture (MBA) protocol includes a novel technique of sharing the same physical channel for all transaction types. Two channels are used: the Transaction Attribute Channel (TAC) and the Transaction Data Channel (TDC). The attribute channel transmits bus transaction attribute information, optionally including a transaction type signal, a transaction ID, a valid signal, a bus agent ID signal, an address signal, a transaction size signal, a credit spend signal and a credit return signal. The data channel is connected to a data subset of the bus signal lines, separate from the attribute subset of the bus signal lines. The data channel optionally transmits a data valid signal, a transaction ID signal, a bus agent ID signal and a last data signal to mark the last data of the current bus transaction.

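    The abstract describes two shared channels, the TAC carrying transaction attributes and the TDC carrying data. A minimal C sketch of the two signal bundles is given below; the field names and widths are illustrative assumptions and are not specified in the abstract.

#include <stdint.h>

/* Hypothetical layout of one beat on each of the two shared channels.
 * Field names and widths are illustrative; a real MBA implementation
 * defines these as RTL signals, not C structures. */
typedef struct {
    uint8_t  valid;         /* attribute valid */
    uint8_t  ttype;         /* transaction type */
    uint16_t tid;           /* transaction ID */
    uint16_t agent_id;      /* originating bus agent ID */
    uint64_t address;       /* transaction address */
    uint8_t  size;          /* encoded transaction size */
    uint8_t  credit_spend;  /* credit spent with this transaction */
    uint8_t  credit_return; /* credits returned to the other agent */
} tac_beat_t;               /* Transaction Attribute Channel (TAC) */

typedef struct {
    uint8_t  dvalid;        /* data valid */
    uint16_t tid;           /* transaction ID the data belongs to */
    uint16_t agent_id;      /* bus agent ID */
    uint8_t  last;          /* marks the last data of the transaction */
    uint64_t data;          /* data payload (width is an assumption) */
} tdc_beat_t;               /* Transaction Data Channel (TDC) */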

    OPTIMUM CACHE ACCESS SCHEME FOR MULTI ENDPOINT ATOMIC ACCESS IN A MULTICORE SYSTEM
    Invention Application, Granted

    Publication Number: US20140115265A1

    Publication Date: 2014-04-24

    Application Number: US14061494

    Filing Date: 2013-10-23

    Abstract: The MSMC (Multicore Shared Memory Controller) described here is a module designed to manage traffic between multiple processor cores, other mastering peripherals or DMA, and the EMIF (External Memory InterFace) in a multicore SoC. The invention unifies all transaction sizes belonging to a slave prior to arbitrating the transactions, in order to reduce the complexity of the arbitration process and to provide optimum bandwidth management among all masters. The two consecutive slots assigned per cache line access are always in the same direction for maximum access rate.

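    The key idea in the abstract is that transactions of different sizes are normalized into uniform units before arbitration, so the arbiter only compares equal-sized slots and a cache line access always occupies two consecutive slots in the same direction. The C sketch below illustrates that normalization step; the slot size and structure names are assumptions, not values taken from the patent.

#include <stddef.h>
#include <stdint.h>

#define SLOT_BYTES 32u                /* assumed slot size */

typedef struct {
    uint32_t master_id;
    uint64_t address;
    uint32_t bytes;                   /* original transaction size */
    int      is_write;                /* transaction direction */
} msmc_request_t;

typedef struct {
    uint32_t master_id;
    uint64_t address;
    int      is_write;
} msmc_slot_t;

/* Break one master request into fixed-size slots so the arbiter only ever
 * compares equal-sized units; every slot of the request keeps the same
 * direction as the original access. Returns the number of slots produced. */
static size_t msmc_unify(const msmc_request_t *req,
                         msmc_slot_t *out, size_t max_slots)
{
    size_t n = (req->bytes + SLOT_BYTES - 1u) / SLOT_BYTES;
    if (n > max_slots)
        n = max_slots;
    for (size_t i = 0; i < n; i++) {
        out[i].master_id = req->master_id;
        out[i].address   = req->address + (uint64_t)i * SLOT_BYTES;
        out[i].is_write  = req->is_write;
    }
    return n;
}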

    BUS ARCHITECTURE WITH TRANSACTION CREDIT SYSTEM

    Publication Number: US20250045230A1

    Publication Date: 2025-02-06

    Application Number: US18814700

    Filing Date: 2024-08-26

    Abstract: This invention is a bus communication protocol. A master device stores bus credits. The master device may transmit a bus transaction only if it holds a sufficient number and type of bus credits. Upon transmission, the master device decrements the number of stored bus credits. The bus credits correspond to resources on a slave device for receiving bus transactions. The slave device must receive the bus transaction if it is accompanied by the proper credits. The slave device services the transaction and then transmits a credit return. The master device adds the corresponding number and types of credits to the stored amount. The slave device is then ready to accept another bus transaction, and the master device is re-enabled to initiate a bus transaction. In many types of interactions, a bus agent may act as both master and slave depending upon the state of the process.
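    The credit handshake described in the abstract can be summarized in a few lines of C. The sketch below assumes a single credit type backed by slave buffers; the abstract allows multiple credit types, so this is only an illustration of the spend, decrement and return cycle.

#include <stdbool.h>

typedef struct {
    int credits;            /* bus credits currently held by the master */
} credit_master_t;

typedef struct {
    int busy_buffers;       /* slave resources in use */
} credit_slave_t;

/* Master side: a transaction may only be sent while a credit is held;
 * the stored credit count is decremented on transmission. */
static bool master_send(credit_master_t *m)
{
    if (m->credits <= 0)
        return false;       /* insufficient credits: may not transmit */
    m->credits--;
    return true;
}

/* Slave side: a transaction that arrives with a credit must be accepted;
 * after servicing it, the slave returns the credit, which re-enables the
 * master to initiate another transaction. */
static void slave_service(credit_slave_t *s, credit_master_t *m)
{
    s->busy_buffers++;      /* resource occupied by the accepted transaction */
    /* ... service the transaction ... */
    s->busy_buffers--;      /* resource freed */
    m->credits++;           /* credit return to the master */
}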

    Slot/sub-slot prefetch architecture for multiple memory requestors

    Publication Number: US11074190B2

    Publication Date: 2021-07-27

    Application Number: US16552418

    Filing Date: 2019-08-27

    Abstract: A prefetch unit generates a prefetch address in response to an address associated with a memory read request received from the first or second cache. The prefetch unit includes a prefetch buffer that is arranged to store the prefetch address in an address buffer of a selected slot of the prefetch buffer, where each slot of the prefetch buffer includes a buffer for storing a prefetch address and two sub-slots. Each sub-slot includes a data buffer for storing data that is prefetched using the prefetch address stored in the slot, and one of the two sub-slots of the slot is selected in response to a portion of the generated prefetch address. Subsequent hits on the prefetcher return prefetched data to the requestor in response to a later memory read request received after the initial memory read request.
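    The slot/sub-slot organization described in the abstract maps naturally onto a small data structure: each slot holds one prefetch address and two data sub-slots, and a portion of the prefetch address selects the sub-slot. The C sketch below is an illustration only; the number of slots, the line size, and the bit used for sub-slot selection are assumptions.

#include <stdint.h>

#define PF_NUM_SLOTS  8               /* assumed number of slots */
#define PF_LINE_BYTES 64              /* assumed line size */

typedef struct {
    uint8_t valid;
    uint8_t data[PF_LINE_BYTES];      /* data prefetched for this sub-slot */
} pf_subslot_t;

typedef struct {
    uint64_t     prefetch_addr;       /* address buffer of the slot */
    uint8_t      addr_valid;
    pf_subslot_t sub[2];              /* two sub-slots per slot */
} pf_slot_t;

typedef struct {
    pf_slot_t slots[PF_NUM_SLOTS];
} pf_buffer_t;

/* One of the two sub-slots is selected by a portion of the generated
 * prefetch address; here the first address bit above the line offset is
 * used, which is an assumed choice. */
static int pf_select_subslot(uint64_t prefetch_addr)
{
    return (int)((prefetch_addr >> 6) & 1u);
}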
