System bus transaction queue reallocation

    Publication Number: US12135658B2

    Publication Date: 2024-11-05

    Application Number: US17644130

    Application Date: 2021-12-14

    Abstract: A bus architecture is disclosed that provides for transaction queue reallocation on the modules communicating over the bus. A module can implement a transaction request queue by virtue of digital electronic circuitry, e.g., hardware or software or a combination of both. Some bus clogging issues that affect conventional systems can be circumvented by using an out-of-order system bus protocol together with a transaction request replay mechanism. Modules can evict less urgent transactions from transaction request queues to make room to insert more urgent transactions. Master modules can dynamically update a quality of service (QoS) value for a transaction while the transaction is still pending.
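
    A minimal sketch (not taken from the patent) of how a master module's request queue might evict its least urgent pending transaction to make room for a more urgent one, and how a QoS value could be updated while a transaction is still pending. The names Transaction, TxnQueue, and the eviction policy are illustrative assumptions.

        #include <algorithm>
        #include <cstddef>
        #include <cstdint>
        #include <optional>
        #include <vector>

        // Illustrative pending bus transaction with a QoS (urgency) value.
        struct Transaction {
            uint32_t id;
            uint32_t qos;      // higher value = more urgent
            uint64_t address;
        };

        class TxnQueue {
        public:
            explicit TxnQueue(size_t capacity) : capacity_(capacity) {}

            // Insert a transaction; if the queue is full, evict the least urgent
            // pending entry so it can be replayed later by the bus protocol.
            std::optional<Transaction> insert(const Transaction& txn) {
                std::optional<Transaction> to_replay;
                if (pending_.size() == capacity_) {
                    auto victim = std::min_element(
                        pending_.begin(), pending_.end(),
                        [](const Transaction& a, const Transaction& b) { return a.qos < b.qos; });
                    if (victim->qos >= txn.qos) return txn;  // nothing less urgent: the new request itself is replayed later
                    to_replay = *victim;
                    pending_.erase(victim);
                }
                pending_.push_back(txn);
                return to_replay;  // caller schedules a replay for any evicted transaction
            }

            // Dynamically raise (or lower) the QoS of a transaction that is still pending.
            bool update_qos(uint32_t id, uint32_t new_qos) {
                for (auto& t : pending_)
                    if (t.id == id) { t.qos = new_qos; return true; }
                return false;
            }

        private:
            size_t capacity_;
            std::vector<Transaction> pending_;
        };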

    MODULAR ELECTRONIC APPARATUS FOR DISTRIBUTION OF SATELLITE SIGNALS

    Publication Number: US20240037050A1

    Publication Date: 2024-02-01

    Application Number: US18279190

    Application Date: 2022-02-28

    CPC classification number: G06F13/364 H04Q1/136 H04Q2201/04 H04Q2201/10

    Abstract: The application relates to a modular electronic apparatus (1) for distribution of RF communication signals. The apparatus comprises a chassis (2) arranged to removably receive plural modules (3), at least some of which are arranged to receive and process RF communication signals. A communication path (17) is provided for the modules to communicate with each other and/or with the chassis. Plural modules are received in the chassis. When a module is received in the chassis, it is arranged to broadcast a message over the communication path indicating its presence in the chassis and its type. At least one other module is arranged to adapt its behaviour in response to the message.
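
    A hedged sketch of the presence-broadcast mechanism described in the abstract: a newly inserted module announces its slot and type over the shared communication path, and other modules adapt in response. The message layout, class names, and callback style are assumptions for illustration only.

        #include <cstdint>
        #include <functional>
        #include <utility>
        #include <vector>

        // Hypothetical message a newly inserted module broadcasts over the path.
        struct PresenceMessage {
            uint8_t  slot;         // chassis slot the module was inserted into
            uint16_t module_type;  // e.g. tuner, multiswitch, power supply
        };

        // Minimal path abstraction: every registered module sees every broadcast.
        class CommunicationPath {
        public:
            using Handler = std::function<void(const PresenceMessage&)>;
            void subscribe(Handler h) { handlers_.push_back(std::move(h)); }
            void broadcast(const PresenceMessage& msg) {
                for (auto& h : handlers_) h(msg);
            }
        private:
            std::vector<Handler> handlers_;
        };

        // A module announces itself when received in the chassis and adapts its
        // behaviour when it learns about other modules.
        class Module {
        public:
            Module(uint8_t slot, uint16_t type, CommunicationPath& path)
                : slot_(slot), type_(type), path_(path) {
                path_.subscribe([this](const PresenceMessage& m) { on_presence(m); });
            }
            void inserted() { path_.broadcast({slot_, type_}); }

        private:
            void on_presence(const PresenceMessage& m) {
                if (m.slot == slot_) return;  // ignore our own announcement
                // Example adaptation: reconfigure signal routing for the new module type.
            }
            uint8_t  slot_;
            uint16_t type_;
            CommunicationPath& path_;
        };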

    HIGHLY SCALABLE ACCELERATOR
    Invention Publication

    Publication Number: US20230251986A1

    Publication Date: 2023-08-10

    Application Number: US18296875

    Application Date: 2023-04-06

    CPC classification number: G06F13/364 G06F9/5027 G06F13/24

    Abstract: Embodiments of apparatuses, methods, and systems for highly scalable accelerators are described. In an embodiment, an apparatus includes an interface to receive a plurality of work requests from a plurality of clients and a plurality of engines to perform the plurality of work requests. The work requests are to be dispatched to the plurality of engines from a plurality of work queues. The work queues are to store a work descriptor per work request. Each work descriptor is to include all information needed to perform a corresponding work request.
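
    A minimal sketch of the dispatch structure the abstract describes: self-contained work descriptors queued in work queues and handed to whichever engine is free. The descriptor fields, class names, and round-robin-free dispatch policy are illustrative assumptions, not the patented design.

        #include <cstddef>
        #include <cstdint>
        #include <deque>
        #include <vector>

        // Illustrative self-contained work descriptor: everything an engine needs
        // to perform the request, so no extra client state has to be looked up.
        struct WorkDescriptor {
            uint32_t client_id;
            uint32_t opcode;    // operation the engine should perform
            uint64_t src_addr;
            uint64_t dst_addr;
            uint32_t length;
        };

        struct Engine {
            bool busy = false;
            void run(const WorkDescriptor& d) { busy = true; /* perform d, then clear busy */ }
        };

        // Clients enqueue descriptors into work queues; descriptors are dispatched
        // from the queues to any engine that is currently free.
        class Accelerator {
        public:
            Accelerator(size_t num_queues, size_t num_engines)
                : queues_(num_queues), engines_(num_engines) {}

            void submit(size_t queue, const WorkDescriptor& d) { queues_[queue].push_back(d); }

            void dispatch() {
                for (auto& q : queues_) {
                    if (q.empty()) continue;
                    for (auto& e : engines_) {
                        if (!e.busy) { e.run(q.front()); q.pop_front(); break; }
                    }
                }
            }

        private:
            std::vector<std::deque<WorkDescriptor>> queues_;
            std::vector<Engine> engines_;
        };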

    Resource allocation in a multi-processor system

    Publication Number: US11714647B2

    Publication Date: 2023-08-01

    Application Number: US17527288

    Application Date: 2021-11-16

    Abstract: A system includes a memory-mapped register (MMR) associated with a claim logic circuit, a claim field for the MMR, a first firewall for a first address region, and a second firewall for a second address region. The MMR is associated with an address in the first address region and an address in the second address region. The first firewall is configured to pass a first write request for an address in the first address region to the claim logic circuit associated with the MMR. The claim logic circuit associated with the MMR is configured to grant or deny the first write request based on the claim field for the MMR. Further, the second firewall is configured to receive a second write request for an address in the second address region and grant or deny the second write request based on a permission level associated with the second write request.
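
    A hedged model of the two write paths the abstract describes: one address region whose writes go through claim logic gated by the register's claim field, and a second region whose writes are granted or denied purely on a permission level. The struct layout and field names are assumptions for illustration.

        #include <cstdint>

        // Write request as seen by either firewall.
        struct WriteRequest {
            uint64_t address;
            uint32_t value;
            uint32_t host_id;     // issuing processor/core
            uint32_t permission;  // privilege level carried with the request
        };

        // Memory-mapped register with an associated claim field.
        struct ClaimedRegister {
            uint32_t value = 0;
            uint32_t claim = 0;   // 0 = unclaimed, else the owning host_id

            // Region-1 path: the firewall passes the write to claim logic, which
            // grants it only if the register is unclaimed or claimed by the writer.
            bool claimed_write(const WriteRequest& req) {
                if (claim != 0 && claim != req.host_id) return false;  // deny
                claim = req.host_id;
                value = req.value;
                return true;
            }

            // Region-2 path: the firewall grants or denies based on the
            // permission level associated with the write request.
            bool privileged_write(const WriteRequest& req, uint32_t required_permission) {
                if (req.permission < required_permission) return false;  // deny
                value = req.value;
                return true;
            }
        };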

    Network credit return mechanisms
    Invention Grant

    Publication Number: US11580044B2

    Publication Date: 2023-02-14

    Application Number: US17007814

    Application Date: 2020-08-31

    Inventor: Tony Brewer

    Abstract: Implementations of the present disclosure are directed to systems and methods for reducing design complexity and critical path timing challenges of credit return logic. A wide bus supports simultaneous transmission of multiple flits, one per lane of the wide bus. A source device transmitting flits on a wide bus selects from among multiple credit return options to ensure that only one of the multiple flits being simultaneously transmitted includes a credit return value. In some example embodiments, the receiving device checks only the flit of one lane of the wide bus (e.g., lane 0) for credit return data. In other example embodiments, the receiving device uses a bitwise-OR to combine the credit return data of all received flits in a single cycle.
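
    A minimal sketch of the two receiver-side options named in the abstract: read credit-return data only from lane 0, or bitwise-OR the credit fields of all flits received in a cycle, which works because the source guarantees at most one lane carries a nonzero credit value. The Flit layout and function names are assumptions.

        #include <cstdint>
        #include <vector>

        // Illustrative flit with an embedded credit-return field; the transmitter
        // ensures at most one flit per cycle carries a nonzero credit value.
        struct Flit {
            uint64_t payload;
            uint32_t credit_return;  // credits being returned to the sender
        };

        // Option 1: check only lane 0 of the wide bus for credit-return data.
        uint32_t credits_from_lane0(const std::vector<Flit>& lanes) {
            return lanes.empty() ? 0 : lanes[0].credit_return;
        }

        // Option 2: bitwise-OR the credit fields of all lanes received in one
        // cycle; since only one lane is nonzero, the OR recovers its value.
        uint32_t credits_from_any_lane(const std::vector<Flit>& lanes) {
            uint32_t credits = 0;
            for (const auto& f : lanes) credits |= f.credit_return;
            return credits;
        }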
