REDUCTION OF DATA TRANSFER OVERHEAD

    Publication number: US20250156187A1

    Publication date: 2025-05-15

    Application number: US18510088

    Application date: 2023-11-15

    Abstract: Aspects of the disclosure are directed to reducing data transfer overhead. In accordance with one aspect, a function call counter is incremented for each executed function call and decremented for each executed return from a function call; an indirect branch address is inferred from the function call counter value when a cumulative count of received executed atoms indicates that a return from a function call has been executed; the counter oversaturates at a maximum counter value if it already holds the maximum value and a subsequent function call is executed; and it undersaturates at a minimum counter value if it already holds the minimum value and a subsequent return from a function call is executed.
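The saturating counter behavior described in the abstract can be sketched as follows. This is an illustrative model only, not the patented hardware; the class name `CallCounter` and the bounds `MAX_COUNT`/`MIN_COUNT` are assumptions for the example.

```python
MAX_COUNT = 7   # example saturation bound, e.g. a 3-bit counter
MIN_COUNT = 0

class CallCounter:
    """Tracks nesting depth of function calls; pins (saturates) at either
    bound instead of wrapping around."""

    def __init__(self):
        self.value = MIN_COUNT

    def on_call(self):
        # Oversaturate: further calls leave the counter pinned at MAX_COUNT.
        if self.value < MAX_COUNT:
            self.value += 1

    def on_return(self):
        # Undersaturate: further returns leave the counter pinned at MIN_COUNT.
        if self.value > MIN_COUNT:
            self.value -= 1

c = CallCounter()
for _ in range(10):   # ten nested calls, but only 7 fit in the counter
    c.on_call()
print(c.value)        # 7
```

Saturating rather than wrapping means a deeply nested call sequence cannot corrupt the inferred return address once the sequence unwinds back within the counter's range.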

    HYBRID FAST PATH FILTER BRANCH PREDICTOR
    Invention Application

    Publication number: US20190073223A1

    Publication date: 2019-03-07

    Application number: US15695733

    Application date: 2017-09-05

    Abstract: Systems and methods for branch prediction include detecting a subset of branch instructions that are not fixed-direction branch instructions, and for this subset, utilizing complex branch prediction mechanisms such as a neural branch predictor. Detecting the subset includes using a state machine to determine the branch instructions whose outcomes change between a taken direction and a not-taken direction across separate instances of their execution. For the remaining branch instructions, which are fixed-direction branch instructions, the complex branch prediction techniques are avoided.
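The filtering idea above can be sketched as a small per-branch state machine: a branch is flagged as needing complex prediction only once its observed direction flips between executions. This is a hedged illustration of the concept, not the patent's circuit; the class `BranchFilter` and its fields are assumed names.

```python
class BranchFilter:
    """Flags branches whose outcome has changed direction at least once;
    only those would be routed to an expensive (e.g. neural) predictor."""

    def __init__(self):
        self.last_outcome = {}   # branch PC -> last observed direction
        self.dynamic = set()     # PCs of non-fixed-direction branches

    def observe(self, pc, taken):
        prev = self.last_outcome.get(pc)
        if prev is not None and prev != taken:
            self.dynamic.add(pc)          # direction flipped: not fixed
        self.last_outcome[pc] = taken

    def needs_complex_prediction(self, pc):
        return pc in self.dynamic

f = BranchFilter()
for taken in (True, True, True):
    f.observe(0x400, taken)              # always taken: fixed direction
for taken in (True, False, True):
    f.observe(0x500, taken)              # flips: dynamic

print(f.needs_complex_prediction(0x400))  # False
print(f.needs_complex_prediction(0x500))  # True
```

Since most branches in typical workloads are strongly biased, a filter like this keeps the costly predictor's tables and latency reserved for the small dynamic subset.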

    PRIORITY-BASED CACHE-LINE FITTING IN COMPRESSED MEMORY SYSTEMS OF PROCESSOR-BASED SYSTEMS

    Publication number: US20230236979A1

    Publication date: 2023-07-27

    Application number: US17572472

    Application date: 2022-01-10

    Abstract: A compressed memory system includes a memory region that includes cache lines having priority levels. The compressed memory system also includes a compressed memory region that includes compressed cache lines. Each compressed cache line includes a first set of data bits configured to hold, in a first direction, either a portion of a first cache line or a portion of the first cache line after compression, the first cache line having a first priority level. Each compressed cache line also includes a second set of data bits configured to hold, in a second direction opposite to the first direction, either a portion of a second cache line or a portion of the second cache line after compression, the second cache line having a priority level lower than the first priority level. The first set of data bits includes a greater number of bits than the second set of data bits.

    Priority-Based Cache-Line Fitting in Compressed Memory Systems of Processor-Based Systems

    Publication number: US20230236961A1

    Publication date: 2023-07-27

    Application number: US17572471

    Application date: 2022-01-10

    CPC classification number: G06F12/023 G06F2212/401

    Abstract: A compressed memory system of a processor-based system includes a memory partitioning circuit for partitioning a memory region into data regions with different priority levels. The system also includes a cache line selection circuit for selecting a first cache line from a high-priority data region and a second cache line from a low-priority data region. The system also includes a compression circuit for compressing the cache lines to obtain a first and a second compressed cache line. The system also includes a cache line packing circuit for packing the compressed cache lines such that the first compressed cache line is written to a first predetermined portion, and the second compressed cache line, or a portion of it, is written to a second predetermined portion of the candidate compressed cache line. The first predetermined portion is larger than the second predetermined portion.
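The packing scheme in these two abstracts can be modeled as filling one candidate line from opposite ends: the larger, high-priority portion grows forward from byte 0, and the smaller, low-priority portion grows from the opposite end. This is a byte-level sketch under assumed sizes (`LINE_SIZE`, `HIGH_PORTION`), not the patented packing circuit.

```python
LINE_SIZE = 64                         # bytes in the candidate compressed line
HIGH_PORTION = 48                      # first set of data bits (high priority)
LOW_PORTION = LINE_SIZE - HIGH_PORTION # second, smaller set (low priority)

def pack(high_compressed: bytes, low_compressed: bytes) -> bytes:
    """Pack two compressed cache lines into one candidate line, high-priority
    data from the front and low-priority data from the back."""
    line = bytearray(LINE_SIZE)

    # High-priority line: written in the first direction, from byte 0 onward,
    # truncated to its (larger) predetermined portion.
    hi = high_compressed[:HIGH_PORTION]
    line[:len(hi)] = hi

    # Low-priority line: written in the opposite direction, i.e. occupying
    # bytes up to the end of the line, truncated to its smaller portion.
    lo = low_compressed[:LOW_PORTION]
    line[LINE_SIZE - len(lo):] = lo

    return bytes(line)

packed = pack(b"A" * 10, b"B" * 5)
print(packed[:10])    # high-priority bytes at the front
print(packed[-5:])    # low-priority bytes at the back
```

Giving the high-priority region the larger portion means its lines survive packing intact more often, while low-priority lines absorb the truncation when both compressed lines do not fit.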
