Lookahead scheme for prioritized reads
    1.
    Granted patent
    Lookahead scheme for prioritized reads (In force)

    Publication number: US09009369B2

    Publication date: 2015-04-14

    Application number: US13282873

    Filing date: 2011-10-27

    Abstract: A circular queue implementing a scheme for prioritized reads is disclosed. In one embodiment, a circular queue (or buffer) includes a number of storage locations each configured to store a data value. A multiplexer tree is coupled between the storage locations and a read port. A priority circuit is configured to generate and provide selection signals to each multiplexer of the multiplexer tree, based on a priority scheme. Based on the states of the selection signals, one of the storage locations is coupled to the read port via the multiplexers of the multiplexer tree.

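    The abstract above can be illustrated with a behavioral model: a priority circuit picks one valid storage location, and its index is turned into one select bit per level of a binary multiplexer tree that steers the value to the read port. This is a sketch, not the patent's circuit; the 8-entry depth and the oldest-valid-first priority scheme are illustrative assumptions the abstract does not fix.

    ```python
    # Behavioral sketch (not the patent's RTL) of a circular queue read
    # through a binary mux tree under a priority circuit. Depth and the
    # oldest-first priority scheme are assumptions for illustration.

    class PrioritizedCircularQueue:
        def __init__(self, depth=8):
            assert depth & (depth - 1) == 0, "depth must be a power of two"
            self.depth = depth
            self.storage = [None] * depth   # the storage locations
            self.valid = [False] * depth
            self.head = 0                    # oldest entry
            self.tail = 0                    # next free slot

        def write(self, value):
            assert not self.valid[self.tail], "queue full"
            self.storage[self.tail] = value
            self.valid[self.tail] = True
            self.tail = (self.tail + 1) % self.depth

        def _priority_select(self):
            # Priority circuit: scan from the head; first valid entry wins
            # (assumed oldest-first scheme).
            for offset in range(self.depth):
                idx = (self.head + offset) % self.depth
                if self.valid[idx]:
                    return idx
            return None

        def read(self):
            idx = self._priority_select()
            if idx is None:
                return None
            # Mux tree: log2(depth) levels of 2:1 muxes. The select signal
            # for level L is bit L of the winning index; each level halves
            # the candidate set until one value reaches the read port.
            values = list(self.storage)
            for level in range(self.depth.bit_length() - 1):
                sel = (idx >> level) & 1
                values = [values[2 * i + sel] for i in range(len(values) // 2)]
            self.valid[idx] = False
            while self.head != self.tail and not self.valid[self.head]:
                self.head = (self.head + 1) % self.depth
            return values[0]
    ```

    With oldest-first priority the model degenerates to FIFO order, which makes the mux-tree selection easy to check against a plain queue; a different priority scheme would change only `_priority_select`.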

    Efficient handling of misaligned loads and stores
    5.
    Granted patent
    Efficient handling of misaligned loads and stores (In force)

    Publication number: US09131899B2

    Publication date: 2015-09-15

    Application number: US13177192

    Filing date: 2011-07-06

    Abstract: A system and method for efficiently handling misaligned memory accesses within a processor. A processor comprises a load-store unit (LSU) with a banked data cache (d-cache) and a banked store queue. The processor generates a first address corresponding to a memory access instruction identifying a first cache line. The processor determines that the memory access is misaligned, i.e., that it crosses a cache line boundary. The processor generates a second address identifying a second cache line logically adjacent to the first cache line. If the instruction is a load instruction, the LSU simultaneously accesses the d-cache and store queue with the first and the second addresses. If there are two hits, the data from the two cache lines are simultaneously read out. If the access is a store instruction, the LSU separates the associated write data into two subsets and simultaneously stores these subsets in separate cache lines in separate banks of the store queue.

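    The splitting described in the abstract can be sketched in software: when an access crosses a line boundary, a second address for the adjacent line is generated, and the two lines (held in different banks, so both are reachable in one cycle) are read and spliced, or the write data is split into two subsets. The 64-byte line size and even/odd line-index banking below are illustrative assumptions; the abstract does not specify either.

    ```python
    # Behavioral sketch of misaligned load/store splitting across two
    # cache lines in different banks. Line size (64 B) and the even/odd
    # banking function are assumptions for illustration.

    LINE_SIZE = 64

    class BankedCache:
        def __init__(self):
            # Two banks: even line indices in bank 0, odd in bank 1, so
            # two adjacent lines can be accessed "simultaneously".
            self.banks = [{}, {}]

        def _line(self, addr):
            index = addr // LINE_SIZE
            return self.banks[index % 2].setdefault(index, bytearray(LINE_SIZE))

        def load(self, addr, size):
            first = self._line(addr)             # first address / first line
            offset = addr % LINE_SIZE
            if offset + size <= LINE_SIZE:       # fits in one line: aligned case
                return bytes(first[offset:offset + size])
            # Misaligned: generate the second address for the logically
            # adjacent line, access both banks, splice the two reads.
            second = self._line((addr // LINE_SIZE + 1) * LINE_SIZE)
            lo = first[offset:]
            return bytes(lo) + bytes(second[:size - len(lo)])

        def store(self, addr, data):
            first = self._line(addr)
            offset = addr % LINE_SIZE
            if offset + len(data) <= LINE_SIZE:  # aligned case
                first[offset:offset + len(data)] = data
                return
            # Misaligned: split the write data into two subsets, one per
            # cache line, landing in separate banks.
            split = LINE_SIZE - offset
            first[offset:] = data[:split]
            second = self._line((addr // LINE_SIZE + 1) * LINE_SIZE)
            second[:len(data) - split] = data[split:]
    ```

    For example, an 8-byte store at address 60 crosses the 64-byte boundary: 4 bytes go to line 0 in bank 0 and 4 bytes to line 1 in bank 1, and a later 8-byte load at 60 splices them back together.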

    EFFICIENT HANDLING OF MISALIGNED LOADS AND STORES
    6.
    Patent application
    EFFICIENT HANDLING OF MISALIGNED LOADS AND STORES (In force)

    Publication number: US20130013862A1

    Publication date: 2013-01-10

    Application number: US13177192

    Filing date: 2011-07-06

    IPC classification: G06F12/08

    Abstract: A system and method for efficiently handling misaligned memory accesses within a processor. A processor comprises a load-store unit (LSU) with a banked data cache (d-cache) and a banked store queue. The processor generates a first address corresponding to a memory access instruction identifying a first cache line. The processor determines that the memory access is misaligned, i.e., that it crosses a cache line boundary. The processor generates a second address identifying a second cache line logically adjacent to the first cache line. If the instruction is a load instruction, the LSU simultaneously accesses the d-cache and store queue with the first and the second addresses. If there are two hits, the data from the two cache lines are simultaneously read out. If the access is a store instruction, the LSU separates the associated write data into two subsets and simultaneously stores these subsets in separate cache lines in separate banks of the store queue.
