CACHE MAINTENANCE INSTRUCTION
    1.
    Invention publication
    CACHE MAINTENANCE INSTRUCTION (status: pending, published)

    Publication number: EP3265917A1

    Publication date: 2018-01-10

    Application number: EP16701195.6

    Filing date: 2016-01-12

    Applicant: ARM Limited

    IPC classes: G06F12/08 G06F12/10

    Abstract: An apparatus (2) comprises processing circuitry (4) for performing data processing in response to instructions. The processing circuitry (4) supports a cache maintenance instruction (50) specifying a virtual page address (52) identifying a virtual page of a virtual address space. In response to the cache maintenance instruction, the processing circuitry (4) triggers at least one cache (18, 20, 22) to perform a cache maintenance operation on one or more cache lines for which the physical address of the data stored by the cache line lies within the physical page corresponding to the virtual page identified by the virtual page address provided by the cache maintenance instruction.
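The mechanism this abstract describes reduces to: one translation for the whole virtual page, then a maintenance operation applied to every cached line whose physical address falls inside the corresponding physical page. The following Python model is illustrative only; the page size, the dict-based cache, and the function name are assumptions, not ARM's implementation:

```python
# Illustrative model, not ARM hardware: the cache is a dict keyed by
# physical line address; the page table maps virtual page number to
# physical page number.
PAGE_SIZE = 4096  # assumed

def cache_maintain_by_virtual_page(cache, page_table, vpage_addr):
    """Invalidate every cache line whose data lies in the physical page
    corresponding to the virtual page identified by vpage_addr."""
    vpn = vpage_addr // PAGE_SIZE
    page_base = page_table[vpn] * PAGE_SIZE   # one translation per page
    for line_addr in list(cache):
        if page_base <= line_addr < page_base + PAGE_SIZE:
            del cache[line_addr]              # the maintenance operation
    return cache
```

A single instruction of this form avoids issuing one maintenance operation (and one address translation) per cache line of the page.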

    Apparatus and method for address translation of non-aligned double word virtual addresses
    2.
    Invention publication
    Apparatus and method for address translation of non-aligned double word virtual addresses (status: expired)

    Publication number: EP0377431A2

    Publication date: 1990-07-11

    Application number: EP90100011.7

    Filing date: 1990-01-02

    IPC classes: G06F12/10 G06F12/04

    Abstract: In a data processing system in which the execution unit is implemented to process aligned double word operands, apparatus and an associated method provide for the alignment of a double word operand that is stored across a double word boundary. The two double words, each storing one word of the unaligned double word operand, are identified, and their attributes are compared with the ring number of the associated program. When the comparisons indicate that the two words of the non-aligned double word operand are available to the program, the two double words containing the non-aligned words of the operand are fetched, and the two non-aligned words are stored in a register in an aligned orientation for processing by the execution unit.

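The splice the abstract describes can be sketched in software, assuming an 8-byte double word: fetch the two aligned double words that contain the operand, then extract the operand's bytes in aligned form (the ring-number permission check described above is omitted here):

```python
DW = 8  # double word size in bytes (assumed)

def fetch_unaligned_doubleword(memory: bytes, addr: int) -> bytes:
    """Read an 8-byte operand that may straddle a double word boundary:
    fetch the two aligned double words containing it, then splice the
    operand out in aligned orientation."""
    base = addr - (addr % DW)             # first aligned double word
    lo = memory[base:base + DW]           # aligned fetch 1
    hi = memory[base + DW:base + 2 * DW]  # aligned fetch 2
    offset = addr % DW
    return (lo + hi)[offset:offset + DW]  # aligned view of the operand
```

Hardware performs the same splice with shifters and a register write rather than byte slicing, but the addressing arithmetic is the same.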

    DATA PROCESSING SYSTEM INCLUDING A PREFETCH CIRCUIT
    3.
    Invention grant
    DATA PROCESSING SYSTEM INCLUDING A PREFETCH CIRCUIT (status: expired)

    Publication number: EP0235255B1

    Publication date: 1990-07-11

    Application number: EP86905533.5

    Filing date: 1986-08-21

    Applicant: NCR CORPORATION

    IPC classes: G06F9/38 G06F9/355 G06F12/02

    Abstract: A data processing system includes a prefetch circuit for use with a memory (14). The prefetch circuit includes a storage buffer (204) for receiving a command from the memory (14) and a decoding circuit for decoding the command to determine the address of an index register identified in the command, for fetching the contents of the index register. The prefetch circuit also includes virtual and real address storage registers (221, 224) for receiving and storing the virtual and real addresses of the command, an adding circuit (236) for adding a predetermined offset to the virtual and real addresses of the command to obtain new virtual and real addresses, a comparison circuit (240) for determining whether the new virtual address from the adding circuit (236) has crossed a virtual page boundary, and a transfer circuit, responsive to the comparison circuit (240), for transferring the real address in the real address storage register (224) to the adding circuit for adding the offset thereto, thereby obtaining a new real address. The prefetch circuit then prefetches a command from the memory (14) at the new real address. The storage buffer (204) also includes registers for storing prefetched data and a prefetched index register.
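The page-boundary logic in this abstract reduces to: add the offset to both addresses, and only retranslate when the new virtual address leaves the current page. A hedged Python sketch; the page size and the `translate` callback are assumptions:

```python
PAGE = 2048  # assumed page size

def next_prefetch_address(virt, real, offset, translate):
    """Add a predetermined offset to the virtual and real addresses of the
    current command; if the new virtual address crosses a virtual page
    boundary, the offset real address is stale and must be retranslated."""
    new_virt = virt + offset
    if new_virt // PAGE == virt // PAGE:
        return new_virt, real + offset    # same page: real offset is valid
    return new_virt, translate(new_virt)  # page crossed: retranslate
```

Within a page the real address can be advanced by simple addition, which is why the comparison circuit only triggers the slower translation path on a boundary crossing.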

    FACILITATING MEMORY ACCESSES
    5.
    Invention publication
    FACILITATING MEMORY ACCESSES (status: in force)

    Publication number: EP2483783A1

    Publication date: 2012-08-08

    Application number: EP10760977.8

    Filing date: 2010-09-22

    IPC classes: G06F12/10 G06F9/318

    Abstract: In computing environments that use virtual addresses (or other indirectly usable addresses) to access memory, the virtual addresses are translated to absolute addresses (or other directly usable addresses) prior to accessing memory. To facilitate memory access, however, address translation is omitted in certain circumstances, including when the data to be accessed is within the same unit of memory as the instruction accessing the data. In this case, the absolute address of the data is derived from the absolute address of the instruction, thus avoiding address translation for the data. Further, in some circumstances, access checking for the data is also omitted.

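The shortcut this abstract describes can be modelled as follows: when the referenced data lies in the same unit of memory (here assumed to be a page) as the instruction, the data's absolute address is the instruction's absolute address plus the virtual offset between them, so no translation is performed. An illustrative sketch; the function name, page size, and `translate` fallback are assumptions:

```python
PAGE = 4096  # assumed size of the shared "unit of memory"

def data_absolute_address(inst_virt, inst_abs, data_virt, translate):
    """Derive the data's absolute address from the instruction's absolute
    address when both lie in the same page, skipping translation; fall
    back to the normal (hypothetical) translate() otherwise."""
    if data_virt // PAGE == inst_virt // PAGE:
        return inst_abs + (data_virt - inst_virt)  # translation omitted
    return translate(data_virt)
```

This works because virtual and absolute addresses share the same low-order offset within a unit of memory, so the delta between two virtual addresses in one unit equals the delta between their absolute addresses.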

    COLLAPSIBLE FRONT-END TRANSLATION FOR INSTRUCTION FETCH
    6.
    Invention publication
    COLLAPSIBLE FRONT-END TRANSLATION FOR INSTRUCTION FETCH (status: in force)

    Publication number: EP1994471A2

    Publication date: 2008-11-26

    Application number: EP07762866.7

    Filing date: 2007-02-01

    IPC classes: G06F12/10 G06F9/32 G06F9/38

    Abstract: Address translation for instruction fetching can be obviated for sequences of instruction instances that reside on the same page. Obviating address translation reduces power consumption and increases pipeline efficiency, since accesses to an address translation buffer can be avoided. Certain events, such as branch mispredictions and exceptions, can be designated as page boundary crossing events. In addition, a carry out of a particular bit position when computing a branch target or the fetch target of the next instruction instance can also be designated as a page boundary crossing event. An address translation buffer is accessed to translate an address representation of a first instruction instance. Until a page boundary crossing event occurs, however, the address representations of subsequent instruction instances are not translated. Instead, the translated portion of the address representation of the first instruction instance is recycled for the subsequent instruction instances.

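The recycling scheme above can be sketched as a translator that performs one TLB lookup per page and reuses the translated page number for sequential fetches; any page-boundary-crossing event changes the virtual page number and so forces a fresh lookup. Illustrative Python, where the TLB callback and the 12-bit page offset are assumptions:

```python
PAGE_BITS = 12  # assumed 4 KiB pages

class FetchTranslator:
    """Recycle the translated page number for instruction fetches until a
    page-boundary-crossing event makes the cached translation stale."""
    def __init__(self, tlb_lookup):
        self.tlb_lookup = tlb_lookup   # hypothetical TLB interface
        self.cached_vpn = None
        self.cached_ppn = None
        self.tlb_accesses = 0          # counted for demonstration

    def fetch_paddr(self, vaddr):
        vpn = vaddr >> PAGE_BITS
        offset = vaddr & ((1 << PAGE_BITS) - 1)
        if vpn != self.cached_vpn:     # page crossing: translate again
            self.cached_ppn = self.tlb_lookup(vpn)
            self.cached_vpn = vpn
            self.tlb_accesses += 1
        return (self.cached_ppn << PAGE_BITS) | offset  # recycled bits
```

Sequential fetches within a page cost zero TLB accesses after the first, which is the power and pipeline-efficiency gain the abstract claims.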

    TRANSLATION LOOKASIDE BUFFER (TLB) SUPPRESSION FOR INTRA-PAGE PROGRAM COUNTER RELATIVE OR ABSOLUTE ADDRESS BRANCH INSTRUCTIONS
    8.
    Invention publication
    TRANSLATION LOOKASIDE BUFFER (TLB) SUPPRESSION FOR INTRA-PAGE PROGRAM COUNTER RELATIVE OR ABSOLUTE ADDRESS BRANCH INSTRUCTIONS (status: pending, published)

    Publication number: EP1836561A1

    Publication date: 2007-09-26

    Application number: EP05849255.4

    Filing date: 2005-11-17

    IPC classes: G06F9/32 G06F1/32 G06F12/10

    Abstract: In a pipelined processor, a pre-decoder ahead of the instruction cache calculates the branch target address (BTA) of PC-relative and absolute address branch instructions. The pre-decoder compares the BTA with the branch instruction address (BIA) to determine whether the target and the instruction are in the same memory page. A branch target same page (BTSP) bit indicating this is written to the cache and associated with the instruction. When the branch is executed and evaluated as taken, the TLB access that checks permission attributes for the BTA is suppressed if the BTA is in the same page as the BIA, as indicated by the BTSP bit. This reduces power consumption, as the TLB access is suppressed and the BTA/BIA comparison is performed only once, when the branch instruction is first fetched. Additionally, the pre-decoder removes the BTA/BIA comparison from the BTA generation and selection critical path.

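The BTSP mechanism amounts to one page comparison at pre-decode time, whose result is cached alongside the instruction and consulted on every taken execution. A minimal sketch, assuming a 12-bit page offset (real hardware stores the bit in the instruction cache line rather than returning it):

```python
PAGE_BITS = 12  # assumed 4 KiB pages

def predecode_btsp(bia, bta):
    """Pre-decode: compare the branch instruction address (BIA) with the
    computed branch target address (BTA) once, producing the branch
    target same page (BTSP) bit stored with the cached instruction."""
    return (bia >> PAGE_BITS) == (bta >> PAGE_BITS)

def taken_branch_tlb_access(btsp_bit):
    """Execute: a taken branch accesses the TLB to check the BTA's
    permission attributes only when the BTSP bit is clear."""
    return not btsp_bit
```

Because the comparison happens once at fill time, repeated executions of a hot intra-page branch (a loop back-edge, typically) pay neither the comparison nor the TLB access.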

    Rapid data retrieval from a physically addressed data storage structure using memory page crossing predictive annotations
    9.
    Invention publication
    Rapid data retrieval from a physically addressed data storage structure using memory page crossing predictive annotations (status: expired)

    Publication number: EP0652521A1

    Publication date: 1995-05-10

    Application number: EP94307724.8

    Filing date: 1994-10-20

    Inventor: Yung, Robert

    IPC classes: G06F12/10

    CPC classes: G06F12/1054 G06F2212/655

    Abstract: In a computer system having a number of page-partitioned, virtually addressed address spaces, a physically addressed data storage structure and its complementary selection data storage structure are provided with a complementary memory page crossing prediction storage structure, a latch, and a comparator. The memory page crossing prediction storage structure stores a number of memory page crossing predictive annotations corresponding to the contents of the data and selection data storage structures. Each memory page crossing predictive annotation predicts whether the current access crosses into a new memory page. The latch successively records a first portion of each accessing physical address, translated from a corresponding portion of each accessing virtual address. The recorded first portion of the physical address of the immediately preceding access is used to select the data currently being read out of the storage structures if the memory page crossing predictive annotation currently being read out predicts no memory page crossing. The comparator determines whether the first portions of the physical addresses of the current and immediately preceding accesses are equal whenever the first portion of the physical address of the immediately preceding access was used to select data for the current access. If the two physical address portions are found to be unequal, remedial actions are taken, including invalidating the selected data and correcting the incorrect memory page crossing predictive annotation. As a result, most data retrievals are made without waiting for the first portions of the accessing physical addresses to be translated, improving the performance of retrieving data from the physically addressed data storage structure.

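The prediction scheme above can be condensed to: select data using the latched physical page of the previous access whenever the stored annotation predicts no page crossing, then verify against the (slower) translation and take remedial action on a mismatch. An illustrative Python model; the translation callback is an assumption, and the "remedial action" is reduced to reporting a misprediction:

```python
PAGE_BITS = 12  # assumed

class PageCrossingPredictor:
    """Select with the latched previous physical page number when the
    annotation predicts no crossing; a comparator later verifies it."""
    def __init__(self, translate):
        self.translate = translate   # hypothetical, completes "late"
        self.latched_ppn = None

    def access(self, vaddr, predicts_no_crossing):
        ppn = self.translate(vaddr >> PAGE_BITS)    # slow-path result
        if predicts_no_crossing and self.latched_ppn is not None:
            mispredicted = self.latched_ppn != ppn  # comparator check
        else:
            mispredicted = False                    # waited for translation
        self.latched_ppn = ppn                      # latch for next access
        return ppn, mispredicted
```

On the fast path the selected data would already be in flight before `translate` completes; the comparator only cleans up the rare case where the annotation was wrong.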

    Method and apparatus for predicting valid performance of virtual-address to physical-address translations
    10.
    Invention publication
    Method and apparatus for predicting valid performance of virtual-address to physical-address translations (status: expired)

    Publication number: EP0352632A2

    Publication date: 1990-01-31

    Application number: EP89113323.3

    Filing date: 1989-07-20

    IPC classes: G06F12/10

    CPC classes: G06F12/10 G06F2212/655

    Abstract: A prediction logic device operating in conjunction with a vector processor predicts, before the translation of the virtual addresses of all of the data elements of a vector has completed, the valid performance of all virtual-address to physical-address translations for the data elements of the vector. The prediction logic device asserts an MMOK signal to a scalar processor when it becomes known that no memory management fault and/or translation buffer miss will occur, so that the scalar processor can resume vector instruction issue to the vector processor at the earliest possible time.

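One way to model the early MMOK assertion is at page granularity: as soon as every page the vector will touch is known to hit in the translation buffer (and to raise no memory management fault), MMOK can be asserted without waiting for the per-element translations to complete. A hedged sketch; the page size, the resident-page set, and the strided access pattern are all assumptions, not the patent's circuit:

```python
PAGE = 4096  # assumed page size

def assert_mmok(base, stride, count, resident_pages):
    """Predict valid performance of all virtual-to-physical translations
    for a strided vector: assert MMOK iff every virtual page the vector
    touches is resident in the translation buffer."""
    touched = {(base + i * stride) // PAGE for i in range(count)}
    return touched <= resident_pages   # the MMOK signal
```

A vector touches far fewer pages than elements, so the page-level check resolves long before element-by-element translation would, letting the scalar processor resume issue early.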