HAZARD CHECKING
    12. Invention Application, Status: Under Examination (Published)

    Publication No.: US20170091097A1

    Publication Date: 2017-03-30

    Application No.: US15254233

    Application Date: 2016-09-01

    Applicant: ARM LIMITED

    Abstract: An apparatus comprises a translation lookaside buffer (TLB) comprising TLB entries for storing address translation data for translating virtual addresses to physical addresses. Hazard checking circuitry detects a hazard condition when two data access transactions correspond to the same physical address. The hazard checking circuitry includes a TLB entry identifier comparator to compare TLB entry identifiers identifying the TLB entries corresponding to the two data access transactions. The hazard condition is detected in dependence on whether the TLB entry identifiers match.
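
    The comparison logic can be pictured with a small software model. The sketch below is plain C with illustrative structure and field names and an assumed 4 KB page size, none of which are taken from the patent; it flags a hazard when two transactions were translated by the same TLB entry and target the same byte offset within the page.

    /* Minimal software model of a TLB-entry-identifier hazard check.
     * Assumption for this sketch: two accesses share a physical address
     * only if they hit the same TLB entry and the same page offset. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_OFFSET_MASK 0xFFFu   /* assumed 4 KB pages */

    struct transaction {
        unsigned tlb_entry_id;        /* TLB entry that translated the access */
        uint64_t virtual_addr;
    };

    /* Hazard if both accesses used the same TLB entry (same physical page)
     * and address the same bytes within that page. */
    static bool hazard(const struct transaction *a, const struct transaction *b)
    {
        if (a->tlb_entry_id != b->tlb_entry_id)
            return false;             /* different entries: no same-address hazard */
        return (a->virtual_addr & PAGE_OFFSET_MASK) ==
               (b->virtual_addr & PAGE_OFFSET_MASK);
    }

    int main(void)
    {
        struct transaction t1 = { .tlb_entry_id = 3, .virtual_addr = 0x1000040 };
        struct transaction t2 = { .tlb_entry_id = 3, .virtual_addr = 0x1000040 };
        printf("hazard: %s\n", hazard(&t1, &t2) ? "yes" : "no");
        return 0;
    }

    The point of the entry-identifier comparison is that it avoids comparing full physical addresses: matching entry identifiers already imply the same translated page.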

CONTEXT SENSITIVE BARRIERS IN DATA PROCESSING
    14. Invention Application, Status: Under Examination (Published)

    Publication No.: US20160139922A1

    Publication Date: 2016-05-19

    Application No.: US14930920

    Application Date: 2015-11-03

    Applicant: ARM LIMITED

    Abstract: Apparatus for data processing and a method of data processing are provided, according to which the processing circuitry of the apparatus can access a memory system and execute data processing instructions in one context of the multiple contexts which it supports. When the processing circuitry executes a barrier instruction, the resulting access ordering constraint may be limited to being enforced for accesses which have been initiated by the processing circuitry when operating in an identified context, which may, for example, be the context in which the barrier instruction was executed. This provides a separation between the operation of the processing circuitry in its multiple possible contexts and, in particular, prevents delays in the completion of the access ordering constraint, for example relating to accesses to high-latency regions of memory, from affecting the timing sensitivities of other contexts.
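
    As a rough illustration of the ordering behaviour described above, the following C sketch completes a barrier for one context without waiting on outstanding accesses issued by other contexts. All names, the pending-access table and the completion flags are assumptions made for the sketch, not the patented circuitry.

    /* Illustrative model of a context-sensitive barrier: the barrier only
     * waits for outstanding accesses issued from the identified context,
     * so slow accesses from other contexts do not delay its completion. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct access {
        unsigned context_id;   /* context that initiated the access */
        bool complete;
    };

    /* The barrier for 'context_id' completes once every access initiated by
     * that context has completed; accesses from other contexts are ignored. */
    static bool barrier_complete(const struct access *pending, size_t n,
                                 unsigned context_id)
    {
        for (size_t i = 0; i < n; i++)
            if (pending[i].context_id == context_id && !pending[i].complete)
                return false;
        return true;
    }

    int main(void)
    {
        struct access pending[] = {
            { .context_id = 0, .complete = true  },  /* our context: done      */
            { .context_id = 1, .complete = false },  /* slow access, other ctx */
        };
        size_t n = sizeof pending / sizeof pending[0];
        /* The barrier issued in context 0 completes even though context 1
         * still has an outstanding access. */
        printf("barrier done: %s\n",
               barrier_complete(pending, n, 0) ? "yes" : "no");
        return 0;
    }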


PROCESSOR AND METHOD FOR PROCESSING INSTRUCTIONS USING AT LEAST ONE PROCESSING PIPELINE
    15. Invention Application, Status: Granted (In Force)

    Publication No.: US20140281423A1

    Publication Date: 2014-09-18

    Application No.: US13826553

    Application Date: 2013-03-14

    Applicant: ARM LIMITED

    CPC classification number: G06F9/30079 G06F9/3836 G06F9/3875 G06F9/3885

    Abstract: A processor has a processing pipeline with first, second and third stages. An instruction at the first stage takes fewer cycles to reach the second stage than the third stage. The second and third stages each have a duplicated processing resource. For a pending instruction which requires the duplicated resource and can be processed using the duplicated resource at either of the second and third stages, the first stage determines whether a required operand would be available when the pending instruction would reach the second stage. If the operand would be available, the pending instruction is processed using the duplicated resource at the second stage; if the operand would not be available in time, the instruction is processed using the duplicated resource at the third pipeline stage. This technique helps to reduce delays caused by data dependency hazards.
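
    The stage-selection decision can be summarised by the small C sketch below; the cycle counts and function names are illustrative assumptions rather than details from the patent.

    /* Sketch of the stage-selection decision: an instruction that can use the
     * duplicated unit at either the second or the third stage is steered to
     * the earlier copy only if its operand will be ready by the time the
     * instruction reaches that stage. */
    #include <stdio.h>

    #define CYCLES_TO_STAGE2 1   /* assumed cycles from first to second stage */
    #define CYCLES_TO_STAGE3 3   /* assumed cycles from first to third stage  */

    /* Returns the stage (2 or 3) whose copy of the duplicated resource the
     * pending instruction should use, given when its operand becomes ready. */
    static int select_stage(int operand_ready_in_cycles)
    {
        if (operand_ready_in_cycles <= CYCLES_TO_STAGE2)
            return 2;            /* operand ready in time: use the earlier copy */
        return 3;                /* otherwise wait and use the later copy       */
    }

    int main(void)
    {
        printf("operand ready now         -> stage %d\n", select_stage(0));
        printf("operand ready in 2 cycles -> stage %d\n", select_stage(2));
        return 0;
    }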

