Selectively performing ahead branch prediction based on types of branch instructions

    Publication number: US10732979B2

    Publication date: 2020-08-04

    Application number: US16011010

    Filing date: 2018-06-18

    Abstract: A set of entries in a branch prediction structure for a set of second blocks is accessed based on a first address of a first block. The second blocks correspond to outcomes of one or more first branch instructions in the first block. Speculative prediction of outcomes of second branch instructions in the second blocks is initiated based on the entries in the branch prediction structure. In some cases, the branch predictor can be accessed using an address of either a previous block or the current block. Based on the types of the one or more branch instructions, state associated with the speculative prediction is selectively flushed from the ahead branch prediction, and prediction of outcomes of branch instructions in one of the second blocks is selectively initiated using non-ahead accessing.
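The "ahead" scheme in this abstract can be illustrated with a small sketch: the predictor table is indexed with the *previous* block's address so lookups for successor blocks start early, and ahead state is flushed for branch types that cannot be predicted ahead. All names, the table layout, and the choice of which types force a flush are assumptions for illustration, not the patent's actual implementation.

```python
class AheadPredictor:
    # Assumed: branch types whose ahead state must be flushed (e.g. indirect
    # branches whose target cannot be resolved one block early).
    FLUSH_TYPES = {"indirect"}

    def __init__(self):
        # Maps a block address -> predicted successor blocks for both
        # outcomes (taken path and not-taken path) of its branch.
        self.table = {}

    def train(self, block_addr, taken_succ, not_taken_succ):
        self.table[block_addr] = {"taken": taken_succ, "not_taken": not_taken_succ}

    def predict_ahead(self, prev_block_addr, branch_type):
        """Look up successor predictions using the PREVIOUS block's address."""
        entry = self.table.get(prev_block_addr)
        if entry is None or branch_type in self.FLUSH_TYPES:
            # Flush speculative ahead state; the caller falls back to
            # non-ahead prediction using the current block's address.
            return None
        return entry
```

A caller that gets `None` back would re-issue the lookup non-ahead, with the current block address, as the abstract's selective fallback describes.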

    Stride prefetching across memory pages
    Invention grant

    Publication number: US10671535B2

    Publication date: 2020-06-02

    Application number: US13944148

    Filing date: 2013-07-17

    Abstract: A prefetcher maintains the state of stored prefetch information, such as a prefetch confidence level, when a prefetch would cross a memory page boundary. The maintained prefetch information can be used both to identify whether the stride pattern for a particular sequence of demand requests persists after the memory page boundary has been crossed, and to continue to issue prefetch requests according to the identified pattern. The prefetcher therefore does not have to re-identify a stride pattern each time a page boundary is crossed by a sequence of demand requests, thereby improving the efficiency and accuracy of the prefetcher.
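The key point of this abstract, keeping stride and confidence state alive across a page boundary instead of re-training, can be sketched as a toy stride prefetcher. The confidence threshold and field names are illustrative assumptions.

```python
PAGE_SIZE = 4096  # bytes; illustrative

class StridePrefetcher:
    """Toy stride prefetcher that keeps its trained stride and confidence
    when accesses cross a page boundary, rather than resetting."""

    def __init__(self, confidence_threshold=2):
        self.last_addr = None
        self.stride = None
        self.confidence = 0
        self.threshold = confidence_threshold

    def demand_access(self, addr):
        prefetches = []
        if self.last_addr is not None:
            stride = addr - self.last_addr
            if stride == self.stride:
                self.confidence += 1
            else:
                self.stride = stride
                self.confidence = 1
        self.last_addr = addr
        if self.stride and self.confidence >= self.threshold:
            # Key point: nothing here resets self.stride or self.confidence
            # when (addr + stride) lands in a new page, so prefetching
            # continues immediately into that page.
            prefetches.append(addr + self.stride)
        return prefetches
```

A prefetcher that instead reset its state at page boundaries would need `threshold` further demand accesses in every new page before issuing its first prefetch there.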

    USING RETURN ADDRESS PREDICTOR TO SPEED UP CONTROL STACK RETURN ADDRESS VERIFICATION

    Publication number: US20200034144A1

    Publication date: 2020-01-30

    Application number: US16046949

    Filing date: 2018-07-26

    Abstract: Overhead associated with verifying function return addresses to protect against security exploits is reduced by taking advantage of branch prediction mechanisms for predicting return addresses. More specifically, returning from a function includes popping a return address from a data stack. Well-known security exploits overwrite the return address on the data stack to hijack control flow. In some processors, a separate data structure referred to as a control stack is used to verify the data stack. When a return instruction is executed, the processor issues an exception if the return addresses on the control stack and the data stack are not identical. This overhead can be avoided by taking advantage of the return address stack, which is a data structure used by the branch predictor to predict return addresses. In most situations, if this prediction is correct, the above check does not need to occur, thus reducing the associated overhead.
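The fast-path check described above can be modeled with a small sketch: calls push the return address onto the data stack, the control (shadow) stack, and the predictor's return address stack (RAS); on return, a correct RAS prediction lets the control-stack comparison be skipped. The class and counter names are assumptions for illustration.

```python
class ShadowStackMachine:
    # Toy model: data stack is architecturally writable (and thus
    # attackable); the control stack and RAS are not.
    def __init__(self):
        self.data_stack = []
        self.control_stack = []
        self.ras = []
        self.slow_checks = 0  # counts full control-stack verifications

    def call(self, return_addr):
        self.data_stack.append(return_addr)
        self.control_stack.append(return_addr)
        self.ras.append(return_addr)

    def ret(self):
        addr = self.data_stack.pop()
        predicted = self.ras.pop() if self.ras else None
        if predicted == addr:
            # Correct RAS prediction: skip the explicit comparison; the
            # control stack is just popped to stay in sync.
            self.control_stack.pop()
            return addr
        # Misprediction: fall back to the full verification.
        self.slow_checks += 1
        expected = self.control_stack.pop()
        if expected != addr:
            raise RuntimeError("return address mismatch: possible exploit")
        return addr
```

Because an attacker can only overwrite the data stack, a tampered return address disagrees with the RAS prediction, which forces the full control-stack check and triggers the exception.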

    Processor with accelerated lock instruction operation

    Publication number: US10949201B2

    Publication date: 2021-03-16

    Application number: US16286702

    Filing date: 2019-02-27

    Abstract: A processor and method for handling lock instructions identifies which of a plurality of older store instructions relative to a current lock instruction are able to be locked. The method and processor lock the identified older store instructions as an atomic group with the current lock instruction. The method and processor negatively acknowledge probes until all of the older store instructions in the atomic group have written to cache memory. In some implementations, an atomic grouping unit issues an indication to lock identified older store instructions that are retired and lockable, and in some implementations, also issues an indication to lock older stores that are determined to be lockable that are non-retired.
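The grouping and probe behavior in this abstract can be sketched as follows: lockable older stores are grouped with the lock instruction, and cache probes are negatively acknowledged until every store in the group has written to the cache. Class and field names are illustrative assumptions.

```python
class Store:
    def __init__(self, sid, retired, lockable):
        self.sid = sid
        self.retired = retired
        self.lockable = lockable
        self.written = False  # set once the store has written to cache

class AtomicGroupingUnit:
    def __init__(self):
        self.group = []

    def form_group(self, older_stores, lock_store):
        # Lock the older stores identified as lockable together with the
        # lock instruction's store, as one atomic group. (The patent notes
        # both retired and, in some implementations, non-retired lockable
        # stores may be included.)
        self.group = [s for s in older_stores if s.lockable] + [lock_store]
        return self.group

    def probe(self):
        # Negatively acknowledge probes while any grouped store is pending,
        # so no other agent can observe a partially written group.
        return "NACK" if any(not s.written for s in self.group) else "ACK"
```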

    Using loop exit prediction to accelerate or suppress loop mode of a processor

    Publication number: US10915322B2

    Publication date: 2021-02-09

    Application number: US16134440

    Filing date: 2018-09-18

    Abstract: A processor predicts a number of loop iterations associated with a set of loop instructions. In response to the predicted number of loop iterations exceeding a first loop iteration threshold, the set of loop instructions are executed in a loop mode that includes placing at least one component of an instruction pipeline of the processor in a low-power mode or state and executing the set of loop instructions from a loop buffer. In response to the predicted number of loop iterations being less than or equal to a second loop iteration threshold, the set of instructions are executed in a non-loop mode that includes maintaining at least one component of the instruction pipeline in a powered up state and executing the set of loop instructions from an instruction fetch unit of the instruction pipeline.
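The two-threshold decision in this abstract reduces to a small function. The threshold values and the "unchanged" behavior between the two thresholds (a hysteresis band) are illustrative assumptions.

```python
def choose_loop_mode(predicted_iterations, enter_threshold, exit_threshold):
    """Pick an execution mode from a predicted loop iteration count.

    enter_threshold: the abstract's first threshold (enter loop mode above it).
    exit_threshold:  the abstract's second threshold (non-loop mode at or below it).
    """
    if predicted_iterations > enter_threshold:
        # Loop mode: power down at least one pipeline component and issue
        # the loop body from the loop buffer.
        return "loop_mode"
    if predicted_iterations <= exit_threshold:
        # Non-loop mode: keep the pipeline powered up and execute the loop
        # from the instruction fetch unit as usual.
        return "non_loop_mode"
    return "unchanged"  # between thresholds: assumed hysteresis, keep current mode
```

Using two thresholds instead of one avoids flapping between modes when the predicted count hovers near a single cutoff.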

    Variable distance bypass between tag array and data array pipelines in a cache
    Invention grant (in force)

    Publication number: US09529720B2

    Publication date: 2016-12-27

    Application number: US13912809

    Filing date: 2013-06-07

    CPC classification number: G06F12/0855 G06F12/0844 G06F12/0846

    Abstract: The present application describes embodiments of techniques for picking a data array lookup request for execution in a data array pipeline a variable number of cycles behind a corresponding tag array lookup request that is concurrently executing in a tag array pipeline. Some embodiments of a method for picking the data array lookup request include picking the data array lookup request for execution in a data array pipeline of a cache concurrently with execution of a tag array lookup request in a tag array pipeline of the cache. The data array lookup request is picked for execution in response to resources of the data array pipeline becoming available after picking the tag array lookup request for execution. Some embodiments of the method may be implemented in a cache.
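The variable-distance idea can be sketched as a tiny scheduling function: the tag-array lookup is picked immediately, and the matching data-array lookup is picked on the first cycle at which the data pipeline has a free slot, so the gap between the two varies per request. The cycle model and names are assumptions for illustration only.

```python
def schedule_lookups(tag_pick_cycle, data_pipe_busy_cycles):
    """Return (data_pick_cycle, bypass_distance).

    tag_pick_cycle: cycle the tag-array lookup was picked.
    data_pipe_busy_cycles: set of cycles with no free data-pipeline slot.
    The data-array lookup is picked at the first non-busy cycle at or
    after the tag pick, and the bypass distance is the resulting gap.
    """
    cycle = tag_pick_cycle
    while cycle in data_pipe_busy_cycles:
        cycle += 1
    return cycle, cycle - tag_pick_cycle
```

A fixed-distance design would instead always insert the same gap; letting the distance float to the first available slot keeps the data pipeline from stalling the tag pipeline.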

    METHOD AND APPARATUS FOR PERFORMING A BUS LOCK AND TRANSLATION LOOKASIDE BUFFER INVALIDATION
    Invention application (in force)

    Publication number: US20150120976A1

    Publication date: 2015-04-30

    Application number: US14522137

    Filing date: 2014-10-23

    Abstract: A method and apparatus for performing a bus lock and a translation lookaside buffer invalidate transaction includes receiving, by a lock master, a lock request from a first processor in a system. The lock master sends a quiesce request to all processors in the system, and upon receipt of the quiesce request from the lock master, all processors cease issuing any new transactions and issue a quiesce granted transaction. Upon receipt of the quiesce granted transactions from all processors, the lock master issues a lock granted message that includes an identifier of the first processor. The first processor performs an atomic transaction sequence and sends a first lock release message to the lock master upon completion of the atomic transaction sequence. The lock master sends a second lock release message to all processors upon receiving the first lock release message from the first processor.
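The quiesce handshake in this abstract can be modeled as simple message passing between a lock master and the processors. The class names, the synchronous loop standing in for asynchronous messages, and the event log are all illustrative assumptions.

```python
class Processor:
    def __init__(self, pid):
        self.pid = pid
        self.quiesced = False

    def quiesce(self):
        self.quiesced = True   # stop issuing new transactions, grant quiesce

    def resume(self):
        self.quiesced = False  # second lock-release received: resume issuing

    def run_atomic_sequence(self):
        pass  # e.g. a bus-locked read-modify-write or a TLB invalidation

class LockMaster:
    def __init__(self, processors):
        self.processors = processors
        self.log = []

    def handle_lock_request(self, requester):
        # 1. Send a quiesce request to every processor; each ceases issuing
        #    new transactions and grants the quiesce.
        for p in self.processors:
            p.quiesce()
            self.log.append(("quiesce_granted", p.pid))
        # 2. All grants received: announce which processor holds the lock.
        self.log.append(("lock_granted", requester.pid))
        # 3. The requester runs its atomic sequence, then releases the lock.
        requester.run_atomic_sequence()
        self.log.append(("lock_release_from", requester.pid))
        # 4. Broadcast the second release message so all processors resume.
        for p in self.processors:
            p.resume()
```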

