SPECIALIZED MEMORY DISAMBIGUATION MECHANISMS FOR DIFFERENT MEMORY READ ACCESS TYPES
    2.
    Invention Application
    SPECIALIZED MEMORY DISAMBIGUATION MECHANISMS FOR DIFFERENT MEMORY READ ACCESS TYPES (In Force)

    Publication Number: US20150067305A1

    Publication Date: 2015-03-05

    Application Number: US14015282

    Filing Date: 2013-08-30

    Abstract: A system and method for efficiently predicting and processing memory access dependencies. A computing system includes control logic that marks a detected load instruction as a first type responsive to predicting that the load instruction has high locality and is a candidate for store-to-load (STL) data forwarding. The control logic marks the detected load instruction as a second type responsive to predicting that the load instruction has low locality and is not a candidate for STL data forwarding. The control logic processes a load instruction marked as the first type as if the load instruction is dependent on an older store operation. The control logic processes a load instruction marked as the second type as if the load instruction is independent of any older store operation.
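
    A small software model can illustrate the two-type load marking described in this abstract. The sketch below is illustrative only, not the claimed hardware: it assumes a hypothetical per-PC locality counter as the predictor, and the names (LoadType, classify_load, process_load, locality_table) are invented for this example.

        # Minimal software model of two-type load marking for store-to-load (STL)
        # forwarding. A per-PC counter stands in for the locality prediction logic.
        from enum import Enum, auto

        class LoadType(Enum):
            FIRST = auto()   # predicted high locality: STL-forwarding candidate
            SECOND = auto()  # predicted low locality: not an STL-forwarding candidate

        def classify_load(pc: int, locality_table: dict, threshold: int = 2) -> LoadType:
            """Mark a detected load as first or second type from a per-PC counter."""
            return LoadType.FIRST if locality_table.get(pc, 0) >= threshold else LoadType.SECOND

        def process_load(kind: LoadType, addr: int, store_buffer: dict, cache: dict) -> int:
            if kind is LoadType.FIRST:
                # Treated as if dependent on an older store: check the store buffer first.
                if addr in store_buffer:
                    return store_buffer[addr]
            # Treated as if independent of any older store: read directly from the cache.
            return cache[addr]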


    Using predictions for store-to-load forwarding
    3.
    Invention Grant
    Using predictions for store-to-load forwarding (In Force)

    Publication Number: US09367455B2

    Publication Date: 2016-06-14

    Application Number: US14018562

    Filing Date: 2013-09-05

    Abstract: The described embodiments include a core that uses predictions for store-to-load forwarding. In the described embodiments, the core comprises a load-store unit, a store buffer, and a prediction mechanism. During operation, the prediction mechanism generates a prediction that a load will be satisfied using data forwarded from the store buffer because the load loads data from a memory location in a stack. Based on the prediction, the load-store unit first sends a request for the data to the store buffer in an attempt to satisfy the load using data forwarded from the store buffer. If data is returned from the store buffer, the load is satisfied using the data. However, if the attempt to satisfy the load using data forwarded from the store buffer is unsuccessful, the load-store unit then separately sends a request for the data to a cache to satisfy the load.
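
    A rough software analogue of this stack-based forwarding prediction is sketched below; it is not the described hardware. The stack address bounds and helper names (predicts_stack_access, satisfy_load) are assumptions made purely for illustration.

        # Sketch: predict STL forwarding for loads that target the stack region,
        # try the store buffer first, and fall back to a separate cache request.
        STACK_BASE = 0x7FFF_0000   # assumed stack region bounds (illustrative only)
        STACK_LIMIT = 0x8000_0000

        def predicts_stack_access(addr: int) -> bool:
            """Predict that forwarding will succeed because the load reads the stack."""
            return STACK_BASE <= addr < STACK_LIMIT

        def satisfy_load(addr: int, store_buffer: dict, cache: dict) -> int:
            if predicts_stack_access(addr):
                # First request: attempt to satisfy the load from the store buffer.
                data = store_buffer.get(addr)
                if data is not None:
                    return data  # satisfied with forwarded store data
            # Forwarding attempt failed (or was never predicted): request from the cache.
            return cache[addr]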


    STACK CACHE MANAGEMENT AND COHERENCE TECHNIQUES
    6.
    Invention Application
    STACK CACHE MANAGEMENT AND COHERENCE TECHNIQUES (In Force)

    Publication Number: US20140143497A1

    Publication Date: 2014-05-22

    Application Number: US13887196

    Filing Date: 2013-05-03

    Abstract: A processor system presented here has a plurality of execution cores and a plurality of stack caches, wherein each of the stack caches is associated with a different one of the execution cores. A method of managing stack data for the processor system is presented here. The method maintains a stack cache manager for the plurality of execution cores. The stack cache manager includes entries for stack data accessed by the plurality of execution cores. The method processes, for a requesting execution core of the plurality of execution cores, a virtual address for requested stack data. The method continues by accessing the stack cache manager to search for an entry of the stack cache manager that includes the virtual address for requested stack data, and using information in the entry to retrieve the requested stack data.
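
    As a rough illustration of the lookup flow in this abstract (not the patented implementation), the sketch below models the shared stack cache manager as a table keyed by virtual address; the entry contents and names (StackCacheManager, record_fill, lookup, fetch_stack_data) are assumptions for the example.

        # Sketch: a shared stack cache manager with entries for stack data accessed
        # by multiple cores. Each entry maps a virtual address to the core whose
        # private stack cache is assumed to hold the data.
        from dataclasses import dataclass, field

        @dataclass
        class StackCacheManager:
            entries: dict = field(default_factory=dict)  # virtual address -> owning core id

            def record_fill(self, vaddr: int, core_id: int) -> None:
                """Add or update an entry when a core's stack cache obtains the data."""
                self.entries[vaddr] = core_id

            def lookup(self, vaddr: int):
                """Search for an entry that includes the requested virtual address."""
                return self.entries.get(vaddr)

        def fetch_stack_data(mgr: StackCacheManager, vaddr: int, stack_caches: dict):
            """Use the entry's information to retrieve the requested stack data."""
            owner = mgr.lookup(vaddr)
            if owner is None:
                return None  # untracked: would fall back to the regular memory hierarchy
            return stack_caches[owner].get(vaddr)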


    Specialized memory disambiguation mechanisms for different memory read access types
    8.
    Invention Grant
    Specialized memory disambiguation mechanisms for different memory read access types (In Force)

    Publication Number: US09524164B2

    Publication Date: 2016-12-20

    Application Number: US14015282

    Filing Date: 2013-08-30

    Abstract: A system and method for efficiently predicting and processing memory access dependencies. A computing system includes control logic that marks a detected load instruction as a first type responsive to predicting that the load instruction has high locality and is a candidate for store-to-load (STL) data forwarding. The control logic marks the detected load instruction as a second type responsive to predicting that the load instruction has low locality and is not a candidate for STL data forwarding. The control logic processes a load instruction marked as the first type as if the load instruction is dependent on an older store operation. The control logic processes a load instruction marked as the second type as if the load instruction is independent of any older store operation.


    Stack cache management and coherence techniques
    9.
    Invention Grant
    Stack cache management and coherence techniques (In Force)

    Publication Number: US09189399B2

    Publication Date: 2015-11-17

    Application Number: US13887196

    Filing Date: 2013-05-03

    Abstract: A processor system presented here has a plurality of execution cores and a plurality of stack caches, wherein each of the stack caches is associated with a different one of the execution cores. A method of managing stack data for the processor system is presented here. The method maintains a stack cache manager for the plurality of execution cores. The stack cache manager includes entries for stack data accessed by the plurality of execution cores. The method processes, for a requesting execution core of the plurality of execution cores, a virtual address for requested stack data. The method continues by accessing the stack cache manager to search for an entry of the stack cache manager that includes the virtual address for requested stack data, and using information in the entry to retrieve the requested stack data.

