Accelerated code optimizer for a multiengine microprocessor

    Publication No.: US10191746B2

    Publication Date: 2019-01-29

    Application No.: US14360284

    Filing Date: 2011-11-22

    Abstract: A method for accelerating code optimization in a microprocessor. The method includes fetching an incoming macroinstruction sequence using an instruction fetch component and transferring the fetched macroinstructions to a decoding component for decoding into microinstructions. Optimization processing is performed by reordering the microinstruction sequence into an optimized microinstruction sequence comprising a plurality of dependent code groups. The plurality of dependent code groups are then output to a plurality of engines of the microprocessor for execution in parallel. A copy of the optimized microinstruction sequence is stored into a sequence cache for subsequent use upon a subsequent hit on the optimized microinstruction sequence.
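
    A minimal C++ sketch of the idea, under assumptions: every name here (MicroOp, formDependentGroups, optimizeOrReuse, sequenceCache) is hypothetical, dependency grouping is reduced to chasing register producers, and the sequence cache is keyed by fetch address. It illustrates the grouping-plus-caching flow, not the patented implementation.

        // Hypothetical sketch: group decoded micro-ops into dependent code groups and
        // cache the optimized sequence, keyed by the fetch address of the original block.
        #include <cstddef>
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct MicroOp {
            int dest;                   // destination register
            std::vector<int> sources;   // source registers
        };

        // Assign each micro-op to a dependent code group: an op joins the group of a
        // producer of one of its sources; otherwise it starts a new group.
        std::vector<std::vector<MicroOp>> formDependentGroups(const std::vector<MicroOp>& seq) {
            std::vector<std::vector<MicroOp>> groups;
            std::unordered_map<int, std::size_t> producerGroup;   // register -> group index
            for (const MicroOp& op : seq) {
                std::size_t g = groups.size();                    // default: open a new group
                for (int src : op.sources) {
                    auto it = producerGroup.find(src);
                    if (it != producerGroup.end()) { g = it->second; break; }
                }
                if (g == groups.size()) groups.emplace_back();
                groups[g].push_back(op);
                producerGroup[op.dest] = g;
            }
            return groups;
        }

        // Sequence cache: on a later hit for the same fetch address, the stored groups
        // are reused instead of re-running the optimizer.
        std::unordered_map<uint64_t, std::vector<std::vector<MicroOp>>> sequenceCache;

        const std::vector<std::vector<MicroOp>>& optimizeOrReuse(uint64_t fetchAddr,
                                                                 const std::vector<MicroOp>& seq) {
            auto it = sequenceCache.find(fetchAddr);
            if (it == sequenceCache.end())
                it = sequenceCache.emplace(fetchAddr, formDependentGroups(seq)).first;
            return it->second;   // each group could then be issued to its own engine
        }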

    54. Systems and methods for supporting a plurality of load and store accesses of a cache
    Invention grant (in force)

    Publication No.: US09229873B2

    Publication Date: 2016-01-05

    Application No.: US13561570

    Filing Date: 2012-07-30

    Abstract: Systems and methods for supporting a plurality of load and store accesses of a cache are disclosed. Responsive to a request of a plurality of requests to access a block of a plurality of blocks of a load cache, the block of the load cache and a logically and physically paired block of a store coalescing cache are accessed in parallel. The data that is accessed from the block of the load cache is overwritten by the data that is accessed from the block of the store coalescing cache by merging on a per byte basis. Access is provided to the merged data.
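
    A minimal C++ sketch of the per-byte merge, under assumptions: the block size, the valid-byte mask, and all names (LoadCacheBlock, StoreCoalescingBlock, mergePerByte) are hypothetical stand-ins. Both blocks are treated as already read in parallel; bytes held by the store coalescing cache overwrite the corresponding load-cache bytes.

        // Hypothetical sketch: merge a load-cache block with its logically and
        // physically paired store-coalescing-cache block on a per-byte basis.
        #include <array>
        #include <cstddef>
        #include <cstdint>

        constexpr std::size_t kBlockBytes = 64;

        struct LoadCacheBlock {
            std::array<uint8_t, kBlockBytes> data{};
        };

        struct StoreCoalescingBlock {
            std::array<uint8_t, kBlockBytes> data{};
            std::array<bool, kBlockBytes> valid{};   // byte holds newer store data
        };

        // Valid store bytes win over the corresponding load-cache bytes; the merged
        // block is what is provided to the requesting access.
        std::array<uint8_t, kBlockBytes> mergePerByte(const LoadCacheBlock& ld,
                                                      const StoreCoalescingBlock& st) {
            std::array<uint8_t, kBlockBytes> merged = ld.data;
            for (std::size_t i = 0; i < kBlockBytes; ++i)
                if (st.valid[i]) merged[i] = st.data[i];
            return merged;
        }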

    55. CACHE REPLACEMENT POLICY
    Invention application (in force)

    Publication No.: US20150286576A1

    Publication Date: 2015-10-08

    Application No.: US14385968

    Filing Date: 2011-12-16

    Abstract: Cache replacement policy. In accordance with a first embodiment of the present invention, an apparatus comprises a queue memory structure configured to queue cache requests that miss a second cache after missing a first cache. The apparatus comprises additional memory, associated with the queue memory structure, configured to record an evict way of the cache requests for the cache. The apparatus may be further configured to lock the evict way recorded in the additional memory, for example, to prevent reuse of the evict way. The apparatus may be further configured to unlock the evict way responsive to a fill from the second cache to the cache. The additional memory may be a component of a higher level cache.
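
    A minimal C++ sketch of the lock/unlock idea, under assumptions: the queue layout and every name (MissEntry, MissQueue, allocate, isLocked, onFill) are hypothetical. Each queued miss records the way reserved for its fill; that way stays locked, so replacement will not pick it again until the fill from the second cache releases it.

        // Hypothetical sketch: a miss queue whose side memory records the evict way
        // chosen for each outstanding request, keeping that way locked until the fill.
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct MissEntry {
            uint64_t address;
            unsigned evictWay;   // way reserved in the cache for the returning fill
        };

        class MissQueue {
        public:
            void allocate(uint64_t address, unsigned evictWay) {
                entries.push_back({address, evictWay});
                ++lockedWays[evictWay];              // lock: replacement must skip this way
            }
            bool isLocked(unsigned way) const {
                auto it = lockedWays.find(way);
                return it != lockedWays.end() && it->second > 0;
            }
            // On the fill from the second (next-level) cache, release the recorded way.
            void onFill(uint64_t address) {
                for (auto it = entries.begin(); it != entries.end(); ++it) {
                    if (it->address == address) {
                        --lockedWays[it->evictWay];
                        entries.erase(it);
                        return;
                    }
                }
            }
        private:
            std::vector<MissEntry> entries;
            std::unordered_map<unsigned, int> lockedWays;
        };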

    57. SYSTEMS AND METHODS FOR LOAD CANCELING IN A PROCESSOR THAT IS CONNECTED TO AN EXTERNAL INTERCONNECT FABRIC
    Invention application (in force)

    Publication No.: US20140108729A1

    Publication Date: 2014-04-17

    Application No.: US13649505

    Filing Date: 2012-10-11

    Abstract: Systems and methods for load canceling in a processor that is connected to an external interconnect fabric are disclosed. As a part of a method for load canceling in a processor that is connected to an external bus, and responsive to a flush request and a corresponding cancellation of pending speculative loads from a load queue, a type of one or more of the pending speculative loads that are positioned in the instruction pipeline external to the processor is converted from load to prefetch. Data corresponding to one or more of the pending speculative loads that are positioned in the instruction pipeline external to the processor is accessed and returned to cache as prefetch data. The prefetch data is retired in a cache location of the processor.
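
    A minimal C++ sketch of the conversion step, under assumptions: the request record and the names (OutstandingRequest, ReqType, cancelSpeculativeLoads) are hypothetical. On a flush, speculative loads already outside the processor are demoted to prefetches, so their returning data is simply installed in a cache location rather than delivered to a squashed load.

        // Hypothetical sketch: on a pipeline flush, convert pending speculative loads
        // that have already left the core into prefetches; their data retires into
        // the cache only.
        #include <cstdint>
        #include <vector>

        enum class ReqType { Load, Prefetch };

        struct OutstandingRequest {
            uint64_t address;
            ReqType  type;
            bool     speculative;
        };

        void cancelSpeculativeLoads(std::vector<OutstandingRequest>& externalRequests) {
            for (auto& req : externalRequests)
                if (req.type == ReqType::Load && req.speculative)
                    req.type = ReqType::Prefetch;   // data will be returned as prefetch data
        }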

    59. SYSTEMS AND METHODS FOR SUPPORTING A PLURALITY OF LOAD AND STORE ACCESSES OF A CACHE
    Invention application (in force)

    Publication No.: US20140032846A1

    Publication Date: 2014-01-30

    Application No.: US13561570

    Filing Date: 2012-07-30

    Abstract: Systems and methods for supporting a plurality of load and store accesses of a cache are disclosed. Responsive to a request of a plurality of requests to access a block of a plurality of blocks of a load cache, the block of the load cache and a logically and physically paired block of a store coalescing cache are accessed in parallel. The data that is accessed from the block of the load cache is overwritten by the data that is accessed from the block of the store coalescing cache by merging on a per byte basis. Access is provided to the merged data.

    60. SYSTEMS AND METHODS FOR FLUSHING A CACHE WITH MODIFIED DATA
    Invention application (in force)

    Publication No.: US20140032844A1

    Publication Date: 2014-01-30

    Application No.: US13561491

    Filing Date: 2012-07-30

    Abstract: Systems and methods for flushing a cache with modified data are disclosed. Responsive to a request to flush data from a cache with modified data to a next level cache that does not include the cache with modified data, the cache with modified data is accessed using an index and a way, and an address associated with the index and the way is secured. Using the address, the cache with modified data is accessed a second time and an entry that is associated with the address is retrieved from the cache with modified data. The entry is placed into a location of the next level cache.
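
    A minimal C++ sketch of the two-phase flush, under assumptions: both cache models and every name (CacheLine, flushLine, the index/way packing) are hypothetical stand-ins. Phase one reads by index and way to secure the line's address; phase two re-accesses the cache by that address, retrieves the entry, and places it in the next-level cache.

        // Hypothetical sketch: flush one modified (dirty) line in two accesses.
        #include <cstdint>
        #include <unordered_map>

        struct CacheLine {
            uint64_t address;
            bool     dirty;
            // data payload omitted for brevity
        };

        using L1Cache = std::unordered_map<uint64_t, CacheLine>;   // keyed by index/way
        using L2Cache = std::unordered_map<uint64_t, CacheLine>;   // keyed by address

        uint64_t indexWayKey(unsigned index, unsigned way) {
            return (static_cast<uint64_t>(index) << 8) | way;      // assumes fewer than 256 ways
        }

        void flushLine(L1Cache& l1, L2Cache& l2, unsigned index, unsigned way) {
            auto it = l1.find(indexWayKey(index, way));
            if (it == l1.end() || !it->second.dirty) return;
            uint64_t address = it->second.address;            // access 1: secure the address
            CacheLine entry = l1.at(indexWayKey(index, way)); // access 2: retrieve the entry
            entry.dirty = false;
            l2[address] = entry;                              // place into the next-level cache
        }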
