METHOD AND APPARATUS FOR VIRTUALIZING THE MICRO-OP CACHE

    Publication Number: US20200019406A1

    Publication Date: 2020-01-16

    Application Number: US16034844

    Application Date: 2018-07-13

    Abstract: Systems, apparatuses, and methods for virtualizing a micro-operation cache are disclosed. A processor includes at least a micro-operation cache, a conventional cache subsystem, a decode unit, and control logic. The decode unit decodes instructions into micro-operations which are then stored in the micro-operation cache. The micro-operation cache has limited capacity for storing micro-operations. When new micro-operations are decoded from pending instructions, existing micro-operations are evicted from the micro-operation cache to make room for the new micro-operations. Rather than being discarded, micro-operations evicted from the micro-operation cache are stored in the conventional cache subsystem. This prevents the original instruction from having to be decoded again on subsequent executions. When the control logic determines that micro-operations for one or more fetched instructions are stored in either the micro-operation cache or the conventional cache subsystem, the control logic causes the decode unit to transition to a reduced-power state.
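
    The flow described above can be sketched in a few lines of C++. The model below is illustrative only, not the patented design: the UopCache class, the conventional_cache map standing in for the conventional cache subsystem, and the decoder_powered flag are hypothetical names, and a simple LRU policy stands in for the control logic. The key point it demonstrates is that an evicted micro-op entry is spilled into the backing cache rather than discarded, so a later fetch of the same instruction skips the decoder.

```cpp
// Illustrative sketch of a virtualized micro-op cache front end.
// UopCache, conventional_cache, and decoder_powered are hypothetical names.
#include <cstdint>
#include <list>
#include <optional>
#include <unordered_map>
#include <utility>
#include <vector>

using Addr = uint64_t;
using MicroOps = std::vector<uint32_t>;  // decoded micro-operations

// Small LRU micro-op cache; evictions are returned so the caller can spill
// them into the conventional cache subsystem instead of discarding them.
class UopCache {
public:
    explicit UopCache(size_t capacity) : capacity_(capacity) {}

    std::optional<MicroOps> lookup(Addr pc) {
        auto it = map_.find(pc);
        if (it == map_.end()) return std::nullopt;
        lru_.splice(lru_.begin(), lru_, it->second.second);  // move to MRU
        return it->second.first;
    }

    std::optional<std::pair<Addr, MicroOps>> insert(Addr pc, MicroOps uops) {
        std::optional<std::pair<Addr, MicroOps>> victim;
        if (map_.size() >= capacity_) {                 // cache is full:
            Addr old = lru_.back();                     // evict the LRU entry
            victim.emplace(old, std::move(map_[old].first));
            map_.erase(old);
            lru_.pop_back();
        }
        lru_.push_front(pc);
        map_[pc] = {std::move(uops), lru_.begin()};
        return victim;                                  // victim gets spilled
    }

private:
    size_t capacity_;
    std::list<Addr> lru_;
    std::unordered_map<Addr,
        std::pair<MicroOps, std::list<Addr>::iterator>> map_;
};

int main() {
    UopCache uop_cache(2);
    std::unordered_map<Addr, MicroOps> conventional_cache;  // stand-in for L1/L2
    bool decoder_powered = true;

    auto fetch = [&](Addr pc) -> MicroOps {
        if (auto hit = uop_cache.lookup(pc)) {   // 1) micro-op cache hit
            decoder_powered = false;             //    decode unit can idle
            return *hit;
        }
        auto it = conventional_cache.find(pc);
        if (it != conventional_cache.end()) {    // 2) spilled copy hit
            decoder_powered = false;
            return it->second;
        }
        decoder_powered = true;                  // 3) double miss: decode again
        MicroOps uops = {static_cast<uint32_t>(pc ^ 0xA5u)};  // fake "decode"
        if (auto victim = uop_cache.insert(pc, uops))
            conventional_cache[victim->first] = victim->second;  // spill, don't drop
        return uops;
    };

    for (Addr pc : {0x100, 0x200, 0x300, 0x100})  // 0x100 survives via the spill
        fetch(pc);
    return decoder_powered ? 1 : 0;  // 0: last fetch avoided the decoder
}
```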

    Bit error protection in cache memories

    Publication Number: US10379944B2

    Publication Date: 2019-08-13

    Application Number: US15489438

    Application Date: 2017-04-17

    Abstract: A computing device having a cache memory (or “cache”) is described, as is a method for operating the cache. The method for operating the cache includes maintaining, in a history record, a representation of a number of bit errors detected in a portion of the cache. When the history record indicates that no bit errors or a single-bit bit error was detected in the portion of the cache memory, the method includes selecting, based on the history record, an error protection to be used for the portion of the cache memory. When the history record indicates that a multi-bit bit error was detected in the portion of the cache memory, the method includes disabling the portion of the cache memory.
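
    A minimal sketch of this policy is shown below, assuming a per-portion history record, a parity/ECC/disable choice, and a one-error threshold; the concrete thresholds and protection modes are assumptions made for illustration, not details taken from the patent.

```cpp
// Minimal sketch of the per-portion error-history policy described above.
// Thresholds and the Protection modes are illustrative assumptions.
#include <cstdio>
#include <vector>

enum class Protection { Parity, Ecc, Disabled };

struct HistoryRecord {
    unsigned single_bit_errors = 0;  // count of single-bit errors observed
    bool multi_bit_error = false;    // whether any multi-bit error was observed
};

class CachePortionManager {
public:
    explicit CachePortionManager(size_t portions) : history_(portions) {}

    // Record the number of erroneous bits detected on an access to a portion.
    void report_errors(size_t portion, unsigned bits_in_error) {
        if (bits_in_error >= 2) history_[portion].multi_bit_error = true;
        else if (bits_in_error == 1) history_[portion].single_bit_errors++;
    }

    // Select protection from the history: disable on multi-bit errors,
    // upgrade to ECC once a single-bit error has been seen, else parity.
    Protection select_protection(size_t portion) const {
        const HistoryRecord& h = history_[portion];
        if (h.multi_bit_error) return Protection::Disabled;
        if (h.single_bit_errors > 0) return Protection::Ecc;
        return Protection::Parity;
    }

private:
    std::vector<HistoryRecord> history_;
};

int main() {
    CachePortionManager mgr(4);
    mgr.report_errors(1, 1);  // single-bit error in portion 1
    mgr.report_errors(2, 3);  // multi-bit error in portion 2
    for (size_t p = 0; p < 4; ++p)
        std::printf("portion %zu -> %d\n", p,
                    static_cast<int>(mgr.select_protection(p)));
    return 0;
}
```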

    CONTROLLING THE OPERATING SPEED OF STAGES OF AN ASYNCHRONOUS PIPELINE

    Publication Number: US20180024837A1

    Publication Date: 2018-01-25

    Application Number: US15216094

    Application Date: 2016-07-21

    Abstract: An asynchronous pipeline includes a first stage and one or more second stages. A controller provides control signals to the first stage to indicate a modification to an operating speed of the first stage. The modification is determined based on a comparison of a completion status of the first stage to one or more completion statuses of the one or more second stages. In some cases, the controller provides control signals indicating modifications to an operating voltage applied to the first stage and a drive strength of a buffer in the first stage. Modules can be used to determine the completion statuses of the first stage and the one or more second stages based on the monitored output signals generated by the stages, output signals from replica critical paths associated with the stages, or a lookup table that indicates estimated completion times.
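
    The comparison step can be sketched as follows, assuming the completion times have already been obtained (from monitored output signals, replica critical paths, or a lookup table). The 10% slack margin and the voltage and drive-strength step sizes are arbitrary choices for the example.

```cpp
// Hedged sketch of the controller's comparison loop: the first stage is sped
// up or slowed down based on how its completion time compares with the
// slowest of the second stages. Fields and step sizes are assumptions.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Stage {
    double completion_time_ns;  // monitored, replica-path, or table-estimated
    double operating_voltage;   // control knob 1
    int    buffer_drive;        // control knob 2 (relative drive strength)
};

// If the first stage finishes well before the slowest second stage, slow it
// down to save power; if it finishes after, speed it up.
void adjust_first_stage(Stage& first, const std::vector<Stage>& seconds) {
    double slowest = 0.0;
    for (const Stage& s : seconds)
        slowest = std::max(slowest, s.completion_time_ns);

    if (first.completion_time_ns < 0.9 * slowest) {
        first.operating_voltage -= 0.05;   // lower voltage: stage may run slower
        first.buffer_drive      -= 1;
    } else if (first.completion_time_ns > slowest) {
        first.operating_voltage += 0.05;   // raise voltage / drive to catch up
        first.buffer_drive      += 1;
    }
}

int main() {
    Stage first{2.0, 0.80, 4};
    std::vector<Stage> seconds{{3.0, 0.85, 4}, {3.2, 0.85, 4}};
    adjust_first_stage(first, seconds);
    std::printf("V=%.2f drive=%d\n", first.operating_voltage, first.buffer_drive);
    return 0;
}
```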

    STRIDE PREFETCHING ACROSS MEMORY PAGES

    Publication Number: US20150026414A1

    Publication Date: 2015-01-22

    Application Number: US13944148

    Application Date: 2013-07-17

    Abstract: A prefetcher maintains the state of stored prefetch information, such as a prefetch confidence level, when a prefetch would cross a memory page boundary. The maintained prefetch information can be used both to identify whether the stride pattern for a particular sequence of demand requests persists after the memory page boundary has been crossed, and to continue to issue prefetch requests according to the identified pattern. The prefetcher therefore does not have to re-identify a stride pattern each time a page boundary is crossed by a sequence of demand requests, thereby improving the efficiency and accuracy of the prefetcher.
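
    The sketch below illustrates the idea with a single-entry stride table: the trained stride and confidence are kept when the prefetch candidate lands in a different 4 KiB page, so prefetching continues without re-training. The table size, confidence threshold, and page size are assumptions for the example.

```cpp
// Sketch of a stride prefetcher that keeps its trained state (stride and
// confidence) when a prefetch candidate crosses a 4 KiB page boundary,
// instead of resetting and re-training. Thresholds are assumptions.
#include <cstdint>
#include <cstdio>
#include <optional>

constexpr uint64_t kPageSize = 4096;

struct StrideEntry {
    uint64_t last_addr = 0;
    int64_t  stride = 0;
    int      confidence = 0;   // incremented on matching strides
};

class StridePrefetcher {
public:
    // Returns a prefetch address when confidence is high enough.
    std::optional<uint64_t> on_demand(uint64_t addr) {
        int64_t observed = static_cast<int64_t>(addr) -
                           static_cast<int64_t>(entry_.last_addr);
        if (entry_.last_addr != 0 && observed == entry_.stride)
            entry_.confidence++;          // pattern persists, even across pages
        else {
            entry_.stride = observed;     // retrain stride, keep entry allocated
            entry_.confidence = 1;
        }
        entry_.last_addr = addr;

        if (entry_.confidence < 2) return std::nullopt;
        uint64_t candidate = addr + entry_.stride;
        // Key point: confidence is NOT dropped when the candidate lands in a
        // new page; the state is maintained so the pattern continues seamlessly.
        bool crosses_page = (candidate / kPageSize) != (addr / kPageSize);
        (void)crosses_page;  // a real design would translate the new page here
        return candidate;
    }

private:
    StrideEntry entry_;
};

int main() {
    StridePrefetcher pf;
    // Strided demand stream that walks off the end of a page.
    for (uint64_t a = 4096 - 256; a < 4096 + 1024; a += 256)
        if (auto p = pf.on_demand(a))
            std::printf("demand 0x%llx -> prefetch 0x%llx\n",
                        (unsigned long long)a, (unsigned long long)*p);
    return 0;
}
```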

    METHOD AND APPARATUS FOR CACHE CONTROL

    Publication Number: US20130227321A1

    Publication Date: 2013-08-29

    Application Number: US13854616

    Application Date: 2013-04-01

    Abstract: A method and apparatus for dynamically controlling a cache size is disclosed. In one embodiment, a method includes changing an operating point of a processor from a first operating point to a second operating point, and selectively removing power from one or more ways of a cache memory responsive to changing the operating point. The method further includes processing one or more instructions in the processor subsequent to removing power from the one or more ways of the cache memory, wherein said processing includes accessing one or more ways of the cache memory from which power was not removed.
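
    A compact model of the way power-gating described above is sketched below; the way and set counts, the naive fill policy, and the set_operating_point interface are illustrative assumptions, not the claimed mechanism.

```cpp
// Illustrative sketch of way-level power gating tied to an operating point:
// lowering the operating point removes power from some ways, and later
// lookups consult only the ways that remain powered. Sizes are assumptions.
#include <cstdio>
#include <vector>

struct Way {
    bool powered = true;
    std::vector<long> tags;          // one tag per set (simplified)
};

class DynamicCache {
public:
    DynamicCache(int ways, int sets)
        : ways_(ways, Way{true, std::vector<long>(sets, -1)}) {}

    // Respond to an operating-point change by keeping only `active_ways` powered.
    void set_operating_point(int active_ways) {
        for (size_t w = 0; w < ways_.size(); ++w)
            ways_[w].powered = (static_cast<int>(w) < active_ways);
    }

    bool lookup(int set, long tag) const {
        for (const Way& w : ways_)
            if (w.powered && w.tags[set] == tag)   // skip unpowered ways
                return true;
        return false;
    }

    void fill(int set, long tag) {
        for (Way& w : ways_)
            if (w.powered) { w.tags[set] = tag; return; }  // naive placement
    }

private:
    std::vector<Way> ways_;
};

int main() {
    DynamicCache cache(/*ways=*/8, /*sets=*/64);
    cache.fill(3, 0x42);
    cache.set_operating_point(2);       // lower operating point: 2 ways stay on
    std::printf("hit after resize: %d\n", cache.lookup(3, 0x42));
    return 0;
}
```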

    Reusing remote registers in processing in memory

    Publication Number: US12175073B2

    Publication Date: 2024-12-24

    Application Number: US17139496

    Application Date: 2020-12-31

    Abstract: Systems, apparatuses, and methods for reusing remote registers in processing in memory (PIM) are disclosed. A system includes at least a host processor, a memory controller, and a PIM device. When the memory controller receives, from the host processor, an operation targeting the PIM device, the memory controller determines whether an optimization can be applied to the operation. If the optimization is not applicable, the memory controller converts the operation into N PIM commands; otherwise, it converts the operation into N−1 PIM commands. For example, if the operation involves reusing a constant value, a copy command can be omitted, resulting in memory bandwidth reduction and power consumption savings. In one scenario, the memory controller includes a constant-value cache, and the memory controller performs a lookup of the constant-value cache to determine if the optimization is applicable for a given operation.
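
    The N versus N−1 conversion can be illustrated with a small sketch in which the memory controller's constant-value cache holds the last constant copied into a PIM register; when the same constant is reused, the copy command is skipped. The command names and single-register model are assumptions for the example.

```cpp
// Sketch of the constant-reuse optimization: a constant-value cache records
// which constant already sits in a PIM register, so the copy command can be
// dropped (N-1 commands instead of N). Command names are illustrative.
#include <cstdint>
#include <cstdio>
#include <optional>
#include <vector>

enum class PimCmd { CopyConstant, MultiplyAccumulate, WriteBack };

struct HostOp {
    uint64_t constant;    // constant operand the PIM kernel reuses
    uint64_t address;     // target memory location (simplified)
};

class MemoryController {
public:
    std::vector<PimCmd> convert(const HostOp& op) {
        std::vector<PimCmd> cmds;
        // Lookup in the constant-value cache: a hit means the PIM register
        // already holds this constant, so the copy can be omitted.
        if (!cached_constant_ || *cached_constant_ != op.constant) {
            cmds.push_back(PimCmd::CopyConstant);   // N-command path
            cached_constant_ = op.constant;
        }                                           // else: N-1 command path
        cmds.push_back(PimCmd::MultiplyAccumulate);
        cmds.push_back(PimCmd::WriteBack);
        return cmds;
    }

private:
    std::optional<uint64_t> cached_constant_;  // the "constant-value cache"
};

int main() {
    MemoryController mc;
    HostOp a{/*constant=*/7, /*address=*/0x1000};
    HostOp b{/*constant=*/7, /*address=*/0x1040};   // same constant, reused
    std::printf("first op  -> %zu PIM commands\n", mc.convert(a).size());
    std::printf("second op -> %zu PIM commands\n", mc.convert(b).size());
    return 0;
}
```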

    Offloading computations from a processor to remote execution logic

    Publication Number: US12073251B2

    Publication Date: 2024-08-27

    Application Number: US17136767

    Application Date: 2020-12-29

    CPC classification number: G06F9/5027

    Abstract: Offloading computations from a processor to remote execution logic is disclosed. Offload instructions for remote execution on a remote device are dispatched in the form of processor instructions like conventional instructions. In the processor, an offload instruction is inserted in an offload queue. The offload instruction may be inserted at the dispatch stage or the retire stage of the processor pipeline. Metadata for the offload instruction is added to the offload instruction in the offload queue. After retirement of the offload instruction, the processor transmits an offload request generated from the offload instruction.
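
    A hedged sketch of this flow appears below: an offload instruction is placed in an offload queue at dispatch, metadata is attached to its queue entry, and the offload request is transmitted only once the instruction has retired. The entry fields and queue discipline are assumptions made for illustration.

```cpp
// Sketch of the offload-queue flow: enqueue at dispatch, annotate with
// metadata, transmit the offload request only after retirement.
// Field names and the in-order drain policy are assumptions.
#include <cstdint>
#include <cstdio>
#include <deque>

struct OffloadEntry {
    uint64_t instr_id;
    uint64_t remote_pc = 0;      // metadata: code to run on the remote device
    uint64_t operand_addr = 0;   // metadata: operand location
    bool retired = false;
};

class OffloadQueue {
public:
    void on_dispatch(uint64_t instr_id) {            // insert at dispatch stage
        queue_.push_back(OffloadEntry{instr_id});
    }
    void add_metadata(uint64_t instr_id, uint64_t pc, uint64_t addr) {
        for (OffloadEntry& e : queue_)
            if (e.instr_id == instr_id) { e.remote_pc = pc; e.operand_addr = addr; }
    }
    void on_retire(uint64_t instr_id) {
        for (OffloadEntry& e : queue_)
            if (e.instr_id == instr_id) e.retired = true;
        drain();                                      // send requests in order
    }

private:
    void drain() {
        while (!queue_.empty() && queue_.front().retired) {
            const OffloadEntry& e = queue_.front();
            std::printf("offload request: instr=%llu pc=0x%llx addr=0x%llx\n",
                        (unsigned long long)e.instr_id,
                        (unsigned long long)e.remote_pc,
                        (unsigned long long)e.operand_addr);
            queue_.pop_front();
        }
    }
    std::deque<OffloadEntry> queue_;
};

int main() {
    OffloadQueue oq;
    oq.on_dispatch(1);
    oq.add_metadata(1, 0x4000, 0x8000);
    oq.on_retire(1);   // the request is transmitted only after retirement
    return 0;
}
```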
