Reducing conflicts in direct mapped caches

    Publication No.: US10901899B2

    Publication Date: 2021-01-26

    Application No.: US16408870

    Application Date: 2019-05-10

    Abstract: A processor includes a core to execute a transaction with a memory via a cache; and a cache controller having an index mapper circuit to: identify a physical memory address associated with the transaction and having a plurality of bits; determine, based on the plurality of bits, a first set of bits encoding a tag value, a second set of bits encoding a page index value, and a third set of bits encoding a line index value; determine a mapping function corresponding to the tag value; determine, using the mapping function, a bit-placement order; combine, based on the order, the second and third sets of bits to form an index; generate, using the index, a mapping from the address to a cache line index value identifying a cache line in the cache; and wherein the cache controller is further to access, using the mapping and in response to the transaction, the cache line.
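The index-mapping scheme this abstract describes can be sketched in Python. The bit widths, the use of the low tag bit to select the mapping function, and the two permutations are illustrative assumptions for demonstration, not the patented circuit:

```python
# Illustrative sketch of a tag-dependent cache index mapper.
# Bit widths and the mapping-function selection are assumptions.

LINE_BITS = 6    # assumed: bits [5:0] encode the line index value
PAGE_BITS = 6    # assumed: bits [11:6] encode the page index value
TAG_SHIFT = LINE_BITS + PAGE_BITS

def split_address(addr: int):
    """Split a physical address into (tag, page_index, line_index)."""
    line_index = addr & ((1 << LINE_BITS) - 1)
    page_index = (addr >> LINE_BITS) & ((1 << PAGE_BITS) - 1)
    tag = addr >> TAG_SHIFT
    return tag, page_index, line_index

def cache_line_index(addr: int) -> int:
    """Map an address to a cache-line index.

    The tag selects a bit-placement order, so addresses that share
    page/line index bits but differ in tag can land in different
    cache lines, reducing direct-mapped conflicts.
    """
    tag, page, line = split_address(addr)
    if tag & 1:  # assumed selector: low tag bit picks the order
        # order A: page index bits placed above line index bits
        return (page << LINE_BITS) | line
    else:
        # order B: line index bits placed above page index bits
        return (line << PAGE_BITS) | page
```

With this sketch, two addresses that collide under a fixed index (same page and line bits, different tags) can map to distinct cache lines whenever their tags select different orders.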

    4. Utilization of processor capacity at low operating frequencies
    Granted Patent (in force)

    Publication No.: US09361234B2

    Publication Date: 2016-06-07

    Application No.: US14933378

    Application Date: 2015-11-05

    Abstract: In an embodiment, a processor includes one or more cores including a first core operable at an operating voltage between a minimum operating voltage and a maximum operating voltage. The processor also includes a power control unit including first logic to enable coupling of ancillary logic to the first core responsive to the operating voltage being less than or equal to a threshold voltage, and to disable the coupling of the ancillary logic to the first core responsive to the operating voltage being greater than the threshold voltage. Other embodiments are described and claimed.
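The control decision described here can be modeled in a few lines. The threshold value, the `Core` data model, and the function names are assumptions made for illustration; the patent concerns hardware power-control logic, not software:

```python
# Sketch of the power-control decision in the abstract: couple ancillary
# logic to the core at or below a threshold voltage, decouple above it.
# The threshold value and data model are illustrative assumptions.

from dataclasses import dataclass

THRESHOLD_V = 0.7  # assumed threshold voltage, in volts

@dataclass
class Core:
    operating_voltage: float
    ancillary_coupled: bool = False

def update_coupling(core: Core) -> Core:
    """Enable the ancillary-logic coupling when the operating voltage is
    less than or equal to the threshold; disable it when greater."""
    core.ancillary_coupled = core.operating_voltage <= THRESHOLD_V
    return core
```

Note that the abstract's condition is inclusive ("less than or equal to"), so a core running exactly at the threshold voltage keeps the ancillary logic coupled.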


    Apparatus and method to reduce bandwidth and latency overheads of probabilistic caches

    Publication No.: US12124371B2

    Publication Date: 2024-10-22

    Application No.: US17214356

    Application Date: 2021-03-26

    CPC classification number: G06F12/0815 G06F12/0895 G06F2212/608

    Abstract: An apparatus and method to reduce bandwidth and latency associated with probabilistic caches. For example, one embodiment of a processor comprises: a plurality of cores to execute instructions and process data, one or more of the cores to generate a request for a first cache line; a cache controller comprising cache lookup logic to determine a first way of a cache in which to search for the first cache line based on a first set of tag bits comprising one or more bits associated with the first cache line; the cache lookup logic to compare a second set of tag bits of the first cache line with a third set of tag bits of an existing cache line stored in the first way, wherein if the second set of tag bits and the third set of tag bits do not match, then the cache lookup logic to determine that the first cache line is not in the first way and to compare a fourth set of tag bits of the first cache line with a fifth set of tag bits of the existing cache line, wherein responsive to a match between the fourth set of tag bits and the fifth set of tag bits, the cache lookup logic to determine that the first cache line is stored in a second way and to responsively read the first cache line from the second way.
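The two-stage lookup in this abstract can be sketched as follows. This is a simplified model under assumed conventions: the first set of tag bits is modeled as a modulo of the full tag, and the secondary comparison is modeled as a probe of the remaining ways; the actual bit-field partitioning is not specified here:

```python
# Sketch of a two-stage lookup: partial tag bits predict a way; on a
# mismatch there, a further tag comparison decides whether the line
# resides in a different way. Field layout is an assumption.

def predicted_way(tag: int, num_ways: int) -> int:
    """First set of tag bits: picks the way to probe first
    (assumed here to be the tag modulo the way count)."""
    return tag % num_ways

def lookup(cache: dict, set_index: int, tag: int, num_ways: int):
    """Return (way, hit) for a requested line.

    `cache` maps (set_index, way) -> stored tag; a real cache would
    also hold the data and coherence state.
    """
    way = predicted_way(tag, num_ways)
    stored = cache.get((set_index, way))
    if stored == tag:          # second vs. third sets of tag bits match
        return way, True
    # Mismatch in the predicted way: compare further tag bits
    # (fourth vs. fifth sets) against the other ways.
    for other in range(num_ways):
        if other != way and cache.get((set_index, other)) == tag:
            return other, True  # line found in a second way
    return way, False           # miss in all probed ways
```

The point of the scheme is that the common case touches only the predicted way, saving the bandwidth of reading every way's tag in parallel; the extra comparisons run only on a first-stage mismatch.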

    6. Reducing conflicts in direct mapped caches
    Patent Application

    Publication No.: US20190266087A1

    Publication Date: 2019-08-29

    Application No.: US16408870

    Application Date: 2019-05-10

    Abstract: A processor includes a core to execute a transaction with a memory via a cache; and a cache controller having an index mapper circuit to: identify a physical memory address associated with the transaction and having a plurality of bits; determine, based on the plurality of bits, a first set of bits encoding a tag value, a second set of bits encoding a page index value, and a third set of bits encoding a line index value; determine a mapping function corresponding to the tag value; determine, using the mapping function, a bit-placement order; combine, based on the order, the second and third sets of bits to form an index; generate, using the index, a mapping from the address to a cache line index value identifying a cache line in the cache; and wherein the cache controller is further to access, using the mapping and in response to the transaction, the cache line.

    10. Analyzing potential benefits of vectorization
    Granted Patent (in force)

    Publication No.: US09170789B2

    Publication Date: 2015-10-27

    Application No.: US13997140

    Application Date: 2013-03-05

    CPC classification number: G06F8/41 G06F8/456

    Abstract: Embodiments of computer-implemented methods, systems, computing devices, and computer-readable media (transitory and non-transitory) are described herein for analyzing execution of a plurality of executable instructions and, based on the analysis, providing an indication of a benefit to be obtained by vectorization of at least a subset of the plurality of executable instructions. In various embodiments, the analysis may include identification of the subset of the plurality of executable instructions suitable for conversion to one or more single-instruction multiple-data (“SIMD”) instructions.
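A toy version of the analysis this abstract describes might look like the following. The trace format, the SIMD width, and the run-detection heuristic are all assumptions; a real analysis would use dependence information rather than simple op-name runs:

```python
# Sketch of estimating the benefit of vectorizing an executed-instruction
# trace. Trace format, SIMD width, and heuristic are assumptions.

import math

SIMD_WIDTH = 4  # assumed: 4 scalar lanes per SIMD instruction

def vectorizable_runs(trace):
    """Find maximal runs of >= SIMD_WIDTH identical consecutive ops,
    a stand-in for the analysis that identifies subsets of instructions
    suitable for conversion to SIMD instructions."""
    runs, i = [], 0
    while i < len(trace):
        j = i
        while j < len(trace) and trace[j] == trace[i]:
            j += 1
        if j - i >= SIMD_WIDTH:
            runs.append((trace[i], j - i))
        i = j
    return runs

def estimated_speedup(trace):
    """Instruction count before vs. after replacing each vectorizable
    run of n scalar ops with ceil(n / SIMD_WIDTH) SIMD ops."""
    saved = 0
    for _, n in vectorizable_runs(trace):
        saved += n - math.ceil(n / SIMD_WIDTH)
    return len(trace) / (len(trace) - saved)
```

For a trace of eight identical adds, one multiply, and two trailing adds, only the run of eight qualifies at width 4, so eight scalar ops collapse to two SIMD ops; this instruction-count ratio is the kind of "indication of benefit" the abstract refers to.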

