Multiple cycle search content addressable memory

    Publication No.: US10068645B2

    Publication Date: 2018-09-04

    Application No.: US15369823

    Application Date: 2016-12-05

    Abstract: In an aspect of the disclosure, a method and an apparatus are provided. The apparatus may be a content addressable memory. The content addressable memory includes a plurality of memory sections each configured to store data. Additionally, the content addressable memory includes a comparator configured to compare the stored data in each of the plurality of memory sections with search input data. The comparison may be in a time division multiplexed fashion. The comparator may be configured to compare the stored data in each of the plurality of memory sections with search input data in a corresponding one of a plurality of memory access cycles. The content addressable memory may include a state machine configured to control when the comparator compares the stored data in each of the plurality of memory sections with search input data based on a state of the state machine.
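The abstract above describes a CAM that compares its stored sections against the search input one section per memory access cycle, sequenced by a state machine. A minimal behavioral sketch of that time-division-multiplexed search (all class and method names here are illustrative; the patent describes hardware, not software):

```python
class MultiCycleCAM:
    """Toy model of a multi-cycle-search content addressable memory."""

    def __init__(self, sections):
        # sections: list of lists; each inner list holds the words
        # stored in one memory section.
        self.sections = sections
        self.state = 0  # state machine selecting the section to compare

    def search_cycle(self, search_data):
        """Compare search_data against one section, then advance the
        state. Returns (section_index, matching word offsets)."""
        idx = self.state
        matches = [i for i, word in enumerate(self.sections[idx])
                   if word == search_data]
        self.state = (self.state + 1) % len(self.sections)
        return idx, matches

    def search(self, search_data):
        """Full search: one memory access cycle per section."""
        hits = {}
        for _ in range(len(self.sections)):
            idx, matches = self.search_cycle(search_data)
            if matches:
                hits[idx] = matches
        return hits

cam = MultiCycleCAM([[0xA, 0xB], [0xC, 0xA], [0xD, 0xE]])
print(cam.search(0xA))  # {0: [0], 1: [1]}
```

Spreading the comparison over multiple cycles lets one comparator serve every section, trading search latency for comparator area and peak power.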

    12. RECONFIGURABLE FETCH PIPELINE (Invention Application, In Force)

    Publication No.: US20150347308A1

    Publication Date: 2015-12-03

    Application No.: US14287331

    Application Date: 2014-05-27

    Abstract: A particular method includes selecting between a first cache access mode and a second cache access mode based on a number of instructions stored at an issue queue, a number of active threads of an execution unit, or both. The method further includes performing a first cache access. When the first cache access mode is selected, performing the first cache access includes performing a tag access and performing a data array access after performing the tag access. When the second cache access mode is selected, performing the first cache access includes performing the tag access in parallel with the data array access.

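The abstract above selects between a serial cache access (tag lookup first, then a single data-array read) and a parallel access (tag and data arrays read at once) based on issue-queue occupancy and active thread count. A sketch of that trade-off, assuming an illustrative 4-way cache and invented thresholds (the patent does not specify a concrete selection policy):

```python
NUM_WAYS = 4  # assumed set associativity for this sketch

def select_mode(issue_queue_depth, active_threads,
                queue_threshold=8, thread_threshold=2):
    """Pick a cache access mode from pipeline occupancy.

    If plenty of instructions are queued or several threads are
    active, the extra latency of a serial access can be hidden, so
    the energy-saving serial mode is chosen; otherwise the faster
    parallel mode is chosen. Thresholds are hypothetical.
    """
    if (issue_queue_depth >= queue_threshold
            or active_threads >= thread_threshold):
        return "serial"
    return "parallel"

def access_cost(mode):
    """Return (cycles, data_ways_read) for one cache access."""
    if mode == "serial":
        return 2, 1         # tag access, then only the hit way's data
    return 1, NUM_WAYS      # tag access in parallel with every data way

mode = select_mode(issue_queue_depth=12, active_threads=1)
print(mode, access_cost(mode))  # serial (2, 1)
```

The serial mode reads only the way that hit, saving data-array energy at the cost of a cycle; the parallel mode spends that energy to return data a cycle sooner when the pipeline cannot tolerate the latency.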

    13. LATENCY-BASED POWER MODE UNITS FOR CONTROLLING POWER MODES OF PROCESSOR CORES, AND RELATED METHODS AND SYSTEMS (Invention Application, In Force)

    Publication No.: US20150301573A1

    Publication Date: 2015-10-22

    Application No.: US14258541

    Application Date: 2014-04-22

    Abstract: Latency-based power mode units for controlling power modes of processor cores, and related methods and systems are disclosed. In one aspect, the power mode units are configured to reduce the power provided to a processor core when the processor core has one or more threads in pending status and no threads in active status. An operand of an instruction being processed by a thread may be data in memory located outside the processor core. Because the processor core does not require as much power to operate while a thread waits on a request made outside the processor core, the power consumed by the processor core can be reduced during these waiting periods. Power can thus be conserved even while threads are being processed, provided that all threads being processed are in pending status, which can reduce the overall power consumption of the processor core and its corresponding CPU.

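The condition the abstract describes (at least one thread pending, none active, e.g. all threads waiting on off-core memory operands) reduces to a simple check over per-thread status. A minimal sketch, with invented state names and mode labels:

```python
# Hypothetical per-thread states sampled by the power mode unit.
ACTIVE, PENDING, IDLE = "active", "pending", "idle"

def select_power_mode(thread_states):
    """Return the core power mode implied by the thread statuses.

    Drops to a reduced-power mode only when at least one thread is
    pending (waiting on a request outside the core) and no thread
    is active; otherwise the core runs at full power.
    """
    pending = sum(1 for s in thread_states if s == PENDING)
    active = sum(1 for s in thread_states if s == ACTIVE)
    if pending >= 1 and active == 0:
        return "reduced"  # core only holds state while waiting
    return "full"

print(select_power_mode([PENDING, PENDING]))  # reduced
print(select_power_mode([PENDING, ACTIVE]))   # full
print(select_power_mode([IDLE, IDLE]))        # full (nothing pending)
```

Note the all-idle case stays at full power in this sketch: the technique targets latency periods where threads are mid-flight but stalled, which an idle-only policy would miss.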

    18. Latency-based power mode units for controlling power modes of processor cores, and related methods and systems (Granted Patent, In Force)

    Publication No.: US09552033B2

    Publication Date: 2017-01-24

    Application No.: US14258541

    Application Date: 2014-04-22

    Abstract: Latency-based power mode units for controlling power modes of processor cores, and related methods and systems are disclosed. In one aspect, the power mode units are configured to reduce the power provided to a processor core when the processor core has one or more threads in pending status and no threads in active status. An operand of an instruction being processed by a thread may be data in memory located outside the processor core. Because the processor core does not require as much power to operate while a thread waits on a request made outside the processor core, the power consumed by the processor core can be reduced during these waiting periods. Power can thus be conserved even while threads are being processed, provided that all threads being processed are in pending status, which can reduce the overall power consumption of the processor core and its corresponding CPU.


    19. Reconfigurable fetch pipeline (Granted Patent, In Force)

    Publication No.: US09529727B2

    Publication Date: 2016-12-27

    Application No.: US14287331

    Application Date: 2014-05-27

    Abstract: A particular method includes selecting between a first cache access mode and a second cache access mode based on a number of instructions stored at an issue queue, a number of active threads of an execution unit, or both. The method further includes performing a first cache access. When the first cache access mode is selected, performing the first cache access includes performing a tag access and performing a data array access after performing the tag access. When the second cache access mode is selected, performing the first cache access includes performing the tag access in parallel with the data array access.


    20. MULTIPLE CLUSTERED VERY LONG INSTRUCTION WORD PROCESSING CORE (Invention Application, In Force)

    Publication No.: US20160062770A1

    Publication Date: 2016-03-03

    Application No.: US14473947

    Application Date: 2014-08-29

    CPC classification number: G06F9/3885 G06F9/3851 G06F9/3853 G06F9/3891

    Abstract: A method includes identifying, at a scheduling unit, a resource conflict at a shared processing resource that is accessible by a first processing cluster and by a second processing cluster, where the first processing cluster, the second processing cluster, and the shared processing resource are included in a very long instruction word (VLIW) processing unit. The method also includes resolving the resource conflict.

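The abstract above covers identifying and resolving a conflict when both VLIW clusters target the same shared processing resource in a cycle. A sketch of one possible scheduling-unit policy; the fixed-priority resolution and all names here are assumptions, since the claim only requires that the conflict be identified and resolved:

```python
def schedule(requests):
    """Resolve shared-resource conflicts between VLIW clusters.

    requests: dict mapping cluster name -> resource it requests
    this cycle. Returns (granted, stalled) lists of cluster names.
    """
    shared = "shared_multiplier"  # example shared processing resource
    contenders = [c for c, r in requests.items() if r == shared]
    granted, stalled = [], []
    if len(contenders) > 1:
        # Resource conflict identified: grant one cluster by fixed
        # priority and stall the rest for a cycle.
        winner = sorted(contenders)[0]
        granted.append(winner)
        stalled.extend(c for c in contenders if c != winner)
    else:
        granted.extend(contenders)
    # Clusters using private resources always proceed.
    granted.extend(c for c, r in requests.items() if r != shared)
    return granted, stalled

g, s = schedule({"cluster0": "shared_multiplier",
                 "cluster1": "shared_multiplier"})
print(g, s)  # ['cluster0'] ['cluster1']
```

Sharing an expensive unit between clusters this way trades an occasional one-cycle stall for the area of duplicating the resource in every cluster.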
