METHOD AND APPARATUS FOR MEMORY ACCESS UNITS INTERACTION AND OPTIMIZED MEMORY SCHEDULING
    1.
    Invention Application
    METHOD AND APPARATUS FOR MEMORY ACCESS UNITS INTERACTION AND OPTIMIZED MEMORY SCHEDULING (Pending - Published)

    Publication Number: US20120144124A1

    Publication Date: 2012-06-07

    Application Number: US12962042

    Filing Date: 2010-12-07

    CPC classification number: G06F12/0862

    Abstract: A method and an apparatus for modulating the prefetch training of a memory-side prefetch unit (MS-PFU) are described. An MS-PFU trains on memory access requests it receives from processors and their processor-side prefetch units (PS-PFUs). In the method and apparatus, an MS-PFU modulates its training based on one or more of a PS-PFU memory access request, a PS-PFU memory access request type, memory utilization, or the accuracy of MS-PFU prefetch requests.
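    To make the modulation concrete, the following is a minimal Python sketch of a memory-side prefetcher that gates its own training on the factors listed in the abstract; the class name, request fields, and thresholds are invented for illustration and are not taken from the patent.

    class MemorySidePrefetcher:
        """Hypothetical MS-PFU model that decides whether to train on a request."""

        def __init__(self, util_threshold=0.85, accuracy_floor=0.50):
            self.util_threshold = util_threshold   # stop training when memory is saturated
            self.accuracy_floor = accuracy_floor   # stop training when past prefetches miss
            self.issued = 0                        # prefetches sent to memory
            self.useful = 0                        # prefetches later hit by demand requests

        def accuracy(self):
            return self.useful / self.issued if self.issued else 1.0

        def should_train(self, request, memory_utilization):
            # Ignore requests generated by processor-side prefetchers (PS-PFUs)
            # so the MS-PFU trains only on demand traffic.
            if request["source"] == "ps_pfu":
                return False
            # Skip request types that rarely predict future demand, e.g. writebacks.
            if request["type"] == "writeback":
                return False
            # Throttle training when memory bandwidth is already saturated
            # or when recent MS-PFU prefetches have been inaccurate.
            if memory_utilization > self.util_threshold:
                return False
            return self.accuracy() >= self.accuracy_floor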


    Processor power management and method
    2.
    Granted Invention Patent
    Processor power management and method (In Force)

    Publication Number: US08195887B2

    Publication Date: 2012-06-05

    Application Number: US12356624

    Filing Date: 2009-01-21

    Abstract: A data processing device is disclosed that includes multiple processing cores, where each core is associated with a corresponding cache. When a processing core is placed into a first sleep mode, the data processing device initiates a first phase. If any cache probes are received at the processing core during the first phase, the cache probes are serviced. At the end of the first phase, the cache corresponding to the processing core is flushed, and subsequent cache probes are not serviced at the cache. Because it does not service the subsequent cache probes, the processing core can therefore enter another sleep mode, allowing the data processing device to conserve additional power.
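    As a rough illustration of the two-phase sequence, here is a small Python sketch with invented names and timings: probes are serviced during a shallow first phase, then the cache is flushed so a deeper sleep mode can be entered without further probe traffic.

    import time

    class Core:
        def __init__(self, cache_lines):
            self.cache = dict(cache_lines)   # address -> data
            self.state = "active"

        def enter_sleep(self, first_phase_seconds, probe_queue):
            # Phase 1: shallow sleep; the cache is still valid, so probes are serviced.
            self.state = "shallow_sleep"
            deadline = time.monotonic() + first_phase_seconds
            while time.monotonic() < deadline:
                while probe_queue:
                    addr = probe_queue.pop(0)
                    _ = self.cache.get(addr)   # answer the probe from the local cache
                time.sleep(0.001)
            # End of phase 1: flush the cache so no later probe needs this core.
            self.cache.clear()
            # Phase 2: deeper sleep; subsequent probes are not serviced here.
            self.state = "deep_sleep"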


    Blocking aggressive neighbors in a cache subsystem
    3.
    Granted Invention Patent
    Blocking aggressive neighbors in a cache subsystem (In Force)

    Publication Number: US07603522B1

    Publication Date: 2009-10-13

    Application Number: US11432706

    Filing Date: 2006-05-10

    CPC classification number: G06F12/126 G06F12/084 G06F12/128

    Abstract: A system and method for managing a cache subsystem. A system comprises a plurality of processing entities, a cache shared by the plurality of processing entities, and circuitry configured to manage allocations of data into the cache. Cache controller circuitry is configured to allocate data in the cache at a less favorable position in the replacement stack in response to determining that a processing entity which corresponds to the allocated data has relatively poor cache behavior compared to other processing entities. The circuitry is configured to track a relative hit rate for each processing entity, such as a thread or processor core. A figure of merit may be determined for each processing entity which reflects how well a corresponding processing entity is behaving with respect to the cache. Processing entities which have a relatively low figure of merit may have their data allocated in the shared cache at a lower level in the cache replacement stack.
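    The insertion-position idea can be sketched as follows; the class, the 0.5 cutoff, and the hit-rate figure of merit are assumptions made only to illustrate the mechanism.

    class SharedCacheSet:
        def __init__(self, ways=16):
            self.stack = []          # index 0 = MRU position, end = LRU position
            self.ways = ways

        def insert(self, line, owner, figure_of_merit):
            # Well-behaved owners allocate at the MRU position; owners with a poor
            # figure of merit allocate near the LRU end and so are evicted sooner.
            position = 0 if figure_of_merit >= 0.5 else max(len(self.stack) - 1, 0)
            self.stack.insert(position, (line, owner))
            if len(self.stack) > self.ways:
                self.stack.pop()     # evict from the LRU end

    def figure_of_merit(hits, accesses):
        # Relative hit rate as a simple stand-in for the per-entity figure of merit.
        return hits / accesses if accesses else 1.0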


    Method of determining event based energy weights for digital power estimation
    5.
    Granted Invention Patent
    Method of determining event based energy weights for digital power estimation (In Force)

    Publication Number: US08484593B2

    Publication Date: 2013-07-09

    Application Number: US12838767

    Filing Date: 2010-07-19

    CPC classification number: G06F17/5022 G06F2217/78

    Abstract: A method for determining event-based energy weights for digital power estimation includes obtaining a reference energy value corresponding to a power consumed by at least a portion of an integrated circuit (IC) device during operation. The method includes determining and selecting, from the set of all signals within the IC, a subset of signals that correlates to energy use within the IC. The method includes determining an activity factor of each signal in the subset by monitoring each signal while simulating execution of a particular set of instructions. The method includes determining a weight factor, or at least an approximation of a weight factor, for each signal in the subset by solving, within a predetermined accuracy, a multivariable equation in which the reference energy value equals a weighted sum of the activities of the signals of the selected subset multiplied by their respective weight factors.
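    The final step can be read as a least-squares fit of the weights. The NumPy sketch below uses made-up activity counts and reference energies purely to show the shape of that computation; it is not the patent's solver.

    import numpy as np

    # activity[i][j] = toggle count of selected signal j during simulated trace i
    activity = np.array([
        [120.0, 40.0,  5.0],
        [ 80.0, 60.0, 10.0],
        [200.0, 10.0,  2.0],
        [ 90.0, 90.0, 30.0],
    ])

    # reference_energy[i] = measured energy for trace i (illustrative values)
    reference_energy = np.array([1.30, 1.10, 1.75, 1.60])

    # Fit weights w so that activity @ w approximates reference_energy, i.e. an
    # approximation of the weight factors within the achievable accuracy.
    weights, residuals, _, _ = np.linalg.lstsq(activity, reference_energy, rcond=None)
    print("estimated energy weights:", weights)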


    SPLIT TRAFFIC ROUTING IN A PROCESSOR
    6.
    Invention Application
    SPLIT TRAFFIC ROUTING IN A PROCESSOR (Pending - Published)

    Publication Number: US20120155273A1

    Publication Date: 2012-06-21

    Application Number: US12968857

    Filing Date: 2010-12-15

    CPC classification number: G06F15/17312

    Abstract: A multi-chip module configuration includes two processors, each having two nodes, each node including multiple cores or compute units. Each node is connected to the other nodes by links that are high bandwidth or low bandwidth. Routing of traffic between the nodes is controlled at each node according to a routing table and/or a control register that optimize bandwidth usage and traffic congestion control.
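    A toy Python model of the per-node routing tables follows; the four-node topology, link labels, and table contents are assumptions for illustration, not the configuration claimed in the application.

    # Link bandwidth class between directly connected nodes (illustrative).
    links = {
        (0, 1): "high", (2, 3): "high", (0, 2): "high",
        (1, 3): "low",  (0, 3): "low",  (1, 2): "low",
    }

    # routing_table[src][dst] -> next hop.  Traffic can be split: some flows are
    # steered over a two-hop high-bandwidth path even when a direct low-bandwidth
    # link exists, balancing bandwidth usage against congestion.
    routing_table = {
        0: {1: 1, 2: 2, 3: 2},   # node 0 reaches node 3 via node 2 (high + high)
        1: {0: 0, 2: 0, 3: 3},   # node 1 uses its direct low-bandwidth link to node 3
        2: {0: 0, 1: 0, 3: 3},
        3: {0: 2, 1: 1, 2: 2},
    }

    def route(src, dst):
        path = [src]
        while src != dst:
            src = routing_table[src][dst]
            path.append(src)
        return path

    print(route(0, 3))   # [0, 2, 3]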


    METHOD OF DETERMINING EVENT BASED ENERGY WEIGHTS FOR DIGITAL POWER ESTIMATION
    8.
    Invention Application
    METHOD OF DETERMINING EVENT BASED ENERGY WEIGHTS FOR DIGITAL POWER ESTIMATION (In Force)

    Publication Number: US20120017188A1

    Publication Date: 2012-01-19

    Application Number: US12838767

    Filing Date: 2010-07-19

    CPC classification number: G06F17/5022 G06F2217/78

    Abstract: A method for determining event-based energy weights for digital power estimation includes obtaining a reference energy value corresponding to a power consumed by at least a portion of an integrated circuit (IC) device during operation. The method includes determining and selecting, from the set of all signals within the IC, a subset of signals that correlates to energy use within the IC. The method includes determining an activity factor of each signal in the subset by monitoring each signal while simulating execution of a particular set of instructions. The method includes determining a weight factor, or at least an approximation of a weight factor, for each signal in the subset by solving, within a predetermined accuracy, a multivariable equation in which the reference energy value equals a weighted sum of the activities of the signals of the selected subset multiplied by their respective weight factors.


    PROCESSOR POWER MANAGEMENT AND METHOD
    9.
    Invention Application
    PROCESSOR POWER MANAGEMENT AND METHOD (In Force)

    Publication Number: US20100185820A1

    Publication Date: 2010-07-22

    Application Number: US12356624

    Filing Date: 2009-01-21

    Abstract: A data processing device is disclosed that includes multiple processing cores, where each core is associated with a corresponding cache. When a processing core is placed into a first sleep mode, the data processing device initiates a first phase. If any cache probes are received at the processing core during the first phase, the cache probes are serviced. At the end of the first phase, the cache corresponding to the processing core is flushed, and subsequent cache probes are not serviced at the cache. Because it does not service the subsequent cache probes, the processing core can therefore enter another sleep mode, allowing the data processing device to conserve additional power.


    Mostly exclusive shared cache management policies
    10.
    Granted Invention Patent
    Mostly exclusive shared cache management policies (In Force)

    Publication Number: US07640399B1

    Publication Date: 2009-12-29

    Application Number: US11432707

    Filing Date: 2006-05-10

    CPC classification number: G06F12/0811

    Abstract: A system and method for managing a memory system. A system includes a plurality of processing entities and a cache which is shared by the processing entities. Responsive to a replacement event, circuitry may identify data entries of the shared cache which are candidates for replacement. Data entries which have been identified as candidates for replacement may be removed as candidates in response to detecting that the data entry corresponds to data which is shared by at least two of the plurality of processing entities. The circuitry may maintain an indication as to which of the processing entities caused an initial allocation of data into the shared cache. When the circuitry detects that a particular data item is accessed by a processing entity other than the processing entity which caused the allocation of the given data item, the data item may be classified as shared data.
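    A minimal sketch, assuming an invented data structure, of the ownership tracking and shared-data exemption described above:

    class SharedCache:
        def __init__(self):
            self.entries = {}   # address -> {"owner": core_id, "shared": bool}

        def allocate(self, address, core_id):
            # Remember which processing entity caused the initial allocation.
            self.entries[address] = {"owner": core_id, "shared": False}

        def access(self, address, core_id):
            entry = self.entries.get(address)
            if entry and core_id != entry["owner"]:
                entry["shared"] = True   # touched by a second entity: classify as shared

        def replacement_candidates(self, candidates):
            # Shared entries are removed from the candidate list, so the cache
            # stays "mostly exclusive": private data is preferred for eviction.
            return [a for a in candidates if not self.entries[a]["shared"]]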

