3. Instruction set architecture extensions for performing power versus performance tradeoffs
    Granted Patent (Expired)

    Publication No.: US08589665B2

    Publication Date: 2013-11-19

    Application No.: US12788940

    Filing Date: 2010-05-27

    IPC Class: G06F9/00

    Abstract: Mechanisms are provided for processing an instruction in a processor of a data processing system. The mechanisms receive, in a processor of the data processing system, an instruction that includes power/performance tradeoff information. Based on that information, the mechanisms determine power/performance tradeoff priorities or criteria specifying whether power conservation or performance is prioritized for execution of the instruction. The mechanisms then process the instruction in accordance with those priorities or criteria.
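
    The per-instruction tradeoff hint described in this abstract can be sketched in a few lines. This is a hypothetical model for illustration only: the two-bit hint field, its position in bits 30-31, and the policy names are assumptions, not the encoding claimed by the patent.

```python
# Hypothetical sketch of a per-instruction power/performance hint.
# The 2-bit field in bits 30-31 and the policy names are illustrative
# assumptions, not the encoding claimed by the patent.

PERF_FIRST, BALANCED, POWER_FIRST, INHERIT = 0, 1, 2, 3

def decode_hint(instruction_word: int) -> int:
    """Extract the assumed 2-bit tradeoff hint from bits 30-31."""
    return (instruction_word >> 30) & 0b11

def execution_policy(instruction_word: int, thread_default: int) -> int:
    """Resolve the per-instruction hint against a thread-level default."""
    hint = decode_hint(instruction_word)
    return thread_default if hint == INHERIT else hint

# An instruction encoded with the POWER_FIRST hint in its top two bits:
insn = (POWER_FIRST << 30) | 0x1234
assert execution_policy(insn, thread_default=PERF_FIRST) == POWER_FIRST

# An instruction with no explicit preference inherits the thread default:
assert execution_policy(INHERIT << 30, thread_default=BALANCED) == BALANCED
```

    Folding the hint into otherwise-unused opcode bits, as modeled here, is one way such an extension could stay backward compatible with hint-unaware cores, which would simply ignore the field.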

4. Instruction Set Architecture Extensions for Performing Power Versus Performance Tradeoffs
    Patent Application (Expired)

    Publication No.: US20110296149A1

    Publication Date: 2011-12-01

    Application No.: US12788940

    Filing Date: 2010-05-27

    IPC Class: G06F9/318

    Abstract: Mechanisms are provided for processing an instruction in a processor of a data processing system. The mechanisms receive, in a processor of the data processing system, an instruction that includes power/performance tradeoff information. Based on that information, the mechanisms determine power/performance tradeoff priorities or criteria specifying whether power conservation or performance is prioritized for execution of the instruction. The mechanisms then process the instruction in accordance with those priorities or criteria.

5. Read and write aware cache with a read portion and a write portion of a tag and status array
    Granted Patent (In force)

    Publication No.: US08843705B2

    Publication Date: 2014-09-23

    Application No.: US13572916

    Filing Date: 2012-08-13

    IPC Class: G06F12/08

    Abstract: A mechanism is provided for implementing a read and write aware cache. The mechanism partitions a large cache into a read-often region and a write-often region and factors read/write frequency into a non-uniform cache architecture (NUCA) replacement policy. A frequently written cache line is placed in one of the farther banks, while a frequently read cache line is placed in one of the closer banks. The size ratio between the read-often and write-often regions may be static or dynamic, and the boundary between the two regions may be distinct or fuzzy.
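
    The placement policy in this abstract can be modeled compactly. The following is a toy sketch under stated assumptions: the bank numbering, the counter-based read/write heuristic, and the class and constant names are all invented for illustration and are not taken from the patent.

```python
# Toy model of read/write-aware placement in a NUCA cache: per-line
# read and write counters steer write-often lines to far banks and
# read-often lines to near banks. Bank layout and the counter-based
# heuristic are assumptions for this sketch.

from collections import defaultdict

NEAR_BANKS = [0, 1]  # low-latency banks, preferred for read-often lines
FAR_BANKS = [2, 3]   # higher-latency banks, acceptable for write-often lines

class RWAwareCache:
    def __init__(self):
        self.reads = defaultdict(int)
        self.writes = defaultdict(int)

    def access(self, line: int, is_write: bool) -> None:
        # Track how often each line is read versus written.
        if is_write:
            self.writes[line] += 1
        else:
            self.reads[line] += 1

    def place(self, line: int) -> int:
        # Write-often lines go to a far bank, read-often lines to a near one.
        banks = FAR_BANKS if self.writes[line] > self.reads[line] else NEAR_BANKS
        return banks[line % len(banks)]

cache = RWAwareCache()
for _ in range(10):
    cache.access(0xA0, is_write=False)  # read-often line
    cache.access(0xB0, is_write=True)   # write-often line
assert cache.place(0xA0) in NEAR_BANKS
assert cache.place(0xB0) in FAR_BANKS
```

    A sharp counter comparison like this corresponds to the "distinct" boundary the abstract mentions; a "fuzzy" boundary could instead blend the two regions probabilistically near the threshold.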

6. Techniques for dynamically sharing a fabric to facilitate off-chip communication for multiple on-chip units
    Granted Patent (Expired)

    Publication No.: US08346988B2

    Publication Date: 2013-01-01

    Application No.: US12786716

    Filing Date: 2010-05-25

    IPC Class: G06F3/00 G06F13/00

    Abstract: A technique for sharing a fabric to facilitate off-chip communication for on-chip units includes dynamically assigning a first unit, which implements a first communication protocol, to a first portion of the fabric when private fabrics are indicated for the on-chip units. The technique also includes dynamically assigning a second unit, which implements a second communication protocol, to a second portion of the fabric when the private fabrics are indicated. In this case, the first and second units are integrated in the same chip and the two protocols are different. When private fabrics are not indicated, the technique dynamically assigns the first unit or the second unit to the first and second portions of the fabric, based on the off-chip traffic requirements of the two units.
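
    The assignment decision this abstract describes reduces to a small piece of arbitration logic. The sketch below is a loose illustration: the unit and portion names, and the "busier unit wins" traffic heuristic, are assumptions made here, not the patent's arbitration scheme.

```python
# Rough sketch of the fabric-sharing decision: in private mode each
# on-chip unit keeps its own slice of the off-chip fabric; in shared
# mode both slices go to whichever unit currently has more off-chip
# traffic. Names and the traffic heuristic are illustrative assumptions.

def assign_fabric(private_mode: bool, traffic_a: int, traffic_b: int) -> dict:
    """Return which unit drives each of two fabric portions."""
    if private_mode:
        # Private fabrics: each unit keeps its dedicated portion.
        return {"portion1": "unit_a", "portion2": "unit_b"}
    # Shared fabric: both portions go to the busier unit.
    busier = "unit_a" if traffic_a >= traffic_b else "unit_b"
    return {"portion1": busier, "portion2": busier}

assert assign_fabric(True, 5, 50) == {"portion1": "unit_a", "portion2": "unit_b"}
assert assign_fabric(False, 5, 50) == {"portion1": "unit_b", "portion2": "unit_b"}
```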

7. Read and write aware cache storing cache lines in a read-often portion and a write-often portion
    Granted Patent (Expired)

    Publication No.: US08271729B2

    Publication Date: 2012-09-18

    Application No.: US12562242

    Filing Date: 2009-09-18

    IPC Class: G06F12/00

    Abstract: A mechanism is provided for implementing a read and write aware cache. The mechanism partitions a large cache into a read-often region and a write-often region and factors read/write frequency into a non-uniform cache architecture (NUCA) replacement policy. A frequently written cache line is placed in one of the farther banks, while a frequently read cache line is placed in one of the closer banks. The size ratio between the read-often and write-often regions may be static or dynamic, and the boundary between the two regions may be distinct or fuzzy.

8. Read and Write Aware Cache
    Patent Application (Expired)

    Publication No.: US20110072214A1

    Publication Date: 2011-03-24

    Application No.: US12562242

    Filing Date: 2009-09-18

    IPC Class: G06F12/08 G06F12/00

    Abstract: A mechanism is provided for implementing a read and write aware cache. The mechanism partitions a large cache into a read-often region and a write-often region and factors read/write frequency into a non-uniform cache architecture (NUCA) replacement policy. A frequently written cache line is placed in one of the farther banks, while a frequently read cache line is placed in one of the closer banks. The size ratio between the read-often and write-often regions may be static or dynamic, and the boundary between the two regions may be distinct or fuzzy.

9. Reducing Energy Consumption of Set Associative Caches by Reducing Checked Ways of the Set Association
    Patent Application (Expired)

    Publication No.: US20110296112A1

    Publication Date: 2011-12-01

    Application No.: US12787122

    Filing Date: 2010-05-25

    IPC Class: G06F12/08 G06F12/00

    Abstract: Mechanisms are provided for accessing a set associative cache of a data processing system. A set of cache lines in the set associative cache, associated with the address of a request, is identified. Based on a determined mode of operation for the set, the following may be performed: determining whether a cache hit occurs in a preferred cache line without accessing the other cache lines in the set; retrieving data from the preferred cache line without accessing the other cache lines, if there is a cache hit in the preferred cache line; and accessing each of the other cache lines to determine whether any of them hits, only in response to a cache miss in the preferred cache line(s).
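
    The lookup order this abstract describes can be sketched as a small function. This is a simplified model under stated assumptions: the flat list of tags standing in for a set's ways, and the notion of a single preferred way per set, are choices made for this sketch rather than details from the patent.

```python
# Sketch of the way-reduction lookup: in an energy-saving mode, probe
# only the set's preferred way first and touch the remaining ways only
# on a miss there. The flat tag list and single preferred way are
# simplifying assumptions for illustration.

def lookup(set_ways, tag, preferred_way, energy_saving):
    """Return (hit, ways_checked) for one set-associative lookup."""
    if energy_saving:
        if set_ways[preferred_way] == tag:
            return True, 1  # hit without powering up the other ways
        # Miss in the preferred way: only now probe the remaining ways.
        others = [w for w in range(len(set_ways)) if w != preferred_way]
        for checked, w in enumerate(others, start=2):
            if set_ways[w] == tag:
                return True, checked
        return False, len(set_ways)
    # Normal mode: all ways are checked in parallel.
    return tag in set_ways, len(set_ways)

ways = ["t3", "t7", "t1", "t9"]
assert lookup(ways, "t7", preferred_way=1, energy_saving=True) == (True, 1)
assert lookup(ways, "t9", preferred_way=1, energy_saving=True) == (True, 4)
assert lookup(ways, "t0", preferred_way=1, energy_saving=True) == (False, 4)
```

    The energy saving comes from the first case: when the preferred way hits, the tag and data arrays of the other ways are never read, at the cost of a serialized, slower lookup whenever the preferred way misses.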