STANDALONE SOFTWARE PERFORMANCE OPTIMIZER SYSTEM FOR HYBRID SYSTEMS
    3.
    Invention Application
    Legal Status: In Force

    Publication No.: US20100275206A1

    Publication Date: 2010-10-28

    Application No.: US12427746

    Filing Date: 2009-04-22

    IPC Classification: G06F9/46 G06F15/00

    Abstract: Standalone software performance optimizer systems for hybrid systems include a hybrid system having a plurality of processors, memory operably connected to the processors, an operating system including a dispatcher loaded into the memory, a multithreaded application read into the memory, and a static performance analysis program loaded into the memory; wherein the static performance analysis program instructs at least one processor to perform static performance analysis on each of the threads, the static performance analysis program instructs at least one processor to assign each thread to a CPU class based on the static performance analysis, and the static performance analysis program instructs at least one processor to store each thread's CPU class. An embodiment of the invention may also include the dispatcher optimally mapping threads to processors using thread CPU classes and remapping threads to processors when a runtime performance analysis classifies a thread differently from the static performance analysis.

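    The classification-and-dispatch idea in this abstract can be illustrated with a minimal C sketch. The CPU classes, the instruction-mix counters, and the scoring heuristic below are hypothetical illustrations, not details taken from the patent.

        #include <stddef.h>
        #include <stdio.h>

        /* Hypothetical CPU classes for a hybrid system. */
        typedef enum { CPU_CLASS_GENERAL, CPU_CLASS_ACCELERATOR } cpu_class_t;

        typedef struct {
            const char *name;
            int         fp_ops;      /* floating-point ops counted by static analysis */
            int         branch_ops;  /* branch instructions counted by static analysis */
            cpu_class_t cpu_class;   /* stored result of the classification */
        } thread_info_t;

        /* Static performance analysis: assign a CPU class from the instruction mix. */
        static cpu_class_t classify_thread(const thread_info_t *t)
        {
            /* Illustrative heuristic: FP-heavy threads go to the accelerator class. */
            return (t->fp_ops > t->branch_ops) ? CPU_CLASS_ACCELERATOR
                                               : CPU_CLASS_GENERAL;
        }

        int main(void)
        {
            thread_info_t threads[] = {
                { "worker-0", 900, 100, CPU_CLASS_GENERAL },
                { "worker-1",  50, 400, CPU_CLASS_GENERAL },
            };

            for (size_t i = 0; i < sizeof threads / sizeof threads[0]; i++) {
                threads[i].cpu_class = classify_thread(&threads[i]);  /* store the class */
                printf("%s -> class %d\n", threads[i].name, (int)threads[i].cpu_class);
            }
            return 0;
        }

    A dispatcher could then consult the stored class when choosing a processor, and a runtime analysis could reclassify and remap threads as the abstract describes.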

    System and method of arbitrating access of threads to shared resources within a data processing system
    4.
    Invention Application
    Legal Status: In Force

    Publication No.: US20070101333A1

    Publication Date: 2007-05-03

    Application No.: US11260611

    Filing Date: 2005-10-27

    IPC Classification: G06F9/46

    CPC Classification: G06F9/526 G06F9/485

    Abstract: A first collection of threads which represent a collection of tasks to be executed by at least one of a collection of processing units is monitored. In response to detecting a request by a first thread among the first collection of threads to access a shared resource locked by a second thread among the collection of threads, the first thread attempts to access a list associated with the shared resource. The list orders at least one thread among the collection of threads by priority of access to the shared resource. In response to determining the list is locked by a third thread among the collection of threads, the first thread is placed into a sleep state to be reawakened in a fixed period of time. In response to determining that at least one of the collection of processing units has entered into an idle state, the first thread is awakened from the sleep state before the fixed period of time has expired. Also, in response to awakening the first thread from the sleep state, the first thread is assigned to at least one of the collection of processing units and the first thread retries its attempt to access the list.

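    As a rough illustration of the arbitration decision described above, the sketch below models one attempt by the first thread as a single function. The outcome names and boolean inputs are invented for illustration; the real mechanism involves kernel sleep queues and timers rather than return codes.

        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical outcome codes for one arbitration attempt. */
        typedef enum {
            ACQUIRED_LIST,        /* got the per-resource priority list; enqueue by priority */
            SLEPT_FIXED_TIMEOUT,  /* list locked by a third thread; sleep for a fixed period */
            WOKEN_EARLY_BY_IDLE   /* a processing unit went idle; wake before the timeout    */
        } arbitration_result_t;

        /* Decision flow for a first thread that wants a resource held by a second
         * thread: try the list that orders waiters by priority, otherwise sleep,
         * waking early (and retrying the list) if any processing unit goes idle. */
        static arbitration_result_t arbitrate(bool list_locked_by_third_thread,
                                              bool some_processor_idle)
        {
            if (!list_locked_by_third_thread)
                return ACQUIRED_LIST;
            if (some_processor_idle)
                return WOKEN_EARLY_BY_IDLE;
            return SLEPT_FIXED_TIMEOUT;
        }

        int main(void)
        {
            printf("%d\n", arbitrate(false, false)); /* list free -> acquire and enqueue  */
            printf("%d\n", arbitrate(true, false));  /* list busy, no idle CPU -> sleep   */
            printf("%d\n", arbitrate(true, true));   /* list busy, idle CPU -> wake early */
            return 0;
        }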

    Weighted LRU for associative caches
    5.
    Invention Application
    Legal Status: Pending (Published)

    Publication No.: US20060282620A1

    Publication Date: 2006-12-14

    Application No.: US11152557

    Filing Date: 2005-06-14

    IPC Classification: G06F12/00

    CPC Classification: G06F12/128

    Abstract: The present invention provides a method, system, and apparatus for communicating to an associative cache which data is least important to keep. The method, system, and apparatus determine which cache line has the least important data so that this less important data is replaced before more important data. In a preferred embodiment, the method begins by determining the weight of each cache line within the cache. Then the cache line or lines with the lowest weight are determined.

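    A minimal sketch of the weighted replacement choice follows. The 4-way set, the numeric weights, and how weights would be assigned are assumptions for illustration, since the abstract only states that the lowest-weight line is selected for replacement.

        #include <stddef.h>
        #include <stdio.h>

        /* One line of a hypothetical 4-way set; a lower weight means the data
         * is less important to keep. */
        typedef struct {
            unsigned tag;
            unsigned weight;
        } cache_line_t;

        /* Pick the replacement victim: the line with the lowest weight. */
        static size_t select_victim(const cache_line_t *set, size_t ways)
        {
            size_t victim = 0;
            for (size_t i = 1; i < ways; i++)
                if (set[i].weight < set[victim].weight)
                    victim = i;
            return victim;
        }

        int main(void)
        {
            cache_line_t set[4] = {
                { 0x10, 7 }, { 0x20, 2 }, { 0x30, 5 }, { 0x40, 9 }
            };
            printf("evict way %zu\n", select_victim(set, 4));  /* prints "evict way 1" */
            return 0;
        }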

    Method and apparatus for establishing a cache footprint for shared processor logical partitions
    6.
    Invention Application
    Legal Status: Pending (Published)

    Publication No.: US20070033371A1

    Publication Date: 2007-02-08

    Application No.: US11197616

    Filing Date: 2005-08-04

    IPC Classification: G06F12/00

    Abstract: A computer implemented method, apparatus, and computer usable code for managing cache information in a logical partitioned data processing system. When a cache entry is selected for removal from the cache, a determination is made as to whether a unique identifier in a tag associated with the cache entry matches a previous unique identifier for the currently executing partition in the logical partitioned data processing system, and the tag is saved in a storage device if the partition identifier in the tag matches the previous unique identifier.

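    The tag-saving condition can be sketched as a small helper. The structure layout, field names, and side storage array are hypothetical and only illustrate the match-then-save step from the abstract.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        /* Hypothetical cache tag carrying the unique identifier of the partition
         * associated with the entry. */
        typedef struct {
            unsigned address_tag;
            unsigned partition_id;
        } cache_tag_t;

        /* Called when a cache entry is selected for removal: save its tag to a
         * storage area only if the identifier matches the previous unique
         * identifier for the currently executing partition. */
        static bool maybe_save_tag(const cache_tag_t *tag, unsigned prev_partition_id,
                                   cache_tag_t *saved, size_t *saved_count)
        {
            if (tag->partition_id != prev_partition_id)
                return false;                 /* belongs to a different partition */
            saved[(*saved_count)++] = *tag;   /* preserve the footprint information */
            return true;
        }

        int main(void)
        {
            cache_tag_t storage[8];
            size_t count = 0;
            cache_tag_t evicted = { 0xBEEF, 3 };
            printf("saved=%d count=%zu\n",
                   (int)maybe_save_tag(&evicted, 3, storage, &count), count);
            return 0;
        }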

    System and Method for Dynamically Adjusting Read Ahead Values Based Upon Memory Usage
    7.
    Invention Application
    Legal Status: Expired

    Publication No.: US20060288186A1

    Publication Date: 2006-12-21

    Application No.: US11463100

    Filing Date: 2006-08-08

    IPC Classification: G06F12/00

    CPC Classification: G06F12/023

    Abstract: A system and method for dynamically altering a Virtual Memory Manager's (VMM) Sequential-Access Read Ahead settings based upon current system memory conditions is provided. Normal VMM operations are performed using the Sequential-Access Read Ahead values set by the user. When low memory is detected, the system either turns off Sequential-Access Read Ahead operations or decreases the maximum page ahead (maxpgahead) value, based upon whether the amount of free space is simply low or has reached a critically low level. The altered VMM Sequential-Access Read Ahead state remains in effect until enough free space is available so that normal VMM Sequential-Access Read Ahead operations can be performed (at which point the altered Sequential-Access Read Ahead values are reset to their original levels).

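    The adjustment logic can be sketched as below. The thresholds, the halving of maxpgahead, and the mapping of "turn off" to the critically low case are assumptions made for illustration; the abstract only says the chosen action depends on whether free space is simply low or critically low.

        #include <stdio.h>

        /* Hypothetical free-page thresholds. */
        #define LOW_FREE_PAGES      1024
        #define CRITICAL_FREE_PAGES  256

        typedef struct {
            int read_ahead_enabled;
            int maxpgahead;          /* maximum pages read ahead per sequential stream */
        } vmm_settings_t;

        /* Adjust Sequential-Access Read Ahead based on current free memory,
         * restoring the user-set values once enough free space is available. */
        static void adjust_read_ahead(vmm_settings_t *s, int free_pages,
                                      const vmm_settings_t *user_defaults)
        {
            if (free_pages <= CRITICAL_FREE_PAGES) {
                s->read_ahead_enabled = 0;                     /* critically low: turn off */
            } else if (free_pages <= LOW_FREE_PAGES) {
                s->read_ahead_enabled = 1;
                s->maxpgahead = user_defaults->maxpgahead / 2; /* simply low: scale back */
            } else {
                *s = *user_defaults;                           /* normal: restore settings */
            }
        }

        int main(void)
        {
            vmm_settings_t defaults = { 1, 16 }, current = defaults;
            adjust_read_ahead(&current, 512, &defaults);
            printf("enabled=%d maxpgahead=%d\n",
                   current.read_ahead_enabled, current.maxpgahead);
            return 0;
        }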

    Method and apparatus for aging data in a cache
    8.
    Invention Application
    Legal Status: In Force

    Publication No.: US20070038809A1

    Publication Date: 2007-02-15

    Application No.: US11201642

    Filing Date: 2005-08-11

    IPC Classification: G06F12/00

    CPC Classification: G06F12/0897 G06F12/0891

    Abstract: A computer implemented method, apparatus, and computer usable code for managing cache data. A partition identifier is associated with a cache entry in a cache, wherein the partition identifier identifies the last partition accessing the cache entry. The partition identifier associated with the cache entry is compared with a previous partition identifier located in a processor register in response to the cache entry being moved into a lower level cache relative to the cache. The cache entry is marked if the partition identifier associated with the cache entry matches the previous partition identifier located in the processor register to form a marked cache entry, wherein the marked cache entry is aged at a slower rate relative to an unmarked cache entry.

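    A small sketch of the mark-and-age behaviour follows. The field names, the register value passed as a plain parameter, and the 2:1 aging ratio are illustrative assumptions, since the abstract specifies only that marked entries age more slowly than unmarked ones.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        /* Hypothetical cache entry: the partition identifier records the last
         * partition that accessed the entry; marked entries age more slowly. */
        typedef struct {
            unsigned partition_id;
            bool     marked;
            unsigned age;
        } cache_entry_t;

        /* When the entry moves to a lower-level cache, compare its partition
         * identifier with the previous identifier kept in a processor register. */
        static void on_castout(cache_entry_t *e, unsigned prev_partition_register)
        {
            e->marked = (e->partition_id == prev_partition_register);
        }

        /* Aging pass: marked entries age on every other tick, unmarked ones on
         * every tick (the 2:1 ratio is purely illustrative). */
        static void age_entries(cache_entry_t *entries, size_t n, unsigned tick)
        {
            for (size_t i = 0; i < n; i++)
                entries[i].age += entries[i].marked ? (tick % 2) : 1;
        }

        int main(void)
        {
            cache_entry_t e[2] = { { 5, false, 0 }, { 7, false, 0 } };
            on_castout(&e[0], 5);   /* matches previous partition -> marked     */
            on_castout(&e[1], 5);   /* different partition        -> not marked */
            for (unsigned t = 0; t < 4; t++)
                age_entries(e, 2, t);
            printf("marked age=%u, unmarked age=%u\n", e[0].age, e[1].age);
            return 0;
        }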

    Efficient memory update process for on-the-fly instruction translation for well behaved applications executing on a weakly-ordered processor
    9.
    Invention Application
    Legal Status: Expired

    Publication No.: US20060155936A1

    Publication Date: 2006-07-13

    Application No.: US11006371

    Filing Date: 2004-12-07

    IPC Classification: G06F13/00

    Abstract: A multiprocessor data processing system (MDPS) with a weakly-ordered architecture provides processing logic for substantially eliminating the issuing of sync instructions after every store instruction of a well-behaved application. Instructions of a well-behaved application are translated and executed by a weakly-ordered processor. The processing logic includes a lock address tracking utility (LATU), which provides an algorithm and a table of lock addresses, within which each lock address is stored when the lock is acquired by the weakly-ordered processor. When a store instruction is encountered in the instruction stream, the LATU compares the target address of the store instruction against the table of lock addresses. If the target address matches one of the lock addresses, indicating that the store instruction is the corresponding unlock instruction (or lock release instruction), a sync instruction is issued ahead of the store operation. The sync causes all values updated by the intermediate store operations to be flushed out to the point of coherency and be visible to all processors.

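    The lock-address check ahead of a store can be sketched as follows. The table size, the function names, and the use of plain C pointers in place of real hardware addresses are assumptions for illustration; the actual sync would be a processor instruction issued by the translation logic, not a C statement.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        #define MAX_LOCKS 16

        /* Hypothetical table of lock addresses maintained by the LATU. */
        static const void *lock_table[MAX_LOCKS];
        static size_t      lock_count;

        /* Record a lock address when the lock is acquired. */
        static void latu_record_lock(const void *addr)
        {
            if (lock_count < MAX_LOCKS)
                lock_table[lock_count++] = addr;
        }

        /* Before a store, check whether its target is a tracked lock address; if
         * so, a sync is issued ahead of the store so that earlier stores are
         * flushed to the point of coherency and become visible to all processors. */
        static bool latu_needs_sync(const void *store_target)
        {
            for (size_t i = 0; i < lock_count; i++)
                if (lock_table[i] == store_target)
                    return true;
            return false;
        }

        int main(void)
        {
            int lock_word = 1, data = 0;
            latu_record_lock(&lock_word);

            printf("store to data word: sync=%d\n", latu_needs_sync(&data));      /* 0 */
            printf("store to lock word: sync=%d\n", latu_needs_sync(&lock_word)); /* 1 */
            return 0;
        }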

    System and method for dynamically adjusting read ahead values based upon memory usage
    10.
    Invention Application
    Legal Status: Expired

    Publication No.: US20050235125A1

    Publication Date: 2005-10-20

    Application No.: US10828455

    Filing Date: 2004-04-20

    IPC Classification: G06F12/00 G06F12/02 G06F12/08

    CPC Classification: G06F12/023

    Abstract: A system and method for dynamically altering a Virtual Memory Manager's (VMM) Sequential-Access Read Ahead settings based upon current system memory conditions is provided. Normal VMM operations are performed using the Sequential-Access Read Ahead values set by the user. When low memory is detected, the system either turns off Sequential-Access Read Ahead operations or decreases the maximum page ahead (maxpgahead) value, based upon whether the amount of free space is simply low or has reached a critically low level. The altered VMM Sequential-Access Read Ahead state remains in effect until enough free space is available so that normal VMM Sequential-Access Read Ahead operations can be performed (at which point the altered Sequential-Access Read Ahead values are reset to their original levels).
