Performance of emerging applications in a virtualized environment using transient instruction streams
    1.
    Granted Invention Patent (In Force)

    Publication Number: US09323527B2

    Publication Date: 2016-04-26

    Application Number: US12905208

    Filing Date: 2010-10-15

    CPC classification number: G06F9/30054 G06F9/30185 G06F9/3802

    Abstract: A method, system and computer-usable medium are disclosed for managing transient instruction streams. Transient flags are defined in Branch-and-Link (BRL) instructions that are known to be infrequently executed. A bit is likewise set in a Special Purpose Register (SPR) of the hardware (e.g., a core) that is executing an instruction request thread. Subsequent fetches or prefetches in the request thread are treated as transient and are not written to lower-level caches. If an instruction is non-transient, and if a lower-level cache is inclusive of the L1 instruction cache, a fetch or prefetch miss that is obtained from memory may be written in both the L1 and the lower-level cache. If it is not inclusive, a cast-out from the L1 instruction cache may be written in the lower-level cache.

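    The cache write policy described in this abstract can be illustrated with a short sketch. The Python below is only a model of the decision logic (transient fetches bypass lower-level caches; non-transient fills and cast-outs depend on whether the lower-level cache is inclusive of the L1 instruction cache); the function and parameter names are assumptions, not taken from the patent.

    # Minimal sketch, not the patent's implementation: where a fetched line is
    # written, given its transient marking and the L2 inclusion policy.
    def place_fetched_line(line, l1_icache, l2_cache, transient, l2_inclusive):
        """Decide where a line fetched from memory is written."""
        l1_icache.add(line)              # the requesting L1 always receives the line
        if transient:
            return                       # transient stream: do not pollute lower levels
        if l2_inclusive:
            l2_cache.add(line)           # an inclusive L2 must also hold the line

    def cast_out_from_l1(line, l2_cache, transient, l2_inclusive):
        """On L1 eviction, a non-transient line may be kept by a non-inclusive L2."""
        if not transient and not l2_inclusive:
            l2_cache.add(line)

    # Example: a transient fetch fills only the L1 instruction cache.
    l1, l2 = set(), set()
    place_fetched_line("0x40A0", l1, l2, transient=True, l2_inclusive=False)
    print(sorted(l1), sorted(l2))        # ['0x40A0'] []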

    MIXED OPERATING PERFORMANCE MODE LPAR CONFIGURATION
    2.
    Invention Application (In Force)

    Publication Number: US20110161979A1

    Publication Date: 2011-06-30

    Application Number: US12650909

    Filing Date: 2009-12-31

    CPC classification number: G06F9/5077

    Abstract: Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode.

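    The pairing step described in this abstract amounts to matching each logical partition's performance criterion against the operating performance mode of the multi-core processing units. The sketch below illustrates that matching only; the unit names, mode names, and the exact criterion test are assumptions for illustration.

    # Illustrative sketch: associate each LPAR with the multi-core processing
    # units whose operating performance mode satisfies the LPAR's criterion.
    PROCESSING_UNITS = {
        "MCM0": "throughput",      # mode each multi-core unit is configured in
        "MCM1": "throughput",
        "MCM2": "single-thread",
        "MCM3": "single-thread",
    }

    LPAR_CRITERIA = {
        "LPAR_A": "single-thread", # workload favors per-thread performance
        "LPAR_B": "throughput",    # workload favors many concurrent threads
    }

    def assign_units(units, criteria):
        return {
            lpar: [u for u, mode in units.items() if mode == wanted]
            for lpar, wanted in criteria.items()
        }

    print(assign_units(PROCESSING_UNITS, LPAR_CRITERIA))
    # {'LPAR_A': ['MCM2', 'MCM3'], 'LPAR_B': ['MCM0', 'MCM1']}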

    Token swapping for hot spot management
    4.
    Invention Application (In Force)

    Publication Number: US20050138254A1

    Publication Date: 2005-06-23

    Application Number: US10738722

    Filing Date: 2003-12-17

    CPC classification number: G06F13/37

    Abstract: A method and apparatus are provided for efficiently managing hot spots in a resource-managed computer system. The system utilizes a controller, a series of requester groups, and a series of loan registers. The controller is configured to allocate, and to reallocate, resources among the requester groups to efficiently manage the computer system. The loan registers account for reallocated resources so that the intended preallocation of shared resources is closely maintained. Hence, the computer system is able to operate efficiently while preventing any single requester or group of requesters from monopolizing shared resources.

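    The loan-register bookkeeping described in this abstract can be sketched as a controller that lends unused capacity from one requester group to another and records each loan so the intended preallocation can later be restored. The class and field names below are assumptions, not the patent's design.

    # Rough sketch of hot-spot management with loan registers.
    class Controller:
        def __init__(self, allocation):
            self.allocation = dict(allocation)  # intended preallocation per group
            self.available = dict(allocation)   # capacity currently held per group
            self.loans = []                     # loan registers: (lender, borrower, amount)

        def lend(self, lender, borrower, amount):
            """Move capacity from an idle group to a hot group and record the loan."""
            if self.available[lender] < amount:
                raise ValueError("lender has too little free capacity")
            self.available[lender] -= amount
            self.available[borrower] += amount
            self.loans.append((lender, borrower, amount))

        def repay_all(self):
            """Undo recorded loans so usage returns to the intended preallocation."""
            while self.loans:
                lender, borrower, amount = self.loans.pop()
                self.available[lender] += amount
                self.available[borrower] -= amount

    ctrl = Controller({"group0": 8, "group1": 8})
    ctrl.lend("group0", "group1", 3)             # group1 is a hot spot
    print(ctrl.available)                        # {'group0': 5, 'group1': 11}
    ctrl.repay_all()
    print(ctrl.available == ctrl.allocation)     # True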

    Flexible use of extended cache using a partition cache footprint
    6.
    Granted Invention Patent (Expired)

    Publication Number: US08438338B2

    Publication Date: 2013-05-07

    Application Number: US12856682

    Filing Date: 2010-08-15

    CPC classification number: G06F12/0811 G06F12/0284 G06F2212/502 G06F2212/604

    Abstract: An approach is provided for identifying cache extension sizes that correspond to different partitions that are running on a computer system. The approach extends a first hardware cache associated with a first processing core that is included in the processor's silicon substrate with a first memory allocation from a system memory area, with the system memory area being external to the silicon substrate and the first memory allocation corresponding to one of the cache extension sizes, that of one of the partitions running on the computer system. The approach further extends a second hardware cache associated with a second processing core, also included in the processor's silicon substrate, with a second memory allocation from the system memory area, with the second memory allocation corresponding to another of the cache extension sizes, that of a different partition being executed by the second processing core.

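    The per-partition sizing step in this abstract reduces to a lookup: each core's hardware cache is extended with a slice of system memory whose size is the cache extension size of the partition running on that core. The partition names, core names, and sizes below are hypothetical.

    # Hypothetical sketch of planning cache extensions out of system memory.
    CACHE_EXTENSION_SIZES = {     # per-partition extension size, in MiB
        "partition_db": 256,
        "partition_web": 64,
    }

    CORE_TO_PARTITION = {
        "core0": "partition_db",  # core0's hardware cache serves partition_db
        "core1": "partition_web", # core1's hardware cache serves partition_web
    }

    def plan_extensions(core_map, sizes):
        """Return the system-memory allocation that extends each core's cache."""
        return {core: sizes[partition] for core, partition in core_map.items()}

    print(plan_extensions(CORE_TO_PARTITION, CACHE_EXTENSION_SIZES))
    # {'core0': 256, 'core1': 64}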

    Two partition accelerator and application of tiered flash to cache hierarchy in partition acceleration
    7.
    Granted Invention Patent (Expired)

    Publication Number: US08417889B2

    Publication Date: 2013-04-09

    Application Number: US12508621

    Filing Date: 2009-07-24

    CPC classification number: G06F12/0811 G06F2212/1024 G06F2212/1032

    Abstract: An approach is provided to identify a disabled processing core and an active processing core from a set of processing cores included in a processing node. Each of the processing cores is assigned a cache memory. The approach extends a memory map of the cache memory assigned to the active processing core to include the cache memory assigned to the disabled processing core. A first amount of data that is used by a first process is stored by the active processing core to the cache memory assigned to the active processing core. A second amount of data is stored by the active processing core to the cache memory assigned to the disabled processing core using the extended memory map.

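    The memory-map extension in this abstract can be pictured as the active core seeing two cache regions: its own cache and the cache left behind by the disabled core, with different data sets steered to each. The sketch below is a simplified model under that reading; the class, eviction policy, and names are not from the patent.

    # Simplified sketch: an active core's cache map extended over a disabled
    # core's cache, with data for two workloads placed in the two regions.
    class ExtendedCache:
        def __init__(self, own_capacity, borrowed_capacity):
            self.regions = {
                "own": {"capacity": own_capacity, "data": {}},
                "borrowed": {"capacity": borrowed_capacity, "data": {}},  # disabled core's cache
            }

        def store(self, region, key, value):
            slot = self.regions[region]
            if len(slot["data"]) >= slot["capacity"]:
                slot["data"].pop(next(iter(slot["data"])))  # naive eviction, for the sketch only
            slot["data"][key] = value

    cache = ExtendedCache(own_capacity=2, borrowed_capacity=2)
    cache.store("own", "proc1_line0", "A")       # first process uses the core's own cache
    cache.store("borrowed", "proc2_line0", "B")  # second data set lands in the borrowed cache
    print(cache.regions["borrowed"]["data"])     # {'proc2_line0': 'B'}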

    Performance of Emerging Applications in a Virtualized Environment Using Transient Instruction Streams
    8.
    Invention Application (Pending, Published)

    Publication Number: US20120179873A1

    Publication Date: 2012-07-12

    Application Number: US13427083

    Filing Date: 2012-03-22

    CPC classification number: G06F9/30054 G06F9/30185 G06F9/3802

    Abstract: A method, system and computer-usable medium are disclosed for managing transient instruction streams. Transient flags are defined in Branch-and-Link (BRL) instructions that are known to be infrequently executed. A bit is likewise set in a Special Purpose Register (SPR) of the hardware (e.g., a core) that is executing an instruction request thread. Subsequent fetches or prefetches in the request thread are treated as transient and are not written to lower-level caches. If an instruction is non-transient, and if a lower-level cache is inclusive of the L1 instruction cache, a fetch or prefetch miss that is obtained from memory may be written in both the L1 and the lower-level cache. If it is not inclusive, a cast-out from the L1 instruction cache may be written in the lower-level cache.

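    A sketch of the write policy itself appears under the granted patent above (entry 1); the complementary sketch below focuses on the other half of this abstract: executing a flagged Branch-and-Link sets a per-thread SPR bit, and later fetches in that thread carry a transient marking. All class, method, and field names are illustrative assumptions.

    # Illustrative sketch: a flagged BRL turns on transient treatment for a thread.
    class Thread:
        def __init__(self):
            self.spr_transient_bit = False   # Special Purpose Register bit for this thread

        def execute_brl(self, transient_flag):
            """Executing a transient-flagged BRL marks the thread as transient."""
            if transient_flag:
                self.spr_transient_bit = True

        def fetch(self, address):
            """Subsequent fetches carry the thread's transient marking."""
            return {"address": address, "transient": self.spr_transient_bit}

    t = Thread()
    print(t.fetch(0x1000)["transient"])   # False: ordinary fetch
    t.execute_brl(transient_flag=True)    # rarely executed, flagged BRL
    print(t.fetch(0x1004)["transient"])   # True: treated as transient, bypasses lower-level caches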

    Mechanisms for Priority Control in Resource Allocation
    9.
    Invention Application (In Force)

    Publication Number: US20100146512A1

    Publication Date: 2010-06-10

    Application Number: US12631407

    Filing Date: 2009-12-04

    CPC classification number: G06F13/362

    Abstract: Mechanisms for priority control in resource allocation are provided. With these mechanisms, when a unit makes a request to a token manager, the unit identifies the priority of its request as well as the resource it desires to access and the unit's resource access group (RAG). This information is used to set a value of a storage device associated with the resource, priority, and RAG identified in the request. When the token manager generates and grants a token to the RAG, the token is in turn granted to a unit within the RAG based on the priority of the pending requests identified in the storage devices associated with the resource and RAG. Priority pointers are utilized to provide a round-robin fairness scheme between high- and low-priority requests within the RAG for the resource.

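    The grant step in this abstract can be sketched as a token manager that records pending requests per (resource, RAG, priority) and, when a token is granted to a RAG, uses a priority pointer to alternate between the high- and low-priority queues. The simple alternation policy, class, and names below are assumptions for illustration, not the patent's exact scheme.

    # Hedged sketch of token granting with per-RAG priority pointers.
    from collections import defaultdict, deque

    class TokenManager:
        def __init__(self):
            self.pending = defaultdict(deque)             # (resource, rag, priority) -> unit ids
            self.pointer = defaultdict(lambda: "high")    # per (resource, rag) priority pointer

        def request(self, unit, resource, rag, priority):
            self.pending[(resource, rag, priority)].append(unit)

        def grant(self, resource, rag):
            """Grant one token for `resource` to a pending unit inside `rag`."""
            preferred = self.pointer[(resource, rag)]
            other = "low" if preferred == "high" else "high"
            for prio in (preferred, other):
                queue = self.pending[(resource, rag, prio)]
                if queue:
                    # next time, prefer the other priority class (round-robin fairness)
                    self.pointer[(resource, rag)] = "low" if prio == "high" else "high"
                    return queue.popleft()
            return None                                   # no pending request in this RAG

    tm = TokenManager()
    tm.request("unitA", "membank0", "RAG1", "high")
    tm.request("unitB", "membank0", "RAG1", "low")
    print(tm.grant("membank0", "RAG1"))   # unitA: high-priority request served first
    print(tm.grant("membank0", "RAG1"))   # unitB: pointer rotated to the low-priority queue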

    Adaptive shared data interventions in coupled broadcast engines
    10.
    Granted Invention Patent (Expired)

    Publication Number: US06986002B2

    Publication Date: 2006-01-10

    Application Number: US10322075

    Filing Date: 2002-12-17

    Applicant: Ram Raghavan

    Inventor: Ram Raghavan

    CPC classification number: G06F12/0813 G06F12/122

    Abstract: The present invention provides for a bus system having a local bus ring coupled to a remote bus ring. A processing unit is coupled to the local bus node and is employable to request data. A cache is coupled to the processing unit through a command bus. A cache investigator, coupled to the cache, is employable to determine whether the cache contains the requested data. The cache investigator is further employable to generate and broadcast cache utilization parameters, which contain information as to the degree of accessing the cache by other caches, its own associated processing unit, and so on. In one aspect, the cache is a local cache. In another aspect, the cache is a remote cache.

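    The cache investigator described in this abstract does two things: it answers whether its cache holds requested data, and it broadcasts utilization parameters describing how heavily the cache is accessed by its own processing unit versus by other caches. The sketch below models only that behavior; the class and field names are assumptions.

    # Illustrative sketch of a cache investigator and its utilization parameters.
    class CacheInvestigator:
        def __init__(self, cache_contents):
            self.cache = dict(cache_contents)    # address -> data held by this cache
            self.local_accesses = 0              # accesses by the attached processing unit
            self.remote_accesses = 0             # accesses on behalf of other caches

        def lookup(self, address, remote=False):
            """Report whether the cache contains the requested data."""
            if remote:
                self.remote_accesses += 1
            else:
                self.local_accesses += 1
            return address in self.cache

        def utilization_parameters(self):
            """Parameters a bus engine could use to decide which cache intervenes."""
            return {"local": self.local_accesses, "remote": self.remote_accesses}

    inv = CacheInvestigator({0x100: "payload"})
    print(inv.lookup(0x100, remote=True))    # True: this cache could supply the data
    print(inv.utilization_parameters())      # {'local': 0, 'remote': 1}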
