Non-uniform memory access (NUMA) enhancements for shared logical partitions
    1.
    Invention Grant (Expired)

    Publication number: US08490094B2

    Publication date: 2013-07-16

    Application number: US12394669

    Filing date: 2009-02-27

    IPC classification: G06F9/50 G06F13/00

    CPC classification: G06F9/5077 G06F2212/2542

    Abstract: In a NUMA-topology computer system that includes multiple nodes and multiple logical partitions, some of which may be dedicated and others of which are shared, NUMA optimizations are enabled in shared logical partitions. This is done by specifying a home node parameter in each virtual processor assigned to a logical partition. When a task is created by an operating system in a shared logical partition, a home node is assigned to the task, and the operating system attempts to assign the task to a virtual processor that has a home node that matches the home node for the task. The partition manager then attempts to assign virtual processors to their corresponding home nodes. If this can be done, NUMA optimizations may be performed without the risk of reducing the performance of the shared logical partition.
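
    As a rough illustration of the home-node matching described above, the C sketch below tags tasks and virtual processors with a home node and has the dispatcher prefer a matching virtual processor. The structures, the pick_vcpu helper, and the fallback policy are assumptions for illustration, not the patented implementation.

```c
/* Illustrative sketch only: home-node matching between tasks and virtual
 * processors, under assumed (hypothetical) data structures. */
#include <stdio.h>

#define NUM_VCPUS 4

struct vcpu {
    int id;
    int home_node;   /* home node parameter set when the vcpu is configured */
};

struct task {
    const char *name;
    int home_node;   /* home node assigned by the OS at task creation */
};

/* Prefer a virtual processor whose home node matches the task's home node;
 * fall back to any virtual processor if no match exists. */
static const struct vcpu *pick_vcpu(const struct task *t,
                                    const struct vcpu vcpus[], int n)
{
    for (int i = 0; i < n; i++)
        if (vcpus[i].home_node == t->home_node)
            return &vcpus[i];
    return &vcpus[0];  /* no match: the NUMA optimization cannot be applied */
}

int main(void)
{
    const struct vcpu vcpus[NUM_VCPUS] = { {0, 0}, {1, 0}, {2, 1}, {3, 1} };
    const struct task t = {"db_worker", 1};

    const struct vcpu *v = pick_vcpu(&t, vcpus, NUM_VCPUS);
    printf("task %s (home node %d) -> vcpu %d (home node %d)\n",
           t.name, t.home_node, v->id, v->home_node);
    return 0;
}
```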


    Non-Uniform Memory Access (NUMA) Enhancements for Shared Logical Partitions
    2.
    Invention Application (Expired)

    Publication number: US20100223622A1

    Publication date: 2010-09-02

    Application number: US12394669

    Filing date: 2009-02-27

    IPC classification: G06F9/50

    CPC classification: G06F9/5077 G06F2212/2542

    Abstract: In a NUMA-topology computer system that includes multiple nodes and multiple logical partitions, some of which may be dedicated and others of which are shared, NUMA optimizations are enabled in shared logical partitions. This is done by specifying a home node parameter in each virtual processor assigned to a logical partition. When a task is created by an operating system in a shared logical partition, a home node is assigned to the task, and the operating system attempts to assign the task to a virtual processor that has a home node that matches the home node for the task. The partition manager then attempts to assign virtual processors to their corresponding home nodes. If this can be done, NUMA optimizations may be performed without the risk of reducing the performance of the shared logical partition.


    Assigning cache priorities to virtual/logical processors and partitioning a cache according to such priorities
    3.
    Invention Grant (Expired)

    Publication number: US08301840B2

    Publication date: 2012-10-30

    Application number: US12637891

    Filing date: 2009-12-15

    IPC classification: G06F12/12

    Abstract: Mechanisms are provided, for implementation in a data processing system having at least one physical processor and at least one associated cache memory, for allocating cache resources of the at least one cache memory to virtual processors of the data processing system. The mechanisms identify a plurality of high priority virtual processors in the data processing system. The mechanisms further determine a percentage of cache lines of the at least one cache memory to be assigned to high priority virtual processors. Moreover, the mechanisms mark a portion of the cache lines in the at least one cache memory as being evictable by only high priority virtual processors based on the determined percentage of cache lines to be assigned to high priority virtual processors. The marked portion of the cache lines cannot be evicted by lower priority virtual processors having a priority lower than the high priority virtual processors.
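
    The C sketch below is a loose illustration of the partitioning rule in the abstract: a chosen percentage of cache lines is marked as evictable only by high-priority virtual processors, and an eviction check enforces that mark. The data structures and the partition_cache/may_evict helpers are illustrative assumptions, not the mechanism actually claimed.

```c
/* Illustrative sketch only: priority-based cache partitioning with a simple
 * per-line flag, under assumed (hypothetical) structures. */
#include <stdbool.h>
#include <stdio.h>

#define CACHE_LINES 8

enum vcpu_priority { PRIO_LOW, PRIO_HIGH };

struct cache_line {
    bool high_priority_only;  /* set when reserved for high-priority vcpus */
};

/* Reserve the first `percent` of lines for high-priority virtual processors. */
static void partition_cache(struct cache_line lines[], int n, int percent)
{
    int reserved = (n * percent) / 100;
    for (int i = 0; i < n; i++)
        lines[i].high_priority_only = (i < reserved);
}

/* A lower-priority vcpu may not evict a line reserved for high priority. */
static bool may_evict(const struct cache_line *line, enum vcpu_priority prio)
{
    return prio == PRIO_HIGH || !line->high_priority_only;
}

int main(void)
{
    struct cache_line lines[CACHE_LINES];
    partition_cache(lines, CACHE_LINES, 50);  /* reserve 50% of the lines */

    printf("low-priority vcpu may evict line 0: %s\n",
           may_evict(&lines[0], PRIO_LOW) ? "yes" : "no");
    printf("low-priority vcpu may evict line 7: %s\n",
           may_evict(&lines[7], PRIO_LOW) ? "yes" : "no");
    return 0;
}
```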


    Assigning Cache Priorities to Virtual/Logical Processors and Partitioning a Cache According to Such Priorities
    4.
    Invention Application (Expired)

    Publication number: US20110145505A1

    Publication date: 2011-06-16

    Application number: US12637891

    Filing date: 2009-12-15

    IPC classification: G06F12/08 G06F12/00

    Abstract: Mechanisms are provided, for implementation in a data processing system having at least one physical processor and at least one associated cache memory, for allocating cache resources of the at least one cache memory to virtual processors of the data processing system. The mechanisms identify a plurality of high priority virtual processors in the data processing system. The mechanisms further determine a percentage of cache lines of the at least one cache memory to be assigned to high priority virtual processors. Moreover, the mechanisms mark a portion of the cache lines in the at least one cache memory as being evictable by only high priority virtual processors based on the determined percentage of cache lines to be assigned to high priority virtual processors. The marked portion of the cache lines cannot be evicted by lower priority virtual processors having a priority lower than the high priority virtual processors.


    Mixed operating performance modes including a shared cache mode
    5.
    Invention Grant

    Publication number: US08695011B2

    Publication date: 2014-04-08

    Application number: US13458769

    Filing date: 2012-04-27

    IPC classification: G06F9/46 G06F1/00 G06F13/00

    CPC classification: G06F9/5077

    Abstract: Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode.
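
    A minimal C sketch of the matching step described above is given below, assuming a simple data model in which each set of multi-core processing units carries an operating performance mode and each logical partition carries the mode that satisfies its workload criterion; the enum values and struct names are hypothetical, not taken from the patent.

```c
/* Illustrative sketch only: associate each logical partition with a set of
 * processing units whose operating performance mode satisfies its workload
 * criterion. */
#include <stdio.h>

enum perf_mode { MODE_THROUGHPUT, MODE_SHARED_CACHE };

struct processing_unit_set {
    const char *name;
    enum perf_mode mode;          /* how this set of multi-core units is configured */
};

struct logical_partition {
    const char *name;
    enum perf_mode required_mode; /* mode that satisfies its workload criterion */
};

int main(void)
{
    struct processing_unit_set sets[] = {
        {"set0", MODE_THROUGHPUT},
        {"set1", MODE_SHARED_CACHE},
    };
    struct logical_partition lpars[] = {
        {"lpar_batch",  MODE_THROUGHPUT},
        {"lpar_online", MODE_SHARED_CACHE},
    };
    int nsets = 2, nlpars = 2;

    /* Associate each partition with the first set whose mode matches. */
    for (int i = 0; i < nlpars; i++)
        for (int j = 0; j < nsets; j++)
            if (lpars[i].required_mode == sets[j].mode) {
                printf("%s -> %s\n", lpars[i].name, sets[j].name);
                break;
            }
    return 0;
}
```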

    Managing rollback in a transactional memory environment
    6.
    Invention Grant (Active)

    Publication number: US08539281B2

    Publication date: 2013-09-17

    Application number: US13451266

    Filing date: 2012-04-19

    IPC classification: G06F11/00

    CPC classification: G06F9/528 G06F9/467

    Abstract: According to one aspect of the present disclosure, a method and technique for managing rollback in a transactional memory environment is disclosed. The method includes, responsive to detecting a begin transaction directive by a processor supporting transactional memory processing, detecting an access of a first memory location not needing rollback and indicating that the first memory location does not need to be rolled back while detecting an access to a second memory location and indicating that a rollback will be required. The method also includes, responsive to detecting an end transaction directive after the begin transaction directive and a conflict requiring a rollback, omitting a rollback of the first memory location while performing rollback on the second memory location.
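
    The following C sketch simulates the selective-rollback idea in software: each transactional write is logged together with a flag saying whether the location needs rollback, and an abort restores only the flagged entries. This is an illustrative model under assumed structures, not the processor-level transactional memory mechanism the patent describes.

```c
/* Illustrative software simulation of selective rollback in a transaction. */
#include <stdbool.h>
#include <stdio.h>

#define LOG_SIZE 16

struct log_entry {
    int *addr;
    int  old_value;
    bool needs_rollback;  /* false for accesses known not to need rollback */
};

static struct log_entry tx_log[LOG_SIZE];
static int tx_len;

static void tx_begin(void) { tx_len = 0; }

/* Record a write, indicating whether this location would need rollback. */
static void tx_write(int *addr, int value, bool needs_rollback)
{
    tx_log[tx_len++] = (struct log_entry){addr, *addr, needs_rollback};
    *addr = value;
}

/* On a conflict, roll back only the entries marked as needing it. */
static void tx_abort(void)
{
    for (int i = tx_len - 1; i >= 0; i--)
        if (tx_log[i].needs_rollback)
            *tx_log[i].addr = tx_log[i].old_value;
    tx_len = 0;
}

int main(void)
{
    int scratch = 0;   /* e.g. thread-private data: no rollback needed */
    int shared  = 10;  /* shared data: must be restored on conflict */

    tx_begin();
    tx_write(&scratch, 99, false);
    tx_write(&shared, 20, true);
    tx_abort();  /* simulate a conflict detected at the end-transaction directive */

    printf("scratch=%d shared=%d\n", scratch, shared);  /* scratch=99 shared=10 */
    return 0;
}
```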


    Method and system for managing lock contention in a computer system
    10.
    Invention Grant (Active)

    Publication number: US06845504B2

    Publication date: 2005-01-18

    Application number: US09779369

    Filing date: 2001-02-08

    IPC classification: G06F7/00 G06F9/46 G06F12/00

    CPC classification: G06F9/526

    Abstract: A system and method for efficiently managing lock contention for a central processing unit (CPU) of a computer system. The present invention uses both spinning and blocking (or undispatching) to manage threads when they are waiting to acquire a lock. In addition, the present invention intelligently determines when the program thread should spin and when the program thread should become undispatched. If it is determined that the program thread should become undispatched, the present invention provides efficient undispatching of program threads that improves throughput by reducing wait time to acquire the lock. A lock contention management system includes a dispatcher for managing the execution of threads on CPUs as well as threads that are currently ready to run but not executing because they are waiting for an available CPU, a dispatch management module that determines when a program thread should become undispatched to wait on a lock and when the program thread should spin, and a low-priority execution module for undispatching the program thread. The present invention also includes a lock contention management method using the above system.
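
    The C sketch below shows the general spin-then-block pattern using POSIX threads: a waiter spins on trylock for a bounded number of attempts and then falls back to a blocking lock, giving up the CPU. The fixed SPIN_LIMIT stands in for the patent's dispatch-management decision of when to spin versus undispatch; it is an illustrative simplification, not the claimed method.

```c
/* Illustrative spin-then-block lock acquisition. Build with: cc -pthread file.c */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define SPIN_LIMIT 1000  /* illustrative tuning knob, not a value from the patent */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

/* Spin briefly in the hope the holder releases soon; otherwise block. */
static void adaptive_lock(pthread_mutex_t *m)
{
    for (int i = 0; i < SPIN_LIMIT; i++) {
        if (pthread_mutex_trylock(m) == 0)
            return;        /* acquired while spinning */
        sched_yield();     /* let other runnable threads make progress */
    }
    pthread_mutex_lock(m); /* block (undispatch) until the lock is free */
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        adaptive_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter=%ld\n", counter);  /* expect 200000 */
    return 0;
}
```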
