Technique for preserving memory affinity in a non-uniform memory access data processing system
    1.
    Invention application (status: pending, published)

    Publication number: US20120198187A1

    Publication date: 2012-08-02

    Application number: US13015733

    Filing date: 2011-01-28

    IPC classification: G06F12/12 G06F12/08

    Abstract: Techniques for preserving memory affinity in a computer system are disclosed. In response to a request for memory access to a page within a memory affinity domain, a determination is made as to whether the request is initiated by a processor associated with the memory affinity domain. If the request is not initiated by such a processor, a determination is made as to whether there is a page ID match with an entry within a page migration tracking module associated with the memory affinity domain. If there is no page ID match, an entry within the page migration tracking module is selected to be updated with a new page ID and a new memory affinity ID. If there is a page ID match, a further determination is made as to whether there is a memory affinity ID match with the entry whose page ID field matched. If there is no memory affinity ID match, the entry is updated with a new memory affinity ID; if there is a memory affinity ID match, an access counter of the entry is incremented.

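    The control flow in the abstract reads like a small lookup-and-update routine. The C sketch below follows that flow; the table size, the round-robin victim selection, and the counter reset on an affinity-ID change are illustrative assumptions, not details taken from the patent.

```c
/*
 * Minimal sketch of the remote-access tracking flow described in the
 * abstract of US20120198187A1.  The table size, the round-robin victim
 * choice, and the counter reset on an affinity-ID change are assumptions
 * added for illustration; they are not taken from the patent.
 */
#include <stdio.h>
#include <stdint.h>

#define TRACK_ENTRIES 4

struct track_entry {
    uint64_t page_id;      /* page being tracked                   */
    int      affinity_id;  /* affinity domain of the remote caller */
    unsigned count;        /* remote accesses from that domain     */
};

struct track_module {
    struct track_entry e[TRACK_ENTRIES];
    unsigned next_victim;  /* round-robin replacement (assumption) */
};

/* Record an access to 'page_id', which lives in 'home_domain', issued by
 * a processor belonging to 'requester_domain'. */
static void track_access(struct track_module *m, int home_domain,
                         uint64_t page_id, int requester_domain)
{
    if (requester_domain == home_domain)
        return;                          /* local access: nothing to track */

    for (unsigned i = 0; i < TRACK_ENTRIES; i++) {
        struct track_entry *t = &m->e[i];
        if (t->count == 0 || t->page_id != page_id)
            continue;
        if (t->affinity_id == requester_domain)
            t->count++;                  /* page ID and affinity ID match  */
        else {
            t->affinity_id = requester_domain;  /* page ID match only      */
            t->count = 1;                /* counter reset is an assumption */
        }
        return;
    }

    /* No page ID match: select an entry and load the new page/domain pair. */
    struct track_entry *t = &m->e[m->next_victim];
    m->next_victim = (m->next_victim + 1) % TRACK_ENTRIES;
    t->page_id = page_id;
    t->affinity_id = requester_domain;
    t->count = 1;
}

int main(void)
{
    struct track_module m = {0};
    track_access(&m, 0, 0x1000, 1);   /* remote access from domain 1      */
    track_access(&m, 0, 0x1000, 1);   /* same page, same domain: count++  */
    track_access(&m, 0, 0x1000, 2);   /* same page, new domain: ID update */
    printf("page %#llx domain %d count %u\n",
           (unsigned long long)m.e[0].page_id, m.e[0].affinity_id, m.e[0].count);
    return 0;
}
```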

    HYBRID STORAGE SUBSYSTEM WITH MIXED PLACEMENT OF FILE CONTENTS
    2.
    Invention application (status: in force)

    Publication number: US20110153931A1

    Publication date: 2011-06-23

    Application number: US12644721

    Filing date: 2009-12-22

    IPC classification: G06F12/08

    Abstract: A storage subsystem combining solid state drive (SSD) and hard disk drive (HDD) technologies provides low access latency and low complexity. Separate free lists are maintained for the SSD and the HDD, and blocks of file system data are stored uniquely on either the SSD or the HDD. When a read access is made to the subsystem, if the data is present on the SSD, the data is returned; but if the block is present on the HDD, it is migrated to the SSD and the block on the HDD is returned to the HDD free list. On a write access, if the block is present in either the SSD or the HDD, the block is overwritten; but if the block is not present in the subsystem, the block is written to the HDD.

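    The read and write handling in the abstract amounts to a simple placement policy. The C sketch below walks through it with a toy per-block location map and counters standing in for the free lists; those data structures and names are illustrative assumptions, not the patent's implementation.

```c
/*
 * Minimal sketch of the read/write placement policy from US20110153931A1.
 * The per-block location map and the free-list counters are illustrative
 * assumptions; the patent does not prescribe these data structures.
 */
#include <stdio.h>

enum loc { NOWHERE, ON_SSD, ON_HDD };

#define NBLOCKS 8                        /* logical file-system blocks (toy size) */

static enum loc where[NBLOCKS];          /* unique placement per block            */
static int ssd_free = 4, hdd_free = 4;   /* free-list sizes only, for brevity     */

/* Read: serve from the SSD if present; if on the HDD, migrate the block to
 * the SSD and give its HDD slot back to the HDD free list. */
static void read_block(int b)
{
    if (where[b] == ON_SSD) {
        printf("read %d from SSD\n", b);
    } else if (where[b] == ON_HDD && ssd_free > 0) {
        ssd_free--; hdd_free++;          /* move space between free lists */
        where[b] = ON_SSD;
        printf("read %d from HDD, migrated to SSD\n", b);
    } else {
        printf("read %d: not present or SSD full\n", b);
    }
}

/* Write: overwrite in place if the block already exists on either device;
 * otherwise allocate it on the HDD. */
static void write_block(int b)
{
    if (where[b] != NOWHERE) {
        printf("overwrite %d on %s\n", b, where[b] == ON_SSD ? "SSD" : "HDD");
    } else if (hdd_free > 0) {
        hdd_free--;
        where[b] = ON_HDD;
        printf("write %d to HDD\n", b);
    }
}

int main(void)
{
    write_block(3);   /* new block lands on the HDD        */
    read_block(3);    /* first read migrates it to the SSD */
    read_block(3);    /* now served directly from the SSD  */
    write_block(3);   /* overwrites in place on the SSD    */
    return 0;
}
```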

    Technique for preserving memory affinity in a non-uniform memory access data processing system
    3.
    Invention grant

    Publication number: US10169087B2

    Publication date: 2019-01-01

    Application number: US13015733

    Filing date: 2011-01-28

    IPC classification: G06F15/16 G06F9/50

    Abstract: Techniques for preserving memory affinity in a computer system are disclosed. In response to a request for memory access to a page within a memory affinity domain, a determination is made as to whether the request is initiated by a processor associated with the memory affinity domain. If the request is not initiated by such a processor, a determination is made as to whether there is a page ID match with an entry within a page migration tracking module associated with the memory affinity domain. If there is no page ID match, an entry within the page migration tracking module is selected to be updated with a new page ID and a new memory affinity ID. If there is a page ID match, a further determination is made as to whether there is a memory affinity ID match with the entry whose page ID field matched. If there is no memory affinity ID match, the entry is updated with a new memory affinity ID; if there is a memory affinity ID match, an access counter of the entry is incremented.

    Method and apparatus for minimizing cache conflict misses
    4.
    Invention grant (status: in force)

    Publication number: US08751751B2

    Publication date: 2014-06-10

    Application number: US13015771

    Filing date: 2011-01-28

    IPC classification: G06F12/00 G06F12/08 G06F12/10

    Abstract: A method for minimizing cache conflict misses is disclosed. A translation table capable of facilitating the translation of a virtual address to a real address during a cache access is provided. The translation table includes multiple entries, and each entry of the translation table includes a page number field and a hash value field. A hash value is generated from a first group of bits within a virtual address, and the hash value is stored in the hash value field of an entry within the translation table. In response to a match on the entry within the translation table during a cache access, the hash value of the matched entry is retrieved from the translation table and concatenated with a second group of bits within the virtual address to form a set of indexing bits used to index into a cache set.

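    The abstract describes forming a cache set index from a stored hash plus low-order address bits. The sketch below illustrates one way that could look; the XOR-fold hash, the bit widths, and the 64-byte line offset are assumptions chosen only to make the example concrete.

```c
/*
 * Minimal sketch of the set-index formation described in US08751751B2.
 * The XOR-fold hash, the bit widths, and the 64-byte cache-line offset
 * are illustrative assumptions; the patent only requires that a stored
 * hash of one group of address bits be concatenated with a second group.
 */
#include <stdio.h>
#include <stdint.h>

#define HASH_BITS  3           /* bits contributed by the stored hash  */
#define LOW_BITS   4           /* bits taken directly from the address */
#define PAGE_SHIFT 12          /* 4 KiB pages                          */

/* One translation-table entry: page numbers plus a precomputed hash. */
struct xlate_entry {
    uint64_t vpage;            /* virtual page number                     */
    uint64_t rpage;            /* real page number                        */
    unsigned hash;             /* hash of the upper virtual-address bits  */
};

/* Assumed hash: XOR-fold the virtual page number down to HASH_BITS. */
static unsigned hash_upper_bits(uint64_t vpage)
{
    unsigned h = 0;
    while (vpage) {
        h ^= (unsigned)(vpage & ((1u << HASH_BITS) - 1));
        vpage >>= HASH_BITS;
    }
    return h;
}

/* On a translation-table hit, concatenate the stored hash with a group of
 * low virtual-address bits to form the cache set index. */
static unsigned cache_set_index(const struct xlate_entry *e, uint64_t vaddr)
{
    unsigned low = (unsigned)((vaddr >> 6) & ((1u << LOW_BITS) - 1)); /* skip 64-byte line offset */
    return (e->hash << LOW_BITS) | low;
}

int main(void)
{
    uint64_t vaddr = 0x00007f1234567ABCull;
    struct xlate_entry e = { .vpage = vaddr >> PAGE_SHIFT, .rpage = 0x4242, .hash = 0 };
    e.hash = hash_upper_bits(e.vpage);            /* stored when the entry is installed */
    printf("cache set index = %u\n", cache_set_index(&e, vaddr));
    return 0;
}
```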

    Processor thread load balancing manager
    6.
    Invention grant (status: expired)

    Publication number: US08402470B2

    Publication date: 2013-03-19

    Application number: US13452849

    Filing date: 2012-04-21

    IPC classification: G06F9/46

    CPC classification: G06F9/5083

    Abstract: A processor thread load balancing manager uses the operating system of an information handling system (IHS) to determine a process tree of data-sharing threads in an application that the IHS executes. The load balancing manager assigns a home processor to each thread of the executing application's process tree and dispatches the process tree to the home processor. The load balancing manager determines whether a particular poaching processor of a virtual or real processor group is available to execute threads of the application dispatched to the home processor of that processor group. If the ready or run queues of a prospective poaching processor are empty, the load balancing manager may move, or poach, a thread or threads from the home processor's ready queue to the ready queue of the prospective poaching processor. The poaching processor executes the poached threads to provide load balancing to the IHS.

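    The poaching step lends itself to a short sketch. The C fragment below models it with fixed-size ready queues; the queue layout and the choice to move half of the home queue are assumptions, not details from the patent.

```c
/*
 * Minimal sketch of the "poaching" step from US08402470B2.  The fixed-size
 * ready queues and the half-queue split are illustrative assumptions.
 */
#include <stdio.h>

#define QLEN 8

struct cpu {
    int ready[QLEN];    /* thread IDs waiting to run                 */
    int n_ready;
    int running;        /* nonzero if something is on the run queue  */
};

/* Dispatch a process tree of data-sharing threads to its home CPU. */
static void dispatch_tree(struct cpu *home, const int *tids, int n)
{
    for (int i = 0; i < n && home->n_ready < QLEN; i++)
        home->ready[home->n_ready++] = tids[i];
}

/* If the prospective poaching CPU is idle (empty ready and run queues),
 * move half of the home CPU's ready threads onto it. */
static int poach(struct cpu *home, struct cpu *poacher)
{
    if (poacher->n_ready != 0 || poacher->running)
        return 0;                       /* poacher is busy: leave threads home */
    int moved = home->n_ready / 2;
    for (int i = 0; i < moved; i++)
        poacher->ready[poacher->n_ready++] = home->ready[--home->n_ready];
    return moved;
}

int main(void)
{
    struct cpu home = {0}, idle = {0};
    int tree[] = {101, 102, 103, 104};  /* data-sharing threads of one application */
    dispatch_tree(&home, tree, 4);
    printf("poached %d threads; home now %d, poacher now %d\n",
           poach(&home, &idle), home.n_ready, idle.n_ready);
    return 0;
}
```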

    PROCESSOR THREAD LOAD BALANCING MANAGER
    7.
    Invention application (status: expired)

    Publication number: US20120204188A1

    Publication date: 2012-08-09

    Application number: US13452849

    Filing date: 2012-04-21

    IPC classification: G06F9/46

    CPC classification: G06F9/5083

    Abstract: A processor thread load balancing manager uses the operating system of an information handling system (IHS) to determine a process tree of data-sharing threads in an application that the IHS executes. The load balancing manager assigns a home processor to each thread of the executing application's process tree and dispatches the process tree to the home processor. The load balancing manager determines whether a particular poaching processor of a virtual or real processor group is available to execute threads of the application dispatched to the home processor of that processor group. If the ready or run queues of a prospective poaching processor are empty, the load balancing manager may move, or poach, a thread or threads from the home processor's ready queue to the ready queue of the prospective poaching processor. The poaching processor executes the poached threads to provide load balancing to the IHS.


    Apparatus and method for providing pre-translated segments for page translations in segmented operating systems
    8.
    Invention application (status: expired)

    Publication number: US20050188176A1

    Publication date: 2005-08-25

    Application number: US10782676

    Filing date: 2004-02-19

    IPC classification: G06F12/08 G06F12/10

    CPC classification: G06F12/1036 G06F2212/654

    Abstract: A mechanism for generating pre-translated segments for use in virtual-to-real address translation is provided, in which segments that are determined to meet a density threshold are promoted to a pre-translated segment class. The pages of these segments are moved to a contiguous portion of memory, and the segment table entry corresponding to the segment is updated to indicate that the segment is a pre-translated segment and to include the base real address of the contiguous portion of memory. In one embodiment, as each page is moved, its page table entry is updated to point to the new location of the page so that the page is still accessible during promotion of the segment to a pre-translated segment. In this way, virtual-to-real address translation may be performed by concatenating the segment base real address, the page identifier, and a byte offset into the page.

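    The last sentence of the abstract gives the address-formation rule directly, which the sketch below reproduces; the 256 MiB segment and 4 KiB page geometry are assumptions used only to make the arithmetic concrete.

```c
/*
 * Minimal sketch of address formation for a pre-translated segment, per
 * US20050188176A1.  The 256 MiB segment / 4 KiB page geometry and the
 * example base real address are illustrative assumptions.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT    12                 /* 4 KiB pages     */
#define SEG_SHIFT     28                 /* 256 MiB segments */
#define PAGE_MASK     ((1ull << PAGE_SHIFT) - 1)
#define PAGE_IDX_MASK ((1ull << (SEG_SHIFT - PAGE_SHIFT)) - 1)

struct seg_entry {
    int      pretranslated;   /* segment promoted after meeting the density threshold */
    uint64_t base_real;       /* base real address of the contiguous region           */
};

/* For a pre-translated segment the real address is the segment's base real
 * address concatenated with the page identifier and the byte offset; no
 * page-table walk is needed. */
static uint64_t translate(const struct seg_entry *se, uint64_t ea)
{
    uint64_t page_idx = (ea >> PAGE_SHIFT) & PAGE_IDX_MASK;
    uint64_t offset   = ea & PAGE_MASK;
    return se->base_real | (page_idx << PAGE_SHIFT) | offset;
}

int main(void)
{
    /* Segment backed by contiguous real memory at 0x100000000 (illustrative). */
    struct seg_entry se = { .pretranslated = 1, .base_real = 0x100000000ull };
    uint64_t ea = (7ull << PAGE_SHIFT) + 0x2A;   /* page 7 of the segment, byte 0x2A */
    if (se.pretranslated)
        printf("real address = %#llx\n", (unsigned long long)translate(&se, ea));
    return 0;
}
```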

    System and method for enabling micro-partitioning in a multi-threaded processor
    9.
    Invention grant (status: in force)

    Publication number: US08146087B2

    Publication date: 2012-03-27

    Application number: US11972361

    Filing date: 2008-01-10

    IPC classification: G06F9/46

    CPC classification: G06F12/1036 G06F9/5061

    Abstract: A system and method are presented for allowing jobs originating from different partitions to simultaneously utilize different hardware threads on a processor by concatenating partition identifiers with virtual page identifiers within the processor's translation lookaside buffer. The device includes a translation lookaside buffer that translates concatenated virtual addresses to system-wide real addresses. The device generates concatenated virtual addresses using a partition identifier, which corresponds to a job's originating partition, and a virtual page identifier, which corresponds to the executing instruction, such as an instruction address or data address. In turn, each concatenated virtual address is different and translates in the translation lookaside buffer to a unique system-wide real address. As such, jobs originating from different partitions are able to execute simultaneously on the device and, therefore, fully utilize each of the device's hardware threads.

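    The tag formation described here, a partition identifier concatenated with a virtual page identifier, is easy to illustrate. In the C sketch below, the field widths and the fully associative lookup are assumptions; only the concatenation itself comes from the abstract.

```c
/*
 * Minimal sketch of the concatenated TLB tag described in US08146087B2:
 * the partition identifier is concatenated with the virtual page
 * identifier so that identical virtual pages from different partitions
 * map to distinct system-wide real addresses.  The field widths and the
 * fully associative lookup are assumptions made for illustration.
 */
#include <stdio.h>
#include <stdint.h>

#define LPID_BITS  8        /* partition identifier width (assumption) */
#define VPN_BITS  36        /* virtual page number width (assumption)  */
#define TLB_SIZE  16

struct tlb_entry {
    uint64_t tag;           /* partition ID concatenated with the VPN  */
    uint64_t rpn;           /* system-wide real page number            */
    int      valid;
};

static struct tlb_entry tlb[TLB_SIZE];
static int tlb_n;

/* Concatenate the partition ID with the virtual page number. */
static uint64_t make_tag(unsigned lpid, uint64_t vpn)
{
    uint64_t p = lpid & ((1u << LPID_BITS) - 1);
    return (p << VPN_BITS) | (vpn & ((1ull << VPN_BITS) - 1));
}

static void tlb_insert(unsigned lpid, uint64_t vpn, uint64_t rpn)
{
    if (tlb_n < TLB_SIZE)
        tlb[tlb_n++] = (struct tlb_entry){ make_tag(lpid, vpn), rpn, 1 };
}

static int tlb_lookup(unsigned lpid, uint64_t vpn, uint64_t *rpn)
{
    uint64_t tag = make_tag(lpid, vpn);
    for (int i = 0; i < tlb_n; i++)
        if (tlb[i].valid && tlb[i].tag == tag) { *rpn = tlb[i].rpn; return 1; }
    return 0;
}

int main(void)
{
    uint64_t rpn;
    tlb_insert(1, 0x1234, 0xAAAA);    /* partition 1, virtual page 0x1234 */
    tlb_insert(2, 0x1234, 0xBBBB);    /* same virtual page, partition 2   */
    if (tlb_lookup(1, 0x1234, &rpn))
        printf("partition 1 -> real page %#llx\n", (unsigned long long)rpn);
    if (tlb_lookup(2, 0x1234, &rpn))
        printf("partition 2 -> real page %#llx\n", (unsigned long long)rpn);
    return 0;
}
```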

    Reducing memory overhead of a page table in a dynamic logical partitioning environment
    10.
    Invention grant (status: in force)

    Publication number: US07783858B2

    Publication date: 2010-08-24

    Application number: US11625296

    Filing date: 2007-01-20

    IPC classification: G06F12/10

    Abstract: Mechanisms for reducing the memory overhead of a page table in a dynamic logical partitioning (LPAR) environment are provided. Each LPAR, upon its creation, is allowed to declare any maximum main memory size for the LPAR as long as the aggregate maximum main memory size for all LPARs does not exceed the total amount of available main memory. A single page table is used for all of the LPARs. Thus, the only page table in the computing system is shared by all LPARs, and every memory access operation from any LPAR must go through the same page table for address translation. As a result, since only one page table is utilized and the aggregate size of the main memory apportioned to the LPARs is limited to the size of the main memory, the size of the page table cannot exceed the size of the main memory.

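    The size argument in the abstract reduces to an admission check on the aggregate of the declared maxima. The short sketch below illustrates it; the installed memory size and the 16-byte page-table entry are assumptions used only for the arithmetic.

```c
/*
 * Minimal sketch of the admission check implied by US07783858B2: a new
 * LPAR may declare any maximum memory size as long as the aggregate of
 * all declared maxima stays within physical memory, so the single shared
 * page table never has to cover more than physical memory.  The sizes
 * and the 16-byte page-table entry are illustrative assumptions.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096ull
#define PTE_SIZE  16ull                      /* bytes per page-table entry (assumption) */

static uint64_t total_mem = 64ull << 30;     /* 64 GiB of installed memory (assumption) */
static uint64_t declared_sum;                /* sum of LPAR maxima declared so far      */

/* Admit an LPAR only if the aggregate declared maximum still fits. */
static int admit_lpar(uint64_t max_bytes)
{
    if (declared_sum + max_bytes > total_mem)
        return 0;
    declared_sum += max_bytes;
    return 1;
}

int main(void)
{
    printf("LPAR A (40 GiB): %s\n", admit_lpar(40ull << 30) ? "admitted" : "rejected");
    printf("LPAR B (30 GiB): %s\n", admit_lpar(30ull << 30) ? "admitted" : "rejected");
    /* The single shared page table is sized once, for physical memory only. */
    printf("shared page table <= %llu MiB\n",
           (unsigned long long)((total_mem / PAGE_SIZE) * PTE_SIZE >> 20));
    return 0;
}
```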