Busy-wait-free synchronization
    1.
    Granted invention patent (expired)

    Publication No.: US06173442B2

    Publication date: 2001-01-09

    Application No.: US09245778

    Filing date: 1999-02-05

    IPC classification: G06F9/45

    CPC classification: G06F9/52 Y10S707/99938

    Abstract: An object structure's header (40) allocates a two-bit synchronization-state field (42) solely to monitor data for implementing synchronization on that object. When the object is locked by a particular execution thread, or when one or more execution threads are waiting for a lock or notification on that object, its header contains a pointer to monitor resources in the form of a linked list of lock records (50, 52, 54) associated with the threads involved. The synchronization-state field (42) ordinarily contains an indication of whether such a linked list exists and, if so, whether its first member is associated with a thread that has a lock on the object. When a thread attempts to gain access to that linked list, it employs an atomic swap operation to place a special busy value in that lock-state field (42) and write its execution-environment pointer into the object's header (40). If the previous value of that field was not the special busy value, the thread uses the header's previous contents to perform its intended synchronization operation. Otherwise, it obtains that information through its own execution environment (44, 46, or 48) or that of the thread whose identifier the object header previously contained. When the thread completes its synchronization operation, it employs an atomic compare-and-swap operation to write the results into the object's header if that header still contains the thread identifier that the thread originally wrote there. Otherwise, it communicates that information to its successor thread if the thread identifier is different and thereby indicates that at least one successor is contending for access to the linked list.
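
    To make the swap-then-CAS hand-off concrete, the following is a minimal C sketch of that part of the protocol only; the header layout, the tag values, and every identifier (object_hdr, thread_env, claim_header, release_header) are assumptions made for illustration, and the contended path through the execution environments described in the abstract is omitted.

        #include <stdatomic.h>
        #include <stdint.h>

        #define SYNC_MASK 0x3u     /* two-bit synchronization-state field */
        #define SYNC_BUSY 0x3u     /* the special "busy" value            */

        typedef struct thread_env {
            _Atomic uintptr_t handoff;    /* result handed to a successor */
        } thread_env;

        typedef struct object_hdr {
            _Atomic uintptr_t word;       /* monitor pointer + 2-bit state */
        } object_hdr;

        /* Claim the header: atomically swap in our environment pointer
         * tagged as busy, returning whatever the header held before. */
        static uintptr_t claim_header(object_hdr *h, thread_env *self)
        {
            return atomic_exchange(&h->word, (uintptr_t)self | SYNC_BUSY);
        }

        /* Publish the outcome.  If the header still holds the value we
         * wrote, the compare-and-swap replaces it with `result`;
         * otherwise at least one successor swapped itself in, so the
         * result is handed directly to that successor's environment. */
        static void release_header(object_hdr *h, thread_env *self,
                                   uintptr_t result)
        {
            uintptr_t expected = (uintptr_t)self | SYNC_BUSY;
            if (!atomic_compare_exchange_strong(&h->word, &expected, result)) {
                thread_env *succ =
                    (thread_env *)(expected & ~(uintptr_t)SYNC_MASK);
                atomic_store(&succ->handoff, result);  /* successor continues */
            }
        }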


Demand-based memory-block splitting
    2.
    Granted invention patent (in force)

    Publication No.: US06801990B2

    Publication date: 2004-10-05

    Application No.: US10055168

    Filing date: 2001-10-29

    IPC classification: G06F12/00

    Abstract: A computer system (10) implements a memory allocator that employs a data structure (FIG. 3) to maintain an inventory of dynamically allocated memory available to receive new data. It receives from one or more programs requests that it allocate memory from a dynamically allocable memory “heap.” It responds to such requests by performing the requested allocation and removing the thus-allocated memory block from the inventory. Conversely, it adds to the inventory memory blocks that the supported program or programs request be freed. In the process, it monitors the frequencies with which memory blocks of various sizes are allocated, and it projects from those frequencies future demand for memory blocks of those sizes. To split a relatively large block in order to meet an actual or expected request for a smaller block, it bases its selection of the larger block to be split on whether the supply of free blocks of the larger block's size is great enough to meet the expected demand for such blocks. Splitting occurs both preemptively, i.e., before a request for the result of the splitting, and reactively, i.e., in response to a previously made request for a block that will result from the splitting operation.
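
    The demand-based choice of which larger block to split can be illustrated roughly as follows. This is not the patented implementation: the size-class layout, the demand estimator, and the identifiers (heap_t, projected_demand, pick_donor_class) are hypothetical simplifications.

        #include <stddef.h>

        #define NCLASSES 32

        typedef struct block { struct block *next; } block;

        typedef struct {
            block  *free_list[NCLASSES];   /* inventory of free blocks per size */
            size_t  free_count[NCLASSES];
            size_t  alloc_freq[NCLASSES];  /* allocations observed per size     */
        } heap_t;

        /* Projected near-term demand for a size class, taken here simply
         * as the allocation count seen in the latest sampling interval. */
        static size_t projected_demand(const heap_t *h, int cls)
        {
            return h->alloc_freq[cls];
        }

        /* Choose a larger class whose block may be split to satisfy class
         * `want`.  A class qualifies only if its free supply exceeds its
         * own projected demand, so giving up one of its blocks will not
         * starve the requests expected for that size. */
        static int pick_donor_class(const heap_t *h, int want)
        {
            for (int cls = want + 1; cls < NCLASSES; cls++)
                if (h->free_list[cls] != NULL &&
                    h->free_count[cls] > projected_demand(h, cls))
                    return cls;
            return -1;   /* nothing can be split safely; grow the heap instead */
        }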


Preemptive memory-block splitting
    3.
    Granted invention patent (in force)

    Publication No.: US06842838B2

    Publication date: 2005-01-11

    Application No.: US10103637

    Filing date: 2002-03-21

    IPC classification: G06F12/00 G06F12/02 G06F13/00

    Abstract: A computer system (10) implements a memory allocator that employs a data structure (FIG. 3) to maintain an inventory of dynamically allocated memory available to receive new data. It receives from one or more programs requests that it allocate memory from a dynamically allocable memory “heap.” It responds to such requests by performing the requested allocation and removing the thus-allocated memory block from the inventory. Conversely, it adds to the inventory memory blocks that the supported program or programs request be freed. In the process, it monitors the frequencies with which memory blocks of various sizes are allocated, and it projects from those frequencies future-demand values for memory blocks of those sizes. It then splits larger blocks into smaller ones preemptively, i.e., before a request for the result of the splitting. To split a relatively large block preemptively in order to meet an expected request for a smaller block, it bases its selection of the larger block to be split on whether the supply of free blocks of that size is great enough to meet the expected demand for such blocks. It also splits blocks reactively, i.e., in response to a previously made request for a block that will result from the splitting operation.
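
    A preemptive replenishment pass of the kind the abstract describes might be sketched as below, using a simplified heap_t in the spirit of the previous sketch; projected_demand, pick_donor_class, and split_block are assumed helpers, not parts of the patent.

        #include <stddef.h>

        #define NCLASSES 32

        typedef struct {
            size_t free_count[NCLASSES];   /* free blocks per size class */
            size_t alloc_freq[NCLASSES];   /* recent allocation counts   */
        } heap_t;

        /* Assumed helpers: a demand projection, a donor-class chooser,
         * and a routine that cuts one donor block into blocks of the
         * target class while updating both classes' free counts. */
        extern size_t projected_demand(const heap_t *h, int cls);
        extern int    pick_donor_class(const heap_t *h, int want);
        extern void   split_block(heap_t *h, int donor_cls, int target_cls);

        /* Preemptive pass: before any request arrives, top up every size
         * class whose free supply falls short of its projected demand. */
        void preemptive_split_pass(heap_t *h)
        {
            for (int cls = 0; cls < NCLASSES; cls++) {
                while (h->free_count[cls] < projected_demand(h, cls)) {
                    int donor = pick_donor_class(h, cls);
                    if (donor < 0)
                        break;              /* no class can spare a block */
                    split_block(h, donor, cls);
                }
            }
        }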


Memory-block coalescing based on run-time demand monitoring
    4.
    Granted invention patent (in force)

    Publication No.: US06839822B2

    Publication date: 2005-01-04

    Application No.: US10055414

    Filing date: 2001-10-29

    IPC classification: G06F12/02 G06F12/00 G06F13/00

    Abstract: A computer system (10) implements a memory allocator that employs a data structure (FIG. 3) to maintain an inventory of dynamically allocated memory available to receive new data. It receives from one or more programs requests that it allocate memory from a dynamically allocable memory “heap.” It responds to such requests by performing the requested allocation and removing the thus-allocated memory block from the inventory. Conversely, it adds to the inventory memory blocks that the supported program or programs request be freed. In the process, it monitors the frequencies with which memory blocks of different sizes are allocated, and it projects from those frequencies future demand for different-sized memory blocks. When it needs to coalesce multiple smaller blocks to fulfil an actual or expected request for a larger block, it bases its selection of which constituent blocks to coalesce on whether enough free blocks of a constituent block's size exist to meet the projected demand for them.
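
    The demand test that gates coalescing can be sketched as follows; heap_t, physical_neighbour, merge, and the other helpers are hypothetical stand-ins for the mechanisms the abstract describes.

        #include <stddef.h>

        #define NCLASSES 32

        typedef struct block block;

        typedef struct {
            size_t free_count[NCLASSES];
            size_t alloc_freq[NCLASSES];
        } heap_t;

        /* Assumed helpers (all hypothetical). */
        extern size_t projected_demand(const heap_t *h, int cls);
        extern block *physical_neighbour(block *b, int *cls_out); /* NULL if in use */
        extern void   remove_from_free_list(heap_t *h, block *b, int cls);
        extern block *merge(block *a, block *b);
        extern int    class_of(const block *b);
        extern void   push_free(heap_t *h, block *b, int cls);

        /* A free block may be consumed as a coalescing constituent only
         * if its size class can spare it, i.e. enough free blocks of
         * that size remain to cover the projected demand for the size. */
        static int may_consume(const heap_t *h, int cls)
        {
            return h->free_count[cls] > projected_demand(h, cls);
        }

        /* On free, coalesce with the physical neighbour only when the
         * neighbour's class passes the demand test. */
        void free_with_coalescing(heap_t *h, block *b, int cls)
        {
            int ncls;
            block *nb = physical_neighbour(b, &ncls);
            if (nb != NULL && may_consume(h, ncls)) {
                remove_from_free_list(h, nb, ncls);
                b   = merge(b, nb);
                cls = class_of(b);
            }
            push_free(h, b, cls);
        }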


Method and apparatus for implementing a write barrier of a garbage collected heap
    5.
    Granted invention patent (expired)

    Publication No.: US6049810A

    Publication date: 2000-04-11

    Application No.: US842194

    Filing date: 1997-04-23

    IPC classification: G06F12/02 G06F12/00

    CPC classification: G06F12/0276 Y10S707/99957

    Abstract: Apparatus, methods, systems and computer program products are disclosed describing a data structure and associated processes that optimize garbage collection. The invention sections a card vector associated with a card marked heap into portions. Each portion can be individually write protected. A section vector contains section data structures that are used to control their respective portions. When a write-barrier executes and attempts to mark a card marker in a read-only portion of the card vector, the invention traps the mark operation, sets the portion to read-write, changes the status of the section data structure and completes the mark operation. When a garbage collection phase scans the heap during the garbage collection process, it skips over portions of the card vector associated with sections having a read-only status--thus, improving the garbage collection process.
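
    One way to illustrate the trap-and-unprotect scheme is with mprotect and a SIGSEGV handler, as in the rough C sketch below; the card size, portion size, and all identifiers are assumptions, and production code would need async-signal-safe bookkeeping plus validation that the faulting address really lies in the card vector.

        #include <signal.h>
        #include <stdint.h>
        #include <sys/mman.h>

        #define CARD_SHIFT    9          /* 512-byte cards (assumed)        */
        #define PORTION_BYTES 4096       /* one write-protectable portion   */
        #define NPORTIONS     256

        extern uint8_t *heap_base;       /* assumed start of the heap       */
        static uint8_t *card_vector;     /* NPORTIONS * PORTION_BYTES bytes */
        static volatile sig_atomic_t portion_dirty[NPORTIONS]; /* section data */

        /* Write barrier: mark the card covering `addr`.  If that card lies
         * in a read-only portion, the store traps and the handler below
         * makes the portion writable and records it as needing a scan. */
        static inline void write_barrier(void *addr)
        {
            card_vector[((uint8_t *)addr - heap_base) >> CARD_SHIFT] = 1;
        }

        static void card_trap(int sig, siginfo_t *si, void *ctx)
        {
            (void)sig; (void)ctx;
            size_t portion =
                ((uint8_t *)si->si_addr - card_vector) / PORTION_BYTES;
            mprotect(card_vector + portion * PORTION_BYTES,
                     PORTION_BYTES, PROT_READ | PROT_WRITE);
            portion_dirty[portion] = 1;  /* collector must scan this portion */
        }

        static void install_card_trap(void)
        {
            struct sigaction sa = {0};
            sa.sa_sigaction = card_trap;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGSEGV, &sa, NULL);
        }

        /* After scanning a dirty portion the collector re-protects it, so
         * later cycles skip it until a barrier actually writes there. */
        static void reset_portion(size_t portion)
        {
            portion_dirty[portion] = 0;
            mprotect(card_vector + portion * PORTION_BYTES,
                     PORTION_BYTES, PROT_READ);
        }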


Method and apparatus for localizing nodes in a garbage collected carded heap
    6.
    Granted invention patent (expired)

    Publication No.: US6038572A

    Publication date: 2000-03-14

    Application No.: US842070

    Filing date: 1997-04-23

    IPC classification: G06F12/02 G06F17/00

    CPC classification: G06F12/0276 Y10S707/99957

    Abstract: Apparatus, methods, systems and computer program products are disclosed describing processes that optimize generational garbage collection techniques in a card-marked heap. The invention localizes nodes in an older generation that have a pointer to a newer generation. This node localization increases the density of such nodes in the cards marked as having these nodes and thus reduces the number of marked cards that need to be examined for nodes having pointers to the newer generation.
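
    The localization idea can be illustrated with a small relocation routine that segregates old-generation objects by whether they point into the younger generation; the regions and helper functions below are hypothetical, not the patented mechanism.

        /* Hypothetical types and helpers. */
        typedef struct obj obj;

        extern int   has_young_pointer(const obj *o);      /* assumed predicate */
        extern obj  *copy_to(void *region, const obj *o);  /* assumed copier    */

        extern void *young_ptr_region;   /* cards over this region stay densely
                                            populated with old-to-young pointers */
        extern void *plain_old_region;   /* cards here are rarely marked         */

        /* When an old-generation object is moved (for example during
         * compaction or promotion), segregate it by whether it points
         * into the younger generation.  Packing such objects together
         * raises the number of interesting objects per marked card, so
         * the collector examines fewer cards to find the same set of
         * old-to-young pointers. */
        obj *relocate_old_object(const obj *o)
        {
            return has_young_pointer(o)
                ? copy_to(young_ptr_region, o)
                : copy_to(plain_old_region, o);
        }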


Method and apparatus for optimizing exact garbage collection of array nodes in a carded heap
    7.
    Granted invention patent (expired)

    Publication No.: US5903900A

    Publication date: 1999-05-11

    Application No.: US842139

    Filing date: 1997-04-23

    IPC classification: G06F12/00 G06F12/02 G06F17/30

    CPC classification: G06F12/0276 Y10S707/99957

    Abstract: Apparatus, methods, systems and computer program products are disclosed that optimize a programmed loop that stores pointer variables in an array in a card-marked heap. These methods also optimize garbage collection operations on these pointer variables. Instead of implementing a write-barrier in the body of a programmed loop, the loop is parameterized. This parameterization is associated with the pointer array stored in the heap. This parameterization specifies the first and last modified elements in the array. It further specifies the stride (which indicates how many elements are skipped to reach the next modified element of the array). The parameterization is modified by successive loops that access the array. During a garbage collection operation, the array's parameterization is used to optimize the process of locating modified elements in the array.
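
    A rough C sketch of the (first, last, stride) parameterization follows; the summary record and both functions are illustrative assumptions, and the sketch assumes lo < hi and step >= 1.

        #include <stddef.h>

        typedef struct {
            size_t first;    /* index of the first modified element        */
            size_t last;     /* index of the last modified element         */
            size_t stride;   /* elements skipped between successive stores */
        } array_mod_summary;

        /* Recorded by (or on behalf of) a store loop of the form:
         *     for (i = lo; i < hi; i += step) a[i] = new_pointer;         */
        static void record_loop(array_mod_summary *s,
                                size_t lo, size_t hi, size_t step)
        {
            s->first  = lo;
            s->last   = hi - ((hi - lo - 1) % step) - 1; /* last index written */
            s->stride = step;
        }

        /* At collection time, visit only the slots the summary covers,
         * instead of scanning every card that spans the array. */
        static void scan_modified(void **array, const array_mod_summary *s,
                                  void (*visit)(void **slot))
        {
            for (size_t i = s->first; i <= s->last; i += s->stride)
                visit(&array[i]);
        }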


Method and apparatus for bag-to-set, buffering remembered set
    8.
    Granted invention patent (in force)

    Publication No.: US06829686B2

    Publication date: 2004-12-07

    Application No.: US09779322

    Filing date: 2001-02-08

    IPC classification: G06F12/00

    Abstract: A method for providing a remembered set involves maintaining the remembered set as a bag, identifying when an event occurs, and transforming the remembered set into a set when the event occurs. The step of transforming includes obtaining a plurality of thread local store buffers and flushing the thread local store buffers to a global store buffer.
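
    A minimal sketch of the bag-to-set transformation follows; the buffer sizes and identifiers are hypothetical, and a real implementation would deduplicate with a hash table rather than the linear scan shown.

        #include <stddef.h>

        #define TLB_CAP    1024
        #define GLOBAL_CAP (64 * 1024)

        typedef struct {
            void  *entry[TLB_CAP];   /* duplicates allowed: this is the bag */
            size_t n;
        } tl_store_buffer;           /* one per mutator thread              */

        static void  *global_buf[GLOBAL_CAP];
        static size_t global_n;

        /* Linear membership test, kept simple for the sketch. */
        static int already_present(void *p)
        {
            for (size_t i = 0; i < global_n; i++)
                if (global_buf[i] == p)
                    return 1;
            return 0;
        }

        /* When the triggering event occurs, flush each thread-local
         * buffer into the global buffer, dropping duplicates so the
         * global buffer behaves as a set. */
        static void flush_tlb(tl_store_buffer *tlb)
        {
            for (size_t i = 0; i < tlb->n; i++)
                if (!already_present(tlb->entry[i]) && global_n < GLOBAL_CAP)
                    global_buf[global_n++] = tlb->entry[i];
            tlb->n = 0;
        }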
