Abstract:
An object structure's header (40) devotes only a two-bit synchronization-state field (42) to the monitor data used for implementing synchronization on that object. When the object is locked by a particular execution thread, or when one or more execution threads are waiting for a lock or notification on that object, its header contains a pointer to monitor resources in the form of a linked list of lock records (50, 52, 54) associated with the threads involved. The synchronization-state field (42) ordinarily contains an indication of whether such a linked list exists and, if so, whether its first member is associated with a thread that holds a lock on the object. When a thread attempts to gain access to that linked list, it employs an atomic swap operation to place a special busy value in the synchronization-state field (42) and write its execution-environment pointer into the object's header (40). If the field's previous value was not the special busy value, the thread uses the header's previous contents to perform its intended synchronization operation. Otherwise, it obtains that information through its own execution environment (44, 46, or 48) or that of the thread whose identifier the object header previously contained. When the thread completes its synchronization operation, it employs an atomic compare-and-swap operation to write the results into the object's header if that header still contains the thread identifier that the thread originally wrote there. If the header instead contains a different thread identifier, that difference indicates that at least one successor thread is contending for access to the linked list, and the thread communicates the results to that successor.
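The swap/compare-and-swap protocol lends itself to a compact illustration. The sketch below models the header word as a single 64-bit atomic; SyncHeader, claim, release, and the exact bit layout are illustrative assumptions rather than the patented implementation, and Java's getAndSet and compareAndSet stand in for the hardware atomic swap and compare-and-swap operations the abstract describes.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Minimal sketch of the header-word protocol described above, not the patented
 * implementation. The 2-bit synchronization state and the thread / lock-record
 * pointer are packed into one 64-bit word.
 */
final class SyncHeader {
    static final long STATE_MASK = 0b11L;   // low two bits: synchronization state
    static final long NEUTRAL    = 0b00L;   // no lock-record list exists
    static final long LOCKED     = 0b01L;   // list exists; its first record holds the lock
    static final long BUSY       = 0b11L;   // header temporarily claimed by some thread

    private final AtomicLong word = new AtomicLong(NEUTRAL);

    /**
     * Claim the header: atomically swap in BUSY plus our identity and return
     * the previous word, whose state bits and pointer tell us whether a
     * lock-record list already exists and who owns it.
     */
    long claim(long threadId) {
        return word.getAndSet((threadId << 2) | BUSY);
    }

    /** True if another thread had already claimed the header when we swapped. */
    static boolean wasBusy(long previousWord) {
        return (previousWord & STATE_MASK) == BUSY;
    }

    /**
     * Publish the result of the synchronization operation, but only if the
     * header still holds exactly the word written by claim(). If this fails,
     * a contending successor has overwritten the header, and the result would
     * instead be handed to that successor directly.
     */
    boolean release(long threadId, long resultWord) {
        return word.compareAndSet((threadId << 2) | BUSY, resultWord);
    }
}
```

A locking thread would call claim, inspect the previous word (or, if it was BUSY, consult the appropriate execution environment), build or update the lock-record list, and then call release, falling back to a hand-off to the contending successor when the compare-and-swap fails.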
Abstract:
A computer system (10) implements a memory allocator that employs a data structure (FIG. 3) to maintain an inventory of dynamically allocated memory available to receive new data. It receives from one or more programs requests that it allocate memory from a dynamically allocable memory “heap.” It responds to such requests by performing the requested allocation and removing the thus-allocated memory block from the inventory. Conversely, it adds to the inventory memory blocks that the supported program or programs request be freed. In the process, it monitors the frequencies with which memory blocks of various sizes are allocated, and it projects from those frequencies future demand for memory blocks of those sizes. To split a relatively large block in order to meet an actual or expected request for a smaller block, it bases its selection of the larger block to be split on whether the supply of free blocks of the larger block's size is great enough to meet the expected demand for such blocks. Splitting occurs both preemptively, i.e., before a request for the result of the splitting, and reactively, i.e., in response to a previously made request for a block that will result from the splitting operation.
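A minimal sketch of the demand-aware splitting decision follows. SplittingAllocator, Block, and the use of a recent per-size allocation count as the demand projection are illustrative assumptions; the patented data structure of FIG. 3 is far more elaborate than the TreeMap of free lists used here.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import java.util.TreeMap;

/** Sketch of demand-aware splitting; names and the demand estimate are assumptions. */
final class SplittingAllocator {
    static final class Block {
        final int size;
        Block(int size) { this.size = size; }
    }

    // Inventory of free blocks, keyed by size.
    final TreeMap<Integer, Deque<Block>> freeLists = new TreeMap<>();
    // Per-size allocation counts, used as a crude projection of future demand.
    final Map<Integer, Integer> allocCount = new TreeMap<>();

    Block allocate(int size) {
        allocCount.merge(size, 1, Integer::sum);
        Deque<Block> exact = freeLists.get(size);
        if (exact != null && !exact.isEmpty()) {
            return exact.pop();                    // exact fit: remove it from the inventory
        }
        Block b = splitFromSurplus(size);          // reactive split for this request
        return (b != null) ? b : new Block(size);  // else fall back to fresh heap memory
    }

    void free(Block b) {
        freeLists.computeIfAbsent(b.size, k -> new ArrayDeque<>()).push(b);
    }

    /**
     * Split a larger free block to produce one of the requested size, but only
     * from a size class whose free supply exceeds its own projected demand.
     */
    Block splitFromSurplus(int size) {
        for (Map.Entry<Integer, Deque<Block>> e : freeLists.tailMap(size, false).entrySet()) {
            Deque<Block> bigger = e.getValue();
            if (!bigger.isEmpty() && bigger.size() > projectedDemand(e.getKey())) {
                Block donor = bigger.pop();
                free(new Block(donor.size - size)); // remainder goes back to the inventory
                return new Block(size);
            }
        }
        return null;                                // no size class can spare a block
    }

    int projectedDemand(int size) {
        return allocCount.getOrDefault(size, 0);
    }
}
```

The key point is the guard in splitFromSurplus: a larger size class donates a block only when its free supply exceeds its own projected demand.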
Abstract:
A computer system (10) implements a memory allocator that employs a data structure (FIG. 3) to maintain an inventory of dynamically allocated memory available to receive new data. It receives from one or more programs requests that it allocate memory from a dynamically allocable memory “heap.” It responds to such requests by performing the requested allocation and removing the thus-allocated memory block from the inventory. Conversely, it adds to the inventory memory blocks that the supported program or programs request be freed. In the process, it monitors the frequencies with which memory blocks of various sizes are allocated, and it projects from those frequencies future-demand values for memory blocks of those sizes. It then splits larger blocks into smaller ones preemptively, i.e., before a request for the result of the splitting. To split a relatively large block preemptively in order to meet an expected request for a smaller block, it bases its selection of the larger block to be split on whether the supply of free blocks of the larger block's size is great enough to meet the expected demand for such blocks. It also splits blocks reactively, i.e., in response to a previously made request for a block that will result from the splitting operation.
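The preemptive variant can be sketched as one additional method on the SplittingAllocator above (again an assumption about structure, not the patented code): between allocation requests it splits surplus larger blocks until each size class holds roughly its projected demand, so later requests can be met without splitting on the spot.

```java
/**
 * Preemptive-splitting pass for the SplittingAllocator sketch above: stock
 * the inventory ahead of requests using the same surplus-only selection rule.
 */
void replenish() {
    for (Map.Entry<Integer, Integer> e : allocCount.entrySet()) {
        int size = e.getKey();
        Deque<Block> list = freeLists.get(size);
        int have = (list == null) ? 0 : list.size();
        int shortfall = projectedDemand(size) - have;
        while (shortfall-- > 0) {
            Block b = splitFromSurplus(size);   // same rule as the reactive path
            if (b == null) break;               // no size class can spare a donor block
            free(b);                            // stock the inventory before requests arrive
        }
    }
}
```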
Abstract:
A computer system (10) implements a memory allocator that employs a data structure (FIG. 3) to maintain an inventory of dynamically allocated memory available to receive new data. It receives from one or more programs requests that it allocate memory from a dynamically allocable memory “heap.” It responds to such requests by performing the requested allocation and removing the thus-allocated memory block from the inventory. Conversely, it adds to the inventory memory blocks that the supported program or programs request be freed. In the process, it monitors the frequencies with which memory blocks of different sizes are allocated, and it projects from those frequencies future demand for different-sized memory blocks. When it needs to coalesce multiple smaller blocks to fulfill an actual or expected request for a larger block, it bases its selection of which constituent blocks to coalesce on whether enough free blocks of a constituent block's size exist to meet the projected demand for them.
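A sketch of demand-aware coalescing follows, assuming free blocks are tracked by start address so that physically adjacent blocks can be found. CoalescingInventory, its demand table, and the greedy run-building loop are illustrative simplifications, not the patented selection procedure.

```java
import java.util.Map;
import java.util.TreeMap;

/**
 * Demand-aware coalescing sketch (illustrative only). A constituent block is
 * consumed only if its size class holds more free blocks than its projected
 * demand, mirroring the selection rule described in the abstract.
 */
final class CoalescingInventory {
    static final class Block {
        final long addr;
        final int size;
        Block(long addr, int size) { this.addr = addr; this.size = size; }
    }

    private final TreeMap<Long, Block> byAddress = new TreeMap<>();
    private final Map<Integer, Integer> freeCount = new TreeMap<>();
    private final Map<Integer, Integer> projectedDemand = new TreeMap<>();

    void free(Block b) {
        byAddress.put(b.addr, b);
        freeCount.merge(b.size, 1, Integer::sum);
    }

    void expectDemand(int size, int blocks) {
        projectedDemand.put(size, blocks);
    }

    /** Try to build a block of at least `wanted` bytes from adjacent surplus blocks. */
    Block coalesceFor(int wanted) {
        for (Block start : byAddress.values()) {
            long addr = start.addr;
            int total = 0;
            Block next = start;
            // Grow the run while the next block is physically adjacent and its
            // size class can spare a block without starving projected demand.
            while (next != null && next.addr == addr + total && isSurplus(next.size)) {
                total += next.size;
                if (total >= wanted) {
                    return consumeRun(addr, total);   // remove constituents, return the run
                }
                next = byAddress.get(addr + total);
            }
        }
        return null;    // no run of adjacent surplus blocks is large enough
    }

    private boolean isSurplus(int size) {
        return freeCount.getOrDefault(size, 0) > projectedDemand.getOrDefault(size, 0);
    }

    private Block consumeRun(long addr, int total) {
        long cursor = addr;
        while (cursor < addr + total) {
            Block b = byAddress.remove(cursor);
            freeCount.merge(b.size, -1, Integer::sum);
            cursor += b.size;
        }
        return new Block(addr, total);
    }
}
```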
Abstract:
Apparatus, methods, systems and computer program products are disclosed describing a data structure and associated processes that optimize garbage collection. The invention sections a card vector associated with a card-marked heap into portions, each of which can be individually write-protected. A section vector contains section data structures that control their respective portions. When a write-barrier executes and attempts to set a card marker in a read-only portion of the card vector, the invention traps the mark operation, sets the portion to read-write, changes the status of the corresponding section data structure, and completes the mark operation. When a garbage collection phase scans the heap, it skips over portions of the card vector associated with sections having a read-only status, thus improving the garbage collection process.
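The sectioning scheme can be sketched as follows. In the patent the read-only state is enforced with page protection, and a mark attempt in a protected portion is caught by a hardware trap; in this sketch a per-section boolean stands in for that trap, and SectionedCardTable, the card size, and the section size are assumptions.

```java
/**
 * Sketch of sectioned card marking. A plain boolean per section stands in for
 * the page-protection trap of the real mechanism; all names are illustrative.
 */
final class SectionedCardTable {
    static final int CARD_SHIFT = 9;          // 512-byte cards (assumed)
    static final int CARDS_PER_SECTION = 1024;

    private final byte[] cardVector;
    private final boolean[] sectionWritable;  // false = "read-only", nothing marked

    SectionedCardTable(int heapBytes) {
        int cards = heapBytes >> CARD_SHIFT;
        cardVector = new byte[cards];
        sectionWritable = new boolean[(cards + CARDS_PER_SECTION - 1) / CARDS_PER_SECTION];
    }

    /** Write barrier: mark the card covering the stored-into heap offset. */
    void markCard(int heapOffset) {
        int card = heapOffset >> CARD_SHIFT;
        int section = card / CARDS_PER_SECTION;
        if (!sectionWritable[section]) {
            // This is where the real implementation would take a protection
            // trap: un-protect the portion and record that in the section vector.
            sectionWritable[section] = true;
        }
        cardVector[card] = 1;                 // complete the original mark operation
    }

    /** Collector scan: skip whole sections that never left the read-only state. */
    void scanDirtyCards(java.util.function.IntConsumer processCard) {
        for (int s = 0; s < sectionWritable.length; s++) {
            if (!sectionWritable[s]) continue;          // nothing marked here since reset
            int first = s * CARDS_PER_SECTION;
            int last = Math.min(first + CARDS_PER_SECTION, cardVector.length);
            for (int c = first; c < last; c++) {
                if (cardVector[c] != 0) {
                    processCard.accept(c);
                    cardVector[c] = 0;
                }
            }
            sectionWritable[s] = false;       // re-"protect" the portion after cleaning
        }
    }
}
```

The payoff is in scanDirtyCards: whole sections whose flag never flipped are skipped without examining any of their card bytes.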
Abstract:
Apparatus, methods, systems and computer program products are disclosed describing processes that optimize generational garbage collection techniques in a card-marked heap. The invention localizes nodes in an older generation that have a pointer to a newer generation. This node localization increases the density of such nodes in the cards marked as having these nodes and thus reduces the number of marked cards that need to be examined for nodes having pointers to the newer generation.
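The benefit can be illustrated with a small sketch: packing the young-pointing old-generation nodes into one contiguous region shrinks the number of distinct cards the collector must later examine. Node, the card size, and the packing policy are assumptions, and the actual relocation and pointer fix-up are elided.

```java
/** Illustrative sketch of localization; relocation details are deliberately elided. */
final class Localizer {
    static final int CARD_SIZE = 512;   // assumed card size in bytes

    static final class Node {
        long addr;                       // current heap address of the node
        final int size;
        final boolean pointsToYoungGen;
        Node(long addr, int size, boolean pointsToYoungGen) {
            this.addr = addr;
            this.size = size;
            this.pointsToYoungGen = pointsToYoungGen;
        }
    }

    /** Move every young-pointing node into a contiguous area starting at base. */
    static void localize(java.util.List<Node> oldGen, long base) {
        long next = base;
        for (Node n : oldGen) {
            if (n.pointsToYoungGen) {
                n.addr = next;           // copying and pointer fix-up omitted in this sketch
                next += n.size;
            }
        }
    }

    /** Count how many distinct cards contain young-pointing nodes. */
    static long cardsToScan(java.util.List<Node> oldGen) {
        return oldGen.stream()
                     .filter(n -> n.pointsToYoungGen)
                     .map(n -> n.addr / CARD_SIZE)
                     .distinct()
                     .count();
    }
}
```

Calling cardsToScan before and after localize shows the density gain: the same set of young-pointing nodes occupies far fewer cards once they are packed together.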
Abstract:
Apparatus, methods, systems and computer program products are disclosed that optimize a programmed loop that stores pointer variables in an array in a card-marked heap. These methods also optimize garbage collection operations on these pointer variables. Instead of implementing a write-barrier in the body of the programmed loop, the loop is parameterized. The parameterization is associated with the pointer array stored in the heap and specifies the first and last modified elements in the array, as well as the stride (the number of elements skipped to reach the next modified element of the array). The parameterization is modified by successive loops that access the array. During a garbage collection operation, the array's parameterization is used to optimize the process of locating modified elements in the array.
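A sketch of such a parameterization follows. ArrayWriteSummary and the merge rule applied when successive loops touch the same array (widen the range, take the gcd of the strides, and fall back to a stride of one when the loops are misaligned) are assumptions; the abstract only states that the parameterization is modified by successive loops.

```java
/** Sketch of a per-array loop parameterization; names and merge rule are assumptions. */
final class ArrayWriteSummary {
    int first = -1;   // index of first modified element, -1 = no writes recorded
    int last  = -1;   // index of last modified element
    int stride = 0;   // elements skipped between successive modified elements

    /** Called once per loop instead of a per-store write barrier. */
    void recordLoop(int loopFirst, int loopLast, int loopStride) {
        if (first < 0) {                           // first loop over this array
            first = loopFirst;
            last = loopLast;
            stride = Math.max(loopStride, 1);
            return;
        }
        int merged = gcd(stride, Math.max(loopStride, 1));
        // Conservative merge: if the two loops' index sequences are not aligned
        // on the merged stride, fall back to visiting every element in the range.
        if ((loopFirst - first) % merged != 0) {
            merged = 1;
        }
        first = Math.min(first, loopFirst);        // widen to cover both loops
        last  = Math.max(last, loopLast);
        stride = merged;
    }

    /** Collector side: visit only the elements the summary says were modified. */
    void scan(Object[] array, java.util.function.Consumer<Object> visit) {
        if (first < 0) return;
        int step = Math.max(stride, 1);
        for (int i = first; i <= last && i < array.length; i += step) {
            visit.accept(array[i]);
        }
    }

    private static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }
}
```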
Abstract:
A method for providing a remembered set involves maintaining the remembered set as a bag, identifying when an event occurs, and transforming the remembered set into a set when the event occurs. The step of transforming includes obtaining a plurality of thread local store buffers and flushing the thread local store buffers to a global store buffer.
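A sketch of the bag-to-set transformation follows, with the thread local store buffers modeled by ThreadLocal lists. The triggering event (a full buffer, the start of a collection, and so on) is left to the caller, the sketch assumes mutator threads are quiescent while the flush runs, and all names are illustrative.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Sketch of a remembered set kept as a bag until an event forces set semantics. */
final class RememberedSet {
    // Bag phase: duplicates are tolerated, so recording a reference stays cheap.
    private final List<List<Object>> allThreadBuffers = new ArrayList<>();
    private final ThreadLocal<List<Object>> localBuffer = ThreadLocal.withInitial(() -> {
        List<Object> buf = new ArrayList<>();
        synchronized (allThreadBuffers) { allThreadBuffers.add(buf); }
        return buf;
    });

    private final List<Object> globalBuffer = new ArrayList<>();
    private final Set<Object> asSet = new HashSet<>();

    /** Fast path: append to this thread's buffer without checking for duplicates. */
    void remember(Object ref) {
        localBuffer.get().add(ref);
    }

    /**
     * Event: flush every thread-local store buffer to the global store buffer,
     * then collapse duplicates into a set. Assumes mutator threads are paused,
     * as a collector would arrange at such an event.
     */
    synchronized Set<Object> transformToSet() {
        synchronized (allThreadBuffers) {
            for (List<Object> buf : allThreadBuffers) {
                globalBuffer.addAll(buf);
                buf.clear();
            }
        }
        asSet.addAll(globalBuffer);   // set semantics: duplicates collapse here
        globalBuffer.clear();
        return asSet;
    }
}
```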