Abstract:
Hardware-based transactional memory mechanisms, such as Speculative Lock Elision (SLE), may allow multiple threads to concurrently execute critical sections protected by the same lock as speculative transactions. Such transactions may abort due to contention or due to misidentification of code as a critical section. In various embodiments, speculative execution mechanisms may be augmented with software and/or hardware contention management mechanisms to reduce abort rates. Speculative execution hardware may send a hardware interrupt signal to notify software components of a speculative execution event (e.g., abort). Software components may respond by implementing concurrency-throttling mechanisms and/or by determining a mode of execution (e.g., speculative, non-speculative) for a given section and communicating that determination to the hardware speculative execution mechanisms, e.g., by writing it into a lock predictor cache. Subsequently, hardware speculative execution mechanisms may determine a preferred mode of execution for the section by reading the corresponding entry from the lock predictor cache.
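The contention-management response described above can be pictured in software. The following Java sketch is only an illustration under assumptions: the class and method names (SpeculationManager, onSpeculativeAbort, and so on) are hypothetical, a map stands in for the hardware lock predictor cache, and the abort threshold and concurrency bound are arbitrary.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical software-side contention manager: reacts to abort notifications
// from the speculative-execution hardware by throttling concurrency and
// recording a preferred execution mode per lock.
public class SpeculationManager {
    public enum Mode { SPECULATIVE, NON_SPECULATIVE }

    // Stand-in for the hardware lock predictor cache: maps an identifier for a
    // lock to the mode the software prefers for its critical section.
    private final Map<Long, Mode> lockPredictorTable = new ConcurrentHashMap<>();
    private final Map<Long, AtomicInteger> abortCounts = new ConcurrentHashMap<>();
    private final Semaphore concurrencyThrottle = new Semaphore(8); // bound on speculating threads
    private static final int ABORT_THRESHOLD = 4;

    // Invoked (e.g., from a handler for the hardware interrupt) when a
    // speculative critical section protected by lockId aborts.
    public void onSpeculativeAbort(long lockId) {
        int aborts = abortCounts
                .computeIfAbsent(lockId, k -> new AtomicInteger())
                .incrementAndGet();
        if (aborts >= ABORT_THRESHOLD) {
            // Ask the hardware to stop eliding this lock for now.
            lockPredictorTable.put(lockId, Mode.NON_SPECULATIVE);
        }
    }

    // Invoked when a speculative critical section commits successfully.
    public void onSpeculativeCommit(long lockId) {
        abortCounts.computeIfAbsent(lockId, k -> new AtomicInteger()).set(0);
        lockPredictorTable.put(lockId, Mode.SPECULATIVE);
    }

    // Consulted before entering the critical section; a real system would read
    // the corresponding entry from the hardware lock predictor cache instead.
    public Mode preferredMode(long lockId) {
        return lockPredictorTable.getOrDefault(lockId, Mode.SPECULATIVE);
    }

    // Simple concurrency throttling: limit how many threads may attempt
    // speculation at once.
    public void acquireSpeculationSlot() throws InterruptedException {
        concurrencyThrottle.acquire();
    }

    public void releaseSpeculationSlot() {
        concurrencyThrottle.release();
    }
}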
Abstract:
The present disclosure describes a unique way for each of multiple processes to operate in parallel and use the same shared data without corrupting it. For example, during a commit phase, a corresponding transaction can attempt to increment a globally accessible version information variable and store a current value of the globally accessible version information variable for updating version information associated with modified data, regardless of whether the transaction's attempt to modify the globally accessible version information variable was successful. As an alternative mode, a corresponding transaction can merely read and store a current value of the globally accessible version information variable without attempting to update the globally accessible version information variable before such use. In yet another application, a parallel processing environment implements a combination of both aforementioned modes depending on a self-abort rate of the transaction.
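A minimal Java sketch of the two commit-phase modes, assuming an AtomicLong as the globally accessible version information variable and an arbitrary self-abort-rate threshold for choosing between them; names and the threshold policy are assumptions, not the disclosed implementation.

import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of the two commit-time treatments of a global version
// clock; which mode a high self-abort rate selects is an assumed policy.
public class GlobalVersionClock {
    private final AtomicLong globalVersion = new AtomicLong(0);

    // Mode 1: attempt to increment the global clock, but stamp written data
    // with the observed value whether or not the increment succeeded.
    public long commitWithIncrementAttempt() {
        long observed = globalVersion.get();
        long proposed = observed + 1;
        if (globalVersion.compareAndSet(observed, proposed)) {
            return proposed;          // our increment won
        }
        return globalVersion.get();   // another transaction advanced it; reuse that value
    }

    // Mode 2: merely read the current value without attempting to update it.
    public long commitReadOnlyClock() {
        return globalVersion.get();
    }

    // Combination of both modes, selected by the transaction's self-abort rate
    // (threshold and direction assumed for illustration).
    public long commit(double selfAbortRate) {
        return (selfAbortRate > 0.5)
                ? commitReadOnlyClock()
                : commitWithIncrementAttempt();
    }
}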
Abstract:
Systems and methods for managing divergent best effort transactional support mechanisms across various transactional memory implementations using a portable transaction interface are described. This interface may be implemented by various combinations of best effort hardware features, including none at all. Because the features offered by this interface may be best effort, a default (e.g., software) implementation may always be possible without the need for special hardware support. Software may be written to the interface, and may be executable on a variety of platforms, taking advantage of the best effort hardware features included on each one while not depending on any particular mechanism. Multiple implementations of each operation defined by the interface may be included in one or more portable transaction interface libraries. Systems and/or application software may be written to be platform-independent and/or portable, and may call functions of these libraries to implement the operations for a targeted execution environment.
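One way to picture such a portable transaction interface is as an interface type with a default software implementation that needs no hardware support, bound to a platform-specific implementation by a library. The Java sketch below uses hypothetical names (TransactionSupport, SoftwareTransactionSupport, TransactionLibrary) and omits the actual transactional bookkeeping.

// Portable transaction interface: operations may be best effort, so begin()
// and commit() are allowed to fail and force a retry or fallback.
public interface TransactionSupport {
    boolean begin();                 // false if speculation is unavailable right now
    void read(long address);
    void write(long address, long value);
    boolean commit();                // best effort: may fail, caller retries or falls back
    void abort();
}

// Default implementation requiring no special hardware (e.g., a software
// transactional memory or a simple lock-based fallback).
class SoftwareTransactionSupport implements TransactionSupport {
    public boolean begin()  { /* acquire a global or per-object lock */ return true; }
    public void read(long address)              { /* log the read */ }
    public void write(long address, long value) { /* buffer the write */ }
    public boolean commit() { /* apply buffered writes, release locks */ return true; }
    public void abort()     { /* discard buffered writes */ }
}

// Library entry point: applications call the interface; the library binds it
// to whatever best effort features the current platform offers.
class TransactionLibrary {
    static TransactionSupport forCurrentPlatform() {
        // A real library would probe for hardware features here and return a
        // hardware-backed implementation when one is available.
        return new SoftwareTransactionSupport();
    }
}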
Abstract:
When a thread of program execution on a computer system is executing a critical code section, i.e., a code section whose preemption could result in inconsistency, it asserts an indicator of that fact. When the system's scheduler reschedules the thread for execution, it determines whether the indicator is asserted. If the indicator is asserted, the scheduler does not cause the thread immediately to resume execution where the thread left off when it was preempted. Instead, the scheduler has the thread's signal handler execute in such a manner that the thread performs inconsistency-avoiding operations.
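The indicator protocol can be illustrated at user level, although a real implementation would reside in the kernel or threading library rather than in application-level Java; the flag table, the scheduler hook, and the handler in this sketch are all hypothetical stand-ins.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Conceptual sketch of the critical-section indicator and the scheduler's
// check of it before resuming a preempted thread.
public class CriticalSectionIndicator {
    // Indicator asserted while a thread is inside a critical code section
    // whose preemption could leave shared state inconsistent.
    private static final Map<Thread, Boolean> indicator = new ConcurrentHashMap<>();

    public static void enterCritical() { indicator.put(Thread.currentThread(), Boolean.TRUE); }
    public static void exitCritical()  { indicator.put(Thread.currentThread(), Boolean.FALSE); }

    // Called by the (hypothetical) scheduler when it is about to resume a
    // previously preempted thread.
    public static void onReschedule(Thread preempted, Runnable inconsistencyAvoidingHandler) {
        if (Boolean.TRUE.equals(indicator.get(preempted))) {
            // Do not resume where the thread left off; instead have the
            // thread's handler run so it can perform inconsistency-avoiding
            // operations (e.g., restore or retry) before normal execution.
            inconsistencyAvoidingHandler.run();
        }
        // Otherwise the thread simply resumes at the point of preemption.
    }
}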
Abstract:
Mechanisms and techniques provide a system for composing complex constructs for use on a graphical display of a computerized device. The system receives a selection of basic constructor objects for use in the complex construct. The basic constructor objects are chosen from a set of basic constructor object types including a button object type, a dial object type, an edit object type, and a container object type. The system also receives a selection of one or more personalities to assign to the basic constructor objects. The personalities define extensions to basic constructor object operation and define a view for the object when rendered on an interface. The system combines the personalities and the basic constructor objects to define complex constructs such as menus, scrollbars, and the like. Personalities can be modified to alter the complex construct from one operational state to another.
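A rough Java sketch of the composition model, with hypothetical type names, showing how personalities attach to basic constructor objects and how replacing them moves a complex construct between operational states.

// A personality extends a basic constructor object's behavior and defines its
// view when rendered on an interface.
interface Personality {
    void render(BasicConstructor target);      // defines the view
    void onInteract(BasicConstructor target);  // extends the operation
}

abstract class BasicConstructor {
    private final java.util.List<Personality> personalities = new java.util.ArrayList<>();

    void addPersonality(Personality p) { personalities.add(p); }

    // Swapping personalities alters the construct from one operational state
    // to another without changing the underlying basic constructor objects.
    void replacePersonalities(java.util.List<Personality> newOnes) {
        personalities.clear();
        personalities.addAll(newOnes);
    }

    void draw() { personalities.forEach(p -> p.render(this)); }
}

// The basic constructor object types named in the abstract.
class ButtonConstructor    extends BasicConstructor {}
class DialConstructor      extends BasicConstructor {}
class EditConstructor      extends BasicConstructor {}
class ContainerConstructor extends BasicConstructor {
    private final java.util.List<BasicConstructor> children = new java.util.ArrayList<>();
    void add(BasicConstructor child) { children.add(child); }
}

// A complex construct such as a scrollbar might combine a container, two
// button constructors, and a dial constructor, each assigned a
// scrollbar-specific personality.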
Abstract:
A computer system employing a plurality of concurrent threads to perform tasks that dynamically identify further similar tasks employs a double-ended queue (“deque”) to list the dynamically identified tasks. If a thread's deque runs out of tasks while other threads' deques have tasks remaining, the thread whose deque has become empty will remove one or more entries from another thread's deque and perform the tasks thereby identified. When a thread's deque becomes too full, it may allocate space for another deque, transfer entries from its existing deque, place an identifier of the existing deque into the new deque, and adopt the new deque as the one that it uses for storing and retrieving task identifiers. Alternatively, it may transfer some of the existing deque's entries into a newly allocated array and place an identifier of that array into the existing deque. The thread thereby deals with deque overflows without introducing additional synchronization requirements or restricting the deque's range of use.
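The first overflow variant (allocating a new deque, placing an identifier of the existing deque into it, and adopting the new deque) can be sketched as follows. This single-threaded Java illustration uses hypothetical names and omits the synchronization protocol a real work-stealing deque needs for concurrent pops and steals.

import java.util.ArrayDeque;
import java.util.Deque;

// Simplified sketch of the overflow strategy: an entry is either a task or a
// reference to an older deque holding earlier entries.
public class OverflowDeque {
    interface Entry {}
    static class Task implements Entry { final Runnable work; Task(Runnable w) { work = w; } }
    static class Spill implements Entry {
        final Deque<Entry> overflowed;
        Spill(Deque<Entry> d) { overflowed = d; }
    }

    private Deque<Entry> deque = new ArrayDeque<>();
    private final int capacity;

    OverflowDeque(int capacity) { this.capacity = capacity; }

    void push(Runnable task) {
        if (deque.size() >= capacity) {
            // Overflow: allocate a new deque, record an identifier of the old
            // one as its first entry, and adopt the new deque for future use.
            Deque<Entry> fresh = new ArrayDeque<>();
            fresh.addLast(new Spill(deque));
            deque = fresh;
        }
        deque.addLast(new Task(task));
    }

    Runnable pop() {
        Entry e = deque.pollLast();
        if (e == null) return null;               // empty: the owner should try stealing
        if (e instanceof Task) return ((Task) e).work;
        // Reached a spill marker: switch back to the overflowed deque.
        deque = ((Spill) e).overflowed;
        return pop();
    }
}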
Abstract:
Mechanisms and techniques operate in a computerized device to perform a memory management technique such as garbage collection. The mechanisms and techniques operate to detect, within a storage structure associated with a thread, general memory references that reference storage locations in a general memory area such as a heap. The storage structure may be a stack utilized by the thread (for example, a Java thread) during operation of the thread in the computerized device. The system maintains a reference structure containing an association to the general memory area for each detected general memory reference within the storage structure. The system then operates the memory management technique on the general memory area only for locations other than those for which an association is maintained in the reference structure, thus increasing the performance of the memory management technique.
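Conceptually, the reference structure and its use by the collector might look like the following Java sketch; the types are hypothetical stand-ins for runtime-internal structures, since application-level Java cannot actually scan thread stacks.

import java.util.HashSet;
import java.util.Set;

// Records which heap locations are referenced from a thread's stack so the
// memory management technique can skip those locations.
public class StackReferenceTracker {
    // The "reference structure": associations to the general memory area for
    // each detected general memory reference within the storage structure.
    private final Set<Long> referencedHeapLocations = new HashSet<>();

    // Called while scanning the thread's storage structure (its stack).
    public void recordStackReference(long heapAddress) {
        referencedHeapLocations.add(heapAddress);
    }

    // The memory management technique consults the reference structure and
    // processes only locations without a recorded association.
    public void collect(Iterable<Long> allHeapLocations, HeapManager heap) {
        for (long location : allHeapLocations) {
            if (!referencedHeapLocations.contains(location)) {
                heap.process(location);   // e.g., reclaim or relocate
            }
        }
    }

    // Hypothetical interface to the general memory area (heap).
    interface HeapManager { void process(long location); }
}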
Abstract:
A code generating system generates, from code in a program, native code that is executable by a computer system. The computer system includes a memory subsystem including a heap in which objects are stored and a stack in which method variables are stored. The code generating system may be included in a just-in-time compiler used to generate, from a program in Java Byte Code form, native code that is executable by a computer system. Specifically, in response to Java Byte Code representative of an operator for enabling instantiation of a new object, the code generating system determines whether the object to be instantiated contains a variable, to be used in processing of the received program code portion, that can be promoted to a method variable, and, if so, generates native code to enable that variable to be instantiated on the stack.
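The decision the code generator makes at an object-instantiation bytecode can be summarized in a small sketch; the Java model below is a hypothetical simplification, not the generator's actual analysis.

// Hypothetical model of the promotion decision: if the contained variable can
// be promoted to a method variable, emit native code that places it in the
// current stack frame instead of allocating on the heap.
public class StackPromotionPass {
    enum CodeKind { STACK_FRAME_VARIABLE, HEAP_ALLOCATION }

    static class AllocationSite {
        boolean escapesMethod;        // reference stored to a field, returned, passed out, etc.
        boolean variablePromotable;   // the contained variable is used only within the method
    }

    // Decide which native code to generate for a "new" operator.
    CodeKind selectCodeFor(AllocationSite site) {
        if (!site.escapesMethod && site.variablePromotable) {
            // Promote to a method variable: instantiate in the stack frame.
            return CodeKind.STACK_FRAME_VARIABLE;
        }
        return CodeKind.HEAP_ALLOCATION;  // ordinary heap allocation
    }
}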
Abstract:
A digital computer system comprises a precise exception handling processor and a control subsystem. The precise exception handling processor performs processing operations under control of instructions. The precise exception handling processor is constructed in accordance with a precise exception handling model, in which, if an exception condition is detected in connection with an instruction, the exception condition is processed in connection with that instruction. The precise exception handling processor further includes a pending exception indicator having a pending exception indication state and a no pending exception indication state. The control subsystem provides a series of instructions to the precise exception handling processor to facilitate emulation of at least one emulated program instruction. The emulated program instruction is constructed to be processed by a delayed exception handling processor, which is constructed in accordance with a delayed exception handling model, in which, if an exception is detected during processing of an instruction, the exception condition is processed in connection with a subsequent instruction. The series of instructions provided by the control subsystem in emulation of the emulated program instruction controls the precise exception handling processor to (i) determine whether the pending exception indicator is in the pending exception indication state and, if so, invoke a routine to process the pending exception and condition the pending exception indicator to the no pending exception indication state, (ii) perform processing operations in accordance with the emulated program instruction, and (iii) if an exception condition is detected during the processing operations, invoke an exception handler in accordance with the processor's precise exception handling model to condition the pending exception indicator to the pending exception indication state, so that the exception condition will be processed during processing operations for a subsequent emulated program instruction.
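The three-step instruction sequence can be mirrored in a small emulation-loop sketch; the Java below is a conceptual illustration with hypothetical names, using a caught runtime exception to stand in for the host's precise exception handler.

// Conceptual sketch: emulate a delayed-exception-model instruction on a
// precise-exception-model host by deferring the exception, via a pending
// exception indicator, to a subsequent emulated instruction.
public class DelayedExceptionEmulator {
    private boolean pendingException = false;

    interface EmulatedInstruction { void execute(); }

    // Corresponds to the series of instructions the control subsystem provides
    // for each emulated program instruction.
    public void emulate(EmulatedInstruction insn) {
        // (i) If an exception from an earlier instruction is pending, process
        //     it now and return the indicator to the no-pending state.
        if (pendingException) {
            pendingException = false;
            handlePendingException();
        }
        try {
            // (ii) Perform the emulated instruction's processing operations.
            insn.execute();
        } catch (RuntimeException e) {
            // (iii) The host raises the exception precisely; the handler only
            //       sets the indicator so the condition is processed with a
            //       subsequent emulated instruction, matching the delayed model.
            pendingException = true;
        }
    }

    private void handlePendingException() {
        // Deliver the deferred exception condition to the emulated program.
    }
}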
Abstract:
A processor processes a segmented to linear virtual address conversion instruction to convert segmented virtual addresses in a segmented virtual address space to a linear virtual address in a linear virtual address space. The segmented virtual address space comprises a plurality of segments each identified by a segment identifier, each segment comprising at least one page identified by a page identifier. The linear virtual address space includes a plurality of pages each identified by a page identifier. In processing the segmented to linear virtual address conversion instruction, the processor uses a plurality of segmented to linear virtual address conversion descriptors, each associated with a page in the segmented virtual address space, each segmented to linear virtual address conversion descriptor identifying the page identifier of one of the pages in the linear virtual address space. The segmented to linear virtual address conversion instruction includes a segmented virtual address identifier in the segmented virtual address space. In processing the segmented to linear virtual address conversion instruction, the processor uses the segmented virtual address identifier in the segmented to linear virtual address conversion instruction to select one of the segmented to linear virtual address conversion descriptors. After selecting a segmented to linear virtual address conversion descriptor, the processor uses the page identifier of the linear virtual address space from the selected segmented to linear virtual address conversion descriptor and the segmented virtual address identifier in the segmented to linear virtual address conversion instruction in generating a virtual address in the linear virtual address space.
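A simplified sketch of the conversion: a descriptor, selected using the segmented virtual address identifier, supplies the linear page identifier, which is combined with the within-page offset. The Java below assumes 4 KiB pages and an illustrative key packing; all names and field widths are assumptions rather than the described instruction format.

import java.util.Map;

// Emulates the behavior of a segmented to linear virtual address conversion
// for a single address, using one descriptor per segmented page.
public class SegmentedToLinearConverter {
    static final int PAGE_SHIFT = 12;                    // assume 4 KiB pages
    static final long OFFSET_MASK = (1L << PAGE_SHIFT) - 1;

    // Conversion descriptor: identifies the page in the linear space that
    // corresponds to one page in the segmented space.
    static class Descriptor { long linearPageId; }

    private final Map<Long, Descriptor> descriptors;     // keyed by (segment, segmented page)

    SegmentedToLinearConverter(Map<Long, Descriptor> descriptors) {
        this.descriptors = descriptors;
    }

    long convert(long segmentId, long segmentedVirtualAddress) {
        long segmentedPageId = segmentedVirtualAddress >>> PAGE_SHIFT;
        long key = (segmentId << 32) | segmentedPageId;  // assumed key packing
        Descriptor d = descriptors.get(key);
        if (d == null) {
            throw new IllegalArgumentException("no conversion descriptor for this page");
        }
        long offset = segmentedVirtualAddress & OFFSET_MASK;
        // Combine the linear page identifier with the within-page portion of
        // the segmented virtual address identifier.
        return (d.linearPageId << PAGE_SHIFT) | offset;
    }
}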