Abstract:
One embodiment of the present invention provides a system that reduces the overhead involved in executing a native code method in an application running on a virtual machine. During operation, the system selects a call to a native code method to be optimized within the virtual machine, decompiles at least part of the native code method into an intermediate representation, and obtains an intermediate representation associated with the application. Next, the system combines the intermediate representation for the native code method with the intermediate representation associated with the application running on the virtual machine to form a combined intermediate representation, and generates native code from the combined intermediate representation, wherein the native code generation process optimizes interactions between the application running on the virtual machine and the native code method. A variation on this embodiment involves optimizing callbacks by the native code method into the virtual machine.
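As an illustrative sketch only, the following C toy models the pipeline described above with an intermediate representation reduced to a list of opcode names. The opcode names, the decompile_native/combine/codegen helpers, and the idea that argument wrapping and unwrapping becomes removable after inlining are assumptions made for exposition; no real JVM, JNI, or JIT interface is shown.

    #include <stdio.h>
    #include <string.h>

    /* Toy intermediate representation: a short list of opcode names.
     * Everything here is illustrative, not the claimed implementation. */
    #define MAX_OPS 32

    typedef struct { const char *ops[MAX_OPS]; int n; } IR;

    /* Stand-in for decompiling (at least part of) a native method into IR. */
    static IR decompile_native(void) {
        IR ir = { { "unwrap_args", "add", "wrap_result" }, 3 };
        return ir;
    }

    /* Combine the application's IR with the callee IR by inlining the callee
     * at the native call site, so later passes see one instruction stream. */
    static IR combine(const IR *app, const IR *callee) {
        IR out = { { 0 }, 0 };
        for (int i = 0; i < app->n; i++) {
            if (strcmp(app->ops[i], "call_native") == 0) {
                for (int j = 0; j < callee->n; j++)
                    out.ops[out.n++] = callee->ops[j];
            } else {
                out.ops[out.n++] = app->ops[i];
            }
        }
        return out;
    }

    /* Stand-in for code generation over the combined IR; a real compiler
     * could now also remove redundant wrap/unwrap pairs across the boundary. */
    static void codegen(const IR *ir) {
        for (int i = 0; i < ir->n; i++)
            printf("emit %s\n", ir->ops[i]);
    }

    int main(void) {
        IR app = { { "load_args", "wrap_args", "call_native", "unwrap_result" }, 4 };
        IR callee = decompile_native();
        IR combined = combine(&app, &callee);
        codegen(&combined);
        return 0;
    }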
Abstract:
One embodiment of the present invention provides a system that facilitates in-cache reference counting in a cache memory. During operation, the system receives a command to update an old cache line with a new cache line. The system then determines if the new cache line is different from the old cache line. If so, the system determines if the old cache line contains any in-cache references. If so, for each such in-cache reference, the system decrements a reference counter in a cache line containing an object which is referenced by the in-cache reference. The system also determines if the new cache line contains any in-cache references. If so, for each such in-cache reference, the system increments a reference counter in a cache line containing an object which is referenced by the in-cache reference. Note that the reference counter in a cache line indicates a count of references in the cache that refer to an object contained in the cache line.
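A minimal C sketch of the counter-maintenance rule described above, assuming a toy cache of four lines, four reference slots per line, and illustrative field names (refs, ref_count, object_id); the real hardware structures are not shown.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy model: each cache line holds a few object-reference slots plus a
     * reference counter counting in-cache references to the object it holds. */
    #define NUM_LINES 4
    #define SLOTS     4

    typedef struct {
        uint32_t refs[SLOTS];   /* object IDs referenced from this line; 0 = none */
        uint32_t ref_count;     /* in-cache references to the object in this line */
        uint32_t object_id;     /* object stored in this line                     */
    } CacheLine;

    static CacheLine cache[NUM_LINES];

    /* Find the line (if any) that currently caches a given object. */
    static CacheLine *line_for_object(uint32_t oid) {
        for (int i = 0; i < NUM_LINES; i++)
            if (cache[i].object_id == oid) return &cache[i];
        return NULL;
    }

    /* Update a line: decrement counters for references leaving the line,
     * increment counters for references entering it. */
    static void update_line(CacheLine *old_line, const uint32_t new_refs[SLOTS],
                            uint32_t new_object_id) {
        for (int s = 0; s < SLOTS; s++) {
            if (old_line->refs[s] != new_refs[s]) {       /* slot actually changed  */
                CacheLine *t;
                if (old_line->refs[s] && (t = line_for_object(old_line->refs[s])))
                    t->ref_count--;                       /* old in-cache reference */
                if (new_refs[s] && (t = line_for_object(new_refs[s])))
                    t->ref_count++;                       /* new in-cache reference */
            }
            old_line->refs[s] = new_refs[s];
        }
        old_line->object_id = new_object_id;
    }

    int main(void) {
        cache[0].object_id = 10;                  /* line 0 caches object 10 */
        cache[1].object_id = 20;                  /* line 1 caches object 20 */
        uint32_t new_refs[SLOTS] = {10, 0, 0, 0}; /* line 1 gains a ref to 10 */
        update_line(&cache[1], new_refs, 20);
        printf("refs to object 10 in cache: %u\n", cache[0].ref_count); /* 1 */
        return 0;
    }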
Abstract:
One embodiment of the present invention provides a system that supports read-only objects within an object-addressed memory hierarchy. During operation, the system receives a request to access an object, wherein the request includes an object identifier for the object that is used to reference the object within the object-addressed memory hierarchy. In response to this request, the system uses the object identifier to retrieve an object table entry associated with the object. If the request is a write request, the system examines a read-only indicator within the object table entry. If this read-only indicator specifies that the object is a read-only object, the system performs a corrective action to handle the write request directed to the read-only object.
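The check described above can be sketched as follows in C, assuming a hypothetical ObjectTableEntry layout with a read_only flag and a trap-style corrective action; both are illustrative stand-ins rather than the claimed hardware behavior.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative object table entry; field names are assumptions. */
    typedef struct {
        uintptr_t phys_addr;   /* physical address of the object */
        bool      read_only;   /* read-only indicator            */
    } ObjectTableEntry;

    #define TABLE_SIZE 16
    static ObjectTableEntry object_table[TABLE_SIZE];

    typedef enum { ACCESS_READ, ACCESS_WRITE } AccessType;

    /* Use the object ID to fetch the table entry; if the request is a write
     * and the entry is marked read-only, take a corrective action (here, a
     * stub standing in for whatever the implementation chooses). */
    static int access_object(uint32_t oid, AccessType type) {
        ObjectTableEntry *ote = &object_table[oid % TABLE_SIZE];
        if (type == ACCESS_WRITE && ote->read_only) {
            fprintf(stderr, "write to read-only object %u: corrective action\n", oid);
            return -1;                   /* e.g. trap to a software handler */
        }
        /* ... perform the actual read or write at ote->phys_addr ... */
        return 0;
    }

    int main(void) {
        object_table[3].read_only = true;
        access_object(3, ACCESS_WRITE);  /* triggers the corrective path */
        return access_object(3, ACCESS_READ);
    }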
Abstract:
A method for managing data, including obtaining a first instruction for moving a first data item from a first source to a first destination, determining a data type of the first data item, determining a data type supported by the first destination, comparing the data type of the first data item with the data type supported by the first destination to test the validity of the first instruction, and moving the first data item from the first source to the first destination based on the validity of the first instruction.
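A minimal C sketch of the validity test, assuming a toy three-value type tag and a single-slot destination; the actual type system and instruction format are not specified by the abstract.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative data types; the real type system is not shown. */
    typedef enum { TYPE_INT, TYPE_FLOAT, TYPE_REFERENCE } DataType;

    typedef struct { DataType type; long value; } DataItem;
    typedef struct { DataType supported; DataItem slot; } Destination;

    /* Compare the item's type with the type supported by the destination
     * and move the item only if the instruction is valid. */
    static bool move_item(const DataItem *item, Destination *dst) {
        bool valid = (item->type == dst->supported);  /* validity test    */
        if (valid)
            dst->slot = *item;                        /* perform the move */
        return valid;
    }

    int main(void) {
        DataItem item = { TYPE_INT, 42 };
        Destination reg = { TYPE_FLOAT, { TYPE_INT, 0 } };
        printf("move valid: %d\n", move_item(&item, &reg));  /* 0: rejected */
        return 0;
    }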
Abstract:
One embodiment of the present invention facilitates skewing a bi-directional object layout to provide good cache behavior. During operation, the system receives a request to access an object. This request includes an object identifier and an object offset that specifies the offset of a target field within the object, wherein the object has a bi-directional layout that locates scalar fields at positive offsets and reference fields at negative offsets, so that a reference field can be immediately identified from its object offset. Next, the system determines a skew value for a cache line containing the object, wherein data within the cache line is shifted based upon the skew value, so that reference fields with small negative offsets are likely to be located in the same cache line as scalar fields with small positive offsets. Finally, the system uses the skew value in accessing the object.
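The effect of the skew can be illustrated with the following C toy, assuming a 64-byte cache line, a signed object-relative field offset, and a skew of 16 bytes chosen for the example; how the skew value is actually encoded and stored with the cache line is not shown.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative line size; the real geometry is implementation-specific. */
    #define LINE_SIZE 64

    /* Translate an (object base, signed field offset) pair into a byte
     * address, applying a per-object skew so that fields at small negative
     * offsets (references) can share a line with fields at small positive
     * offsets (scalars). */
    static uintptr_t skewed_address(uintptr_t obj_base, int field_offset, int skew) {
        return obj_base + (uintptr_t)(field_offset + skew);
    }

    int main(void) {
        uintptr_t base = 0x1000;          /* aligned object base (assumption) */
        int skew = 16;                    /* skew chosen for this example     */
        uintptr_t ref    = skewed_address(base, -8, skew);  /* reference field */
        uintptr_t scalar = skewed_address(base,  4, skew);  /* scalar field    */
        /* Without the skew these two fields would fall in adjacent lines.   */
        printf("same line: %d\n", (ref / LINE_SIZE) == (scalar / LINE_SIZE)); /* 1 */
        return 0;
    }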
Abstract:
One embodiment of the present invention provides an object-addressed memory hierarchy that is able to access objects stored outside of main memory. During operation, the system receives a request to access an object, wherein the request includes an object identifier for the object that is used to reference the object within the object-addressed memory hierarchy. Next, the system uses the object identifier to retrieve an object table entry associated with the object. The system then examines a valid bit within the object table entry. If the valid bit indicates that the object is located in main memory, the system uses a physical address in the object table entry to access the object in main memory. On the other hand, if the valid bit indicates that the object is not located in main memory, the system relocates the object into main memory from a location outside of main memory, and then accesses the object in main memory.
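A C sketch of the valid-bit path, assuming illustrative field names (valid, phys_addr, backing_loc) and using malloc plus a stub as a stand-in for relocating the object into main memory.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative object table entry with a valid bit; names are assumed. */
    typedef struct {
        bool      valid;       /* set when the object resides in main memory */
        uintptr_t phys_addr;   /* physical address when valid                */
        uint64_t  backing_loc; /* where to find the object otherwise         */
    } ObjectTableEntry;

    /* Stand-in for moving an object from backing storage into main memory. */
    static uintptr_t relocate_into_memory(uint64_t backing_loc) {
        void *p = malloc(64);              /* toy: allocate a home in memory */
        (void)backing_loc;                 /* ...and copy the object here    */
        return (uintptr_t)p;
    }

    /* Access an object: if its valid bit is clear, first bring it into main
     * memory and update the table entry, then access it by physical address. */
    static uintptr_t access_object(ObjectTableEntry *ote) {
        if (!ote->valid) {
            ote->phys_addr = relocate_into_memory(ote->backing_loc);
            ote->valid = true;
        }
        return ote->phys_addr;     /* caller reads/writes at this address */
    }

    int main(void) {
        ObjectTableEntry ote = { false, 0, 0x1234 };
        printf("object now at %p\n", (void *)access_object(&ote));
        return 0;
    }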
Abstract:
One embodiment of the present invention provides a system that facilitates avoiding collisions between cache lines containing objects and cache lines containing corresponding object table entries. During operation, the system receives an object identifier for an object, wherein the object identifier is used to address the object in an object-addressed memory hierarchy. The system then applies a mapping function to the object identifier to compute an address for a corresponding object table entry associated with the object, wherein the mapping function ensures that a cache line containing the object table entry does not collide with a cache line containing the object.
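One possible mapping, sketched in C for a toy direct-mapped cache: derive the table-entry address from the object identifier and, if it would fall into the same set as the object, displace it by half the sets so the two can never share a cache line frame. The geometry, table base, and entry size are assumptions; the abstract does not specify the actual mapping function.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy direct-mapped cache geometry; all constants are assumptions. */
    #define LINE_SIZE 64u
    #define NUM_SETS  256u

    static uint32_t set_index(uintptr_t addr) {
        return (uint32_t)((addr / LINE_SIZE) % NUM_SETS);
    }

    /* Assumed mapping: compute the entry address from the object ID; if the
     * entry would land in the same set as the object, displace it by half
     * the sets, guaranteeing a different set index. */
    static uintptr_t ote_address(uint32_t oid, uintptr_t obj_addr) {
        uintptr_t base = 0x100000u;                   /* object table base (assumed) */
        uintptr_t raw  = base + (uintptr_t)oid * 16u; /* 16-byte entries (assumed)   */
        if (set_index(raw) == set_index(obj_addr))
            raw += (NUM_SETS / 2u) * LINE_SIZE;       /* move into another set       */
        return raw;
    }

    int main(void) {
        uint32_t oid = 42;
        uintptr_t obj_addr = 0x100000u + 42u * 16u;   /* contrive a would-be collision */
        uintptr_t ote = ote_address(oid, obj_addr);
        printf("object set %u, entry set %u\n", set_index(obj_addr), set_index(ote));
        return 0;
    }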
Abstract:
A method and a system for facilitating garbage collection (GC) operations in a memory-management system that supports both mark-sweep (MS) objects and reference-counted (RC) objects, wherein objects which are frequently modified are classified as MS objects, and objects which are infrequently modified are classified as RC objects. During a marking phase of a GC operation, the system identifies a set of root objects and then marks referents of the root objects. The system then recursively traverses referents of the root objects which are MS objects and while doing so, marks referents of the traversed MS objects. However, if an RC object is encountered during the traversal of an MS object, the system marks the RC object but does not recursively traverse the RC object. In doing so, the system avoids traversing a large number of RC objects which are infrequently modified.
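The marking rule can be sketched recursively in C as follows, assuming a toy object with an is_rc classification flag, a mark bit, and four reference slots; real heap layout, the classification policy, and the sweep phase are omitted.

    #include <stdbool.h>
    #include <stddef.h>

    /* Toy object graph; real layouts and classification policy are not shown. */
    typedef struct Object {
        bool is_rc;              /* reference-counted (infrequently modified) */
        bool marked;             /* mark bit used by the marking phase        */
        struct Object *refs[4];  /* referents of this object                  */
    } Object;

    /* Mark an object; recurse only through mark-sweep (MS) objects, so large
     * graphs of RC objects are marked at their boundary but never traversed. */
    static void mark(Object *obj) {
        if (obj == NULL || obj->marked)
            return;
        obj->marked = true;
        if (obj->is_rc)
            return;                      /* mark RC objects, do not traverse */
        for (size_t i = 0; i < 4; i++)   /* traverse referents of MS objects */
            mark(obj->refs[i]);
    }

    /* Marking phase: mark the referents of every root object. */
    static void mark_phase(Object **roots, size_t nroots) {
        for (size_t i = 0; i < nroots; i++)
            for (size_t j = 0; j < 4; j++)
                mark(roots[i]->refs[j]);
    }

    int main(void) {
        Object rc   = { .is_rc = true };
        Object ms   = { .is_rc = false, .refs = { &rc } };
        Object root = { .refs = { &ms } };
        Object *roots[] = { &root };
        mark_phase(roots, 1);
        return !(ms.marked && rc.marked);  /* both marked; rc never traversed */
    }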
Abstract:
A method for concurrent garbage collection and mutator execution in a computer system includes scanning a first cache line for a non-local bit. The non-local bit is associated with a root object. A done bit associated with the first cache line is set. A second cache line is then located to find a first object that is referenced by the root object. A mark bit and the done bit associated with the second cache line are set. The first and second cache lines are scanned for unset done bits. If an unset done bit is detected in either the first or the second cache line, then the cache line associated with the unset done bit is rescanned to determine whether there are any modified object references.
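A single-threaded C toy of the done-bit protocol, assuming per-line non_local, mark, and done flags and a single toy reference field per line; the concurrent mutator is simulated by clearing a done bit between the scan and the check.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_LINES 4

    /* Toy cache-line metadata; bit names follow the description above, the
     * layout itself is an assumption. */
    typedef struct {
        bool non_local;   /* line holds a root object                         */
        bool mark;        /* object in this line has been marked              */
        bool done;        /* line has been scanned; a mutator write clears it */
        int  ref_line;    /* toy stand-in for a reference field: index of the
                             line holding the referenced object, or -1        */
    } Line;

    static Line lines[NUM_LINES];
    static bool scanned[NUM_LINES];

    /* Scan root lines, then mark the lines holding their referents. */
    static void scan_roots(void) {
        for (int i = 0; i < NUM_LINES; i++) {
            if (!lines[i].non_local) continue;
            lines[i].done = true;  scanned[i] = true;      /* first cache line  */
            int r = lines[i].ref_line;
            if (r >= 0) {
                lines[r].mark = true;
                lines[r].done = true;  scanned[r] = true;  /* second cache line */
            }
        }
    }

    /* Rescan any previously scanned line whose done bit a mutator cleared. */
    static void check_done_bits(void) {
        for (int i = 0; i < NUM_LINES; i++)
            if (scanned[i] && !lines[i].done)
                printf("rescan line %d for modified object references\n", i);
    }

    int main(void) {
        lines[0] = (Line){ .non_local = true, .ref_line = 2 };
        lines[2] = (Line){ .ref_line = -1 };
        scan_roots();
        lines[2].done = false;   /* simulate a concurrent mutator write */
        check_done_bits();       /* prints: rescan line 2 ...           */
        return 0;
    }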
Abstract:
A method for pre-allocating memory for object-based cache data is provided, in which a request is received for an object having an associated property parameter that defines the memory requirements for the object. In response, a table of allocation buckets is searched for a bucket having the associated property parameter that can at least meet the memory requirements for the requested object. If an object identifier (OID), having a previously allocated physical address in main memory, is identified in the table of allocation buckets, then the identified OID is assigned to the object. The object is stored in the object cache with the assigned OID, and the OID is removed from the bucket. Also included is a table of allocation buckets in a computer system in which each of a plurality of buckets is capable of holding object identifiers (OIDs). Each OID has pre-allocated physical memory, with buckets of the table of allocation buckets being replenished by recycling a previously used OID when the object associated with the OID is reclaimed.
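A C sketch of the bucket lookup and OID recycling, assuming a size-valued property parameter, fixed-capacity buckets, and illustrative OID values; the actual bucket organization and replenishment policy may differ.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy table of allocation buckets; sizes and capacities are assumptions. */
    #define NUM_BUCKETS 3
    #define BUCKET_CAP  8

    typedef struct {
        size_t   size;              /* property parameter: bytes this bucket serves */
        uint32_t oids[BUCKET_CAP];  /* OIDs with pre-allocated physical memory      */
        int      count;             /* OIDs currently available                     */
    } Bucket;

    static Bucket buckets[NUM_BUCKETS] = {
        { 32,  { 100, 101 }, 2 },
        { 64,  { 200 },      1 },
        { 128, { 300 },      1 },
    };

    /* Search the table for a bucket that can at least meet the requested size,
     * assign one of its pre-allocated OIDs, and remove the OID from the bucket. */
    static int allocate_oid(size_t requested, uint32_t *oid_out) {
        for (int b = 0; b < NUM_BUCKETS; b++) {
            if (buckets[b].size >= requested && buckets[b].count > 0) {
                *oid_out = buckets[b].oids[--buckets[b].count];
                return 0;
            }
        }
        return -1;                  /* no suitable pre-allocated OID */
    }

    /* Replenish a bucket by recycling an OID once its object is reclaimed. */
    static void recycle_oid(uint32_t oid, size_t size) {
        for (int b = 0; b < NUM_BUCKETS; b++) {
            if (buckets[b].size == size && buckets[b].count < BUCKET_CAP) {
                buckets[b].oids[buckets[b].count++] = oid;
                return;
            }
        }
    }

    int main(void) {
        uint32_t oid;
        if (allocate_oid(48, &oid) == 0)       /* served from the 64-byte bucket */
            printf("assigned OID %u\n", oid);  /* prints: assigned OID 200       */
        recycle_oid(oid, 64);                  /* object reclaimed, OID reused   */
        return 0;
    }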