Abstract:
A system and method for object rundown protection that scales with the number of processors in a shared-memory computer system is disclosed. In an embodiment of the present invention, prior to object rundown, a cache-aware reference count data structure is used to prevent cache-pinging that would otherwise result from data sharing across processors in a multiprocessor computer system. In this data structure, a counter of positive references and negative dereferences, aligned on a particular cache line, is maintained for each processor. When an object is to be destroyed, a rundown wait process is begun, during which new references on the object are prohibited, and the total number of outstanding references is added to an on-stack global counter. Destruction is delayed until the global reference count is reduced to zero. In an embodiment of the invention suited to implementation on non-uniform memory access multiprocessor machines, each processor's reference count is additionally allocated in a region of main memory that is physically close to that processor.
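The cache-aware layout and the rundown handshake are concrete enough to sketch. Below is a minimal user-mode C11 sketch, a simplified analogue of the technique rather than the patented implementation: per-processor counters are padded onto separate cache lines, acquire and release touch only the caller's line, and the rundown wait seals each slot with a tagged pointer to an on-stack counter. All names (rundown_ref, wait_block, and so on) are illustrative, the caller is assumed to supply its processor index, and a real implementation would block on an event rather than spin.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define NPROC 8            /* assumed processor count */
#define CACHE_LINE 64      /* assumed cache-line size */

/* One slot per processor, each on its own cache line so that reference
 * traffic on one CPU never invalidates another CPU's line.  A slot holds
 * either (count * 2) -- a signed tally of local references minus local
 * dereferences, which may go negative -- or, once rundown begins, a
 * pointer to the on-stack wait block with the low bit set as a tag. */
struct cpu_slot {
    _Alignas(CACHE_LINE) atomic_intptr_t v;
};

struct rundown_ref {
    struct cpu_slot cpu[NPROC];
};

struct wait_block {
    atomic_intptr_t remaining;   /* outstanding references + a bias of 1 */
};

/* Take a reference on processor `cpu`; fails once rundown has begun. */
bool rundown_acquire(struct rundown_ref *r, unsigned cpu)
{
    intptr_t old = atomic_load(&r->cpu[cpu].v);
    do {
        if (old & 1)             /* tagged: rundown active, refuse */
            return false;
    } while (!atomic_compare_exchange_weak(&r->cpu[cpu].v, &old, old + 2));
    return true;
}

/* Drop a reference on processor `cpu` (not necessarily the processor on
 * which it was acquired).  If rundown is active, the dereference is
 * routed to the waiter's global counter instead of the local tally. */
void rundown_release(struct rundown_ref *r, unsigned cpu)
{
    intptr_t old = atomic_load(&r->cpu[cpu].v);
    for (;;) {
        if (old & 1) {           /* slot sealed: decrement the global count */
            struct wait_block *wb = (struct wait_block *)(old & ~(intptr_t)1);
            atomic_fetch_sub(&wb->remaining, 1);
            return;
        }
        if (atomic_compare_exchange_weak(&r->cpu[cpu].v, &old, old - 2))
            return;              /* local dereference; tally may go negative */
    }
}

/* Begin rundown: forbid new references, sum the per-CPU tallies into an
 * on-stack counter, and wait for outstanding references to drain. */
void rundown_wait(struct rundown_ref *r)
{
    struct wait_block wb;
    atomic_init(&wb.remaining, 1);            /* bias avoids an early zero */
    intptr_t marker = (intptr_t)&wb | 1;

    intptr_t total = 0;
    for (unsigned i = 0; i < NPROC; i++)
        total += atomic_exchange(&r->cpu[i].v, marker) / 2;

    atomic_fetch_add(&wb.remaining, total - 1);  /* publish total, drop bias */
    while (atomic_load(&wb.remaining) != 0)
        ;                                     /* real code waits on an event */
    /* The object may now be destroyed safely. */
}
```

Storing the count shifted left by one bit leaves the low bit free as the rundown tag, which lets a single compare-exchange distinguish "count the reference locally" from "rundown has begun" without a lock.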
Abstract:
Described is a method and system for managing filenames for filter drivers in a file system. The present invention includes a filter manager that handles filename queries from the filter drivers. The filter manager returns to the requesting filter driver a pointer to a filename information structure corresponding to the type of filename requested. The filter manager also maintains a cache of filename information structures whose contents can be shared among the various filter drivers, amortizing filename queries across them. This caching reduces the overhead of filename queries within the file system by reducing the number of filename operations a file system filter driver must perform to retrieve a desired portion of a filename.
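As a rough illustration of the amortization, here is a small user-mode C sketch of a shared filename cache. The names (name_info, fltmgr_get_name, and so on) and the direct-mapped cache are inventions for the example, not the actual filter-manager API; a real cache would also synchronize access, reference-count entries, and invalidate them on rename.

```c
#include <stdio.h>

enum name_format { NAME_NORMALIZED, NAME_OPENED, NAME_SHORT };

struct name_info {
    unsigned long long file_id;   /* which file this name describes */
    enum name_format   format;    /* which flavor of the name */
    char               name[256]; /* the filename itself */
    int                valid;     /* cache-slot occupancy flag */
};

#define CACHE_SLOTS 64
static struct name_info cache[CACHE_SLOTS];

/* Stand-in for the expensive query the file system would service. */
static void query_file_system(unsigned long long file_id,
                              enum name_format fmt, char out[256])
{
    snprintf(out, 256, "\\Device\\Volume1\\file-%llu(fmt=%d)", file_id, fmt);
}

/* Filters call this instead of querying the file system directly.  The
 * returned pointer aliases the shared cache entry, so one underlying
 * query can serve many filters. */
const struct name_info *fltmgr_get_name(unsigned long long file_id,
                                        enum name_format fmt)
{
    struct name_info *slot = &cache[(file_id ^ (unsigned)fmt) % CACHE_SLOTS];
    if (!slot->valid || slot->file_id != file_id || slot->format != fmt) {
        query_file_system(file_id, fmt, slot->name);  /* miss: one real query */
        slot->file_id = file_id;
        slot->format  = fmt;
        slot->valid   = 1;
    }
    return slot;   /* hit or freshly filled: no file-system round trip */
}
```

The point of returning a pointer into the shared cache is that the second, third, and nth filter asking for the same name pay nothing: only the first miss reaches the file system.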
Abstract:
File system metadata regarding states of a file system affected by transactions is tracked consistently, even in the face of dirty shutdowns that might roll back transactions already reflected in the metadata. So that time- and resource-heavy rebuilding is requested only for metadata that may have been affected by rollbacks, reliability information is tracked for each metadata item. When a metadata item is affected by a transaction that may not complete properly in the case of a problematic ("dirty" or "abnormal") shutdown or other event, that item's reliability information indicates that it may not be reliable should such an event occur. In addition to flag information indicating unreliability, timestamp information recording the time of the command that made the metadata item unreliable is also maintained. This timestamp, together with knowledge of the period after which the transaction can no longer cause a problem, can then be used to reset the reliability information, indicating that the metadata item is once again reliable even in the face of a problematic event.
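A compact C sketch may make the bookkeeping concrete. The names (meta_item, STABILITY_WINDOW, and so on) and the single fixed stability window are assumptions for illustration, not details from the patent.

```c
#include <stdbool.h>
#include <time.h>

#define STABILITY_WINDOW 30   /* assumed seconds until a txn cannot roll back */

struct meta_item {
    bool   unreliable;        /* flag: may be stale after a dirty shutdown */
    time_t tainted_at;        /* when the taint-causing command ran */
    /* ... the metadata payload itself ... */
};

/* Called when a transaction touches the metadata item: until the
 * transaction is beyond rollback, the item is suspect. */
void meta_mark_unreliable(struct meta_item *m)
{
    m->unreliable = true;
    m->tainted_at = time(NULL);
}

/* Called lazily (or by a background sweeper): once the stability window
 * has passed, a rollback can no longer contradict this item, so the
 * reliability flag is reset. */
void meta_refresh_reliability(struct meta_item *m)
{
    if (m->unreliable && time(NULL) - m->tainted_at >= STABILITY_WINDOW)
        m->unreliable = false;
}

/* After a dirty shutdown, only items still flagged unreliable need the
 * expensive rebuild; everything else is trusted as-is. */
bool meta_needs_rebuild(const struct meta_item *m)
{
    return m->unreliable;
}
```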
Abstract:
The invention provides a system and method for dynamically unloading file system filters in a stacked-callback model, in which filters are layered one on top of another to form a filter stack. A filter manager keeps track of the progress of each I/O operation and calls each filter in turn, with the filter returning after it has completed processing the given operation. The filter manager can dynamically unload a filter (or an instance of a filter) at any position in the filter stack, in a reasonable amount of time, while I/O operations are actively being processed. The filter or filter instance can be unloaded even with outstanding I/O operations on it, whether hosted by the filter itself or pended by other filters; such operations are canceled, completed, or drained in order to unload the filter or filter instance. A filter may also veto its own unloading.
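The drain step lends itself to a short sketch. The user-mode C below is a simplified analogue with invented names (filter_instance, instance_teardown, and so on): it shows only the admission gate, the outstanding-operation count, and the veto, whereas a real filter manager would also cancel pended I/O and block on an event rather than spin.

```c
#include <stdatomic.h>
#include <stdbool.h>

struct filter_instance {
    atomic_int  outstanding;      /* I/O ops currently inside this filter */
    atomic_bool unloading;        /* set once teardown has begun */
    bool (*may_unload)(void);     /* filter's chance to veto the unload */
};

/* The filter manager calls this before dispatching an op to the filter.
 * Incrementing before checking the flag (with seq_cst atomics) ensures
 * teardown's drain loop never observes zero while an op is admitted. */
bool instance_enter(struct filter_instance *f)
{
    atomic_fetch_add(&f->outstanding, 1);
    if (atomic_load(&f->unloading)) {          /* teardown already started */
        atomic_fetch_sub(&f->outstanding, 1);  /* back out; skip this filter */
        return false;
    }
    return true;
}

/* Called when the filter finishes (completes or cancels) an op. */
void instance_exit(struct filter_instance *f)
{
    atomic_fetch_sub(&f->outstanding, 1);
}

/* Returns false if the filter vetoes; otherwise drains and detaches. */
bool instance_teardown(struct filter_instance *f)
{
    if (f->may_unload && !f->may_unload())
        return false;                          /* filter vetoed the unload */
    atomic_store(&f->unloading, true);         /* no new ops admitted */
    while (atomic_load(&f->outstanding) != 0)
        ;   /* real code blocks on an event and cancels pended I/O */
    /* Safe to remove the instance from the stack and free it here. */
    return true;
}
```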
Abstract:
A method for concurrent data migration includes classifying files to be migrated into plural jobs, selecting media to which to migrate each job, and using plural drives concurrently to write the jobs to the media. The selection of a medium is performed in a way that prevents the number of writeable media from exceeding the number of available drives, unless no allocated medium has sufficient space to store any files in a migration job. A medium is preferentially selected that has already been allocated for writing, has space to store at least one file in the job, is not in use for another job, and can be robotically mounted on a drive. If such a medium does not exist, then the set of available media is canvassed to locate an alternative medium. The attributes of each medium are evaluated to determine which medium can be selected most consistently with the goals of (1) preventing the number of media from exceeding the number of drives, and (2) providing sufficient media to allow plural drives to be used concurrently. The technique can be embodied in a file management environment that transparently migrates files meeting certain criteria and stores the location of the migrated file in a reparse point provided by the file system.
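The selection order can be written out directly. The C sketch below encodes the criteria from the abstract as two passes over a flat array of media descriptors; the struct fields and function names are illustrative assumptions, not the patented interface.

```c
#include <stdbool.h>
#include <stddef.h>

struct medium {
    bool   allocated;        /* already designated for migration writes */
    bool   in_use;           /* currently serving another job */
    bool   robot_mountable;  /* can be mounted without operator help */
    size_t free_space;       /* bytes still writable */
};

/* Pick a medium for a job whose smallest file needs min_file bytes.
 * Preferring an already-allocated medium keeps the number of writable
 * media from exceeding the number of drives; a fresh medium is allocated
 * only when no allocated one can hold even a single file of the job. */
struct medium *select_medium(struct medium *media, size_t n, size_t min_file)
{
    /* First pass: an allocated, idle, robot-mountable medium with room. */
    for (size_t i = 0; i < n; i++) {
        struct medium *m = &media[i];
        if (m->allocated && !m->in_use && m->robot_mountable &&
            m->free_space >= min_file)
            return m;
    }
    /* Second pass: canvass the unallocated media for an alternative. */
    for (size_t i = 0; i < n; i++) {
        struct medium *m = &media[i];
        if (!m->allocated && !m->in_use && m->free_space >= min_file) {
            m->allocated = true;  /* grow the allocated set only when forced */
            return m;
        }
    }
    return NULL;   /* no usable medium; the job must wait */
}
```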
Abstract:
A technique for recalling data objects stored on media. A queue is created for each medium on which data objects are located, where each request to recall a data object is placed on the queue corresponding to the medium on which the data object is located. A queue is “active” when its corresponding medium is mounted and being used for recall; otherwise the queue is “non-active.” A thread is created for each active queue, where the thread retrieves from a medium the requested items on the active queue. When plural drives are available for mounting and reading media, plural queues may be active concurrently, so that the plural queues' respective threads may recall items from the plural media in parallel. Preferably, the requests on each queue are organized in an order such that the offset locations of the requested items form two monotonically increasing sequences.
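The two-sequence ordering is essentially a one-directional elevator per medium, and a short C sketch can show it; the names (recall_queue, enqueue_recall, and so on) are illustrative assumptions, and synchronization with the queue's worker thread is omitted.

```c
#include <stddef.h>

struct recall_req {
    long offset;               /* location of the item on the medium */
    struct recall_req *next;
};

struct recall_queue {
    long position;             /* drive's current offset on the medium */
    struct recall_req *run1;   /* ascending offsets >= position: this sweep */
    struct recall_req *run2;   /* ascending offsets <  position: next sweep */
};

/* Insert into a singly linked run, keeping offsets ascending. */
static void insert_sorted(struct recall_req **head, struct recall_req *r)
{
    while (*head && (*head)->offset <= r->offset)
        head = &(*head)->next;
    r->next = *head;
    *head = r;
}

/* A request at or past the drive's position joins the current sweep;
 * one behind it waits for the next sweep. */
void enqueue_recall(struct recall_queue *q, struct recall_req *r)
{
    insert_sorted(r->offset >= q->position ? &q->run1 : &q->run2, r);
}

/* The queue's worker thread pops from run1 until it empties, then swaps
 * in run2, yielding two monotonically increasing offset sequences. */
struct recall_req *dequeue_recall(struct recall_queue *q)
{
    if (!q->run1) {            /* first sweep finished: start the second */
        q->run1 = q->run2;
        q->run2 = NULL;
    }
    struct recall_req *r = q->run1;
    if (r) {
        q->run1 = r->next;
        q->position = r->offset;
    }
    return r;
}
```

Within each sweep the medium only ever moves forward, so seek or tape-reposition time is bounded by two passes over the medium regardless of the order in which requests arrive.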