Abstract:
A method and apparatus virtualize file access operations and other I/O operations in operating systems by performing string substitutions upon file paths or other resource identifiers to convert the virtual destination of an I/O operation to a physical destination. A virtual file system translation driver is interposed between a file system driver and applications and system utilities. The virtual file system translation driver receives file access requests from the applications and system utilities, and translates the file path to virtualize the file system. In a first embodiment, the file system is partially virtualized and a user can see both the virtual file paths and the physical file paths. In second and third embodiments, the file system is completely virtualized from the point of view of the applications and system utilities. In the second embodiment, a user may start with a physical file system and virtualize it by installing the virtual file system translation driver. When the driver is initially installed, all virtual file paths translate by default to identically named physical file paths. In the third embodiment, virtual translations are automatically generated for all file paths when files and directories are created, and virtual file paths may bear limited or no resemblance to physical file paths.
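A minimal sketch of the kind of string substitution such a translation driver performs; the mapping table, function name, and example paths below are illustrative assumptions, not part of the disclosure, and the unmapped-path fallback mirrors the second embodiment's default of identically named physical paths.

# Illustrative sketch (assumed names): translate a virtual file path to a
# physical path by longest-prefix string substitution, as a user-space model
# of a translation driver interposed below applications and utilities.

VIRTUAL_TO_PHYSICAL = {
    "/virtual/apps": "/disk2/installed/apps",
    "/virtual/home": "/disk1/users",
}

def translate(virtual_path):
    # Try the longest matching virtual prefix first.
    for prefix in sorted(VIRTUAL_TO_PHYSICAL, key=len, reverse=True):
        if virtual_path == prefix or virtual_path.startswith(prefix + "/"):
            return VIRTUAL_TO_PHYSICAL[prefix] + virtual_path[len(prefix):]
    # Default: an unmapped virtual path translates to the identically
    # named physical path.
    return virtual_path

assert translate("/virtual/home/alice/notes.txt") == "/disk1/users/alice/notes.txt"
assert translate("/etc/hosts") == "/etc/hosts"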
Abstract:
A cache memory system for a computer. Target entries for the cache memory include a class attribute. The cache may use a different replacement algorithm for each possible class attribute value. The cache may be partitioned into sections based on class attributes. Class attributes may indicate a relative likelihood of future use. Alternatively, class attributes may be used for locking. In one embodiment, each cache section is dedicated to one corresponding class. In alternative embodiments, cache classes are ranked in a hierarchy, and target entries having higher ranked attributes may be entered into cache sections corresponding to lower ranked attributes. With each of the embodiments, entries with a low likelihood of future use or low temporal locality are less likely to flush entries from the cache that have a higher likelihood of future use.
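A minimal sketch, under assumed names and policies, of the partitioned variant: each class attribute maps to its own cache section with its own replacement algorithm, so low-reuse entries cannot flush higher-reuse entries.

# Illustrative sketch (assumed structure): a cache partitioned by class
# attribute, where each section applies its own replacement policy.

from collections import OrderedDict, deque

class ClassPartitionedCache:
    def __init__(self, high_ways, low_ways):
        self.high = OrderedDict()   # "high reuse" section, LRU replacement
        self.low = deque()          # "low reuse" section, FIFO replacement
        self.high_ways = high_ways
        self.low_ways = low_ways

    def insert(self, tag, cls):
        if cls == "high":
            if tag in self.high:
                self.high.move_to_end(tag)
            elif len(self.high) >= self.high_ways:
                self.high.popitem(last=False)   # evict least recently used
            self.high[tag] = True
        else:
            if tag not in self.low:
                if len(self.low) >= self.low_ways:
                    self.low.popleft()          # evict oldest "low" entry
                self.low.append(tag)

cache = ClassPartitionedCache(high_ways=4, low_ways=2)
for tag in range(8):
    cache.insert(tag, "low")        # streaming data never displaces...
cache.insert("hot", "high")         # ...entries in the "high" section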
Abstract:
A method for providing a transactional memory is described. A cache coherency protocol is enforced upon a cache memory including cache lines, wherein each line is in one of a modified state, an owned state, an exclusive state, a shared state, and an invalid state. Upon initiation of a transaction accessing at least one of the cache lines, each of the lines is ensured to be either shared or invalid. During the transaction, in response to an external request for any cache line in the modified, owned, or exclusive state, each line in the modified or owned state is invalidated without writing the line to a main memory. Also, each exclusive line is demoted to either the shared or invalid state, and the transaction is aborted.
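A minimal sketch of the abort rule under an assumed software model of the line states (MOESI): the transaction begins with accessed lines forced to shared or invalid, speculative writes mark lines modified, and a conflicting external request discards those lines without writeback, demotes exclusive lines, and aborts. The class and method names are assumptions.

# Illustrative sketch (assumed model) of the transactional abort behavior.

class TransactionAborted(Exception):
    pass

class TxCache:
    def __init__(self):
        self.lines = {}          # address -> state in {"M","O","E","S","I"}
        self.in_transaction = False

    def begin_transaction(self, addresses):
        for addr in addresses:
            if self.lines.get(addr, "I") not in ("S", "I"):
                self.lines[addr] = "S"   # demote before the transaction starts
        self.in_transaction = True

    def transactional_write(self, addr):
        self.lines[addr] = "M"           # speculative update, not yet visible

    def external_request(self, addr):
        state = self.lines.get(addr, "I")
        if self.in_transaction and state in ("M", "O", "E"):
            for a, s in self.lines.items():
                if s in ("M", "O"):
                    self.lines[a] = "I"  # drop speculative data, no writeback
                elif s == "E":
                    self.lines[a] = "S"  # demote clean exclusive lines
            self.in_transaction = False
            raise TransactionAborted(addr)

cache = TxCache()
cache.begin_transaction([0x40])
cache.transactional_write(0x40)
try:
    cache.external_request(0x40)     # conflicting access from another agent
except TransactionAborted:
    pass
assert cache.lines[0x40] == "I"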
Abstract:
A first processing element can run within a first operating range. A second processing element can run within a second operating range. A third processing element can be activated if the second processing element fails, or can be kept from running unless the first or second processing element fails.
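A one-line sketch of the activation policy described, with assumed function and parameter names: the spare third element runs only once a primary element has failed.

# Illustrative sketch (assumed names): hold the spare element idle unless
# the first or second processing element has failed.

def spare_should_run(first_ok, second_ok):
    return not first_ok or not second_ok

assert spare_should_run(True, True) is False
assert spare_should_run(True, False) is True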
Abstract:
Methods for selecting a line to evict from a data storage system are provided, as is a computer system implementing such a method. The methods include selecting an uncached-class line for eviction prior to selecting a cached-class line for eviction.
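A minimal sketch of the selection rule, with assumed names and class labels: among the candidate lines, an uncached-class line is chosen as the victim before any cached-class line is considered.

# Illustrative sketch (assumed names): pick a victim line for eviction,
# preferring lines tagged with the "uncached" class over "cached" ones.

def select_victim(lines):
    # lines: list of (tag, cache_class), cache_class in {"uncached", "cached"}
    for tag, cache_class in lines:
        if cache_class == "uncached":
            return tag
    # Fall back to a cached-class line only when no uncached line exists.
    return lines[0][0]

candidates = [("A", "cached"), ("B", "uncached"), ("C", "cached")]
assert select_victim(candidates) == "B"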
Abstract:
An apparatus, method, and system are described. In one embodiment, the system is configured to store, in a non-volatile memory, mirroring data intended for a member of a set of mirroring drives that is in a powered-down state.
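A minimal sketch, under assumed names, of one way such staging might look: writes whose mirror copy is destined for a powered-down member are logged in non-volatile memory and replayed when that member is powered back up.

# Illustrative sketch (assumed structure): stage mirroring data for a
# powered-down drive in non-volatile memory, then replay it on power-up.

class MirrorSet:
    def __init__(self, drives):
        self.drives = drives                           # drive name -> powered (bool)
        self.nvm_log = {name: [] for name in drives}   # staged mirror data

    def write(self, block, data):
        for name, powered in self.drives.items():
            if powered:
                self._write_to_drive(name, block, data)
            else:
                self.nvm_log[name].append((block, data))   # stage in NVM

    def power_up(self, name):
        self.drives[name] = True
        for block, data in self.nvm_log[name]:             # replay staged data
            self._write_to_drive(name, block, data)
        self.nvm_log[name].clear()

    def _write_to_drive(self, name, block, data):
        print(f"write {data!r} to block {block} on {name}")

mirror = MirrorSet({"drive0": True, "drive1": False})
mirror.write(7, b"payload")      # drive1 is asleep; its copy is staged in NVM
mirror.power_up("drive1")        # staged mirroring data is flushed to drive1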
Abstract:
A method of determining an estimated data throughput capacity for a computer system includes the steps of creating a first model of data throughput of a central processing subsystem in the computer system as a function of latency of a memory subsystem of the computer system; creating a second model of the latency in the memory subsystem as a function of bandwidth demand of the memory subsystem; and finding a point of intersection of the first and second models. The point of intersection corresponds to a possible operating point for said computer system.
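A worked sketch of the intersection step with made-up model coefficients (the functional forms and numbers below are assumptions for illustration only): the first model gives deliverable bandwidth as a decreasing function of latency, the second gives latency as an increasing function of bandwidth demand, and the estimated operating point is found by iterating to the point where the two curves agree.

# Illustrative sketch: find the intersection of the two models by
# fixed-point iteration; all coefficients are invented for the example.

def cpu_bandwidth(latency_ns):
    # Assumed first model: deliverable bandwidth (GB/s) falls as latency rises.
    return 40.0 / (1.0 + latency_ns / 100.0)

def memory_latency(bandwidth_gbs):
    # Assumed second model: latency (ns) grows as bandwidth demand approaches
    # a 30 GB/s saturation point.
    return 60.0 + 200.0 * (bandwidth_gbs / 30.0) ** 2

def find_operating_point(iterations=50):
    bw = 1.0
    for _ in range(iterations):
        bw = cpu_bandwidth(memory_latency(bw))   # fixed-point iteration
    return bw, memory_latency(bw)

bw, lat = find_operating_point()
print(f"estimated operating point: {bw:.1f} GB/s at {lat:.0f} ns")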
Abstract:
A method is provided for pre-fetching data into a cache memory. A first cache-line address of each of a number of data requests from at least one processor is stored. A second cache-line address of a next data request from the processor is compared to the first cache-line addresses. If the second cache-line address is adjacent to one of the first cache-line addresses, data associated with a third cache-line address adjacent to the second cache-line address is pre-fetched into the cache memory, if not already present in the cache memory.
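A minimal sketch of the rule, with an assumed class name and cache-line addresses expressed as integers: recent request addresses are remembered, and a new request adjacent to a remembered address triggers a prefetch of the next line in the same direction, unless it is already cached.

# Illustrative sketch (assumed structure): adjacent-line prefetching based
# on a small history of recent cache-line addresses.

from collections import deque

class AdjacentLinePrefetcher:
    def __init__(self, history_size=8):
        self.history = deque(maxlen=history_size)  # first cache-line addresses
        self.cache = set()                         # lines already present

    def access(self, line_addr):
        prefetched = None
        for prev in self.history:
            if abs(line_addr - prev) == 1:         # adjacent to a stored address
                direction = line_addr - prev
                candidate = line_addr + direction  # third, adjacent line
                if candidate not in self.cache:
                    self.cache.add(candidate)      # pre-fetch it
                    prefetched = candidate
                break
        self.history.append(line_addr)
        self.cache.add(line_addr)
        return prefetched

pf = AdjacentLinePrefetcher()
pf.access(100)
assert pf.access(101) == 102     # ascending pair triggers a prefetch of line 102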
Abstract:
A method of performing operations in a computer system, a computer system, and a related method of compilation are disclosed. In one embodiment, the method of performing operations includes providing compiled code having at least one thread, where each of the at least one thread includes a respective plurality of blocks and each respective block includes a respective pre-fetch component and a respective execute component. The method also includes performing a first pre-fetch component from a first block of a first thread of the at least one thread, performing a first additional component after the first pre-fetch component has been performed, and performing a first execute component from the first block of the first thread. The first execute component is performed after the first additional component has been performed, and the first additional component is from either a second thread or another block of the first thread that is not the first block.
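A minimal sketch of the interleaving, under the assumption that each block can be represented as a pair of callables (pre-fetch component, execute component): a block's pre-fetch is issued, a component from another thread runs as the additional component, and only then does the block's execute component run, so the memory latency of the pre-fetch is hidden behind other work.

# Illustrative sketch (assumed representation): interleave pre-fetch and
# execute components across two threads of compiled blocks.

def run_two_threads(thread_a, thread_b):
    # thread_a, thread_b: lists of (prefetch, execute) blocks of equal length.
    trace = []
    for (pf_a, ex_a), (pf_b, ex_b) in zip(thread_a, thread_b):
        pf_a(); trace.append("A.prefetch")
        pf_b(); trace.append("B.prefetch")   # the "additional component"
        ex_a(); trace.append("A.execute")
        ex_b(); trace.append("B.execute")
    return trace

def noop():
    pass

trace = run_two_threads([(noop, noop)], [(noop, noop)])
assert trace == ["A.prefetch", "B.prefetch", "A.execute", "B.execute"]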