Abstract:
In an aspect, a computer-implemented method for managing an active read and write data routing and placement policy in an application-oriented system comprising: when an application issues a write operation, a writeback system of the application-oriented system writes the data only in the Virtual Element (VE) of the cache Virtual Storage Object (VSTO) and not on another capacity-layer VSTO; when an application issues an attribute write operation or metadata write operation, the writeback system of the application-oriented system executes the attribute write operation only in an appropriate Meta chunk Virtual Element (VE) of the cache VSTO and not on another capacity-layer VSTO; and persistently implementing a metadata change only in the Meta chunk VE.
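A minimal sketch of the write-routing policy described in the abstract above, assuming simple in-memory stand-ins. The class and method names (VirtualElement, WritebackRouter, handle_write, handle_attr_write) are hypothetical illustrations, not the patent's actual implementation; the point is only that data and metadata writes land solely in the cache VSTO's virtual elements, never on the capacity-layer VSTO.

```python
class VirtualElement:
    """A virtual element (VE) inside the cache VSTO that absorbs writes."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}      # offset -> data
        self.metadata = {}    # attribute name -> value

    def write(self, offset, data):
        self.blocks[offset] = data

    def write_attr(self, key, value):
        self.metadata[key] = value


class WritebackRouter:
    """Routes writes to the cache VSTO only; the capacity-layer VSTO is
    never written synchronously, matching the policy in the abstract."""
    def __init__(self, data_ve, meta_chunk_ve):
        self.data_ve = data_ve              # VE of the cache VSTO for data
        self.meta_chunk_ve = meta_chunk_ve  # Meta chunk VE for metadata

    def handle_write(self, offset, data):
        # Data writes land only in the cache VSTO's VE.
        self.data_ve.write(offset, data)

    def handle_attr_write(self, key, value):
        # Attribute/metadata writes are executed only in the Meta chunk VE,
        # which persists the metadata change.
        self.meta_chunk_ve.write_attr(key, value)


router = WritebackRouter(VirtualElement("cache-data"), VirtualElement("meta-chunk"))
router.handle_write(4096, b"payload")
router.handle_attr_write("mtime", 1700000000)
```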
Abstract:
In an embodiment, a mapping method of an accelerated application-oriented middleware layer is provided. The method includes, using a first mapper, determining for an input/output operation whether a data storage location has been designated for storing corresponding data in a virtual storage object, the input/output operation involving that data. The method further includes, using the first mapper and at least one processor, acquiring the virtual element identification of the corresponding data. The method also includes, using the virtual element identification and the corresponding data, performing the input/output operation.
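A minimal sketch of that mapping flow: check whether a location has been designated for the data, acquire a virtual element identification, then perform the I/O against that element. The Mapper class, the placement table, and the virtual_elements dictionary are assumed stand-ins; the abstract does not specify these structures.

```python
virtual_elements = {}   # ve_id -> {key: data}, standing in for the VSTO


class Mapper:
    """The "first mapper": maps a logical key to a virtual element ID."""
    def __init__(self):
        self.placement = {}
        self.next_ve_id = 0

    def lookup_or_designate(self, key):
        """Return the VE id for `key`, designating a location if none exists."""
        if key not in self.placement:
            self.placement[key] = self.next_ve_id
            self.next_ve_id += 1
        return self.placement[key]


def perform_io(mapper, key, data=None):
    """Acquire the virtual element identification, then perform the I/O there."""
    ve_id = mapper.lookup_or_designate(key)
    if data is not None:
        virtual_elements.setdefault(ve_id, {})[key] = data   # write path
        return None
    return virtual_elements.get(ve_id, {}).get(key)          # read path


m = Mapper()
perform_io(m, ("fileA", 0), b"hello")
assert perform_io(m, ("fileA", 0)) == b"hello"
```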
Abstract:
The invention provides a method and system for recovery of file system data in file servers having mirrored file system volumes. The invention makes use of a “snapshot” feature of a robust file system (the “WAFL File System”) disclosed in the Incorporated Disclosures to rapidly determine which of two or more mirrored volumes is most up-to-date, and which file blocks of the most recent mirrored volume have been changed relative to each of the other mirrored file systems. In a preferred embodiment, among a plurality of mirrored volumes, the invention rapidly determines which is the most up-to-date by examining a consistency point number maintained by the WAFL File System at each mirrored volume. The invention then rapidly determines, pairwise, which blocks are shared between the most up-to-date mirrored volume and each other mirrored volume, in response to a snapshot of the file system that is maintained at each mirrored volume and stored in common between each mirrored volume and the most up-to-date mirrored volume. The invention resynchronizes only those blocks that have changed between the common snapshot and the most up-to-date snapshot.
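A minimal sketch of that resynchronization idea: pick the mirror with the highest consistency point number, find the latest snapshot it holds in common with each stale mirror, and copy only the blocks written since that snapshot. The volume dictionaries and the block_written_at bookkeeping are illustrative assumptions, not WAFL's actual on-disk format.

```python
def most_up_to_date(volumes):
    """Pick the volume with the highest consistency point number."""
    return max(volumes, key=lambda v: v["cp_number"])


def resync(stale, fresh):
    # Latest snapshot present on both mirrors (assumes at least one exists).
    common = max(set(stale["snapshots"]) & set(fresh["snapshots"]))
    # Blocks changed on the fresh mirror since the common snapshot.
    changed = {
        blk for blk, snap in fresh["block_written_at"].items() if snap > common
    }
    # Copy only those blocks to the stale mirror.
    for blk in changed:
        stale["blocks"][blk] = fresh["blocks"][blk]
    stale["cp_number"] = fresh["cp_number"]
    return changed


vols = [
    {"cp_number": 10, "snapshots": [1, 2], "blocks": {0: "a", 1: "b"},
     "block_written_at": {0: 1, 1: 2}},
    {"cp_number": 12, "snapshots": [1, 2, 3], "blocks": {0: "a", 1: "b2"},
     "block_written_at": {0: 1, 1: 3}},
]
fresh = most_up_to_date(vols)
for v in vols:
    if v is not fresh:
        resync(v, fresh)   # copies only block 1, the block changed after snapshot 2
```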
Abstract:
When a client computer requests data from a disk or similar device at a server computer, the client exports the memory associated with an allocated read buffer by generating and storing one or more incoming MMU (IMMU) entries that map the read buffer to an assigned global address range. The remote data read request, along with the assigned global address range, is communicated to the server node. At the server, the request is serviced by performing a memory import operation, in which one or more outgoing MMU (OMMU) entries are generated and stored for mapping the global address range specified in the read request to a corresponding range of local physical addresses. The mapped local physical addresses in the server are not locations in the server's memory. The server then performs a DMA operation for directly transferring the data specified in the request message from the disk to the mapped local physical addresses. The DMA operation transmits the specified data to the server's network interface, at which the mapped local physical addresses to which the data is transferred are converted into the corresponding global addresses. The specified data, with the corresponding global addresses, is then transmitted to the client node. The client converts the global addresses in the received specified data into the local physical addresses corresponding to the allocated receive buffer, and stores the received specified data in the allocated receive buffer.
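A heavily simplified simulation of that export/import address mapping. The IMMU and OMMU here are plain dictionaries standing in for memory-management-unit entries, and the "DMA" is an ordinary copy loop; the real mechanism is hardware address translation and direct disk-to-network transfer, which Python cannot reproduce. All class names and the GLOBAL_BASE constant are assumptions for illustration.

```python
GLOBAL_BASE = 0x10000000


class ClientNode:
    def __init__(self):
        self.immu = {}            # global address -> local buffer offset
        self.read_buffer = {}

    def export_read_buffer(self, length):
        """Map the allocated read buffer to an assigned global address range."""
        for off in range(length):
            self.immu[GLOBAL_BASE + off] = off
        return (GLOBAL_BASE, GLOBAL_BASE + length)

    def receive(self, global_addr, byte):
        # Convert the global address back to the local buffer offset and store.
        self.read_buffer[self.immu[global_addr]] = byte


class ServerNode:
    def __init__(self, disk_bytes):
        self.disk = disk_bytes
        self.ommu = {}            # local "physical" address -> global address

    def service_read(self, global_range, client):
        lo, hi = global_range
        # Import: map the requested global range to local physical addresses.
        for local_addr, g in enumerate(range(lo, hi)):
            self.ommu[local_addr] = g
        # "DMA": move bytes from disk to the mapped addresses and out through
        # the network interface, where they are tagged with global addresses.
        for local_addr, g in self.ommu.items():
            client.receive(g, self.disk[local_addr])


client = ClientNode()
server = ServerNode(b"remote data")
server.service_read(client.export_read_buffer(len(b"remote data")), client)
assert bytes(client.read_buffer[i] for i in range(len(b"remote data"))) == b"remote data"
```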
Abstract:
A method for precognitive fetching, involving receiving an original request, performing pre-fetching analysis using the original request to obtain a pre-fetch request, forwarding the pre-fetch request to a storage subsystem, and receiving a response to the pre-fetch request from the storage subsystem.
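A minimal sketch of that pre-fetch flow: serve the original request, derive a pre-fetch request from it, forward the pre-fetch request to the storage subsystem, and keep the response. The sequential next-block heuristic is an assumption for illustration; the abstract only says a pre-fetching analysis is performed.

```python
def prefetch_analysis(original_request):
    """Guess what will be needed next: here, the block after the one requested."""
    return {"block": original_request["block"] + 1, "prefetch": True}


def handle_request(original_request, storage_subsystem, cache):
    # Serve the original request, then forward the derived pre-fetch request
    # to the storage subsystem and retain its response for later use.
    data = storage_subsystem(original_request["block"])
    pre_req = prefetch_analysis(original_request)
    cache[pre_req["block"]] = storage_subsystem(pre_req["block"])
    return data


storage = {0: "block0", 1: "block1", 2: "block2"}
cache = {}
result = handle_request({"block": 1}, storage.get, cache)
assert result == "block1" and cache[2] == "block2"
```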
Abstract:
A system and method are disclosed that provide transparent, global access to devices on a computer cluster. The present system generates unique device type (dev_t) values for all devices and corresponding links between a global file system and the dev_t values. The file system is modified to take advantage of this framework so that, when a user requests that a particular device, identified by its logical name, be opened, an operating system kernel queries the file system to determine that device's dev_t value and then queries a device configuration system (DCS) for the location (node) and identification (local address) of a device with that dev_t value. Once it has received the device's location and identification, the kernel issues an open request to the host node for the device identified by the DCS. File system components executing on the host node, which include a special file system (SpecFS), handle the open request by returning to the kernel a handle to a special file object that is associated with the desired device. The kernel then returns to the requesting user a file descriptor that is mapped to the handle, through which the user can access the device.
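A minimal sketch of that open path: logical name resolves to a dev_t value, the DCS maps the dev_t value to a (node, local address) pair, the host node's SpecFS returns a handle, and the kernel hands back a file descriptor mapped to that handle. The lookup tables, names, and the SpecialFileObject class are assumed stand-ins; the real lookups happen inside the kernel and across the cluster.

```python
DEV_T_TABLE = {"/dev/cluster/disk0": 0x0800}     # global file system links to dev_t
DCS = {0x0800: ("node2", "/local/dev/sda")}      # dev_t -> (node, local address)


class SpecialFileObject:
    """Stands in for the SpecFS object associated with the device."""
    def __init__(self, node, local_address):
        self.node, self.local_address = node, local_address


def kernel_open(logical_name, fd_table):
    dev_t = DEV_T_TABLE[logical_name]                 # ask the file system
    node, local_address = DCS[dev_t]                  # ask the DCS
    handle = SpecialFileObject(node, local_address)   # host-node SpecFS open
    fd = len(fd_table)                                # map a new fd to the handle
    fd_table[fd] = handle
    return fd


fds = {}
fd = kernel_open("/dev/cluster/disk0", fds)
assert fds[fd].node == "node2"
```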