Abstract:
A method, non-transitory computer readable medium, and storage controller computing device that establishes an application interface and a source interface to a programmable switch. A flow table of the programmable switch is updated to insert routing actions associated with the application and source interfaces. Next, whether an application request received from an application is locally serviceable is determined. When the determination indicates the application request is not locally serviceable, a migration request for data associated with the application request is sent from the source interface to the programmable switch using a destination address of a source storage server. Additionally, a migration response to the migration request, including the data from the source storage server, is received via the source interface. The data is then stored locally in a destination storage server, thereby migrating it from the source storage server.
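As a rough illustration of the flow above, the Go sketch below models a controller that serves a request locally when it can and otherwise pulls the block from the source storage server through its source interface, then stores it locally. The Controller, MigrationRequest, and sourceIface names are hypothetical stand-ins chosen for the example, not elements defined by the abstract.

// Hypothetical sketch: Controller, MigrationRequest, and the sourceIface
// callback are illustrative names, not from the abstract.
package main

import "fmt"

type blockID uint64

type MigrationRequest struct {
	Block    blockID
	DestAddr string // destination address of the source storage server
}

type MigrationResponse struct {
	Block blockID
	Data  []byte
}

// Controller models the storage controller's local store and its
// source interface toward the programmable switch.
type Controller struct {
	local       map[blockID][]byte                        // destination storage server contents
	sourceIface func(MigrationRequest) MigrationResponse // stands in for the switch path
}

// Service answers a request locally when possible; otherwise it migrates
// the data from the source storage server through the programmable switch.
func (c *Controller) Service(b blockID) []byte {
	if data, ok := c.local[b]; ok {
		return data // locally serviceable
	}
	resp := c.sourceIface(MigrationRequest{Block: b, DestAddr: "src-storage-server"})
	c.local[b] = resp.Data // store locally: the block is now migrated
	return resp.Data
}

func main() {
	c := &Controller{
		local: map[blockID][]byte{},
		sourceIface: func(r MigrationRequest) MigrationResponse {
			return MigrationResponse{Block: r.Block, Data: []byte("remote block")}
		},
	}
	fmt.Printf("%s\n", c.Service(42)) // first access migrates the block
	fmt.Printf("%s\n", c.Service(42)) // second access is served locally
}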
Abstract:
Example embodiments provide various techniques for securing communications within a group of entities. In one example method, a request from an entity to join the group is received and a signed, digital certificate associated with the entity is accessed. Here, the signed, digital certificate is signed with a group private key that is associated with a certification authority for the group. The signed, digital certificate is added to a group roster, and this addition admits the entity into the group. The group roster with the signed, digital certificate is itself signed with the group private key and distributed to the group, which includes the entity that transmitted the request. Communication to the entity is then encrypted using the signed, digital certificate included in the group roster.
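The sketch below illustrates the admission step in Go, using ed25519 signatures in place of full X.509 certificate handling; GroupCA, Roster, and Admit are hypothetical names chosen for the example, and the "certificate" is just a byte slice.

// Hypothetical sketch using ed25519 in place of full X.509 machinery;
// the GroupCA and Roster types are illustrative only.
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

// Roster is the signed list of member certificates distributed to the group.
type Roster struct {
	Certs     [][]byte // each entry is a member certificate signed by the group CA
	Signature []byte   // signature over the concatenated certificates
}

type GroupCA struct {
	priv ed25519.PrivateKey // group private key
	pub  ed25519.PublicKey
}

// Admit signs the joining entity's certificate with the group private key,
// adds it to the roster, and re-signs the roster for distribution.
func (ca *GroupCA) Admit(roster *Roster, memberCert []byte) {
	signedCert := append(memberCert, ed25519.Sign(ca.priv, memberCert)...)
	roster.Certs = append(roster.Certs, signedCert)

	var all []byte
	for _, c := range roster.Certs {
		all = append(all, c...)
	}
	roster.Signature = ed25519.Sign(ca.priv, all)
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)
	ca := &GroupCA{priv: priv, pub: pub}

	roster := &Roster{}
	ca.Admit(roster, []byte("entity-A certificate"))

	fmt.Println("members:", len(roster.Certs), "roster signature bytes:", len(roster.Signature))
}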
Abstract:
One or more techniques and/or systems are provided for multicast transport configuration, for multicast transport, and/or for fault policy implementation. In an example, a multicast component may receive a data copy request from an application to copy data to multiple destinations. A scheduler component may create a transport schedule specifying the order in which to facilitate data copy operations across transports, such as heterogeneous transports, to the destinations. A dispatcher component may apply application specified transport modifiers to the data copy operations (e.g., a modification to a quality of service for a transport). The dispatcher component may facilitate the data copy operations and provide operation result information to a policy agent. The policy agent may provide notifications of data copy operation statuses from the operation result information and/or may implement a fault policy (e.g., a retry on a different transport) for a data copy operation that experienced a fault.
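The sketch below approximates the scheduler/dispatcher/fault-policy division of labor in Go; Transport, schedule, dispatch, and the retry-on-alternate-transport policy are illustrative simplifications, not the claimed components.

// Hypothetical sketch of the scheduler/dispatcher/fault-policy split;
// Transport, copyOp, and the retry policy are illustrative names.
package main

import (
	"errors"
	"fmt"
)

type Transport struct {
	Name string
	Send func(dest string, data []byte) error
}

type copyOp struct {
	dest      string
	transport *Transport
}

// schedule pairs each destination with a transport, round-robin across
// the (possibly heterogeneous) transports.
func schedule(dests []string, transports []*Transport) []copyOp {
	ops := make([]copyOp, 0, len(dests))
	for i, d := range dests {
		ops = append(ops, copyOp{dest: d, transport: transports[i%len(transports)]})
	}
	return ops
}

// dispatch runs the copy operations and hands each failure to the fault
// policy, here a single retry on the alternate transport.
func dispatch(ops []copyOp, data []byte, alternate *Transport) {
	for _, op := range ops {
		err := op.transport.Send(op.dest, data)
		if err != nil {
			fmt.Printf("%s via %s failed (%v); retrying via %s\n",
				op.dest, op.transport.Name, err, alternate.Name)
			err = alternate.Send(op.dest, data)
		}
		fmt.Printf("%s: done=%v\n", op.dest, err == nil)
	}
}

func main() {
	flaky := &Transport{Name: "rdma", Send: func(d string, _ []byte) error {
		if d == "node-2" {
			return errors.New("link reset")
		}
		return nil
	}}
	tcp := &Transport{Name: "tcp", Send: func(string, []byte) error { return nil }}

	ops := schedule([]string{"node-1", "node-2", "node-3"}, []*Transport{flaky, tcp})
	dispatch(ops, []byte("payload"), tcp)
}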
Abstract:
A first plurality of block identifiers is sorted based, at least in part, on a measure of spatial locality. A second plurality of block identifiers is sorted based, at least in part, on the measure of spatial locality. At least the first plurality of block identifiers and the second plurality of block identifiers are incrementally merged into a third plurality of block identifiers based, at least in part, on the measure of spatial locality. A block of data corresponding to metadata associated with a plurality of block identifiers of the third plurality of block identifiers is updated.
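A minimal Go sketch of the sort-and-merge step follows, assuming for illustration that the block identifier value itself serves as the measure of spatial locality (adjacent identifiers assumed adjacent on media); sortByLocality and merge are hypothetical names.

// Hypothetical sketch: the block identifier value stands in for the
// measure of spatial locality used to order and merge the runs.
package main

import (
	"fmt"
	"sort"
)

type blockID uint64

func sortByLocality(ids []blockID) {
	sort.Slice(ids, func(i, j int) bool { return ids[i] < ids[j] })
}

// merge incrementally combines two locality-sorted runs into one,
// always emitting the identifier with the lower locality key first.
func merge(a, b []blockID) []blockID {
	out := make([]blockID, 0, len(a)+len(b))
	for len(a) > 0 && len(b) > 0 {
		if a[0] <= b[0] {
			out, a = append(out, a[0]), a[1:]
		} else {
			out, b = append(out, b[0]), b[1:]
		}
	}
	return append(append(out, a...), b...)
}

func main() {
	first := []blockID{9, 1, 5}
	second := []blockID{4, 2, 8}
	sortByLocality(first)
	sortByLocality(second)
	merged := merge(first, second)
	fmt.Println(merged) // metadata blocks would be updated in this merged order
}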
Abstract:
Methods and systems for dynamic hashing in cache sub-systems are provided. The method includes analyzing a plurality of input/output (I/O) requests to determine a pattern indicating whether the I/O requests are random or sequential, and using the pattern to dynamically change from a first input to a second input for computing a hash index value with a hashing function that indexes into a hashing data structure to look up a cache block for caching an I/O request to read or write data. For random I/O requests, a segment size is used as the first input to the hashing function to compute a first hash index value; for sequential I/O requests, a stripe size is used as the second input to compute a second hash index value.
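The Go sketch below shows one way the hash input might switch between a segment size and a stripe size once a pattern is detected; the detector, the size constants, and the modulo hash are simplified assumptions rather than the claimed implementation.

// Hypothetical sketch: the pattern detector and the hash function are
// simplified stand-ins (a sequential run is detected by fixed-step offsets,
// and the hash is offset/size modulo the table length).
package main

import "fmt"

type ioRequest struct{ offset uint64 }

const (
	segmentSize = 4 * 1024  // input used for random I/O (assumed value)
	stripeSize  = 64 * 1024 // input used for sequential I/O (assumed value)
	tableSize   = 1024
)

// isSequential reports whether the sampled requests advance by a fixed step.
func isSequential(reqs []ioRequest) bool {
	for i := 1; i < len(reqs); i++ {
		if reqs[i].offset != reqs[i-1].offset+segmentSize {
			return false
		}
	}
	return len(reqs) > 1
}

// hashIndex picks the hash input dynamically from the detected pattern.
func hashIndex(offset uint64, sequential bool) uint64 {
	input := uint64(segmentSize)
	if sequential {
		input = uint64(stripeSize)
	}
	return (offset / input) % tableSize
}

func main() {
	random := []ioRequest{{8192}, {1 << 20}, {4096}}
	sequential := []ioRequest{{0}, {4096}, {8192}}

	fmt.Println("random     -> index", hashIndex(8192, isSequential(random)))
	fmt.Println("sequential -> index", hashIndex(8192, isSequential(sequential)))
}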
Abstract:
This disclosure uses both an administrative thread and multiple worker threads (N) to process the LUN on-lining work in parallel at both the volume level and the LUN level. When the administrative thread receives the message to start the initialization, the administrative thread assigns the work for reading the VTOC information for the LUNs in a volume to one or more worker threads and moves on to perform additional initialization tasks. N worker threads work on N volumes in parallel. These worker threads then independently send messages (e.g., asynchronous messages) to the file system layer, and once the file system layer is done loading the required buffers, the file system layer sends replies back to the administrative thread. The administrative thread then again assigns work to the worker threads to finally bring the LUNs on-line.
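As an approximation of this threading model, the Go sketch below uses goroutines and a channel in place of the administrative thread, the worker threads, and their asynchronous messages; the volume and worker names, and the simple reply loop, are illustrative only.

// Hypothetical sketch: goroutines and channels stand in for the
// administrative and worker threads and their asynchronous messages.
package main

import "fmt"

type volume struct {
	name string
	luns []string
}

// worker reads the VTOC information for one volume and replies
// asynchronously so the administrative thread can continue other work.
func worker(v volume, replies chan<- volume) {
	// ... read VTOC entries for v.luns here ...
	replies <- v
}

func main() {
	volumes := []volume{
		{"vol0", []string{"lun0", "lun1"}},
		{"vol1", []string{"lun2"}},
		{"vol2", []string{"lun3", "lun4"}},
	}

	replies := make(chan volume, len(volumes))
	for _, v := range volumes {
		go worker(v, replies) // N workers process N volumes in parallel
	}

	// The administrative thread is free to perform other initialization here,
	// then collects replies and assigns the work to bring the LUNs on-line.
	for range volumes {
		v := <-replies
		fmt.Printf("bringing %v online for %s\n", v.luns, v.name)
	}
}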
Abstract:
Methods and systems are provided for a clustered storage system. The method assigns a network access address to a virtual network interface card (VNIC) at a first cluster node of a clustered storage system, where a physical network interface card assigned to the network access address is managed by a second cluster node of the clustered storage system. The VNIC is then used by a virtual storage server at the first cluster node to communicate on behalf of the second cluster node.
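The short Go sketch below only models the indirection being described: a VNIC hosted on one node carries an address whose physical NIC is owned by a different node, and sends are forwarded to that owner. The vnic and clusterNode types are hypothetical.

// Hypothetical sketch: vnic and clusterNode are illustrative types showing
// the indirection from a VNIC on one node to a physical NIC owned by another.
package main

import "fmt"

type clusterNode struct {
	name        string
	physicalNIC string // physical NIC that actually owns the address
}

// vnic is hosted on one node but carries an address whose physical NIC is
// managed by a different node; sends are forwarded to that node.
type vnic struct {
	address string
	owner   *clusterNode
}

func (v *vnic) send(payload string) {
	fmt.Printf("VNIC %s forwarding %q via %s on %s\n",
		v.address, payload, v.owner.physicalNIC, v.owner.name)
}

func main() {
	nodeB := &clusterNode{name: "node-B", physicalNIC: "e0a"}

	// node-A's virtual storage server uses the VNIC to communicate on
	// behalf of node-B, which manages the physical NIC for this address.
	v := &vnic{address: "192.0.2.10", owner: nodeB}
	v.send("NFS reply")
}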
Abstract:
Systems and methods herein are operable to simultaneously mirror data to a plurality of mirror partner nodes. In embodiments, a mirror client may be unaware of the number of mirror partner nodes and/or the location of the plurality of mirror partner nodes, and issue a single mirror command requesting initiation of a mirror operation. An interconnect layer may receive the single mirror command and split the mirror command into a plurality of mirror instances, one for each mirror partner node, wherein the mirror instances may be simultaneously launched. After the plurality of mirror operations has begun, the interconnect layer may manage completion reports indicating the completion status of respective mirror operations, and send a single return to the mirror client indicating whether the mirror command succeeded.
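A minimal Go sketch of the fan-out follows, with goroutines standing in for the simultaneously launched mirror instances and a single boolean return standing in for the combined completion report; mirrorInstance and mirror are illustrative names, not the product's interfaces.

// Hypothetical sketch: goroutines model the simultaneously launched
// mirror instances; the partner list and names are illustrative.
package main

import (
	"fmt"
	"sync"
)

// mirrorInstance mirrors the data to one partner node and reports whether
// it completed successfully.
func mirrorInstance(partner string, data []byte, ok chan<- bool, wg *sync.WaitGroup) {
	defer wg.Done()
	// ... transfer data to partner over the interconnect here ...
	ok <- true
}

// mirror is the single command issued by the mirror client; the interconnect
// layer splits it across all partners and returns one combined result.
func mirror(partners []string, data []byte) bool {
	results := make(chan bool, len(partners))
	var wg sync.WaitGroup
	for _, p := range partners {
		wg.Add(1)
		go mirrorInstance(p, data, results, &wg)
	}
	wg.Wait()
	close(results)

	success := true
	for r := range results {
		success = success && r
	}
	return success
}

func main() {
	// The client neither knows how many partners there are nor where they live.
	fmt.Println("mirror succeeded:",
		mirror([]string{"partner-1", "partner-2", "partner-3"}, []byte("nvlog")))
}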
Abstract:
Atomic write operations for storage devices are implemented by maintaining the data that would be overwritten in the cache until the write operation completes. After the write operation completes, including generating any related metadata, a checkpoint is created. After the checkpoint is created, the old data is discarded and the new data becomes the current data for the affected storage locations. If an interruption occurs prior to the creation of the checkpoint, the old data is recovered and any new data is discarded. If an interruption occurs after the creation of the checkpoint, any remaining old data is discarded and the new data becomes the current data. Write logs that indicate the locations affected by an in-progress write operation are used in some implementations. If neither all of the new data nor all of the old data is recoverable, a predetermined pattern can be written into the affected locations.
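The Go sketch below captures the checkpoint rule in miniature: staged new data only replaces the old data once the checkpoint has been taken, and recovery discards whichever side lost. The store type and its methods are hypothetical names for illustration.

// Hypothetical sketch: store, checkpoint, and recover are illustrative names
// for the old-data-retention scheme described above.
package main

import "fmt"

type store struct {
	current      map[int][]byte // committed data per block
	pendingNew   map[int][]byte // new data from the in-progress atomic write
	checkpointed bool           // set once the write and its metadata complete
}

// write stages new data without touching the old copies.
func (s *store) write(block int, data []byte) { s.pendingNew[block] = data }

// checkpoint marks the atomic write (and its metadata) as complete.
func (s *store) checkpoint() { s.checkpointed = true }

// recover resolves an interruption: before the checkpoint the old data wins,
// after the checkpoint the new data becomes current.
func (s *store) recover() {
	if s.checkpointed {
		for b, d := range s.pendingNew {
			s.current[b] = d
		}
	}
	s.pendingNew = map[int][]byte{} // discard whichever side lost
}

func main() {
	s := &store{current: map[int][]byte{1: []byte("old")}, pendingNew: map[int][]byte{}}
	s.write(1, []byte("new"))
	s.checkpoint() // omit this call to simulate an interruption before the checkpoint
	s.recover()
	fmt.Printf("block 1 = %s\n", s.current[1])
}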
Abstract:
A system and method for remotely performing a power cycle operation for a storage shelf of a storage server, using a control path independent of the data path used for processing I/O requests, are provided. The storage server maintains a data structure for storing information regarding a state of a plurality of power latches that are used to control power for the storage shelf, which has an alternate control path module for receiving control commands via the control path. Depending on the state of the plurality of power latches, the storage server sends one or more commands to the alternate control path module to turn off power to the storage shelf during a power cycle operation. When the storage shelf is powered off, the storage server waits for a certain duration and then sends one or more power on commands to the alternate control path module to power on the storage shelf.
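The Go sketch below outlines the power-cycle sequence under the stated assumptions that a map tracks latch state and a fixed delay separates the off and on commands; acpModule and powerCycle are illustrative names, not the product's interfaces.

// Hypothetical sketch: acpModule, latchState, and the delay are illustrative;
// the real commands travel over the alternate control path, not a data path.
package main

import (
	"fmt"
	"time"
)

type latchState int

const (
	latchOn latchState = iota
	latchOff
)

// acpModule stands in for the shelf's alternate control path module.
type acpModule struct{ shelf string }

func (m *acpModule) powerOff() { fmt.Println(m.shelf, "power off command sent via ACP") }
func (m *acpModule) powerOn()  { fmt.Println(m.shelf, "power on command sent via ACP") }

// powerCycle consults the tracked latch state, powers the shelf off,
// waits for a fixed duration, and powers it back on.
func powerCycle(m *acpModule, latches map[string]latchState, wait time.Duration) {
	if latches[m.shelf] == latchOn {
		m.powerOff()
		latches[m.shelf] = latchOff
	}
	time.Sleep(wait) // the storage server waits before restoring power
	m.powerOn()
	latches[m.shelf] = latchOn
}

func main() {
	latches := map[string]latchState{"shelf-0": latchOn}
	powerCycle(&acpModule{shelf: "shelf-0"}, latches, 10*time.Millisecond)
}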