MANAGING IO PATH BANDWIDTH
    Invention Application

    Publication No.: US20220012200A1

    Publication Date: 2022-01-13

    Application No.: US16927045

    Application Date: 2020-07-13

    Abstract: Bandwidth consumption for IO paths between a storage system and a host may be managed. It may be determined whether there is congestion on a front-end port (FEP) link; for example, the storage system may monitor for a notification from the switch in accordance with a Fibre Channel (FC) protocol. If a notification indicating congestion on an FEP link is received, the bandwidth thresholds (BWTs) for one or more IO paths between the storage system and one or more hosts that include the FEP link may be reduced. The host port BWTs may continue to be reduced until no congestion notification has been received for a predetermined amount of time, in response to which the host port BWTs for one or more host port links on IO paths that include the FEP link may be increased. Similar techniques may be employed for an FEP link determined to be faulty.
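The reduce-while-congested, restore-after-quiet behavior described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the scaling factors, quiet window, and cap are assumptions chosen for the example.

```python
def adjust_host_port_bwts(bwts, congested, quiet_secs,
                          reduce_factor=0.8, restore_factor=1.25,
                          quiet_window=5.0, max_bwt=100.0):
    """Scale host-port bandwidth thresholds (BWTs) down while congestion
    notifications for the FEP link arrive, and back up once no
    notification has been seen for quiet_window seconds.

    All numeric parameters are illustrative assumptions."""
    if congested:
        # A congestion notification was received: reduce every BWT on
        # IO paths that include the congested FEP link.
        return {port: bwt * reduce_factor for port, bwt in bwts.items()}
    if quiet_secs >= quiet_window:
        # Quiet for the predetermined window: restore BWTs toward
        # their configured maximum.
        return {port: min(bwt * restore_factor, max_bwt)
                for port, bwt in bwts.items()}
    return dict(bwts)
```

The same loop could serve the faulty-link case mentioned in the abstract, with the fault indication substituted for the congestion notification.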

    Mitigating IO processing performance impacts in automated seamless migration

    Publication No.: US11175828B1

    Publication Date: 2021-11-16

    Application No.: US15931849

    Application Date: 2020-05-14

    Abstract: An apparatus comprises a host device configured to communicate over a network with source and target storage systems. The host device, in conjunction with migration of a logical storage device from the source storage system to the target storage system, is further configured to obtain from the target storage system watermark information characterizing progress of the migration of the logical storage device from the source storage system to the target storage system, and to determine whether a given input-output operation is to be sent to the source storage system or the target storage system based at least in part on the watermark information obtained from the target storage system. The watermark information illustratively identifies a particular logical address in the logical storage device up to and including which the corresponding data has already been copied from the source storage system to the target storage system in conjunction with the migration.
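The routing decision the abstract describes reduces to a comparison against the watermark; a minimal sketch under the assumption that addresses at or below the watermark have already been copied:

```python
def route_io(lba, watermark):
    """Decide where a host IO goes during migration.

    watermark: highest logical address (inclusive) whose data has
    already been copied to the target, as reported by the target
    storage system. Illustrative sketch, not the patented method."""
    return "target" if lba <= watermark else "source"
```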

    Automated seamless migration with signature issue resolution

    Publication No.: US11093155B2

    Publication Date: 2021-08-17

    Application No.: US16710828

    Application Date: 2019-12-11

    Abstract: An apparatus comprises at least one processing device comprising a processor coupled to a memory. The processing device is configured to control performance of a migration process in which a source logical storage device of a first storage system is migrated to a target logical storage device of a second storage system. In conjunction with the migration process, the processing device is further configured to update a management header of the target logical storage device to include an identifier of the target logical storage device, to store an identifier of the source logical storage device, and responsive to a read of the management header of the target logical storage device, to return the identifier of the source logical storage device in place of the identifier of the target logical storage device. Other illustrative embodiments include methods and computer program products.
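The identifier substitution described above can be sketched as a small state machine. The class and field names here are illustrative assumptions, not the patent's terminology:

```python
class TargetDevice:
    """Sketch of signature-issue resolution: while migration is in
    progress, a read of the target device's management header returns
    the stored source-device identifier, so host software sees a
    consistent device signature."""

    def __init__(self, target_id, source_id):
        self.target_id = target_id
        self.source_id = source_id   # stored in conjunction with migration
        self.migrating = True

    def read_management_header_id(self):
        # Return the source identifier in place of the target identifier
        # for as long as the migration process is active.
        return self.source_id if self.migrating else self.target_id
```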

    CACHE MANAGEMENT FOR SEQUENTIAL IO OPERATIONS

    Publication No.: US20210240621A1

    Publication Date: 2021-08-05

    Application No.: US16777129

    Application Date: 2020-01-30

    Abstract: A processing node of a storage system may determine that a host system is implementing a cache-slot aware, round-robin IO distribution algorithm (CA-RR). The processing node may be configured to determine when a sufficient number of sequential IOs will be received to consume a cache slot of the processing node. If the processing node knows that the host system is implementing CA-RR, then, in response to determining the sufficient number, the processing node may send a communication informing the next processing node about the sequential cache slot hit. If the sequential IO operation(s) are read operation(s), the next processing node may prefetch at least a cache-slot worth of next consecutive data portions. If the sequential IO operation(s) are write operation(s), then the next processing node may request allocation of one or more local cache slots for the forthcoming sequential write operations.
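The "sufficient number of sequential IOs to consume a cache slot" check can be sketched as a simple counter. The slot and IO sizes below are assumptions for illustration only:

```python
CACHE_SLOT_SIZE = 128 * 1024   # assumed 128 KiB cache slot
IO_SIZE = 8 * 1024             # assumed 8 KiB sequential IOs
IOS_PER_SLOT = CACHE_SLOT_SIZE // IO_SIZE

def process_sequential_io(seq_count, host_is_ca_rr):
    """Count sequential IOs on this processing node; once a full cache
    slot's worth has been seen and the host is known to use CA-RR,
    signal that the next node should be notified (so it can prefetch
    for reads or pre-allocate cache slots for writes)."""
    seq_count += 1
    if host_is_ca_rr and seq_count >= IOS_PER_SLOT:
        return 0, True    # slot consumed: notify the next processing node
    return seq_count, False
```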

    DATA ENCRYPTION FOR DIRECTLY CONNECTED HOST

    Publication No.: US20210216661A1

    Publication Date: 2021-07-15

    Application No.: US16743004

    Application Date: 2020-01-15

    Abstract: A storage system may assign a different encryption key to each logical storage unit (LSU) of a storage system. For each LSU, the encryption key of the LSU may be shared only with host systems authorized to access data of the LSU. In response to a read request for a data portion received from a host application executing on the host system, encryption metadata for the data portion may be accessed. If it is determined from the encryption metadata that the data portion is encrypted, the data encryption metadata for the data portion may be further analyzed to determine the encryption key for the data portion. The data may be retrieved from the storage system, for example, by performance of a direct read operation. The retrieved data may be decrypted, and the decrypted data may be returned to the requesting application.
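The metadata-driven decrypt-on-read flow can be sketched as below. A toy XOR transform stands in for a real cipher, and the metadata layout (portion to LSU/encrypted-flag mapping) is an assumption made for the example:

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR transform standing in for a real encryption algorithm;
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class EncryptedRead:
    """Host-side sketch: consult encryption metadata for a data portion,
    look up the per-LSU key (shared only with authorized hosts), and
    decrypt data retrieved via a direct read."""

    def __init__(self, lsu_keys, encryption_metadata):
        self.lsu_keys = lsu_keys            # LSU -> encryption key
        self.metadata = encryption_metadata  # portion -> (LSU, encrypted?)

    def read(self, portion, raw_bytes):
        lsu, encrypted = self.metadata[portion]
        if encrypted:
            return xor_crypt(raw_bytes, self.lsu_keys[lsu])
        return raw_bytes
```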

    Host-based bandwidth control for virtual initiators

    Publication No.: US11032373B1

    Publication Date: 2021-06-08

    Application No.: US17068203

    Application Date: 2020-10-12

    Abstract: An apparatus comprises at least one processing device that is configured to control delivery of input-output operations from a host device to a storage system over selected ones of a plurality of paths through a network, wherein the paths are associated with respective initiator-target pairs, the initiators being implemented on the host device and the targets being implemented on the storage system. The at least one processing device is further configured to identify a particular one of the initiators that comprises multiple virtual initiators having respective virtual identifiers, to determine a negotiated rate of the particular initiator, to determine a negotiated rate of a corresponding one of the targets, and to limit amounts of bandwidth utilized by the multiple virtual initiators in communicating with the corresponding target based at least in part on the negotiated rate of the particular initiator and the negotiated rate of the corresponding target.
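One plausible way to limit the virtual initiators based on the two negotiated rates is to cap their aggregate at the lesser rate and divide it evenly; the even split is an assumption for illustration, since the abstract only requires the limit to be based on both rates:

```python
def per_vi_bandwidth_limit(initiator_rate, target_rate, num_virtual):
    """Cap each virtual initiator's bandwidth so the group cannot exceed
    the lesser of the physical initiator's and the target's negotiated
    rates (rates in Gb/s; even split is an illustrative assumption)."""
    if num_virtual < 1:
        raise ValueError("need at least one virtual initiator")
    return min(initiator_rate, target_rate) / num_virtual
```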

    TECHNIQUES FOR PROVIDING I/O HINTS USING I/O FLAGS

    Publication No.: US20210157744A1

    Publication Date: 2021-05-27

    Application No.: US16692145

    Application Date: 2019-11-22

    Abstract: Techniques for processing I/O operations may include: issuing, by a process of an application on a host, an I/O operation; determining, by a driver on the host, that the I/O operation is a read operation directed to a logical device used as a log to log writes performed by the application, wherein the read operation reads first data stored at one or more logical addresses of the logical device; storing, by the driver, an I/O flag in the I/O operation, wherein the I/O flag has a first flag value denoting an expected read frequency associated with the read operation; sending the I/O operation from the host to the data storage system; and performing first processing of the I/O operation on the data storage system, wherein said first processing includes using the first flag value in connection with caching the first data in a cache of the data storage system.
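The driver-side tagging and array-side use of the flag can be sketched in a few lines. The flag value and the low-priority caching policy for log reads are assumptions made for the example:

```python
READ_FREQ_LOG = 0x01   # assumed flag value: read of an application log device

def tag_io(io, is_log_device_read):
    """Driver-side sketch: stamp a hint flag on a read directed to a
    logical device used as an application write log, so the array can
    bias its caching decision."""
    if is_log_device_read:
        io["flags"] = io.get("flags", 0) | READ_FREQ_LOG
    return io

def cache_priority(io):
    # Array-side sketch: log data is typically read back rarely, so
    # cache such reads at low priority (assumed policy).
    return "low" if io.get("flags", 0) & READ_FREQ_LOG else "normal"
```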

    Determining multiple virtual host ports on a same physical host port

    Publication No.: US10970233B2

    Publication Date: 2021-04-06

    Application No.: US16176428

    Application Date: 2018-10-31

    Abstract: Multiple virtual host ports corresponding to a same physical host port may be determined by or on behalf of a storage system, for example, in response to logging the one or more virtual host ports into the storage system. For one or more virtual host ports, it may be determined whether the virtual host port is connected to a same fabric port as another virtual host port, where a fabric port is a port of a fabric configured to connect to a virtual host port. If two virtual host ports are determined to be connected to a same fabric port, it may be concluded that the two virtual host ports correspond to (e.g., share) a same physical host port. One or more actions may be taken on a storage network based at least in part on a determination that two virtual host ports are sharing a same physical host port.
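The inference step above amounts to grouping virtual host ports by the fabric port they are connected to; any group larger than one is concluded to share a physical host port. A minimal sketch (input mapping format is assumed):

```python
from collections import defaultdict

def shared_physical_ports(fabric_port_of):
    """Given a mapping of virtual host port -> fabric port (as observed
    at login), return groups of virtual ports inferred to share a
    physical host port because they connect to the same fabric port."""
    groups = defaultdict(list)
    for vport, fport in fabric_port_of.items():
        groups[fport].append(vport)
    return [sorted(v) for v in groups.values() if len(v) > 1]
```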

    Storage-based slow drain detection and automated resolution

    Publication No.: US10929316B2

    Publication Date: 2021-02-23

    Application No.: US16374182

    Application Date: 2019-04-03

    Abstract: Storage-based slow drain detection and automated resolution is provided herein. A data storage system as described herein can include a memory that stores computer executable components and a processor that executes computer executable components stored in the memory. The computer executable components can include a switch query component that obtains a host transfer rate negotiated between a host device and a network switch from a host-connected port of the network switch; a comparison component that compares the host transfer rate to an array transfer rate negotiated between the network switch and a storage array; and a rate limiter component that limits a data transfer from the storage array to the host device to the host transfer rate in response to the host transfer rate being less than the array transfer rate.
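The comparison and rate-limiter components reduce to choosing the lower negotiated rate whenever the host's is the smaller one; a one-function sketch:

```python
def limited_transfer_rate(host_rate, array_rate):
    """Slow-drain mitigation sketch: cap array-to-host transfers at the
    host's negotiated rate when it is below the array's negotiated rate,
    preventing the fast array from flooding the slow host link."""
    return host_rate if host_rate < array_rate else array_rate
```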

    HOST CACHE COHERENCY WHEN MODIFYING DATA

    Publication No.: US20210037096A1

    Publication Date: 2021-02-04

    Application No.: US16530089

    Application Date: 2019-08-02

    Abstract: A storage system may maintain a purge counter for one or more logical storage units. When an instruction is received to perform an operation that will modify data across the one or more logical storage units, the purge counter may be incremented. One or more host systems implementing host caching may periodically poll the storage system to determine the purge counter value. When the current purge counter value differs from a previously polled value recorded on a host system, the host system may purge from its host cache any entries for logical storage units associated with the purge counter. The data storage system may not execute the data modification instruction until it receives acknowledgement from all host systems caching affected data that they have purged any host cache entries corresponding to the LSUs affected by the modification operation.
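The host-side half of the protocol can be sketched as a poller that compares counter values, purges on change, and returns an acknowledgement. The class shape and return-value-as-acknowledgement are assumptions for illustration:

```python
class HostCacheClient:
    """Host-side sketch of purge-counter-based cache coherency: each
    poll compares the storage system's current counter to the last
    value seen; a change means data was modified, so cached entries
    for the associated LSUs are purged and an acknowledgement is sent."""

    def __init__(self):
        self.last_seen = 0
        self.cache = {}   # LSU -> cached data

    def poll(self, current_counter):
        if current_counter != self.last_seen:
            self.cache.clear()              # purge affected entries
            self.last_seen = current_counter
            return True                     # acknowledge the purge
        return False                        # nothing to do
```

The storage system would hold the modification operation until every caching host has returned such an acknowledgement.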
