Abstract:
A method and controller for implementing enhanced flash backed dynamic random access memory (DRAM) management, and a design structure on which the subject controller circuit resides, are provided. An input/output adapter (IOA) includes at least one super capacitor, a data store (DS) dynamic random access memory (DRAM), a flash memory, a non-volatile random access memory (NVRAM), and a flash backed DRAM controller. Responsive to an adapter reset, data store DRAM testing is performed, including restoring a DRAM image from the flash memory to DRAM and testing the DRAM. RAID configuration data and RAID parity update footprints are mirrored between the NVRAM and the DRAM. Save of DRAM contents to the flash memory is controllably enabled once the super capacitors have been sufficiently recharged and the flash memory has been erased.
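As a rough illustration of the save-enable gating described above, the following C sketch models the adapter with hypothetical flags for super-capacitor charge and flash erase status; the structure and names are assumptions for illustration only, not the controller's actual registers or state machine.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical adapter state tracked by the flash backed DRAM controller. */
struct adapter_state {
    bool supercap_charged;   /* super capacitors recharged above the save threshold */
    bool flash_erased;       /* flash region erased and ready to receive a DRAM image */
    bool save_enabled;       /* save of DRAM contents to flash is armed */
};

/* Arm the DRAM-to-flash save only when both preconditions hold. */
static void update_save_enable(struct adapter_state *s)
{
    s->save_enabled = s->supercap_charged && s->flash_erased;
}

/* On adapter reset: restore the DRAM image from flash, then test DRAM. */
static void handle_adapter_reset(struct adapter_state *s)
{
    /* Placeholders for the restore and test steps named in the abstract. */
    printf("restoring DRAM image from flash\n");
    printf("running data store DRAM test\n");
    /* Save stays disabled until the capacitors recharge and flash is erased again. */
    s->supercap_charged = false;
    s->flash_erased = false;
    update_save_enable(s);
}

int main(void)
{
    struct adapter_state s = {0};
    handle_adapter_reset(&s);
    s.supercap_charged = true;
    s.flash_erased = true;
    update_save_enable(&s);
    printf("save enabled: %d\n", s.save_enabled);
    return 0;
}
```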
Abstract:
A method is provided for optimizing locations of physical data accessed by one or more client applications interacting with a storage system, the storage system comprising at least two redundancy groups having physical memory spaces and data bands. Each of the data bands corresponds to physical data stored on several of the physical memory spaces. A virtualized logical address space includes client data addresses usable by the one or more client applications. A storage controller is configured to map the client data addresses onto the data bands, such that a mapping is obtained whereby the one or more client applications can access physical data corresponding to the data bands.
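The address-to-band mapping can be pictured with a small table. The C sketch below assumes an illustrative geometry (two redundancy groups, fixed-size bands) and hypothetical names such as `band_map` and `map_client_address`; none of these specifics come from the abstract.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative geometry: values are assumptions, not taken from the abstract. */
#define NUM_REDUNDANCY_GROUPS 2
#define BANDS_PER_GROUP       4
#define BAND_SIZE_BYTES       (1u << 20)   /* 1 MiB per data band */

/* A data band identifies the redundancy group and the band within it. */
struct data_band {
    uint32_t group;
    uint32_t band;
};

/* Mapping table from logical band index to physical data band.
 * The storage controller can rewrite entries to relocate data. */
static struct data_band band_map[NUM_REDUNDANCY_GROUPS * BANDS_PER_GROUP];

/* Resolve a client data address to the data band currently backing it. */
static struct data_band map_client_address(uint64_t client_addr)
{
    uint64_t logical_band = client_addr / BAND_SIZE_BYTES;
    return band_map[logical_band % (NUM_REDUNDANCY_GROUPS * BANDS_PER_GROUP)];
}

int main(void)
{
    /* Initial mapping striped across the redundancy groups. */
    for (uint32_t i = 0; i < NUM_REDUNDANCY_GROUPS * BANDS_PER_GROUP; i++) {
        band_map[i].group = i % NUM_REDUNDANCY_GROUPS;
        band_map[i].band  = i / NUM_REDUNDANCY_GROUPS;
    }
    struct data_band b = map_client_address(3 * BAND_SIZE_BYTES + 42);
    printf("client address maps to group %u, band %u\n", b.group, b.band);
    return 0;
}
```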
Abstract:
A method and controller for implementing storage adapter performance optimization with automatic chained hardware operations eliminating firmware operations, and a design structure on which the subject controller circuit resides, are provided. The controller includes a plurality of hardware engines and a control store configured to store a plurality of control blocks. Each control block is designed to control a hardware operation in one of the plurality of hardware engines. A plurality of the control blocks is selectively arranged in a respective predefined chain to define sequences of hardware operations. An automatic hardware structure is configured to build the respective predefined chain controlling the hardware operations for a predefined hardware function. The predefined hardware function includes buffer allocation and automatic DMA of data from a host system to the controller for write operations, eliminating firmware operations.
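One way to picture control blocks linked into a predefined chain is a simple linked list walked engine by engine. The C sketch below is only illustrative: the engine names, fields, and `run_chain` helper are assumptions, and the actual chain is executed by hardware rather than software.

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical hardware engines that a control block may target. */
enum engine_id { ENGINE_ALLOCATE, ENGINE_HDMA, ENGINE_XOR, ENGINE_SAS };

/* A control block drives one hardware operation; 'next' links it into a
 * predefined chain so the sequence runs without firmware involvement. */
struct control_block {
    enum engine_id engine;
    const char *operation;          /* descriptive label for this sketch */
    struct control_block *next;
};

/* Walk a predefined chain, standing in for hardware executing each block. */
static void run_chain(const struct control_block *cb)
{
    for (; cb != NULL; cb = cb->next)
        printf("engine %d: %s\n", cb->engine, cb->operation);
}

int main(void)
{
    /* A write-path chain like the one the abstract describes:
     * allocate buffers, then DMA the data from the host. */
    struct control_block dma   = { ENGINE_HDMA,     "DMA data from host", NULL };
    struct control_block alloc = { ENGINE_ALLOCATE, "allocate buffers",   &dma };
    run_chain(&alloc);
    return 0;
}
```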
Abstract:
A method and controller for implementing storage adapter performance optimization with cache data and cache directory mirroring between dual adapters minimizing firmware operations, and a design structure on which the subject controller circuit resides, are provided. One of the first controller or the second controller, operating in a first initiator mode, includes firmware to set up an initiator write operation, building a data frame for transferring data and a respective cache line (CL) for each page index to the other controller operating in a second target mode. Respective initiator hardware engines transfer data, reading CLs from an initiator control store and writing updated CLs to an initiator data store, and simultaneously send data and updated CLs to the other controller. Respective target hardware engines write data and updated CLs to the target data store, eliminating firmware operations of the controller operating in the second target mode.
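The data frame carrying a cache line (CL) per page index, and the target side writing both into its data store, might be pictured as below. The frame layout, field names, and two-page frame size are assumptions made for this sketch; the abstract does not specify them, and on real hardware the target-side writes are performed by hardware engines rather than code.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE       4096u
#define PAGES_PER_FRAME 2u   /* illustrative frame size, not from the abstract */

/* Hypothetical cache line (CL) directory entry mirrored with each page. */
struct cache_line {
    uint32_t page_index;
    uint32_t lba;
    uint8_t  state;          /* e.g. dirty-in-cache */
};

/* Data frame the initiator builds: one CL per page index plus the page data. */
struct mirror_frame {
    struct cache_line cl[PAGES_PER_FRAME];
    uint8_t data[PAGES_PER_FRAME][PAGE_SIZE];
};

/* Target side: data and updated CLs land in the target data store with no
 * target firmware involvement; modeled here as plain copies. */
static void target_apply_frame(const struct mirror_frame *f,
                               struct cache_line *target_cls,
                               uint8_t (*target_pages)[PAGE_SIZE])
{
    for (uint32_t i = 0; i < PAGES_PER_FRAME; i++) {
        target_cls[f->cl[i].page_index % PAGES_PER_FRAME] = f->cl[i];
        memcpy(target_pages[i], f->data[i], PAGE_SIZE);
    }
}

int main(void)
{
    static struct mirror_frame frame;
    static struct cache_line target_cls[PAGES_PER_FRAME];
    static uint8_t target_pages[PAGES_PER_FRAME][PAGE_SIZE];

    /* Initiator firmware sets up the write: one CL per page index. */
    for (uint32_t i = 0; i < PAGES_PER_FRAME; i++) {
        frame.cl[i].page_index = i;
        frame.cl[i].lba = 1000 + i;
        frame.cl[i].state = 1;
        memset(frame.data[i], (int)i, PAGE_SIZE);
    }
    target_apply_frame(&frame, target_cls, target_pages);
    printf("mirrored %u pages with their cache lines\n", PAGES_PER_FRAME);
    return 0;
}
```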
Abstract:
A method and controller for implementing storage adapter performance optimization with chained hardware operations and an enhanced hardware (HW) and firmware (FW) interface minimizing hardware and firmware interactions, and a design structure on which the subject controller circuit resides, are provided. The controller includes a plurality of hardware engines and a processor. A data store is configured to store a plurality of control blocks. A global work queue includes a plurality of the control blocks selectively arranged in a predefined chain to define sequences of hardware operations. The global work queue includes a queue input coupled to the processor and the hardware engines, and an output coupled to the hardware engines. The control blocks are arranged in respective engine work queues designed to control hardware operations of the respective hardware engines, and respective control blocks are arranged in an event queue to provide completion results to the processor.
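A possible way to picture the global work queue feeding per-engine work queues, with completions collected in an event queue for the processor, is sketched below in C. The ring-buffer representation, queue depth, and engine names are assumptions for illustration; the queueing described in the abstract is implemented in hardware, not software.

```c
#include <stdio.h>

#define QUEUE_DEPTH 8
enum engine_id { ENGINE_HDMA, ENGINE_XOR, ENGINE_SAS, NUM_ENGINES };

/* A control block tagged with the engine that should execute it. */
struct control_block {
    enum engine_id engine;
    int tag;                     /* identifies the operation for completions */
};

/* Simple ring buffer used for the global, engine, and event queues. */
struct queue {
    struct control_block slots[QUEUE_DEPTH];
    int head, tail;
};

static int q_push(struct queue *q, struct control_block cb)
{
    if ((q->tail + 1) % QUEUE_DEPTH == q->head) return -1;  /* full */
    q->slots[q->tail] = cb;
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    return 0;
}

static int q_pop(struct queue *q, struct control_block *out)
{
    if (q->head == q->tail) return -1;                      /* empty */
    *out = q->slots[q->head];
    q->head = (q->head + 1) % QUEUE_DEPTH;
    return 0;
}

int main(void)
{
    struct queue global_wq = {0};
    struct queue engine_wq[NUM_ENGINES] = {0};
    struct queue event_q = {0};
    struct control_block cb;

    /* Processor (or a completing engine) enqueues work on the global queue. */
    q_push(&global_wq, (struct control_block){ ENGINE_HDMA, 1 });
    q_push(&global_wq, (struct control_block){ ENGINE_XOR,  2 });

    /* Dispatch: the global queue output feeds the per-engine work queues. */
    while (q_pop(&global_wq, &cb) == 0)
        q_push(&engine_wq[cb.engine], cb);

    /* Engines complete their blocks onto the event queue for the processor. */
    for (int e = 0; e < NUM_ENGINES; e++)
        while (q_pop(&engine_wq[e], &cb) == 0)
            q_push(&event_q, cb);

    while (q_pop(&event_q, &cb) == 0)
        printf("processor sees completion of tag %d\n", cb.tag);
    return 0;
}
```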
Abstract:
A method and controller for implementing storage adapter performance optimization with a predefined chain of hardware operations configured to implement a particular performance path minimizing hardware and firmware interactions, and a design structure on which the subject controller circuit resides, are provided. The controller includes a plurality of hardware engines and a data store configured to store a plurality of control blocks selectively arranged in one of a plurality of predefined chains. Each predefined chain defines a sequence of operations, and each control block is designed to control a hardware operation in one of the plurality of hardware engines. A resource handle structure is configured to select a predefined chain based upon a particular characteristic of the system. Each predefined chain is configured to implement a particular performance path to maximize performance.
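The resource handle selecting one of several predefined chains could be sketched as a lookup keyed on a characteristic such as RAID level and operation type; those keys and the chain names below are illustrative assumptions, not details from the abstract.

```c
#include <stdio.h>

/* Characteristics a resource handle might carry; chosen for illustration. */
enum raid_level { RAID_5, RAID_6 };
enum op_type    { OP_READ, OP_WRITE };

/* A predefined chain is represented here only by a descriptive name. */
struct chain {
    const char *name;
};

/* Hypothetical resource handle: per-resource state consulted to pick the
 * predefined chain, and hence the performance path, for a given operation. */
struct resource_handle {
    enum raid_level level;
};

static const struct chain *select_chain(const struct resource_handle *rh,
                                        enum op_type op)
{
    static const struct chain chains[2][2] = {
        { { "RAID-5 read path" }, { "RAID-5 write path" } },
        { { "RAID-6 read path" }, { "RAID-6 write path" } },
    };
    return &chains[rh->level][op];
}

int main(void)
{
    struct resource_handle rh = { RAID_6 };
    printf("selected chain: %s\n", select_chain(&rh, OP_WRITE)->name);
    return 0;
}
```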