Abstract:
A write-caching RAID controller includes a CPU that manages transfers of posted-write data from host computers to a volatile memory and transfers of the posted-write data from the volatile memory to a redundant array of storage devices when a main power source is supplying power to the RAID controller. A memory controller transfers the posted-write data received from the host computers to the volatile memory and from the volatile memory to the redundant array of storage devices, as managed by the CPU. The memory controller flushes the posted-write data from the volatile memory to a non-volatile memory when main power fails, during which time capacitors provide power to the memory controller, the volatile memory, and the non-volatile memory, but not to the CPU, in order to reduce the energy storage requirements of the capacitors. While main power is present, the CPU programs the memory controller with the information needed to perform the flush operation, such as the location and size of the posted-write data in the volatile memory and various flush operation characteristics.
Abstract:
A write-caching RAID controller is disclosed. The controller includes a CPU that manages transfers of posted-write data from host computers to a volatile memory and transfers of the posted-write data from the volatile memory to storage devices when a main power source is supplying power to the RAID controller. A memory controller flushes the posted-write data from the volatile memory to a non-volatile memory when main power fails, during which time capacitors provide power to the memory controller, the volatile memory, and the non-volatile memory, but not to the CPU, in order to reduce the energy storage requirements of the capacitors. While main power is present, the CPU programs the memory controller with the information needed to perform the flush operation, such as the location and size of the posted-write data in the volatile memory and various flush operation characteristics.
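As a minimal C sketch of the programming step described in the two abstracts above: the CPU writes a flush descriptor into the memory controller while main power is good, so the flush can later run without CPU involvement. The structure, field names, and register layout below are illustrative assumptions, not the patented interface.

    #include <stdint.h>

    /* Hypothetical descriptor the CPU programs into the memory controller
     * while main power is present; fields are assumptions for illustration. */
    struct flush_descriptor {
        uint64_t volatile_src_addr;    /* location of posted-write data in volatile memory */
        uint64_t nonvolatile_dst_addr; /* destination in non-volatile memory               */
        uint64_t length_bytes;         /* size of the posted-write data to flush           */
        uint32_t flags;                /* flush operation characteristics                  */
    };

    /* Assumed memory-mapped register interface of the memory controller. */
    void program_flush_descriptor(volatile uint32_t *mc_regs,
                                  const struct flush_descriptor *d)
    {
        mc_regs[0] = (uint32_t)(d->volatile_src_addr);
        mc_regs[1] = (uint32_t)(d->volatile_src_addr >> 32);
        mc_regs[2] = (uint32_t)(d->nonvolatile_dst_addr);
        mc_regs[3] = (uint32_t)(d->nonvolatile_dst_addr >> 32);
        mc_regs[4] = (uint32_t)(d->length_bytes);
        mc_regs[5] = (uint32_t)(d->length_bytes >> 32);
        mc_regs[6] = d->flags;   /* arm the autonomous flush on main-power loss */
    }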
Abstract:
A data storage system configured for efficient mirroring of data between paired redundant controllers is provided. More particularly, in response to the receipt of customer data from a host for storage, a first controller segments the received customer data into one or more frames of data. In addition, the first controller determines or associates certain metadata for each frame of customer data, and inserts that metadata in the corresponding frame. The frames, including the metadata, are provided to a secondary controller. The secondary controller stores the customer data from a received frame in memory, and stores the corresponding metadata in another location of memory that is indexed to the location where the customer data was stored. The secondary controller may also associate a count value with each frame of data in order to distinguish the most recent frame of data should frames in memory have matching metadata.
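The sketch below illustrates, under stated assumptions, the framing described above: a hypothetical per-frame metadata header with a count value, and the secondary controller keeping the frame with the higher count when metadata matches. The field names and tie-breaking rule are assumptions, not the controllers' actual wire format.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical metadata the first controller inserts into each frame. */
    struct frame_meta {
        uint64_t lba;      /* logical block address of the customer data */
        uint32_t length;   /* payload length in bytes                    */
        uint32_t count;    /* count value distinguishing the newest frame */
    };

    struct frame {
        struct frame_meta meta;
        uint8_t payload[4096];   /* customer data segment (size is illustrative) */
    };

    /* Secondary controller: store the customer data in cache memory and the
     * metadata in a separate table indexed to the data's location. */
    void store_mirrored_frame(uint8_t *cache_mem, struct frame_meta *meta_table,
                              size_t slot, const struct frame *f)
    {
        /* If an existing entry has matching metadata, the higher count value
         * identifies the most recent frame; keep whichever is newer. */
        if (meta_table[slot].lba == f->meta.lba &&
            meta_table[slot].count >= f->meta.count)
            return;

        memcpy(cache_mem + slot * sizeof f->payload, f->payload, sizeof f->payload);
        meta_table[slot] = f->meta;   /* metadata indexed to the stored data */
    }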
Abstract:
A bus bridge on a primary RAID controller receives user write data from a host, writes the data to its write cache, and also broadcasts the data over a high speed link (e.g., PCI-Express) to a secondary RAID controller's bus bridge, which writes the data to its mirroring write cache. However, before writing the data, the secondary controller's bus bridge automatically invalidates the cache buffers to which the data is to be written, which relieves the primary controller's CPU of the need to send a message to the secondary controller's CPU instructing it to invalidate the cache buffers. The secondary controller's CPU programs its bus bridge at boot time with the base address of its mirrored write cache, to enable it to detect that a cache buffer needs invalidating in response to the broadcast write, and with the base address of its directory that holds the cache buffer valid bits.
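A minimal C sketch of the automatic invalidation step as described, assuming fixed-size cache buffers and a one-flag-per-buffer directory; the buffer size, directory layout, and function names are illustrative assumptions.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define CACHE_BUF_SIZE 4096u   /* assumed fixed cache buffer size */

    /* Values the secondary CPU programs into its bus bridge at boot time. */
    struct bridge_cfg {
        uintptr_t mirror_cache_base;   /* base address of the mirrored write cache      */
        uint8_t  *directory_valid;     /* base of directory holding per-buffer validity */
    };

    /* Invoked by the secondary bus bridge when a broadcast write arrives over
     * the link: invalidate the target cache buffer first, then write the data,
     * so no CPU-to-CPU invalidation message is needed. */
    void bridge_broadcast_write(struct bridge_cfg *cfg, uintptr_t dest_addr,
                                const void *data, size_t len)
    {
        if (dest_addr >= cfg->mirror_cache_base) {
            size_t buf_index = (dest_addr - cfg->mirror_cache_base) / CACHE_BUF_SIZE;
            cfg->directory_valid[buf_index] = 0;     /* invalidate first ...    */
            memcpy((void *)dest_addr, data, len);    /* ... then write payload  */
        }
    }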
Abstract:
A multiple-client memory arbitration system supports simultaneous access to a local cache memory and a remote cache memory for mirrored write operations to both cache memories, granted to one of a local arbitration device or a remote arbitration device at a time. The enhanced arbitration system includes active/active failover control by a surviving one of the local arbitration device or the remote arbitration device that have participated in the mirrored write operations between the respective local cache memory and remote cache memory.
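The following C sketch illustrates the one-device-at-a-time grant behavior implied above; the client identifiers, grant policy, and failover flag are assumptions made for illustration, not the arbitration system's actual design.

    #include <stdbool.h>

    enum arb_owner { ARB_NONE, ARB_LOCAL_DEVICE, ARB_REMOTE_DEVICE };

    struct cache_arbiter {
        enum arb_owner owner;   /* who currently holds the mirrored-write grant */
        bool remote_failed;     /* set when the peer arbitration device is lost */
    };

    /* Grant the mirrored-write path (local plus remote cache memory) to one
     * arbitration device at a time. */
    bool arb_request(struct cache_arbiter *a, enum arb_owner who)
    {
        if (a->owner == ARB_NONE || a->owner == who) {
            a->owner = who;
            return true;    /* granted: may write both cache memories */
        }
        return false;       /* busy: the other device holds the grant */
    }

    void arb_release(struct cache_arbiter *a, enum arb_owner who)
    {
        if (a->owner == who)
            a->owner = ARB_NONE;
    }

    /* Active/active failover: the surviving device takes over both cache paths. */
    void arb_failover_to_local(struct cache_arbiter *a)
    {
        a->remote_failed = true;
        a->owner = ARB_LOCAL_DEVICE;
    }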
Abstract:
A direct memory access (DMA) engine has virtually all control over data transfers that can involve one or both of the primary and secondary controllers. The DMA engine receives a command related to a data transfer from a processor associated with the primary controller. This command causes the DMA engine to access processor memory to obtain metadata. In performing a DMA operation, the metadata enables the DMA engine to conduct data transfers between local memory and remote memory. In performing exclusive OR operations, the DMA engine conducts data transfers using local memory.
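A simplified C sketch of how such a DMA engine might consume a command's metadata; the operation codes, metadata fields, and helper names are illustrative assumptions rather than the actual interface.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    enum dma_op { DMA_COPY_LOCAL_TO_REMOTE, DMA_XOR_LOCAL };

    /* Metadata the DMA engine fetches from processor memory after receiving a
     * command from the primary controller's processor. */
    struct dma_metadata {
        enum dma_op op;
        uint8_t    *src;    /* local memory source                */
        uint8_t    *dst;    /* local or remote memory destination */
        size_t      len;
    };

    /* The engine performs the transfer itself; the processor only issues the
     * command and supplies the metadata. */
    void dma_execute(const struct dma_metadata *m)
    {
        if (m->op == DMA_COPY_LOCAL_TO_REMOTE) {
            memcpy(m->dst, m->src, m->len);     /* local-to-remote data transfer */
        } else {                                /* exclusive OR using local memory */
            for (size_t i = 0; i < m->len; i++)
                m->dst[i] ^= m->src[i];
        }
    }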
Abstract:
A storage controller configured to adopt orphaned I/O ports is disclosed. The controller includes multiple field-replaceable units (FRUs) that plug into a backplane having local buses. At least two of the FRUs have microprocessors and memory for processing I/O requests received from host computers for accessing storage devices controlled by the controller. Other FRUs include I/O ports for receiving the requests from the hosts and bus bridges for bridging the I/O ports to the backplane local buses, in such a manner that if one of the processing FRUs fails, the surviving processing FRU detects the failure and responsively adopts the I/O ports previously serviced by the failed FRU to service subsequently received I/O requests on the adopted ports. The I/O port FRUs also include I/O ports for transferring data with the storage devices; these ports are likewise adopted by the surviving processing FRU.
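The sketch below gives a highly simplified C view of the adoption step: a surviving processing FRU, having detected a peer failure, takes ownership of the ports the failed FRU previously serviced. The structures and the liveness flag are assumptions for illustration.

    #include <stdbool.h>
    #include <stddef.h>

    struct io_port {
        int  owner_fru;       /* which processing FRU currently services this port */
        bool is_host_facing;  /* host-facing or storage-device-facing port         */
    };

    struct processing_fru {
        int  id;
        bool alive;           /* assumed to be derived from a failure-detection check */
    };

    /* Surviving FRU adopts every port previously serviced by the failed FRU,
     * so I/O requests arriving on those ports continue to be serviced. */
    void adopt_orphaned_ports(struct processing_fru *survivor,
                              const struct processing_fru *failed,
                              struct io_port ports[], size_t nports)
    {
        if (failed->alive)
            return;                       /* nothing to adopt */

        for (size_t i = 0; i < nports; i++)
            if (ports[i].owner_fru == failed->id)
                ports[i].owner_fru = survivor->id;
    }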