Abstract:
The present invention is generally directed to a system and method for fetching data from system memory to a device in communication with the system over a PCI bus, via an I/O cache. Broadly, the present invention may be viewed as a novel way to communicate certain fetching hints; namely, hints that specify certain qualities about the data that is to be fetched from the system memory. In operation, the I/O cache may use such hints to more effectively manage the data that passes through it. As one example, if, based upon the hints, the controller for the I/O cache knew (or assumed) that the data being fetched was ATM data, then it would also know, from the nature of ATM data, that precisely a forty-eight byte data payload was to be sent to the requesting device, and the I/O cache could pre-fetch precisely this amount of data (typically one or two cache lines). In accordance with one aspect of the invention, such a system includes an input/output (I/O) cache memory interposed between the system memory and the PCI bus, wherein the cache memory has internal memory space in the form of a plurality of data lines within the cache memory. The system further includes a plurality of registers for each PCI master that are configured to define fetching criteria. Finally, the system includes a register selector that is configured to select an active register among the plurality of registers, wherein fetching criteria for the device are specified by the active register.
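The following is a minimal sketch of the idea of per-master fetch-hint registers and an active-register selector. It is not the patented design; the structure names, the register count, the 64-byte cache line size, and the payload-size field are all illustrative assumptions used only to show how a selected hint could translate into a prefetch length.

/* Hypothetical per-master fetch-hint registers and active-register selector. */
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE_BYTES 64u
#define REGS_PER_MASTER   4u

/* One fetch-criteria register: expected payload size for the traffic type
 * this hint describes (e.g. ATM traffic with a 48-byte payload). */
typedef struct {
    uint32_t payload_bytes;
} fetch_criteria_reg;

/* Register file for one PCI master plus a selector naming the active register. */
typedef struct {
    fetch_criteria_reg regs[REGS_PER_MASTER];
    unsigned           active;      /* register selector */
} master_fetch_hints;

/* Number of cache lines the I/O cache would prefetch for this master,
 * derived from the currently selected fetching criteria. */
static unsigned lines_to_prefetch(const master_fetch_hints *m)
{
    uint32_t bytes = m->regs[m->active].payload_bytes;
    return (bytes + CACHE_LINE_BYTES - 1) / CACHE_LINE_BYTES;
}

int main(void)
{
    master_fetch_hints m = { .regs = { {48}, {512}, {64}, {1500} }, .active = 0 };
    /* With the ATM-style hint selected, exactly one 64-byte line is fetched. */
    printf("prefetch %u line(s)\n", lines_to_prefetch(&m));
    return 0;
}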
Abstract:
A bus arbitration circuit having a state machine that receives a processor request signal, a request signal from each of a group of internal input/output devices, and an external device request signal. The state machine sends a processor grant signal, a grant signal to one of the internal devices, or a grant signal to the external device, as each device receives control of the bus. The circuit has a signal inverter connected to the processor request signal and another signal inverter connected to the processor grant signal. A control signal determines whether the inverters invert these signals. When multiple arbitration circuits are cascaded, the processor request and grant signals are not inverted for the primary bus arbitration circuit, but are inverted for all secondary bus arbitration circuits.
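As a software-level illustration only, the sketch below models the conditional inversion described above: one control bit selects whether the processor request and grant lines pass through an inverter, which is how a secondary (cascaded) arbiter is assumed to differ from the primary one. The names, the polarity convention, and the single-bit control are assumptions, not the circuit itself.

/* Conditional inversion of processor request/grant lines, modeled in C. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool invert;   /* control signal: false = primary arbiter, true = secondary */
} arbiter_cfg;

/* A request or grant signal passes through an optional inverter. */
static bool cond_invert(const arbiter_cfg *a, bool signal)
{
    return a->invert ? !signal : signal;
}

int main(void)
{
    arbiter_cfg primary   = { .invert = false };
    arbiter_cfg secondary = { .invert = true  };

    bool cpu_req = true;   /* processor asserts its request */
    printf("primary sees req=%d, secondary sees req=%d\n",
           cond_invert(&primary, cpu_req),
           cond_invert(&secondary, cpu_req));
    return 0;
}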
Abstract:
A method includes receiving a first buffer allocation command from a first processor, the allocation command including a register address associated with a pool of buffers in a shared memory, determining whether a buffer is available in the buffer pool based upon a buffer index corresponding to a free buffer, and, if a buffer is determined to be available, allocating the buffer to the first processor.
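A hedged sketch of the allocation step follows: servicing an allocation command means consulting an index of free buffers and handing one back, or reporting that none is available. The pool layout (a simple stack of free indices), the pool size, and the sentinel value are assumptions made for illustration.

/* Illustrative free-buffer pool consulted when an allocation command arrives. */
#include <stdint.h>
#include <stdio.h>

#define POOL_SIZE   8u
#define NO_BUFFER   0xFFFFFFFFu

typedef struct {
    uint32_t free_index[POOL_SIZE];  /* indices of free buffers (a simple stack) */
    uint32_t free_count;
} buffer_pool;

/* Handle an allocation command from a processor: pop a free buffer index
 * if one exists, otherwise report that the pool is empty. */
static uint32_t allocate_buffer(buffer_pool *pool)
{
    if (pool->free_count == 0)
        return NO_BUFFER;
    return pool->free_index[--pool->free_count];
}

int main(void)
{
    buffer_pool pool = { .free_index = {0,1,2,3,4,5,6,7}, .free_count = POOL_SIZE };
    uint32_t idx = allocate_buffer(&pool);
    if (idx != NO_BUFFER)
        printf("allocated buffer %u to the requesting processor\n", idx);
    return 0;
}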
Abstract:
A system for transferring data between main memory and an input/output device in a computer system, where device driver software stores an address of a circular buffer into the device and the device then automatically transfers data to or from the buffer. The system reduces complexity within the device by always starting the circular buffer on a page boundary and allowing the circular buffer to be only one page long. Each time the buffer address passes either zero or half the buffer size, the system interrupts the processor to allow the driver software to transfer, to a hard disk or another area of memory, the contents of the half of the buffer that was just processed. The system further reduces complexity by transferring only eight bits of data into each word of the buffer within memory, thereby avoiding the complexity of byte packing.
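The sketch below shows, under assumed parameters (a 4 KB page, 32-bit words with only 8 data bits used per word), why the page-aligned one-page buffer keeps the device simple: the buffer offset wraps with a single mask, and crossing offset zero or the half-way point is the condition for interrupting the processor so the driver can drain the half just filled.

/* One-page circular buffer: mask-based wrap and half/zero interrupt points. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define HALF      (PAGE_SIZE / 2u)

/* Advance the buffer offset by one 32-bit word (only 8 data bits used per
 * word, avoiding byte packing) and report whether an interrupt is due. */
static bool advance(uint32_t *offset)
{
    uint32_t next = (*offset + 4u) & (PAGE_SIZE - 1u);   /* wrap within one page */
    bool irq = (next == 0u) || (next == HALF);           /* crossed zero or half */
    *offset = next;
    return irq;
}

int main(void)
{
    uint32_t offset = 0;
    unsigned irqs = 0;
    for (unsigned i = 0; i < PAGE_SIZE / 4u; i++)        /* one full pass */
        if (advance(&offset))
            irqs++;
    printf("interrupts per buffer pass: %u\n", irqs);    /* expect 2 */
    return 0;
}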
Abstract:
A system can log data access activity to a memory array with a metadata module while the memory array is logically divided into multiple namespaces. A workload can be determined for each namespace by the metadata module and a metadata strategy can be created with the metadata module in view of the respective namespace workloads. A first metadata and second metadata may be generated for respective first and second user-generated data for storage into a first namespace of the multiple namespaces. The first metadata can be compressed with a compression level prescribed by the metadata strategy in response to a detected or predicted workload to the first namespace before the first metadata, second metadata, first user-generated data, and second user-generated data are each stored in the first namespace.
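The fragment below is only a rough illustration of the metadata-strategy idea: a compression level for a namespace's metadata is chosen from its detected or predicted workload. The workload classes and the particular level mapping are invented for the example and are not taken from the disclosure.

/* Illustrative mapping from namespace workload to metadata compression level. */
#include <stdio.h>

typedef enum { WORKLOAD_LOW, WORKLOAD_MEDIUM, WORKLOAD_HIGH } workload;

/* Heavier workloads get lighter compression so metadata stays cheap to
 * access; lightly used namespaces can afford stronger compression. */
static int metadata_compression_level(workload w)
{
    switch (w) {
    case WORKLOAD_HIGH:   return 1;
    case WORKLOAD_MEDIUM: return 5;
    default:              return 9;
    }
}

int main(void)
{
    printf("first namespace (high workload): compression level %d\n",
           metadata_compression_level(WORKLOAD_HIGH));
    return 0;
}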
Abstract:
A method includes storing a plurality of system status messages of a specified size, and transmitting the status messages as a combined status message of a size larger than said specified size to an external device. In one aspect, the system status messages may have sizes that are less than the width of a bus, and said transmitting the combined status message includes transmitting the combined status message having a width equal to a width of the bus.
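For illustration, the sketch below packs several narrow status messages into one bus-width transfer, assuming 8-bit status messages and a 32-bit bus; both widths are assumptions chosen only to make the combining step concrete.

/* Combine four 8-bit status messages into one 32-bit (bus-width) message. */
#include <stdint.h>
#include <stdio.h>

#define MSGS_PER_WORD 4u

static uint32_t combine_status(const uint8_t msgs[MSGS_PER_WORD])
{
    uint32_t combined = 0;
    for (unsigned i = 0; i < MSGS_PER_WORD; i++)
        combined |= (uint32_t)msgs[i] << (8u * i);   /* place each message in its byte lane */
    return combined;
}

int main(void)
{
    uint8_t status[MSGS_PER_WORD] = { 0x01, 0x7F, 0x00, 0xA5 };
    printf("combined status word: 0x%08X\n", combine_status(status));
    return 0;
}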
Abstract:
In a method and apparatus that ensures data consistency between an I/O channel and a processor, system software issues an instruction which causes the issuance of a transaction when notification of a DMA completion is received. The transaction instructs the I/O channel to enforce coherency and then responds back only after coherency has been ensured. Specifically, a DMA_SYNC transaction is broadcast to all I/O channels in the system. Responsive thereto, each I/O channel writes back to memory any modified lines in its cache that might contain DMA data for a DMA sequence that was reported by the system as completed. The I/O channels have a reporting means to indicate when this transaction is completed, so that the DMA_SYNC transaction does not have to complete in pipeline order. Thus, the I/O channel can issue new transactions before responding to the DMA_SYNC transaction.
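The following is a simplified software model of the DMA_SYNC handling described above, under the assumption that each I/O channel can tell which of its modified cache lines belong to a DMA sequence already reported as complete: those lines are written back first, and only then is the DMA_SYNC acknowledged. The data structures are assumptions made for the example.

/* Model of an I/O channel flushing completed-DMA lines before acknowledging DMA_SYNC. */
#include <stdbool.h>
#include <stdio.h>

#define CACHE_LINES 8

typedef struct {
    bool modified;        /* line holds dirty data */
    bool completed_dma;   /* data belongs to a DMA reported as complete */
} cache_line;

typedef struct {
    cache_line line[CACHE_LINES];
} io_channel;

/* Handle DMA_SYNC: write back qualifying lines, then acknowledge. */
static bool handle_dma_sync(io_channel *ch)
{
    for (int i = 0; i < CACHE_LINES; i++) {
        if (ch->line[i].modified && ch->line[i].completed_dma) {
            /* write back to memory (omitted here), then mark the line clean */
            ch->line[i].modified = false;
        }
    }
    return true;   /* acknowledge only after all such lines are written back */
}

int main(void)
{
    io_channel ch = { .line = { [2] = { true, true }, [5] = { true, false } } };
    if (handle_dma_sync(&ch))
        printf("DMA_SYNC acknowledged; coherency ensured\n");
    return 0;
}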