Abstract:
A multiport memory architecture, systems including the same, and methods for using the same. The architecture generally includes (a) a memory array; (b) a plurality of ports configured to receive and/or transmit data; and (c) a plurality of port buffers, each of which is configured to transmit the data to and/or receive the data from one or more of the ports, and all of which are configured to (i) transmit the data to the memory array on a first common bus and (ii) receive the data from the memory array on a second common bus. The systems generally include those that embody one or more of the inventive concepts disclosed herein. The methods generally relate to writing blocks of data to, reading blocks of data from, and/or transferring blocks of data across a memory. The present invention advantageously reduces latency in data communications, particularly in network switches, by tightly coupling the port buffers to the main memory and by using point-to-point communications over long segments of the memory read and write paths, thereby reducing routing congestion and enabling the elimination of a FIFO. The invention advantageously shrinks chip size, provides increased data transmission rates and throughput, and, in preferred embodiments, reduces resistance and/or capacitance in the memory read and write buses.
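
To illustrate the data flow the abstract describes (port buffers staging blocks between the ports and a single memory array over one shared write bus and one shared read bus), the following is a minimal behavioral sketch in C. The names (port_buffer_t, write_bus_transfer, read_bus_transfer), the port count, block size, and memory depth are assumptions made for this sketch only; the actual architecture is hardware and is defined by the claims and detailed description, not by this model.

    /* Behavioral sketch of the multiport memory architecture described above.
     * All names and sizes are illustrative assumptions, not from the patent. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define NUM_PORTS   8     /* assumed number of ports                 */
    #define BLOCK_WORDS 4     /* assumed block size moved per access     */
    #define MEM_BLOCKS  256   /* assumed depth of the memory array       */

    /* One port buffer, tightly coupled to the memory array; it stages a
     * block of data between its port and the shared read/write buses.   */
    typedef struct {
        uint32_t block[BLOCK_WORDS];
        int      valid;       /* block held and ready to move */
    } port_buffer_t;

    /* The single memory array shared by all ports. */
    static uint32_t memory_array[MEM_BLOCKS][BLOCK_WORDS];

    /* Port buffers, one per port in this sketch. */
    static port_buffer_t port_buf[NUM_PORTS];

    /* First common bus: every port buffer writes to the memory array over
     * this one shared path, so only one buffer may drive it at a time.  */
    static void write_bus_transfer(int port, unsigned block_addr)
    {
        memcpy(memory_array[block_addr], port_buf[port].block,
               sizeof port_buf[port].block);
        port_buf[port].valid = 0;
    }

    /* Second common bus: the memory array returns a block to a selected
     * port buffer over the other shared path.                           */
    static void read_bus_transfer(int port, unsigned block_addr)
    {
        memcpy(port_buf[port].block, memory_array[block_addr],
               sizeof port_buf[port].block);
        port_buf[port].valid = 1;
    }

    int main(void)
    {
        /* Port 2 receives a block of data from its port... */
        for (int w = 0; w < BLOCK_WORDS; w++)
            port_buf[2].block[w] = 0xA0u + (uint32_t)w;
        port_buf[2].valid = 1;

        /* ...the block is written to the array on the first common bus, */
        write_bus_transfer(2, 17);

        /* ...and later read into port 5's buffer on the second common bus,
         * i.e. a block transferred across the memory between two ports. */
        read_bus_transfer(5, 17);

        for (int w = 0; w < BLOCK_WORDS; w++)
            printf("port 5 word %d = 0x%X\n", w, port_buf[5].block[w]);
        return 0;
    }

In this sketch, serializing transfers onto the two shared buses is what stands in for the point-to-point read and write paths between the port buffers and the memory array; arbitration among buffers, which a real device would require, is omitted.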