Granted Invention Patent
US06202139B1 Pipelined data cache with multiple ports and processor with load/store unit selecting only load or store operations for concurrent processing
Expired
- Patent title: Pipelined data cache with multiple ports and processor with load/store unit selecting only load or store operations for concurrent processing
- Application No.: US09100291; Filing date: 1998-06-19
- Publication No.: US06202139B1; Publication date: 2001-03-13
- Inventors: David B. Witt, James K. Pickett
- Applicants: David B. Witt, James K. Pickett
- Primary classification: G06F 13/00
- IPC classification: G06F 13/00
Abstract:
A computer system includes a processor having a cache which includes multiple ports, although a storage array included within the cache may employ fewer physical ports than the cache supports. The cache is pipelined and operates at a clock frequency higher than that employed by the remainder of a microprocessor including the cache. In one embodiment, the cache preferably operates at a clock frequency which is at least a multiple of the clock frequency at which the remainder of the microprocessor operates. The multiple is equal to the number of ports provided on the cache (or the ratio of the number of ports provided on the cache to the number of ports provided internally, if more than one port is supported internally). Accordingly, the accesses provided on each port of the cache during a clock cycle of the microprocessor clock can be sequenced into the cache pipeline prior to commencement of the subsequent clock cycle. In one particular embodiment, the load/store unit of the microprocessor is configured to select only load memory operations or only store memory operations for concurrent presentation to the data cache. Accordingly, the data cache may be performing only reads or only writes to its internal array during a clock cycle. The data cache may implement several techniques for accelerating access time based upon this feature. For example, the bit lines within the data cache array may be only balanced between accesses instead of precharging (and potentially balancing).
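The selection policy described in the abstract, in which the load/store unit presents only load operations or only store operations to the data cache in a given processor clock cycle, can be sketched as a simple in-order picker over a load/store queue. This is a minimal illustrative model, not the patent's implementation: the function name, the tuple representation of memory operations, and the two-port default are all assumptions for the sketch.

```python
from collections import deque

def select_memory_ops(queue, num_ports=2):
    """Pick up to num_ports operations of a single kind (all loads or
    all stores) from the head of the load/store queue, so that the cache
    array performs only reads or only writes during one processor cycle.
    Each queued entry is a (kind, address) tuple, e.g. ('load', 'A')."""
    if not queue:
        return []
    kind = queue[0][0]  # kind of the oldest pending operation
    selected = []
    # Take same-kind operations in program order until a different kind
    # is reached or all cache ports for this cycle are consumed.
    while queue and queue[0][0] == kind and len(selected) < num_ports:
        selected.append(queue.popleft())
    return selected
```

For example, with a queue holding two loads, a store, and another load, successive cycles would issue the two loads together, then the store alone, then the remaining load, mirroring the abstract's read-only-or-write-only cycles.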