Virtual instruction cache system using length responsive decoded instruction shifting and merging with prefetch buffer outputs to fill instruction buffer
    2.
    Granted Invention Patent
    Virtual instruction cache system using length responsive decoded instruction shifting and merging with prefetch buffer outputs to fill instruction buffer (Expired)

    Publication No.: US5113515A

    Publication Date: 1992-05-12

    Application No.: US306831

    Filing Date: 1989-02-03

    IPC Classification: G06F9/30 G06F9/38

    Abstract: An instruction buffer of a high speed digital computer controls the flow of the instruction stream to an instruction decoder. The buffer provides the decoder with nine bytes of sequential instruction stream. The instruction set used by the computer is of the variable length type, so the decoder consumes a variable number of instruction stream bytes depending upon the type of instruction being decoded. As each instruction is consumed, a shifter removes the consumed bytes and repositions the remaining bytes into the lowest order positions. The byte positions left empty by the shifter are filled with instruction stream retrieved from one of a pair of prefetch buffers (IBEX, IBEX2) or from a virtual instruction cache. These prefetch buffers are arranged to hold the next two subsequent quadwords of instruction stream and to provide the desired missing bytes. The IBEX prefetch buffer is refilled from the instruction cache after being emptied, but before those particular bytes are requested to fill the instruction decoder. This two-level prefetching allows the relatively slow process of cache access to be performed during noncritical time. The instruction decoder is not stalled waiting for a cache refill, but can ordinarily obtain the desired bytes of instruction stream from the prefetch buffer.
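
    To make the refill mechanism easier to follow, here is a minimal behavioral sketch in Python of the shift-and-merge scheme the abstract describes: a nine-byte decode window, a shifter that drops consumed bytes, and two quadword prefetch buffers (IBEX, IBEX2) that backfill the window before the instruction cache has to be touched. The class, method, and stream names are illustrative assumptions, not details taken from the patent.

```python
import io

QUADWORD = 8   # bytes per prefetch buffer (one quadword)
WINDOW   = 9   # bytes of sequential instruction stream presented to the decoder

class InstructionBuffer:
    """Toy model: a decode window backfilled from two quadword prefetch buffers."""

    def __init__(self, icache_stream):
        self.icache = icache_stream      # stands in for the virtual instruction cache
        self.window = bytearray()        # bytes currently visible to the decoder
        self.ibex   = bytearray()        # first prefetch buffer  (IBEX)
        self.ibex2  = bytearray()        # second prefetch buffer (IBEX2)
        self._refill()

    def _advance_prefetch(self):
        """Promote IBEX2 into IBEX and refill IBEX2 from the instruction cache."""
        self.ibex = self.ibex2
        self.ibex2 = bytearray(self.icache.read(QUADWORD))

    def _refill(self):
        # Backfill the decode window from IBEX; IBEX is itself backfilled from
        # IBEX2 and, ultimately, from the slower instruction cache.
        while len(self.window) < WINDOW:
            if not self.ibex:
                self._advance_prefetch()
                if not self.ibex and not self.ibex2:   # instruction stream exhausted
                    break
                continue
            need = WINDOW - len(self.window)
            self.window += self.ibex[:need]            # merge prefetched bytes in
            del self.ibex[:need]

    def peek(self):
        """The decoder always sees up to WINDOW sequential bytes."""
        return bytes(self.window)

    def consume(self, nbytes):
        """A variable-length instruction of nbytes was decoded: shift and backfill."""
        del self.window[:nbytes]                       # the 'shifter'
        self._refill()

buf = InstructionBuffer(io.BytesIO(bytes(range(64))))
print(buf.peek().hex())   # nine sequential bytes for the decoder
buf.consume(3)            # a three-byte instruction was consumed
print(buf.peek().hex())   # window shifted down and backfilled from IBEX
```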

System for translation of virtual to physical addresses by operating memory management processor for calculating location of physical address in memory concurrently with cache comparing virtual addresses for translation
    3.
    Granted Invention Patent
    System for translation of virtual to physical addresses by operating memory management processor for calculating location of physical address in memory concurrently with cache comparing virtual addresses for translation (Expired)

    Publication No.: US5349651A

    Publication Date: 1994-09-20

    Application No.: US746007

    Filing Date: 1991-08-09

    IPC Classification: G06F12/08 G06F12/10 G06F12/00

    CPC Classification: G06F12/0855 G06F12/1045

    Abstract: In the field of high speed computers it is common for a central processing unit to reference memory locations via a virtual addressing scheme rather than by actual physical memory addresses. In a multi-tasking environment, this virtual addressing scheme reduces the possibility of different programs accessing the same physical memory location. Thus, to maintain computer processing speed, a high speed translation buffer cache is employed to perform the necessary virtual-to-physical conversions for memory reference instructions. The translation buffer cache stores a number of previously translated virtual addresses and their corresponding physical addresses. A memory management processor is employed to update the translation buffer cache with the most recently accessed physical memory locations. The memory management processor consists of a state machine controlling hardware designed specifically to update the translation buffer cache. The memory management processor calculates the address of the memory location where the physical address is stored concurrently with the translation buffer cache comparing the virtual address against the virtual addresses it already stores. With this arrangement the memory management unit can immediately access memory to retrieve the physical address upon a "miss" in the translation buffer cache.

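    As a rough software model (not the patented hardware) of the concurrency the abstract describes, the sketch below computes the page-table-entry address at the same point that the translation buffer performs its tag comparison, so a miss can go straight to memory. The page size, PTE size, dictionary-backed memory, and all names are illustrative assumptions.

```python
# Simplified model: the PTE address calculation (memory management processor's job)
# proceeds alongside the translation buffer's tag comparison, so a miss needs no
# extra address-calculation step. Parameters below are illustrative, not from the patent.

PAGE_SHIFT = 9          # 512-byte pages, purely illustrative
PTE_SIZE   = 4          # bytes per page table entry, purely illustrative

class TranslationBuffer:
    def __init__(self, memory, page_table_base):
        self.entries = {}                    # vpn -> pfn, models the TB cache
        self.memory = memory                 # models main memory holding the page table
        self.page_table_base = page_table_base

    def translate(self, vaddr):
        vpn    = vaddr >> PAGE_SHIFT
        offset = vaddr & ((1 << PAGE_SHIFT) - 1)

        # These two steps happen concurrently in the hardware the abstract describes;
        # here they are simply both performed before the hit/miss decision.
        pte_addr = self.page_table_base + vpn * PTE_SIZE   # PTE location in memory
        hit      = vpn in self.entries                      # translation buffer lookup

        if hit:
            pfn = self.entries[vpn]
        else:
            # Miss: the PTE address is already known, so memory is accessed
            # immediately rather than after a separate calculation step.
            pfn = self.memory[pte_addr]
            self.entries[vpn] = pfn          # refill the translation buffer
        return (pfn << PAGE_SHIFT) | offset

# Usage: a toy page table mapping virtual page 3 to physical frame 7.
memory = {0x1000 + 3 * PTE_SIZE: 7}
tb = TranslationBuffer(memory, page_table_base=0x1000)
print(hex(tb.translate((3 << PAGE_SHIFT) | 0x2A)))   # first access misses and refills
print(hex(tb.translate((3 << PAGE_SHIFT) | 0x2A)))   # second access hits
```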

Method and apparatus using a cache and main memory for both vector processing and scalar processing by prefetching cache blocks including vector data elements
    5.
    Granted Invention Patent
    Method and apparatus using a cache and main memory for both vector processing and scalar processing by prefetching cache blocks including vector data elements (Expired)

    Publication No.: US4888679A

    Publication Date: 1989-12-19

    Application No.: US142794

    Filing Date: 1988-01-11

    Abstract: A main memory and cache suitable for scalar processing are used in connection with a vector processor by issuing prefetch requests in response to the recognition of a vector load instruction. A respective prefetch request is issued for each block containing an element of the vector to be loaded from memory. In response to a prefetch request, the cache is checked for a "miss," and if the cache does not include the required block, a refill request is sent to the main memory. The main memory is configured into a plurality of banks and has the capability of processing multiple references. Therefore, different banks can be referenced simultaneously to prefetch multiple blocks of vector data. Preferably, a cache bypass is provided to transmit data directly to the vector processor as the data from the main memory are being stored in the cache. In a preferred embodiment, a vector processor is added to a digital computing system including a scalar processor, a virtual address translation buffer, a main memory and a cache. The scalar processor includes a microcode interpreter which sends a vector load command to the vector processing unit and which also generates the vector prefetch requests. The addresses of the data blocks to be prefetched are computed from the vector address, the length of the vector, and the "stride," or spacing between the addresses of the elements of the vector.

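    The sketch below illustrates, under assumed block and element sizes, how the prefetch block addresses can be derived from the vector's base address, length, and stride, and how the resulting refill requests could be spread across memory banks. BLOCK_SIZE, ELEM_SIZE, the bank-selection rule, and the helper names are illustrative, not taken from the patent.

```python
# Deriving the cache-block prefetch addresses from (base, length, stride), then
# routing refill requests to memory banks so several blocks can be fetched in
# parallel. All sizes and the bank mapping are illustrative assumptions.

BLOCK_SIZE = 64          # bytes per cache block (illustrative)
ELEM_SIZE  = 8           # bytes per vector element (illustrative)

def vector_prefetch_blocks(base, length, stride):
    """Return each distinct cache-block address covering the vector's elements."""
    blocks = []
    for i in range(length):
        addr  = base + i * stride * ELEM_SIZE
        block = addr - (addr % BLOCK_SIZE)       # align down to the block boundary
        if not blocks or blocks[-1] != block:    # addresses are visited in order
            blocks.append(block)
    return blocks

def prefetch(cache, memory_banks, block_addr):
    """Check the cache; on a miss, queue a refill request on the owning bank."""
    if block_addr not in cache:
        bank = (block_addr // BLOCK_SIZE) % len(memory_banks)
        memory_banks[bank].append(block_addr)    # banks can be referenced in parallel
        cache.add(block_addr)

cache, banks = set(), [[] for _ in range(4)]
for block in vector_prefetch_blocks(base=0x2000, length=16, stride=2):
    prefetch(cache, banks, block)
print(banks)   # refill requests spread across the four memory banks
```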

    Method and apparatus for ordering and queueing multiple memory requests
    7.
    Granted Invention Patent
    Method and apparatus for ordering and queueing multiple memory requests (Expired)

    Publication No.: US5222223A

    Publication Date: 1993-06-22

    Application No.: US306870

    Filing Date: 1989-02-03

    Abstract: In a pipelined computer system 10, memory access requests are generated simultaneously from a plurality of different locations. These multiple requests are passed through a multiplexer 50 according to a prioritization scheme based upon the operational proximity of each request to the instruction currently being executed. In this manner, the complex task of converting virtual to physical addresses is accomplished for all memory access requests by a single translation buffer 30. The physical addresses output from the translation buffer 30 are passed to a cache 28 through a second multiplexer 40 according to a second prioritization scheme, likewise based upon the operational proximity of the request to the instruction currently being executed. The first and second prioritization schemes differ in that the memory is capable of handling other requests while a higher priority "miss" is pending. Thus, the prioritization scheme temporarily suspends the higher priority request while the desired data is being retrieved from main memory 14, but continues to operate on a lower priority request, so that overall operation is enhanced if the lower priority request hits in the cache 28.
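
    The sketch below models only the second prioritization scheme mentioned in the abstract: requests are serviced in priority order, but when the highest-priority request misses and is suspended while main memory is accessed, a lower-priority request that hits in the cache can still be serviced. The priority encoding, queue structures, and names are illustrative assumptions.

```python
# Behavioral model of cache-side arbitration: highest priority first, but a
# suspended miss does not block lower-priority requests that hit in the cache.
# Priority levels, queue layout, and names are illustrative, not from the patent.

from collections import deque

class CacheArbiter:
    def __init__(self, cache_lines):
        self.cache = set(cache_lines)       # physical addresses currently cached
        self.queues = {}                    # priority -> deque of physical addresses
        self.pending_miss = None            # higher-priority request awaiting refill

    def submit(self, priority, paddr):
        self.queues.setdefault(priority, deque()).append(paddr)

    def step(self):
        """Service one request, skipping past a suspended higher-priority miss."""
        for prio in sorted(self.queues):         # lower number = operationally closer
            q = self.queues[prio]                # to the instruction being executed
            if not q:
                continue
            paddr = q[0]
            if paddr in self.cache:
                q.popleft()
                return ("hit", prio, paddr)
            if self.pending_miss is None:
                q.popleft()
                self.pending_miss = (prio, paddr)   # suspend; refill from main memory
                continue                            # let lower priorities proceed
            # another miss while one refill is outstanding: leave it queued
        return None

    def refill_done(self):
        """Main memory returned the data for the suspended request."""
        prio, paddr = self.pending_miss
        self.cache.add(paddr)
        self.pending_miss = None
        return ("refill", prio, paddr)

arb = CacheArbiter(cache_lines={0x200})
arb.submit(0, 0x100)      # high priority, will miss
arb.submit(1, 0x200)      # lower priority, hits while the miss is outstanding
print(arb.step())         # suspends the miss, then services the lower-priority hit
print(arb.refill_done())  # the suspended request completes once memory responds
```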