1. Pixel engine pipeline processor data caching mechanism
    Invention grant (expired)

    Publication No.: US5761720A

    Publication Date: 1998-06-02

    Application No.: US616540

    Filing Date: 1996-03-15

    Abstract: A method and an apparatus for providing requested data to a pipeline processor. A pipeline processor in a graphics computer system is provided with a data caching mechanism which supplies requested data to one of the stages in the pipeline processor after a request from a prior stage in the pipeline processor. With the sequential nature of the pipeline processor, a prior stage which knows in advance the data which will be requested by a subsequent stage can make a memory request to the data caching mechanism. When processing reaches the subsequent stage in the pipeline processor, the data caching mechanism provides the requested data to the subsequent processing stage with minimal or no lag time from memory access. In addition, the data caching mechanism includes an adaptive cache memory which is optimized to provide maximum performance based on the particular mode in which the associated pipeline processor is operating. Furthermore, the adaptive cache includes an intelligent replacement policy based on the direction in which data is being read from memory as well as the particular mode in which the associated pipeline processor is operating.
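
    The prefetch handshake described in this abstract can be sketched in a few lines of code. The C++ below is only an illustration, not the patented implementation: the names (TextureMemory, StageCache, prefetch, fetch) and the fixed 20-cycle memory latency are assumptions introduced for the example. It shows the one idea the abstract relies on: a prior stage that already knows the address a later stage will need issues the memory request early, so the later stage finds the data resident and waits little or not at all.

        #include <cstddef>
        #include <cstdint>
        #include <iostream>
        #include <unordered_map>
        #include <vector>

        // Simulated texture memory with a fixed (assumed) access latency in cycles.
        struct TextureMemory {
            std::vector<std::uint32_t> texels;
            static constexpr int kLatency = 20;
            std::uint32_t read(std::size_t addr, int& cycles) const {
                cycles += kLatency;              // every access pays the full memory latency
                return texels[addr];
            }
        };

        // Small cache sitting between the pipeline stages and texture memory.
        class StageCache {
        public:
            explicit StageCache(const TextureMemory& mem) : mem_(mem) {}

            // Issued by a *prior* stage as soon as it knows which texel a later
            // stage will need; the memory latency is absorbed here, in advance.
            void prefetch(std::size_t addr, int& cycles) {
                if (lines_.find(addr) == lines_.end())
                    lines_[addr] = mem_.read(addr, cycles);
            }

            // Issued by the *subsequent* stage when it actually needs the texel.
            std::uint32_t fetch(std::size_t addr, int& cycles) {
                auto it = lines_.find(addr);
                if (it != lines_.end()) return it->second;   // hit: no added lag
                return mem_.read(addr, cycles);              // miss: full latency
            }

        private:
            const TextureMemory& mem_;
            std::unordered_map<std::size_t, std::uint32_t> lines_;
        };

        int main() {
            TextureMemory mem{std::vector<std::uint32_t>(256, 0xABCDu)};
            StageCache cache(mem);

            int prefetch_cycles = 0;   // paid while earlier pixels are still in flight
            int stage_cycles = 0;      // what the subsequent stage actually waits

            // Prior stage: the next few texel addresses are already known,
            // so requests are issued to the cache ahead of time.
            for (std::size_t addr = 0; addr < 8; ++addr)
                cache.prefetch(addr, prefetch_cycles);

            // Subsequent stage: the data is already resident when it arrives,
            // so it sees minimal or no lag time from memory access.
            for (std::size_t addr = 0; addr < 8; ++addr)
                cache.fetch(addr, stage_cycles);

            std::cout << "subsequent-stage wait cycles: " << stage_cycles << "\n";  // 0 here
        }

    In this sketch every address the subsequent stage touches was requested earlier, so its wait count is zero; any address that had not been prefetched would pay the full memory latency instead.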

2. Pixel engine data caching mechanism
    Invention grant (expired)

    Publication No.: US6157987A

    Publication Date: 2000-12-05

    Application No.: US976748

    Filing Date: 1997-11-24

    Abstract: A method and an apparatus for providing requested data to a pipeline processor. A pipeline processor in a graphics computer system is provided with a data caching mechanism which supplies requested data to one of the stages in the pipeline processor after a request from a prior stage in the pipeline processor. With the sequential nature of the pipeline processor, a prior stage which knows in advance the data which will be requested by a subsequent stage can make a memory request to the disclosed data caching mechanism. When processing reaches the subsequent stage in the pipeline processor, the disclosed data caching mechanism provides the requested data to the subsequent processing stage with minimal or no lag time from memory access. In addition, the disclosed data caching mechanism features an adaptive cache memory which is optimized to provide maximum performance based on the particular mode in which the associated pipeline processor is operating. Furthermore, the adaptive cache disclosed in the present invention features an intelligent replacement policy based on the direction in which data is being read from memory as well as the particular mode in which the associated pipeline processor is operating. Accordingly, the adaptive cache of the present invention provides maximum performance without employing a large and expensive prior art cache memory.
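
    The "intelligent replacement policy" this abstract adds can likewise be sketched under stated assumptions: when the pixel engine walks memory left to right, cached lines behind the current x position are the least likely to be reused, so they are evicted first; a right-to-left walk reverses the choice. The class and member names below (DirectionAwareCache, ScanDirection) and the fully associative, x-indexed organization are illustrative, not the patent's actual cache structure.

        #include <cstddef>
        #include <cstdint>
        #include <iostream>
        #include <iterator>
        #include <map>

        enum class ScanDirection { LeftToRight, RightToLeft };

        // Tiny, fully associative cache keyed by the x coordinate of each line.
        class DirectionAwareCache {
        public:
            explicit DirectionAwareCache(std::size_t capacity) : capacity_(capacity) {}

            void set_direction(ScanDirection d) { dir_ = d; }

            // Install a line, evicting one first if the cache is full.
            void insert(int x, std::uint32_t data) {
                if (lines_.size() >= capacity_ && lines_.find(x) == lines_.end())
                    evict();
                lines_[x] = data;
            }

            bool contains(int x) const { return lines_.find(x) != lines_.end(); }

        private:
            // Victim selection follows the direction of the memory walk: the line
            // farthest behind the access stream is the least likely to be reused.
            void evict() {
                auto victim = (dir_ == ScanDirection::LeftToRight)
                                  ? lines_.begin()             // smallest x is behind us
                                  : std::prev(lines_.end());   // largest x is behind us
                lines_.erase(victim);
            }

            std::size_t capacity_;
            ScanDirection dir_ = ScanDirection::LeftToRight;
            std::map<int, std::uint32_t> lines_;   // ordered by x for cheap min/max
        };

        int main() {
            DirectionAwareCache cache(4);
            cache.set_direction(ScanDirection::LeftToRight);

            // Walk a scanline left to right; the cache holds only four lines,
            // so the two lines farthest behind the scan get replaced.
            for (int x = 0; x < 6; ++x)
                cache.insert(x, 0u);

            std::cout << std::boolalpha
                      << "x=0 still cached: " << cache.contains(0) << "\n"   // false (behind)
                      << "x=5 still cached: " << cache.contains(5) << "\n";  // true  (ahead)
        }

    Keeping the cache small and steering eviction by scan direction (and, in the patent's description, by operating mode) is what lets the abstract claim maximum performance without a large, expensive cache: the few lines retained are the ones the pipeline is most likely to touch next.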
