SNOOP FILTER DIRECTORY MECHANISM IN COHERENCY SHARED MEMORY SYSTEM
    1.
    Invention application
    SNOOP FILTER DIRECTORY MECHANISM IN COHERENCY SHARED MEMORY SYSTEM (Pending, published)

    Publication number: US20070294481A1

    Publication date: 2007-12-20

    Application number: US11848960

    Filing date: 2007-08-31

    IPC classification: G06F12/00

    Abstract: Methods and apparatus are provided that may be utilized to maintain the coherency of data accessed by both a processor and a remote device. Various mechanisms, such as a remote cache directory, a castout buffer, and/or an outstanding transaction buffer, may be utilized by the remote device to track the state of processor cache lines that may hold data targeted by requests initiated by the remote device. Based on the content of these mechanisms, requests targeting data that is not in the processor cache may be routed directly to memory, thus reducing overall latency.

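    The routing decision this abstract describes can be pictured with a small sketch. The C++ below is a minimal model under stated assumptions: the names (SnoopFilter, can_route_to_memory, the three tracking sets) are hypothetical and not taken from the patent text, and real hardware would track per-line coherency state rather than bare addresses. A request goes straight to memory only when none of the tracking structures suggests the processor cache may hold the target line.

    #include <cstdint>
    #include <iostream>
    #include <unordered_set>

    // Hypothetical sketch of the routing decision; structure and function
    // names are illustrative assumptions, not the patent's terminology.
    struct SnoopFilter {
        std::unordered_set<uint64_t> remote_cache_directory;    // lines believed cached by the processor
        std::unordered_set<uint64_t> castout_buffer;            // lines recently cast out, state uncertain
        std::unordered_set<uint64_t> outstanding_transactions;  // lines with in-flight bus transactions

        // True when no tracking structure indicates the processor cache may
        // hold the line, so the request may be routed directly to memory.
        bool can_route_to_memory(uint64_t line_addr) const {
            return remote_cache_directory.count(line_addr) == 0 &&
                   castout_buffer.count(line_addr) == 0 &&
                   outstanding_transactions.count(line_addr) == 0;
        }
    };

    int main() {
        SnoopFilter filter;
        filter.remote_cache_directory.insert(0x1000);  // processor is believed to cache line 0x1000

        for (uint64_t addr : {0x1000ULL, 0x2000ULL}) {
            std::cout << std::hex << "line 0x" << addr << ": "
                      << (filter.can_route_to_memory(addr)
                              ? "route directly to memory"
                              : "snoop the processor cache first")
                      << "\n";
        }
        return 0;
    }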

    Snoop filter directory mechanism in coherency shared memory system
    3.
    Invention application
    Snoop filter directory mechanism in coherency shared memory system (Expired)

    Publication number: US20060080508A1

    Publication date: 2006-04-13

    Application number: US10961749

    Filing date: 2004-10-08

    IPC classification: G06F12/00

    Abstract: Methods and apparatus are provided that may be utilized to maintain the coherency of data accessed by both a processor and a remote device. Various mechanisms, such as a remote cache directory, a castout buffer, and/or an outstanding transaction buffer, may be utilized by the remote device to track the state of processor cache lines that may hold data targeted by requests initiated by the remote device. Based on the content of these mechanisms, requests targeting data that is not in the processor cache may be routed directly to memory, thus reducing overall latency.

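    This abstract is identical to the one above, so the sketch here illustrates a complementary facet: how the castout buffer might bridge the window between a processor castout and its completed writeback. All names and the exact update policy are assumptions, not claims from the patent.

    #include <cstdint>
    #include <iostream>
    #include <unordered_set>

    // Illustrative-only sketch of keeping the tracking structures in sync.
    struct RemoteTracker {
        std::unordered_set<uint64_t> directory;       // lines currently believed cached by the processor
        std::unordered_set<uint64_t> castout_buffer;  // castouts observed but not yet confirmed complete

        void on_processor_allocate(uint64_t line) { directory.insert(line); }

        // A castout removes the line from the directory but parks it in the
        // castout buffer until the writeback is known to have reached memory,
        // so requests to that line are still snooped in the meantime.
        void on_processor_castout(uint64_t line) {
            directory.erase(line);
            castout_buffer.insert(line);
        }

        void on_castout_complete(uint64_t line) { castout_buffer.erase(line); }

        bool must_snoop(uint64_t line) const {
            return directory.count(line) != 0 || castout_buffer.count(line) != 0;
        }
    };

    int main() {
        RemoteTracker t;
        t.on_processor_allocate(0x40);
        t.on_processor_castout(0x40);
        std::cout << "after castout, must snoop 0x40: " << t.must_snoop(0x40) << "\n";  // prints 1
        t.on_castout_complete(0x40);
        std::cout << "after writeback, must snoop 0x40: " << t.must_snoop(0x40) << "\n";  // prints 0
        return 0;
    }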

    Enhanced bus transactions for efficient support of a remote cache directory copy
    4.
    Invention application
    Enhanced bus transactions for efficient support of a remote cache directory copy (Pending, published)

    Publication number: US20060080511A1

    Publication date: 2006-04-13

    Application number: US10961742

    Filing date: 2004-10-08

    IPC classification: G06F12/00

    CPC classification: G06F12/0828 G06F12/0833

    Abstract: Methods and apparatus are provided that may be utilized to maintain a copy of a processor cache directory on a remote device that may access data residing in a cache of the processor. Enhanced bus transactions containing the cache coherency information used to maintain the remote cache directory may be generated automatically when the processor allocates or de-allocates cache lines. Rather than querying the processor cache directory before each memory access to determine whether the processor cache contains an updated copy of the requested data, the remote device may query its remote copy.

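    A minimal sketch of how such bus transactions might be consumed, assuming a simple allocate/de-allocate message format. The type names (EnhancedBusTransaction, RemoteDirectoryCopy) and the message layout are illustrative assumptions; the point is only that the remote device answers coherency queries from its own copy instead of interrogating the processor.

    #include <cstdint>
    #include <iostream>
    #include <unordered_set>
    #include <vector>

    // Hypothetical message carrying the coherency information; field and
    // type names are assumptions for illustration only.
    enum class BusOp { Allocate, Deallocate };

    struct EnhancedBusTransaction {
        BusOp    op;
        uint64_t line_addr;
    };

    struct RemoteDirectoryCopy {
        std::unordered_set<uint64_t> lines;  // shadow copy of the processor cache directory

        void apply(const EnhancedBusTransaction& txn) {
            if (txn.op == BusOp::Allocate) lines.insert(txn.line_addr);
            else                           lines.erase(txn.line_addr);
        }

        // The remote device queries its own copy instead of the processor.
        bool processor_may_hold(uint64_t line_addr) const {
            return lines.count(line_addr) != 0;
        }
    };

    int main() {
        RemoteDirectoryCopy copy;
        std::vector<EnhancedBusTransaction> bus_trace = {
            {BusOp::Allocate,   0x80},
            {BusOp::Allocate,   0xC0},
            {BusOp::Deallocate, 0x80},
        };
        for (const auto& txn : bus_trace) copy.apply(txn);

        std::cout << "0x80 in processor cache? " << copy.processor_may_hold(0x80) << "\n";  // prints 0
        std::cout << "0xC0 in processor cache? " << copy.processor_may_hold(0xC0) << "\n";  // prints 1
        return 0;
    }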

    System and method for parallel execution of data generation tasks
    5.
    Invention application
    System and method for parallel execution of data generation tasks (Pending, published)

    Publication number: US20060095672A1

    Publication date: 2006-05-04

    Application number: US11065343

    Filing date: 2005-02-25

    IPC classification: G06F12/14

    CPC classification: G06F15/7846

    Abstract: A CPU module includes a host element configured to perform a high-level host-related task, and one or more data-generating processing elements configured to perform a data-generating task associated with the high-level host-related task. Each data-generating processing element includes logic configured to receive input data and logic configured to process the input data to produce output data. The amount of output data is greater than the amount of input data, and the ratio of the amount of input data to the amount of output data defines a decompression ratio. In one implementation, the high-level host-related task performed by the host element pertains to a high-level graphics processing task, and the data-generating task pertains to the generation of geometry data (such as triangle vertices) for use within the high-level graphics processing task. The CPU module can transfer the output data to a GPU module via at least one locked set of a cache memory. The GPU retrieves the output data from the locked set and periodically forwards a tail pointer to a cacheable location within the data-generating elements that informs the data-generating elements of its progress in retrieving the output data.

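    The producer/consumer flow in this abstract can be sketched as a ring buffer standing in for the locked cache set. The sketch below is single-threaded and purely illustrative: the slot count, the expansion factor, and all names are assumptions, and the GPU's tail-pointer update is simulated by a plain assignment rather than a write to a cacheable location.

    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // The "locked set" is modeled as a plain ring buffer shared between the
    // data-generating element (producer) and the GPU (consumer).
    struct LockedSetRing {
        std::vector<uint32_t> slots;   // stands in for the locked cache set
        size_t head = 0;               // next slot the data-generating element writes
        size_t tail = 0;               // last slot the GPU reports having consumed

        explicit LockedSetRing(size_t n) : slots(n) {}
        size_t free_slots() const { return slots.size() - (head - tail); }
    };

    int main() {
        LockedSetRing ring(8);

        // One unit of input expands into several units of output (e.g. a
        // compact primitive description expanding into triangle vertices).
        const size_t input_units = 2, outputs_per_input = 3;
        size_t produced = 0;

        for (size_t i = 0; i < input_units; ++i) {
            for (size_t j = 0; j < outputs_per_input && ring.free_slots() > 0; ++j) {
                ring.slots[ring.head % ring.slots.size()] = static_cast<uint32_t>(produced++);
                ++ring.head;
            }
        }

        // The GPU periodically publishes its progress by advancing the tail
        // pointer, which frees space for further generation.
        ring.tail = 4;

        std::cout << "decompression ratio (input:output) = "
                  << input_units << ":" << produced << "\n";
        std::cout << "free slots after GPU progress update: " << ring.free_slots() << "\n";
        return 0;
    }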

    Direct access of cache lock set data without backing memory
    7.
    Invention application
    Direct access of cache lock set data without backing memory (Expired)

    Publication number: US20060080398A1

    Publication date: 2006-04-13

    Application number: US10961752

    Filing date: 2004-10-08

    IPC classification: G06F15/167 G06F12/00

    Abstract: Methods, apparatus, and systems are provided for quickly accessing, by another processor, data residing in the cache of one processor while avoiding lengthy accesses to main memory. A portion of the cache may be placed in a lock set mode by the processor in which it resides. While in the lock set mode, this portion of the cache may be accessed directly by another processor without lengthy "backing" writes of the accessed data to main memory.

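    A minimal software model, under assumptions, of the lock set behaviour the abstract describes: a line flagged as belonging to the lock set is served directly from the cache and never triggers a backing write to main memory, while an ordinary line does. The class and flag names are hypothetical; a real implementation operates in the cache controller, not in software.

    #include <cstdint>
    #include <iostream>
    #include <unordered_map>

    struct CacheLine {
        uint64_t data;
        bool     locked;   // true: part of the lock set (no backing memory)
    };

    struct SimpleCache {
        std::unordered_map<uint64_t, CacheLine> lines;
        std::unordered_map<uint64_t, uint64_t>  main_memory;
        unsigned backing_writes = 0;

        void write(uint64_t addr, uint64_t value, bool in_lock_set) {
            lines[addr] = {value, in_lock_set};
        }

        // Another processor reads: lock-set data is returned straight from
        // the cache; ordinary data is also written back ("backed") to main
        // memory in this simplified model.
        uint64_t remote_read(uint64_t addr) {
            CacheLine& line = lines.at(addr);
            if (!line.locked) {
                main_memory[addr] = line.data;
                ++backing_writes;
            }
            return line.data;
        }
    };

    int main() {
        SimpleCache cache;
        cache.write(0x100, 42, /*in_lock_set=*/true);
        cache.write(0x200, 7,  /*in_lock_set=*/false);

        cache.remote_read(0x100);  // served from the lock set, no backing write
        cache.remote_read(0x200);  // ordinary line, backed to main memory
        std::cout << "backing writes performed: " << cache.backing_writes << "\n";  // prints 1
        return 0;
    }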