Flag value renaming
    132.
    Invention Application
    Flag value renaming  Pending (Published)

    Publication No.: US20050071518A1

    Publication Date: 2005-03-31

    Application No.: US10677039

    Filing Date: 2003-09-30

    Abstract: According to an embodiment of the invention, a method and apparatus for flag value renaming are provided. An embodiment of a method comprises setting a flag for a processor via a first instruction, the first instruction being either a direct update instruction or an indirect update instruction; if the setting of the flag is by a direct update instruction, executing a succeeding second instruction that reads the flag prior to completion of the first instruction; and if the setting of the flag is by an indirect update instruction, delaying the second instruction until after completion of the first instruction.

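The direct/indirect distinction in the abstract can be sketched as a small scheduling predicate. This is a minimal illustrative model, not the patented implementation; the function and constant names are assumptions for illustration.

```python
# Assumed model: a scheduler decides whether an instruction that reads a
# flag may issue before the flag-setting instruction has completed, based
# on how the flag was set.

DIRECT, INDIRECT = "direct", "indirect"

def may_issue_early(flag_setter_kind, setter_complete):
    """Return True if a flag-reading instruction may issue now.

    Direct updates (an explicit set-flag instruction) make the flag value
    available early, so a reader may proceed before the setter completes.
    Indirect updates (the flag is a side effect of a computation) force
    the reader to wait until the setter has completed.
    """
    if flag_setter_kind == DIRECT:
        return True               # flag value is known early: no stall
    return setter_complete        # indirect: stall until the setter completes
```

With this model, a reader behind a direct update never stalls, while a reader behind an indirect update stalls exactly until completion.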

    System and method for employing a process identifier to minimize aliasing in a linear-addressed cache
    133.
    Invention Application
    System and method for employing a process identifier to minimize aliasing in a linear-addressed cache  Lapsed

    Publication No.: US20050027963A1

    Publication Date: 2005-02-03

    Application No.: US10917449

    Filing Date: 2004-08-13

    CPC classification number: G06F12/1054 G06F12/1063 G06F12/109

    Abstract: A system and method for reducing linear address aliasing is described. In one embodiment, a portion of a linear address is combined with a process identifier, e.g., a page directory base pointer to form an adjusted-linear address. The page directory base pointer is unique to a process and combining it with a portion of the linear address produces an adjusted-linear address that provides a high probability of no aliasing. A portion of the adjusted-linear address is used to search an adjusted-linear-addressed cache memory for a data block specified by the linear address. If the data block does not reside in the adjusted-linear-addressed cache memory, then a replacement policy selects one of the cache lines in the adjusted-linear-addressed cache memory and replaces the data block of the selected cache line with a data block located at a physical address produced from translating the linear address. The tag for the cache line selected is a portion of the adjusted linear address and the physical address produced from translating the linear address.


    Method and apparatus for affinity-guided speculative helper threads in chip multiprocessors
    134.
    Invention Application
    Method and apparatus for affinity-guided speculative helper threads in chip multiprocessors  In Force

    Publication No.: US20050027941A1

    Publication Date: 2005-02-03

    Application No.: US10632431

    Filing Date: 2003-07-31

    CPC classification number: G06F9/3842 G06F9/383 G06F9/3851 G06F12/0862

    Abstract: Apparatus, system and methods are provided for performing speculative data prefetching in a chip multiprocessor (CMP). Data is prefetched by a helper thread that runs on one core of the CMP while a main program runs concurrently on another core of the CMP. Data prefetched by the helper thread is provided to the helper core. For one embodiment, the data prefetched by the helper thread is pushed to the main core. It may or may not be provided to the helper core as well. A push of prefetched data to the main core may occur during a broadcast of the data to all cores of an affinity group. For at least one other embodiment, the data prefetched by a helper thread is provided, upon request from the main core, to the main core from the helper core's local cache.

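The push-versus-broadcast behavior described in the abstract can be sketched with a toy core model. The class and function names, and the dict-as-cache representation, are assumptions for illustration only, not the patented design.

```python
# Illustrative model: a helper thread on one CMP core prefetches data and
# pushes the line to the main core, optionally broadcasting it to every
# core in an affinity group.

class Core:
    def __init__(self, name):
        self.name = name
        self.cache = {}          # address -> data; stand-in for a local cache

def helper_prefetch(address, memory, helper, main, affinity_group=None):
    """Helper core fetches `address` early and pushes the line onward."""
    data = memory[address]       # the (slow) fetch the helper performs early
    helper.cache[address] = data # helper may also keep a local copy
    if affinity_group:
        for core in affinity_group:   # push via broadcast to the whole group
            core.cache[address] = data
    else:
        main.cache[address] = data    # push directly to the main core only
    return data
```

The pull variant in the abstract (main core requests the line from the helper's local cache) would simply read `helper.cache[address]` on demand instead of pushing.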

    Access control of a resource shared between components
    135.
    Invention Grant
    Access control of a resource shared between components  In Force

    Publication No.: US06662173B1

    Publication Date: 2003-12-09

    Application No.: US09224377

    Filing Date: 1998-12-31

    CPC classification number: G06F12/0804 G06F12/0842 G06F12/123 Y10S707/99932

    Abstract: A resource including a plurality of elements, such as a cache memory having a plurality of addressable blocks or ways, is shared between two or more components based on the operation of an access controller. The access controller controls which of the elements are accessed exclusively by a component and which are shared by two or more components. In one embodiment, the components include the execution of instructions in first and second threads in a multi-threaded processor environment. To prevent one thread from dominating the cache memory, a first mask value is provided for each thread. The access of the components to the cache memory is controlled by the first mask values. For example, the mask values can be selected so as to prevent a thread from accessing one or more of the ways in the cache (e.g., to evict, erase, or delete a particular way in the cache). Also, the mask values can be set to allow certain of the ways in the cache to be shared between threads.

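The per-thread mask mechanism can be sketched as a replacement-policy filter. This is a minimal sketch under assumed structures; the specific masks, way count, and LRU representation are illustrative, not taken from the patent.

```python
# Assumed model: per-thread "way masks" restrict which ways of a
# set-associative cache a thread may replace, so one thread cannot evict
# another thread's private ways, while some ways remain shared.

NUM_WAYS = 4

# Bit i set => the thread may replace way i. Way 3 is shared here
# (an assumption chosen for illustration).
WAY_MASKS = {
    "thread0": 0b1011,   # ways 0, 1, 3
    "thread1": 0b1100,   # ways 2, 3
}

def allowed_ways(thread):
    """List the way indices the thread's mask permits it to replace."""
    mask = WAY_MASKS[thread]
    return [w for w in range(NUM_WAYS) if mask & (1 << w)]

def pick_victim(thread, lru_order):
    """Choose the least-recently-used way the thread may evict.

    lru_order lists ways from least to most recently used.
    """
    allowed = set(allowed_ways(thread))
    for way in lru_order:
        if way in allowed:
            return way
    raise RuntimeError("thread has no way it may replace")
```

With these masks, thread0 can never evict way 2 and thread1 can never evict ways 0 or 1, yet both may allocate into the shared way 3, matching the exclusive-plus-shared partitioning the abstract describes.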
