Method of using a target processor to execute programs of a source architecture that uses multiple address spaces
    1.
    Granted patent (Expired)

    Publication No.: US5560013A

    Publication Date: 1996-09-24

    Application No.: US349772

    Filing Date: 1994-12-06

    Abstract: A method of utilizing large virtual addressing in a target computer to implement an instruction set translator (IST) for dynamically translating the machine language instructions of an alien source computer into a set of functionally equivalent target computer machine language instructions, providing, in the target machine, an execution environment for source machine operating systems, application subsystems, and applications. The target system provides a unique pointer table in target virtual address space that connects each source program instruction in the multiple source virtual address spaces to a target instruction translation which emulates the function of that source instruction in the target system. The target system stores the translated executable source programs efficiently by keeping only one copy of any source program, regardless of the number of source address spaces in which the source program exists. The target system efficiently manages dynamic changes in source machine storage, accommodating a preemptive, multitasking source operating system. The target system preserves security and data integrity for the source programs on a par with the security and data integrity obtainable when they execute on source processors (i.e., processors having the source architecture as their native architecture). The target computer execution maintains the source-architected logical separation between programs and data executing in different source address spaces, without the target system needing to be aware of the source virtual address spaces.

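    The pointer-table scheme described above maps every source instruction address to a single stored target translation, shared by all source address spaces that contain the same program. A minimal sketch of that idea follows; the names (TranslationTable, SourceAddr, translate) and the use of std::function to stand in for generated target code are assumptions for illustration, not the patent's mechanism.

        #include <cstdint>
        #include <functional>
        #include <iostream>
        #include <unordered_map>

        // Hypothetical sketch: one stored translation per source instruction,
        // shared by every source address space that maps the same program.
        using SourceAddr = std::uint64_t;              // source virtual address
        using TargetRoutine = std::function<void()>;   // emulates one source instruction

        class TranslationTable {
        public:
            // Look up (or lazily create) the target-side translation for a
            // source instruction; only one copy exists per distinct instruction.
            const TargetRoutine& lookup(SourceAddr addr) {
                auto it = table_.find(addr);
                if (it == table_.end()) {
                    it = table_.emplace(addr, translate(addr)).first;
                }
                return it->second;
            }

        private:
            static TargetRoutine translate(SourceAddr addr) {
                // Stand-in for real dynamic translation of the source opcode.
                return [addr] {
                    std::cout << "emulating source insn @ 0x" << std::hex << addr << "\n";
                };
            }
            std::unordered_map<SourceAddr, TargetRoutine> table_;
        };

        int main() {
            TranslationTable ist;
            ist.lookup(0x1000)();   // first use triggers translation
            ist.lookup(0x1000)();   // second use reuses the single stored copy
        }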

Storage access authorization controls in a computer system using dynamic translation of large addresses
    2.
    Granted patent (Expired)

    Publication No.: US5577231A

    Publication Date: 1996-11-19

    Application No.: US349771

    Filing Date: 1994-12-06

    IPC Class: G06F9/455

    CPC Class: G06F9/45537

    Abstract: A method of using the DAT (dynamic address translation) mechanism in a computer processor both to extend the processor's native storage access authorization architecture and to enable the processor to execute programs designed to operate under different storage access architectures. An executing program (called a source program) uses "source effective addresses" (source EAs) to locate its instructions and storage operands while executing on the processor (called the target processor).

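    The abstract describes checking a source effective address (EA) against a storage access authorization before it is mapped into the target address space. The sketch below illustrates that idea under assumed names (Region, AccessKind, map_source_ea); the patented mechanism relies on the processor's DAT hardware rather than a software table walk like this one.

        #include <cstdint>
        #include <iostream>
        #include <optional>
        #include <vector>

        // Illustrative only: an authorization check applied to a source effective
        // address (EA) before it is mapped to a target address.
        enum class AccessKind { Read, Write, Execute };

        struct Region {
            std::uint64_t base, size;
            bool readable, writable, executable;
        };

        std::optional<std::uint64_t> map_source_ea(std::uint64_t ea, AccessKind kind,
                                                   const std::vector<Region>& regions,
                                                   std::uint64_t target_base) {
            for (const auto& r : regions) {
                if (ea >= r.base && ea < r.base + r.size) {
                    const bool ok = (kind == AccessKind::Read    && r.readable)  ||
                                    (kind == AccessKind::Write   && r.writable)  ||
                                    (kind == AccessKind::Execute && r.executable);
                    if (!ok) return std::nullopt;        // authorization failure
                    return target_base + (ea - r.base);  // authorized: translated address
                }
            }
            return std::nullopt;                         // EA not mapped at all
        }

        int main() {
            std::vector<Region> regions{{0x1000, 0x1000, true, false, true}};
            auto target = map_source_ea(0x1800, AccessKind::Write, regions, 0x400000);
            std::cout << (target ? "authorized" : "rejected") << "\n";  // write is denied
        }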

    PROCESSOR PERFORMANCE IMPROVEMENT FOR INSTRUCTION SEQUENCES THAT INCLUDE BARRIER INSTRUCTIONS
    4.
    Patent application (In force)

    Publication No.: US20130205120A1

    Publication Date: 2013-08-08

    Application No.: US13369029

    Filing Date: 2012-02-08

    IPC Class: G06F9/312

    Abstract: A technique for processing an instruction sequence that includes a barrier instruction, a load instruction preceding the barrier instruction, and a subsequent memory access instruction following the barrier instruction. The load instruction is determined to be resolved upon receipt of the earlier of a good combined response for the read operation corresponding to the load instruction and the data for the load instruction. If execution of the subsequent memory access instruction is not initiated before the barrier instruction completes, its execution is initiated in response to determining that the barrier instruction has completed. If execution of the subsequent memory access instruction is initiated before the barrier instruction completes, tracking of the subsequent memory access instruction with respect to invalidation is discontinued in response to determining that the barrier instruction has completed.

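    The barrier-handling decision in this abstract has two halves: a load ahead of the barrier counts as resolved on the earlier of its combined response or its data, and the access behind the barrier is either launched when the barrier completes or, if it started early, stops being tracked for invalidation. A hedged sketch of that decision logic, with assumed names (LoadState, SubsequentAccess, on_barrier_complete), follows.

        #include <iostream>

        // Illustrative decision logic only; field and function names are assumptions.
        struct LoadState {
            bool good_combined_response = false;  // system-wide response for the read
            bool data_returned = false;           // data arrived for the load
            bool resolved() const { return good_combined_response || data_returned; }
        };

        struct SubsequentAccess {
            bool started_early = false;           // begun before the barrier completed
            bool tracked_for_invalidation = false;
        };

        void on_barrier_complete(SubsequentAccess& acc) {
            if (!acc.started_early) {
                std::cout << "barrier done: launch the subsequent memory access\n";
            } else {
                acc.tracked_for_invalidation = false;  // stop watching for invalidation
                std::cout << "barrier done: drop invalidation tracking\n";
            }
        }

        int main() {
            LoadState ld{true, false};   // combined response arrived before the data
            std::cout << "load resolved? " << std::boolalpha << ld.resolved() << "\n";
            SubsequentAccess early{true, true}, waiting{false, false};
            on_barrier_complete(early);
            on_barrier_complete(waiting);
        }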

    Processor, data processing system and method supporting a shared global coherency state
    5.
    Granted patent (Expired)

    Publication No.: US08495308B2

    Publication Date: 2013-07-23

    Application No.: US11539694

    Filing Date: 2006-10-09

    IPC Class: G06F12/00

    CPC Class: G06F12/0831 G06F12/0817

    Abstract: A multiprocessor data processing system includes at least first and second coherency domains, where the first coherency domain includes a system memory and a cache memory. According to a method of data processing, a cache line is buffered in a data array of the cache memory, and a state field in a cache directory of the cache memory is set to a coherency state to indicate that the cache line is valid in the data array, that the cache line is held in the cache memory non-exclusively, and that another cache in the second coherency domain may hold a copy of the cache line.

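    The coherency state described here records three facts at once in the cache directory: the line is valid, it is held non-exclusively, and a cache in another coherency domain may also hold it. A small illustrative sketch follows; the state names (SharedLocal, SharedGlobal) and the DirectoryEntry layout are assumptions, not the patent's encoding.

        #include <cstdint>
        #include <iostream>

        // Illustrative sketch: a directory state meaning "valid, shared (not exclusive),
        // and a copy may also exist in another coherency domain". Names are assumptions.
        enum class CoherencyState : std::uint8_t {
            Invalid,
            SharedLocal,    // shared, no copy outside this coherency domain
            SharedGlobal    // shared, another domain may also hold a copy
        };

        struct DirectoryEntry {
            std::uint64_t tag = 0;
            CoherencyState state = CoherencyState::Invalid;
        };

        int main() {
            DirectoryEntry e{0x80, CoherencyState::SharedGlobal};
            // A later request can consult the state to decide whether an operation
            // needs to be broadcast beyond the local coherency domain.
            if (e.state == CoherencyState::SharedGlobal)
                std::cout << "copy may exist in another domain: use global scope\n";
        }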

    Mode-based castout destination selection
    6.
    Granted patent (Expired)

    Publication No.: US08312220B2

    Publication Date: 2012-11-13

    Application No.: US12420933

    Filing Date: 2009-04-09

    IPC Class: G06F12/08

    CPC Class: G06F12/0811 G06F12/12

    Abstract: In response to a data request of a first of a plurality of processing units, the first processing unit selects a victim cache line to be cast out from the lower level cache of the first processing unit and determines whether a mode is set. If not, the first processing unit issues on the interconnect fabric an LCO command identifying the victim cache line and indicating that a lower level cache is the intended destination. If the mode is set, the first processing unit issues a castout command with an alternative intended destination. In response to a coherence response to the LCO command indicating success of the LCO command, the first processing unit removes the victim cache line from its lower level cache, and the victim cache line is held elsewhere in the data processing system. The mode can be set to inhibit castouts to system memory, for example, for testing.

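    The selection step in this abstract depends on a single mode flag: with the mode clear, the victim line is offered to another lower level cache via a lateral castout (LCO) command; with the mode set, a castout with an alternative intended destination is issued instead. The sketch below models only that choice; the names (Castout, build_castout) and the use of system memory as the alternative destination are assumptions, since the abstract leaves the alternative unspecified.

        #include <cstdint>
        #include <iostream>

        // Illustrative only; Castout, build_castout and the destinations are assumed names.
        enum class Destination { LowerLevelCache, SystemMemory };

        struct Castout {
            std::uint64_t victim_line;
            Destination intended;
        };

        Castout build_castout(std::uint64_t victim_line, bool mode_set) {
            // Mode clear: lateral castout (LCO) aimed at another lower level cache.
            // Mode set: an alternative destination (system memory is only an assumed
            // example here).
            return {victim_line, mode_set ? Destination::SystemMemory
                                          : Destination::LowerLevelCache};
        }

        int main() {
            for (bool mode : {false, true}) {
                Castout c = build_castout(0x2a40, mode);
                std::cout << "victim 0x" << std::hex << c.victim_line << std::dec << " -> "
                          << (c.intended == Destination::LowerLevelCache
                                  ? "another lower level cache (LCO)"
                                  : "system memory")
                          << "\n";
            }
        }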

    Victim cache prefetching
    7.
    Granted patent (Expired)

    Publication No.: US08209489B2

    Publication Date: 2012-06-26

    Application No.: US12256064

    Filing Date: 2008-10-22

    IPC Class: G06F12/08

    Abstract: A processing unit for a multiprocessor data processing system includes a processor core and a cache hierarchy coupled to the processor core to provide low-latency data access. The cache hierarchy includes an upper level cache coupled to the processor core and a lower level victim cache coupled to the upper level cache. In response to a prefetch request of the processor core that misses in the upper level cache, the lower level victim cache determines whether the prefetch request misses in the directory of the lower level victim cache and, if so, allocates a state machine in the lower level victim cache that services the prefetch request by issuing it to at least one other processing unit of the multiprocessor data processing system.

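    The flow described above is: a prefetch that misses in the upper level cache is handed to the lower level victim cache, which checks its own directory and, on a miss, allocates a state machine to push the prefetch onto the interconnect. A rough sketch under assumed names (VictimCache, PrefetchMachine, handle_prefetch_miss_from_upper_cache) follows.

        #include <cstdint>
        #include <iostream>
        #include <unordered_set>
        #include <vector>

        // Illustrative sketch; the class and member names are assumptions.
        struct PrefetchMachine {                 // stands in for a hardware state machine
            std::uint64_t line;
            void issue_to_fabric() const {
                std::cout << "prefetch 0x" << std::hex << line << std::dec
                          << " issued to other processing units\n";
            }
        };

        class VictimCache {
        public:
            void handle_prefetch_miss_from_upper_cache(std::uint64_t line) {
                if (directory_.count(line)) {
                    std::cout << "hit in victim cache: serve the prefetch locally\n";
                    return;
                }
                // Miss in the victim cache directory: allocate a machine that services
                // the prefetch by sending it out on the interconnect.
                machines_.push_back({line});
                machines_.back().issue_to_fabric();
            }
            std::unordered_set<std::uint64_t> directory_;
            std::vector<PrefetchMachine> machines_;
        };

        int main() {
            VictimCache l3;
            l3.directory_.insert(0x100);
            l3.handle_prefetch_miss_from_upper_cache(0x100);  // present in victim cache
            l3.handle_prefetch_miss_from_upper_cache(0x200);  // miss: forward on the fabric
        }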

    Fault tolerant encoding of directory states for stuck bits
    8.
    Granted patent (In force)

    Publication No.: US08205136B2

    Publication Date: 2012-06-19

    Application No.: US12189808

    Filing Date: 2008-08-12

    IPC Class: G11C29/00

    CPC Class: G11C29/832 G06F11/1064

    Abstract: A method of handling a stuck bit in a directory of a cache memory by defining multiple binary encodings that indicate a defective cache state, detecting an error in a tag stored in a member of the directory (the tag at least includes an address field, a state field, and an error-correction field), determining that the error is associated with a stuck bit of the directory member, and writing to the directory member new state information selected from one of the binary encodings based on the field location of the stuck bit within the directory member. The multiple binary encodings may include a first binary encoding when the stuck bit is in the address field, a second binary encoding when the stuck bit is in the state field, and a third binary encoding when the stuck bit is in the error-correction field. The new state information may further be selected based on the value of the stuck bit, e.g., the state bit corresponding to the stuck bit is assigned a bit value from the new state information that matches the value of the stuck bit.

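    The key step is choosing which "defective" encoding to write based on which field of the directory member contains the stuck bit, and on the stuck bit's value. The sketch below illustrates that selection; the field boundaries and the specific encodings are invented placeholders, not the patent's actual layouts.

        #include <cstdint>
        #include <iostream>

        // Illustrative only; the field boundaries and encodings are invented placeholders.
        enum class Field { Address, State, Ecc };

        Field field_of_bit(unsigned bit) {
            if (bit < 40) return Field::Address;   // assumed: bits 0-39 hold the address tag
            if (bit < 44) return Field::State;     // assumed: bits 40-43 hold the state
            return Field::Ecc;                     // assumed: remaining bits are ECC
        }

        // Pick a "defective line" encoding according to the field that contains the
        // stuck bit, with a variant chosen so the written bit matches the stuck value.
        std::uint8_t defective_encoding(unsigned stuck_bit, bool stuck_value) {
            switch (field_of_bit(stuck_bit)) {
                case Field::Address: return stuck_value ? 0b1001 : 0b0001;
                case Field::State:   return stuck_value ? 0b1010 : 0b0010;
                case Field::Ecc:     return stuck_value ? 0b1011 : 0b0011;
            }
            return 0;  // unreachable
        }

        int main() {
            std::cout << "stuck bit 42 (state field), stuck at 1 -> encoding "
                      << int(defective_encoding(42, true)) << "\n";
        }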

    Issuing global shared memory operations via direct cache injection to a host fabric interface
    9.
    Granted patent (In force)

    Publication No.: US07966454B2

    Publication Date: 2011-06-21

    Application No.: US12024437

    Filing Date: 2008-02-01

    IPC Class: G06F9/318

    Abstract: A data processing system enables global shared memory (GSM) operations across multiple nodes with a distributed EA-to-RA mapping of physical memory. Each node has a host fabric interface (HFI), which includes HFI windows that are assigned to at most one locally executing task of a parallel job. The tasks perform parallel job execution, but each maps only a portion of the effective addresses (EAs) of the global address space to the local real memory of the task's respective node. The HFI window tags all outgoing GSM operations of the local task with the job ID, and embeds the target node and HFI window IDs of the node at which the EA is memory-mapped. The HFI window also enables processing of received GSM operations with valid EAs that are homed to the local real memory of the receiving node, while preventing processing of other received operations that lack a valid EA-to-RA local mapping.

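    Two HFI-window duties described above lend themselves to a short sketch: stamping outgoing GSM operations with the job ID plus the target node and window where the EA is homed, and accepting incoming operations only when the job ID matches and a local EA-to-RA mapping exists. The GsmOp and HfiWindow types below are assumed names for illustration only.

        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        // Illustrative only; GsmOp, HfiWindow and their fields are assumed names.
        struct GsmOp {
            std::uint32_t job_id;
            std::uint32_t target_node, target_window;
            std::uint64_t ea;                      // effective address of the GSM datum
        };

        class HfiWindow {
        public:
            explicit HfiWindow(std::uint32_t job) : job_id_(job) {}

            // Outgoing: stamp the local task's job ID and the target node/window
            // where the EA is memory-mapped.
            GsmOp tag_outgoing(std::uint64_t ea, std::uint32_t node, std::uint32_t win) const {
                return {job_id_, node, win, ea};
            }

            // Incoming: process only operations for the right job whose EA is homed
            // in this node's real memory; everything else is rejected.
            bool accept_incoming(const GsmOp& op) const {
                return op.job_id == job_id_ && ea_to_ra_.count(op.ea) != 0;
            }

            std::unordered_map<std::uint64_t, std::uint64_t> ea_to_ra_;  // local EA -> RA
        private:
            std::uint32_t job_id_;
        };

        int main() {
            HfiWindow win(7);
            win.ea_to_ra_[0x4000] = 0x9000;
            std::cout << std::boolalpha
                      << win.accept_incoming({7, 0, 0, 0x4000}) << "\n"   // valid mapping
                      << win.accept_incoming({7, 0, 0, 0x5000}) << "\n";  // no local mapping
        }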

    MEMORY COHERENCE DIRECTORY SUPPORTING REMOTELY SOURCED REQUESTS OF NODAL SCOPE
    10.
    Patent application (Expired)

    Publication No.: US20110047352A1

    Publication Date: 2011-02-24

    Application No.: US12545246

    Filing Date: 2009-08-21

    IPC Class: G06F15/76 G06F9/02 G06F12/08

    CPC Class: G06F12/0817

    Abstract: A data processing system includes at least first through third processing nodes coupled by an interconnect fabric. The first processing node includes a master, a plurality of snoopers capable of participating in interconnect operations, and a node interface that receives a request of the master and transmits it to the second processing node with a nodal scope of transmission limited to the second processing node. The second processing node includes a node interface having a directory. The node interface of the second processing node permits the request to proceed with the nodal scope of transmission if the directory does not indicate that a target memory block of the request is cached other than in the second processing node, and prevents the request from succeeding if the directory indicates that the target memory block of the request is cached other than in the second processing node.

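    The directory check at the second processing node's interface reduces to a single predicate: a remotely sourced request of nodal scope may proceed only if the directory does not record the target block as cached outside that node. A minimal sketch with assumed names (NodeInterface, permit_nodal_scope, cached_elsewhere_) follows.

        #include <cstdint>
        #include <iostream>
        #include <unordered_set>

        // Illustrative only; NodeInterface and cached_elsewhere_ are assumed names.
        class NodeInterface {
        public:
            // A remotely sourced request of nodal scope may proceed inside this node
            // only if the directory does not record the block as cached elsewhere;
            // otherwise the requester must retry with a broader scope.
            bool permit_nodal_scope(std::uint64_t block_addr) const {
                return cached_elsewhere_.count(block_addr) == 0;
            }
            // Blocks the directory believes are cached outside this processing node.
            std::unordered_set<std::uint64_t> cached_elsewhere_;
        };

        int main() {
            NodeInterface ni;
            ni.cached_elsewhere_.insert(0xA000);
            std::cout << std::boolalpha
                      << ni.permit_nodal_scope(0xB000) << "\n"   // not cached elsewhere: allow
                      << ni.permit_nodal_scope(0xA000) << "\n";  // cached elsewhere: refuse
        }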