Efficient context saving and restoring in a multi-tasking computing system environment
    31.
    Invention Grant (Expired)

    Publication No.: US06061711A

    Publication Date: 2000-05-09

    Application No.: US699280

    Filing Date: 1996-08-19

    Abstract: In a multi-tasking computing system environment, one program is halted and context switched out so that a processor may context switch in a subsequent program for execution. Processor state information exists which reflects the state of the program being context switched out. Storing this processor state information permits successful resumption of the context switched out program. When the context switched out program is subsequently context switched in, the stored processor state information is loaded in preparation for resuming the program at the point at which execution was previously halted. Although large areas of memory can be allocated to processor state information storage, only a portion of this information may need to be preserved across a context switch to successfully save and resume the context switched out program. Unnecessarily saving and loading all available processor state information can be noticeably inefficient, particularly where relatively large amounts of processor state information exist. In one embodiment, a processor requests a co-processor to context switch out the currently executing program. At a predetermined appropriate point in the executing program, the co-processor responds by halting program execution and saving only the minimal amount of processor state information necessary for successful restoration of the program. The appropriate point is chosen by the application programmer at a location in the executing program that requires preserving only a minimal portion of the processor state information across a context switch. By saving only a minimal amount of processor state information, processor time savings accumulate across context save and restoration operations.
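
    As a rough illustration of the idea described above, the C sketch below saves and restores only a small, programmer-chosen subset of a much larger co-processor state at a designated switch point. All names and sizes (coproc_state_t, saved_context_t, the 256/8 register split) are assumptions made for the sketch, not details taken from the patent.

        #include <stdint.h>
        #include <string.h>

        /* Assumed co-processor state: a large register file, of which only a
         * small, programmer-chosen subset must survive a context switch taken
         * at a designated switch point in the program. */
        #define FULL_REGS    256
        #define MINIMAL_REGS 8              /* subset live across the switch point */

        typedef struct {
            uint64_t regs[FULL_REGS];       /* full architectural state */
            uint64_t pc;                    /* resume address */
        } coproc_state_t;

        typedef struct {
            uint64_t regs[MINIMAL_REGS];    /* only the state worth preserving */
            uint64_t pc;
        } saved_context_t;

        /* Save only the minimal subset at the switch point (cheap)... */
        static void context_save_minimal(const coproc_state_t *hw, saved_context_t *out)
        {
            memcpy(out->regs, hw->regs, sizeof out->regs);
            out->pc = hw->pc;
        }

        /* ...and restore just that subset before resuming the program. */
        static void context_restore_minimal(coproc_state_t *hw, const saved_context_t *in)
        {
            memcpy(hw->regs, in->regs, sizeof in->regs);
            hw->pc = in->pc;
        }

    Saving 8 registers instead of 256 on every switch is where the claimed processor time savings would accumulate.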


    Scalable width vector processor architecture for efficient emulation
    32.
    Invention Grant (Expired)

    Publication No.: US5991531A

    Publication Date: 1999-11-23

    Application No.: US804765

    Filing Date: 1997-02-24

    Abstract: An N-byte vector processor is provided which can emulate 2N-byte processor operations by executing two N-byte operations sequentially. By using an N-byte architecture to process 2N-byte-wide data, chip size and cost are reduced. One embodiment allows 64-byte operations to be implemented on a 32-byte vector processor by executing a 32-byte instruction on the first 32 bytes of data and then executing a 32-byte instruction on the second 32 bytes of data. Registers and instructions for 64-byte operation are emulated using two 32-byte registers and two 32-byte instructions, respectively, with some instructions requiring modification to accommodate 64-byte operations between adjacent elements, operations requiring specific element locations, operations shifting elements into and out of registers, and operations specifying addresses exceeding 32 bytes.
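
    To make the emulation scheme concrete, the C sketch below implements a 64-byte operation as two sequential 32-byte operations, one on each half of the operands. The element-wise byte add and the function names are stand-ins chosen for the sketch rather than details from the patent, and operations that cross the 32-byte boundary (shifts, adjacent-element operations, wide addressing) would need the extra handling the abstract mentions.

        #include <stdint.h>

        #define NARROW_BYTES 32   /* native vector width assumed here */

        /* Stand-in for one native 32-byte vector operation
         * (element-wise byte add in this illustration). */
        static void vadd32(uint8_t *dst, const uint8_t *a, const uint8_t *b)
        {
            for (int i = 0; i < NARROW_BYTES; i++)
                dst[i] = (uint8_t)(a[i] + b[i]);
        }

        /* A 64-byte operation emulated as two sequential 32-byte operations:
         * one on the low half of the operands, one on the high half. */
        static void vadd64_emulated(uint8_t *dst, const uint8_t *a, const uint8_t *b)
        {
            vadd32(dst, a, b);                                               /* bytes  0..31 */
            vadd32(dst + NARROW_BYTES, a + NARROW_BYTES, b + NARROW_BYTES);  /* bytes 32..63 */
        }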


    Deferred store data read with simple anti-dependency pipeline inter-lock control in superscalar processor
    33.
    Invention Grant (Expired)

    Publication No.: US5881307A

    Publication Date: 1999-03-09

    Application No.: US805389

    Filing Date: 1997-02-24

    IPC Classification: G06F9/38 G06F9/40 G06F9/30

    Abstract: A superscalar processor includes an execution unit that executes load/store instructions and an execution unit that executes arithmetic instructions. The execution pipelines for both execution units include a decode stage, a read stage that identifies and reads source operands for the instructions, and an execution stage or stages performed in the execution units. For store instructions, reading store data from a register file is deferred until the store data is required for transfer to a memory system. This allows store instructions to be decoded simultaneously with earlier instructions that generate the store data. A simple anti-dependency interlock uses a list of the register numbers identifying registers holding store data for pending store instructions. These register numbers are compared to the register numbers of the destination operands of subsequent instructions, and instructions having a destination operand matching a source of store data are stalled in the read stage to prevent them from destroying store data before an earlier store instruction completes.
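
    The interlock described above can be pictured as a small list of "owed" register numbers that is checked in the read stage. The C sketch below shows that check under assumed names and sizes (pending_store_list_t, MAX_PENDING_STORES); it illustrates the anti-dependency test, not the patented hardware.

        #include <stdbool.h>
        #include <stddef.h>

        #define MAX_PENDING_STORES 4   /* assumed depth of the pending-store list */

        /* Register numbers whose contents are still owed to pending store
         * instructions (store data not yet read from the register file). */
        typedef struct {
            int    reg[MAX_PENDING_STORES];
            size_t count;
        } pending_store_list_t;

        /* Anti-dependency check: an instruction whose destination register
         * matches a pending store's data register must stall in the read
         * stage so it cannot overwrite the data before the store reads it. */
        static bool must_stall_in_read_stage(const pending_store_list_t *pending,
                                             int dest_reg)
        {
            for (size_t i = 0; i < pending->count; i++)
                if (pending->reg[i] == dest_reg)
                    return true;    /* stall until the earlier store completes */
            return false;
        }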


    Method and system for efficiently mapping guest instruction in an emulation assist unit
    34.
    Invention Grant (Expired)

    Publication No.: US5742802A

    Publication Date: 1998-04-21

    Application No.: US602653

    Filing Date: 1996-02-16

    IPC Classification: G06F9/318 G06F9/455

    Abstract: The present invention provides a method and system for using hardware to assist software in emulating guest instructions. The method and system comprise an emulation assist unit (EAU) which efficiently maps a guest instruction to a unique tag, an index, and the address of the corresponding semantic routine. The index determines where in a cache a plurality of tags are stored. A separate cache within the EAU stores each tag in association with the address the first time the corresponding guest instruction is emulated. Thus, the emulation assist unit also dynamically responds to the set of guest instructions being emulated. The first time a guest instruction is emulated, the EAU determines the address and stores it in the cache in association with the tag. When the guest instruction is emulated again, the EAU uses the tag to access the stored address of the corresponding semantic routine.
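
    The lookup path described above resembles a small cache keyed by (tag, index). The C sketch below models it with a direct-mapped table and a trivial mapping; the table size, the mapping function, and the resolve() callback standing in for the software path that locates a semantic routine are all assumptions made for the sketch, not details from the patent.

        #include <stdint.h>
        #include <stddef.h>

        #define EAU_CACHE_ENTRIES 256   /* assumed cache size */

        /* One cache entry: a tag identifying the guest instruction and the
         * host address of its semantic routine, filled in on first use. */
        typedef struct {
            uint32_t tag;
            void    *routine;           /* NULL until the first emulation fills it */
        } eau_entry_t;

        static eau_entry_t eau_cache[EAU_CACHE_ENTRIES];

        /* Illustrative mapping of a guest instruction word to (tag, index);
         * the abstract does not specify the actual mapping. */
        static void eau_map(uint32_t guest_insn, uint32_t *tag, uint32_t *index)
        {
            *tag   = guest_insn;
            *index = guest_insn % EAU_CACHE_ENTRIES;
        }

        /* Return the semantic routine for a guest instruction, resolving and
         * caching it the first time (or after a conflict). */
        static void *eau_lookup(uint32_t guest_insn, void *(*resolve)(uint32_t))
        {
            uint32_t tag, index;
            eau_map(guest_insn, &tag, &index);

            eau_entry_t *e = &eau_cache[index];
            if (e->routine == NULL || e->tag != tag) {   /* first emulation or conflict */
                e->tag     = tag;
                e->routine = resolve(guest_insn);
            }
            return e->routine;
        }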
