High speed transfer of instructions from a master to a slave processor
    1.
    Invention Grant (Expired)

    Publication No.: US5210834A

    Publication Date: 1993-05-11

    Application No.: US840607

    Application Date: 1992-02-20

    CPC classification number: G06F9/3881 G06F15/17

    Abstract: A master-slave processor interface protocol transfers a plurality of instructions from a master processor to a slave processor. Each instruction has an opcode and a set of operands. The interface includes a micro-engine which sends the opcode for each instruction to be executed to the slave processor and stores it in a first buffer in the slave processor. A second micro-engine operates the master processor to fetch and process the set of operands for each instruction to be executed by the slave processor, in the order in which the opcodes were delivered to the first buffer. A third micro-engine delivers a signal to the slave processor when the master processor is ready to deliver the operands for an instruction. Upon receiving that signal, the opcode associated with the operands ready to be delivered is moved from the first buffer to a second buffer. The processed set of operands is then sent to the second buffer and the instruction is executed. Finally, any opcodes in the first buffer whose operands were not delivered in their proper order are invalidated when a new opcode is sent to the first buffer. This allows pre-decoding of the opcodes to begin in the slave processor, reducing the overhead of instruction execution.
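
    The buffer handshake described above can be pictured with a small software model. The following is a minimal Python sketch, not the patented micro-engine hardware: it assumes a simple queue for the first (pre-decode) buffer and a single-entry second (execute) buffer, and it simplifies the invalidation step to fire when an operand delivery skips over earlier opcodes. All names (SlaveProcessor, accept_opcode, operands_ready) are invented for illustration.

    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class SlaveProcessor:
        # First buffer: opcodes arrive ahead of operands, so pre-decoding can start early.
        opcode_queue: deque = field(default_factory=deque)
        # Second buffer: opcode plus operands, ready to execute.
        execute_buffer: list = field(default_factory=list)

        def accept_opcode(self, opcode):
            self.opcode_queue.append(opcode)           # master streams opcodes ahead of time

        def operands_ready(self, opcode, operands):
            # Master signals that the operands for `opcode` are ready to deliver.
            # Simplification: earlier opcodes whose operands never arrived are invalidated here.
            while self.opcode_queue and self.opcode_queue[0] != opcode:
                skipped = self.opcode_queue.popleft()
                print(f"invalidated {skipped}: operands not delivered in order")
            self.opcode_queue.popleft()                # move the matching opcode to the second buffer
            self.execute_buffer.append((opcode, operands))
            return self.execute()

        def execute(self):
            opcode, operands = self.execute_buffer.pop()
            return f"executed {opcode}{operands}"

    slave = SlaveProcessor()
    for op in ("FADD", "FMUL", "FDIV"):                # master sends the opcode stream first
        slave.accept_opcode(op)
    print(slave.operands_ready("FMUL", (1.5, 2.0)))    # FADD is invalidated, FMUL executes

    The split is the point of the protocol: the opcode stream reaches the slave early enough for pre-decoding to begin, while operands that arrive out of their expected order only cost the invalidation of the stale entries.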


    System with a N stages timing silo and P stages information silo for soloing information
    2.
    Invention Grant (Expired)

    Publication No.: US4958274A

    Publication Date: 1990-09-18

    Application No.: US201481

    Application Date: 1988-06-01

    CPC classification number: G06F15/8084

    Abstract: A method and arrangement for siloing information in a computer system uses a smaller number of large-size latches by providing a timing silo having a set of n timing state devices sequentially connected for receiving and siloing at least one bit. The arrangement also has an information silo with a set of p information state devices which are sequentially connected for receiving and siloing information. These information state devices have device enables coupled to separate locations in the timing silo, so that a bit at a particular location in the timing silo enables the information state device coupled to that location. In this arrangement the number p of information state devices is less than the number n of timing state devices, so fewer large-size latches are needed. The invention also finds use in resetting a control module in a processor after a trap, by providing a timing silo which keeps track of the number of addresses generated within the trap shadow. Upon receiving a signal that a trap has occurred, the total number of addresses generated within the trap shadow is indicated by the timing silo, and a uniform stride is subtracted from the current address until the trap-causing address is reached. With this arrangement, a large number of large-size latches is not needed to silo all of the virtual addresses in the trap shadow; instead, only one bit needs to be siloed in the timing silo, since the addresses have a uniform stride.
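
    As a rough software picture of the arrangement (not the hardware itself), the minimal Python sketch below assumes an 8-stage timing silo modeled as a one-bit shift register and three information latches whose enables tap stages 1, 4 and 7; the tap positions, the sizes, and the stride and count used in the trap-shadow example are illustrative values, not taken from the patent.

    N_STAGES = 8
    TAP_POINTS = [1, 4, 7]                # timing-silo stages that enable the information latches (illustrative)

    timing_silo = [0] * N_STAGES          # only a single bit is siloed per event
    info_silo = [None] * len(TAP_POINTS)  # p information latches, p < n

    def clock(valid_bit, info_word):
        # Shift the bit one stage deeper; any latch whose tap stage now holds a 1 captures the word.
        global timing_silo
        timing_silo = [valid_bit] + timing_silo[:-1]
        for latch, stage in enumerate(TAP_POINTS):
            if timing_silo[stage]:
                info_silo[latch] = info_word

    # Trap-shadow recovery: the addresses generated in the shadow have a uniform stride, so only
    # their count needs to be siloed; the trap-causing address is recovered by stepping backwards.
    def trap_address(current_addr, stride, addrs_in_shadow):
        addr = current_addr
        for _ in range(addrs_in_shadow):
            addr -= stride
        return addr

    print(hex(trap_address(0x1040, 0x8, 4)))   # 0x1020, four strides behind the current address

    Because p is smaller than n, only the tapped stages need full-width storage; the remaining stages carry nothing more than the single timing bit, which is where the saving in large-size latches comes from.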


    APPARATUS AND METHOD FOR PROTECTING MOUNTED ALMEN STRIPS
    3.
    Invention Application (Pending, Published)

    Publication No.: US20120199506A1

    Publication Date: 2012-08-09

    Application No.: US13365896

    Application Date: 2012-02-03

    Abstract: A protected test strip holder according to the present invention includes a test strip holder onto which a test strip, such as for example an Almen strip, may be mounted using fasteners provided on the test strip holder. A protective covering is form-fitted to the test strip holder and, optionally, to the test strip holder having a test strip mounted thereon. The present invention also comprises a method for protecting and storing a test strip holder that includes providing a test strip holder, forming or molding a protective covering form-fitting to the test strip holder, and placing the protective covering over the test strip holder. The test strip holder may have a test strip mounted thereon prior to molding the form-fitting protective covering.


    Cache with at least two fill rates
    4.
    Invention Grant (Expired)

    Publication No.: US5038278A

    Publication Date: 1991-08-06

    Application No.: US611337

    Application Date: 1990-11-09

    CPC classification number: G06F12/0842

    Abstract: During the operation of a computer system whose processor is supported by a virtual cache memory, the cache must be cleared and refilled to allow the replacement of old data with more current data. The cache is filled with either P or N (N > P) blocks of data, and numerous methods for dynamically selecting between N and P blocks are possible. For instance, immediately after the cache has been flushed, a miss is refilled with N blocks, moving data into the cache at high speed. Once the cache is mostly full, a miss tends to be refilled with P blocks. This keeps the data in the cache current while avoiding overwriting data already in the cache. The invention is useful in a multi-user/multi-tasking system where the program being run changes frequently, necessitating frequent flushing and clearing of the cache.
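
    As a software analogy for the fill-rate selection (not the patented cache hardware), the Python sketch below uses the large fill size while the cache is mostly empty, for example right after a flush, and drops to the small fill size once occupancy crosses a threshold. N, P, the threshold, and the cache size are illustrative values, not taken from the patent.

    N_BLOCKS = 8             # fast fill rate, used while the cache is mostly empty
    P_BLOCKS = 2             # slow fill rate, used once the cache is mostly full
    FULL_THRESHOLD = 0.75    # occupancy above which the small fill size is chosen

    class TwoRateCache:
        def __init__(self, num_lines=64):
            self.lines = {}              # tag -> cached block
            self.capacity = num_lines

        def occupancy(self):
            return len(self.lines) / self.capacity

        def fill_size(self):
            # Large refills repopulate a freshly flushed cache quickly; small refills
            # avoid overwriting data that is already resident and still useful.
            return P_BLOCKS if self.occupancy() >= FULL_THRESHOLD else N_BLOCKS

        def miss(self, addr, fetch_block):
            size = self.fill_size()
            for offset in range(size):   # bring in `size` consecutive blocks on a miss
                tag = addr + offset
                self.lines.setdefault(tag, fetch_block(tag))
            return size

    cache = TwoRateCache()
    print(cache.miss(0x100, lambda tag: f"block@{tag:#x}"))   # prints 8: the large fill size on a cold cache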

