Arbitration means for controlling access to a bus shared by a number of modules
    1.
    Granted patent (expired)

    Publication No.: US4473880A

    Publication date: 1984-09-25

    Application No.: US342837

    Filing date: 1982-01-26

    IPC classes: G06F13/374 G06F3/00 H04J6/00

    CPC classes: G06F13/374

    Abstract: An arbitration mechanism comprising a request FIFO (408) for storing ones and zeros corresponding to received requests in the order that they are made. A one indicates that the request was made by the module in which the FIFO is located, and a zero indicates that the request was made by one of a number of other similar modules. The request status information from the other modules is received over signal lines (411) connected between the modules. This logic separates multiple requests into time-ordered slots, such that all requests in a particular time slot may be serviced before any requests in the next time slot. A store (409) stores a unique logical module number. An arbiter (410) examines this logical number bit-by-bit in successive cycles and places a one in a grant queue (412) upon the condition that the bit examined in a particular cycle is a zero, and signals this condition over the signal lines. If the bit examined in a particular cycle is a one, the arbiter drops out of contention and signals this condition over the signal lines (411). This logic orders multiple requests made by different modules within a single time slot in accordance with the logical module numbers of the modules making the requests. The grant queue (412) stores status information (ones and zeros) corresponding to granted requests in the order that they are granted--a one indicating that the granted request was granted to the module in which the grant queue is located, and a zero indicating that the granted request was granted to one of the other modules. The granted request status information from the other modules is received over the signal lines (411). This logic separates multiple granted requests such that only one request corresponding to a particular module is at the head of any one grant queue at any one time.

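    The bit-serial contention scheme in the abstract can be illustrated with a small simulation. The sketch below is an interpretation, not the patented circuit: the module IDs, the 3-bit ID width, and the rule that a contender holding a 1 drops out only when some remaining contender signals a 0 are assumptions drawn from the abstract.

```c
/*
 * Illustrative sketch (not the patent's logic diagram): ordering the requests
 * of one time slot by examining logical module numbers bit by bit, most
 * significant bit first.  A module whose current bit is 1 drops out whenever
 * some remaining contender signals a 0 in that position; the survivor is
 * granted first.  Names and the ID width are assumptions.
 */
#include <stdbool.h>
#include <stdio.h>

#define N_MODULES 4
#define ID_BITS   3   /* width of the logical module number (assumed) */

/* Return the index of the contender granted first in this time slot. */
static int arbitrate_slot(const unsigned char id[], const bool requesting[])
{
    bool in_contention[N_MODULES];
    for (int m = 0; m < N_MODULES; m++)
        in_contention[m] = requesting[m];

    for (int bit = ID_BITS - 1; bit >= 0; bit--) {
        /* A contender "signals a zero" on the shared line if its bit is 0. */
        bool zero_seen = false;
        for (int m = 0; m < N_MODULES; m++)
            if (in_contention[m] && !((id[m] >> bit) & 1))
                zero_seen = true;

        /* Contenders whose bit is 1 drop out if any contender signalled 0. */
        if (zero_seen)
            for (int m = 0; m < N_MODULES; m++)
                if (in_contention[m] && ((id[m] >> bit) & 1))
                    in_contention[m] = false;
    }

    for (int m = 0; m < N_MODULES; m++)
        if (in_contention[m])
            return m;          /* exactly one survivor, since IDs are unique */
    return -1;                 /* no requests in this slot */
}

int main(void)
{
    unsigned char id[N_MODULES] = { 5, 2, 6, 3 };
    bool requesting[N_MODULES]  = { true, true, false, true };
    printf("granted first: module %d\n", arbitrate_slot(id, requesting));
    return 0;
}
```

    Under these assumptions the module with the numerically smallest logical number among the slot's requesters wins first; repeating the procedure on the remaining requesters yields the full within-slot order.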

    Microinstruction execution unit for use in a microprocessor
    2.
    Granted patent (expired)

    Publication No.: US4367524A

    Publication date: 1983-01-04

    Application No.: US119432

    Filing date: 1980-02-07

    CPC classes: G06F9/226 G06F9/3824

    Abstract: An execution unit which is part of a general-purpose microprocessor, partitioned between two integrated circuit chips, with the execution unit on one chip and an instruction unit on the other. The execution unit provides the interface for accessing a main memory, thereby fetching data and macroinstructions for transfer to the instruction unit when requested to do so by the instruction unit. The execution unit receives arithmetic microinstructions in order to perform various arithmetic operations, and receives access-memory microinstructions in order to develop memory references from logical addresses received from the instruction unit. Arithmetic operations are performed by a data manipulation unit which contains registers and arithmetic capability, controlled by a math sequencer. Memory references are performed by a reference-generation unit which contains base-and-length registers and an arithmetic capability to generate and check addresses for referencing an off-chip main memory, and is controlled by an access sequencer.

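    As a rough illustration of the base-and-length checking performed by the reference-generation unit, the sketch below shows one way such a bounds check could look in software; the structure, field names, and values are assumptions for the example, not the patent's register layout.

```c
/*
 * Minimal sketch of a base-and-length check a reference-generation unit
 * might apply before issuing an off-chip memory reference.  All names and
 * widths here are assumptions made for illustration.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct segment {
    uint32_t base;    /* starting physical address of the segment */
    uint32_t length;  /* number of addressable bytes in the segment */
};

/* Translate a logical offset into a physical address, checking bounds. */
static bool generate_reference(const struct segment *seg,
                               uint32_t offset,
                               uint32_t *physical)
{
    if (offset >= seg->length)
        return false;          /* out of bounds: would raise an access fault */
    *physical = seg->base + offset;
    return true;
}

int main(void)
{
    struct segment seg = { 0x4000, 256 };   /* example: 256-byte segment */
    uint32_t phys = 0;
    if (generate_reference(&seg, 64, &phys))
        printf("offset 64 -> physical 0x%x\n", phys);
    if (!generate_reference(&seg, 300, &phys))
        printf("offset 300 -> access fault (beyond segment length)\n");
    return 0;
}
```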

    Macroinstruction translator unit for use in a microprocessor
    3.
    Granted patent (expired)

    Publication No.: US4415969A

    Publication date: 1983-11-15

    Application No.: US119433

    Filing date: 1980-02-07

    Abstract: An instruction translator unit which receives an instruction stream from a main memory of a microprocessor, for latching data fields, for generating microinstructions necessary to emulate the function encoded in an instruction, and for transferring the data and microinstructions to a microinstruction execution unit over an output bus. The instruction unit includes an instruction decoder (ID) which interprets the fields of received instructions and generates single forced microinstructions and starting addresses of multiple-microinstruction routines. A microinstruction sequencer (MIS) accepts the forced microinstructions and the starting addresses and places on the output bus the correct microinstruction sequences necessary to execute the received instruction. The microinstruction routines are stored in a read-only memory (ROM) in the MIS. The starting addresses received from the ID are used to index into and to fetch these microinstructions from the ROM. Forced microinstructions bypass the ROM and are transferred directly by the MIS to the execution unit. The ID processes macroinstructions composed of variable-bit-length fields by utilizing an extractor in conjunction with a bit pointer (BIP) for stripping off the bits comprising a particular field. The extracted field is presented to a state machine which decodes the particular field and generates data, microinstructions and starting addresses relating to the particular field for use by the MIS. The state machine then updates the BIP by the bit count of the particular field so that it points to the next field to be extracted.

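    The extractor and bit-pointer (BIP) mechanism lends itself to a short sketch. The code below is an assumed software analogue (LSB-first bit order, helper names invented for the example), not the hardware extractor described in the patent.

```c
/*
 * Illustrative sketch of variable-length field extraction driven by a bit
 * pointer (BIP): strip the bits of one field, then advance the pointer by
 * the field's bit count so it names the next field.  Bit ordering and names
 * are assumptions made for this example.
 */
#include <stdint.h>
#include <stdio.h>

struct extractor {
    const uint8_t *stream;  /* macroinstruction byte stream */
    unsigned bip;           /* bit pointer: index of the next bit to extract */
};

/* Strip the next `width` bits (width <= 32) and advance the bit pointer. */
static uint32_t extract_field(struct extractor *ex, unsigned width)
{
    uint32_t field = 0;
    for (unsigned i = 0; i < width; i++) {
        unsigned bit_index = ex->bip + i;
        unsigned bit = (ex->stream[bit_index / 8] >> (bit_index % 8)) & 1;
        field |= (uint32_t)bit << i;     /* least-significant bit first */
    }
    ex->bip += width;                    /* BIP now points at the next field */
    return field;
}

int main(void)
{
    uint8_t stream[] = { 0xB5, 0x01 };   /* example instruction bytes */
    struct extractor ex = { stream, 0 };
    printf("first field (6 bits): %u\n", extract_field(&ex, 6));
    printf("next field  (4 bits): %u\n", extract_field(&ex, 4));
    return 0;
}
```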

    Data processing system
    4.
    Granted patent (expired)

    Publication No.: US4325120A

    Publication date: 1982-04-13

    Application No.: US971661

    Filing date: 1978-12-21

    Abstract: A data processor architecture wherein the processors recognize two basic types of objects, an object being a representation of related information maintained in a contiguously addressed set of memory locations. The first type of object contains ordinary data, such as characters, integers, reals, etc. The second type of object contains a list of access descriptors. Each access descriptor provides information for locating and defining the extent of access to an object associated with that access descriptor. The processors recognize complex objects that are combinations of objects of the basic types. One such complex object (a context) defines an environment for execution of objects accessible to a given instance of a procedural operation. The dispatching of tasks to the processors is accomplished by hardware-controlled queuing mechanisms (dispatching-port objects) which allow multiple sets of processors to serve multiple, but independent, sets of tasks. Communication between asynchronous tasks or processes is accomplished by related hardware-controlled queuing mechanisms (buffered-port objects) which allow messages to move between internal processes or input/output processes without the need for interrupts. A mechanism is provided which allows the processors to communicate with each other. This mechanism is used to reawaken an idle processor and alert it to the fact that a ready-to-run process at a dispatching port needs execution.

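    A minimal data-structure sketch of the two basic object kinds and an access descriptor may help fix the terminology; every field name and width below is an assumption, since the patent defines these objects at the architecture level rather than as C structures.

```c
/*
 * Sketch only: the two basic object kinds and an access descriptor, with
 * assumed fields.  An access descriptor locates an object and bounds the
 * holder's rights to it; an access-list object holds such descriptors.
 */
#include <stddef.h>
#include <stdint.h>

enum object_kind { DATA_OBJECT, ACCESS_LIST_OBJECT };

/* Locates an object and defines the extent of access to it (assumed layout). */
struct access_descriptor {
    uint32_t object_base;     /* where the referenced object starts */
    uint32_t object_length;   /* extent of the contiguous object */
    uint32_t rights;          /* e.g. read / write permission bits */
};

struct object {
    enum object_kind kind;
    union {
        struct {              /* first type: ordinary data */
            uint8_t *bytes;
            size_t   length;
        } data;
        struct {              /* second type: list of access descriptors */
            struct access_descriptor *entries;
            size_t count;
        } access_list;
    } u;
};
```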

    High performance computer system
    5.
    Granted patent (expired)

    Publication No.: US5113523A

    Publication date: 1992-05-12

    Application No.: US731170

    Filing date: 1985-05-06

    IPC classes: G06F15/173

    CPC classes: G06F15/17343

    Abstract: A parallel processor comprised of a plurality of processing nodes (10), each node including a processor (100-114) and a memory (116). Each processor includes means (100, 102) for executing instructions, logic means (114) connected to the memory for interfacing the processor with the memory, and means (112) for internode communication. The internode communication means (112) connect the nodes to form a first array (8) of order n having a hypercube topology. A second array (21) of order n, having nodes (22) connected together in a hypercube topology, is interconnected with the first array to form an order n+1 array. The order n+1 array is made up of the first and second arrays of order n, such that a parallel processor system may be structured with any number of processors that is a power of two. A set of I/O processors (24) are connected to the nodes of the arrays (8, 21) by means of I/O channels (106). The means for internode communication (112) comprises a serial data channel driven by a clock that is common to all of the nodes.

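    The hypercube connectivity rule (neighbors differ in exactly one bit of the node number) and the way two order-n cubes join into an order-n+1 cube can be sketched as follows; the routine names and the order-3 example are assumptions for illustration.

```c
/*
 * Sketch of hypercube connectivity for an order-n array: node i is linked
 * along dimension d to the node whose number differs only in bit d, and two
 * order-n cubes combine into an order-(n+1) cube by pairing equal node
 * numbers across one extra dimension.
 */
#include <stdio.h>

/* Neighbor of `node` across dimension `d` in a hypercube. */
static unsigned neighbor(unsigned node, unsigned d)
{
    return node ^ (1u << d);
}

int main(void)
{
    unsigned n = 3;                      /* order-3 cube: 8 nodes */
    for (unsigned node = 0; node < (1u << n); node++) {
        printf("node %u neighbors:", node);
        for (unsigned d = 0; d < n; d++)
            printf(" %u", neighbor(node, d));
        /* Joining a second order-3 cube adds one more link per node: */
        printf("  | order-%u partner: %u\n", n + 1, neighbor(node, n));
    }
    return 0;
}
```

    The doubling step is what lets the system scale to any power-of-two processor count: each increment of the order adds one port per node and pairs every node with its counterpart in the newly attached cube.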

    Broadcast instruction for use in a high performance computer system
    6.
    Granted patent (expired)

    Publication No.: US4729095A

    Publication date: 1988-03-01

    Application No.: US864596

    Filing date: 1986-05-19

    Abstract: A broadcast pointer instruction has a first source operand (address pointer value) which is the starting address in a memory of message data to be broadcast to a number of processors through output ports. The broadcast pointer instruction has a first destination operand (first multibit mask), there being one bit position in the first mask for each one of the plurality of output ports. The address pointer value is loaded into each of the output ports whose numbers correspond to bit positions in the first mask that are set to be one, such that each output port that is designated in the first mask receives the starting address of the message data in the memory. A broadcast count instruction has a second source operand (a byte count value) equal to the number of bytes in the message data. The broadcast count instruction has a second destination operand (a second multibit mask), there being one bit position in the second mask for each one of the plurality of output ports. The byte count value is sent to each of the output ports whose numbers correspond to bit positions in the second mask register that are set to be one, such that each output port that is designated in the second mask receives the byte count value corresponding to the number of bytes in the message data that are to be transferred from the memory. Once the byte count is initialized, data are transferred from the starting address in memory over each output port designated in the masks, until the byte count is decremented to zero.

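    A behavioural sketch of the broadcast-pointer and broadcast-count pair: the same starting address and byte count are handed to every output port whose bit is set in a mask. The port structure, function names, and port count below are assumptions; in the patent these are processor instructions acting on hardware output ports.

```c
/*
 * Sketch of the two broadcast instructions as software routines: each loads
 * one value (address or byte count) into every output port selected by a
 * bit mask.  Transfers would begin once both values are initialized.
 */
#include <stdint.h>
#include <stdio.h>

#define N_PORTS 8

struct output_port {
    uint32_t start_addr;   /* starting address of the message in memory */
    uint32_t byte_count;   /* bytes remaining to transfer */
};

static struct output_port ports[N_PORTS];

/* Broadcast-pointer: load the address into every port selected by mask. */
static void broadcast_pointer(uint32_t addr, uint8_t mask)
{
    for (int p = 0; p < N_PORTS; p++)
        if (mask & (1u << p))
            ports[p].start_addr = addr;
}

/* Broadcast-count: load the byte count into every port selected by mask. */
static void broadcast_count(uint32_t count, uint8_t mask)
{
    for (int p = 0; p < N_PORTS; p++)
        if (mask & (1u << p))
            ports[p].byte_count = count;
}

int main(void)
{
    broadcast_pointer(0x1000, 0x0Fu);   /* message at 0x1000, ports 0-3 */
    broadcast_count(256, 0x0Fu);        /* 256 bytes to each selected port */
    for (int p = 0; p < 4; p++)
        printf("port %d: addr=0x%x count=%u\n",
               p, ports[p].start_addr, ports[p].byte_count);
    return 0;
}
```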

Hypercube processor network in which the processor identification numbers of two processors connected to each other through port number n vary only in the nth bit
    7.
    Granted patent (expired)

    Publication No.: US5367636A

    Publication date: 1994-11-22

    Application No.: US144544

    Filing date: 1993-11-01

    CPC classes: G06F15/17343

    Abstract: A parallel processor network comprised of a plurality of nodes, each node including a processor containing a number of I/O ports, and a local memory. Each processor in the network is assigned a unique processor ID (202) such that the processor IDs of two processors connected to each other through port number n vary only in the nth bit. Input message decoding means (200) and compare logic and message routing logic (204) create a message path through the processor in response to the decoding of an address message packet and remove the message path in response to the decoding of an end-of-transmission (EOT) packet. Each address message packet includes a Forward bit used to send a message to a remote destination either within the network or in a foreign network. Each address packet includes Node Address bits that contain the processor ID of the destination node, if the destination node is in the local network. If the destination node is in a foreign network space, the destination node must be directly connected to a node in the local network space. In this case, the Node Address bits contain the processor ID of the local node connected to the destination node. Path creation means in said processor node compares the masked node address with its own processor ID and sends the address packet out the port number corresponding to the bit position of the first difference between the masked node address and its own processor ID, starting at bit n+1, where n is the number of the port on which the message was received.

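    The routing rule in the last sentence of the abstract can be sketched directly: starting at bit n+1, where n is the arrival port, forward on the port corresponding to the first bit in which the destination ID differs from the local ID. The function name and the 16-bit ID width below are assumptions for the example.

```c
/*
 * Sketch of the routing decision described above: because two processors
 * joined through port n differ only in bit n of their IDs, the outgoing
 * port number equals the bit position of the first ID difference at or
 * above bit (arrival_port + 1).
 */
#include <stdint.h>
#include <stdio.h>

#define ID_BITS 16

/* Return the outgoing port, or -1 if this node is the destination. */
static int route_port(uint16_t my_id, uint16_t dest_id, int arrival_port)
{
    uint16_t diff = my_id ^ dest_id;         /* positions where the IDs differ */
    for (int bit = arrival_port + 1; bit < ID_BITS; bit++)
        if (diff & (1u << bit))
            return bit;                      /* port number equals bit number */
    return -1;                               /* consume the message locally */
}

int main(void)
{
    /* Node 0b0101 receives on port 0 a packet addressed to node 0b1100. */
    int port = route_port(0x5, 0xC, 0);
    printf("forward on port %d\n", port);    /* bits 0 and 3 differ; first above 0 is 3 */
    return 0;
}
```

    Scanning upward from the arrival port keeps the path monotone in bit order, so a message corrects one differing bit per hop and cannot revisit a node.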