1. Arbitration means for controlling access to a bus shared by a number of
modules (granted patent; expired)

    Publication No.: US4473880A

    Publication date: 1984-09-25

    Application No.: US342837

    Filing date: 1982-01-26

    IPC classes: G06F13/374 G06F3/00 H04J6/00

    CPC classes: G06F13/374

    Abstract: An arbitration mechanism comprising a request FIFO (408) for storing ones and zeros corresponding to received requests in the order that they are made. A one indicates that the request was made by the module in which the FIFO is located; a zero indicates that the request was made by one of a number of other similar modules. The request status information from the other modules is received over signal lines (411) connected between the modules. This logic separates multiple requests into time-ordered slots, such that all requests in a particular time slot may be serviced before any requests in the next time slot. A store (409) holds a unique logical module number. An arbiter (410) examines this logical number bit by bit in successive cycles and places a one in a grant queue (412) on the condition that the bit examined in a particular cycle is a zero, signaling this condition over the signal lines. If the bit examined in a particular cycle is a one, the arbiter drops out of contention and signals this condition over the signal lines (411). This logic orders multiple requests made by multiple modules within a single time slot according to the logical module numbers of the requesting modules. The grant queue (412) stores status information (ones and zeros) corresponding to granted requests in the order that they are granted: a one indicates that the request was granted to the module in which the grant queue is located, and a zero indicates that it was granted to one of the other modules. The granted-request status information from the other modules is received over the signal lines (411). This logic separates multiple granted requests such that only one request corresponding to a particular module is at the head of any one grant queue at any one time.

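The bit-serial arbitration described in the abstract can be sketched in software. This is a simplified model, not the patented hardware: the rule that modules driving a one drop out whenever any contender in the same time slot drives a zero (so the lowest logical module number wins) is an assumption about how the shared signal lines resolve contention.

```python
def arbitrate(contenders, width=4):
    """Bit-serial arbitration over logical module numbers, MSB first.

    Simplified model: in each cycle every remaining contender drives the
    current bit of its logical module number onto a shared line; if any
    contender drives a zero, those driving a one drop out of contention.
    The lowest logical module number in the slot wins.
    """
    remaining = set(contenders)
    for cycle in range(width - 1, -1, -1):          # examine MSB first
        bits = {m: (m >> cycle) & 1 for m in remaining}
        if 0 in bits.values():
            remaining = {m for m, b in bits.items() if b == 0}
    (winner,) = remaining
    return winner
```

For example, with modules 5, 2, and 7 requesting in the same time slot, module 2 wins because it is the first to survive every bit-comparison cycle.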

2. Interface for use between a memory and components of a module switching
apparatus

    Publication No.: US4480307A

    Publication date: 1984-10-30

    Application No.: US336866

    Filing date: 1982-01-04

    IPC classes: G06F13/16 G06F13/00

    CPC classes: G06F13/1615

    Abstract: A number of intelligent bus interface units (100) are provided in a matrix of orthogonal lines interconnecting processor modules (110) and memory control unit (MCU) modules (112). The matrix is composed of processor buses (105) and corresponding control lines, and memory buses (107) with corresponding control lines (108). At the intersection of these lines is a bus interface unit node (100). The bus interface units function to pass memory requests from a processor module to a memory module attached to an MCU node and to pass any data associated with the requests. The memory bus is a packet-oriented bus. Accesses are handled by means of a series of messages transmitted by a message generator (417) in accordance with a specific control protocol. Packets comprising one or more bus transmission slots are issued sequentially and contiguously. Each slot in a packet includes opcode, address, data, control, and parity-check bits. Write-request packets and read-request packets are issued to the memory-control unit, which responds with reply packets. A message controller (416), bus monitor (413), and pipeline and reply monitor (414) run the memory bus in a three-level pipeline mode. There may be three outstanding requests in the bus pipeline; any further requests must wait for a reply message to free up a slot in the pipeline before proceeding. Request messages increase the length of the pipeline and reply messages decrease it. A control message, called a blurb, does not affect the pipeline length and can be issued when the pipeline is not full. The different messages are distinguished by three control signals (405) that parallel the data portion of the bus. The message generator (417) and interface logic (404) drive these control lines to indicate the message type, the start and end of the message, and possible error conditions. The pipeline and reply monitor (414) and the message controller (416) cooperate to insert a reply into the pipeline position corresponding to the particular request that invoked it.
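The three-level pipeline behavior of the memory bus can be modeled in a few lines. The class and method names are illustrative; only the depth-of-three rule and the effect of request, reply, and blurb messages on pipeline length come from the abstract.

```python
class MemoryBusPipeline:
    """Sketch of the three-deep request/reply pipeline described above.

    A request occupies a pipeline slot; a reply frees the slot of the
    oldest outstanding request; a 'blurb' control message may be issued
    only when the pipeline is not full, but leaves its length unchanged.
    """
    DEPTH = 3

    def __init__(self):
        self.outstanding = []          # requests awaiting replies, oldest first

    def issue_request(self, req):
        if len(self.outstanding) >= self.DEPTH:
            return False               # must wait for a reply to free a slot
        self.outstanding.append(req)
        return True

    def receive_reply(self):
        # replies arrive in the pipeline position of the request they answer
        return self.outstanding.pop(0) if self.outstanding else None

    def issue_blurb(self):
        # allowed only when the pipeline is not full; length unchanged
        return len(self.outstanding) < self.DEPTH
```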

3. Hypercube processor network in which the processor identification
numbers of two processors connected to each other through port number
n vary only in the nth bit (granted patent; expired)

    Publication No.: US5367636A

    Publication date: 1994-11-22

    Application No.: US144544

    Filing date: 1993-11-01

    CPC classes: G06F15/17343

    Abstract: A parallel processor network comprised of a plurality of nodes, each node including a processor containing a number of I/O ports, and a local memory. Each processor in the network is assigned a unique processor ID (202) such that the processor IDs of two processors connected to each other through port number n vary only in the nth bit. Input message decoding means (200) and compare logic and message routing logic (204) create a message path through the processor in response to the decoding of an address message packet, and remove the message path in response to the decoding of an end-of-transmission (EOT) packet. Each address message packet includes a Forward bit used to send a message to a remote destination either within the network or in a foreign network. Each address packet includes Node Address bits that contain the processor ID of the destination node, if the destination node is in the local network. If the destination node is in a foreign network space, the destination node must be directly connected to a node in the local network space; in this case, the Node Address bits contain the processor ID of the local node connected to the destination node. Path creation means in said processor node compares the masked node address with its own processor ID and sends the address packet out the port number corresponding to the bit position of the first difference between the masked node address and its own processor ID, starting at bit n+1, where n is the number of the port on which the message was received.

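The port-selection rule in the last sentence of the abstract can be written down directly: XOR the (masked) destination address with the local processor ID and pick the first differing bit above the input port number. The bit width and the use of -1 as the "injection" port for locally generated messages are assumptions.

```python
def next_port(own_id, dest_id, in_port, width=4):
    """Pick the output port for a message in the hypercube network above.

    Port numbers equal bit positions: neighbors across port n differ only
    in bit n of their processor IDs.  Scanning starts at bit in_port + 1,
    so each hop corrects a higher-order bit and routes cannot loop.
    Returns None when the message has reached its destination.
    """
    diff = own_id ^ dest_id            # bits still to be corrected
    if diff == 0:
        return None                    # message is for this node
    for bit in range(in_port + 1, width):
        if (diff >> bit) & 1:
            return bit
    return None
```

Tracing a message from node 0b0000 to node 0b0101: injected at port 0, received by 0b0001 on port 0 and forwarded on port 2, then received by 0b0101, where routing stops.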

4. High performance computer system (granted patent; expired)

    Publication No.: US5113523A

    Publication date: 1992-05-12

    Application No.: US731170

    Filing date: 1985-05-06

    IPC classes: G06F15/173

    CPC classes: G06F15/17343

    Abstract: A parallel processor comprised of a plurality of processing nodes (10), each node including a processor (100-114) and a memory (116). Each processor includes means (100, 102) for executing instructions, logic means (114) connected to the memory for interfacing the processor with the memory, and means (112) for internode communication. The internode communication means (112) connect the nodes to form a first array (8) of order n having a hypercube topology. A second array (21) of order n, having nodes (22) connected together in a hypercube topology, is interconnected with the first array to form an order n+1 array. The order n+1 array is made up of the first and second arrays of order n, such that a parallel processor system may be structured with any number of processors that is a power of two. A set of I/O processors (24) are connected to the nodes of the arrays (8, 21) by means of I/O channels (106). The means for internode communication (112) comprises a serial data channel driven by a clock that is common to all of the nodes.

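The defining property of this topology, that two nodes joined through port n differ only in bit n of their IDs, makes the link structure easy to enumerate. The sketch below also reflects how an order n+1 cube is built: it is two order-n cubes whose corresponding nodes are joined through the new highest-numbered port.

```python
def hypercube_links(order):
    """Enumerate the links of an order-`order` hypercube of 2**order nodes.

    Two nodes are joined through port n exactly when their IDs differ
    only in bit n.  Listing each link once from the node whose bit n is
    zero yields order * 2**(order-1) links in total.
    """
    nodes = range(2 ** order)
    return [(a, a | (1 << n), n)        # (node, neighbor, port number)
            for a in nodes
            for n in range(order)
            if not (a >> n) & 1]
```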

5. Broadcast instruction for use in a high performance computer system (granted patent; expired)

    Publication No.: US4729095A

    Publication date: 1988-03-01

    Application No.: US864596

    Filing date: 1986-05-19

    Abstract: A broadcast pointer instruction has a first source operand (an address pointer value) which is the starting address in a memory of message data to be broadcast to a number of processors through output ports. The broadcast pointer instruction has a first destination operand (a first multibit mask), there being one bit position in the first mask for each one of the plurality of output ports. The address pointer value is loaded into each of the output ports whose numbers correspond to bit positions in the first mask that are set to one, such that each output port designated in the first mask receives the starting address of the message data in the memory. A broadcast count instruction has a second source operand (a byte count value) equal to the number of bytes in the message data. The broadcast count instruction has a second destination operand (a second multibit mask), there being one bit position in the second mask for each one of the plurality of output ports. The byte count value is sent to each of the output ports whose numbers correspond to bit positions in the second mask that are set to one, such that each output port designated in the second mask receives the byte count value corresponding to the number of bytes in the message data that are to be transferred from the memory. Once the byte count is initialized, data are transferred from the starting address in memory over each output port designated in the masks, until the byte count is decremented to zero.

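The net effect of the broadcast pointer/count pair can be modeled as a single function. The flat bytearray standing in for node memory and the function signature are illustrative assumptions; the mask-selects-ports-by-bit-position behavior is from the abstract.

```python
def broadcast(memory, start, count, port_mask, num_ports=8):
    """Model of the broadcast pointer/count instruction pair above.

    The mask selects output ports by bit position; every selected port
    streams the same `count` bytes beginning at address `start`.
    Returns a dict mapping port number to the bytes it transmitted.
    """
    data = bytes(memory[start:start + count])
    return {port: data
            for port in range(num_ports)
            if (port_mask >> port) & 1}
```

For example, a mask of 0b0101 sends the same message over ports 0 and 2 only.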

6. Microprocessor providing an interface between a peripheral subsystem and
an object-oriented data processor (granted patent; expired)

    Publication No.: US4407016A

    Publication date: 1983-09-27

    Application No.: US235470

    Filing date: 1981-02-18

    Abstract: A microprocessor receives addresses and data from a peripheral subsystem for use in subsequently accessing portions of the main memory of a data processing system in a controlled and protected manner. Each of the addresses is used to interrogate an associative memory to determine whether the address falls within one of the subranges forming a "window" on the main memory address space. If the address matches, it is used to develop a corresponding address in the main memory address space. The data associated with the peripheral subsystem address is then passed through the interface and into main memory at the translated address. Data transfer is improved by buffering blocks of data on the microprocessor: data bytes are written into the buffer at a slower rate than data blocks are read out of the buffer and into main memory. A buffer bypass register allows single bytes of data to be transferred to a single address by bypassing the buffer. For block transfers, address development and memory response signals are generated by the microprocessor rather than the peripheral subsystem processor.

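The window-based address check and translation can be sketched as follows; the dictionary layout standing in for the associative memory's subranges is an illustrative assumption.

```python
def translate(addr, windows):
    """Window-style address translation as in the abstract above.

    `windows` maps an inclusive (base, limit) subrange of the peripheral
    address space to a base address in main memory.  An address that
    falls in no window is rejected, giving the controlled, protected
    access the abstract describes.
    """
    for (lo, hi), main_base in windows.items():
        if lo <= addr <= hi:
            return main_base + (addr - lo)   # offset within the window
    raise PermissionError(f"address {addr:#x} falls outside every window")
```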

7. Phase-multiplexed CCD transversal filter (granted patent; expired)

    Publication No.: US4243958A

    Publication date: 1981-01-06

    Application No.: US33361

    Filing date: 1979-04-26

    Applicant: Doran K. Wilde

    Inventor: Doran K. Wilde

    IPC classes: H03H15/02 G11C19/28 G11C27/00

    CPC classes: H03H15/02

    Abstract: A phase-multiplexed CCD transversal filter includes N substantially identical parallel-connected CCDs which acquire samples in a predetermined consecutive order over a given clock cycle, so that the apparent sampling frequency is equal to N times the clock frequency. The output taps of the CCDs are weighted in a predetermined manner to provide a filter having a predetermined transfer function.

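A toy numeric model of the phase multiplexing: de-interleaving the input into N parallel delay lines (one per CCD, each holding every Nth sample) and re-interleaving before the weighted-tap sum reproduces an ordinary transversal (FIR) filter response at the apparent N-times sampling rate. All names here are illustrative.

```python
def phase_multiplexed_filter(signal, taps, n_phases):
    """Toy model of the phase-multiplexed transversal filter above.

    Each of the n_phases delay lines acquires every n_phases-th sample
    in round-robin order; re-interleaving the lines recovers the stream
    at the apparent sampling rate, and the tap weights then form a
    standard FIR (transversal) filter output.
    """
    # de-interleave: CCD p holds samples p, p + N, p + 2N, ...
    phases = [signal[p::n_phases] for p in range(n_phases)]
    # re-interleave: sample k lives in CCD k % N at position k // N
    merged = [phases[k % n_phases][k // n_phases] for k in range(len(signal))]
    # weighted-tap sum over the re-interleaved stream
    return [sum(t * merged[k - i] for i, t in enumerate(taps))
            for k in range(len(taps) - 1, len(merged))]
```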