Abstract:
A master-slave processor interface protocol transfers a plurality of instructions from a master processor to a slave processor. Each instruction has an opcode and a set of operands. The interface includes a first micro-engine which sends the opcode for each of the instructions to be executed to the slave processor and stores the opcode in a first buffer in the slave processor. A second micro-engine operates the master processor to fetch and process the set of operands for each of the instructions to be executed by the slave processor, in the order of opcode delivery to the first buffer. A third micro-engine delivers a signal to the slave processor when the master processor is ready to deliver the operands for an instruction. Upon receiving the signal from the master processor, the opcode associated with the operands ready to be delivered is moved from the first buffer to a second buffer. The processed set of operands is then sent to the second buffer and the instruction is executed. Finally, any opcodes in the first buffer whose sets of operands were not delivered in their proper order are invalidated when a new opcode is sent to the first buffer. This allows pre-decoding of the opcodes to begin in the slave processor, reducing the overhead of instruction execution.
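The buffering and invalidation scheme above can be illustrated with a small Python simulation. The class name, opcode strings, and operand values are hypothetical, and for simplicity this sketch invalidates skipped opcodes lazily at operand delivery rather than on arrival of a new opcode; it is a minimal model of the idea, not the patented implementation.

```python
from collections import deque

class SlaveProcessor:
    """Toy model of the slave side: a first buffer holding opcodes for
    pre-decoding and a second buffer pairing an opcode with its operands."""

    def __init__(self):
        self.first_buffer = deque()   # opcodes awaiting operands, in delivery order
        self.second_buffer = None     # opcode currently paired with its operands
        self.executed = []            # record of executed (opcode, operands) pairs

    def receive_opcode(self, opcode):
        # First micro-engine: opcode arrives early so pre-decoding can begin.
        self.first_buffer.append(opcode)

    def operands_ready(self, opcode, operands):
        # Third micro-engine's signal: the master is ready to deliver operands.
        # Any earlier opcodes whose operands were skipped are invalidated.
        while self.first_buffer and self.first_buffer[0] != opcode:
            self.first_buffer.popleft()   # invalidated: operands came out of order
        if self.first_buffer:
            self.second_buffer = self.first_buffer.popleft()
            self.executed.append((self.second_buffer, operands))

# Example: three opcodes are buffered, but operands arrive for the second one.
slave = SlaveProcessor()
for op in ["FADD", "FMUL", "FDIV"]:
    slave.receive_opcode(op)
slave.operands_ready("FMUL", (3.0, 4.0))
print(slave.executed)             # [('FMUL', (3.0, 4.0))]
print(list(slave.first_buffer))   # ['FDIV'] -- 'FADD' was invalidated
```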
Abstract:
A method and arrangement for siloing information in a computer system uses a smaller number of large-size latches by providing a timing silo having a set of n timing state devices sequentially connected for receiving and siloing at least one bit. The arrangement has an information silo having a set of p information state devices which are sequentially connected for receiving and siloing information. These information state devices have device enables coupled to separate locations in the timing silo, so that a bit at a particular location in the timing silo enables the information state device coupled to that location. In this arrangement, the number p of information state devices is less than the number n of timing state devices. Fewer large-size latches are therefore needed. The invention also finds use in resetting a control module in a processor after a trap, by providing a timing silo which keeps track of the number of addresses generated within the trap shadow. Upon receiving a signal that a trap has occurred, the total number of addresses generated within the trap shadow is indicated by the timing silo, and a uniform stride is subtracted from the current address until the trap-causing address is reached. By this arrangement, a large number of large-size latches is not needed to silo all of the virtual addresses in the trap shadow; instead, only one bit needs to be siloed in the timing silo, since the addresses have a uniform stride.
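The trap-shadow recovery arithmetic can be sketched in a few lines of Python. The shadow depth, stride value, and addresses below are hypothetical assumptions chosen for illustration: one bit is siloed per generated address, and on a trap the trap-causing address is recovered by stepping back by the uniform stride once per siloed bit, rather than siloing every full virtual address.

```python
TRAP_SHADOW = 5   # hypothetical depth: cycles between address generation
                  # and the trap being reported
STRIDE = 8        # hypothetical uniform stride between generated addresses

def recover_trap_address(current_address, timing_silo):
    """On a trap, count the 1-bits siloed during the trap shadow (one per
    generated address) and subtract the uniform stride that many times to
    step back to the trap-causing address."""
    generated_in_shadow = sum(timing_silo)
    return current_address - STRIDE * generated_in_shadow

# An address was generated in every cycle of the shadow:
silo = [1, 1, 1, 1, 1]
print(hex(recover_trap_address(0x1028, silo)))   # 0x1000

# Only two addresses were generated within the shadow:
silo = [1, 1, 0, 0, 0]
print(hex(recover_trap_address(0x1010, silo)))   # 0x1000
```

The key saving is visible in the silo's contents: it holds single bits, not full-width virtual addresses, so only one large-size latch chain (the current address) is needed.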
Abstract:
A protected test strip holder according to the present invention includes a test strip holder onto which a test strip, such as for example an Almen strip, may be mounted using fasteners provided on the test strip holder. A protective covering is form-fitted to the test strip holder and, optionally, to the test strip holder having a test strip mounted thereon. The present invention also comprises a method for protecting and storing a test strip holder that includes providing a test strip holder, forming or molding a protective covering form-fitting to the test strip holder, and placing the protective covering over the test strip holder. The test strip holder may have a test strip mounted thereon prior to molding the form-fitting protective covering.
Abstract:
During the operation of a computer system whose processor is supported by a virtual cache memory, the cache must be cleared and refilled to allow the replacement of old data with more current data. On a miss, the cache is filled with either P or N (N>P) blocks of data, and numerous methods for dynamically selecting N or P blocks are possible. For instance, immediately after the cache has been flushed, each miss is refilled with N blocks, moving data into the cache at high speed. Once the cache is mostly full, each miss is refilled with P blocks. This maintains the currency of the data in the cache while avoiding overwriting data already in the cache. The invention is useful in a multi-user/multi-tasking system where the program being run changes frequently, necessitating frequent flushing and refilling of the cache.
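One possible dynamic selection policy is a simple fullness threshold, sketched below in Python. The block counts and the 75% cutoff are hypothetical values chosen for illustration; the abstract does not specify how "mostly full" is determined.

```python
P_BLOCKS = 1              # small refill: preserves data already in the cache
N_BLOCKS = 4              # large refill (N > P): fills an empty cache quickly
FULLNESS_THRESHOLD = 0.75 # hypothetical cutoff for "mostly full"

def refill_size(valid_blocks, total_blocks):
    """Choose the miss refill size dynamically: N blocks while the cache is
    largely empty (e.g. just after a flush), P blocks once it is mostly full,
    to avoid overwriting data already resident."""
    fullness = valid_blocks / total_blocks
    return N_BLOCKS if fullness < FULLNESS_THRESHOLD else P_BLOCKS

print(refill_size(0, 256))     # 4 -- just flushed, refill aggressively
print(refill_size(250, 256))   # 1 -- mostly full, refill conservatively
```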