Abstract:
Method and apparatus for producing the residue of the product of a multiplier and a multiplicand, where the multiplier, multiplicand and product are residues with respect to a check base m, m = (2^b - 1), and b is the number of bits in a residue. An addressable memory device has at least 2^(2(b-1)) memory locations, with each memory location having an address of 2(b-1) bits. The address of each memory location can be considered as having two components, each of (b-1) bits. The residue stored at each addressable location of the device is the residue of the product of the two components of its address. In response to each address applied to the memory device, the residue of the product of the two components of that address, stored at the addressed memory location, is read out of the device. The lower order (b-1) bits of the multiplier are applied to the device as one component of the address if the most significant bit of the multiplier is a logical zero; if the most significant bit of the multiplier is a logical one, the complement of the lower order (b-1) bits forms that component instead. Similarly, the value of the most significant bit of the multiplicand determines whether the lower order (b-1) bits of the multiplicand or their complement forms the other component of the address applied to the memory device. The residue read out of the addressed location is complemented to produce the residue of the product of the multiplier and the multiplicand if and only if exactly one of the most significant bits of the multiplier and the multiplicand is a logical one; otherwise the residue read out of the memory device is itself the residue of the product of the multiplier and the multiplicand.
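As a rough illustration of the scheme just described, the following Python sketch builds the 2^(2(b-1))-entry table and applies the complement rule, assuming b = 4 (so m = 15); the names and the table representation are illustrative, not taken from the patent.

```python
# Sketch of residue multiplication mod m = 2**b - 1 via a (b-1)-bit by
# (b-1)-bit lookup table, per the complement-based addressing described above.
# Assumption: b and the helper names below are illustrative, not the patent's.

B = 4                            # bits per residue
M = (1 << B) - 1                 # check base m = 2**b - 1
HALF_MASK = (1 << (B - 1)) - 1   # mask for the lower-order (b-1) bits

# "Memory device": 2**(2*(b-1)) locations, each holding the residue of the
# product of the two (b-1)-bit address components.
TABLE = [[(x * y) % M for y in range(1 << (B - 1))] for x in range(1 << (B - 1))]

def component(value):
    """Return the (b-1)-bit address component and whether the MSB was set."""
    msb = (value >> (B - 1)) & 1
    low = value & HALF_MASK
    # If the MSB is a logical one, use the complement of the lower (b-1) bits.
    return (low ^ HALF_MASK, True) if msb else (low, False)

def residue_multiply(a, c):
    """Residue of a*c mod m using the table and the complement rule."""
    xa, flipped_a = component(a)
    xc, flipped_c = component(c)
    r = TABLE[xa][xc]
    # Complement the read-out residue iff exactly one MSB was a logical one.
    if flipped_a != flipped_c:
        r ^= M                   # b-bit one's complement == negation mod m
    return r % M

# Quick check against direct computation.
for a in range(M):
    for c in range(M):
        assert residue_multiply(a, c) == (a * c) % M
```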
Abstract:
A collector for the results of a pipelined central processing unit of a digital data processing system. The processor has a plurality of execution units, with each execution unit executing a different set of instructions of the instruction repertoire of the processor. The execution units execute instructions issued to them in order of issuance by the pipeline and in parallel. As instructions are issued to the execution units, the operation code identifying each instruction is also issued in program order to an instruction execution queue of the collector. The results of the execution of each instruction by an execution unit are stored in a result stack associated with that execution unit. Collector control causes the results of instructions that write program-visible registers to be stored in a master safe store register in program order, which is determined by the order of the instructions in the instruction execution queue on a first-in, first-out basis. The collector also issues write commands to write results of the execution of instructions into memory in program order.
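A behavioral sketch of the retirement ordering described above, assuming simplified per-unit result stacks and a single instruction execution queue; the class and method names are illustrative, not the patent's.

```python
# Behavioral sketch of the collector's in-order retirement, assuming
# simplified execution units with per-unit result stacks (FIFOs) and a
# single instruction execution queue; names are illustrative only.
from collections import deque

class Collector:
    def __init__(self, unit_names):
        self.instruction_queue = deque()            # (opcode, unit) in program order
        self.result_stacks = {u: deque() for u in unit_names}
        self.master_safe_store = {}                 # program-visible register state
        self.memory_writes = []                     # write commands issued in order

    def issue(self, opcode, unit):
        """Pipeline issues an instruction: record its opcode in program order."""
        self.instruction_queue.append((opcode, unit))

    def complete(self, unit, result):
        """An execution unit finishes an instruction, pushing its result."""
        self.result_stacks[unit].append(result)     # result = (kind, dest, value)

    def retire_ready(self):
        """Retire results strictly in program order (first-in, first-out)."""
        while self.instruction_queue:
            opcode, unit = self.instruction_queue[0]
            if not self.result_stacks[unit]:
                break                                # oldest instruction not done yet
            self.instruction_queue.popleft()
            kind, dest, value = self.result_stacks[unit].popleft()
            if kind == "reg":                        # update the master safe store
                self.master_safe_store[dest] = value
            elif kind == "mem":                      # issue write command to memory
                self.memory_writes.append((dest, value))

# Example: results may complete out of order, but retirement stays in order.
c = Collector(["alu", "fpu"])
c.issue("ADD", "alu"); c.issue("FMUL", "fpu"); c.issue("ST", "alu")
c.complete("fpu", ("reg", "F0", 2.5))                # FMUL finishes first
c.retire_ready()                                     # nothing retires: ADD still pending
c.complete("alu", ("reg", "R1", 7))
c.complete("alu", ("mem", 0x100, 7))
c.retire_ready()                                     # ADD, FMUL, ST retire in program order
```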
Abstract:
A processor (10) has a data cache unit (16) wherein the data cache unit includes a memory management unit (MMU) (32). The MMU contains memory locations within transparent translation registers (TTRs), an address translation cache (40), or a table walk controller (42) which store or generate cache mode (CM) bits which indicate whether a memory access (i.e., a write operation) is precise or imprecise. Precise operations require that a first write operation or bus write instruction be executed with no other operations/instructions executing until the first operation/instruction completes with or without a fault. Imprecise operations are operations/instructions which may be queued, partially performed, or executed simultaneously with other instructions regardless of faults or bus write operations. By allowing the logical address to determine whether the bus write operation is precise or imprecise, a large amount of system flexibility is achieved.
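The following sketch suggests how a CM bit returned by address translation might steer a write down a precise (synchronous) or imprecise (buffered) path; the data structures, page size, and names are illustrative assumptions, not the patent's hardware.

```python
# Sketch of how a cache-mode (CM) bit from address translation might select
# precise versus imprecise write handling; the data structures and names are
# illustrative assumptions, not the patent's hardware.
from collections import deque

PRECISE, IMPRECISE = 0, 1

# Stand-ins for TTR / address translation cache entries: each maps a logical
# page to (physical page, CM bit).
translation_cache = {
    0x0000: (0x8000, PRECISE),    # e.g. device registers: writes must be precise
    0x1000: (0x9000, IMPRECISE),  # e.g. ordinary memory: writes may be buffered
}

write_queue = deque()             # pending imprecise (buffered) writes

def bus_write(phys_addr, value):
    """Placeholder for the actual bus write; may raise on a bus fault."""
    print(f"bus write {value!r} -> {phys_addr:#x}")

def drain_write_queue():
    while write_queue:
        bus_write(*write_queue.popleft())

def cpu_write(logical_addr, value):
    page = logical_addr & ~0xFFF
    phys_page, cm = translation_cache[page]
    phys = phys_page | (logical_addr & 0xFFF)
    if cm == PRECISE:
        # Precise: perform this write by itself and wait for it to complete
        # (or fault) before any other operation is allowed to proceed.
        drain_write_queue()
        bus_write(phys, value)
    else:
        # Imprecise: queue the write; it may complete later, overlapped with
        # other instructions.
        write_queue.append((phys, value))

cpu_write(0x1004, 0xAA)   # buffered
cpu_write(0x0008, 0x55)   # precise: drains the buffer, then writes synchronously
drain_write_queue()
```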
Abstract:
A processor (10) has two modes of operation. One mode of operation is a normal mode of operation wherein the processor (10) accesses user address space or supervisor address space to perform a predetermined function. The other mode of operation is referred to as a debug, test, or emulator mode of operation and is entered via an exception/interrupt. The debug mode is an alternate operational mode of the processor (10) which has a unique debug address space which executes instructions from the normal instruction set of the processor (10). Furthermore, the debug mode of operation does not adversely affect the state of the normal mode of operation while executing debug, test, and emulation commands at normal processor speed. The debug mode is totally non-destructive and non-obtrusive to the "suspended" normal mode of operation. While in debug mode, the existing processor pipelines, bus interface, etc. are utilized.
Abstract:
In an information handling system in which a cyclic code is utilized to both detect and correct errors and a cyclical redundancy code is used for supplementary detection of errors, the cyclical redundancy code (CRC) polynomial is chosen to be a factor of the generator polynomial, g(x), of the error detection and correction (EDAC) code. In this way, the same check bits in the code word used for error detection and correction may be further utilized for a CRC check to supplement the error detection capabilities of the system. The risk of miscorrection of data is reduced to de minimis levels by the partitioning of data and count fields in the course of the development of the error detection and correction codes. Practice of the teachings herein disclosed provides a more efficient error detection and correction system with greatly reduced risk of miscorrection. An embodiment of the invention is disclosed following the methodology taught herein.
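A small GF(2) sketch of the underlying algebra, using illustrative polynomials rather than those of any particular embodiment: because the CRC polynomial divides g(x), any codeword divisible by g(x) is also divisible by the CRC polynomial, so the same check bits satisfy both checks.

```python
# GF(2) polynomial sketch of why choosing the CRC polynomial as a factor of
# the EDAC generator polynomial g(x) lets the same check bits serve both
# checks. Polynomials are Python ints (bit i = coefficient of x^i); the
# particular polynomials below are illustrative, not those of the patent.

def gf2_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, m):
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

crc_poly   = 0b10011            # illustrative CRC polynomial c(x) = x^4 + x + 1
other_poly = 0b1011             # illustrative co-factor p(x)  = x^3 + x + 1
g_poly     = gf2_mul(crc_poly, other_poly)   # g(x) = c(x) * p(x)

r = g_poly.bit_length() - 1     # number of check bits

def encode(data_bits):
    """Systematic EDAC encoding: append (data * x^r) mod g(x) as check bits."""
    shifted = data_bits << r
    return shifted ^ gf2_mod(shifted, g_poly)

codeword = encode(0b1101001)
# The codeword is divisible by g(x), hence also by its factor c(x):
assert gf2_mod(codeword, g_poly) == 0
assert gf2_mod(codeword, crc_poly) == 0   # CRC check passes using the same bits
```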
Abstract:
Methods and systems are disclosed for on-the-fly decryption within an integrated circuit that adds zero additional cycles of latency to overall decryption system performance. A decryption system within a processing system integrated circuit generates an encrypted counter value using an address while encrypted code associated with an encrypted software image is being obtained from an external memory using the address. The decryption system then uses the encrypted counter value to decrypt the encrypted code and to output decrypted code that can be further processed. A secret key and an encryption engine can be used to generate the encrypted counter value, and an exclusive-OR logic block can process the encrypted counter value and the encrypted code to generate the decrypted code. By pre-generating the encrypted counter value, additional cycle latency is avoided. Other similar data-independent encryption/decryption techniques can also be used, such as output feedback encryption/decryption modes.
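Conceptually this is a counter-style keystream: while the ciphertext is being fetched from external memory, a block cipher encrypts a counter derived from the address, leaving only an XOR once the data arrives. The sketch below illustrates the idea in Python with AES; the key, the address-to-counter mapping, and the block size are illustrative assumptions.

```python
# Conceptual sketch of the address-based counter decryption described above,
# using AES in a counter-style construction. The key, address-to-counter
# mapping, and block size here are illustrative assumptions, not the patent's.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16
secret_key = os.urandom(16)
aes_ecb = Cipher(algorithms.AES(secret_key), modes.ECB()).encryptor()

def keystream_for(address):
    """Encrypt a counter value derived from the fetch address."""
    counter_block = address.to_bytes(BLOCK, "big")      # illustrative mapping
    return aes_ecb.update(counter_block)

def decrypt_block(address, encrypted_code):
    # In hardware, keystream_for(address) runs while the external memory
    # fetch is in flight; only the XOR remains once the ciphertext arrives.
    ks = keystream_for(address)
    return bytes(c ^ k for c, k in zip(encrypted_code, ks))

# Round trip: "encrypting" plaintext code with the same keystream, then
# decrypting it on the fly at fetch time.
plain = b"mov r0, #42 ...."                              # 16 bytes of "code"
addr = 0x0800_0000
cipher = decrypt_block(addr, plain)                      # XOR is its own inverse
assert decrypt_block(addr, cipher) == plain
```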
Abstract:
A system and method are disclosed for determining whether to allow or deny an access request based upon one or more descriptors at a local memory protection unit and based upon one or more descriptors at a system memory protection unit. When multiple descriptors of a memory protection unit apply to a particular request, the least-restrictive descriptor will be selected. System access information is stored at a cache of a local core in response to a cache line being filled. The cached system access information is merged with local access information, wherein the most-restrictive access is selected.
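A sketch of the combining rules described above, assuming illustrative descriptor and permission encodings: within one protection unit the least-restrictive matching descriptor wins, while the cached system access information is merged with the local result most-restrictively.

```python
# Sketch of the descriptor-combining rules described above: within one
# protection unit the least-restrictive matching descriptor wins (union of
# permissions); across the local and system units the most-restrictive
# result wins (intersection). Field names are illustrative assumptions.

READ, WRITE, EXECUTE = 1, 2, 4   # permission bits

def unit_access(descriptors, address):
    """Least-restrictive combination of all descriptors covering the address."""
    perms = 0
    for start, end, p in descriptors:
        if start <= address < end:
            perms |= p           # union: any descriptor granting a right grants it
    return perms

def allowed(local_descriptors, cached_system_perms, address, requested):
    local = unit_access(local_descriptors, address)
    merged = local & cached_system_perms   # intersection: most restrictive wins
    return (merged & requested) == requested

local_mpu = [
    (0x1000, 0x2000, READ),
    (0x1800, 0x2800, READ | WRITE),        # overlaps the first descriptor
]
# System access information cached with the line fill for this region:
system_perms = READ | WRITE

print(allowed(local_mpu, system_perms, 0x1900, WRITE))   # True: local union grants WRITE
print(allowed(local_mpu, system_perms, 0x1100, WRITE))   # False: only READ locally
```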
Abstract:
A system (10) implements user programmable addressing modes in response to control information contained within an input address. Encoded control information stored in a plurality of user programmed address permutation control registers (70-72) is used to determine what bit values are used to replace predetermined bits of the input address to selectively create a corresponding permutated address. Since no modification to a processor's pipeline is required, various permutation addressing modes may be user-defined and implemented using either a general-purpose processor or a specialized processor.
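The following sketch illustrates one way such a permutation control register might be interpreted, assuming an illustrative field layout and an illustrative choice of which address bits are replaced; neither is specified by the abstract.

```python
# Sketch of the address permutation described above: a user-programmed
# control register encodes, for each of a few predetermined address bits,
# which bit of the input address should replace it. The field layout and
# the choice of affected bits are illustrative assumptions.

AFFECTED_BITS = (4, 5, 6, 7)      # predetermined bits of the address to replace
FIELD_WIDTH = 5                   # bits per source-bit selector in the register

def permute_address(input_addr, control_register):
    out = input_addr
    for i, dst in enumerate(AFFECTED_BITS):
        src = (control_register >> (i * FIELD_WIDTH)) & ((1 << FIELD_WIDTH) - 1)
        bit = (input_addr >> src) & 1
        out = (out & ~(1 << dst)) | (bit << dst)   # replace destination bit
    return out

# Control register programmed so bits 4..7 of the permuted address come from
# bits 12..15 of the input address (e.g. to interleave across banks).
ctrl = 12 | (13 << 5) | (14 << 10) | (15 << 15)
print(hex(permute_address(0x0000_F010, ctrl)))    # bits 12..15 copied into 4..7
```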
Abstract:
A multiprocessing system includes a cache coherency technique that ensures that every access to a line of data obtains the most up-to-date copy of that line, without storing cache coherency status bits in a global memory or making any reference thereto. An operand cache includes a first directory which directly, on a one-to-one basis, maps a range of physical address bits into a first section of the operand cache storage. An associative directory multiply maps physical addresses outside of the range into a second section of the operand cache storage. All stack frames of user programs to be executed on a time-shared basis are stored in the first section, so cache misses due to stack operations are avoided. An instruction cache having various categories of instructions stores a group of status bits identifying the instruction category with each instruction. When a context switch occurs, only instructions of the category least likely to be used in the near future are cleared, decreasing delays due to clearing of the instruction cache as a result of context switches. A page-mapped I/O cache structure interfaces with a large number of I/O channels, each of which regards a single I/O cache as an exclusive buffer. System operating delays due to maintaining cache coherency, operand cache misses, instruction cache misses, and I/O cache misses are substantially reduced.
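As a rough illustration of the category-based clearing, the sketch below tags each cached instruction with a category and invalidates only one category on a context switch; the category names and the policy shown are illustrative assumptions, not the patent's.

```python
# Sketch of the selective instruction-cache clearing described above: each
# cached instruction carries category status bits, and a context switch
# invalidates only the category judged least likely to be reused soon.
# The category names and replacement policy are illustrative assumptions.

OS_SHARED, USER_PRIVATE = "os_shared", "user_private"

class InstructionCache:
    def __init__(self):
        self.lines = {}                       # address -> (instruction, category)

    def fill(self, address, instruction, category):
        self.lines[address] = (instruction, category)

    def context_switch(self):
        # Keep shared operating-system code; clear only user-private entries,
        # the category least likely to be used by the next process.
        self.lines = {a: (i, c) for a, (i, c) in self.lines.items()
                      if c != USER_PRIVATE}

icache = InstructionCache()
icache.fill(0x100, "lda 0,1", OS_SHARED)
icache.fill(0x200, "tsx1 user_entry", USER_PRIVATE)
icache.context_switch()
print(sorted(icache.lines))                   # only the shared entry survives
```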
Abstract:
Methods and systems are disclosed for key management for on-the-fly hardware decryption within an integrated circuit. Encrypted information is received from an external memory and stored in an input buffer within the integrated circuit. The encrypted information includes one or more encrypted key blobs. The encrypted key blobs include one or more secret keys for encrypted code associated with one or more encrypted software images stored within the external memory. A key-encryption key (KEK) code for the encrypted key blobs is received from an internal data storage medium within the integrated circuit, and the KEK code is used to generate one or more key-encryption keys (KEKs). A decryption system then decrypts the encrypted key blobs using the KEKs to obtain the secret keys, and the decryption system decrypts the encrypted code using the secret keys. The resulting decrypted software code is then available for further processing.
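A sketch of the key-management flow in Python, using AES purely as an illustrative cipher: a KEK derived from the on-chip KEK code unwraps the encrypted key blob to recover the image's secret key, which then decrypts the encrypted code. The blob format, KEK derivation, and cipher modes shown are assumptions, not the patent's.

```python
# Sketch of the key-management flow described above: a KEK derived from an
# on-chip KEK code unwraps an encrypted key blob to recover the image's
# secret key, which then decrypts the encrypted code. AES and the simple
# blob/derivation choices here are illustrative assumptions.
import hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_kek(kek_code: bytes) -> bytes:
    # Illustrative derivation: hash the on-chip KEK code down to a 128-bit KEK.
    return hashlib.sha256(kek_code).digest()[:16]

def unwrap_key_blob(kek: bytes, encrypted_blob: bytes) -> bytes:
    # Illustrative unwrap: AES-ECB decrypt of a single 16-byte key blob.
    return Cipher(algorithms.AES(kek), modes.ECB()).decryptor().update(encrypted_blob)

def decrypt_code(secret_key: bytes, nonce: bytes, encrypted_code: bytes) -> bytes:
    # Illustrative code decryption: AES-CTR keystream over the image.
    return Cipher(algorithms.AES(secret_key), modes.CTR(nonce)).decryptor().update(encrypted_code)

# --- Fixtures standing in for what the external memory would hold ---
kek_code = b"fuse-or-otp-kek-code"
kek = derive_kek(kek_code)
secret_key = os.urandom(16)
nonce = os.urandom(16)
key_blob = Cipher(algorithms.AES(kek), modes.ECB()).encryptor().update(secret_key)
image = Cipher(algorithms.AES(secret_key), modes.CTR(nonce)).encryptor().update(b"firmware image bytes")

# --- On-chip flow: unwrap the blob with the KEK, then decrypt the code ---
recovered_key = unwrap_key_blob(kek, key_blob)
assert decrypt_code(recovered_key, nonce, image) == b"firmware image bytes"
```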