Abstract:
A method for detecting computational errors in a digital processor executing a program. The program is divided into a plurality of computation sections, and two functionally identical code segments, respectively comprising a primary segment and a secondary segment, are generated for one of the computation sections. The primary segment is executed, after which a temporal diversity timer is started. The secondary segment is then executed upon expiration of the timer. The respective results of execution of the primary segment and the secondary segment are compared after completion of execution of the secondary segment, and an error indication is provided if the respective results are not identical.
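By way of illustration only, the following C sketch models this temporal-diversity flow, assuming the computation section is a pure function and the timer can be modeled with nanosleep(); the names and the 5 ms delay are illustrative, not taken from the method itself.

```c
/* A minimal sketch of temporal-diversity error detection: run the
 * primary segment, wait out a timer, run the secondary segment, then
 * compare. Assumes POSIX nanosleep(); all names are illustrative. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

/* Two functionally identical code segments for one computation section. */
static long primary_segment(long x)   { return x * x + 3 * x + 1; }
static long secondary_segment(long x) { return x * x + 3 * x + 1; }

int main(void)
{
    long input = 42;

    long r1 = primary_segment(input);        /* execute primary segment   */

    /* Temporal diversity timer: delay so a transient fault is unlikely
     * to corrupt both executions identically. */
    struct timespec delay = { .tv_sec = 0, .tv_nsec = 5 * 1000 * 1000 };
    nanosleep(&delay, NULL);

    long r2 = secondary_segment(input);      /* execute secondary segment */

    if (r1 != r2) {                          /* compare respective results */
        fprintf(stderr, "error indication: %ld != %ld\n", r1, r2);
        return 1;
    }
    printf("results match: %ld\n", r1);
    return 0;
}
```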
Abstract:
A method for detecting computational errors in a digital processor executing a program. Initially, the program is divided into computation segments, and source code for at least one of the segments is compiled to generate two redundant code sections. Comparison code is generated for comparing results produced by execution of the two code sections. Each of the code sections is then executed in a different computational domain to generate respective results. The results of the computation are used to alter further flow of the program only if the respective results are identical.
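As a hedged sketch of the same idea, the C program below assumes "different computational domains" can be modeled as separate threads; redundant_section() and the comparison logic stand in for the compiler-generated code sections and comparison code.

```c
/* Run two redundant code sections in separate domains (here: threads)
 * and let program flow continue only if their results agree.
 * Compile with -lpthread. All names are illustrative assumptions. */
#include <pthread.h>
#include <stdio.h>

static long redundant_section(long x) { return 2 * x + 7; }

static void *run_section(void *arg)
{
    long *slot = arg;                  /* input in, result out */
    *slot = redundant_section(*slot);
    return NULL;
}

int main(void)
{
    long a = 100, b = 100;             /* same input to both sections */
    pthread_t t1, t2;

    /* Execute the two redundant code sections in different domains. */
    pthread_create(&t1, NULL, run_section, &a);
    pthread_create(&t2, NULL, run_section, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Comparison code: alter further program flow only on agreement. */
    if (a == b) {
        printf("verified result %ld; continuing program flow\n", a);
        return 0;
    }
    fprintf(stderr, "divergent results %ld vs %ld; halting\n", a, b);
    return 1;
}
```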
Abstract:
Disclosed are systems and methods for increasing the difficulty of data sniffing. In one embodiment, a system and a method pertain to: presenting information to a user via an output device, the information corresponding to characters available for identification as part of sensitive information to be entered by the user; receiving from the user, via an input device, identification of information other than the explicit sensitive information, the received information being indicative of the sensitive information, such that the sensitive information cannot be captured directly from the user identification through data sniffing; and interpreting the identified information to determine the sensitive information.
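One way to picture such an indirect-entry scheme is a freshly shuffled keypad where the user keys in positions rather than the digits themselves; the shuffling, the 4-digit PIN, and all I/O details in this C sketch are assumptions for illustration, not the disclosed embodiment.

```c
/* Indirect sensitive-data entry: display a shuffled digit layout and
 * accept positions, so a keystroke sniffer never sees the PIN digits
 * directly. The scheme shown is an illustrative assumption. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int keypad[10];
    for (int i = 0; i < 10; i++) keypad[i] = i;

    /* Shuffle so captured keystrokes (positions) change every run. */
    srand((unsigned)time(NULL));
    for (int i = 9; i > 0; i--) {
        int j = rand() % (i + 1);
        int tmp = keypad[i]; keypad[i] = keypad[j]; keypad[j] = tmp;
    }

    /* Output device: present the characters available for identification. */
    printf("position: 0 1 2 3 4 5 6 7 8 9\n   digit:");
    for (int i = 0; i < 10; i++) printf(" %d", keypad[i]);
    printf("\nenter the POSITIONS of your 4 PIN digits: ");

    int pin[4];
    for (int i = 0; i < 4; i++) {
        int pos;
        if (scanf("%d", &pos) != 1 || pos < 0 || pos > 9) return 1;
        pin[i] = keypad[pos];          /* interpret the indirect input */
    }

    printf("recovered PIN: %d%d%d%d\n", pin[0], pin[1], pin[2], pin[3]);
    return 0;
}
```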
Abstract:
A symmetric multi-processing system for processing exclusive read requests. The system includes a plurality of cell boards, each of which further includes at least one CPU and cache memory, with all of the cell boards being connected to at least one crossbar switch. The read-latency-reducing system includes write-through cache memory on each of the cell boards, a modified line list on each crossbar switch holding a list of cache lines that have been modified in the cache memory of each of the cell boards, and a cache coherency directory on each crossbar switch for recording the address, status, and location of each of the cache lines in the system. The modified line list is accessed to obtain a copy of a requested cache line for each exclusive read request from a cell board that does not contain the requested cache line.
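A toy software model of the lookup path may help; it is not the hardware itself, and the directory entries, field names, and two-board scenario below are invented for illustration.

```c
/* Toy model: a crossbar-resident directory with a "modified" flag
 * standing in for the modified line list, consulted on each exclusive
 * read. Field names and semantics are illustrative assumptions. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_LINES 8

struct dir_entry {
    unsigned addr;      /* cache-line address                     */
    int      owner;     /* cell board holding the line            */
    bool     modified;  /* line appears in the modified line list */
};

static struct dir_entry directory[NUM_LINES] = {
    { 0x100, 0, true  },   /* line 0x100 modified in board 0's cache */
    { 0x140, 2, false },
};

/* Service an exclusive read from `requester` for `addr`. */
static void exclusive_read(int requester, unsigned addr)
{
    for (int i = 0; i < NUM_LINES; i++) {
        struct dir_entry *e = &directory[i];
        if (e->addr != addr) continue;
        if (e->modified && e->owner != requester) {
            /* Modified elsewhere: the copy comes from the owning
             * board's write-through cache via the modified line list. */
            printf("0x%x: copy from board %d's cache to board %d\n",
                   addr, e->owner, requester);
        } else {
            printf("0x%x: read from main memory\n", addr);
        }
        e->owner = requester;     /* directory records the new location */
        e->modified = true;       /* exclusive read implies intent to write */
        return;
    }
    printf("0x%x: not tracked; read from main memory\n", addr);
}

int main(void)
{
    exclusive_read(1, 0x100);     /* hits modified line owned by board 0 */
    exclusive_read(1, 0x140);     /* clean line: memory supplies it      */
    return 0;
}
```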
Abstract:
Systems, methodologies, media, and other embodiments associated with RAID-enabling caches in multi-cell systems are described. One exemplary system embodiment includes one or more cells configured with a RAID-enabling cache that maps a visible address space to implement RAID memory. The example system may also include one or more additional cells configured to facilitate implementing RAID memory by providing, for example, a mirroring location, a parity cell, and so on.
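A schematic sketch of the address mapping such a RAID-enabling cache might perform follows, here RAID-4-style striping across data cells with a parity cell; the layout, sizes, and helper names are assumptions.

```c
/* Fan a single visible write out to a data cell plus a parity cell,
 * so a failed cell can be reconstructed. RAID-4-style layout chosen
 * for illustration; not the patented mapping. */
#include <stdio.h>
#include <stdint.h>

#define DATA_CELLS 3
#define CELL_WORDS 16

static uint32_t cell_mem[DATA_CELLS][CELL_WORDS];   /* data cells  */
static uint32_t parity_cell[CELL_WORDS];            /* parity cell */

/* Map a visible address to (cell, offset) and keep parity coherent:
 * parity = XOR of the word across all data cells. */
static void raid_write(unsigned visible_addr, uint32_t value)
{
    unsigned cell   = visible_addr % DATA_CELLS;
    unsigned offset = visible_addr / DATA_CELLS;

    /* Update parity incrementally: remove old data, add new. */
    parity_cell[offset] ^= cell_mem[cell][offset] ^ value;
    cell_mem[cell][offset] = value;
}

/* Reconstruct a word if its cell has failed, using the surviving
 * cells and the parity cell (the RAID property the cache enables). */
static uint32_t raid_reconstruct(unsigned visible_addr, unsigned dead_cell)
{
    unsigned offset = visible_addr / DATA_CELLS;
    uint32_t word = parity_cell[offset];
    for (unsigned c = 0; c < DATA_CELLS; c++)
        if (c != dead_cell)
            word ^= cell_mem[c][offset];
    return word;
}

int main(void)
{
    raid_write(4, 0xDEADBEEF);            /* lands in cell 1, offset 1 */
    printf("reconstructed: 0x%X\n", raid_reconstruct(4, 1));
    return 0;
}
```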
Abstract:
A multi-processor computer architecture reduces processing time and bus bandwidth during snoop processing. The architecture includes processors and local caches. Each local cache corresponds to one of the processors. The architecture includes one or more virtual busses coupled to the local caches and the processors, and one or more intermediary caches, where at least one intermediary cache is coupled to each virtual bus. Each intermediary cache includes a memory array and means for ensuring the intermediary cache is inclusive of associated local caches. The architecture further includes a main memory having a plurality of memory lines accessible by the processors.
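The key invariant here, inclusivity, can be sketched as back-invalidation: before the intermediary cache drops a line, it forces the line out of every associated local cache, so the intermediary cache always remains a superset. The structures and sizes in this C sketch are invented for the example.

```c
/* Inclusion invariant via back-invalidation: an eviction from the
 * intermediary cache invalidates the line in all local caches below.
 * Tag arrays and sizes are illustrative assumptions. */
#include <stdio.h>

#define LOCAL_CACHES 2
#define LINES 4

static unsigned local_cache[LOCAL_CACHES][LINES];  /* tags; 0 = empty */
static unsigned intermediary[LINES];

/* Evicting from the intermediary cache forces the line out of every
 * associated local cache, preserving inclusivity. */
static void intermediary_evict(int way)
{
    unsigned tag = intermediary[way];
    for (int c = 0; c < LOCAL_CACHES; c++)
        for (int l = 0; l < LINES; l++)
            if (local_cache[c][l] == tag) {
                local_cache[c][l] = 0;             /* back-invalidate */
                printf("invalidated tag 0x%x in local cache %d\n", tag, c);
            }
    intermediary[way] = 0;
}

int main(void)
{
    intermediary[0] = 0xABC;       /* line cached at both levels */
    local_cache[1][2] = 0xABC;
    intermediary_evict(0);
    /* A snoop that misses the intermediary cache now provably misses
     * every local cache, which is what saves snoop bus bandwidth. */
    return 0;
}
```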
Abstract:
An I/O control system for controlling I/O devices in a multi-partition computer system. The I/O control system includes an IOP partition containing an I/O processor cell with at least one CPU executing a control program, and a plurality of standard partitions, each including a cell comprising at least one CPU executing a control program, coupled, via shared memory, to the I/O processor cell. One or more of the standard partitions becomes an enrolled partition, in communication with the I/O processor cell, in response to requesting a connection to the IOP cell. After a partition is enrolled with the I/O processor cell, I/O requests directed to the I/O devices from the enrolled partition are distributed over shared I/O resources controlled by the I/O processor cell.
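A single-process sketch of the enrollment handshake follows, with the shared-memory channel modeled as an ordinary struct; the states and field names are illustrative assumptions, not the disclosed interface.

```c
/* Enrollment handshake over a shared-memory mailbox: a standard
 * partition requests a connection, the IOP cell enrolls it, and only
 * enrolled partitions may submit I/O requests. All names assumed. */
#include <stdio.h>

enum part_state { IDLE, CONNECT_REQUESTED, ENROLLED };

struct mailbox {              /* lives in memory shared with the IOP cell */
    enum part_state state;
    int pending_io_requests;
};

/* Standard partition side: request a connection to the IOP cell. */
static void request_connection(struct mailbox *mb)
{
    mb->state = CONNECT_REQUESTED;
}

/* IOP cell side: enroll any partition that has requested a connection. */
static void iop_poll(struct mailbox *mb)
{
    if (mb->state == CONNECT_REQUESTED) {
        mb->state = ENROLLED;
        printf("partition enrolled with IOP cell\n");
    }
}

/* Enrolled partitions may hand I/O requests to the shared resources. */
static int submit_io(struct mailbox *mb)
{
    if (mb->state != ENROLLED) return -1;     /* must enroll first */
    mb->pending_io_requests++;
    printf("I/O request queued for shared I/O resources\n");
    return 0;
}

int main(void)
{
    struct mailbox mb = { IDLE, 0 };
    submit_io(&mb);               /* rejected: not enrolled yet */
    request_connection(&mb);
    iop_poll(&mb);
    submit_io(&mb);               /* accepted after enrollment  */
    return 0;
}
```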
Abstract:
Described are systems, methodologies, media, and other embodiments associated with (de)compressing data at a time, and in a location, that facilitate increasing memory transfer bandwidth by selectively controlling a burst-mode protocol used to transfer data to and/or from a memory. One exemplary system embodiment includes a memory controller configured to (de)compress data, to manipulate size data associated with compressed data, and to selectively manipulate a burst-mode protocol employed in transferring compressed data to and/or from random access memory.
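The sketch below shows how a controller might shorten a burst after compression, using run-length encoding and an 8-byte burst beat as stand-ins for whatever the actual hardware employs; none of these parameters come from the embodiment itself.

```c
/* Compress a memory line, record its compressed size, and derive how
 * many burst beats the transfer now needs. RLE and the beat size are
 * illustrative assumptions. */
#include <stdio.h>
#include <stdint.h>

#define LINE_BYTES  64
#define BURST_BYTES  8             /* bytes moved per burst beat */

/* Trivial run-length encoder; returns compressed size in bytes. */
static unsigned rle_compress(const uint8_t *in, unsigned n, uint8_t *out)
{
    unsigned o = 0;
    for (unsigned i = 0; i < n; ) {
        unsigned run = 1;
        while (i + run < n && run < 255 && in[i + run] == in[i]) run++;
        out[o++] = (uint8_t)run;   /* (count, value) pairs */
        out[o++] = in[i];
        i += run;
    }
    return o;
}

int main(void)
{
    uint8_t line[LINE_BYTES] = {0};          /* highly compressible line */
    uint8_t packed[2 * LINE_BYTES];

    unsigned size = rle_compress(line, LINE_BYTES, packed);

    /* Size data steers the burst-mode protocol: transfer only as many
     * beats as the compressed data needs, not the full line. */
    unsigned beats = (size + BURST_BYTES - 1) / BURST_BYTES;
    printf("compressed %u -> %u bytes: issue %u beat(s) instead of %u\n",
           LINE_BYTES, size, beats, LINE_BYTES / BURST_BYTES);
    return 0;
}
```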
Abstract:
A method of allocating memory operates to avoid overlapping hot spots in cache that can ordinarily cause cache thrashing. This method includes steps of determining a spacer size, reserving a spacer block of memory from a memory pool, and allocating memory at a location following the spacer block. In an alternative embodiment, the spacer size is determined randomly within a range of allowable spacer sizes. In other alternative embodiments, spacers are allocated based upon the size of a previously allocated memory block.
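A minimal sketch of the spacer technique, assuming the memory pool is the C heap (where contiguity is not guaranteed, so this only approximates the displacement effect); MAX_SPACER and the wrapper function are illustrative.

```c
/* Spacer-based allocation: reserve a randomly sized spacer block
 * before each real allocation so successive hot structures do not
 * land on the same cache-set-aligned offsets. */
#include <stdio.h>
#include <stdlib.h>

#define MAX_SPACER 512            /* range of allowable spacer sizes */

static void *alloc_with_spacer(size_t size)
{
    /* Determine a spacer size randomly within the allowable range. */
    size_t spacer_size = (size_t)(rand() % MAX_SPACER) + 1;

    /* Reserve the spacer from the pool; it is intentionally never
     * freed or reused, so it keeps the following block displaced. */
    void *spacer = malloc(spacer_size);
    (void)spacer;

    /* The caller's memory is allocated at a location after the spacer. */
    return malloc(size);
}

int main(void)
{
    srand(12345);
    void *a = alloc_with_spacer(4096);
    void *b = alloc_with_spacer(4096);
    printf("blocks at %p and %p\n", a, b);  /* cache offsets now differ */
    free(a);
    free(b);
    return 0;
}
```

In the size-based alternative the abstract mentions, spacer_size would be derived from the previously allocated block's size rather than drawn at random.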
Abstract:
Intermediary inclusive caches (IICs) translate between some number of processors using virtual addressing and a physically addressed bus. The IICs support at least one virtual bus (upper bus) connecting the IICs to central processor units (CPUs), and at least one physical bus (lower bus) connecting the IICs to a memory controller, input/output (I/O) devices, and perhaps other IICs. Whenever a CPU makes a request of memory (on the upper bus), the request is looked up in an IIC. Should the data reside in the IIC, the data is provided to the CPU from the IIC through the upper bus (except in the case of coherency filters, which do not cache data). If the request misses the IIC, the request is repeated on the lower bus. When the requested data comes back from the lower bus, the data is cached in the IIC and passed up to the requesting CPU through the upper bus. Whenever a snoop request comes in from the lower bus, the snooped (requested) data is looked up in the IIC. Should the snoop miss the IIC, that is, the requested data is not in the IIC, the request need not be repeated on the upper bus. In the case of a snoop hit on the IIC, the snoop may be repeated on the upper bus if the coherency protocol so requires.
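A control-flow sketch of these lookup rules follows, with a one-line "cache" standing in for the IIC; the function names and the scenario are placeholders, not the actual design.

```c
/* Upper-bus requests hit the IIC or are repeated on the lower bus and
 * filled; lower-bus snoops that miss the IIC are filtered instead of
 * being forwarded upward. Single-entry cache for illustration. */
#include <stdio.h>
#include <stdbool.h>

static unsigned iic_tag;           /* single-line IIC for illustration */
static bool     iic_valid;

static bool iic_lookup(unsigned addr) { return iic_valid && iic_tag == addr; }

/* CPU request arriving on the upper (virtual) bus. */
static void upper_bus_request(unsigned addr)
{
    if (iic_lookup(addr)) {
        printf("0x%x: hit, data supplied to CPU from IIC\n", addr);
        return;
    }
    /* Miss: repeat the request on the lower (physical) bus, then
     * cache the returned line and pass it up. */
    printf("0x%x: miss, repeated on lower bus; line filled\n", addr);
    iic_tag = addr;
    iic_valid = true;
}

/* Snoop arriving from the lower bus. */
static void lower_bus_snoop(unsigned addr)
{
    if (!iic_lookup(addr)) {
        /* Inclusivity guarantees no CPU cache holds the line either,
         * so the snoop is filtered, saving upper-bus bandwidth. */
        printf("0x%x: snoop miss, not repeated on upper bus\n", addr);
        return;
    }
    printf("0x%x: snoop hit, repeated on upper bus per protocol\n", addr);
}

int main(void)
{
    upper_bus_request(0x2000);     /* miss, then fill    */
    upper_bus_request(0x2000);     /* hit from IIC       */
    lower_bus_snoop(0x3000);       /* filtered           */
    lower_bus_snoop(0x2000);       /* forwarded upward   */
    return 0;
}
```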