Abstract:
A method for managing a cache operates in a data processing system with a system memory and a plurality of processing units (PUs). A first PU determines that one of a plurality of cache lines in a first cache of the first PU must be replaced with a first data block, and determines whether the first data block is a victim cache line from another one of the plurality of PUs. In the event the first data block is not a victim cache line from another one of the plurality of PUs, the first cache does not contain a cache line in coherency state invalid, and the first cache contains a cache line in coherency state moved, the first PU selects a cache line in coherency state moved, stores the first data block in the selected cache line and updates the coherency state of the first data block.
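A minimal Python sketch of the victim-selection order described above: prefer a line in coherency state invalid, otherwise prefer a line in coherency state moved. The CacheLine class, the final fallback policy, and the new coherency state assigned to the filled line are illustrative assumptions, not the patented implementation.

```python
class CacheLine:
    def __init__(self, address=None, state="invalid", data=None):
        self.address = address
        self.state = state
        self.data = data

def place_block(cache_lines, block_addr, block_data, is_victim_from_other_pu):
    """Choose a line for an incoming block that is NOT a victim from another PU."""
    if is_victim_from_other_pu:
        raise ValueError("victim cast-ins are handled by a different policy")
    # Prefer an invalid line if one exists.
    target = next((l for l in cache_lines if l.state == "invalid"), None)
    # Otherwise, per the abstract, prefer a line in the 'moved' state.
    if target is None:
        target = next((l for l in cache_lines if l.state == "moved"), None)
    # Fallback (not specified in the abstract): evict any line, e.g. the first.
    if target is None:
        target = cache_lines[0]
    target.address, target.data = block_addr, block_data
    target.state = "exclusive"   # assumed new state for a freshly filled line
    return target

lines = [CacheLine(0x100, "moved"), CacheLine(0x140, "shared")]
print(hex(place_block(lines, 0x200, b"payload", False).address))  # 0x200 reuses the 'moved' line
```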
Abstract:
A direct memory access (DMA) device includes a barrier and interrupt mechanism that allows interrupt and mailbox operations to occur in such a way that ensures correct operation, but still allows for high performance out-of-order data moves to occur whenever possible. Certain descriptors are defined to be “barrier descriptors.” When the DMA device encounters a barrier descriptor, it ensures that all of the previous descriptors complete before the barrier descriptor completes. The DMA device further ensures that any interrupt generated by a barrier descriptor will not assert until the data move associated with the barrier descriptor completes. The DMA controller only permits interrupts to be generated by barrier descriptors. The barrier descriptor concept also allows software to embed mailbox completion messages into the scatter/gather linked list of descriptors.
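A small software model of the barrier-descriptor rule described above: ordinary descriptors may complete out of order, but a descriptor marked as a barrier is not completed, and its interrupt is not raised, until every earlier descriptor has completed. The Descriptor fields and function names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Descriptor:
    ident: int
    barrier: bool = False
    interrupt: bool = False   # only honored when barrier is True
    done: bool = False

def try_complete(descs, idx, raise_interrupt):
    d = descs[idx]
    if d.barrier and not all(p.done for p in descs[:idx]):
        return False          # a barrier must wait for all previous descriptors
    d.done = True
    if d.barrier and d.interrupt:
        raise_interrupt(d.ident)   # interrupts are generated only by barriers
    return True

descs = [Descriptor(0), Descriptor(1), Descriptor(2, barrier=True, interrupt=True)]
try_complete(descs, 1, print)          # out-of-order completion is allowed
print(try_complete(descs, 2, print))   # False: descriptor 0 is still pending
try_complete(descs, 0, print)
print(try_complete(descs, 2, print))   # True: interrupt fires only after the barrier completes
```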
Abstract:
The invention provides xanthine phosphoribosyl transferase polypeptides and DNA (RNA) encoding xanthine phosphoribosyl transferase polypeptides and methods for producing such polypeptides by recombinant techniques. Also provided are methods for utilizing xanthine phosphoribosyl transferase polypeptides to screen for antibacterial compounds.
Abstract:
Address error detection including a method that receives write data and a write address, the write address corresponding to a location in a memory. Error correction code (ECC) bits are generated based on the received write data. The write data is transformed at a computer based on the write address and the write data, to produce transformed write data. The transforming is configured to cause an ECC to detect an address error during a read operation to the write address in response to a mismatch between either the write address or the read address and data read from the location. The transformed write data and the ECC bits are written to the location in memory.
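A hedged sketch of the idea: fold the write address into the stored data so that a later read involving a different address no longer matches the stored ECC. The one-bit parity "ECC" and the XOR transform below are toy stand-ins for the real codes, chosen only to make the mechanism visible.

```python
def ecc_bits(data: int) -> int:
    return bin(data).count("1") & 1          # toy 1-bit ECC (parity) for illustration

def transform(data: int, address: int) -> int:
    return data ^ (address & 0xFF)           # fold the address into the data (assumed scheme)

def write(memory, address, data):
    memory[address] = (transform(data, address), ecc_bits(data))

def read(memory, issued_address, decoded_address):
    # decoded_address models an address decode error returning data from the wrong location
    stored, ecc = memory[decoded_address]
    data = transform(stored, issued_address)  # undo the transform with the read address
    if ecc_bits(data) != ecc:
        raise IOError("ECC mismatch: possible address error")
    return data

mem = {}
write(mem, 0x10, 0b1011)
print(read(mem, 0x10, 0x10))                 # correct address: data returned
try:
    read(mem, 0x11, 0x10)                    # decode error: data comes from the wrong location
except IOError as err:
    print(err)                               # ECC mismatch: possible address error
```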
Abstract:
A method for managing data operates in a data processing system with a system memory and a plurality of processing units (PUs), each PU having a cache comprising a plurality of cache lines, each cache line having one of a plurality of coherency states, and each PU coupled to at least another one of the plurality of PUs. A first PU selects a castout cache line of a plurality of cache lines in a first cache of the first PU to be castout of the first cache. The first PU sends a request to a second PU, wherein the second PU is a neighboring PU of the first PU, and the request comprises a first address and first coherency state of the selected castout cache line. The second PU determines whether the first address matches an address of any cache line in the second PU. The second PU sends a response to the first PU based on a coherency state of each of a plurality of cache lines in the second cache and whether there is an address hit. The first PU determines whether to transmit the castout cache line to the second PU based on the response. In the event the first PU determines to transmit the castout cache line to the second PU, the first PU transmits the castout cache line to the second PU.
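An illustrative model of the neighbor cast-out handshake described above. The accept/reject policy shown (accept when the neighbor has the address or a free invalid line) is an assumption; the abstract only says the response depends on the neighbor's line states and whether there is an address hit.

```python
class PU:
    def __init__(self, name, lines):
        self.name = name
        self.lines = lines        # dict: address -> coherency state

    def castout_request(self, address, state):
        """Neighbor side: respond based on an address hit and local line states."""
        hit = address in self.lines
        has_room = hit or "invalid" in self.lines.values()   # assumed acceptance rule
        return {"hit": hit, "accept": has_room}

    def castout(self, address, state, neighbor):
        """Requester side: ask the neighbor, then transmit only if accepted."""
        resp = neighbor.castout_request(address, state)
        if resp["accept"]:
            neighbor.lines[address] = state   # transmit the castout line
            del self.lines[address]
            return True
        return False                          # e.g. fall back to system memory instead

pu0 = PU("PU0", {0x80: "modified"})
pu1 = PU("PU1", {0x40: "invalid"})
print(pu0.castout(0x80, "modified", pu1))     # True: the neighbor had an invalid line
```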
Abstract:
A system and method for handling e-mail attachments in a data processing system. A client receives at least one message in a message database stored in a system memory, wherein the at least one message includes at least one attached file. The client displays a main preview of the at least one message, wherein the main preview of the at least one message includes an indicia that represents the at least one attached file. The client expands the main preview of the at least one message into a first sub-preview and a second sub-preview, wherein the first sub-preview represents the at least one message, and wherein the second sub-preview represents the at least one attached file. The client selects the second sub-preview to perform a function on the at least one attached file independent of the at least one message. The client performs the function on the at least one attached file independent of the at least one message.
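A rough data-structure sketch of the preview expansion described above; the Message class, function names, and the paperclip-style indicium are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    body: str
    attachments: list = field(default_factory=list)

def main_preview(msg):
    # The main preview shows the message plus an indicium per attached file.
    return {"body": msg.body, "indicia": ["[attachment]"] * len(msg.attachments)}

def expand_preview(msg):
    # Expand into a sub-preview for the message and one for each attached file,
    # so an attachment can be acted on independently of the message.
    return [("message", msg.body)] + [("attachment", a) for a in msg.attachments]

msg = Message("Quarterly report attached.", ["report.pdf"])
print(main_preview(msg))
subs = expand_preview(msg)
print(subs[1])   # select the attachment sub-preview to operate on the file alone
```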
Abstract:
A method and bus prefetching mechanism are provided for implementing enhanced buffer control. A computer system includes a plurality of masters and at least one slave exchanging data over a system bus, and the slave prefetches read data under control of a master. The master generates a continue bus signal that indicates a new or a continued request. The master generates a prefetch bus signal that indicates an amount to prefetch, including no prefetching. The master includes a mechanism for continuing a sequence of reads that allows prefetching until a request is made indicating a prefetch amount of zero.
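A sketch of the master-controlled prefetch handshake described above: the slave keeps prefetching ahead of the master's reads until the master issues a request with a prefetch amount of zero. The signal and method names are illustrative assumptions.

```python
class Slave:
    def __init__(self, memory):
        self.memory = memory
        self.prefetched = {}

    def read(self, addr, cont, prefetch):
        if not cont:
            self.prefetched.clear()           # new request: drop stale prefetched data
        if addr in self.prefetched:
            data = self.prefetched.pop(addr)  # continued read hits prefetched data
        else:
            data = self.memory[addr]
        # Prefetch the next 'prefetch' words; an amount of zero means stop prefetching.
        for i in range(1, prefetch + 1):
            self.prefetched[addr + i] = self.memory[addr + i]
        return data

mem = {a: a * 10 for a in range(16)}
slave = Slave(mem)
slave.read(0, cont=False, prefetch=4)         # new sequence, prefetch 4 words ahead
slave.read(1, cont=True, prefetch=4)          # continued read served from prefetched data
slave.read(2, cont=True, prefetch=0)          # prefetch amount of zero ends the sequence
```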
Abstract:
A circuit arrangement for bus arbitration alters the sequence in which device requests are arbitrated with respect to each other and to a previous arbitration sequence. To this end, an arbiter grants access to a first group of devices according to a predetermined sequence. The arbiter then automatically alters the sequence for a second group of devices, granting access to the bus for the second group according to the altered sequence. These features allow the order in which the arbiter sequences through the groups to be automatically varied with respect to each other, diminishing the likelihood of lockout.
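A toy model of the arbitration scheme described above: each group of requests is granted according to the current sequence, and the sequence is then automatically altered before the next group so that no device is repeatedly granted last. The rotation used here is an assumed example of "altering" the sequence.

```python
def arbitrate(groups, devices):
    order = list(devices)
    grants = []
    for requests in groups:
        # Grant this group's requests according to the current sequence.
        grants.append([d for d in order if d in requests])
        # Automatically alter the sequence for the next group.
        order = order[1:] + order[:1]
    return grants

print(arbitrate([{"A", "B", "C"}, {"A", "B", "C"}], ["A", "B", "C"]))
# [['A', 'B', 'C'], ['B', 'C', 'A']]  -- 'A' is not always granted first
```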
Abstract:
A system and method for simulating a circuit design using both an unknown Boolean state and a negative unknown Boolean state is provided. When the circuit is simulated, one or more initial simulated logic elements are initialized to the unknown Boolean state. The initialized unknown Boolean states are then fed to one or more simulated logic elements, and the simulator simulates the handling of the unknown Boolean state by the simulated logic elements. Examples of simulated logic elements include latches, such as flip-flops, and gates, such as inverters and other basic logic gates. The processing results in at least one negative unknown Boolean state; for example, a negative unknown Boolean state results when the unknown Boolean state is inverted by an inverter. The resulting negative unknown Boolean state is then fed to other simulated logic elements that generate further simulation results based on processing the negative unknown Boolean state.
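A minimal four-valued simulation sketch using 0, 1, an unknown state "X", and its complement "NX" (the negative unknown state). The truth tables below are one plausible reading of the abstract, not the patented implementation; they illustrate why tracking NX separately recovers values that plain X-propagation loses.

```python
def inv(a):
    return {0: 1, 1: 0, "X": "NX", "NX": "X"}[a]

def and2(a, b):
    if a == 0 or b == 0:
        return 0
    if a == 1:
        return b
    if b == 1:
        return a
    if {a, b} == {"X", "NX"}:
        return 0        # X AND NOT(X) is provably 0 for any concrete value of X
    return a            # X AND X -> X, NX AND NX -> NX

x = "X"                 # a latch initialized to the unknown state
print(inv(x))           # 'NX': the inverter produces the negative unknown state
print(and2(x, inv(x)))  # 0: tracking NX yields a known result where plain X would not
```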
Abstract:
A DMA device prefetches descriptors into a descriptor prefetch buffer. The descriptor prefetch buffer is sized to hold an appropriate number of descriptors for a given latency environment. To support a linked list of descriptors, the DMA engine prefetches descriptors based on the assumption that they are sequential in memory and discards any descriptors that are found to violate this assumption. The DMA engine seeks to keep the descriptor prefetch buffer full by requesting multiple descriptors per transaction whenever possible. The bus engine fetches these descriptors from system memory and writes them to the prefetch buffer. The DMA engine may also use an aggressive prefetch where the bus engine requests the maximum number of descriptors that the buffer will support whenever there is any space in the descriptor prefetch buffer. The DMA device discards any remaining descriptors that cannot be stored.
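A simplified model of the descriptor prefetch policy described above: fetch several descriptors at a time assuming they are laid out sequentially in memory, and discard any prefetched descriptor whose address does not match the linked list's actual next pointer. The field names and buffer sizes are illustrative assumptions.

```python
from collections import deque

def refill(buffer, memory, next_addr, capacity, burst):
    """Fetch up to min(burst, free space) descriptors, assuming a sequential layout."""
    for offset in range(min(burst, capacity - len(buffer))):
        addr = next_addr + offset
        if addr not in memory:
            break
        buffer.append((addr, memory[addr]))     # (address, descriptor) pairs

def pop_descriptor(buffer, expected_addr):
    """Consume the next descriptor, discarding prefetches that broke the assumption."""
    while buffer and buffer[0][0] != expected_addr:
        buffer.popleft()                        # discard wrongly speculated descriptors
    return buffer.popleft()[1] if buffer else None

# Descriptors 0..3 sit consecutively in memory, then the linked list jumps to address 7.
memory = {0: {"next": 1}, 1: {"next": 2}, 2: {"next": 3},
          3: {"next": 7}, 4: {"next": 5}, 7: {"next": None}}
buf = deque()
refill(buf, memory, 0, capacity=8, burst=8)     # aggressive prefetch: request all free space
for addr in (0, 1, 2, 3):
    pop_descriptor(buf, addr)
print(pop_descriptor(buf, 7))                   # None: speculatively fetched descriptor 4 is discarded
refill(buf, memory, 7, capacity=8, burst=8)     # refetch from the list's true next address
print(pop_descriptor(buf, 7))                   # {'next': None}
```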