Abstract:
A data processing system and method of communicating data in a data processing system are described. The data processing system includes a communication network to which a plurality of devices are coupled. At least one device among the plurality of devices coupled to the communication network includes mastering circuitry and snooping circuitry. According to the method, a first timing signal having a first frequency and a second timing signal having a second frequency different from the first frequency are generated. Communication transactions on the communication network are initiated utilizing the mastering circuitry, which operates in response to the first timing signal, and are monitored utilizing the snooping circuitry, which operates in response to the second timing signal.
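A minimal Python sketch of the idea follows; the patent describes hardware circuitry, so the clock periods, function names, and bus model here are purely illustrative assumptions.

```python
# Minimal sketch: one device whose mastering logic and snooping logic run
# from two timing signals of different frequencies. All names, periods,
# and the bus model are illustrative assumptions; the patent describes
# circuitry, not software.
BUS = []  # transactions visible on the shared communication network

def master_tick(cycle):
    """Mastering circuitry: initiates a transaction on the first clock."""
    BUS.append(f"txn-{cycle}")

def snoop_tick(cycle):
    """Snooping circuitry: monitors the bus on the second clock."""
    if BUS:
        print(f"snoop cycle {cycle}: observed {BUS[-1]}")

MASTER_PERIOD = 3  # first timing signal (slower)
SNOOP_PERIOD = 2   # second timing signal (faster, different frequency)

for t in range(12):  # common time base in arbitrary units
    if t % MASTER_PERIOD == 0:
        master_tick(t // MASTER_PERIOD)
    if t % SNOOP_PERIOD == 0:
        snoop_tick(t // SNOOP_PERIOD)
```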
Abstract:
A method and system for front-end gathering of store instructions within a processor are disclosed. In accordance with the method and system of the present invention, a store queue within a data-processing system includes a front-end queue and a back-end queue. Multiple entries are provided in the back-end queue, and each entry includes an address field, a byte-count field, and a data field. A determination is first made as to whether or not the data field of a first entry of the front-end queue is completely filled. In response to a determination that the data field of the first entry is not completely filled, another determination is made as to whether or not the address of a store instruction in a subsequent second entry equals the address of the store instruction in the first entry plus the byte count of the first entry. In response to a determination that the addresses are contiguous in this way, the store instruction in the subsequent second entry is collapsed into the store instruction in the first entry.
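As a rough illustration, the collapse test can be sketched in Python; the field names, the entry type, and the 8-byte data-field width are assumptions for the example, not the patent's definitions.

```python
# Illustrative sketch of the collapse test for two front-end queue
# entries. Field names (addr, nbytes, data) and the 8-byte field width
# are assumptions.
from dataclasses import dataclass

@dataclass
class StoreEntry:
    addr: int     # address field
    nbytes: int   # byte-count field
    data: bytes   # data field

FIELD_WIDTH = 8   # assumed width of a back-end entry's data field

def try_collapse(first: StoreEntry, second: StoreEntry) -> bool:
    """Collapse `second` into `first` when the stores are contiguous."""
    if first.nbytes >= FIELD_WIDTH:               # data field already full
        return False
    if second.addr != first.addr + first.nbytes:  # not contiguous
        return False
    first.data += second.data
    first.nbytes += second.nbytes
    return True

a = StoreEntry(0x1000, 4, b"\x01\x02\x03\x04")
b = StoreEntry(0x1004, 4, b"\x05\x06\x07\x08")
print(try_collapse(a, b), a)  # True: one gathered 8-byte store
```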
Abstract:
A method and system for controlling access to a shared resource in a data processing system are described. According to the method, a number of requests for access to the resource are generated by a number of requesters that share the resource. Each of the requesters is associated with a priority weight that indicates a probability that the associated requester will be assigned a highest current priority. Each requester is then assigned a current priority that is determined substantially randomly with respect to previous priorities of the requesters. In response to the current priorities of the requesters, a request for access to the resource is granted. In one embodiment, a requester corresponding to a granted request is signaled that its request has been granted, and a requester corresponding to a rejected request is signaled that its request was not granted.
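A short Python sketch of weight-based random arbitration is given below; the requester names and weights are invented for illustration. Each draw is independent of earlier draws, matching the requirement that current priorities be substantially random with respect to previous priorities.

```python
# Sketch of weighted random arbitration. Each requester's priority weight
# sets the probability it is assigned the highest current priority; the
# draw is fresh each cycle, independent of previous priorities.
import random

def arbitrate(requesting, weights):
    """Return the id of the granted requester, or None if no requests."""
    if not requesting:
        return None
    return random.choices(requesting,
                          weights=[weights[r] for r in requesting], k=1)[0]

weights = {"cpu0": 4, "cpu1": 2, "dma": 1}     # illustrative weights
granted = arbitrate(["cpu0", "dma"], weights)
for r in ["cpu0", "dma"]:                      # signal each requester
    print(r, "granted" if r == granted else "not granted")
```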
Abstract:
A method and system for allocating data among cache memories within a symmetric multiprocessor data-processing system are disclosed. The symmetric multiprocessor data-processing system includes a system memory and multiple processing units, wherein each of the processing units has a cache memory. The system memory is divided into a number of segments equal to the total number of cache memories. Each of these segments is represented by one of the cache memories, such that each cache memory is responsible for caching data from its associated segment of the system memory.
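The address-to-cache mapping can be sketched as follows; the memory size and cache count are made-up values for illustration.

```python
# Sketch of dividing system memory into as many segments as there are
# caches, with each cache responsible for one segment. Sizes are
# illustrative assumptions.
SYSTEM_MEMORY_SIZE = 1 << 24   # 16 MiB, made up for the example
NUM_CACHES = 4                 # one cache per processing unit
SEGMENT_SIZE = SYSTEM_MEMORY_SIZE // NUM_CACHES

def responsible_cache(addr: int) -> int:
    """Index of the cache that caches data from this address's segment."""
    return addr // SEGMENT_SIZE

print(responsible_cache(0x000010))  # 0: address in the first segment
print(responsible_cache(0xC00000))  # 3: address in the last segment
```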
Abstract:
A data processing system includes a processor, a system memory, one or more input/output channel controllers (IOCC), and a system bus connecting the processor, the memory, and the IOCCs together for communicating instructions, addresses, and data between the various elements of the system. The IOCC includes a paged cache storage having a number of lines, wherein each line of a page may be, for example, 32 bytes. Each page in the cache also has several attribute bits, including the so-called WIM attribute bits: the W bit controls write-through operations; the I bit controls cache inhibit; and the M bit controls memory coherency. Since the IOCC is unaware of these page-table attribute bits for the cache lines being DMAed to system memory, the IOCC must maintain memory consistency and cache coherency without sacrificing performance. For DMA write data to system memory, new cache attributes called global, cacheable, and demand-based write-through are created. Individual writes within a cache line are gathered by the IOCC and written to system memory only when the I/O bus master accesses a different cache line or relinquishes the I/O bus.
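A toy Python sketch of the gathering behavior follows; the class, its methods, and the flush-on-new-line policy are illustrative assumptions rather than the IOCC's actual design.

```python
# Sketch of IOCC write gathering: individual DMA writes are buffered per
# cache line and flushed to system memory only when the I/O bus master
# touches a different line or relinquishes the bus. The class and its
# methods are assumptions for illustration.
LINE_SIZE = 32  # bytes per cache line, per the abstract's example

class WriteGatherer:
    def __init__(self):
        self.line = None   # cache line currently being gathered
        self.pending = {}  # offset within line -> byte value

    def write(self, addr, byte):
        line = addr // LINE_SIZE
        if self.line is not None and line != self.line:
            self.flush()   # master moved to a different cache line
        self.line = line
        self.pending[addr % LINE_SIZE] = byte

    def flush(self):
        if self.pending:
            print(f"write line {self.line:#x} to system memory: {self.pending}")
        self.line, self.pending = None, {}

g = WriteGatherer()
g.write(0x100, 0xAA)
g.write(0x101, 0xBB)  # same line: gathered, not yet written
g.write(0x140, 0xCC)  # new line: previous line flushed to memory
g.flush()             # bus relinquished: flush the remainder
```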
Abstract:
A system and method are provided that use a determination of bad data parity together with the state of an error signal (Derr_) as a functional signal indicating a specific type of error in a particular system component. If the Derr_ signal is active, the parity error recognized by the CPU was caused by a correctable condition in a data-providing device; in this instance, the processor reads the corrected data from a buffer without reissuing a fetch request. When the CPU finds a parity error but Derr_ is not active, a more serious fault condition is identified (a bus error or an uncorrectable multi-bit error) requiring a machine-level interrupt or the like. And when no parity error is found by the CPU and Derr_ is not active, the data is known to be valid and the parity/ECC latency is eliminated, thereby saving processing cycle time.
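The three-way decision can be summarized in a short sketch; the signal names follow the text (Derr_ as the error signal), while the returned handler strings are placeholders for the hardware actions the abstract describes.

```python
# Sketch of the three-way classification the abstract describes. The
# handler strings stand in for the hardware responses; only the decision
# structure is taken from the text.
def classify(parity_error: bool, derr_active: bool) -> str:
    if parity_error and derr_active:
        # Correctable condition in the data-providing device: read the
        # corrected data from the buffer, no fetch reissue needed.
        return "read corrected data from buffer"
    if parity_error and not derr_active:
        # Bus error or uncorrectable multi-bit error.
        return "raise machine-level interrupt"
    # No parity error, Derr_ inactive: data valid, skip parity/ECC latency.
    return "use data immediately"

for pe, de in [(True, True), (True, False), (False, False)]:
    print(pe, de, "->", classify(pe, de))
```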
Abstract:
Disclosed are a method, a system, and a computer program product for automatically allocating and de-allocating resources for jobs executed or processed by one or more supercomputer systems. In one or more embodiments, a supercomputing system can process multiple jobs with respective supercomputing resources. A global resource manager can automatically allocate additional resources to a first job and de-allocate resources from a second job. In one or more embodiments, the global resource manager can provide the de-allocated resources to the first job as additional supercomputing resources. In one or more embodiments, the first job can use the additional supercomputing resources to perform data analysis at a higher resolution, with the additional resources compensating for the extra time the higher-resolution analysis would take using the originally allocated supercomputing resources.
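A toy sketch of the reallocation step follows; the node-count model and the function are invented for illustration and stand in for whatever bookkeeping the global resource manager actually performs.

```python
# Toy sketch of a global resource manager moving nodes between two jobs.
# The job/resource model is an assumption made up for illustration.
jobs = {"job1": {"nodes": 64}, "job2": {"nodes": 128}}

def reallocate(manager_jobs, from_job, to_job, n_nodes):
    """De-allocate n_nodes from one job and provide them to another."""
    assert manager_jobs[from_job]["nodes"] >= n_nodes
    manager_jobs[from_job]["nodes"] -= n_nodes
    # The extra capacity offsets the cost of the higher-resolution analysis.
    manager_jobs[to_job]["nodes"] += n_nodes

reallocate(jobs, "job2", "job1", 32)
print(jobs)  # {'job1': {'nodes': 96}, 'job2': {'nodes': 96}}
```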
Abstract:
A technique for operating a high performance computing (HPC) cluster includes monitoring workloads of multiple processors included in the HPC cluster. The HPC cluster includes multiple nodes that each include two or more of the multiple processors. One or more threads assigned to one or more of the multiple processors are moved to a different one of the multiple processors based on the workloads of the multiple processors.
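One plausible reading of the balancing step, sketched in Python with an invented workload model (processor ids, thread lists, and the migration threshold are all assumptions):

```python
# Sketch of the balancing step: move a thread from the busiest processor
# to the least busy one. The workload model and threshold are assumptions.
def rebalance(workloads: dict[str, list[str]]) -> None:
    """workloads maps processor id -> list of thread ids assigned to it."""
    busiest = max(workloads, key=lambda p: len(workloads[p]))
    idlest = min(workloads, key=lambda p: len(workloads[p]))
    if len(workloads[busiest]) - len(workloads[idlest]) > 1:
        thread = workloads[busiest].pop()
        workloads[idlest].append(thread)  # may migrate across nodes

w = {"node0.cpu0": ["t1", "t2", "t3"], "node0.cpu1": ["t4"], "node1.cpu0": []}
rebalance(w)
print(w)  # t3 moved to the idle processor on node1
```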
Abstract:
A bulk power assembly includes a bulk power distribution (BPD) subassembly and a bulk power controller and hub (BPCH) subassembly coupled to the BPD subassembly. The BPD subassembly is configured to provide bulk DC power from both AC input power and DC input power and to distribute the bulk DC power. The BPCH subassembly is configured to monitor and control the BPD subassembly.
Abstract:
A processor book is designed to support both commercial workloads and technical workloads based on a dynamic or static mechanism for reconfiguring the external wiring interconnect. The processor book is configured as a building block for commercial workload processing systems with external connector buses (ECBs). The processor book is also provided with routing logic that enables the ECBs to be utilized either for book-to-book routing or for routing within the same processor book. A table-specific wiring scheme is provided for coupling the ECBs running off the chips of one MCM to the chips of the second MCM on the processor book, so that each chip of the first MCM is connected directly to the chip of the second MCM that is logically furthest away, and vice versa. Once the wiring of the ECBs is completed according to the wiring scheme, the operational and functional characteristics reflect those of a processor book configured for technical workloads.
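If the chips of each MCM are taken in a linear logical order, the "furthest away" pairing behaves like a mirrored index, as in this illustrative sketch (the chip count and the mirrored-index rule are assumptions, not the patent's wiring table):

```python
# Sketch of the "furthest away" pairing: chip i of the first MCM is wired
# to the chip of the second MCM that is logically furthest from it, which
# for an assumed linear ordering is the mirrored index.
CHIPS_PER_MCM = 4  # made-up chip count

def ecb_partner(chip: int) -> int:
    """Second-MCM chip wired to `chip` of the first MCM (and vice versa)."""
    return CHIPS_PER_MCM - 1 - chip

for c in range(CHIPS_PER_MCM):
    print(f"MCM0 chip {c} <-> MCM1 chip {ecb_partner(c)}")
```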