Abstract:
According to at least one embodiment, a method of data processing in a multiprocessor data processing system includes a requesting processing unit initiating an interconnect operation including a memory access request indicating that a variable amount of data is acceptable to service the request for data. In response to snooping the memory access request on an interconnect, a snooper selects an amount of data to supply to the requesting processing unit and transmits the selected amount of data to the requesting processing unit. The requesting processing unit receives the selected amount of data and utilizes at least some of it to service a processor request.
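For illustration only, the minimal C sketch below shows one way the request/selection exchange described above could be modeled; the struct fields, the min/max acceptability window, and the snooper's selection policy are assumptions, not the claimed protocol encoding.

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE_BYTES 128u   /* assumed cache line size */

/* Memory access request broadcast on the interconnect: instead of naming a
 * single fixed size, the requester advertises the range of data sizes it is
 * willing to accept.                                                        */
typedef struct {
    uint64_t addr;        /* target real address                  */
    uint32_t min_bytes;   /* smallest amount that is still useful */
    uint32_t max_bytes;   /* largest amount that is acceptable    */
} mem_access_request;

/* A snooper holding the data chooses how much to supply, bounded by the
 * window the requester advertised; 0 means it cannot service the request.  */
static uint32_t snooper_select_amount(const mem_access_request *req,
                                      uint32_t bytes_held)
{
    if (bytes_held < req->min_bytes)
        return 0;
    return bytes_held < req->max_bytes ? bytes_held : req->max_bytes;
}

int main(void)
{
    mem_access_request req = { 0x1000, 32u, CACHE_LINE_BYTES };
    uint32_t supplied = snooper_select_amount(&req, CACHE_LINE_BYTES);
    /* The requester would then use at least some of 'supplied' to satisfy
     * the original processor request.                                      */
    printf("snooper supplies %u bytes (up to %u accepted)\n",
           supplied, req.max_bytes);
    return 0;
}
```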
Abstract:
A mechanism is provided for packet coalescing in virtual channels of a data processing system. A first processor bundles original data into a data packet to be transmitted to a destination processor, the original data comprising payload data and overhead data. The first processor transmits the data packet to a second processor along a path to the destination processor. The second processor determines if the second processor has additional payload data destined for the same destination processor. Responsive to the second processor having the additional payload data, the second processor unbundles the data packet, adds the additional payload data to the payload data, and rebundles the payload data along with the additional payload data and the overhead data into a rebundled data packet. Then the second processor transmits the rebundled data packet to at least one other processor along the path to the destination processor.
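A hedged C sketch of the coalescing step at an intermediate processor follows; the packet layout, the 16-byte overhead field, and the coalesce() helper are illustrative assumptions rather than the patented packet format.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_PAYLOAD 256   /* assumed per-packet payload capacity */

/* Assumed packet layout: one copy of overhead (header/routing) data plus a
 * payload area that can carry several coalesced messages.                  */
typedef struct {
    uint32_t dest_id;                /* destination processor          */
    uint8_t  overhead[16];           /* overhead data, carried once    */
    uint16_t payload_len;
    uint8_t  payload[MAX_PAYLOAD];
} packet_t;

/* At an intermediate processor: if additional payload is bound for the same
 * destination and fits, fold it into the packet rather than sending a
 * second packet with its own overhead.                                     */
static int coalesce(packet_t *pkt, uint32_t extra_dest,
                    const uint8_t *extra, uint16_t extra_len)
{
    if (pkt->dest_id != extra_dest)                   return 0;
    if (pkt->payload_len + extra_len > MAX_PAYLOAD)   return 0;

    /* The unbundle/rebundle step collapses to appending the new payload;
     * the single copy of overhead data now covers both messages.           */
    memcpy(pkt->payload + pkt->payload_len, extra, extra_len);
    pkt->payload_len += extra_len;
    return 1;
}

int main(void)
{
    packet_t pkt = { .dest_id = 7, .payload_len = 4,
                     .payload = { 1, 2, 3, 4 } };
    uint8_t more[2] = { 5, 6 };
    if (coalesce(&pkt, 7, more, sizeof more))
        printf("rebundled packet now carries %u payload bytes\n",
               pkt.payload_len);
    return 0;
}
```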
Abstract:
Mechanisms for performing dynamic request routing based on broadcast queue depth information are provided. Each processor chip in the system may use a synchronized heartbeat signal it generates to provide queue depth information to each of the other processor chips in the system. The queue depth information identifies the number of requests or amount of data in each of the queues of the processor chip that originated the heartbeat signal. The queue depth information from each of the processor chips in the system may be used by the processor chips in determining optimal routing paths for data from a source processor chip to a destination processor chip. As a result, the congestion of data for processing at each of the processor chips along each possible routing path may be taken into account when selecting to which processor chip to forward data.
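As a rough illustration, the C sketch below shows how heartbeat-borne queue depths could feed next-hop selection; the global depth table, heartbeat_update(), and select_next_hop() are hypothetical names, and the minimum-depth policy is only one plausible reading of "optimal routing paths".

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_CHIPS 8   /* assumed number of processor chips in the system */

/* Latest queue depth reported by each chip in its synchronized heartbeat,
 * indexed by the chip that originated the heartbeat.                       */
static uint32_t g_queue_depth[NUM_CHIPS];

/* Heartbeat handler: record the depth the originating chip reported.       */
void heartbeat_update(uint32_t origin_chip, uint32_t depth)
{
    g_queue_depth[origin_chip] = depth;
}

/* Route selection: among the candidate next-hop chips toward a destination,
 * prefer the one that reported the least congestion.                       */
uint32_t select_next_hop(const uint32_t *candidates, int n)
{
    uint32_t best = candidates[0];
    for (int i = 1; i < n; i++)
        if (g_queue_depth[candidates[i]] < g_queue_depth[best])
            best = candidates[i];
    return best;
}

int main(void)
{
    heartbeat_update(1, 40);                 /* chip 1 is backed up      */
    heartbeat_update(2, 3);                  /* chip 2 is nearly idle    */
    uint32_t candidates[] = { 1, 2 };
    printf("forwarding via chip %u\n", select_next_hop(candidates, 2));
    return 0;
}
```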
Abstract:
A mechanism is provided for collective acceleration unit tree flow control that forms a logical tree (sub-network) among a set of processors and transfers "collective" packets on this tree. The system supports many collective trees, and each collective acceleration unit (CAU) includes resources to support a subset of the trees. Each CAU has limited buffer space, and the connection between two CAUs is not completely reliable. Therefore, to allow collective packets to traverse the tree without colliding with one another over buffer space, and to guarantee end-to-end packet delivery, each CAU in the system flow controls the packets, detects packet loss, and retransmits lost packets.
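The C sketch below illustrates, under simplifying assumptions, the three duties named above (credit-based flow control, loss detection by timeout, and retransmission from a retained copy); the cau_link structure, the one-packet-in-flight policy, and wire_send() are hypothetical, not the actual CAU hardware interface.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Per-neighbor send state for one collective tree. Assumed policy: at most
 * one unacknowledged packet per link, with a copy kept for retransmission;
 * credits must be granted by the neighboring CAU before sending.           */
typedef struct {
    uint32_t next_seq;       /* sequence number of the next packet to send */
    int      in_flight;      /* 1 while waiting for an acknowledgement     */
    int      credits;        /* buffer slots granted by the neighboring CAU*/
    uint8_t  pending[256];   /* retained copy of the unacked packet        */
    int      pending_len;
} cau_link;

/* Stand-in for the physical link (the real connection is unreliable).      */
static void wire_send(const uint8_t *pkt, int len)
{
    printf("sending %d-byte collective packet\n", len);
    (void)pkt;
}

/* Flow control: transmit only when the neighbor has buffer space and the
 * previously sent packet has been acknowledged; otherwise hold the packet. */
int cau_try_send(cau_link *l, const uint8_t *pkt, int len)
{
    if (l->credits <= 0 || l->in_flight)
        return 0;
    memcpy(l->pending, pkt, (size_t)len);   /* keep a copy for retransmit  */
    l->pending_len = len;
    l->in_flight   = 1;
    l->credits--;
    wire_send(pkt, len);
    l->next_seq++;
    return 1;
}

/* Loss detection: if no ack has arrived by the timeout, resend the retained
 * copy, providing end-to-end delivery over the lossy link.                 */
void cau_on_timeout(cau_link *l)
{
    if (l->in_flight)
        wire_send(l->pending, l->pending_len);
}

void cau_on_ack(cau_link *l, uint32_t seq)
{
    if (l->in_flight && seq + 1 == l->next_seq)
        l->in_flight = 0;    /* delivered; the retained copy can be dropped */
}

int main(void)
{
    cau_link link = { .next_seq = 0, .in_flight = 0, .credits = 1 };
    uint8_t pkt[8] = { 0 };
    cau_try_send(&link, pkt, sizeof pkt);   /* consumes the single credit   */
    cau_on_timeout(&link);                  /* no ack yet: retransmit       */
    cau_on_ack(&link, 0);                   /* ack for seq 0 releases copy  */
    return 0;
}
```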
Abstract:
A method of data processing in a processing unit supported by a memory hierarchy includes the processing unit performing a plurality of memory accesses to the memory hierarchy. The plurality of memory accesses includes one or more memory accesses targeting a full cache line of data. The processing unit monitors utilization of data accessed by the plurality of memory accesses, and based upon the utilization of the data, dynamically alters a memory access mode of operation so that a subsequent storage-modifying memory access targets less than a full cache line of data.
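A speculative C sketch of the monitoring step follows: it samples how much of each fetched full line is actually used and flips the access mode at the end of a window. The window size, the half-line threshold, and the switching trigger are arbitrary illustrative values, not the patented heuristic.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed policy: count how many recently fetched full cache lines were
 * mostly unused; if that fraction is high, switch storage-modifying
 * accesses to a partial-line mode.                                         */
enum access_mode { FULL_LINE, PARTIAL_LINE };

typedef struct {
    uint32_t lines_fetched;
    uint32_t lines_underused;   /* lines where < half the bytes were used  */
    enum access_mode mode;
} util_monitor;

void note_line_utilization(util_monitor *m, uint32_t bytes_used,
                           uint32_t line_bytes)
{
    m->lines_fetched++;
    if (bytes_used < line_bytes / 2)
        m->lines_underused++;

    if (m->lines_fetched == 1024) {            /* end of sampling window    */
        /* Switch to partial-line stores when most full lines go unused,
         * and back to full-line mode when utilization recovers.            */
        m->mode = (m->lines_underused > 768) ? PARTIAL_LINE : FULL_LINE;
        m->lines_fetched = m->lines_underused = 0;
    }
}

int main(void)
{
    util_monitor m = { 0, 0, FULL_LINE };
    for (int i = 0; i < 2048; i++)
        note_line_utilization(&m, 8, 128);     /* only 8 of 128 bytes used */
    printf("mode after window: %s\n",
           m.mode == PARTIAL_LINE ? "partial-line" : "full-line");
    return 0;
}
```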
Abstract:
A method, computer program product, and system are provided for dynamically routing data through the data processing system. Data that is to be transmitted to a destination processor is received at a first processor. The received data includes address information. A lookup is performed in routing table data structures based on the address information to identify candidate paths through which the data may be routed to the destination processor. A determination is made, based on a setting of at least one identifier, as to whether any of the candidate paths cannot be used to route the data to the destination processor. A path is selected from the identified candidate paths for routing of the data based on the setting of the at least one identifier. Then, the data is transmitted from the first processor along the selected path toward the destination processor.
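A minimal C sketch of the lookup-and-filter step is given below, assuming the "at least one identifier" is a per-path blocked bit in the routing-table entry; the route_entry layout and select_path() are hypothetical names for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_PATHS 4   /* assumed number of candidate paths per destination */

/* One routing-table entry: candidate paths to a destination plus a bitmask
 * identifier marking paths that are currently unusable.                    */
typedef struct {
    uint32_t dest_id;
    uint32_t next_hop[MAX_PATHS];
    uint32_t num_paths;
    uint32_t blocked_mask;     /* bit i set => path i must not be used      */
} route_entry;

/* Select the first candidate path whose identifier bit marks it usable;
 * returns -1 if every path to the destination is currently blocked.        */
int select_path(const route_entry *e)
{
    for (uint32_t i = 0; i < e->num_paths; i++)
        if (!(e->blocked_mask & (1u << i)))
            return (int)i;
    return -1;
}

int main(void)
{
    route_entry e = { .dest_id = 5, .next_hop = { 1, 2, 3 },
                      .num_paths = 3, .blocked_mask = 0x1 };
    int p = select_path(&e);                  /* path 0 marked unusable     */
    if (p >= 0)
        printf("routing to %u via chip %u\n", e.dest_id, e.next_hop[p]);
    return 0;
}
```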
Abstract:
A data processing system is programmed to provide a method for enabling user-level one-to-all messaging (OTAM) broadcast within a distributed parallel computing environment in which multiple threads of a single job execute on different processing nodes across a network. The method comprises generating one or more messages for transmission to at least one other processing node accessible via the network, where the messages are generated by or for a first thread executing at the data processing system (the first processing node) and the other processing node executes one or more second threads of the same parallel job as the first thread. An OTAM broadcast is transmitted via a host fabric interface (HFI) of the data processing system as a one-to-all broadcast on the network, whereby the messages are transmitted to a cluster of processing nodes across the network that execute threads of the same parallel job as the first thread.
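A deliberately loose C sketch of what a user-level OTAM send through an HFI might look like appears below, assuming a memory-mapped send buffer and doorbell; hfi_window, otam_broadcast(), and the doorbell semantics are invented for illustration and are not the actual HFI interface.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical user-level HFI window: a send buffer plus a doorbell that
 * triggers a one-to-all broadcast to every node running the same job.      */
typedef struct {
    volatile uint8_t  *send_buf;   /* mapped HFI send buffer                */
    volatile uint32_t *doorbell;   /* write length here to launch the send  */
    uint32_t           job_id;     /* identifies the parallel job's cluster */
} hfi_window;

/* Broadcast 'msg' to all threads of the same parallel job without a kernel
 * call: copy the message into the mapped window and ring the doorbell.     */
void otam_broadcast(hfi_window *w, const void *msg, uint32_t len)
{
    memcpy((void *)w->send_buf, msg, len);   /* payload visible to the HFI  */
    *w->doorbell = len;                      /* HFI fans the packet out to  */
                                             /* every node in the job       */
}

int main(void)
{
    static uint8_t  buf[64];                 /* stand-ins for mapped memory */
    static uint32_t bell;
    hfi_window w = { buf, &bell, 42 };
    const char msg[] = "barrier-reached";
    otam_broadcast(&w, msg, sizeof msg);
    printf("doorbell=%u bytes queued for job %u\n", bell, w.job_id);
    return 0;
}
```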
Abstract:
A method, computer program product, and system are provided for performing a Message Passing Interface (MPI) job. A first processor chip receives a set of arrival signals from a set of processor chips executing tasks of the MPI job in the data processing system. The arrival signals identify when a processor chip executes a synchronization operation for synchronizing the tasks of the MPI job. Responsive to receiving the set of arrival signals from the set of processor chips, the first processor chip identifies the fastest processor chip of the set, i.e., the processor chip whose arrival signal arrived first. An operation of the fastest processor chip is modified based on the identification of the fastest processor chip. The set of processor chips comprises processor chips that are in either a same processor book or a different processor book of the data processing system.
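An illustrative C sketch of the arrival bookkeeping follows; record_arrival(), fastest_chip(), and the suggestion of throttling the fastest chip are assumptions layered on the abstract, which does not say how the fastest chip's operation is modified.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_CHIPS 8   /* assumed number of chips participating in the job */

/* Arrival order at which each chip's signal for the current MPI
 * synchronization operation was seen; 0 means it has not arrived yet.      */
static uint64_t g_arrival[NUM_CHIPS];

void record_arrival(int chip, uint64_t order)
{
    if (g_arrival[chip] == 0)
        g_arrival[chip] = order;
}

/* Identify the chip whose arrival signal came first; its operation can then
 * be adjusted.                                                             */
int fastest_chip(void)
{
    int best = -1;
    for (int i = 0; i < NUM_CHIPS; i++)
        if (g_arrival[i] != 0 &&
            (best < 0 || g_arrival[i] < g_arrival[best]))
            best = i;
    return best;   /* -1 if no chip has arrived yet */
}

int main(void)
{
    record_arrival(3, 100);   /* chip 3 reached the synchronization first  */
    record_arrival(1, 250);
    record_arrival(6, 180);
    printf("fastest chip: %d\n", fastest_chip());
    /* e.g. lower the fastest chip's frequency, since it would otherwise
     * sit idle until the slower chips arrive (assumed adjustment).         */
    return 0;
}
```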
Abstract:
A mechanism for performing dynamic request routing based on broadcast source request information is provided. Each processor chip in the system may use a synchronized heartbeat signal it generates to provide source request information to each of the other processor chips in the system. The source request information identifies the number of active source requests sent by the processor chip that originated the heartbeat signal. The source request information from each of the processor chips in the system may be used by the processor chips in determining optimal routing paths for data from a source processor chip to a destination processor chip. As a result, the congestion of data for processing at each of the processor chips along each possible routing path may be taken into account when selecting to which processor chip to forward data.
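A small C sketch of the sender-side bookkeeping is shown below, assuming the heartbeat simply carries the originating chip's current count of active source requests; the counter hooks and the heartbeat_msg layout are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Count of requests this chip currently has outstanding as a source;
 * incremented when a request is issued, decremented when it completes.     */
static uint32_t g_active_source_requests;

void on_source_request_issued(void)    { g_active_source_requests++; }
void on_source_request_completed(void) { g_active_source_requests--; }

/* Assumed heartbeat payload: the originating chip's id and its current
 * active-source-request count, broadcast on the synchronized heartbeat so
 * other chips can steer data away from heavily loaded chips.               */
typedef struct {
    uint32_t chip_id;
    uint32_t active_source_requests;
} heartbeat_msg;

heartbeat_msg build_heartbeat(uint32_t my_chip_id)
{
    heartbeat_msg hb = { my_chip_id, g_active_source_requests };
    return hb;
}

int main(void)
{
    on_source_request_issued();
    on_source_request_issued();
    on_source_request_completed();
    heartbeat_msg hb = build_heartbeat(4);
    printf("chip %u reports %u active source requests\n",
           hb.chip_id, hb.active_source_requests);
    return 0;
}
```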
Abstract:
A technique for performing cache injection includes monitoring addresses on a bus in response to a cache injection instruction. Ownership of input/output data on the bus is acquired by a cache when an address on the bus (that is associated with the input/output data) corresponds to an address of a data block associated with the cache injection instruction.
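A brief C sketch of the snoop-side check is given below, assuming 128-byte lines and a one-shot watch armed by the cache injection instruction; inject_watch and snoop_bus_write() are illustrative names, not the actual hardware interface.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINE_MASK (~((uint64_t)127))   /* assumed 128-byte cache lines */

/* Armed by a cache injection instruction: the cache watches the bus for a
 * write of I/O data to this block and claims it instead of letting it pass
 * to memory.                                                               */
typedef struct {
    uint64_t target_addr;   /* data block named by the injection instruction */
    bool     armed;
} inject_watch;

/* Bus snoop hook: when the snooped address matches the watched block, the
 * cache acquires ownership of the I/O data and captures it directly.       */
bool snoop_bus_write(inject_watch *w, uint64_t bus_addr)
{
    if (!w->armed || (bus_addr & LINE_MASK) != (w->target_addr & LINE_MASK))
        return false;
    w->armed = false;   /* one-shot: the block is captured into the cache   */
    return true;        /* caller installs the line and takes ownership     */
}

int main(void)
{
    inject_watch w = { 0x2000, true };     /* armed by the instruction      */
    if (snoop_bus_write(&w, 0x2040))       /* same 128-byte block           */
        printf("I/O data claimed by the cache\n");
    return 0;
}
```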