Abstract:
A method and circuit for implementing variable length packets to embed extra control information in an interconnect system, and a design structure on which the subject circuit resides, are provided. Packets are defined to include an End-to-End (ETE) flit (Flow Unit within packet) count field in the packet header. The packet header also includes its own CRC field. When a nonzero ETE flit count field is received in an incoming packet from an incoming link, the specified number of embedded ETE flits is removed from the packet, and the flits are used just as if the control information had arrived in its own packet.
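As a rough illustration of the receive-side handling described above, the Python sketch below checks a header CRC, reads an ETE flit count, strips that many embedded control flits from the packet body, and hands them to the same handler that a stand-alone control packet would use. The field layout, the 16-byte flit width, and the CRC computation are assumptions made for the example, not details taken from the abstract.

```python
# Minimal sketch of receive-side handling for embedded ETE control flits.
# Field layout, flit size, and the CRC used here are illustrative assumptions.
import struct
import zlib

FLIT_BYTES = 16  # assumed flit width

def parse_header(raw: bytes):
    """Assumed header layout: ete_flit_count, payload_flit_count, crc16."""
    ete_count, payload_count, crc = struct.unpack_from(">BBH", raw, 0)
    # Here the header CRC covers only the two count bytes (illustrative).
    if (zlib.crc32(raw[:2]) & 0xFFFF) != crc:
        return None  # bad header CRC: drop and rely on link-level recovery
    return ete_count, payload_count

def receive_packet(raw: bytes, handle_control_flit, handle_payload):
    hdr = parse_header(raw)
    if hdr is None:
        return
    ete_count, payload_count = hdr
    body = raw[4:]
    # Embedded ETE flits are removed and handled exactly as if each had
    # arrived in its own control packet.
    for i in range(ete_count):
        handle_control_flit(body[i * FLIT_BYTES:(i + 1) * FLIT_BYTES])
    payload = body[ete_count * FLIT_BYTES:]
    handle_payload(payload[:payload_count * FLIT_BYTES])

if __name__ == "__main__":
    counts = struct.pack(">BB", 1, 1)                       # 1 control + 1 payload flit
    hdr = counts + struct.pack(">H", zlib.crc32(counts) & 0xFFFF)
    pkt = hdr + bytes(FLIT_BYTES) + bytes(FLIT_BYTES)
    receive_packet(pkt,
                   lambda f: print("control flit,", len(f), "bytes"),
                   lambda p: print("payload,", len(p), "bytes"))
```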
Abstract:
A method and circuit for implementing ordered and reliable transfer of packets while spraying the packets over multiple links, and a design structure on which the subject circuit resides, are provided. For each destination chip, each source interconnect chip maintains a spray mask identifying the multiple available links over which packets are sprayed across the local rack interconnect system. Each packet is assigned an End-to-End (ETE) sequence number in the source interconnect chip that represents the packet's position in the ordered packet stream from the source device. The destination interconnect chip uses the ETE sequence numbers to reorder the received sprayed packets into the correct order before sending them to the destination device.
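The sketch below illustrates the two halves of this scheme under simplified assumptions: a source side that round-robins packets over the links named in a per-destination spray mask while stamping each with an ETE sequence number, and a destination side that holds out-of-order arrivals until they can be delivered in sequence. The round-robin spray policy and the unbounded reorder buffer are illustrative choices, not details from the abstract.

```python
# Sketch of spraying with per-destination spray masks and in-order delivery
# via ETE sequence numbers.
import itertools

class SprayingSource:
    def __init__(self, spray_masks):
        self.spray_masks = spray_masks  # dest chip id -> list of usable link ids
        self.next_seq = {}              # dest chip id -> next ETE sequence number
        self.rr = {}                    # dest chip id -> round-robin link iterator

    def send(self, dest, payload, transmit):
        seq = self.next_seq.get(dest, 0)
        self.next_seq[dest] = seq + 1
        it = self.rr.setdefault(dest, itertools.cycle(self.spray_masks[dest]))
        transmit(next(it), {"dest": dest, "ete_seq": seq, "data": payload})

class ReorderingDestination:
    def __init__(self, deliver):
        self.deliver = deliver          # callback toward the destination device
        self.expected = 0               # next ETE sequence number to deliver
        self.pending = {}               # out-of-order packets keyed by sequence

    def receive(self, pkt):
        self.pending[pkt["ete_seq"]] = pkt
        # Release packets to the destination device strictly in order.
        while self.expected in self.pending:
            self.deliver(self.pending.pop(self.expected)["data"])
            self.expected += 1

if __name__ == "__main__":
    wire = []                           # (link id, packet) pairs captured in flight
    src = SprayingSource({7: [0, 1, 2]})
    for n in range(5):
        src.send(7, f"msg{n}", lambda link, pkt: wire.append((link, pkt)))
    dst = ReorderingDestination(print)
    for _, pkt in reversed(wire):       # packets arrive out of order
        dst.receive(pkt)                # printed back in original order
```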
Abstract:
A method and circuit for implementing multiple active paths between source and destination devices in an interconnect system while removing ghost packets, and a design structure on which the subject circuit resides, are provided. Each packet includes a generation ID and is assigned an End-to-End (ETE) sequence number in the source interconnect chip that represents the packet's position in the ordered packet stream from the source device. The packets are transmitted from the source interconnect chip to a destination interconnect chip over the multiple active paths. The generation ID of a received packet is compared with a current generation ID at the destination interconnect chip to validate packet acceptance. The destination interconnect chip uses the ETE sequence numbers to reorder the accepted packets into the correct order before sending them to the destination device.
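A possible shape of the ghost-packet filter is sketched below: packets whose generation ID does not match the destination's current generation are discarded before they ever reach the ETE reorder step (for which the ReorderingDestination sketched earlier could be reused). The single-field equality check and the explicit bump_generation call are simplifying assumptions for illustration.

```python
# Ghost-packet filtering with a generation ID, placed in front of the ETE
# reorder step.  Comparing against a single current generation ID is an
# illustrative simplification of the acceptance check in the abstract.
class GenerationFilter:
    def __init__(self, current_generation: int):
        self.current_generation = current_generation

    def bump_generation(self, new_generation: int) -> None:
        # Called when paths are re-established; packets still in flight from
        # the old generation become "ghosts" and must not be accepted.
        self.current_generation = new_generation

    def accept(self, pkt) -> bool:
        return pkt["generation_id"] == self.current_generation

def receive(pkt, gen_filter, reorderer) -> None:
    if not gen_filter.accept(pkt):
        return                  # discard ghost packet from a stale generation
    reorderer.receive(pkt)      # then reorder by ETE sequence number as before
```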
Abstract:
A method and circuit for implementing enhanced link bandwidth for a headless interconnect chip in a local rack interconnect system, and a design structure on which the subject circuit resides, are provided. The headless interconnect chip includes a cut-through switch and a store-and-forward switch. A packet is received from an incoming link to be transmitted on an outgoing link of the headless interconnect chip. Both the cut-through switch and the store-and-forward switch are selectively used to move packets received from the incoming link to the outgoing link on the headless interconnect chip.
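One way the selection between the two switches might look is sketched below, assuming hypothetical cut_through_switch, store_forward_switch, and link objects: a packet cuts through when the outgoing link can accept it immediately, and is buffered by the store-and-forward switch otherwise. The specific selection criterion is an assumption for illustration; the abstract states only that both switches are selectively used.

```python
# Sketch of the forwarding decision in a headless interconnect chip that
# contains both a cut-through switch and a store-and-forward switch.
class HeadlessForwarder:
    def __init__(self, cut_through_switch, store_forward_switch):
        self.cut_through = cut_through_switch
        self.store_forward = store_forward_switch

    def on_incoming(self, packet, out_link):
        if out_link.is_idle() and not self.store_forward.has_queued(out_link):
            # The outgoing link is free and nothing older is waiting: cut
            # through, avoiding the latency of a full store-and-forward pass.
            self.cut_through.forward(packet, out_link)
        else:
            # Otherwise buffer the packet and let the store-and-forward
            # switch transmit it when the outgoing link becomes available.
            self.store_forward.enqueue(packet, out_link)
```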
Abstract:
In a shared memory architecture, early coherency indication is used to notify a communications interface, before the data for a memory request is returned and before the coherency directory is updated in response to the memory request, that the return data can be used by the communications interface as soon as it is received from the source of the return data. By doing so, the communications interface can often begin forwarding the return data over its associated communication link with little or no latency once the data is retrieved from its source. In addition, the communications interface often no longer needs to wait for the coherency directory update to complete before forwarding the return data over the communication link. As such, the overall latency for handling the memory request is typically reduced.
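The ordering the abstract describes can be summarized in the sketch below: the memory controller signals the communications interface that the return data will be usable before that data arrives and before the coherency directory update completes, so the interface can forward the data as soon as it shows up. All class and method names here are placeholders invented for the example.

```python
# Sketch of early coherency indication in a shared-memory node.  The point is
# only the ordering: the interface is told the return data is usable (step 1)
# before the directory update completes (step 2) and before the data itself
# arrives from its source (step 3).
class MemoryController:
    def __init__(self, comm_interface, coherency_directory):
        self.comm = comm_interface
        self.directory = coherency_directory

    def handle_request(self, request):
        # 1. From the directory lookup, the controller already knows the data
        #    the source will return is coherent, so it tells the interface
        #    right away that the data may be forwarded on arrival.
        self.comm.early_coherency_indication(request.tag)

        # 2. Start the directory update; the interface no longer waits for it
        #    before forwarding the return data over its link.
        self.directory.begin_update(request)

        # 3. When the data is later returned by its source, the interface can
        #    forward it immediately with little or no added latency.
        data = request.source.read(request.address)
        self.comm.deliver_return_data(request.tag, data)
```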
Abstract:
A data processing system, circuit arrangement, integrated circuit device, program product, and method utilize a data buffer with a priority-based data storage capability to handle incoming data from a plurality of available data sources. With such a capability, different relative priority levels are assigned to data associated with different data sources. These priority levels are then used by control logic coupled to the buffer to determine whether incoming data is stored in the buffer or discarded. In particular, the relative priority of incoming data is compared with that of the data currently stored in the buffer, and the incoming data is stored in the buffer only when its relative priority exceeds that of the currently stored data.
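A minimal sketch of such a priority-gated buffer follows, assuming a single-entry buffer and per-source priority levels: incoming data is kept only when its priority exceeds that of the data already stored, and is discarded otherwise.

```python
# Priority-gated data buffer: incoming data displaces the stored entry only
# when its priority level (assigned per data source) exceeds the priority of
# what is already buffered.  The single-entry buffer and strict ">" test are
# illustrative assumptions.
class PriorityBuffer:
    def __init__(self, source_priorities):
        self.source_priorities = source_priorities  # source id -> priority level
        self.stored = None                          # (priority, data) or None

    def offer(self, source_id, data) -> bool:
        prio = self.source_priorities[source_id]
        if self.stored is None or prio > self.stored[0]:
            self.stored = (prio, data)   # accept: overwrite lower-priority data
            return True
        return False                     # discard: buffer holds higher priority

    def drain(self):
        entry, self.stored = self.stored, None
        return None if entry is None else entry[1]

if __name__ == "__main__":
    buf = PriorityBuffer({"sensor": 1, "alarm": 3})
    print(buf.offer("sensor", "reading"))    # True: buffer was empty
    print(buf.offer("sensor", "reading2"))   # False: equal priority is discarded
    print(buf.offer("alarm", "overheat"))    # True: higher priority displaces
    print(buf.drain())                       # "overheat"
```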