Abstract:
Transaction data is identified and a flit is generated to include three or more slots and a floating field to be used as an extension of any one of two or more of the slots. In another aspect, the flit is to include two or more slots, a payload, and a cyclic redundancy check (CRC) field to be encoded with a 16-bit CRC value generated based on the payload. The flit is sent over a serial data link to a device for processing, based at least in part on the three or more slots.
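For illustration only, the sketch below models a flit with three fixed slots, a floating field that can extend either of two slots, and a 16-bit CRC computed over the payload bytes. The slot widths, the floating-field size, and the CRC-16-CCITT polynomial are assumptions; the abstract does not specify them.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative flit: three slots, a floating field that can extend
 * either slot 0 or slot 1, and a 16-bit CRC over the payload bytes.
 * All field widths here are assumptions, not the patented format. */
typedef struct {
    uint8_t  slot[3][8];     /* three slots, assumed 8 bytes each        */
    uint8_t  floating[2];    /* floating field usable by slot 0 or 1     */
    uint8_t  float_owner;    /* which slot the floating field extends    */
    uint16_t crc;            /* 16-bit CRC over the preceding bytes      */
} flit_t;

/* CRC-16-CCITT (polynomial 0x1021), chosen only for illustration. */
static uint16_t crc16(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Build a flit: fill the slots (and, if needed, the floating field)
 * from the payload, then encode the CRC over everything before it. */
static void flit_build(flit_t *f, const uint8_t *payload, size_t len,
                       uint8_t float_owner)
{
    size_t cap = sizeof(f->slot) + sizeof(f->floating);
    size_t n = len < cap ? len : cap;

    memset(f, 0, sizeof(*f));
    memcpy(f->slot, payload, n < sizeof(f->slot) ? n : sizeof(f->slot));
    if (n > sizeof(f->slot))
        memcpy(f->floating, payload + sizeof(f->slot), n - sizeof(f->slot));
    f->float_owner = float_owner;
    f->crc = crc16((const uint8_t *)f, offsetof(flit_t, crc));
}

int main(void)
{
    uint8_t payload[26];
    for (int i = 0; i < 26; i++)
        payload[i] = (uint8_t)i;

    flit_t f;
    flit_build(&f, payload, sizeof(payload), 1); /* floating extends slot 1 */
    printf("flit CRC-16 = 0x%04X\n", f.crc);
    return 0;
}
```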
Abstract:
A physical layer (PHY) is coupled to a serial, differential link that is to include a number of lanes. The PHY includes a transmitter and a receiver to be coupled to each lane of the number of lanes. The transmitter coupled to each lane is configured to embed a clock with data to be transmitted over the lane, and the PHY periodically issues a blocking link state (BLS) request to cause an agent to enter a BLS to hold off link layer flit transmission for a duration. The PHY utilizes the serial, differential link during the duration for a PHY-associated task selected from a group including an in-band reset, an entry into a low power state, and an entry into a partial width state.
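As an informal sketch of the BLS flow above, the model below blocks link-layer flit transmission for a window during which the PHY runs one of the listed tasks. The task names, the request period, and the window length are assumptions made for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative blocking link state (BLS) flow: the PHY periodically
 * asks the agent to hold off link-layer flits so that it can run one
 * PHY-associated task during the blocked window. */
typedef enum { PHY_TASK_INBAND_RESET, PHY_TASK_ENTER_LOW_POWER,
               PHY_TASK_ENTER_PARTIAL_WIDTH } phy_task_t;

typedef struct {
    unsigned cycle;           /* free-running cycle counter               */
    unsigned bls_period;      /* cycles between BLS requests (assumed)    */
    unsigned bls_duration;    /* cycles the link layer is held off        */
    unsigned bls_remaining;   /* nonzero while the BLS window is open     */
} phy_t;

/* Called once per cycle; returns true while link-layer flit
 * transmission must be held off. */
static bool phy_tick(phy_t *p, phy_task_t task)
{
    if (p->bls_remaining) {           /* BLS window open: PHY owns link  */
        p->bls_remaining--;
        return true;
    }
    if (++p->cycle % p->bls_period == 0) {
        /* Periodic BLS request: block flits and start the PHY task. */
        p->bls_remaining = p->bls_duration;
        switch (task) {
        case PHY_TASK_INBAND_RESET:        puts("in-band reset");       break;
        case PHY_TASK_ENTER_LOW_POWER:     puts("enter low power");     break;
        case PHY_TASK_ENTER_PARTIAL_WIDTH: puts("enter partial width"); break;
        }
        return true;
    }
    return false;                     /* link layer may send flits       */
}

int main(void)
{
    phy_t phy = { .cycle = 0, .bls_period = 100, .bls_duration = 4,
                  .bls_remaining = 0 };
    unsigned blocked = 0;
    for (int i = 0; i < 1000; i++)
        if (phy_tick(&phy, PHY_TASK_ENTER_PARTIAL_WIDTH))
            blocked++;
    printf("blocked %u of 1000 cycles\n", blocked);
    return 0;
}
```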
Abstract:
A method and apparatus for preserving memory ordering in a cache-coherent, link-based interconnect in light of partial and non-coherent memory accesses is herein described. In one embodiment, a partial memory access, such as a partial read, is implemented utilizing a Read Invalidate and/or Snoop Invalidate message. When a peer node receives a Snoop Invalidate message referencing data from a requesting node, the peer node is to invalidate a cache line associated with the data and is not to directly forward the data to the requesting node. In one embodiment, when the peer node holds the referenced cache line in a Modified coherency state, in response to receiving the Snoop Invalidate message, the peer node is to write back the data to a home node associated with the data.
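The peer-node behavior described above can be sketched as follows. This is an illustration only; the message and state names beyond Snoop Invalidate and Modified are assumptions.

```c
#include <stdio.h>

/* Illustrative handling of a Snoop Invalidate at a peer node:
 * invalidate the line, never forward data directly to the requester,
 * and write back to the home node first if the line is Modified. */
typedef enum { LINE_INVALID, LINE_SHARED, LINE_EXCLUSIVE, LINE_MODIFIED } state_t;

typedef struct {
    state_t  state;
    unsigned data;
} cache_line_t;

static void writeback_to_home(unsigned addr, unsigned data)
{
    /* Dirty data is returned to the home node, preserving ordering. */
    printf("writeback to home: addr=0x%x data=0x%x\n", addr, data);
}

/* Peer node receives a Snoop Invalidate for addr from a requesting node. */
static void on_snoop_invalidate(cache_line_t *line, unsigned addr)
{
    if (line->state == LINE_MODIFIED)
        writeback_to_home(addr, line->data);

    line->state = LINE_INVALID;   /* always invalidate the local copy */
    /* Note: no direct cache-to-cache forward to the requesting node. */
}

int main(void)
{
    cache_line_t line = { LINE_MODIFIED, 0xBEEF };
    on_snoop_invalidate(&line, 0x1000);
    printf("line state after Snoop Invalidate: %d (0 = Invalid)\n", line.state);
    return 0;
}
```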
Abstract:
In one embodiment, the present invention includes a processor that can generate data packets for transmission to an agent, where the processor can generate a data packet having a command portion including a first operation code to encode a transaction type for the data packet and a second operation code to encode a processor-specific operation. This second operation code can encode many different features, such as an indication that the data packet is of a smaller size than a standard packet, in order to reduce bandwidth consumption. This operation code can also identify an operation to be performed by a destination agent coupled to the agent. Other embodiments are described and claimed.
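A minimal sketch of a command portion carrying the two operation codes follows. The bit positions, code values, and the example processor-specific operations are assumptions; the abstract does not define a concrete encoding.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative command encoding with two operation codes: the first
 * names the transaction type, the second carries a processor-specific
 * operation such as "small packet" or a destination-agent operation. */
enum txn_type { TXN_READ = 0x1, TXN_WRITE = 0x2, TXN_SNOOP = 0x3 };
enum proc_op  { OP_NONE = 0x0, OP_SMALL_PACKET = 0x1, OP_DEST_AGENT_OP = 0x2 };

/* Pack both opcodes into a 16-bit command portion (assumed layout):
 * bits [7:0]  = first opcode  (transaction type)
 * bits [15:8] = second opcode (processor-specific operation)          */
static uint16_t cmd_encode(enum txn_type t, enum proc_op op)
{
    return (uint16_t)(((op & 0xFF) << 8) | (t & 0xFF));
}

static void cmd_decode(uint16_t cmd, enum txn_type *t, enum proc_op *op)
{
    *t  = (enum txn_type)(cmd & 0xFF);
    *op = (enum proc_op)((cmd >> 8) & 0xFF);
}

int main(void)
{
    /* A read whose second opcode marks it as a reduced-size packet. */
    uint16_t cmd = cmd_encode(TXN_READ, OP_SMALL_PACKET);

    enum txn_type t;
    enum proc_op op;
    cmd_decode(cmd, &t, &op);
    printf("cmd=0x%04X type=%d proc_op=%d\n", cmd, t, op);
    return 0;
}
```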
Abstract:
Techniques for dynamically adjusting point-to-point link width based, at least in part, on link conditions. When operating at full power and at the highest performance level, the full width of a point-to-point link may be utilized to transmit data between the end points. If the link is not fully utilized, a portion (e.g., one half or one quarter) of the link may remain active for communication purposes.
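One way to picture such a width policy is the sketch below, which selects full, half, or quarter width from observed link utilization. The thresholds and the assumption that utilization alone drives the decision are illustrative only.

```c
#include <stdio.h>

/* Illustrative policy for choosing a point-to-point link width from
 * observed utilization: full width when busy, half or quarter width
 * when the link is lightly used.  Thresholds are assumptions. */
typedef enum { WIDTH_FULL = 4, WIDTH_HALF = 2, WIDTH_QUARTER = 1 } link_width_t;

static link_width_t choose_width(double utilization)
{
    if (utilization > 0.50) return WIDTH_FULL;    /* highest performance */
    if (utilization > 0.20) return WIDTH_HALF;    /* keep half the lanes */
    return WIDTH_QUARTER;                         /* mostly idle link    */
}

int main(void)
{
    const double samples[] = { 0.90, 0.35, 0.05 };
    for (int i = 0; i < 3; i++)
        printf("utilization %.2f -> %d/4 width\n",
               samples[i], (int)choose_width(samples[i]));
    return 0;
}
```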
Abstract:
Techniques to cause a point-to-point link between system components to engage in a negotiation process that may lead to the link transitioning from an active state, in which data may be transmitted between system components, to a low power state, in which data may not be transmitted. The negotiation process may occur between each pair of nodes within an electronic system that are interconnected via a point-to-point link. The negotiation may ensure that there are no pending transactions or transactions expected within an upcoming period of time. Through this negotiation each component acknowledges and agrees to transition the link to the low power state.
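The handshake can be sketched as below: each endpoint acknowledges only if it has no pending transactions and expects none soon, and the link leaves the active state only when both sides agree. The two-state model and the field names are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative low power negotiation between two link endpoints. */
typedef enum { LINK_ACTIVE, LINK_LOW_POWER } link_state_t;

typedef struct {
    int  pending_txns;        /* transactions currently outstanding      */
    bool txn_expected_soon;   /* work anticipated in the upcoming window */
} endpoint_t;

/* An endpoint acknowledges the request only if it is idle. */
static bool endpoint_ack_low_power(const endpoint_t *e)
{
    return e->pending_txns == 0 && !e->txn_expected_soon;
}

/* The link transitions only when both endpoints acknowledge. */
static link_state_t negotiate(const endpoint_t *a, const endpoint_t *b)
{
    if (endpoint_ack_low_power(a) && endpoint_ack_low_power(b))
        return LINK_LOW_POWER;
    return LINK_ACTIVE;       /* any pending work keeps the link active  */
}

int main(void)
{
    endpoint_t a = { 0, false };
    endpoint_t b = { 2, false };          /* b still has work in flight  */
    printf("first attempt:  %s\n",
           negotiate(&a, &b) == LINK_LOW_POWER ? "low power" : "active");

    b.pending_txns = 0;                   /* b drains its transactions   */
    printf("second attempt: %s\n",
           negotiate(&a, &b) == LINK_LOW_POWER ? "low power" : "active");
    return 0;
}
```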
Abstract:
Techniques that may utilize generic tracker structures to provide data coherency in a multi-node system that supports non-snoop read and write operations. The trackers may be organized as a two-dimensional queue structure that may be utilized to resolve conflicting read and/or write operations. Multiple queues having differing associated priorities may be utilized.
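One possible reading of the two-dimensional tracker is sketched below: a FIFO queue of tracker entries per priority level, drained highest priority first so that conflicting reads and writes complete in a single, well-defined order. The dimensions, the two priority levels, and the service policy are assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative tracker: one FIFO queue of entries per priority level. */
#define NUM_PRIORITIES 2      /* 0 = high, 1 = low (assumed)             */
#define QUEUE_DEPTH    8

typedef struct {
    bool     write;           /* read or write operation                 */
    unsigned addr;            /* target address                          */
    int      node;            /* requesting node                         */
} tracker_entry_t;

typedef struct {
    tracker_entry_t q[NUM_PRIORITIES][QUEUE_DEPTH];
    int head[NUM_PRIORITIES], tail[NUM_PRIORITIES];
} tracker_t;

static bool tracker_push(tracker_t *t, int prio, tracker_entry_t e)
{
    if (t->tail[prio] - t->head[prio] >= QUEUE_DEPTH)
        return false;                         /* queue full: retry later */
    t->q[prio][t->tail[prio]++ % QUEUE_DEPTH] = e;
    return true;
}

/* Service the oldest entry from the highest priority non-empty queue. */
static bool tracker_pop(tracker_t *t, tracker_entry_t *out)
{
    for (int p = 0; p < NUM_PRIORITIES; p++)
        if (t->head[p] < t->tail[p]) {
            *out = t->q[p][t->head[p]++ % QUEUE_DEPTH];
            return true;
        }
    return false;
}

int main(void)
{
    tracker_t t = { 0 };
    tracker_push(&t, 1, (tracker_entry_t){ false, 0x100, 2 });  /* low  */
    tracker_push(&t, 0, (tracker_entry_t){ true,  0x100, 1 });  /* high */

    tracker_entry_t e;
    while (tracker_pop(&t, &e))
        printf("service node %d %s addr=0x%x\n",
               e.node, e.write ? "write" : "read", e.addr);
    return 0;
}
```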
Abstract:
An apparatus and method are disclosed for allowing a multiprocessor computer system with shared memory distributed among multiple nodes to appear as a single-node environment. The single-node environment is implemented with a memory map that has a unique address for every memory location in the system. Overlapping address spaces in the multinode environment are also assigned unique representative addresses that are translated to actual addresses in conformance with the multinode environment. The apparatus and method allow a wide variety of operating systems to be run on the multinode environment. Additionally, industry-standard BIOS and chip sets can be used.
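The translation step can be pictured with the sketch below: each node's overlapping local range is given a unique representative range in a single system-wide map, and a representative address is translated back to the owning node and its actual local address. The node count, range sizes, and base addresses are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative single-system-wide memory map for a multinode machine. */
#define NUM_NODES 4

typedef struct {
    uint64_t rep_base;        /* unique representative base address        */
    uint64_t local_base;      /* node-local base (may overlap other nodes) */
    uint64_t size;
} node_map_t;

/* Every node exposes the same 1 GiB local range starting at 0, but the
 * representative addresses are stacked so each location is unique. */
static const node_map_t map[NUM_NODES] = {
    { 0x000000000ULL, 0x0, 0x40000000ULL },   /* node 0 */
    { 0x040000000ULL, 0x0, 0x40000000ULL },   /* node 1 */
    { 0x080000000ULL, 0x0, 0x40000000ULL },   /* node 2 */
    { 0x0C0000000ULL, 0x0, 0x40000000ULL },   /* node 3 */
};

/* Translate a unique representative address to the owning node and the
 * actual node-local address expected by that node's memory. */
static int translate(uint64_t rep_addr, uint64_t *local_addr)
{
    for (int n = 0; n < NUM_NODES; n++)
        if (rep_addr >= map[n].rep_base &&
            rep_addr <  map[n].rep_base + map[n].size) {
            *local_addr = map[n].local_base + (rep_addr - map[n].rep_base);
            return n;
        }
    return -1;                /* not a mapped address                      */
}

int main(void)
{
    uint64_t local = 0;
    int node = translate(0x0C0001000ULL, &local);
    printf("representative 0x0C0001000 -> node %d, local 0x%llx\n",
           node, (unsigned long long)local);
    return 0;
}
```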
Abstract:
A multiprocessor system that assures forward progress of local processor requests for data by preventing other nodes from accessing the data until the processor request is satisfied. In one aspect of the invention, the local processor requests data through a remote cache interconnect. The remote cache interconnect tells the local processor to retry its request for data at a later time, so that the remote cache interconnect has sufficient time to obtain the data from the system interconnect. When the remote cache interconnect receives the data from the system interconnect, a hold flag is set. Any requests from other nodes for the data are rejected while the hold flag is set. When the local processor issues a retry request, the data is delivered to the processor and the hold flag is cleared. Other nodes may then obtain control of the data.
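The retry and hold-flag sequence described above is sketched below. The response names are assumptions, and the system-interconnect fetch is modeled as completing immediately to keep the example short.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative retry/hold-flag scheme: the remote cache interconnect
 * answers a local request with "retry", fetches the data over the
 * system interconnect, sets a hold flag so other nodes are rejected,
 * and clears the flag once the local processor's retry is satisfied. */
typedef struct {
    bool     have_data;       /* data has arrived from system interconnect */
    bool     hold;            /* reserve the data for the local processor  */
    unsigned data;
} remote_cache_t;

typedef enum { RESP_RETRY, RESP_DATA, RESP_REJECT } resp_t;

/* Local processor request: retried until the fetched data is present. */
static resp_t local_request(remote_cache_t *rc, unsigned *out)
{
    if (!rc->have_data) {
        /* Start the system-interconnect fetch; tell the CPU to retry. */
        rc->have_data = true;   /* modeled as completing immediately      */
        rc->data = 0x1234;
        rc->hold = true;        /* guard the data for the local retry     */
        return RESP_RETRY;
    }
    *out = rc->data;
    rc->hold = false;           /* forward progress made; release hold    */
    return RESP_DATA;
}

/* Requests from other nodes are rejected while the hold flag is set. */
static resp_t remote_request(remote_cache_t *rc, unsigned *out)
{
    if (rc->hold)
        return RESP_REJECT;
    *out = rc->data;
    return RESP_DATA;
}

int main(void)
{
    remote_cache_t rc = { false, false, 0 };
    unsigned v = 0;

    printf("local request #1 -> %s\n",
           local_request(&rc, &v) == RESP_RETRY ? "retry" : "data");
    printf("remote request   -> %s\n",
           remote_request(&rc, &v) == RESP_REJECT ? "rejected (hold set)" : "data");
    printf("local retry      -> %s, data=0x%x\n",
           local_request(&rc, &v) == RESP_DATA ? "data" : "retry", v);
    printf("remote request   -> %s\n",
           remote_request(&rc, &v) == RESP_REJECT ? "rejected" : "data (hold cleared)");
    return 0;
}
```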