Abstract:
A NoDMA cache including a super page field. The super page field indicates when a set of pages contains protected information. The NoDMA cache is used by a computer system to deny I/O device access to protected information in system memory.
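To make the mechanism concrete, below is a minimal C sketch of how such a cache could be consulted on a DMA request. The entry layout, the page-group size, and every name (nodma_entry, nodma_deny_dma) are hypothetical choices for illustration, not the patented design.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal sketch (names and sizes illustrative): a NoDMA cache entry covers a
 * group of pages (a "super page"); the super page field says whether any page
 * in the group is protected, and per-page bits pick out which ones. */
#define PAGE_SIZE            4096ull
#define PAGES_PER_SUPER_PAGE 64
#define NODMA_ENTRIES        4

typedef struct {
    uint64_t super_page_base;        /* physical base of the page group          */
    bool     super_page_protected;   /* super page field: group holds protected data */
    uint64_t page_protected_bits;    /* one protection bit per page in the group */
} nodma_entry;

static nodma_entry nodma_cache[NODMA_ENTRIES];

/* Return true if an I/O-device (DMA) access to 'phys_addr' must be denied. */
bool nodma_deny_dma(uint64_t phys_addr)
{
    uint64_t span = PAGE_SIZE * PAGES_PER_SUPER_PAGE;
    for (int i = 0; i < NODMA_ENTRIES; i++) {
        const nodma_entry *e = &nodma_cache[i];
        if (phys_addr < e->super_page_base || phys_addr >= e->super_page_base + span)
            continue;
        if (!e->super_page_protected)        /* whole group is DMA-safe */
            return false;
        uint64_t page = (phys_addr - e->super_page_base) / PAGE_SIZE;
        return (e->page_protected_bits >> page) & 1;
    }
    return false;   /* cache miss: a real design would fall back to a full lookup */
}

int main(void)
{
    nodma_cache[0] = (nodma_entry){ .super_page_base = 0x100000,
                                    .super_page_protected = true,
                                    .page_protected_bits = 1ull << 3 };  /* page 3 protected */
    printf("access page 3: %s\n", nodma_deny_dma(0x100000 + 3 * PAGE_SIZE) ? "deny" : "allow");
    printf("access page 4: %s\n", nodma_deny_dma(0x100000 + 4 * PAGE_SIZE) ? "deny" : "allow");
    return 0;
}
```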
Abstract:
Embodiments herein relate to a die of a multi-die package, wherein the die is coupled with another die via a die-to-die (D2D) interconnect link. The die may transmit a data signal to the other die via a data lane of the D2D interconnect link. The die may further transmit, concurrently with the data signal, a valid signal to the other die via a valid lane of the D2D interconnect link. The valid signal may change logical state at least once during the transmission of the data signal. Other embodiments may be described and claimed.
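As a rough illustration of the data/valid lane relationship, the C sketch below models a per-cycle trace in which the valid lane is asserted only during payload cycles, so it changes logical state at least once during the transfer. The trace structure, cycle counts, and the d2d_send name are assumptions made for illustration, not the claimed signaling.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-cycle trace of one data lane plus its valid lane. */
typedef struct { uint8_t data, valid; } d2d_cycle;

/* Serialize two bytes with one idle cycle between them, so the valid lane
 * de-asserts and re-asserts while the data signal is being transmitted. */
static int d2d_send(const uint8_t bytes[2], d2d_cycle out[], int max_cycles)
{
    int n = 0;
    for (int b = 0; b < 2; b++) {
        for (int i = 0; i < 8 && n < max_cycles; i++, n++) {
            out[n].data  = (bytes[b] >> i) & 1;
            out[n].valid = 1;                  /* payload cycle */
        }
        if (b == 0 && n < max_cycles) {        /* idle gap: valid changes state */
            out[n].data  = 0;
            out[n].valid = 0;
            n++;
        }
    }
    return n;
}

int main(void)
{
    d2d_cycle trace[32];
    uint8_t payload[2] = { 0xA5, 0x3C };
    int cycles = d2d_send(payload, trace, 32);
    for (int i = 0; i < cycles; i++)
        printf("cycle %2d: data=%u valid=%u\n", i, trace[i].data, trace[i].valid);
    return 0;
}
```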
Abstract:
A first flit is generated according to a first flit format, where a first number of error detection codes are to be provided for an amount of data to be sent in the first flit, and the first flit is to be sent on a link by the transmitter while the link operates with a first link width. The link transitions from a first link width to a second link width, where the second link width is narrower than the first link width. A second flit is generated according to a second flit format based on the transition to the second link width, where the second flit is to be sent while the link operates at the second link width, and the second flit format defines that a second, higher number of error detection codes are to be provided for the same amount of data.
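One way to picture the width-dependent flit formats is a selection function that attaches more error detection codes to the same payload when the link is narrower. The sketch below is illustrative only; the payload size, code counts, and lane thresholds are invented numbers, not the formats defined by the claims.

```c
#include <stdio.h>

/* Hypothetical flit-format selection: the same amount of data carries more
 * error detection codes when the link operates at a narrower width. */
typedef struct {
    unsigned payload_bytes;   /* data carried per flit          */
    unsigned crc_count;       /* error detection codes per flit */
} flit_format;

flit_format select_flit_format(unsigned link_width_lanes)
{
    flit_format f = { .payload_bytes = 128 };
    f.crc_count = (link_width_lanes >= 16) ? 1 : 2;  /* narrower link: more codes */
    return f;
}

int main(void)
{
    flit_format wide   = select_flit_format(16);  /* before the width transition */
    flit_format narrow = select_flit_format(8);   /* after transition to x8      */
    printf("x16: %u bytes, %u CRC code(s)\n", wide.payload_bytes, wide.crc_count);
    printf("x8 : %u bytes, %u CRC code(s)\n", narrow.payload_bytes, narrow.crc_count);
    return 0;
}
```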
Abstract:
In one embodiment, an apparatus includes: a first link layer circuit to perform link layer functionality for a first communication protocol; and a logical physical (logPHY) circuit coupled to the first link layer circuit via a logical PHY interface (LPIF) link, the logPHY circuit to communicate with the first link layer circuit in a flit mode, in which information is communicated in a fixed-width size, and to communicate with another link layer circuit in a non-flit mode. Other embodiments are described and claimed.
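A minimal sketch of the flit-mode versus non-flit-mode distinction at the link-layer/logPHY handoff is shown below. The lpif_transfer structure, the 64-byte flit width, and the lpif_submit function are hypothetical stand-ins for the LPIF interface, not its actual signal set.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

#define FLIT_BYTES 64u

typedef enum { LPIF_FLIT_MODE, LPIF_NONFLIT_MODE } lpif_mode;

typedef struct {
    lpif_mode mode;
    uint8_t   buf[256];
    size_t    len;
} lpif_transfer;

/* Link layer hands data to the logPHY over the (modeled) interface:
 * flit mode requires a fixed-width unit, non-flit mode does not. */
int lpif_submit(lpif_transfer *t, const uint8_t *data, size_t len)
{
    if (t->mode == LPIF_FLIT_MODE && len != FLIT_BYTES)
        return -1;                    /* flit mode: fixed width only */
    if (len > sizeof t->buf)
        return -1;
    memcpy(t->buf, data, len);
    t->len = len;
    return 0;
}

int main(void)
{
    uint8_t flit[FLIT_BYTES] = { 0 };
    lpif_transfer xfer = { .mode = LPIF_FLIT_MODE };
    printf("flit-mode submit: %d\n", lpif_submit(&xfer, flit, sizeof flit));  /* 0  */
    printf("wrong size      : %d\n", lpif_submit(&xfer, flit, 32));           /* -1 */
    return 0;
}
```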
Abstract:
A system and method in which, in response to a first component and a link partner of the first component undergoing equalization, the first component is to communicate a first set of data to the link partner component. The first component may comprise at least one receiver to receive a first set of equalization data. The first component may further comprise coefficient storage coupled to the receiver to store the equalization data, and coefficient logic coupled to the coefficient storage to generate a first set of coefficients based on the first set of equalization data. The first component is to send the first set of coefficients to the link partner component.
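The flow described above can be sketched as: store received equalization samples, run coefficient logic over the storage, and hand the resulting coefficient set to the link partner. The C sketch below does exactly that, with purely illustrative coefficient math and hypothetical names (eq_storage, eq_compute, tx_coefficients).

```c
#include <stdio.h>

#define EQ_SAMPLES 8

typedef struct { int pre, main_tap, post; } tx_coefficients;

typedef struct {
    int samples[EQ_SAMPLES];   /* coefficient storage fed by the receiver */
    int count;
} eq_storage;

void eq_store(eq_storage *s, int sample)
{
    if (s->count < EQ_SAMPLES)
        s->samples[s->count++] = sample;
}

/* Coefficient logic: derive a coefficient set from the stored equalization data
 * (the arithmetic is a placeholder, not a real equalization algorithm). */
tx_coefficients eq_compute(const eq_storage *s)
{
    int sum = 0;
    for (int i = 0; i < s->count; i++)
        sum += s->samples[i];
    int avg = s->count ? sum / s->count : 0;
    tx_coefficients c = { .pre = -avg / 4, .main_tap = 63 - avg / 2, .post = -avg / 4 };
    return c;
}

int main(void)
{
    eq_storage store = { 0 };
    int measured[EQ_SAMPLES] = { 10, 12, 9, 11, 10, 13, 9, 10 };  /* received equalization data */
    for (int i = 0; i < EQ_SAMPLES; i++)
        eq_store(&store, measured[i]);
    tx_coefficients c = eq_compute(&store);
    printf("send to link partner: pre=%d main=%d post=%d\n", c.pre, c.main_tap, c.post);
    return 0;
}
```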
Abstract:
Methods, apparatus, and systems for transporting data units comprising multiple pieces of transaction data over high-speed interconnects. A flow control unit, called a KTI (Keizer Technology Interface) Flit, is implemented in a coherent multi-layer protocol supporting coherent memory transactions. The KTI Flit has a basic format that supports use of configurable fields to implement KTI Flits with specific formats that may be used for corresponding transactions. In one aspect, the KTI Flit may be formatted as multiple slots used to support transfer of multiple respective pieces of transaction data in a single Flit. The KTI Flit can also be configured to support various types of transactions, and multiple KTI Flits may be combined into packets to support transfers of data such as cache lines.
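To illustrate the slotted layout, the sketch below packs several pieces of transaction data into one flit-like structure. The slot count, field widths, and opcodes are invented for the example and do not reflect the actual KTI Flit definition.

```c
#include <stdint.h>
#include <stdio.h>

#define FLIT_SLOTS 3

/* Hypothetical slotted flit: each slot carries one piece of transaction data,
 * and flit-level fields select the specific format and carry the CRC. */
typedef struct {
    uint8_t  opcode;        /* configurable field: what this slot carries */
    uint64_t payload;       /* transaction data for this slot             */
} flit_slot;

typedef struct {
    uint8_t   flit_type;    /* selects the specific flit format */
    uint16_t  crc;
    flit_slot slots[FLIT_SLOTS];
} kti_like_flit;

int main(void)
{
    /* Pack three pieces of transaction data into one flit. */
    kti_like_flit f = {
        .flit_type = 1,
        .slots = {
            { .opcode = 0x2, .payload = 0x1000 },   /* e.g., snoop      */
            { .opcode = 0x5, .payload = 0x2000 },   /* e.g., completion */
            { .opcode = 0x0, .payload = 0 },        /* idle slot        */
        },
    };
    for (int i = 0; i < FLIT_SLOTS; i++)
        printf("slot %d: opcode=0x%x payload=0x%llx\n",
               i, f.slots[i].opcode, (unsigned long long)f.slots[i].payload);
    return 0;
}
```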
Abstract:
An apparatus for pooling memory resources across multiple nodes is described herein. The apparatus includes a shared memory controller, wherein each node of the multiple nodes is connected to the shared memory controller. The apparatus also includes a pool of memory connected to the shared memory controller, wherein a portion of the pool of memory is allocated to each node of the multiple nodes.
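A minimal sketch of the bookkeeping such a shared memory controller might perform is shown below, assuming an equal split of a hypothetical 4 GiB pool across four nodes; the sizes, node count, and smc_partition name are illustrative only.

```c
#include <stdint.h>
#include <stdio.h>

#define NODES      4
#define POOL_BYTES (4ull << 30)   /* 4 GiB pool behind the shared memory controller */

typedef struct { uint64_t base, size; } node_region;

/* Allocate an equal portion of the pool to each attached node. */
void smc_partition(node_region regions[NODES])
{
    uint64_t share = POOL_BYTES / NODES;
    for (int n = 0; n < NODES; n++) {
        regions[n].base = (uint64_t)n * share;
        regions[n].size = share;
    }
}

int main(void)
{
    node_region regions[NODES];
    smc_partition(regions);
    for (int n = 0; n < NODES; n++)
        printf("node %d: base=0x%llx size=%llu MiB\n", n,
               (unsigned long long)regions[n].base,
               (unsigned long long)(regions[n].size >> 20));
    return 0;
}
```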
Abstract:
A method of staggering lanes in a peripheral component interconnect express (PCI-Express) port is described herein. The method includes initiating the port to enter or exit an electrical idle state. The method also includes forwarding a token to a predetermined lane of the port. Additionally, the method includes turning the predetermined lane ON or OFF by indication to an analog circuit interface. The method also includes forwarding the token to a neighboring lane when a staggering interval timer elapses.
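The token-passing behavior can be modeled as a loop that switches one lane at a time and advances to the neighboring lane each time the stagger interval elapses. In the C sketch below, analog_set_lane, the lane count, and the interval are hypothetical placeholders for the analog circuit interface and the stagger timer.

```c
#include <stdbool.h>
#include <stdio.h>

#define LANES            16
#define STAGGER_INTERVAL 8   /* ticks between lane transitions (illustrative) */

static bool lane_enabled[LANES];

/* Stand-in for the indication to the analog circuit interface. */
static void analog_set_lane(int lane, bool on)
{
    lane_enabled[lane] = on;
    printf("lane %2d -> %s\n", lane, on ? "ON" : "OFF");
}

/* Stagger all lanes ON, starting from the predetermined lane. */
void stagger_lanes_on(int first_lane)
{
    int token = first_lane;
    for (int done = 0; done < LANES; done++) {
        analog_set_lane(token, true);
        for (volatile int t = 0; t < STAGGER_INTERVAL; t++)
            ;                                     /* model the stagger interval timer */
        token = (token + 1) % LANES;              /* forward the token to the neighbor */
    }
}

int main(void)
{
    stagger_lanes_on(0);
    return 0;
}
```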
Abstract:
Methods and apparatus relating to fast deskew when exiting a low-power partial-width high speed link state are described. In one embodiment, an exit flit on active lanes and/or a wake signal/sequence on idle lanes may be transmitted at a first point in time to cause one or more idle lanes of a link to enter an active state. At a second point in time (following or otherwise subsequent to the first point in time), training sequences are transmitted over the one or more idle lanes of the link. The one or more idle lanes are then deskewed in response to the training sequences and prior to a third point in time (following or otherwise subsequent to the second point in time). Other embodiments are also disclosed and claimed.
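As a sketch of the deskew step itself, the code below assumes each idle lane reports the cycle on which it detected the training sequence and pads the earlier lanes to match the latest one. The lane count and the detection-cycle representation are assumptions made for illustration, not the disclosed mechanism.

```c
#include <stdint.h>
#include <stdio.h>

#define IDLE_LANES 4

/* Compute the per-lane delay that aligns all woken lanes to the slowest one. */
void deskew(const uint32_t detect_cycle[IDLE_LANES], uint32_t delay_out[IDLE_LANES])
{
    uint32_t latest = 0;
    for (int i = 0; i < IDLE_LANES; i++)
        if (detect_cycle[i] > latest)
            latest = detect_cycle[i];
    for (int i = 0; i < IDLE_LANES; i++)
        delay_out[i] = latest - detect_cycle[i];   /* pad early lanes to the slowest one */
}

int main(void)
{
    uint32_t detected[IDLE_LANES] = { 102, 100, 103, 101 };  /* training-sequence arrival cycles */
    uint32_t delay[IDLE_LANES];
    deskew(detected, delay);
    for (int i = 0; i < IDLE_LANES; i++)
        printf("lane %d: add %u cycle(s) of delay\n", i, delay[i]);
    return 0;
}
```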
Abstract:
Method, apparatus, and systems employing dictionary-based high-bandwidth lossless compression. A pair of dictionaries having entries that are synchronized and encoded to support compression and decompression operations is implemented via logic at a compressor and decompressor. The compressor/decompressor logic operates in a cooperative manner, including implementing the same dictionary update schemes, resulting in the data in the respective dictionaries being synchronized. The dictionaries are also configured with replaceable entries, and replacement policies are implemented based on matching bytes of data within sets of data being transferred over the link. Various schemes are disclosed for entry replacement, as well as a delayed dictionary update technique. The techniques support line-speed compression and decompression using parallel operations, resulting in substantially no latency overhead.
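The synchronized-dictionary idea can be sketched as a compressor and decompressor that keep identical tables and apply the same replacement rule, so a transmitted index refers to the same entry on both sides. The table size, hashing, and last-writer-wins replacement in the C sketch below are illustrative choices, not the disclosed replacement policies or the delayed-update technique.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DICT_ENTRIES 16
#define WORD_BYTES   4

typedef struct { uint8_t words[DICT_ENTRIES][WORD_BYTES]; } dict;

/* Same placement rule on both ends keeps the tables synchronized. */
static unsigned dict_slot(const uint8_t *w)
{
    return (w[0] ^ w[1] ^ w[2] ^ w[3]) % DICT_ENTRIES;
}

/* Compress one word: report a hit (send index only) or a miss (send literal),
 * and update the dictionary identically in both cases. */
int compress_word(dict *d, const uint8_t *w, unsigned *index_out)
{
    unsigned slot = dict_slot(w);
    int hit = memcmp(d->words[slot], w, WORD_BYTES) == 0;
    memcpy(d->words[slot], w, WORD_BYTES);          /* replacement: last writer wins */
    *index_out = slot;
    return hit;
}

/* Decompress: the decoder applies the same update, keeping dictionaries in sync. */
void decompress_word(dict *d, int hit, unsigned index, const uint8_t *literal, uint8_t *out)
{
    if (hit) {
        memcpy(out, d->words[index], WORD_BYTES);
    } else {
        memcpy(out, literal, WORD_BYTES);
        memcpy(d->words[index], literal, WORD_BYTES);
    }
}

int main(void)
{
    dict tx = { 0 }, rx = { 0 };
    const uint8_t data[2][WORD_BYTES] = { { 'a', 'b', 'c', 'd' }, { 'a', 'b', 'c', 'd' } };
    for (int i = 0; i < 2; i++) {
        unsigned idx;
        uint8_t out[WORD_BYTES];
        int hit = compress_word(&tx, data[i], &idx);
        decompress_word(&rx, hit, idx, data[i], out);
        printf("word %d: %s, index %u\n", i, hit ? "hit (index only)" : "miss (literal)", idx);
    }
    return 0;
}
```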