Abstract:
Technologies for a unified debug and test architecture for chiplets are disclosed. In an illustrative embodiment, several chiplets are integrated on an integrated circuit package. The chiplets are connected by a package interconnect, such as a universal chiplet interconnect express (UCIe) interconnect. Each chiplet includes several debug nodes, which are connected by an on-chiplet network. One of the chiplets, referred to as a package debug endpoint, acts as a link endpoint for an off-package link, such as a peripheral component interconnect express (PCIe) link. In use, debug messages can be sent to the package debug endpoint over the PCIe link. The debug messages can be routed within the chiplets and between chiplets, allowing the debug functionality at each debug node to be probed using a common protocol. In this manner, chiplets from different vendors can be integrated into the same package and tested using common software.
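
For illustration, a minimal C sketch of the routing step described above follows: a package debug endpoint receives a debug message over the off-package link and forwards it to the addressed debug node on the addressed chiplet. The structure layout, field names, and handler are hypothetical and are not drawn from the disclosure.

/* Minimal sketch of routing a debug message from an off-package link to a
 * debug node on a target chiplet. All names and fields are hypothetical. */
#include <stdio.h>

#define NUM_CHIPLETS    4
#define NODES_PER_CHIP  8

struct debug_msg {
    unsigned chiplet_id;   /* which chiplet on the package interconnect */
    unsigned node_id;      /* which debug node on that chiplet's network */
    unsigned payload;      /* opaque debug command or data */
};

/* Hypothetical per-node handler: probe the node and report its state. */
static void debug_node_handle(unsigned chiplet, unsigned node, unsigned payload)
{
    printf("chiplet %u, node %u: handled debug payload 0x%x\n",
           chiplet, node, payload);
}

/* The package debug endpoint receives messages over the off-package link
 * (e.g. PCIe) and forwards them over the package interconnect (e.g. UCIe)
 * to the addressed chiplet, which routes them on its on-chiplet network. */
static int package_endpoint_route(const struct debug_msg *msg)
{
    if (msg->chiplet_id >= NUM_CHIPLETS || msg->node_id >= NODES_PER_CHIP)
        return -1;  /* unknown destination */
    debug_node_handle(msg->chiplet_id, msg->node_id, msg->payload);
    return 0;
}

int main(void)
{
    struct debug_msg msg = { .chiplet_id = 2, .node_id = 5, .payload = 0xA5 };
    return package_endpoint_route(&msg) == 0 ? 0 : 1;
}
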
Abstract:
Embodiments herein relate to a universal chiplet interconnect express (UCIe) link that includes a mainband and a sideband. One or more pieces of logic may identify data that is to be transmitted on the sideband. The logic may then identify, based on factors such as a characteristic of the data or a characteristic of the link, whether to transmit the data on the mainband instead. Other embodiments may be described and/or claimed.
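
As a sketch of the decision described above, the following C code chooses between sideband and mainband based on a characteristic of the data and of the link. The fields, thresholds, and policy are hypothetical illustrations, not part of any UCIe specification.

/* Minimal sketch: given data queued for the UCIe sideband, decide whether
 * to transmit it on the mainband instead. All criteria are hypothetical. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct sideband_data {
    size_t length;          /* size of the message in bytes */
    bool   latency_sensitive;
};

struct link_state {
    bool   sideband_busy;   /* sideband currently saturated */
    bool   mainband_up;     /* mainband trained and available */
};

/* Returns true if the data should be redirected to the mainband. */
static bool use_mainband(const struct sideband_data *d, const struct link_state *l)
{
    if (!l->mainband_up)
        return false;                 /* nowhere else to send it */
    if (d->length > 64)
        return true;                  /* large payloads favor the mainband */
    if (d->latency_sensitive && l->sideband_busy)
        return true;                  /* avoid a congested sideband */
    return false;
}

int main(void)
{
    struct sideband_data d = { .length = 128, .latency_sensitive = false };
    struct link_state    l = { .sideband_busy = false, .mainband_up = true };
    printf("transmit on %s\n", use_mainband(&d, &l) ? "mainband" : "sideband");
    return 0;
}
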
Abstract:
A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is herein described. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access to, and ownership of, shared data. Loose transaction ordering is provided for, while maintaining corresponding transaction priority to memory locations, to ensure data integrity and efficient memory access. Active power sub-states, and the setting thereof, are included to allow for more efficient power management. Finally, caching of device local memory in a host address space, as well as caching of system memory in a device local memory address space, is provided for to improve bandwidth and latency for memory accesses.
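
To make the caching/prefetching hints concrete, the following C sketch attaches a temporal/locality hint and a prefetch depth to a memory request. The enum values and struct layout are hypothetical and do not correspond to actual PCIe TLP fields.

/* Minimal sketch of a memory request carrying caching and prefetching hints. */
#include <stdint.h>
#include <stdio.h>

enum cache_hint {
    HINT_NONE = 0,
    HINT_TEMPORAL,      /* likely to be reused soon: keep in cache */
    HINT_NON_TEMPORAL,  /* streaming access: bypass or evict early */
};

struct mem_request {
    uint64_t        addr;
    uint32_t        length;
    enum cache_hint hint;
    uint8_t         prefetch_depth;  /* how many lines ahead to prefetch */
};

static void issue_request(const struct mem_request *req)
{
    printf("read 0x%llx len %u hint %d prefetch %u\n",
           (unsigned long long)req->addr, req->length,
           req->hint, (unsigned)req->prefetch_depth);
}

int main(void)
{
    /* Streaming read: non-temporal hint, deep prefetch. */
    struct mem_request req = {
        .addr = 0x1000, .length = 4096,
        .hint = HINT_NON_TEMPORAL, .prefetch_depth = 8,
    };
    issue_request(&req);
    return 0;
}
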
Abstract:
A system may include a graphics processing unit including a command counter. The system may also include a general-purpose processor to: in response to a detection of a timing signal, determine a count value of the command counter included in the graphics processing unit; determine a first threshold range of a plurality of threshold ranges that matches the determined count value of the command counter; select, based on the determined first threshold range, a first configuration profile of a plurality of configuration profiles for the graphics processing unit; and cause the graphics processing unit to use the selected first configuration profile. Other embodiments are described and claimed.
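Pseudocode">

The selection flow described above can be sketched as follows in C: on a timing signal, read the command counter, match it against threshold ranges, and apply the matching configuration profile. The ranges, profile contents, and counter value are hypothetical.

/* Minimal sketch of selecting a GPU configuration profile from a counter value. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct config_profile {
    const char *name;
    unsigned    gpu_freq_mhz;
};

struct threshold_range {
    uint32_t              min_count;   /* inclusive */
    uint32_t              max_count;   /* inclusive */
    struct config_profile profile;
};

static const struct threshold_range ranges[] = {
    {    0,         99, { "idle",        300 } },
    {  100,        999, { "balanced",    800 } },
    { 1000, UINT32_MAX, { "performance", 1500 } },
};

/* Called in response to a timing signal with the current counter value. */
static const struct config_profile *select_profile(uint32_t count)
{
    for (size_t i = 0; i < sizeof ranges / sizeof ranges[0]; i++)
        if (count >= ranges[i].min_count && count <= ranges[i].max_count)
            return &ranges[i].profile;
    return NULL;
}

int main(void)
{
    uint32_t command_count = 450;  /* value read from the GPU command counter */
    const struct config_profile *p = select_profile(command_count);
    if (p)
        printf("applying profile '%s' at %u MHz\n", p->name, p->gpu_freq_mhz);
    return 0;
}
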
Abstract:
Logical memory is divided into two regions. Data in the first region is always retained: the first region is designated online (or powered on) and is not offlined during standby or low-power mode. The second region is the rest of the memory, which can potentially be placed in non-self-refresh mode during standby by offlining that region. Content in the second region can be moved to the first region or can be flushed to another memory managed by the operating system. When the first region does not have enough space to accommodate data from the second region, the operating system can increase the logical size of the first region. Retaining the content of the first region by keeping that region in self-refresh, and saving power in the second region by not putting it in self-refresh, is performed by an improved Partial Array Self Refresh scheme.
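
The standby flow described above is sketched below in C: content of the second (offlinable) region is migrated into the first (always-retained) region, the first region is grown if it lacks space, and the second region can then be left out of self-refresh. The region sizes and the grow step are hypothetical simplifications.

/* Minimal sketch of the two-region standby flow. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct region {
    char  *data;
    size_t used;
    size_t capacity;
};

/* Increase the logical size of the retained region so it can absorb more data. */
static int region_grow(struct region *r, size_t needed)
{
    char *p = realloc(r->data, r->used + needed);
    if (!p)
        return -1;
    r->data = p;
    r->capacity = r->used + needed;
    return 0;
}

static int enter_standby(struct region *retained, struct region *offlinable)
{
    /* Ensure the retained region has room; otherwise grow its logical size. */
    if (retained->capacity - retained->used < offlinable->used &&
        region_grow(retained, offlinable->used) != 0)
        return -1;

    /* Move live content out of the region that will not be kept in self-refresh. */
    memcpy(retained->data + retained->used, offlinable->data, offlinable->used);
    retained->used += offlinable->used;
    offlinable->used = 0;

    printf("second region offlined; first region retains %zu bytes\n",
           retained->used);
    return 0;
}

int main(void)
{
    struct region retained   = { malloc(64), 16, 64 };
    struct region offlinable = { malloc(64), 32, 64 };
    memset(offlinable.data, 0xAB, 32);
    return enter_standby(&retained, &offlinable) == 0 ? 0 : 1;
}
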
Abstract:
Embodiments of systems, apparatuses, and methods for posting interrupts to virtual processors are disclosed. In one embodiment, an apparatus includes look-up logic and posting logic. The look-up logic is to look up, in a first data structure, an entry associated with an interrupt request to a virtual processor. The posting logic is to post the interrupt request in a second data structure specified by information in the first data structure.
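
A minimal C sketch of the two-step flow follows: look up the entry for the targeted virtual processor in a first table, then post the interrupt into the second structure that the entry points to. The layouts are hypothetical and far simpler than a real posted-interrupt descriptor.

/* Minimal sketch of look-up logic and posting logic for posted interrupts. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_VECTORS 256

struct posting_area {
    uint8_t pending[MAX_VECTORS];  /* one flag per interrupt vector */
    uint8_t notify;                /* whether the virtual processor must be notified */
};

struct lookup_entry {
    unsigned             vcpu_id;
    struct posting_area *area;     /* "second" data structure for this virtual processor */
};

/* Look-up logic: find the entry for the requested virtual processor. */
static struct lookup_entry *look_up(struct lookup_entry *table, size_t n,
                                    unsigned vcpu_id)
{
    for (size_t i = 0; i < n; i++)
        if (table[i].vcpu_id == vcpu_id)
            return &table[i];
    return NULL;
}

/* Posting logic: record the interrupt in the structure the entry specifies. */
static void post_interrupt(struct lookup_entry *e, uint8_t vector)
{
    e->area->pending[vector] = 1;
    e->area->notify = 1;
}

int main(void)
{
    struct posting_area  area = { 0 };
    struct lookup_entry  table[] = { { .vcpu_id = 3, .area = &area } };
    struct lookup_entry *e = look_up(table, 1, 3);
    if (e)
        post_interrupt(e, 0x40);
    printf("vector 0x40 pending: %u\n", (unsigned)area.pending[0x40]);
    return 0;
}
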
Abstract:
In one embodiment, the present invention includes a multicore processor having a plurality of cores, a shared cache memory, an integrated input/output (IIO) module to interface between the multicore processor and at least one IO device coupled to the multicore processor, and a caching agent to perform cache coherency operations for the plurality of cores and the IIO module. Other embodiments are described and claimed.
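
To illustrate the role of the caching agent described above, the following C sketch shows a single agent serving coherency requests on behalf of both a core and the integrated I/O (IIO) module. The request types, line states, and handling are hypothetical simplifications.

/* Minimal sketch of a caching agent shared by the cores and the IIO module. */
#include <stdint.h>
#include <stdio.h>

enum requester  { REQ_CORE, REQ_IIO };
enum line_state { INVALID, SHARED, MODIFIED };

struct cache_line {
    uint64_t        addr;
    enum line_state state;
};

/* Caching agent: one point that tracks line state for every requester,
 * whether the access comes from a core or from an attached I/O device. */
static void caching_agent_read(struct cache_line *line, enum requester who)
{
    if (line->state == INVALID)
        line->state = SHARED;   /* fetch from memory and mark shared */
    printf("%s read 0x%llx -> state %d\n",
           who == REQ_CORE ? "core" : "IIO",
           (unsigned long long)line->addr, line->state);
}

int main(void)
{
    struct cache_line line = { .addr = 0x8000, .state = INVALID };
    caching_agent_read(&line, REQ_CORE);  /* core fills the shared cache */
    caching_agent_read(&line, REQ_IIO);   /* IIO hits the same coherent copy */
    return 0;
}
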
Abstract:
Embodiments of apparatuses, methods, and systems for delivering an interrupt to a virtual processor are disclosed. In one embodiment, an apparatus includes an interface to receive an interrupt request, delivery logic, and exit logic. The delivery logic is to determine, based on an attribute of the interrupt request, whether the interrupt request is to be delivered to the virtual processor. The exit logic is to transfer control to a host if the delivery logic determines that the interrupt request is not to be delivered to the virtual processor.
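
The delivery decision described above is sketched below in C: based on an attribute of the incoming interrupt request, either deliver it directly to the virtual processor or exit to the host. The attribute name and the handlers are hypothetical.

/* Minimal sketch of delivery logic and exit logic for virtual-processor interrupts. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct interrupt_request {
    uint8_t vector;
    bool    direct_delivery;   /* attribute: eligible for delivery to the guest */
};

/* Delivery logic: decide whether the virtual processor can take this interrupt. */
static bool can_deliver_to_vcpu(const struct interrupt_request *req)
{
    return req->direct_delivery;
}

/* Exit logic: hand control back to the host when direct delivery is not allowed. */
static void exit_to_host(const struct interrupt_request *req)
{
    printf("VM exit: host handles vector 0x%x\n", (unsigned)req->vector);
}

static void deliver_to_vcpu(const struct interrupt_request *req)
{
    printf("vector 0x%x delivered to the virtual processor\n", (unsigned)req->vector);
}

int main(void)
{
    struct interrupt_request req = { .vector = 0x30, .direct_delivery = false };
    if (can_deliver_to_vcpu(&req))
        deliver_to_vcpu(&req);
    else
        exit_to_host(&req);
    return 0;
}
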