Abstract:
A processor is described comprising: instruction failure logic to perform a plurality of operations in response to a detected instruction execution failure, the instruction failure logic to be used for instructions which have complex failure modes and which are expected to have a failure frequency above a threshold, wherein the operations include: detecting an instruction execution failure and determining a reason for the failure; storing failure data in a destination register to indicate the failure and to specify details associated with the failure; and allowing application program code to read the failure data and take one or more actions responsive to the failure, wherein the instruction failure logic performs its operations without invoking an exception handler or switching to a lower-level domain on a system which employs hierarchical protection domains.
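As an illustration of the register-based failure reporting described above, the following C sketch models an application-level retry loop; the complex_insn stub, the fail_reason codes, and the retry policy are hypothetical stand-ins for the example, not the patent's actual encoding.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical reason codes the failure logic might place in the
 * destination register; the actual encoding is hardware-defined. */
enum fail_reason {
    FAIL_NONE       = 0,
    FAIL_CONTENTION = 1,   /* transient: a retry may succeed    */
    FAIL_BAD_INPUT  = 2,   /* permanent: retrying will not help */
};

/* Software stub standing in for the instruction: it returns a result
 * and writes a failure reason to *status instead of raising a trap. */
static uint64_t complex_insn(uint64_t operand, uint32_t *status)
{
    static int busy = 2;                        /* simulate contention twice */
    if (operand == 0) { *status = FAIL_BAD_INPUT;  return 0; }
    if (busy-- > 0)   { *status = FAIL_CONTENTION; return 0; }
    *status = FAIL_NONE;
    return operand * 2;                         /* dummy result */
}

/* The application inspects the failure data itself and reacts inline,
 * with no exception handler and no switch to a privileged domain. */
static int do_operation(uint64_t operand, uint64_t *out)
{
    for (int attempt = 0; attempt < 8; attempt++) {
        uint32_t status;
        uint64_t result = complex_insn(operand, &status);

        if (status == FAIL_NONE) { *out = result; return 0; }
        if (status == FAIL_CONTENTION) continue;        /* retry in user code */

        fprintf(stderr, "failed, reason=%u\n", status); /* unrecoverable */
        return -1;
    }
    return -1;                                          /* exhausted retries */
}

int main(void)
{
    uint64_t value;
    if (do_operation(21, &value) == 0)
        printf("result = %llu\n", (unsigned long long)value);
    return 0;
}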
Abstract:
An apparatus and method are described for executing program code on both latency-optimized execution logic and throughput-optimized execution logic of a processing device. For example, a processor according to one embodiment comprises: latency-optimized execution logic to execute a first type of program code; throughput-optimized execution logic to execute a second type of program code, wherein the first type of program code and the second type of program code are designed for the same instruction set architecture; and logic to identify the first type of program code and the second type of program code within a process and to distribute the first type of program code for execution on the latency-optimized execution logic and the second type of program code for execution on the throughput-optimized execution logic.
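A minimal C sketch of the routing idea follows, assuming the code regions have already been identified and tagged; the code_kind tags and the dispatch helper are illustrative inventions, and real hardware would steer execution rather than merely print a decision.

#include <stdio.h>

/* Hypothetical tags that a compiler or runtime might attach to code
 * regions; the patent's actual identification mechanism is not shown. */
typedef enum { CODE_LATENCY_SENSITIVE, CODE_THROUGHPUT_PARALLEL } code_kind;

typedef struct {
    const char *name;
    code_kind   kind;
    void      (*entry)(void);
} code_region;

static void serial_phase(void)   { puts("  ...doing branchy, serial work"); }
static void parallel_phase(void) { puts("  ...doing wide, data-parallel work"); }

/* Route a region of one ISA-compatible binary to the matching engine.
 * Only the routing decision is modeled here. */
static void dispatch(const code_region *r)
{
    if (r->kind == CODE_LATENCY_SENSITIVE)
        printf("%s -> latency-optimized cores\n", r->name);
    else
        printf("%s -> throughput-optimized cores\n", r->name);
    r->entry();
}

int main(void)
{
    code_region process[] = {
        { "setup",  CODE_LATENCY_SENSITIVE,   serial_phase   },
        { "kernel", CODE_THROUGHPUT_PARALLEL, parallel_phase },
    };
    for (unsigned i = 0; i < sizeof process / sizeof *process; i++)
        dispatch(&process[i]);
    return 0;
}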
Abstract:
A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), are herein described. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access to, and ownership of, shared data. Loose transaction ordering is provided for while maintaining corresponding transaction priority to memory locations to ensure data integrity and efficient memory access. Active power sub-states, and the setting thereof, are included to allow for more efficient power management. Caching of device local memory in a host address space, as well as caching of system memory in a device local memory address space, is also provided for to improve bandwidth and latency for memory accesses.
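To illustrate only the ownership-arbitration aspect, the following C sketch uses C11 atomics as a stand-in for an interconnect-level atomic operation (such as a compare-and-swap carried in a transaction); the OWNER_* codes and the lock-word layout are assumptions made for the example.

#include <stdatomic.h>
#include <stdio.h>

#define OWNER_NONE   0u
#define OWNER_HOST   1u
#define OWNER_DEVICE 2u

static _Atomic unsigned lock_word = OWNER_NONE;   /* shared ownership word */

/* Try to claim the shared resource for `who`; returns 1 on success.
 * A single compare-and-swap replaces a read-modify-write performed
 * with separate, non-atomic reads and writes across the link. */
static int try_acquire(unsigned who)
{
    unsigned expected = OWNER_NONE;
    return atomic_compare_exchange_strong(&lock_word, &expected, who);
}

static void release(unsigned who)
{
    unsigned expected = who;
    atomic_compare_exchange_strong(&lock_word, &expected, OWNER_NONE);
}

int main(void)
{
    if (try_acquire(OWNER_HOST))
        puts("host owns the shared resource");
    if (!try_acquire(OWNER_DEVICE))
        puts("device sees the resource as busy and backs off");
    release(OWNER_HOST);
    return 0;
}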
Abstract:
A method and apparatus to reduce the idle link power in a platform are described. In one embodiment of the invention, the host and its coupled endpoint(s) in the platform each have a low power idle link state that allows disabling of the high speed link circuitry in both the host and its coupled endpoint(s). This allows the platform to reduce its idle power, as both the host and its coupled endpoint(s) are able to turn off their high speed link circuitry.
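A toy C model of the underlying condition, that the high-speed circuitry is powered down only when both sides of the link are idle, is sketched below; the link_state fields and the update rule are illustrative and do not reflect the real entry/exit handshake.

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool host_idle;
    bool endpoint_idle;
    bool hs_circuitry_on;   /* high-speed link circuitry powered? */
} link_state;

/* Power the high-speed circuitry down only when both sides agree
 * they are idle; restore it as soon as either side becomes active. */
static void update(link_state *l)
{
    bool both_idle = l->host_idle && l->endpoint_idle;
    l->hs_circuitry_on = !both_idle;
}

int main(void)
{
    link_state l = { false, false, true };

    l.host_idle = true;      update(&l);
    printf("host idle only  -> circuitry on: %d\n", l.hs_circuitry_on);

    l.endpoint_idle = true;  update(&l);
    printf("both sides idle -> circuitry on: %d\n", l.hs_circuitry_on);

    l.host_idle = false;     update(&l);   /* traffic resumes */
    printf("host active     -> circuitry on: %d\n", l.hs_circuitry_on);
    return 0;
}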
Abstract:
Embodiments of the invention provide a device having a scheduling interrupt controller, as well as systems and methods thereof. For example, a scheduling interrupt controller and a corresponding method are able to prioritize and schedule interrupt requests according to dynamic system needs. The interrupt controller may schedule a plurality of interrupt requests from a plurality of interrupt sources having respective interrupt latency tolerances, based on one or more timing parameters of the requests, wherein at least one of the timing parameters is responsive to an allowable timespan for processing the requests to avoid latency errors. Other features are described and claimed.
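One way to read the "allowable timespan for processing" is as a deadline, so the C sketch below orders pending requests by earliest deadline; the irq_request fields and the earliest-deadline-first policy are assumptions made for illustration, not the claimed mechanism.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Each pending request carries its arrival time and the longest delay
 * its source can tolerate; the scheduler services the request whose
 * deadline (arrival + tolerance) expires first. */
typedef struct {
    const char *source;
    uint64_t    arrival_us;
    uint64_t    tolerance_us;   /* allowable timespan before a latency error */
} irq_request;

static uint64_t deadline(const irq_request *r)
{
    return r->arrival_us + r->tolerance_us;
}

static int by_deadline(const void *a, const void *b)
{
    uint64_t da = deadline(a), db = deadline(b);
    return (da > db) - (da < db);
}

int main(void)
{
    irq_request pending[] = {
        { "audio DMA", 100, 250  },   /* tight tolerance                */
        { "NIC RX",     80, 1000 },   /* can wait much longer           */
        { "timer",     120, 100  },   /* tightest deadline, goes first  */
    };
    size_t n = sizeof pending / sizeof *pending;

    qsort(pending, n, sizeof *pending, by_deadline);   /* schedule order */
    for (size_t i = 0; i < n; i++)
        printf("%zu: service %-10s (deadline %llu us)\n",
               i, pending[i].source,
               (unsigned long long)deadline(&pending[i]));
    return 0;
}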