Abstract:
Techniques are disclosed relating to memory error tracking and logging. In some embodiments, memory cache controller circuitry is configured to track, using multiple circuit entries, numbers of detected correctable errors associated with multiple respective locations, and, in response to detecting a threshold number of correctable errors for a particular location, to generate a signal to one or more processors that identifies the particular location. In some embodiments, the memory cache controller circuitry includes multiple circuit entries for tracking uncorrectable errors.
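As a rough illustration of the per-location threshold tracking described above, the following C sketch keeps a small table of correctable-error counters and raises a notification once a location crosses a threshold. The table size, threshold value, and the notify_processors() callback are illustrative assumptions, not the disclosed circuitry's actual interface.

    /* Minimal sketch, assuming a fixed table of per-location error counters;
     * names and constants are illustrative, not the patented circuit's interface. */
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_ENTRIES  8
    #define CE_THRESHOLD 4          /* correctable-error threshold per location */

    struct err_entry {
        uint64_t addr;              /* tracked memory location */
        uint32_t ce_count;          /* detected correctable errors */
        int      valid;
    };

    static struct err_entry table[NUM_ENTRIES];

    /* Stand-in for the signal sent to the processor(s). */
    static void notify_processors(uint64_t addr)
    {
        printf("correctable-error threshold reached at 0x%llx\n",
               (unsigned long long)addr);
    }

    /* Record one detected correctable error for a location. */
    void record_correctable_error(uint64_t addr)
    {
        int free_slot = -1;
        for (int i = 0; i < NUM_ENTRIES; i++) {
            if (table[i].valid && table[i].addr == addr) {
                if (++table[i].ce_count >= CE_THRESHOLD)
                    notify_processors(addr);
                return;
            }
            if (!table[i].valid && free_slot < 0)
                free_slot = i;
        }
        if (free_slot >= 0) {       /* start tracking a new location */
            table[free_slot].addr = addr;
            table[free_slot].ce_count = 1;
            table[free_slot].valid = 1;
        }
    }

    int main(void)
    {
        for (int i = 0; i < 5; i++)
            record_correctable_error(0x1000);
        return 0;
    }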
Abstract:
Systems, apparatuses, and methods for efficiently selecting compressors for data compression are described. In various embodiments, a computing system includes at least one processor and multiple codecs, such as one or more hardware codecs and one or more software codecs executable by the processor. The computing system receives a workload and processes instructions, commands, and routines corresponding to the workload. One or more of the tasks in the workload are data compression tasks. Current conditions are determined while the computing system processes the workload, and a condition is determined to be satisfied by comparing selected current characteristics to respective thresholds. In one example, when the compressor selector determines that the difference between a target compression ratio and the expected compression ratio of the first codec exceeds a threshold, the compressor selector switches from hardware codecs to software codecs.
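As a rough sketch of the ratio-gap check described above, the following C snippet switches from a hardware codec to a software codec when the target compression ratio exceeds the hardware codec's expected ratio by more than a threshold. The codec set, the expected_ratio() estimate, and the threshold value are illustrative assumptions.

    /* Minimal sketch of the ratio-gap selection; all values are assumed. */
    #include <stdio.h>

    enum codec { HW_CODEC, SW_CODEC };

    /* Hypothetical estimate of the compression ratio a codec would achieve. */
    static double expected_ratio(enum codec c)
    {
        return (c == HW_CODEC) ? 2.0 : 3.5;
    }

    /* Switch to a software codec when the gap between the target ratio and
     * the hardware codec's expected ratio exceeds the threshold. */
    static enum codec select_codec(double target_ratio, double threshold)
    {
        if (target_ratio - expected_ratio(HW_CODEC) > threshold)
            return SW_CODEC;
        return HW_CODEC;
    }

    int main(void)
    {
        printf("selected: %s\n",
               select_codec(3.0, 0.5) == SW_CODEC ? "software" : "hardware");
        return 0;
    }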
Abstract:
A method and apparatus of a device that manages a thermal profile of the device by selectively throttling central processing unit operations of the device is described. The device monitors the thermal profile of the device while executing a plurality of tasks that utilize a central processing unit of the device. The plurality of tasks includes a high QoS task and a low QoS task. If the thermal profile of the device exceeds a thermal threshold, the device increases a first CPU throttling for the low QoS task and maintains a second CPU throttling for the high QoS task. The device further executes the low QoS task using the first CPU throttling on a first processing core of the CPU by selectively forcing the low QoS task to idle during an execution window. In addition, the device executes the high QoS task using the second CPU throttling on a second processing core of the CPU.
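A minimal C sketch of the QoS-aware policy above: when the thermal level crosses a threshold, only the low QoS task's forced-idle time grows, while the high QoS task's throttling is left unchanged. The threshold, the per-window idle increment, and the task fields are illustrative assumptions.

    /* Minimal sketch of QoS-aware throttling via forced idle; values assumed. */
    #include <stdio.h>

    #define THERMAL_THRESHOLD 80    /* degrees C, assumed */

    struct task {
        const char *name;
        int high_qos;               /* 1 = high QoS, 0 = low QoS */
        int forced_idle_ms;         /* forced idle per execution window */
    };

    /* Raise throttling for low-QoS tasks only when the device is too hot;
     * high-QoS tasks keep their current throttling level. */
    static void apply_thermal_policy(struct task *tasks, int n, int thermal_level)
    {
        if (thermal_level <= THERMAL_THRESHOLD)
            return;
        for (int i = 0; i < n; i++) {
            if (!tasks[i].high_qos)
                tasks[i].forced_idle_ms += 10;   /* increase low-QoS throttling */
        }
    }

    int main(void)
    {
        struct task tasks[] = {
            { "ui",      1, 0 },
            { "indexer", 0, 0 },
        };
        apply_thermal_policy(tasks, 2, 85);
        printf("%s forced idle: %d ms\n", tasks[1].name, tasks[1].forced_idle_ms);
        return 0;
    }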
Abstract:
A data processing system includes, in one embodiment, at least a first processor and a second processor and an interrupt controller, and the system provides a deferred inter-processor interrupt (IPI) that can be used to wake up the second processor from a low power sleep state. The deferred IPI is, in one embodiment, delayed by a timer in the interrupt controller, and the deferred IPI can be cancelled by the first processor if the first processor becomes available to execute a thread that was made runnable by an interrupt which triggered the deferred IPI.
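A minimal C sketch of the deferred, cancellable wake-up described above, with the interrupt-controller timer modeled as a simple deadline check rather than real hardware. The structure and function names are illustrative assumptions, not the disclosed interface.

    /* Minimal sketch of a deferred, cancellable inter-processor wake-up. */
    #include <stdbool.h>
    #include <stdio.h>

    struct deferred_ipi {
        bool     armed;
        unsigned deadline;          /* tick at which the IPI fires */
    };

    /* The first processor schedules a deferred IPI instead of waking the
     * second processor immediately. */
    static void arm_deferred_ipi(struct deferred_ipi *d, unsigned now, unsigned delay)
    {
        d->armed = true;
        d->deadline = now + delay;
    }

    /* If the first processor becomes free to run the woken thread itself,
     * the deferred wake-up is cancelled. */
    static void cancel_deferred_ipi(struct deferred_ipi *d)
    {
        d->armed = false;
    }

    /* Checked each tick; true means the second processor should be woken. */
    static bool deferred_ipi_expired(const struct deferred_ipi *d, unsigned now)
    {
        return d->armed && now >= d->deadline;
    }

    int main(void)
    {
        struct deferred_ipi d = { false, 0 };
        arm_deferred_ipi(&d, 100, 5);
        cancel_deferred_ipi(&d);    /* first processor picked up the thread */
        printf("wake second processor: %s\n",
               deferred_ipi_expired(&d, 110) ? "yes" : "no");
        return 0;
    }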
Abstract:
A method and apparatus of a device that manages system performance by controlling power state based on information related to I/O operations is described. The device collects historical I/O information. The historical I/O information may include the number of I/O operations over a sample period of time and the inter-arrival time between I/O operations. The device further receives information related to a current I/O operation. The information about the current I/O operation may include the direction, size, quality of service, and media type of the I/O operation. The device determines a power state based on the historical I/O information and the information related to the current I/O operation to reduce power consumption while improving system efficiency and maintaining an acceptable level of system performance. The device further applies the determined power state. Other embodiments are also described and claimed.
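A minimal C sketch of choosing a power state from the I/O history together with the current request. The state names, thresholds, and record fields are illustrative assumptions.

    /* Minimal sketch of power-state selection from I/O history; values assumed. */
    #include <stdio.h>

    enum power_state { PSTATE_LOW, PSTATE_MEDIUM, PSTATE_HIGH };

    struct io_history {
        unsigned ops_in_window;        /* I/O operations in the sample period */
        unsigned avg_interarrival_us;  /* average gap between operations */
    };

    struct io_request {
        unsigned size_bytes;
        int      high_qos;             /* quality of service of this request */
    };

    static enum power_state pick_power_state(const struct io_history *h,
                                             const struct io_request *r)
    {
        /* A busy history or a demanding current request keeps the device fast. */
        if (r->high_qos || h->ops_in_window > 1000 || h->avg_interarrival_us < 50)
            return PSTATE_HIGH;
        if (h->ops_in_window > 100)
            return PSTATE_MEDIUM;
        return PSTATE_LOW;             /* sparse I/O: save power */
    }

    int main(void)
    {
        struct io_history h = { 40, 2000 };
        struct io_request r = { 4096, 0 };
        printf("power state: %d\n", pick_power_state(&h, &r));
        return 0;
    }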
Abstract:
A method and apparatus of a device that manages a thermal profile of the device by selectively throttling central processing unit operations of the device is described. The device manages the thermal profile by adjusting the throttling of central processing unit execution of a historically high energy consuming task. In this embodiment, the device monitors a thermal level of the thermal profile of the device while executing a plurality of tasks that utilize a plurality of processing cores of the device. If the thermal level of the device exceeds a thermal threshold, the device identifies one of the plurality of tasks as a historically high energy consuming task and throttles that task by setting a force idle execution time for it. The device further executes the plurality of tasks.
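A minimal C sketch of the policy above: when the thermal level exceeds a threshold, the task with the largest historical energy use is given a forced-idle slice. The energy accounting, threshold, and idle duration are illustrative assumptions.

    /* Minimal sketch of throttling the historically most energy-hungry task. */
    #include <stdio.h>

    #define THERMAL_THRESHOLD 80    /* degrees C, assumed */

    struct task {
        const char *name;
        unsigned    energy_mj;      /* historical energy use, millijoules */
        unsigned    force_idle_us;  /* forced idle per execution window */
    };

    static void throttle_hottest_task(struct task *tasks, int n, int thermal_level)
    {
        if (thermal_level <= THERMAL_THRESHOLD || n == 0)
            return;
        int hottest = 0;
        for (int i = 1; i < n; i++)
            if (tasks[i].energy_mj > tasks[hottest].energy_mj)
                hottest = i;
        tasks[hottest].force_idle_us = 500;   /* assumed forced-idle window */
    }

    int main(void)
    {
        struct task tasks[] = { { "render", 900, 0 }, { "sync", 120, 0 } };
        throttle_hottest_task(tasks, 2, 85);
        printf("%s forced idle: %u us\n", tasks[0].name, tasks[0].force_idle_us);
        return 0;
    }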
Abstract:
A method and apparatus of a device that manages virtual memory for a graphics processing unit is described. In an exemplary embodiment, the device manages a graphics processing unit working set of pages. In this embodiment, the device determines the set of pages of the device to be analyzed, where the device includes a central processing unit and the graphics processing unit. The device additionally classifies the set of pages based on a graphics processing unit activity associated with the set of pages and evicts a page of the set of pages based on the classifying.
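A minimal C sketch of classifying pages by recent GPU activity and evicting the least active one. The activity metric, classification cutoffs, and field names are illustrative assumptions.

    /* Minimal sketch of activity-based page classification and eviction. */
    #include <stdio.h>

    enum gpu_class { GPU_IDLE, GPU_WARM, GPU_ACTIVE };

    struct page {
        unsigned long  vaddr;
        unsigned       gpu_accesses;   /* recent GPU accesses to this page */
        enum gpu_class cls;
    };

    static void classify_pages(struct page *pages, int n)
    {
        for (int i = 0; i < n; i++) {
            if (pages[i].gpu_accesses == 0)      pages[i].cls = GPU_IDLE;
            else if (pages[i].gpu_accesses < 10) pages[i].cls = GPU_WARM;
            else                                 pages[i].cls = GPU_ACTIVE;
        }
    }

    /* Evict the page with the lowest GPU activity class; returns its index. */
    static int evict_one(struct page *pages, int n)
    {
        int victim = 0;
        for (int i = 1; i < n; i++)
            if (pages[i].cls < pages[victim].cls)
                victim = i;
        return victim;
    }

    int main(void)
    {
        struct page set[] = { { 0x1000, 25, 0 }, { 0x2000, 0, 0 }, { 0x3000, 3, 0 } };
        classify_pages(set, 3);
        printf("evict page at 0x%lx\n", set[evict_one(set, 3)].vaddr);
        return 0;
    }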
Abstract:
A method and apparatus of a device that coalesces the execution of several timers by scheduling the timers using a scheduling window is described. The device determines a scheduling window for each of several timers. The device selects a coalesced execution time that is within the scheduling window of the timers. The device coalesces the execution of the timers by scheduling the timers to execute at the coalesced execution time. The device can further coalesce multiple timers by opportunistic execution of the timers. In response to a detection of an opportunistic execution trigger event, the device receives multiple timers. The device selects a subset of the timers to execute based on an initial execution time and a latency time for each of the timers. The device schedules each of the subset of timers to execute during or before the opportunistic execution trigger event.
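A minimal C sketch of the scheduling-window coalescing described above: it intersects the timers' windows and, if the intersection is non-empty, schedules every timer at one shared time. The window representation and the choice of the earliest common time are illustrative assumptions.

    /* Minimal sketch of coalescing timers whose scheduling windows overlap. */
    #include <stdio.h>

    struct timer {
        unsigned earliest;   /* start of the scheduling window */
        unsigned latest;     /* initial execution time plus allowed latency */
        unsigned fire_at;    /* chosen coalesced execution time */
    };

    /* Pick one execution time inside every timer's window, if one exists.
     * Returns 1 on success and writes the shared time into each timer. */
    static int coalesce(struct timer *timers, int n)
    {
        unsigned lo = timers[0].earliest, hi = timers[0].latest;
        for (int i = 1; i < n; i++) {
            if (timers[i].earliest > lo) lo = timers[i].earliest;
            if (timers[i].latest   < hi) hi = timers[i].latest;
        }
        if (lo > hi)
            return 0;                   /* windows do not overlap */
        for (int i = 0; i < n; i++)
            timers[i].fire_at = lo;     /* earliest common time */
        return 1;
    }

    int main(void)
    {
        struct timer t[] = { { 100, 140, 0 }, { 120, 160, 0 }, { 110, 150, 0 } };
        if (coalesce(t, 3))
            printf("coalesced execution time: %u\n", t[0].fire_at);
        return 0;
    }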