Abstract:
A method of controlling a cache is disclosed. The method comprises receiving a request to allocate a portion of memory to store data. The method also comprises directly mapping the portion of memory to an assigned contiguous portion of the cache memory when the request to allocate the portion of memory to store the data includes a cache residency request that the data continuously resides in the cache memory. The method also comprises mapping the portion of memory to the cache memory using associative mapping when the request to allocate the portion of memory to store the data does not include a cache residency request that the data continuously resides in the cache memory.
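For illustration, a minimal C sketch of this allocation decision follows. The names (cache_alloc, allocation_t, CACHE_RESIDENT_BYTES) and the sequential handling of a reserved contiguous cache slice are assumptions made for the sketch, not details taken from the disclosure.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed model: a contiguous slice of the cache is reserved for residency
 * requests and handed out sequentially; all other allocations are left to
 * the cache's normal set-associative placement. */
#define CACHE_RESIDENT_BYTES (256u * 1024u)

static size_t resident_next;              /* next free offset in the reserved slice */

typedef struct {
    uintptr_t mem_base;                   /* base of the allocated memory portion */
    size_t    size;
    bool      cache_resident;             /* true if directly mapped into the reserved slice */
    size_t    cache_offset;               /* offset within the reserved slice, if resident */
} allocation_t;

/* Allocate a portion of memory; if the request carries a cache residency
 * request, direct-map it to an assigned contiguous portion of the cache,
 * otherwise fall back to associative mapping. */
bool cache_alloc(uintptr_t mem_base, size_t size, bool residency_request,
                 allocation_t *out)
{
    out->mem_base = mem_base;
    out->size     = size;

    if (residency_request) {
        if (resident_next + size > CACHE_RESIDENT_BYTES)
            return false;                 /* reserved slice exhausted */
        out->cache_resident = true;
        out->cache_offset   = resident_next;
        resident_next      += size;       /* contiguous, direct assignment */
    } else {
        out->cache_resident = false;      /* placed by associative mapping as usual */
        out->cache_offset   = 0;
    }
    return true;
}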
Abstract:
The described embodiments include a computing device with a plurality of clients and a shared resource for processing job items. During operation, a given client of the plurality of clients stores first job items in a queue for the given client. When the queue for the given client meets one or more conditions, the given client notifies one or more other clients that the given client is to process job items using the shared resource. The given client then processes the first job items from the queue using the shared resource. Based on being notified, at least one other client that has second job items to be processed using the shared resource processes the second job items using the shared resource. The given client can transition the shared resource between power states to enable the processing of job items.
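A minimal C sketch of this notify-and-piggyback flow follows; the fixed client array, the threshold condition, and the helper names (client_flush, resource_set_active, resource_process) are assumptions for illustration only.

#include <stdbool.h>
#include <stddef.h>

#define NUM_CLIENTS 4
#define QUEUE_DEPTH 32

typedef struct {
    int    items[QUEUE_DEPTH];
    size_t count;
    bool   notified;   /* set when another client announces it will use the resource */
} client_t;

static client_t clients[NUM_CLIENTS];

static void resource_set_active(bool active) { (void)active; } /* stub: program resource power state */
static void resource_process(int item)       { (void)item;   } /* stub: submit one item to the resource */

/* Called when a client's queue meets its condition (e.g., a fill threshold).
 * The client powers up the shared resource, notifies the other clients,
 * drains its own queue, and any notified client with pending work drains
 * its queue too while the resource is already active. */
void client_flush(int self)
{
    resource_set_active(true);                      /* transition to an active power state */

    for (int c = 0; c < NUM_CLIENTS; c++)           /* notify the other clients */
        if (c != self)
            clients[c].notified = true;

    for (size_t i = 0; i < clients[self].count; i++)
        resource_process(clients[self].items[i]);   /* first job items */
    clients[self].count = 0;

    for (int c = 0; c < NUM_CLIENTS; c++) {         /* piggybacking clients */
        if (c != self && clients[c].notified && clients[c].count > 0) {
            for (size_t i = 0; i < clients[c].count; i++)
                resource_process(clients[c].items[i]);   /* second job items */
            clients[c].count = 0;
        }
        clients[c].notified = false;
    }

    resource_set_active(false);                     /* back to a low-power state */
}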
Abstract:
Systems, apparatuses, and methods for implementing dynamic control of a multi-region fabric are disclosed. A system includes one or more processing units, one or more memory devices, and a communication fabric coupled to the processing unit(s) and memory device(s). The system partitions the fabric into multiple regions based on the traffic types and/or periodicities of the clients connected to the regions. For example, the system partitions the fabric into a stutter region for predictable, periodic clients and a non-stutter region for unpredictable, non-periodic clients. The system power-gates the entirety of the fabric in response to detecting a low-activity condition. After power-gating the entirety of the fabric, the system periodically wakes up one or more stutter regions while keeping the non-stutter regions in power-gated mode. Each stutter region monitors its stutter client(s) for activity and processes any requests before going back into power-gated mode.
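The stutter-region wake-up cycle might look roughly like the following C sketch; the fabric_region_t structure, the function-pointer hooks, and the tick-driven loop are illustrative assumptions rather than details of the disclosed fabric.

#include <stdbool.h>

typedef enum { REGION_STUTTER, REGION_NON_STUTTER } region_type_t;

typedef struct {
    region_type_t type;
    bool          power_gated;
    bool (*client_has_requests)(void);   /* poll the clients attached to this region */
    void (*service_requests)(void);      /* handle any pending requests */
} fabric_region_t;

/* On a low-activity condition the entire fabric is power-gated. */
void fabric_enter_low_activity(fabric_region_t *regions, int n)
{
    for (int i = 0; i < n; i++)
        regions[i].power_gated = true;
}

/* A periodic tick wakes only the stutter regions, services any pending
 * requests from their periodic clients, and gates them again; non-stutter
 * regions remain power-gated throughout. */
void fabric_stutter_tick(fabric_region_t *regions, int n)
{
    for (int i = 0; i < n; i++) {
        if (regions[i].type != REGION_STUTTER)
            continue;                     /* non-stutter regions stay gated */
        regions[i].power_gated = false;   /* periodic wake-up */
        if (regions[i].client_has_requests())
            regions[i].service_requests();
        regions[i].power_gated = true;    /* back into power-gated mode */
    }
}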
Abstract:
Systems, apparatuses, and methods for aligning active and idle phases of components in a computing system are disclosed. A computing system includes components that can be forced into an active or idle phase and components that cannot be forced into an active or idle phase. The system implements schemes for aligning the active and idle phases of the components within the system. For example, a timer starts counting when a processor and memory subsystem transition from a low-power state to an operational state. If the amount of time the processor and memory subsystem spend in the operational state without returning to the low-power state exceeds a threshold, the system forces active-to-idle and idle-to-active phase transitions of components in the system in order to realign the active and idle phases of the various components within the system.
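A simplified C sketch of this timer-based realignment follows; the 5 ms threshold, the microsecond timestamps, and the force_idle_phase/force_active_phase hooks are assumptions chosen for the example.

#include <stdbool.h>
#include <stdint.h>

#define ALIGN_THRESHOLD_US 5000u          /* assumed threshold for the example */

static uint64_t operational_since_us;     /* timestamp of the last low-power exit */
static bool     in_operational_state;

static void force_idle_phase(void)   { /* stub: force forcible components idle   */ }
static void force_active_phase(void) { /* stub: force forcible components active */ }

/* Called whenever the processor and memory subsystem change power state;
 * the timer starts counting on the transition to the operational state. */
void on_power_state_change(bool operational, uint64_t now_us)
{
    in_operational_state = operational;
    if (operational)
        operational_since_us = now_us;
}

/* Periodic check: if the operational state has persisted past the threshold
 * without a return to low power, toggle the forcible components through an
 * active-to-idle and idle-to-active transition to realign phases. */
void alignment_check(uint64_t now_us)
{
    if (!in_operational_state)
        return;
    if (now_us - operational_since_us > ALIGN_THRESHOLD_US) {
        force_idle_phase();
        force_active_phase();
        operational_since_us = now_us;    /* phases realigned; restart the timer */
    }
}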
Abstract:
Systems, apparatuses, and methods for using a scoreboard to track updates to configuration state registers are disclosed. A system includes one or more processing nodes, one or more memory devices, a plurality of configuration state registers, and a communication fabric coupled to the processing node(s) and memory device(s). The system uses a scoreboard to track updates to the configuration state registers during run-time. Prior to a node entering a power-gated state, the system stores only those configuration state registers that have changed. This reduces the amount of data written to memory on each transition into the power-gated state and increases the amount of time the node can spend in the power-gated state. Also, configuration state registers are grouped together to match the memory access granularity, and each group of configuration state registers has a corresponding scoreboard entry.
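The scoreboard idea can be sketched in C as follows; the 64-byte group size, the register counts, and the names (csr_write, save_before_power_gate) are assumptions for illustration, not values from the disclosure.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Assumed grouping: eight 64-bit registers per group, i.e. 64 bytes, matching
 * an assumed memory access granularity; one scoreboard bit per group records
 * whether the group changed since the last save. */
#define CSRS_PER_GROUP 8
#define NUM_GROUPS     64

static uint64_t csr[NUM_GROUPS][CSRS_PER_GROUP];
static bool     dirty[NUM_GROUPS];                      /* the scoreboard */
static uint64_t backing_store[NUM_GROUPS][CSRS_PER_GROUP];

/* Every run-time update to a configuration state register marks its group
 * in the scoreboard. */
void csr_write(int group, int index, uint64_t value)
{
    csr[group][index] = value;
    dirty[group] = true;
}

/* Before entering the power-gated state, write back only the groups whose
 * scoreboard bit is set instead of saving every register to memory. */
void save_before_power_gate(void)
{
    for (int g = 0; g < NUM_GROUPS; g++) {
        if (!dirty[g])
            continue;
        memcpy(backing_store[g], csr[g], sizeof csr[g]); /* one granule-sized write */
        dirty[g] = false;
    }
}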
Abstract:
A data processing system includes a power manager for providing a power event depth signal in response to a power event request signal. A plurality of real-time clients is coupled to the power manager. Each real-time client includes a client buffer that has a plurality of entries for storing data. The real-time client also includes a register for storing a watermark threshold for the client buffer, as well as logic for providing an allow signal when a number of valid entries in the client buffer exceeds the watermark threshold. A power management state machine is coupled to each of the plurality of real-time clients. The power management state machine provides a power event start signal in response to all of the plurality of real-time clients providing respective allow signals.
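A minimal C sketch of the allow-signal logic follows; the client count, the rt_client_t layout, and the mapping of the power event depth to a watermark value are illustrative assumptions.

#include <stdbool.h>
#include <stddef.h>

#define NUM_CLIENTS 4

typedef struct {
    size_t valid_entries;   /* current number of valid entries in the client buffer */
    size_t watermark;       /* watermark threshold register */
} rt_client_t;

/* Assumed policy: the watermark is derived from the power event depth
 * reported by the power manager; a deeper (longer) event requires more
 * buffered entries to ride through it. */
void set_watermark_for_depth(rt_client_t *c, size_t event_depth_entries)
{
    c->watermark = event_depth_entries;
}

/* A real-time client asserts its allow signal only when the number of valid
 * entries in its buffer exceeds the watermark threshold. */
static bool client_allows(const rt_client_t *c)
{
    return c->valid_entries > c->watermark;
}

/* The power management state machine asserts the power event start signal
 * only when every real-time client asserts its allow signal. */
bool power_event_start(const rt_client_t clients[NUM_CLIENTS])
{
    for (int i = 0; i < NUM_CLIENTS; i++)
        if (!client_allows(&clients[i]))
            return false;   /* at least one client cannot tolerate the event yet */
    return true;
}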
Abstract:
The present disclosure relates to a method and system for securing a performance state change of one or more processors. A disclosed method includes detecting a request to change a current performance state of a processor to a target performance state, and adjusting an operating level tolerance range of the current performance state to include operating levels associated with a transition from the current performance state to the target performance state. A disclosed system includes an operating system module operative to transmit a request for a performance state change of at least one processing core. The system includes performance state control logic operative to change the performance state of the at least one processing core based on the request. The system further includes performance state security logic operative to adjust, in response to the request, an operating level tolerance range of the current performance state to include operating levels associated with a transition from the current performance state to the target performance state.
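The tolerance-range adjustment might be sketched in C as follows; the pstate_t layout, the clock-level interpretation of "operating level", and the widen-to-include-target rule are assumptions made for the example, not details from the disclosure.

#include <stdbool.h>

/* Assumed representation of a performance state: a nominal operating level
 * (e.g., a clock frequency in MHz) and the tolerance range of levels that
 * are considered legal while the state is current. */
typedef struct {
    int level;      /* nominal operating level for the performance state */
    int tol_min;    /* lowest level tolerated while in this state */
    int tol_max;    /* highest level tolerated while in this state */
} pstate_t;

/* Security check: an observed operating level outside the current state's
 * tolerance range is rejected. */
bool level_is_legal(const pstate_t *current, int observed_level)
{
    return observed_level >= current->tol_min && observed_level <= current->tol_max;
}

/* On a performance state change request, widen the current state's tolerance
 * range so that the operating levels passed through during the transition to
 * the target state do not trip the security check. */
void on_pstate_change_request(pstate_t *current, const pstate_t *target)
{
    if (target->level < current->tol_min)
        current->tol_min = target->level;
    if (target->level > current->tol_max)
        current->tol_max = target->level;
}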