Abstract:
This disclosure is directed to a dynamic data compression system. A device may request data comprising certain content from a remote resource. The remote resource may determine whether any part of the content is identical or similar to content in other data, and whether the other data already resides on the requesting device. Smart compression may then involve transmitting only the portions of the content not residing on the requesting device, which may combine the received portions of the content with the other data. In another example, a capturing device may capture at least one of an image or a video. Smart compression may then involve transmitting only certain features of the image/video to the remote resource. The remote resource may determine image/video content based on the received features, and may perform an action based on the content. In addition, a determination of whether to perform smart compression may be based on system/device conditions.
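The first example above amounts to a delta transfer: the device is sent only the content chunks it does not already hold. A minimal sketch of that idea, assuming content is split into fixed-size chunks identified by hash (chunk size, function names, and the hash scheme are illustrative, not the disclosure's actual method):

```python
import hashlib

CHUNK_SIZE = 4  # tiny, for illustration only

def chunk_hashes(data: bytes):
    """Split data into chunks and pair each chunk with its hash."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

def server_delta(content: bytes, client_hashes):
    """Return the chunk order plus only the chunks the device lacks."""
    hashed = chunk_hashes(content)
    order = [h for h, _ in hashed]
    missing = {h: c for h, c in hashed if h not in client_hashes}
    return order, missing

def client_reassemble(order, missing, local_store):
    """Combine received chunks with chunks already on the device."""
    local_store.update(missing)
    return b"".join(local_store[h] for h in order)

# The device already holds the chunks of "ABCDEFGH" and requests "ABCDWXYZ".
existing = dict(chunk_hashes(b"ABCDEFGH"))
order, missing = server_delta(b"ABCDWXYZ", set(existing))
assert len(missing) == 1  # only the new "WXYZ" chunk travels
assert client_reassemble(order, missing, dict(existing)) == b"ABCDWXYZ"
```

The sketch transmits one of two chunks because the other is already resident, mirroring the abstract's "transmitting only the portions of the content not residing on the requesting device".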
Abstract:
In accordance with some embodiments, a cloud service provider may operate a data center in a way that dynamically reallocates resources across nodes within the data center based on both utilization and service level agreements. In other words, the allocation of resources may be adjusted dynamically based on current conditions. The current conditions in the data center may be a function of the nature of all the current workloads. Instead of simply managing the workloads in a way to increase overall execution efficiency, the data center instead may manage the workload to achieve quality of service requirements for particular workloads according to service level agreements.
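One way to picture SLA-aware reallocation is a two-pass split: each workload is first guaranteed its SLA minimum, and spare capacity is then divided by relative demand. This toy sketch is an assumption about how such a policy might look, not the patented method; the field names and weights are invented:

```python
def reallocate(total_units, workloads):
    """workloads: {name: {"sla_min": guaranteed units, "demand": relative weight}}."""
    # Pass 1: every workload gets its service-level-agreement minimum.
    alloc = {name: w["sla_min"] for name, w in workloads.items()}
    # Pass 2: spare capacity is split in proportion to current demand.
    spare = total_units - sum(alloc.values())
    total_demand = sum(w["demand"] for w in workloads.values()) or 1
    for name, w in workloads.items():
        alloc[name] += spare * w["demand"] / total_demand
    return alloc

alloc = reallocate(100, {
    "db":    {"sla_min": 40, "demand": 9},
    "batch": {"sla_min": 10, "demand": 3},
})
assert alloc["db"] == 77.5    # 40 guaranteed + 9/12 of the spare 50
assert alloc["batch"] == 22.5
```

Re-running the function as utilization changes is what makes the allocation "dynamic": the SLA floors stay fixed while the spare split tracks current conditions.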
Abstract:
Apparatus and computing systems associated with cache sharing based thread control are described. One embodiment includes a memory to store a thread control instruction and a processor to execute the thread control instruction. The processor is coupled to the memory. The processor includes a first unit to dynamically determine a cache sharing behavior between threads in a multi-threaded computing system and a second unit to dynamically control the composition of a set of threads in the multi-threaded computing system. The composition of the set of threads is based, at least in part, on thread affinity as exhibited by cache-sharing behavior. The thread control instruction controls the operation of the first unit and the second unit.
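The second unit's job, composing thread sets from observed cache-sharing behavior, can be sketched as a greedy pairing over pairwise sharing scores. The scores and the greedy policy here are assumptions for illustration; the claimed hardware units are not modeled:

```python
def group_by_affinity(threads, sharing):
    """Greedily co-group the thread pairs with the highest cache-sharing score.

    sharing: {(a, b): score}, where score reflects observed cache sharing.
    """
    pairs = sorted(sharing.items(), key=lambda kv: kv[1], reverse=True)
    placed, groups = set(), []
    for (a, b), score in pairs:
        if a not in placed and b not in placed:
            groups.append({a, b})
            placed |= {a, b}
    # Threads with no strong affinity run in singleton groups.
    groups += [{t} for t in threads if t not in placed]
    return groups

groups = group_by_affinity(
    ["t0", "t1", "t2", "t3"],
    {("t0", "t1"): 0.9, ("t0", "t2"): 0.2, ("t2", "t3"): 0.7},
)
assert {"t0", "t1"} in groups and {"t2", "t3"} in groups
```

The highest-affinity pair (t0, t1) is grouped first, which also forces the remaining pair (t2, t3) together, so both cache-sharing cliques end up co-scheduled.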
Abstract:
In one embodiment, the present invention includes a method for receiving an interrupt from an accelerator, sending a resume signal directly to a first small core responsive to the interrupt, providing a subset of an execution state of a large core to the first small core, and determining whether the first small core can handle a request associated with the interrupt, performing an operation corresponding to the request in the first small core if the determination is in the affirmative, and otherwise providing the large core execution state and the resume signal to the large core. Other embodiments are described and claimed.
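The control flow above is a two-level escalation: wake the small core with a partial state, and fall back to the large core only when needed. A simplified sketch of that decision, with invented names and state shapes (the real mechanism is hardware, not a function call):

```python
def handle_accelerator_interrupt(request, small_core_can_handle):
    """Route an accelerator interrupt to the small core, escalating if needed."""
    subset_state = {"full": False}  # subset of the large core's execution state
    if small_core_can_handle(request):
        # Small core resumes with the partial state and services the request.
        return ("small_core", subset_state)
    # Otherwise the large core is resumed with its full execution state.
    full_state = {"full": True}
    return ("large_core", full_state)

# Illustrative predicate: the small core handles only "light" requests.
assert handle_accelerator_interrupt("light", lambda r: r == "light")[0] == "small_core"
assert handle_accelerator_interrupt("heavy", lambda r: r == "light")[0] == "large_core"
```

The benefit implied by the abstract is that the large core stays in a low-power state unless the request genuinely requires it.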
Abstract:
Certain embodiments herein are directed to managing wireless spectrum, which may include recommending or transmitting spectrum usage changes to one or more wireless devices. A spectrum management system comprising one or more computers may receive spectrum usage information associated with one or more wireless devices. The spectrum management system may generate a spectrum usage map based on the received information. Based on the spectrum usage map, a spectrum usage change may be determined and transmitted to one or more wireless devices. The wireless devices may change their operation in accordance with the spectrum usage change.
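At its simplest, the usage map is an aggregate of per-device reports, and the usage change is a recommendation toward less-congested spectrum. A minimal sketch, assuming reports are (device, channel) pairs and congestion is just a report count (both assumptions):

```python
from collections import Counter

def build_usage_map(reports):
    """Aggregate (device_id, channel) reports into a per-channel usage count."""
    return Counter(channel for _, channel in reports)

def recommend_change(usage_map, current_channel, channels):
    """Recommend the least-used channel, or keep the current one if no better."""
    best = min(channels, key=lambda c: usage_map.get(c, 0))
    if usage_map.get(best, 0) < usage_map.get(current_channel, 0):
        return best
    return current_channel

usage = build_usage_map([("d1", 36), ("d2", 36), ("d3", 40)])
assert recommend_change(usage, 36, [36, 40, 44]) == 44  # channel 44 is unused
```

A device receiving such a change would then retune, closing the loop described in the abstract.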
Abstract:
Generally, this disclosure provides systems, devices, methods and computer readable media for context based spectrum management. A device may include a user preference determination module to determine a level-of-service preference of a user of the device, the preference associated with an application. The device may also include a user state determination module, to determine a state of the user, and a device capability determination module, to determine capabilities of the device. The device may further include an application programming interface (API) to provide a context to a cloud-based server configured to manage spectrum. The context includes the preference, the state and the capabilities. The API is further configured to receive content delivery options from the cloud-based server.
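The context the API carries bundles the three module outputs into one object. A sketch of what that payload might look like; the field names and example values are assumptions, not the disclosure's actual schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class SpectrumContext:
    level_of_service: str   # user preference for the active application
    user_state: str         # e.g. "stationary", "walking", "driving"
    capabilities: tuple     # e.g. radio bands the device supports

def to_api_payload(ctx: SpectrumContext) -> dict:
    """Serialize the context for delivery to the cloud-based server."""
    return asdict(ctx)

payload = to_api_payload(SpectrumContext("high", "stationary", ("2.4GHz", "5GHz")))
assert payload["level_of_service"] == "high"
assert payload["capabilities"] == ("2.4GHz", "5GHz")
```

The server would use this context to choose among content delivery options, which the same API then receives back.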
Abstract:
A method and apparatus for throttling power and/or performance of processing elements based on a priority of software entities is herein described. Priority aware power management logic receives priority levels of software entities and modifies operating points of processing elements associated with the software entities accordingly. Therefore, in a power savings mode, processing elements executing low priority applications/tasks are reduced to a lower operating point, i.e. a lower voltage, a lower frequency, throttled instruction issue, throttled memory accesses, and/or less access to shared resources. In addition, utilization logic potentially tracks utilization of a resource per priority level, which allows the power manager to determine operating points based on the effect each priority level has on the others from the perspective of the resources themselves. Moreover, a software entity itself may assign operating points, which the power manager enforces.
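The core mapping, priority level plus power mode to an operating point, can be sketched in a few lines. The voltage/frequency pairs are invented placeholder values, not figures from the disclosure:

```python
# Assumed operating points as (voltage_mV, frequency_MHz) pairs.
OPERATING_POINTS = {
    "high": (1100, 3200),
    "low":  (900, 1600),
}

def operating_point(priority, power_save):
    """In power-save mode, low-priority entities drop to the lower point."""
    if power_save and priority == "low":
        return OPERATING_POINTS["low"]
    return OPERATING_POINTS["high"]

assert operating_point("low", power_save=True) == (900, 1600)
assert operating_point("low", power_save=False) == (1100, 3200)
assert operating_point("high", power_save=True) == (1100, 3200)
```

A fuller model would also consult the per-priority utilization tracking the abstract mentions, so one priority level's resource pressure can influence another level's operating point.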
Abstract:
Embodiments of systems, apparatuses, and methods for energy-efficient operation of a device are described. In some embodiments, a cache performance indicator of a cache is monitored, and a set of one or more cache performance parameters is determined based on the cache performance indicator. The cache is dynamically resized to an optimal cache size, based on a comparison of the cache performance parameters to corresponding energy-efficiency targets, to reduce power consumption.
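One concrete way to realize "compare a performance parameter to a target, then resize" is a step controller over the number of active cache ways. The thresholds and doubling/halving policy below are assumptions for illustration:

```python
def resize_cache(current_ways, miss_rate, target_miss_rate,
                 min_ways=1, max_ways=16):
    """Resize the cache (in ways) toward an energy-efficient operating point."""
    if miss_rate > target_miss_rate:
        # Too many misses: grow the cache to recover performance.
        return min(current_ways * 2, max_ways)
    if miss_rate < target_miss_rate / 2:
        # Ample headroom: shrink the cache to save power.
        return max(current_ways // 2, min_ways)
    return current_ways  # within the target band: hold steady

assert resize_cache(8, miss_rate=0.10, target_miss_rate=0.05) == 16
assert resize_cache(8, miss_rate=0.01, target_miss_rate=0.05) == 4
assert resize_cache(8, miss_rate=0.04, target_miss_rate=0.05) == 8
```

The dead band between the two thresholds keeps the size from oscillating when the miss rate sits near the target.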
Abstract:
Embodiments of an apparatus for controlling cache occupancy rates are presented. In one embodiment, an apparatus comprises a first controller and monitor logic. The monitor logic determines a first monitored occupancy rate associated with a first program class. The first controller regulates a first allocation probability corresponding to the first program class, based at least on the difference between a requested occupancy rate and the first monitored occupancy rate.
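Regulating a probability from the difference between a requested and a monitored rate is the shape of a proportional controller. A sketch under that assumption, with an invented gain value:

```python
def update_allocation_probability(prob, requested, monitored, gain=0.5):
    """Move the allocation probability toward closing the occupancy gap."""
    prob += gain * (requested - monitored)
    return min(max(prob, 0.0), 1.0)  # clamp to a valid probability

# An under-occupied class (monitored below requested) gains allocation probability.
p = update_allocation_probability(0.5, requested=0.6, monitored=0.4)
assert abs(p - 0.6) < 1e-9

# An over-occupied class is throttled back.
p = update_allocation_probability(0.9, requested=0.2, monitored=0.9)
assert abs(p - 0.55) < 1e-9
```

Running one such controller per program class, each fed by its own monitored occupancy rate, matches the per-class structure the abstract describes.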