Abstract:
In one embodiment, the present invention includes a method for maintaining data in a first level cache non-inclusively with data in a second level cache coupled to the first level cache. At the same time, at least a portion of directory information associated with the data in the first level cache may be maintained inclusively with a directory portion of the second level cache. Other embodiments are described and claimed.
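A minimal sketch of the arrangement described (class and field names are illustrative, not from the claims): data may reside in the first-level cache without residing in the second-level cache (non-inclusive data), while the second-level cache's directory always tracks which lines the first-level cache holds (inclusive directory), so a snoop can consult the directory alone.

```python
class TwoLevelCache:
    """Illustrative model: non-inclusive L1 data, inclusive L2 directory."""

    def __init__(self):
        self.l1 = {}         # addr -> data held in the first-level cache
        self.l2 = {}         # addr -> data held in the second-level cache
        self.l2_dir = set()  # directory: tags of every line currently in L1

    def fill_l1(self, addr, data):
        self.l1[addr] = data
        self.l2_dir.add(addr)   # directory entry is always maintained...
        # ...but the data itself need NOT be copied into self.l2

    def evict_l1(self, addr):
        self.l1.pop(addr, None)
        self.l2_dir.discard(addr)

    def snoop(self, addr):
        # An external snoop checks only the L2 directory to learn
        # whether the L1 must be probed for this line.
        return addr in self.l2_dir
```

The design choice this illustrates: the directory gives the snoop-filtering benefit of inclusion without forcing every L1 line to occupy L2 data capacity.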
Abstract:
Technologies for physical programming include a model compute system configured to determine one or more physical blocks assembled in a constructed model. The model compute system determines rules associated with the one or more physical blocks, in which at least one rule defines a behavior of the constructed model, and determines a program stack for execution by the model compute system based on the rules associated with the one or more physical blocks.
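A minimal sketch of this flow, with a hypothetical rule table (block names and rule names are assumptions for illustration): each detected physical block maps to a rule, and the rules are assembled in order into a program stack.

```python
# Hypothetical rule table: physical block -> behavior rule.
RULES = {
    "wheel_block":  "move_forward",
    "light_block":  "blink_led",
    "sensor_block": "read_distance",
}

def build_program_stack(detected_blocks):
    # Keep only blocks that have an associated rule, preserving
    # the order in which the blocks were assembled.
    return [RULES[b] for b in detected_blocks if b in RULES]
```

For example, detecting a sensor block followed by a wheel block would yield a stack that reads distance and then moves forward.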
Abstract:
A route for establishing a wireless connection between a wireless device and a node may be selected based on an estimated duration of the route. The route duration may be estimated based on information related to the expected movement of nodes included in the route.
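A minimal sketch of such route selection (the duration model is an assumption for illustration): if a hop is expected to last roughly until relative node movement exceeds communication range, a route's duration is bounded by its shortest-lived hop, and the route with the longest estimated duration is selected.

```python
def estimate_route_duration(route, speeds, comm_range):
    # Illustrative model: a hop between nodes a and b lasts about
    # comm_range / (speed_a + speed_b); the route lasts only as long
    # as its shortest-lived hop.
    return min(comm_range / max(speeds[a] + speeds[b], 1e-9)
               for a, b in zip(route, route[1:]))

def select_route(routes, speeds, comm_range=100.0):
    # Prefer the candidate route expected to survive the longest.
    return max(routes, key=lambda r: estimate_route_duration(r, speeds, comm_range))
```

With a fast intermediate node on one route and a slow one on another, the slower (longer-lived) route would be chosen even if both currently work.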
Abstract:
A social network may be established between mobile nodes using a wireless connection. Establishing the social network may be based on an estimated time duration of the wireless connection. In one or more embodiments, establishing the social network may also be based on a similarity of interests among users of the nodes.
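A minimal sketch of this gating logic (the similarity metric and thresholds are illustrative assumptions): a link is established only when the connection is expected to last long enough and the users' interest sets overlap sufficiently.

```python
def interest_similarity(a, b):
    # Jaccard similarity of two interest sets (illustrative metric).
    return len(a & b) / len(a | b) if a | b else 0.0

def should_link(est_duration, interests_a, interests_b,
                min_duration=60.0, min_similarity=0.3):
    # Establish the social-network link only if the wireless connection
    # is expected to last AND interests are similar enough.
    return (est_duration >= min_duration and
            interest_similarity(interests_a, interests_b) >= min_similarity)
```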
Abstract:
An apparatus is described that contains a processing core comprising a CPU core and at least one accelerator coupled to the CPU core. The CPU core comprises a pipeline having a translation look-aside buffer. The CPU core also comprises logic circuitry to set a lock bit in attribute data of an entry within the translation look-aside buffer to lock a page of memory reserved for the accelerator.
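A minimal sketch of the lock-bit behavior (the bit position and TLB representation are assumptions for illustration): setting a lock bit in an entry's attributes pins the accelerator's page so that normal replacement skips it.

```python
LOCK_BIT = 1 << 5  # hypothetical attribute-bit position

def set_lock(attrs):
    # Set the lock bit in an entry's attribute data.
    return attrs | LOCK_BIT

def is_locked(attrs):
    return bool(attrs & LOCK_BIT)

def choose_victim(tlb):
    # tlb: {virtual_page: attrs}. Replacement skips locked entries,
    # so the accelerator's page stays resident.
    for vpage, attrs in tlb.items():
        if not is_locked(attrs):
            return vpage
    return None  # every entry is locked; no victim available
```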
Abstract:
Methods and apparatus to schedule applications in heterogeneous multiprocessor computing platforms are described. In one embodiment, information regarding performance (e.g., execution performance and/or power consumption performance) of a plurality of processor cores of a processor is stored (and tracked) in counters and/or tables. Logic in the processor determines which processor core should execute an application based on the stored information. Other embodiments are also claimed and disclosed.
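A minimal sketch of such a scheduling decision (the data layout and policy are illustrative assumptions): tracked per-core performance and power figures determine which core should run an application, either maximizing raw performance or performance per watt.

```python
def pick_core(cores, app, prefer_energy=False):
    # cores: {name: {"perf": {app: tracked score}, "power": watts}},
    # i.e., the stored counter/table information from the abstract.
    if prefer_energy:
        # Maximize performance per watt (energy-efficiency policy).
        return max(cores, key=lambda c: cores[c]["perf"].get(app, 0.0) / cores[c]["power"])
    # Maximize raw execution performance.
    return max(cores, key=lambda c: cores[c]["perf"].get(app, 0.0))
```

On a hypothetical big/little pair, the same application would be steered to the big core for performance but to the little core when energy efficiency is preferred.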
Abstract:
Embodiments of systems, apparatuses, and methods for energy-efficient operation of a device are described. In some embodiments, a cache performance indicator of a cache is monitored, and a set of one or more cache performance parameters based on the cache performance indicator is determined. The cache is dynamically resized to an optimal cache size based on a comparison of the cache performance parameters to their energy-efficient targets to reduce power consumption.
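A minimal sketch of such dynamic resizing (the indicator, thresholds, and hysteresis band are illustrative assumptions, using miss rate as the performance indicator): the cache grows when misses exceed the energy-efficient target and shrinks, to save power, when it is comfortably under it.

```python
def resize_cache(cur_ways, miss_rate, target_miss, min_ways=1, max_ways=16):
    # Grow when the miss rate exceeds the target; shrink when well
    # under it. The band between the two cases avoids oscillation.
    if miss_rate > target_miss * 1.1:
        return min(cur_ways * 2, max_ways)
    if miss_rate < target_miss * 0.5:
        return max(cur_ways // 2, min_ways)
    return cur_ways
```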
Abstract:
Methods and apparatus relating to geographic content addressing are described. In an embodiment, a server (such as a content server or a content delivery server) transmits content to one or more devices at a first location based on location information corresponding to the first location of the one or more devices. The location information corresponding to the first location of the one or more devices is registered prior to transmission of the content to the one or more devices at the first location (e.g., at a registry server). Other embodiments are also claimed and described.
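A minimal sketch of the registration step (the registry layout and identifiers are illustrative assumptions): devices register their location with a registry, and the content server then addresses content to every device registered at a given location.

```python
registry = {}  # device_id -> registered location

def register(device_id, location):
    # Registration happens prior to any content transmission.
    registry[device_id] = location

def devices_at(location):
    # The content server resolves a geographic address to the set of
    # devices registered at that location.
    return sorted(d for d, loc in registry.items() if loc == location)
```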
Abstract:
Methods and apparatus for control of On-Die System Fabric (OSF) blocks are described. In one embodiment, a shadow address corresponding to a physical address may be stored in response to a user-level request, and logic circuitry (e.g., present in an OSF) may determine the physical address from the shadow address. Other embodiments are also disclosed.
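A minimal sketch of the shadow-address indirection (the mapping scheme is an illustrative assumption): a user-level request stores a shadow address for a physical address, and the logic later recovers the physical address from the shadow one.

```python
shadow_table = {}  # shadow address -> physical address

def store_shadow(shadow_addr, physical_addr):
    # Performed in response to a user-level request.
    shadow_table[shadow_addr] = physical_addr

def resolve(shadow_addr):
    # The OSF-side logic determines the physical address, without the
    # user-level code ever handling the physical address directly.
    return shadow_table.get(shadow_addr)
```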