Abstract:
Apparatus and computing systems associated with cache-sharing-based thread control are described. One embodiment includes a memory to store a thread control instruction and a processor, coupled to the memory, to execute the thread control instruction. The processor includes a first unit to dynamically determine cache-sharing behavior between threads in a multi-threaded computing system and a second unit to dynamically control the composition of a set of threads in the multi-threaded computing system. The composition of the set of threads is based, at least in part, on thread affinity as exhibited by cache-sharing behavior. The thread control instruction controls the operation of the first unit and the second unit.
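A minimal sketch of how a thread set might be composed from observed cache-sharing behavior, assuming a pairwise sharing-count matrix and a greedy grouping threshold; the matrix, the threshold, and the grouping policy are illustrative assumptions, not the claimed units.

```cpp
// Sketch: group threads whose pairwise cache-line sharing count exceeds a
// threshold into the same co-scheduled set (an assumed affinity heuristic).
#include <cstddef>
#include <vector>

std::vector<std::vector<int>> group_by_sharing(
    const std::vector<std::vector<int>>& sharing,  // sharing[i][j]: shared-line hits between threads i and j
    int threshold) {
  const std::size_t n = sharing.size();
  std::vector<bool> assigned(n, false);
  std::vector<std::vector<int>> sets;
  for (std::size_t i = 0; i < n; ++i) {
    if (assigned[i]) continue;
    std::vector<int> set{static_cast<int>(i)};
    assigned[i] = true;
    for (std::size_t j = i + 1; j < n; ++j) {
      if (!assigned[j] && sharing[i][j] >= threshold) {
        set.push_back(static_cast<int>(j));  // thread j shares enough cache lines with i
        assigned[j] = true;
      }
    }
    sets.push_back(set);
  }
  return sets;
}
```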
Abstract:
In some embodiments, a method, apparatus, and system for an application-aware cache push agent are provided. In this regard, a cache push agent is introduced to push contents of memory into a cache of a processor in response to a memory read by the processor of associated contents. Other embodiments are described and claimed.
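A minimal sketch of the push-agent idea, assuming the application registers "associated" memory regions and a `push_line()` hook models the hardware push of one cache line; these names, the region table, and the 64-byte line size are illustrative assumptions.

```cpp
// Sketch: on an observed read inside a registered key region, push the
// associated region toward the reading processor's cache, line by line.
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

struct Region { std::uint64_t base; std::uint64_t bytes; };

class CachePushAgent {
 public:
  explicit CachePushAgent(std::function<void(std::uint64_t)> push_line)
      : push_line_(std::move(push_line)) {}

  // Application-provided hint: a read inside `key` means `associated` will be needed soon.
  void associate(Region key, Region associated) { hints_.push_back({key, associated}); }

  // Called when the processor's read of `addr` is observed.
  void on_read(std::uint64_t addr) {
    for (const auto& [key, assoc] : hints_) {
      if (addr >= key.base && addr < key.base + key.bytes) {
        for (std::uint64_t a = assoc.base; a < assoc.base + assoc.bytes; a += 64) {
          push_line_(a);  // push this line into the reading processor's cache
        }
      }
    }
  }

 private:
  std::vector<std::pair<Region, Region>> hints_;
  std::function<void(std::uint64_t)> push_line_;
};
```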
Abstract:
Methods and apparatus to process cache allocation requests are disclosed. In an example method, a priority level is assigned to a cache allocation request. Based on the priority level, an allocation probability associated with the cache allocation request is identified. Based on the allocation probability, the cache allocation request is associated with either an allocate condition or a bypass condition.
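A minimal sketch of the priority-to-probability decision, assuming a fixed lookup table and a pseudo-random draw; the table values and the RNG are illustrative assumptions rather than the disclosed mechanism.

```cpp
// Sketch: map a priority level to an allocation probability, then draw to
// choose between allocating into the cache and bypassing it.
#include <random>

enum class Decision { Allocate, Bypass };

double allocation_probability(int priority_level) {
  // Higher priority -> higher chance the request is allocated (assumed values).
  static const double table[] = {0.10, 0.25, 0.50, 1.00};
  if (priority_level < 0) priority_level = 0;
  if (priority_level > 3) priority_level = 3;
  return table[priority_level];
}

Decision decide(int priority_level, std::mt19937& rng) {
  std::uniform_real_distribution<double> u(0.0, 1.0);
  return u(rng) < allocation_probability(priority_level) ? Decision::Allocate
                                                         : Decision::Bypass;
}
```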
Abstract:
An instruction pipeline implemented on a semiconductor chip is described. The semiconductor chip includes an execution unit having the following to execute an interrupt handling instruction: storage circuitry to hold different sets of micro-ops, where each set of micro-ops is to handle a different interrupt; first logic circuitry to execute a set of said sets of micro-ops to handle an interrupt that said set is designed for; and second logic circuitry to return program flow to an invoking program upon said first logic circuitry having handled said interrupt.
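A minimal sketch of the storage-plus-dispatch structure, assuming micro-ops can be modeled as callables and interrupts as small integer vectors; the `MicroOp` type, the vector count, and the table are illustrative assumptions.

```cpp
// Sketch: a table of micro-op sequences indexed by interrupt vector
// ("storage circuitry"), a loop that plays the matching sequence
// ("first logic circuitry"), and a return to the invoker on completion
// ("second logic circuitry").
#include <array>
#include <functional>
#include <vector>

using MicroOp = std::function<void()>;       // one micro-op, modeled as a callable
using MicroOpSet = std::vector<MicroOp>;     // the set that handles one interrupt

constexpr int kNumVectors = 8;               // assumed number of supported interrupts
std::array<MicroOpSet, kNumVectors> g_handler_storage;

void handle_interrupt(int vector) {
  for (const MicroOp& uop : g_handler_storage.at(vector)) {
    uop();                                   // execute the set designed for this interrupt
  }
  // Returning from this function models handing program flow back to the invoker.
}
```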
Abstract:
In one embodiment, the present invention includes a method for receiving an interrupt from an accelerator, sending a resume signal directly to a small core responsive to the interrupt, providing a subset of an execution state of a large core to the small core, and determining whether the small core can handle a request associated with the interrupt. If so, an operation corresponding to the request is performed in the small core; otherwise, the large core execution state and the resume signal are provided to the large core. Other embodiments are described and claimed.
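A minimal sketch of that routing decision, assuming simple stand-in types for the cores and execution state; the `can_handle()` test, the state layouts, and the core interfaces are illustrative assumptions.

```cpp
// Sketch: try to service an accelerator interrupt on the small core with only
// a subset of the large core's state; escalate to the large core otherwise.
#include <cstdint>

struct ExecState { std::uint64_t ip; std::uint64_t sp; /* further state elided */ };
struct StateSubset { std::uint64_t ip; };

struct SmallCore {
  bool can_handle(std::uint64_t request) const { return request < 16; }  // assumed test
  void resume(const StateSubset&) {}
  void perform(std::uint64_t /*request*/) {}
};
struct LargeCore {
  void resume(const ExecState&) {}
  void perform(std::uint64_t /*request*/) {}
};

void on_accelerator_interrupt(std::uint64_t request, const ExecState& large_state,
                              SmallCore& small, LargeCore& large) {
  StateSubset subset{large_state.ip};  // only part of the large core's execution state
  small.resume(subset);                // resume signal goes directly to the small core
  if (small.can_handle(request)) {
    small.perform(request);            // handled without waking the large core
  } else {
    large.resume(large_state);         // fall back: full state and resume signal to the large core
    large.perform(request);
  }
}
```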
Abstract:
Methods and systems of recognizing images may include an apparatus having a hardware module with logic to, for a plurality of vectors in an image, determine a first intermediate computation based on even pixels of an image vector, and determine a second intermediate computation based on odd pixels of the image vector. The logic can also combine the first and second intermediate computations into a Hessian matrix computation.
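A minimal sketch of the even/odd split, assuming one Hessian entry is a weighted sum over an image vector (row) that can be accumulated separately over even- and odd-indexed pixels and then combined; the weight vector standing in for the Hessian filter is an illustrative assumption.

```cpp
// Sketch: accumulate a filter response over even pixels and over odd pixels
// of one image vector, then combine the two partial sums into one entry.
#include <cstddef>
#include <vector>

double partial_response(const std::vector<float>& row,
                        const std::vector<float>& weights,
                        std::size_t start) {  // start = 0 (even pixels) or 1 (odd pixels)
  double sum = 0.0;
  for (std::size_t i = start; i < row.size() && i < weights.size(); i += 2) {
    sum += static_cast<double>(row[i]) * weights[i];
  }
  return sum;
}

double hessian_entry(const std::vector<float>& row,
                     const std::vector<float>& weights) {
  const double even = partial_response(row, weights, 0);  // first intermediate computation
  const double odd  = partial_response(row, weights, 1);  // second intermediate computation
  return even + odd;                                      // combined into the matrix entry
}
```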
Abstract:
A cutting tool includes a body having a forward end and a rearward end. The forward end includes an insert-receiving pocket with a threaded hole having a center axis. The cutting tool further includes a cutting insert with a countersunk bore having a center axis. The cutting tool includes an error-proofing feature for preventing the cutting insert from being properly mounted in the insert-receiving pocket when an offset distance between the center axis of the threaded hole of the insert-receiving pocket and the center axis of the countersunk bore of the cutting insert is greater than a predetermined percentage of the outer diameter of the threaded screw.
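A minimal numeric sketch of the error-proofing criterion as stated: mounting is blocked when the axis offset exceeds a predetermined percentage of the screw's outer diameter. The 10% default and the millimetre units are illustrative assumptions, not values from the disclosure.

```cpp
// Sketch: compare the axis-to-axis offset against a fraction of the screw's
// outer diameter to decide whether the insert can seat properly.
#include <cmath>

bool mounting_blocked(double hole_x, double hole_y,   // threaded-hole axis position (mm)
                      double bore_x, double bore_y,   // countersunk-bore axis position (mm)
                      double screw_outer_diameter_mm,
                      double max_offset_fraction = 0.10) {  // assumed "predetermined percentage"
  const double offset = std::hypot(hole_x - bore_x, hole_y - bore_y);
  return offset > max_offset_fraction * screw_outer_diameter_mm;
}
```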
Abstract:
A heterogeneous processor architecture is described. For example, a processor according to one embodiment of the invention comprises: a set of large physical processor cores; a set of small physical processor cores having relatively lower performance processing capabilities and relatively lower power usage relative to the large physical processor cores; and virtual-to-physical (V-P) mapping logic to expose the set of large physical processor cores to software through a corresponding set of virtual cores and to hide the set of small physical processor cores from the software.
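A minimal sketch of the V-P mapping idea, assuming a simple table that enumerates only virtual cores backed by large physical cores while small cores stay hidden; the table layout and the `remap()` hook are illustrative assumptions.

```cpp
// Sketch: software sees only virtual cores (initially backed by large cores);
// small physical cores exist in the table but are never exposed.
#include <cstddef>
#include <utility>
#include <vector>

enum class CoreKind { Large, Small };
struct PhysicalCore { int id; CoreKind kind; };

class VPMapper {
 public:
  explicit VPMapper(std::vector<PhysicalCore> physical) : physical_(std::move(physical)) {
    for (const auto& p : physical_)
      if (p.kind == CoreKind::Large) virt_to_phys_.push_back(p.id);  // expose large cores only
  }

  std::size_t visible_core_count() const { return virt_to_phys_.size(); }  // what software enumerates
  int backing_physical(std::size_t virtual_core) const { return virt_to_phys_.at(virtual_core); }

  // Transparently retarget a virtual core onto another physical core (e.g., a hidden small one).
  void remap(std::size_t virtual_core, int physical_id) { virt_to_phys_.at(virtual_core) = physical_id; }

 private:
  std::vector<PhysicalCore> physical_;
  std::vector<int> virt_to_phys_;
};
```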
Abstract:
Systems and methods may provide for capturing a user input by emulating a touch screen mechanism. In one example, the method may include identifying a point of interest on a front facing display of the device based on gaze information associated with a user of the device, identifying a hand action based on gesture information associated with the user of the device, and initiating a device action with respect to the front facing display based on the point of interest and the hand action.
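A minimal sketch of combining the gaze-derived point of interest with a recognized hand action to emulate a touch event on the front-facing display; the `HandAction` values and the `TouchEvent` shape are illustrative assumptions.

```cpp
// Sketch: fuse gaze (where) with gesture (what) into an emulated touch event.
#include <optional>

struct Point { int x; int y; };
enum class HandAction { None, Pinch, Swipe, Tap };
struct TouchEvent { Point where; HandAction action; };

std::optional<TouchEvent> emulate_touch(std::optional<Point> gaze_point,
                                        HandAction hand_action) {
  if (!gaze_point || hand_action == HandAction::None) return std::nullopt;
  // Initiate the device action at the gazed-at location on the front-facing display.
  return TouchEvent{*gaze_point, hand_action};
}
```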
Abstract:
An ad hoc network may be established between vehicles using a wireless connection. The wireless network may be used for sending and receiving information about road conditions, such as average speed, a location and configuration of a road obstruction, images of an accident scene, and a traffic flow plan. The wireless network may also be used for communicating with emergency response vehicles in order to enable faster and more effective responses to accidents.
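A minimal sketch of a road-condition message a vehicle might broadcast over such an ad hoc link; the field names and the `broadcast()` hook are illustrative assumptions, not a defined protocol.

```cpp
// Sketch: one road-condition report shared with nearby vehicles.
#include <string>

struct RoadConditionMessage {
  double latitude;
  double longitude;
  float average_speed_kph;
  std::string obstruction_description;  // e.g., "right lane closed"
  bool from_emergency_vehicle;
};

// Placeholder for the wireless broadcast to vehicles in range.
void broadcast(const RoadConditionMessage& /*msg*/) {}

void report_obstruction() {
  RoadConditionMessage msg{37.7749, -122.4194, 12.5f, "accident ahead, right lane closed", false};
  broadcast(msg);
}
```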