Abstract:
In one embodiment, the present invention includes a method for associating a first priority indicator with first data stored in a first entry of a cache memory to indicate a priority level of the first data, and updating a count value associated with the first priority indicator. The count value may then be used in determining an appropriate cache line for eviction. Other embodiments are described and claimed.
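As a rough software illustration of the mechanism described above, the following Python sketch models a cache whose entries carry a priority indicator, maintains a count value per priority level, and consults those counts when choosing an eviction victim. The names (PriorityCache, insert, evict) and the specific victim-selection rule (evict from the lowest priority level that still has resident entries) are assumptions made for illustration, not the claimed implementation.

```python
from collections import Counter

class PriorityCache:
    """Toy cache whose entries are tagged with a priority indicator.

    Illustrative only: the class name and the victim-selection rule are assumptions.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}                  # tag -> (data, priority)
        self.priority_counts = Counter()   # count value per priority indicator

    def insert(self, tag, data, priority):
        if len(self.entries) >= self.capacity:
            self.evict()
        # Associate the priority indicator with the data and update its count.
        self.entries[tag] = (data, priority)
        self.priority_counts[priority] += 1

    def evict(self):
        # Use the count values to pick a victim: here, prefer the lowest
        # priority level that still has resident entries (an assumed policy).
        victim_priority = min(p for p, c in self.priority_counts.items() if c > 0)
        victim_tag = next(t for t, (_, p) in self.entries.items()
                          if p == victim_priority)
        del self.entries[victim_tag]
        self.priority_counts[victim_priority] -= 1
        return victim_tag

if __name__ == "__main__":
    cache = PriorityCache(capacity=2)
    cache.insert("A", "data-a", priority=1)   # low priority
    cache.insert("B", "data-b", priority=3)   # high priority
    cache.insert("C", "data-c", priority=2)   # forces eviction of the low-priority "A"
    print(sorted(cache.entries))              # ['B', 'C']
```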
Abstract:
In one embodiment, the present invention includes a page fault handler to create page table entries and TLB entries in response to a page fault, the page fault handler to determine if a page fault resulted from a stack access, to create a superpage table entry if the page fault did result from a stack access, and to create a TLB entry for the superpage. Other embodiments are described and claimed.
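A minimal software sketch of the described flow is shown below. The stack-range check, the base-page and superpage sizes, and the table structures are assumptions chosen for illustration; the abstract does not specify them.

```python
PAGE_SIZE = 4 * 1024               # assumed base page size
SUPERPAGE_SIZE = 2 * 1024 * 1024   # assumed superpage size

class PageFaultHandler:
    """Toy handler: stack faults get a superpage mapping, others a base page."""

    def __init__(self, stack_base, stack_limit, alloc_frame):
        self.stack_base = stack_base      # highest stack address (stack grows down)
        self.stack_limit = stack_limit    # lowest permitted stack address
        self.alloc_frame = alloc_frame    # callback returning a physical frame
        self.page_table = {}              # virtual page -> (frame, size)
        self.tlb = {}                     # virtual page -> (frame, size)

    def handle_fault(self, fault_addr):
        # Determine whether the page fault resulted from a stack access.
        is_stack = self.stack_limit <= fault_addr < self.stack_base
        size = SUPERPAGE_SIZE if is_stack else PAGE_SIZE
        vpage = fault_addr - (fault_addr % size)
        frame = self.alloc_frame(size)
        # Create the (super)page table entry and a matching TLB entry.
        self.page_table[vpage] = (frame, size)
        self.tlb[vpage] = (frame, size)
        return vpage, size

if __name__ == "__main__":
    handler = PageFaultHandler(stack_base=0x7fff_0000_0000,
                               stack_limit=0x7ffe_0000_0000,
                               alloc_frame=lambda size: 0x1000_0000)
    print(handler.handle_fault(0x7ffe_ffff_f000))   # stack access -> superpage mapping
```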
Abstract:
In one embodiment, the present invention includes a method for handling a registration message received from a host processor, where the registration message delegates a poll operation with respect to a device from the host processor to another component. Information from the message may be stored in a poll table, and the component may send a read request to poll the device and report a result of the poll to the host processor based on a state of the device. Other embodiments are described and claimed.
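The following sketch models the delegation in software: a registration message is recorded in a poll table, the component issues read requests to poll the device, and a result is reported to the host when the device reaches the registered state. The message fields, callback names, and "ready value" trigger are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Registration:
    """Contents of a registration message (illustrative fields)."""
    host_id: int
    device_addr: int
    ready_value: int        # device state that should trigger a report

class PollingComponent:
    """Toy offload component that polls devices on behalf of a host."""

    def __init__(self, read_device, notify_host):
        self.read_device = read_device    # callback: device_addr -> state
        self.notify_host = notify_host    # callback: (host_id, device_addr, state)
        self.poll_table = []              # information from registration messages

    def handle_registration(self, msg: Registration):
        # Store the delegated poll operation in the poll table.
        self.poll_table.append(msg)

    def poll_once(self):
        for entry in self.poll_table:
            state = self.read_device(entry.device_addr)   # read request polling the device
            if state == entry.ready_value:
                # Report the poll result to the host based on the device state.
                self.notify_host(entry.host_id, entry.device_addr, state)

if __name__ == "__main__":
    component = PollingComponent(
        read_device=lambda addr: 1,   # toy device always reports "ready"
        notify_host=lambda host, addr, state: print(f"host {host}: device {addr:#x} -> {state}"),
    )
    component.handle_registration(Registration(host_id=0, device_addr=0xF000, ready_value=1))
    component.poll_once()
```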
Abstract:
Methods and apparatus for control of On-Die System Fabric (OSF) blocks are described. In one embodiment, a shadow address corresponding to a physical address may be stored in response to a user-level request, and logic circuitry (e.g., present in an OSF) may determine the physical address from the shadow address. Other embodiments are also disclosed.
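A toy software model of the shadow-to-physical mapping might look as follows; the table layout and API names are assumptions, and the real mechanism is hardware logic rather than a Python dictionary.

```python
class ShadowAddressMap:
    """Toy model of shadow-address translation for illustration only."""

    def __init__(self):
        self._table = {}   # shadow address -> physical address

    def register(self, shadow_addr, physical_addr):
        # Store the shadow address corresponding to the physical address,
        # e.g. in response to a user-level request.
        self._table[shadow_addr] = physical_addr

    def resolve(self, shadow_addr):
        # Determine the physical address from the shadow address.
        return self._table[shadow_addr]

if __name__ == "__main__":
    osf_map = ShadowAddressMap()
    osf_map.register(shadow_addr=0xABCD_0000, physical_addr=0x1234_0000)
    print(hex(osf_map.resolve(0xABCD_0000)))   # 0x12340000
```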
Abstract:
In one embodiment, the present invention includes a method for incrementing a counter value associated with a cache line if the cache line is inserted into a first level cache, and storing the cache line into a second level cache coupled to the first level cache or a third level cache coupled to the second level cache based on the counter value, after eviction from the first level cache. Other embodiments are described and claimed.
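A brief sketch of this flow, with an assumed threshold policy, is given below: the counter is incremented on each insertion into the first-level cache, and after eviction from that cache the counter decides whether the line is placed in the second-level or third-level cache. The threshold value and function names are illustrative.

```python
REUSE_THRESHOLD = 2   # assumed threshold for deciding between L2 and L3

class CacheLine:
    def __init__(self, tag):
        self.tag = tag
        self.counter = 0   # incremented on each insertion into the L1 cache

def insert_into_l1(line, l1):
    line.counter += 1      # count insertions into the first-level cache
    l1[line.tag] = line

def place_after_l1_eviction(line, l2, l3):
    # After eviction from L1, choose the next cache level from the counter:
    # frequently re-inserted lines go to the second-level cache, others to
    # the third-level cache.  The threshold policy is an assumption.
    target = l2 if line.counter >= REUSE_THRESHOLD else l3
    target[line.tag] = line
    return "L2" if target is l2 else "L3"

if __name__ == "__main__":
    l1, l2, l3 = {}, {}, {}
    line = CacheLine(tag=0x40)
    insert_into_l1(line, l1)
    insert_into_l1(line, l1)       # re-inserted after a miss, counter now 2
    del l1[line.tag]               # evicted from L1
    print(place_after_l1_eviction(line, l2, l3))   # "L2"
```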
Abstract:
A method and apparatus for throttling power and/or performance of processing elements based on a priority of software entities is described herein. Priority-aware power management logic receives priority levels of software entities and modifies operating points of processing elements associated with those software entities accordingly. Therefore, in a power savings mode, processing elements executing low-priority applications/tasks are reduced to a lower operating point, i.e., lower voltage, lower frequency, throttled instruction issue, throttled memory accesses, and/or less access to shared resources. In addition, utilization logic potentially tracks utilization of a resource per priority level, which allows the power manager to determine operating points based on the effect each priority level has on the others from the perspective of the resources themselves. Moreover, a software entity itself may assign operating points, which the power manager enforces.
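The sketch below models the core idea in software: per-priority utilization is tracked, and a lower voltage/frequency operating point is selected for low-priority work when a power-savings mode is active. The operating-point table, priority labels, and selection rule are assumptions for illustration.

```python
from collections import defaultdict

# Assumed operating points: (voltage in volts, frequency in GHz).
OPERATING_POINTS = {
    "high":   (1.10, 3.0),
    "normal": (0.95, 2.2),
    "low":    (0.80, 1.2),
}

class PriorityAwarePowerManager:
    """Toy power manager that picks operating points from task priority."""

    def __init__(self):
        self.utilization = defaultdict(float)   # resource utilization per priority level

    def record_utilization(self, priority, amount):
        # Track how much of a shared resource each priority level consumes.
        self.utilization[priority] += amount

    def operating_point(self, priority, power_savings_mode):
        # Outside power-savings mode everything runs at its nominal point; in
        # power-savings mode, low-priority work is dropped to a lower
        # voltage/frequency point.  The table above is an assumption.
        if power_savings_mode and priority == "low":
            return OPERATING_POINTS["low"]
        return OPERATING_POINTS["high" if priority == "high" else "normal"]

if __name__ == "__main__":
    pm = PriorityAwarePowerManager()
    pm.record_utilization("low", 0.3)
    pm.record_utilization("high", 0.6)
    print(pm.operating_point("low", power_savings_mode=True))    # (0.8, 1.2)
    print(pm.operating_point("high", power_savings_mode=True))   # (1.1, 3.0)
```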
Abstract:
In one embodiment, the present invention includes a translation lookaside buffer (TLB) having storage locations each including a priority indicator field to store a priority level associated with an agent that requested storage of the data in the TLB, and an identifier field to store an identifier of the agent, where the TLB is apportioned according to a plurality of priority levels. Other embodiments are described and claimed.
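The following sketch models such a TLB in software: each entry carries a priority indicator field and an agent identifier field, and the structure is apportioned among priority levels via per-priority quotas. The quota sizes and the FIFO replacement within a partition are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class TLBEntry:
    vpn: int          # virtual page number
    pfn: int          # physical frame number
    priority: int     # priority indicator field
    agent_id: int     # identifier of the agent that requested the fill

class PartitionedTLB:
    """Toy TLB apportioned among priority levels (quotas are assumptions)."""

    def __init__(self, quotas):
        self.quotas = quotas                      # priority level -> max entries
        self.entries = {p: [] for p in quotas}    # priority level -> resident entries

    def fill(self, vpn, pfn, priority, agent_id):
        bucket = self.entries[priority]
        if len(bucket) >= self.quotas[priority]:
            bucket.pop(0)                         # evict within the same partition
        bucket.append(TLBEntry(vpn, pfn, priority, agent_id))

    def lookup(self, vpn):
        for bucket in self.entries.values():
            for entry in bucket:
                if entry.vpn == vpn:
                    return entry
        return None

if __name__ == "__main__":
    tlb = PartitionedTLB(quotas={0: 2, 1: 6})     # low priority gets fewer slots
    tlb.fill(vpn=0x10, pfn=0x80, priority=1, agent_id=3)
    print(tlb.lookup(0x10))
```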
Abstract:
Embodiments of apparatuses, methods, and systems for sharing information between guests in a virtual machine environment are disclosed. In one embodiment, an apparatus includes virtual machine control logic, an execution unit, and a memory management unit. The virtual machine control logic is to transfer control of the apparatus among a host and its guests. The execution unit is to execute an instruction to copy information from a virtual memory address in one guest's virtual address space to a virtual memory address in another guest's virtual address space. The memory management unit is to translate the virtual memory addresses to physical memory addresses.
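A software model of the copy operation is sketched below: the source virtual address in one guest's address space and the destination virtual address in another guest's address space are both translated to physical addresses, and the bytes are moved in physical memory. The toy page-table representation, page size, and byte-at-a-time copy are assumptions, not the apparatus itself.

```python
class GuestAddressSpace:
    """Toy per-guest MMU state: virtual page number -> physical frame number."""

    def __init__(self, page_table):
        self.page_table = page_table

    def translate(self, vaddr, page_size=4096):
        vpn, offset = divmod(vaddr, page_size)
        return self.page_table[vpn] * page_size + offset

def copy_between_guests(physical_memory, src_guest, src_vaddr,
                        dst_guest, dst_vaddr, length):
    """Model of copying from one guest's virtual address space to another's."""
    for i in range(length):
        src_pa = src_guest.translate(src_vaddr + i)
        dst_pa = dst_guest.translate(dst_vaddr + i)
        physical_memory[dst_pa] = physical_memory[src_pa]

if __name__ == "__main__":
    memory = bytearray(64 * 4096)
    guest_a = GuestAddressSpace({0: 1})   # guest A virtual page 0 -> frame 1
    guest_b = GuestAddressSpace({0: 2})   # guest B virtual page 0 -> frame 2
    memory[1 * 4096:1 * 4096 + 5] = b"hello"
    copy_between_guests(memory, guest_a, 0, guest_b, 0, 5)
    print(bytes(memory[2 * 4096:2 * 4096 + 5]))   # b'hello'
```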
Abstract:
Methods and apparatus to pin a lock in a shared cache are described. In one embodiment, a memory access request is used to pin a lock of one or more cache lines in a shared cache that correspond to the memory access request.
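The sketch below models lock pinning in software: a memory access request may mark its cache line as pinned, and the replacement policy skips pinned lines. The FIFO-like policy and the pin request encoding are assumptions made for illustration.

```python
class SharedCache:
    """Toy shared cache in which lock-holding lines can be pinned."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}      # address -> data
        self.pinned = set()  # addresses of pinned (lock) lines

    def access(self, addr, data, pin_lock=False):
        if addr not in self.lines and len(self.lines) >= self.capacity:
            self._evict()
        self.lines[addr] = data
        if pin_lock:
            # The memory access request pins the lock's cache line.
            self.pinned.add(addr)

    def _evict(self):
        # Replacement skips pinned lines, so the lock stays resident.
        for addr in list(self.lines):
            if addr not in self.pinned:
                del self.lines[addr]
                return
        raise RuntimeError("all resident lines are pinned")

if __name__ == "__main__":
    cache = SharedCache(capacity=2)
    cache.access(0x100, "lock word", pin_lock=True)
    cache.access(0x200, "data")
    cache.access(0x300, "data")      # evicts 0x200, never the pinned lock line
    print(0x100 in cache.lines)      # True
```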
Abstract:
In one embodiment, the present invention includes a method of analyzing an extensible markup language (XML) file, generating structural information for the XML file, and incorporating the structural information into the XML file. The structural information may correspond to a hierarchy of the file and may further include size information corresponding to elements of the file. In such manner, the structural information may be transmitted with the XML file and used to aid a receiver of the file in parsing. Other embodiments are described and claimed.
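As a rough illustration, the sketch below analyzes an XML document and embeds per-element structural hints (depth, child count, serialized size) as attributes, so a receiver could use them while parsing. The attribute names and the choice to encode the hints as attributes are assumptions; the abstract does not specify the encoding.

```python
import xml.etree.ElementTree as ET

def annotate_with_structure(xml_text):
    """Attach illustrative structural hints to each element of an XML document."""
    root = ET.fromstring(xml_text)

    def annotate(elem, depth):
        for child in list(elem):
            annotate(child, depth + 1)
        elem.set("s-depth", str(depth))          # position in the hierarchy
        elem.set("s-children", str(len(elem)))   # number of child elements
        # Size of the element's serialized form, recorded after annotating
        # its children so nested hints are included in the count.
        elem.set("s-size", str(len(ET.tostring(elem))))

    annotate(root, 0)
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    doc = "<order><item>book</item><item>pen</item></order>"
    print(annotate_with_structure(doc))
```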