Abstract:
A processor including logic to execute an instruction to synchronize a mapping, stored in a translation lookaside buffer (TLB), from a physical address of a guest of a virtualization-based system (guest physical address) to a physical address of the host of the virtualization-based system (host physical address), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization-based system.
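The abstract describes a hardware instruction; as a rough illustration only, the sketch below is a minimal software model of what "synchronizing" cached guest-physical-to-host-physical translations against the EPT could mean. The structure names, sizes, and the linear lookup are assumptions, not the hardware design.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative software model only (not the hardware implementation). */
struct gpa_tlb_entry {
    uint64_t gpa;    /* guest physical address (page-aligned) */
    uint64_t hpa;    /* cached host physical address */
    bool     valid;
};

struct ept_entry {
    uint64_t gpa;
    uint64_t hpa;
};

/* "Synchronize" cached GPA->HPA translations with the EPT: any cached
 * mapping that no longer matches the extended paging table is
 * invalidated, so the next access re-walks the EPT. */
static void sync_tlb_with_ept(struct gpa_tlb_entry *tlb, size_t tlb_n,
                              const struct ept_entry *ept, size_t ept_n)
{
    for (size_t i = 0; i < tlb_n; i++) {
        if (!tlb[i].valid)
            continue;
        bool still_mapped = false;
        for (size_t j = 0; j < ept_n; j++) {
            if (ept[j].gpa == tlb[i].gpa && ept[j].hpa == tlb[i].hpa) {
                still_mapped = true;
                break;
            }
        }
        if (!still_mapped)
            tlb[i].valid = false;   /* stale entry: drop it */
    }
}
```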
Abstract:
Methods and apparatus relating to disabling one or more cache portions during low voltage operations are described. In some embodiments, one or more extra bits may be used for a portion of a cache to indicate whether that portion of the cache is capable of operating at or below Vccmin levels. Other embodiments are also described and claimed.
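A hedged sketch of the idea: each cache way (one possible "portion") carries an extra capability bit recorded during characterization, and ways whose bit is clear are disabled while the part runs at or below Vccmin. The per-way granularity and all names are assumptions for illustration.

```c
#include <stdbool.h>

#define NUM_WAYS 8

/* Illustrative model: per-way metadata with one extra bit that records
 * whether the way's cells were characterized as reliable at/below Vccmin. */
struct cache_way {
    bool vccmin_capable;  /* extra bit: safe to use at low voltage */
    bool enabled;
};

/* When entering a low-voltage mode, keep only the portions of the cache
 * marked capable of operating at or below Vccmin; re-enable everything
 * when returning to nominal voltage. */
static void set_low_voltage_mode(struct cache_way ways[NUM_WAYS], bool low_voltage)
{
    for (int w = 0; w < NUM_WAYS; w++)
        ways[w].enabled = low_voltage ? ways[w].vccmin_capable : true;
}
```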
Abstract:
A method, system, and apparatus may initialize a fixed plurality of page table entries for a fixed plurality of pages in memory, each page having a first size, wherein a linear address for each page table entry corresponds to a physical address and the fixed plurality of pages is aligned. A bit in each of the page table entries for the aligned pages may be set to indicate whether or not the fixed plurality of pages is to be treated as one combined page having a second page size larger than the first page size. Other embodiments are described and claimed.
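A minimal sketch of the described setup: a fixed group of aligned small-page entries mapping contiguous physical memory, with one bit set in each entry to mark the group as a single larger page. The group size of 4, the bit position, and the simplified entry format are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define GROUP_SIZE   4              /* assumed fixed plurality of pages */
#define PAGE_SIZE_4K 0x1000ULL
#define COMBINE_BIT  (1ULL << 62)   /* assumed software-available PTE bit */

/* Initialize GROUP_SIZE page table entries mapping aligned, contiguous
 * pages of the first size (4 KiB), and set the combine bit in each so
 * the group may be treated as one page of the second, larger size
 * (16 KiB in this sketch). base_phys must be aligned to the combined size. */
static void init_combined_ptes(uint64_t pte[GROUP_SIZE],
                               uint64_t base_phys, bool combine)
{
    for (int i = 0; i < GROUP_SIZE; i++) {
        pte[i] = base_phys + (uint64_t)i * PAGE_SIZE_4K;
        if (combine)
            pte[i] |= COMBINE_BIT;
    }
}
```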
Abstract:
A method and system to provide user-level multithreading are disclosed. The method according to the present techniques comprises receiving programming instructions to execute one or more shared resource threads (shreds) via an instruction set architecture (ISA). One or more instruction pointers are configured via the ISA; and the one or more shreds are executed simultaneously on a microprocessor, wherein the microprocessor includes multiple instruction sequencers.
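The shred interface itself is an ISA-level feature with no portable C binding, so the snippet below is only a software analogy of the programming model: configure an "instruction pointer" (here, a function) and run it concurrently with the main thread. POSIX threads are a stand-in; real shreds would be created and scheduled by user-level instructions on separate instruction sequencers, without OS involvement.

```c
#include <pthread.h>
#include <stdio.h>

/* Analogy only: this body plays the role of the code a shred's
 * instruction pointer would be configured to execute. */
static void *shred_body(void *arg)
{
    printf("shred running with arg %ld\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t shred;
    /* "Configure an instruction pointer": point the new context at shred_body. */
    pthread_create(&shred, NULL, shred_body, (void *)42L);
    printf("main thread continues concurrently\n");
    pthread_join(shred, NULL);
    return 0;
}
```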
Abstract:
Multiple parallel passive threads of instructions coordinate access to shared resources using “active” and “proactive” semaphores. The active semaphores send messages to execution and/or control circuitry to cause the state of a thread to change. A thread can be placed in an inactive state by a thread scheduler in response to an unresolved dependency, which can be indicated by a semaphore. A thread state variable corresponding to the dependency is used to indicate that the thread is in inactive mode. When the dependency is resolved, a message is passed to control circuitry, causing the dependency variable to be cleared. In response to the cleared dependency variable, the thread is placed in an active state. Execution can proceed on the threads in the active state. A proactive semaphore operates in a similar manner, except that the semaphore is configured by the thread dispatcher before or after the thread is dispatched to the execution circuitry for execution.
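A small software model of the thread-state handling described above: a per-thread dependency variable is set when a semaphore reports an unresolved dependency, and cleared by a message from control circuitry, at which point the thread returns to the active state. All names are illustrative, not the hardware interface.

```c
#include <stdbool.h>

enum thread_state { THREAD_ACTIVE, THREAD_INACTIVE };

struct thread_ctx {
    enum thread_state state;
    bool dependency_pending;  /* thread state variable tied to a semaphore */
};

/* Scheduler side: an unresolved dependency parks the thread. */
static void mark_dependency(struct thread_ctx *t)
{
    t->dependency_pending = true;
    t->state = THREAD_INACTIVE;
}

/* Control-circuitry side: the semaphore message clears the dependency
 * variable, making the thread eligible for execution again. */
static void dependency_resolved(struct thread_ctx *t)
{
    t->dependency_pending = false;
    if (t->state == THREAD_INACTIVE)
        t->state = THREAD_ACTIVE;
}
```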
Abstract:
Methods, apparatus, and instructions for performing string comparison operations. In one embodiment, an apparatus includes execution resources to execute a first instruction. In response to the first instruction, said execution resources store a result of a comparison between each data element of a first and second operand corresponding to a first and second text string, respectively.
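The behavior described matches that of the SSE4.2 packed string-compare instructions (PCMPESTR*/PCMPISTR*); assuming that correspondence, here is a minimal example using the real _mm_cmpistrm intrinsic to compare two 16-byte text operands element by element and store a per-element result mask. Requires an SSE4.2-capable CPU and compilation with -msse4.2.

```c
#include <nmmintrin.h>   /* SSE4.2: PCMPISTRM and friends */
#include <stdio.h>

int main(void)
{
    /* First and second operands, each corresponding to a 16-byte text string. */
    __m128i a = _mm_loadu_si128((const __m128i *)"string compare!!");
    __m128i b = _mm_loadu_si128((const __m128i *)"string COMPARE!!");

    /* Byte-wise "equal each" comparison; the result mask holds 0xFF where
     * the corresponding elements match and 0x00 where they differ. */
    __m128i mask = _mm_cmpistrm(a, b,
                                _SIDD_UBYTE_OPS |
                                _SIDD_CMP_EQUAL_EACH |
                                _SIDD_UNIT_MASK);

    unsigned char out[16];
    _mm_storeu_si128((__m128i *)out, mask);
    for (int i = 0; i < 16; i++)
        printf("%02x ", out[i]);
    printf("\n");
    return 0;
}
```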
Abstract:
A method and apparatus for including in a processor instructions for performing multiply-add operations on packed data. In one embodiment, a processor is coupled to a memory. The memory has stored therein a first packed data and a second packed data. The processor performs operations on data elements in said first packed data and said second packed data to generate a third packed data in response to receiving an instruction. At least two of the data elements in this third packed data store the result of performing multiply-add operations on data elements in the first and second packed data.
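This matches the behavior of PMADDWD-style packed multiply-add, exposed as the real _mm_madd_epi16 intrinsic: pairs of 16-bit elements from the two source operands are multiplied and adjacent products summed into 32-bit elements of the result. A short example, assuming SSE2 (compile with any x86-64 compiler).

```c
#include <emmintrin.h>   /* SSE2: PMADDWD */
#include <stdio.h>

int main(void)
{
    /* First and second packed data: eight signed 16-bit elements each. */
    __m128i a = _mm_setr_epi16(1, 2, 3, 4, 5, 6, 7, 8);
    __m128i b = _mm_setr_epi16(10, 20, 30, 40, 50, 60, 70, 80);

    /* Each 32-bit element of the third packed data stores a multiply-add:
     * r0 = 1*10 + 2*20, r1 = 3*30 + 4*40, r2 = 5*50 + 6*60, r3 = 7*70 + 8*80 */
    __m128i r = _mm_madd_epi16(a, b);

    int out[4];
    _mm_storeu_si128((__m128i *)out, r);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* 50 250 610 1130 */
    return 0;
}
```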
Abstract:
Method, apparatus and system embodiments provide support for multiple SoEMT software threads on multiple SMT logical thread contexts. A sleep state mechanism maintains a current value of an element of architecture state for each physical thread. The current value corresponds to an active virtual thread currently running on the physical thread. The sleep state mechanism also maintains sleep values of the architecture state element for each inactive thread. The active and inactive values may be maintained in a cross-bar configuration. Upon a read of the architecture state element, simplified mux logic selects among the current values to provide the current value for the appropriate active thread. Upon a thread switch, control logic associated with the sleep state mechanism swaps the active state value for the current thread with the inactive state value for the new thread.
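A software model of the swap described above: each physical thread keeps the current value of an architecture-state element for its active virtual thread plus stored values for inactive virtual threads, reads return the active value, and a thread switch exchanges the two. Reducing the state to a single 64-bit element and all names are assumptions for illustration.

```c
#include <stdint.h>

#define NUM_VIRTUAL_THREADS 2

/* Illustrative model of one architecture-state element maintained
 * per physical thread. */
struct sleep_state {
    uint64_t current;                        /* value for the active virtual thread */
    uint64_t sleeping[NUM_VIRTUAL_THREADS];  /* sleep values for inactive threads */
    int      active;                         /* index of the active virtual thread */
};

/* Reads always return the active thread's value (the "mux"). */
static uint64_t read_state(const struct sleep_state *s)
{
    return s->current;
}

/* On a virtual-thread switch, swap the active value with the stored
 * sleep value of the thread being woken. */
static void switch_thread(struct sleep_state *s, int new_thread)
{
    s->sleeping[s->active] = s->current;
    s->current = s->sleeping[new_thread];
    s->active = new_thread;
}
```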
Abstract:
When legacy instructions, which can only operate on smaller registers, are mixed with new instructions in a processor with larger registers, special handling and architecture are used to prevent the legacy instructions from causing problems with the data in the upper portion of the registers, i.e., the portion that they cannot directly access. In some embodiments, the upper portion of the registers is saved to temporary storage while the legacy instructions are operating, and restored to the upper portion of the registers when the new instructions are operating. A special instruction may also be used to disable this save/restore operation if the new instructions are not going to use the upper part of the registers.
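This closely resembles the AVX/SSE transition handling in the x86 ISA, where mixing 256-bit AVX code with legacy 128-bit SSE code can trigger saving of the upper register halves, and the VZEROUPPER instruction (exposed as the real _mm256_zeroupper intrinsic) tells the hardware the upper halves need not be preserved. Assuming that context, a short sketch of the programming pattern follows; note that the true transition penalty arises when genuinely non-VEX-encoded SSE code (e.g. a separately compiled library) is called, which this single file compiled with -mavx only approximates.

```c
#include <immintrin.h>
#include <stdio.h>

/* Legacy-style routine that only touches the lower 128 bits of the registers. */
static __m128 legacy_scale(__m128 v)
{
    return _mm_mul_ps(v, _mm_set1_ps(2.0f));
}

int main(void)
{
    float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    __m256 v  = _mm256_loadu_ps(data);          /* 256-bit ("new") instruction */
    __m128 lo = _mm256_castps256_ps128(v);
    __m128 hi = _mm256_extractf128_ps(v, 1);

    /* Signal that the upper register halves are no longer needed before
     * running 128-bit legacy-style code, avoiding the implicit
     * save/restore of the upper portions. */
    _mm256_zeroupper();

    __m128 r = _mm_add_ps(legacy_scale(lo), legacy_scale(hi));
    float out[4];
    _mm_storeu_ps(out, r);
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```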