Abstract:
The present invention relates to a mechanism for updating a fully associative array that is used to store entries associated with speculated instructions. Preferably, the array includes a plurality of data banks for storing entries, a plurality of ports for writing to the plurality of data banks, and pointers associated with the respective banks for identifying table locations suitable for overwriting by upcoming entries, wherein an entry is suitable for overwriting when it is deemed invalid by the inventive system. A preferred embodiment is disclosed involving two ports writing to two banks, wherein a plurality of factors is considered in deciding where prospective entries from the two ports will be written in the table. These factors include matches between existing and prospective entries, the default designated data bank for a given port, whether two write operations are being attempted simultaneously, and the number of entries already present in each data bank.
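As an illustration of the bank-selection factors listed above, the following C++ sketch models two banks fed by two ports; all names (Entry, Bank, chooseBank) and the specific tie-break order are assumptions made for illustration, not taken from the claims.

#include <array>
#include <cstddef>
#include <cstdint>

struct Entry {
    uint64_t tag   = 0;
    bool     valid = false;   // invalid entries are suitable for overwriting
};

struct Bank {
    std::array<Entry, 8> slots{};     // one small fully associative bank
    std::size_t replacePtr = 0;       // pointer to the next overwritable slot

    int occupancy() const {
        int n = 0;
        for (const auto& e : slots) n += e.valid;
        return n;
    }
    bool matches(uint64_t tag) const {
        for (const auto& e : slots)
            if (e.valid && e.tag == tag) return true;
        return false;
    }
};

// Decide which of the two banks should receive the entry arriving on `port`
// (0 or 1); port 0 defaults to bank 0 and port 1 to bank 1.
int chooseBank(const std::array<Bank, 2>& banks, int port,
               uint64_t newTag, bool bothPortsWriting) {
    // 1. A match with an existing entry keeps the new entry in that bank.
    for (int b = 0; b < 2; ++b)
        if (banks[b].matches(newTag)) return b;
    // 2. Two simultaneous writes stay on their default banks to avoid a conflict.
    if (bothPortsWriting) return port;
    // 3. Otherwise prefer the less occupied bank, falling back to the default.
    if (banks[0].occupancy() != banks[1].occupancy())
        return banks[0].occupancy() < banks[1].occupancy() ? 0 : 1;
    return port;
}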
Abstract:
A method and an apparatus are disclosed for resteering failing speculation check instructions in the pipeline of a processor. A branch offset immediate value and an instruction pointer correspond to each failing instruction, and these values are used to determine the correct target recovery address: a relative adder adds the immediate value and the instruction pointer value to arrive at the target recovery address. Upon the occurrence of a failing speculation check instruction, the pipeline is flushed, and the flush is extended to allow the instruction stream to be resteered. The immediate value and the instruction pointer are then routed through the existing data paths of the pipeline into the relative adder, which calculates the correct address. A sequencer tracks the progression of these values through the pipeline and causes a branch at the desired time.
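A minimal sketch of the recovery-address computation described above, using illustrative names (SpecCheck, recoveryTarget); in hardware the addition is performed by the pipeline's relative adder after the extended flush.

#include <cstdint>
#include <cstdio>

struct SpecCheck {
    uint64_t ip;    // instruction pointer of the failing speculation check
    int64_t  imm;   // branch offset immediate carried by the instruction
};

// Relative add: target recovery address = instruction pointer + signed offset.
uint64_t recoveryTarget(const SpecCheck& c) {
    return c.ip + static_cast<uint64_t>(c.imm);
}

int main() {
    SpecCheck failing{0x100000ULL, 0x200};
    // On a failing check the pipeline is flushed and fetch is resteered here.
    std::printf("resteer to %#llx\n",
                static_cast<unsigned long long>(recoveryTarget(failing)));
    return 0;
}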
Abstract:
In an embodiment, a page miss handler includes paging caches and a first walker to receive a first linear address portion and to obtain a corresponding portion of a physical address from a paging structure, a second walker to operate concurrently with the first walker, and a logic to prevent the first walker from storing the obtained physical address portion in a paging cache responsive to the first linear address portion matching a corresponding linear address portion of a concurrent paging structure access by the second walker. Other embodiments are described and claimed.
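The following C++ sketch models the fill-suppression condition described above; the structure and function names are assumptions made for illustration.

#include <cstdint>

struct WalkerState {
    bool     active = false;
    uint64_t linearPortion = 0;   // linear-address portion being walked
};

struct PagingCache {
    // Stands in for installing a physical-address portion obtained from a
    // paging structure.
    void fill(uint64_t linearPortion, uint64_t physPortion) {
        (void)linearPortion;
        (void)physPortion;
    }
};

void completeWalkStep(PagingCache& cache, const WalkerState& self,
                      const WalkerState& other, uint64_t physPortion) {
    // Suppress this walker's paging-cache fill when the other walker is
    // concurrently accessing the same linear-address portion, so the two
    // walkers do not install duplicate entries for the same translation.
    bool duplicateInFlight = other.active &&
                             other.linearPortion == self.linearPortion;
    if (!duplicateInFlight)
        cache.fill(self.linearPortion, physPortion);
}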
Abstract:
An exemplary aspect comprises receiving data related to an underlying asset; calculating values corresponding to near-term implied volatility and realized volatility for the underlying asset; and transmitting data sufficient to describe an index based on a difference between the values corresponding to the near-term implied volatility and the realized volatility for the underlying asset. Another exemplary aspect comprises receiving electronic data related to an underlying asset; calculating data sufficient to describe a plurality of call options and a plurality of put options related to the underlying asset and written on a first settlement date; crediting an account with proceeds from selling the call and put options; and debiting the account to settle one or more of the options that are in-the-money on a second settlement date. Other aspects are apparent from the description and claims.
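A hedged numerical sketch of the first aspect above: the index value is taken as the difference between a near-term implied volatility and a realized volatility for the underlying asset. The realized-volatility estimator used here (annualized standard deviation of close-to-close log returns) is an assumption for illustration only.

#include <cmath>
#include <cstddef>
#include <vector>

double realizedVol(const std::vector<double>& closes, double periodsPerYear) {
    if (closes.size() < 3) return 0.0;
    std::vector<double> rets;
    for (std::size_t i = 1; i < closes.size(); ++i)
        rets.push_back(std::log(closes[i] / closes[i - 1]));
    double mean = 0.0;
    for (double r : rets) mean += r;
    mean /= rets.size();
    double var = 0.0;
    for (double r : rets) var += (r - mean) * (r - mean);
    var /= (rets.size() - 1);
    return std::sqrt(var * periodsPerYear);
}

// Index value: near-term implied volatility minus realized volatility.
double volDifferenceIndex(double nearTermImpliedVol,
                          const std::vector<double>& closes) {
    return nearTermImpliedVol - realizedVol(closes, 252.0);
}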
Abstract:
Systems, methodologies, media, and other embodiments associated with mitigating the effects of context switch cache and TLB misses are described. One exemplary system embodiment includes a processor configured to run a multiprocessing, virtual memory operating system. The processor may be operably connected to a memory and may include a cache and a translation lookaside buffer (TLB) configured to store TLB entries. The exemplary system may include a context control logic configured to selectively copy data from the TLB to a data store for a first process being swapped out of the processor and to selectively copy data from the data store to the TLB for a second process being swapped into the processor.
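A minimal software model of the save/restore behavior described above, with assumed names (Tlb, ProcessStore, contextSwitch): on a context switch the outgoing process's TLB entries are copied to its data store, and any entries previously saved for the incoming process are copied back.

#include <cstdint>
#include <unordered_map>
#include <vector>

struct TlbEntry {
    uint64_t virtPage;
    uint64_t physFrame;
    uint32_t asid;   // address-space id of the owning process
};

struct Tlb {
    std::vector<TlbEntry> entries;
};

// Per-process data store holding TLB entries saved at swap-out time.
using ProcessStore = std::unordered_map<uint32_t, std::vector<TlbEntry>>;

void contextSwitch(Tlb& tlb, ProcessStore& store,
                   uint32_t outgoingAsid, uint32_t incomingAsid) {
    // Selectively copy the outgoing process's entries out of the TLB.
    std::vector<TlbEntry> kept;
    for (const auto& e : tlb.entries) {
        if (e.asid == outgoingAsid) store[outgoingAsid].push_back(e);
        else kept.push_back(e);
    }
    tlb.entries = std::move(kept);
    // Selectively copy the incoming process's saved entries back into the
    // TLB so it does not resume with a cold TLB.
    auto it = store.find(incomingAsid);
    if (it != store.end()) {
        tlb.entries.insert(tlb.entries.end(),
                           it->second.begin(), it->second.end());
        store.erase(it);
    }
}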
Abstract:
Duplicate record processing is enabled employing customizable rules. Detected duplicate records are merged, deleted, deactivated, or moved based on one or more sets of customizable rules. Different rule sets may be used for each record type, or a rule set may be reused for different records. Hierarchical relationships between master and child records are adjusted upon duplicate processing based on rules and/or record attributes.
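A sketch of rule-driven duplicate handling of the kind described above; the record fields, action set, and rule lookup are assumptions made for illustration.

#include <map>
#include <string>

enum class DuplicateAction { Merge, Delete, Deactivate, Move };

struct Record {
    std::string id;
    std::string type;       // record type, e.g. "account" or "contact"
    std::string parentId;   // hierarchical link to a master record
    bool active = true;
};

// One rule set maps a record type to the action taken on a detected duplicate;
// different rule sets can be supplied per record type, or one set can be reused.
using RuleSet = std::map<std::string, DuplicateAction>;

void processDuplicate(const Record& master, Record& duplicate,
                      const RuleSet& rules) {
    auto it = rules.find(duplicate.type);
    DuplicateAction action =
        (it != rules.end()) ? it->second : DuplicateAction::Deactivate;
    switch (action) {
    case DuplicateAction::Merge:
    case DuplicateAction::Delete:
    case DuplicateAction::Deactivate:
        // The duplicate is retired; a full system would also re-parent its
        // child records to the surviving master to keep the hierarchy intact.
        duplicate.active = false;
        break;
    case DuplicateAction::Move:
        // The duplicate is kept but re-attached under the master record.
        duplicate.parentId = master.id;
        break;
    }
}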
Abstract:
A processing unit of the invention has multiple instruction pipelines for processing multi-threaded instructions. Each thread may have an urgency associated with its program instructions. The processing unit has a thread switch controller that monitors processing of instructions through the various pipelines and controls switch events to move from one thread to another within the pipelines. The controller may modify the urgency of any thread, such as by issuing an additional instruction. The thread switch controller preferably utilizes certain heuristics in making switch event decisions. A time slice expiration unit may also monitor the expiration of a given thread's time slice.
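A sketch of a thread-switch decision of the kind described above; the state fields and the specific heuristic ordering are assumptions made for illustration.

#include <array>
#include <cstdint>

struct ThreadState {
    int      urgency = 0;          // may be raised, e.g. by an additional instruction
    uint64_t sliceRemaining = 0;   // cycles left in this thread's time slice
    bool     stalled = false;      // e.g. waiting on a long-latency event
};

// Returns the index of the thread that should occupy the pipelines next.
int selectThread(const std::array<ThreadState, 2>& threads, int current) {
    const int other = 1 - current;
    const ThreadState& cur = threads[current];
    const ThreadState& oth = threads[other];
    // Switch if the current thread's time slice has expired or it is stalled,
    // or if the other thread carries strictly higher urgency.
    if (cur.sliceRemaining == 0 || cur.stalled) return other;
    if (oth.urgency > cur.urgency) return other;
    return current;
}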
Abstract:
The present invention relates to an improved microprocessor having a memory system with several caches that can be operated to provide virtual memory. Among the caches included in the microprocessor are conventional caches that store data and instructions to be utilized by the processes being performed by the microprocessor, and that are typically arranged in a cache hierarchy, as well as one or more translation lookaside buffer (TLB) caches that store a limited number of virtual page translations. The improved microprocessor also has an additional cache that serves to store a virtual hash page table (VHPT) that is accessed when TLB misses occur. The introduction of this VHPT cache eliminates or at least reduces the need for the microprocessor to look for information within the caches of the cache hierarchy or in other memory (e.g., main memory) outside of the microprocessor when TLB misses occur, and consequently enhances microprocessor speed.
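A minimal software model of the lookup order described above, with assumed names: on a TLB miss the dedicated VHPT cache is consulted before falling back to a full walk in the cache hierarchy or main memory.

#include <cstdint>
#include <optional>
#include <unordered_map>

struct Translation { uint64_t physPage; };

struct TranslationTable {
    std::unordered_map<uint64_t, Translation> map;
    std::optional<Translation> lookup(uint64_t vpn) const {
        auto it = map.find(vpn);
        if (it == map.end()) return std::nullopt;
        return it->second;
    }
};

// Placeholder for the slow path: walking the VHPT in the cache hierarchy or
// in main memory outside of the dedicated VHPT cache.
Translation walkVhptInMemory(uint64_t vpn) { return Translation{vpn}; }

Translation translate(TranslationTable& tlb, TranslationTable& vhptCache,
                      uint64_t vpn) {
    if (auto t = tlb.lookup(vpn)) return *t;          // TLB hit
    if (auto t = vhptCache.lookup(vpn)) {             // TLB miss, VHPT-cache hit
        tlb.map[vpn] = *t;
        return *t;
    }
    Translation t = walkVhptInMemory(vpn);            // full walk (slow path)
    vhptCache.map[vpn] = t;
    tlb.map[vpn] = t;
    return t;
}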
Abstract:
In a computer system including a plurality of resources, techniques are disclosed for receiving a request from a software program to access a specified one of the resources and determining whether the specified resource is a protected resource. If the specified resource is a protected resource, the request is denied when the computer system is operating in a protected mode of operation, and is processed based on access rights associated with the software program when the computer system is not operating in the protected mode of operation.
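A sketch of the decision flow described above; the container types, names, and the handling of non-protected resources are assumptions made for illustration.

#include <set>
#include <string>

struct SystemState {
    bool protectedMode = false;
    std::set<std::string> protectedResources;
};

struct Program {
    std::set<std::string> accessRights;   // resources this program may access
};

enum class Decision { Denied, Granted };

Decision handleRequest(const SystemState& sys, const Program& prog,
                       const std::string& resource) {
    bool isProtected = sys.protectedResources.count(resource) != 0;
    if (isProtected) {
        // Protected resources are denied outright in the protected mode of
        // operation; otherwise the program's access rights decide.
        if (sys.protectedMode) return Decision::Denied;
        return prog.accessRights.count(resource) ? Decision::Granted
                                                 : Decision::Denied;
    }
    // Handling of non-protected resources is not specified by the abstract;
    // deciding by access rights here is an assumption.
    return prog.accessRights.count(resource) ? Decision::Granted
                                             : Decision::Denied;
}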
Abstract:
A method and apparatus that utilize a simple test and flush mechanism to implement branch instructions of one Instruction Set Architecture (ISA) using instructions of another ISA are described. During the decoding and sequencing of microinstructions to implement a branch instruction, a fix-up address, which represents the remedial branch target in the event of a mispredicted target or branch condition, is determined and stored. A test condition is set to determine whether the prediction or the branch condition was correct. When the test condition fails, the instruction execution pipeline is immediately flushed to avoid executing any instruction remaining in the pipeline following the branch instruction. The flushing of the pipeline signals the instruction fetch control mechanism to redirect the instruction flow to the instruction corresponding to the fix-up address. A method and apparatus according to the present invention further allow flushing of the pipeline when conditions other than ones involved in branch instructions occur, e.g., to flush stale instructions.
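An illustrative sketch of the test-and-flush sequence described above; the state and function names are assumptions, not the disclosed microcode interface.

#include <cstdint>

struct PipelineModel {
    uint64_t fetchPc = 0;
    uint64_t fixupAddress = 0;   // remedial branch target stored by microcode
    void flush() { /* discard all younger, in-flight instructions */ }
};

// Called when the microinstruction sequence reaches its test condition;
// `predictionCorrect` reflects whether the predicted target or branch
// condition actually held.
void resolveEmulatedBranch(PipelineModel& pipe, bool predictionCorrect) {
    if (predictionCorrect) return;       // prediction held; nothing to do
    pipe.flush();                        // drop stale wrong-path instructions
    pipe.fetchPc = pipe.fixupAddress;    // resteer fetch to the fix-up address
}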