Abstract:
Apparatus and method for efficiently sharing data in support of hardware cache coherency, coordinated in software with semaphore instructions. A new instruction called "Load-Bias" is provided which, in addition to performing a normal load, requests a private copy of the data and hints to the hardware cache to try to maintain ownership until the next memory reference from that processor. When used with the Cmpxchg semaphore operation, the Load-Bias instruction reduces coherency traffic and minimizes the possibility of coherency ping-ponging or system deadlock, a condition in which no processor gets useful work done.
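A minimal sketch in C of how a biased load paired with a compare-and-exchange might be used to acquire a spinlock. The biased-load step is modeled here with an ordinary atomic load, with the ownership hint noted only in comments; the spinlock_t type and the function names are illustrative and not taken from the patent.

```c
#include <stdatomic.h>

/* Sketch: acquire a spinlock with a biased load followed by compare-exchange.
 * The "biased" load stands in for the Load-Bias instruction described in the
 * abstract; in this software model it is an ordinary atomic load. */

typedef struct {
    atomic_int locked;          /* 0 = free, 1 = held */
} spinlock_t;

static void spin_acquire(spinlock_t *lock)
{
    for (;;) {
        /* Load-Bias step: read the semaphore word; on real hardware this also
         * requests a private, writable copy of the cache line so the following
         * compare-exchange needs no second coherency transaction. */
        int observed = atomic_load_explicit(&lock->locked, memory_order_relaxed);

        if (observed == 0) {
            /* Cmpxchg step: set locked = 1 only if it is still 0. */
            int expected = 0;
            if (atomic_compare_exchange_weak_explicit(
                    &lock->locked, &expected, 1,
                    memory_order_acquire, memory_order_relaxed))
                return;         /* lock acquired */
        }
        /* Otherwise keep spinning; the biased copy avoids ping-ponging the line. */
    }
}

static void spin_release(spinlock_t *lock)
{
    atomic_store_explicit(&lock->locked, 0, memory_order_release);
}
```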
Abstract:
A method and apparatus for transferring data from a first memory location to a second memory location in a computer system. A load instruction is executed, and, in response, data is transferred from a first memory location to a second memory location during a single bus transaction. During the same bus transaction, a request is made to invalidate a copy of the data that is stored in a third memory location if the load instruction indicates to do so.
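As an illustration only, a toy C model of a load that copies a value from one location to another and, when the instruction so indicates, invalidates a third copy in the same step. The types and function names are hypothetical, and the single-bus-transaction property is naturally not visible at this software level.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model: one operation that transfers a value from main memory into the
 * requester's copy and, when the load indicates it, invalidates a third copy
 * of the same data.  Names are illustrative. */

typedef struct {
    int  value;
    bool valid;
} cached_copy_t;

static int load_and_invalidate(const int *memory_loc,     /* first location  */
                               cached_copy_t *requester,   /* second location */
                               cached_copy_t *other_copy,  /* third location  */
                               bool invalidate_hint)       /* from the load   */
{
    requester->value = *memory_loc;      /* data transfer                     */
    requester->valid = true;
    if (invalidate_hint && other_copy != NULL)
        other_copy->valid = false;       /* invalidate request, same step     */
    return requester->value;
}
```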
Abstract:
A method and apparatus calculate a page table index from a virtual address, employing a combined hash algorithm that supports two different hash page table configurations. A "short format" page table is provided for each virtual region; it is linear, has one entry for each translation in the region, and does not store tags or chain links. A single "long format" page table is provided for the entire system, supports chained segments, and includes hash tag fields. The method of the present invention forms an entry address from a virtual address, with the entry address referencing an entry of the page table. To form the entry address, first a hash page number is formed from the virtual address by shifting the virtual address right based on the page size of the region of the virtual address. If the computer system is operating with long format page tables, the next step is to form a hash index by combining the hash page number and the region identifier referenced by the region portion of the virtual address, and to form a table offset by shifting the hash index left by K bits, wherein each long format page table entry is 2^K bytes long. However, if the computer system is operating with short format page tables, the next step is to form a hash index by setting the hash index equal to the hash page number, and to form a table offset by shifting the hash index left by L bits, wherein each short format page table entry is 2^L bytes long. Next, a mask is formed based on the size of the page table. A first address portion is then formed using the base address of the page table and the mask, and a second address portion is formed using the table offset and the mask. Finally, the entry address is formed by combining the first and second address portions. By providing a single algorithm capable of generating a page table entry address for both long and short format page tables, the present invention reduces the amount of logic required to access both page table formats, without significantly affecting execution speed.
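A C sketch of the combined hash computation described above. The entry sizes (2^K = 32 bytes, 2^L = 8 bytes), the XOR used to combine the hash page number with the region identifier, and the masking/combination details are assumptions chosen for the example; the abstract specifies the structure of the computation but not these particulars.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative sketch of forming a page table entry address for either the
 * long or the short page table format.  Constants and the mix function are
 * assumptions, not values taken from the patent. */

#define LONG_ENTRY_SHIFT  5u   /* K: 32-byte long-format entries (assumed) */
#define SHORT_ENTRY_SHIFT 3u   /* L: 8-byte short-format entries (assumed) */

static uint64_t page_table_entry_addr(uint64_t vaddr,
                                      uint64_t region_id,
                                      unsigned page_shift,   /* log2(page size) */
                                      uint64_t table_base,
                                      uint64_t table_size,   /* power of two    */
                                      bool long_format)
{
    /* 1. Hash page number: shift the virtual address right by the page size. */
    uint64_t hpn = vaddr >> page_shift;

    /* 2. Hash index and table offset, depending on the table format. */
    uint64_t hash_index, offset;
    if (long_format) {
        hash_index = hpn ^ region_id;          /* combine HPN and RID (assumed mix) */
        offset     = hash_index << LONG_ENTRY_SHIFT;
    } else {
        hash_index = hpn;                      /* short format: index is the HPN   */
        offset     = hash_index << SHORT_ENTRY_SHIFT;
    }

    /* 3. Mask from the table size; combine base and masked offset. */
    uint64_t mask = table_size - 1;
    return (table_base & ~mask) | (offset & mask);
}
```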
Abstract:
The present invention generally relates to an apparatus and method for efficiently translating virtual addresses utilizing either single address space or multiple address space models in a virtual memory management system. In particular, a Virtual Hash Page Table (VHPT), an extension of the Translation Lookaside Buffer (TLB) hierarchy, is designed to enhance virtual address translation performance. The VHPT efficiently supports two different methods that operating systems use to translate virtual addresses to physical addresses, directly benefiting the frequently exercised address-resolution path.
Abstract:
A method and apparatus pre-validate regions in a virtual addressing scheme by storing both the virtual region number (VRN) bits and region identifiers (RIDs) in translation lookaside buffer (TLB) entries. By storing both the VRN bits and RIDs in TLB entries, the region registers can be bypassed when performing most TLB accesses, thereby removing the region registers from the critical path of the TLB look-up process and enhancing system performance. A TLB in accordance with the present invention includes entries having a valid field, a region pre-validation valid (rpV) field, a virtual region number (VRN) field, a virtual page number (VPN) field, a region identifier (RID) field, a protection and access attributes field, and a physical page number (PPN) field. In addition, a set of region registers contains the RIDs that are active at any given time. When a virtual-to-physical entry is established for a page in a region having an RID stored in a region register, the RID and VRN are stored in the appropriate fields of the TLB entry. In addition, the valid field is set and the rpV field is set to indicate that the TLB entry contains an active VRN-to-RID mapping, thereby pre-validating the region. When a virtual address is translated into a physical address, a VRN and a VPN are extracted from the virtual address and provided to the TLB. The TLB is searched to find an entry having a set valid field, a set rpV field, and VRN and VPN fields matching the VRN and VPN extracted from the virtual address. If such an entry is found, the protection and access attributes field is used to determine whether the requested access is allowed. If the requested access is allowed, the PPN from the PPN field of the TLB entry is combined with an offset from the virtual address to produce a physical address that is used to complete the memory access.
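An illustrative C sketch of a pre-validated TLB entry and lookup. The field widths, entry count, linear search, page size, and attribute check are assumptions made for the example; a hardware TLB would perform the match associatively.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative pre-validated TLB entry: VRN and RID are stored alongside the
 * translation so the region registers need not be consulted on a lookup. */

typedef struct {
    bool     valid;     /* entry holds a translation                   */
    bool     rpv;       /* region pre-validation: active VRN-to-RID map */
    uint8_t  vrn;       /* virtual region number (top bits of vaddr)   */
    uint64_t vpn;       /* virtual page number                         */
    uint32_t rid;       /* region identifier captured at insert time   */
    uint32_t attr;      /* protection and access attributes            */
    uint64_t ppn;       /* physical page number                        */
} tlb_entry_t;

#define TLB_ENTRIES 64
#define PAGE_SHIFT  12          /* 4 KiB pages, assumed */

/* Translate vaddr; returns true and fills *paddr on an allowed hit. */
static bool tlb_lookup(const tlb_entry_t tlb[TLB_ENTRIES],
                       uint64_t vaddr, uint32_t required_attr,
                       uint64_t *paddr)
{
    uint8_t  vrn = (uint8_t)(vaddr >> 61);            /* top 3 bits, assumed */
    uint64_t vpn = (vaddr << 3) >> (3 + PAGE_SHIFT);  /* strip VRN and offset */
    uint64_t off = vaddr & ((1u << PAGE_SHIFT) - 1);

    for (size_t i = 0; i < TLB_ENTRIES; i++) {
        const tlb_entry_t *e = &tlb[i];
        if (e->valid && e->rpv && e->vrn == vrn && e->vpn == vpn) {
            if ((e->attr & required_attr) != required_attr)
                return false;                         /* access not allowed */
            *paddr = (e->ppn << PAGE_SHIFT) | off;
            return true;
        }
    }
    return false;                                     /* TLB miss */
}
```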
Abstract:
A translation lookaside buffer (TLB) is provided including a first storage location in the TLB for storing at least a portion of a first virtual to physical memory translation. The first storage location in the TLB is both hardware-managed and software-managed. The TLB also includes a second storage location in the TLB for storing at least a portion of a second virtual to physical memory translation. The second storage location in the TLB is only software-managed.
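A brief sketch, under assumed sizes and a trivial insert policy, of a TLB partitioned into slots that both hardware and software may replace and slots that only software may install or remove.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative TLB split into two kinds of storage locations.  Sizes and the
 * insert policy are assumptions made for the example. */

typedef struct {
    bool     valid;
    uint64_t vpn;
    uint64_t ppn;
} tlb_slot_t;

#define HW_SW_SLOTS   48   /* hardware- and software-managed (assumed) */
#define SW_ONLY_SLOTS 16   /* software-managed only (assumed)          */

typedef struct {
    tlb_slot_t shared[HW_SW_SLOTS];   /* a hardware walker may overwrite these */
    tlb_slot_t pinned[SW_ONLY_SLOTS]; /* only explicit software inserts here   */
} tlb_t;

/* Hardware-initiated fill: allowed to touch only the shared slots. */
static void hw_insert(tlb_t *tlb, unsigned way, uint64_t vpn, uint64_t ppn)
{
    tlb->shared[way % HW_SW_SLOTS] = (tlb_slot_t){ true, vpn, ppn };
}

/* Software-initiated fill: may pin a translation the hardware cannot evict. */
static void sw_insert_pinned(tlb_t *tlb, unsigned slot, uint64_t vpn, uint64_t ppn)
{
    tlb->pinned[slot % SW_ONLY_SLOTS] = (tlb_slot_t){ true, vpn, ppn };
}
```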
Abstract:
A computing system includes a main memory and an input/output adapter. The input/output adapter accesses a translation map. The translation map maps input/output page numbers to memory address page numbers. Entries to the translation map are generated so that each entry includes an address of a data page in the main memory and transaction configuration information. The transaction configuration information is utilized by the input/output adapter during data transactions to and from the data page.
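An illustrative entry layout for such a translation map. The specific configuration bits shown (coherence, prefetch, write enable) are assumptions, since the abstract states only that each entry carries transaction configuration information alongside the data-page address.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative translation-map entry: a data-page address plus per-page
 * transaction configuration consulted by the I/O adapter.  The configuration
 * fields are assumed for the example. */

typedef struct {
    uint64_t page_addr;     /* address of the data page in main memory      */
    bool     coherent;      /* keep DMA to this page cache-coherent         */
    bool     prefetch_ok;   /* adapter may prefetch within the page         */
    bool     write_enable;  /* DMA writes to this page are permitted        */
} io_map_entry_t;

/* Translate an I/O page number to a memory page address, returning the
 * configuration the adapter applies to transactions on that page. */
static uint64_t io_translate(const io_map_entry_t *map, uint32_t io_page,
                             io_map_entry_t *cfg_out)
{
    *cfg_out = map[io_page];
    return map[io_page].page_addr;
}
```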
Abstract:
A computing system includes a memory bus, a main memory, an I/O adapter and a processor. The main memory, the I/O adapter and the processor are connected to the bus. The I/O adapter includes a translation map. The translation map maps I/O page numbers to memory address page numbers. The translation map includes coherence indices. The processor includes a cache and an instruction execution means. The instruction execution means generates coherence indices to be stored in the translation map. The instruction execution means performs in hardware a hash operation to generate the coherence indices.
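A sketch of one way a coherence index might be produced by hashing an address down to a fixed width. The XOR fold, the index width, and which address bits feed the hash are assumptions; the abstract states only that the processor computes the hash in hardware and stores the result in the translation map.

```c
#include <stdint.h>

/* Illustrative coherence-index generation: XOR together successive
 * fixed-width slices of an address.  Width and scheme are assumed. */

#define COHERENCE_INDEX_BITS 12u   /* assumed index width */

static uint32_t coherence_index(uint64_t addr)
{
    uint64_t folded = 0;
    for (unsigned shift = 0; shift < 64; shift += COHERENCE_INDEX_BITS)
        folded ^= addr >> shift;
    return (uint32_t)(folded & ((1u << COHERENCE_INDEX_BITS) - 1));
}
```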
Abstract:
A computing system includes a memory bus, an input/output bus, a main memory, and an input/output adapter. The memory bus provides information transfer. The input/output bus also provides information transfer; for example, the input/output bus is a bus to which input/output devices are connected. The main memory is connected to the memory bus. The main memory includes a page directory. The page directory stores translations. Each translation in the page directory includes a portion of an address for data transferred over the input/output bus, for example, the page address portion of the I/O bus address. Each translation in the page directory is also indexed by a portion of an address for a memory location within the main memory, for example, the page address portion of the address for the memory location. The input/output adapter is connected to the memory bus and the input/output bus. The input/output adapter includes an input/output translation look-aside buffer. The input/output translation look-aside buffer includes a portion of the translations stored in the page directory.
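An illustrative software model of an I/O translation look-aside buffer that caches a subset of the page directory's translations and refills from the directory in main memory on a miss. The entry layout, buffer size, and direct-mapped indexing are assumptions made for the example.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative I/O TLB holding a subset of a page directory's translations. */

typedef struct {
    bool     valid;
    uint64_t tag;          /* index portion of the translated address        */
    uint64_t mapped_page;  /* page-address portion stored in the translation */
} io_tlb_entry_t;

#define IO_TLB_SIZE 32u

typedef struct {
    io_tlb_entry_t  entries[IO_TLB_SIZE];
    const uint64_t *page_directory;   /* full translation table in main memory */
    size_t          directory_size;
} io_tlb_t;

/* Return the translation for `index`, consulting the cached copy first and
 * refilling from the page directory in main memory on a miss. */
static bool io_tlb_lookup(io_tlb_t *tlb, uint64_t index, uint64_t *mapped_page)
{
    io_tlb_entry_t *slot = &tlb->entries[index % IO_TLB_SIZE];

    if (!(slot->valid && slot->tag == index)) {       /* miss */
        if (index >= tlb->directory_size)
            return false;
        slot->valid       = true;                     /* refill from directory */
        slot->tag         = index;
        slot->mapped_page = tlb->page_directory[index];
    }
    *mapped_page = slot->mapped_page;                 /* hit (possibly after fill) */
    return true;
}
```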