Abstract:
A method for accessing a non-volatile memory by both a PC and an X-BOX is disclosed. The method includes the following steps: initializing a memory controller, preferably with a USB interface, for accessing the non-volatile memory; generating a first logical-to-physical address mapping table for mapping a first portion of the non-volatile memory; configuring the controller; monitoring the occurrence of an event; analyzing the starting logical address specified in a token packet sent from the host to the controller in response to the event; and generating a second logical-to-physical address mapping table for mapping a second portion of the non-volatile memory when the starting logical address is absent from the first logical-to-physical address mapping table.
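The lazy generation of the second mapping table can be illustrated with a short C sketch. This is a minimal illustration, not the patent's implementation: the region split, the table layout, and the build_table() scan are all assumptions made for demonstration.

```c
/* Minimal sketch of lazy logical-to-physical (L2P) table creation,
 * assuming a flash device split into two fixed regions. All names and
 * sizes are illustrative, not taken from the patent. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define REGION_BLOCKS 128

typedef struct {
    uint32_t first_lba;            /* first logical block covered   */
    uint32_t phys[REGION_BLOCKS];  /* logical -> physical block map */
    bool     built;
} l2p_table_t;

static l2p_table_t table1 = { .first_lba = 0,             .built = false };
static l2p_table_t table2 = { .first_lba = REGION_BLOCKS, .built = false };

/* Stand-in for scanning flash metadata to build a region's map. */
static void build_table(l2p_table_t *t)
{
    for (uint32_t i = 0; i < REGION_BLOCKS; i++)
        t->phys[i] = t->first_lba + i;  /* identity map as a placeholder */
    t->built = true;
}

/* Translate the starting logical address from a host token packet.
 * The second table is generated only when it is first needed. */
static uint32_t translate(uint32_t lba)
{
    l2p_table_t *t = (lba < REGION_BLOCKS) ? &table1 : &table2;
    if (!t->built)
        build_table(t);            /* lazy generation on first access */
    return t->phys[lba - t->first_lba];
}

int main(void)
{
    build_table(&table1);          /* first table built at init */
    printf("LBA 5   -> PBA %u\n", (unsigned)translate(5));
    printf("LBA 200 -> PBA %u\n", (unsigned)translate(200)); /* builds table 2 */
    return 0;
}
```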
Abstract:
A computer network (1) comprises: a plurality of processing nodes, at least two of which each have a respective addressable memory and a respective network interface (2); and a switching network (3) which operatively connects the plurality of processing nodes together. Each network interface (2) includes a memory management unit (8a) having associated with it a memory in which are stored (a) at least one mapping table for mapping 64-bit virtual addresses to the physical addresses of the addressable memory of the respective processing node, and (b) instructions for applying a compression algorithm to said virtual addresses, the at least one mapping table comprising a plurality of virtual addresses and their associated physical addresses ordered with respect to compressed versions of the 64-bit virtual addresses. The network interface (2) provides visibility across the network of areas of the memory of individual processing nodes in a way which supports full scalability of the network.
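A mapping table ordered by compressed keys can be sketched as follows. The abstract does not fix a compression algorithm, so the fold-to-32-bit compression, the entry layout, and the binary search are assumptions chosen only to show the ordering idea.

```c
/* Illustrative sketch of a mapping table ordered by compressed 64-bit
 * virtual addresses. The compression function and table layout are
 * assumptions for demonstration. */
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

typedef struct {
    uint32_t key;        /* compressed virtual address (sort key) */
    uint64_t vaddr;      /* full 64-bit virtual address           */
    uint64_t paddr;      /* physical address on the owning node   */
} map_entry_t;

/* Example compression: fold the high and low halves together. */
static uint32_t compress(uint64_t va)
{
    return (uint32_t)(va ^ (va >> 32));
}

static int cmp(const void *a, const void *b)
{
    uint32_t ka = ((const map_entry_t *)a)->key;
    uint32_t kb = ((const map_entry_t *)b)->key;
    return (ka > kb) - (ka < kb);
}

/* Binary search on the compressed key, then confirm the full address.
 * A real table would also scan neighbors on a key collision. */
static const map_entry_t *lookup(const map_entry_t *t, size_t n, uint64_t va)
{
    map_entry_t probe = { .key = compress(va) };
    const map_entry_t *e = bsearch(&probe, t, n, sizeof *t, cmp);
    return (e && e->vaddr == va) ? e : NULL;
}

int main(void)
{
    map_entry_t table[3] = {
        { 0, 0x0000700000001000ULL, 0x1000 },
        { 0, 0x0000700000002000ULL, 0x2000 },
        { 0, 0x0000700000003000ULL, 0x3000 },
    };
    for (int i = 0; i < 3; i++) table[i].key = compress(table[i].vaddr);
    qsort(table, 3, sizeof table[0], cmp);   /* order by compressed key */

    const map_entry_t *e = lookup(table, 3, 0x0000700000002000ULL);
    if (e) printf("VA %#llx -> PA %#llx\n",
                  (unsigned long long)e->vaddr, (unsigned long long)e->paddr);
    return 0;
}
```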
Abstract:
A memory system and a set of user-level instructions that are callable from user-level code for converting virtual addresses to physical addresses and conveying the physical addresses to peripheral devices without requiring a system call. The system uses a translation look-aside buffer (TLB) implemented in a microprocessor. The contents of the TLB can be updated while processes are executing, allowing for virtual/physical addresses to be constantly updated and loaded into the buffer without requiring that the buffer be too large. Pages in use per transaction or user-level job are "pinned down," and pinned page counts per transaction or user-level job, as well as overall counts, are maintained.
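The pinned-page accounting can be sketched in a few lines of C. Here tlb_translate() and pin_and_translate() are hypothetical stand-ins for the patent's user-level instructions; only the bookkeeping of per-job and overall counts is the point.

```c
/* Hedged sketch of per-job and overall pinned-page accounting around a
 * user-level translate call. All names are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define MAX_JOBS 16

static unsigned pinned_per_job[MAX_JOBS];  /* pages pinned per user job */
static unsigned pinned_total;              /* overall pinned-page count */

/* Placeholder translation: in hardware this would be a TLB lookup. */
static uint64_t tlb_translate(uint64_t vaddr)
{
    return vaddr | 0x8000000000000000ULL;  /* fake physical address */
}

/* Translate and pin a page on behalf of a job, updating both counts. */
static uint64_t pin_and_translate(unsigned job, uint64_t vaddr)
{
    pinned_per_job[job]++;
    pinned_total++;
    return tlb_translate(vaddr);
}

/* Release the page when the device I/O completes. */
static void unpin(unsigned job)
{
    pinned_per_job[job]--;
    pinned_total--;
}

int main(void)
{
    uint64_t pa = pin_and_translate(3, 0x7f0000001000ULL);
    printf("job 3: PA=%#llx, job pins=%u, total pins=%u\n",
           (unsigned long long)pa, pinned_per_job[3], pinned_total);
    unpin(3);
    return 0;
}
```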
Abstract:
A cache is configured to receive direct access transactions. Each direct access transaction explicitly specifies a way of the cache. The cache may alter the state of its replacement policy in response to a direct access transaction explicitly specifying a particular way of the cache. The state may be altered such that a succeeding cache miss causes an eviction of the particular way. Thus, a direct access transaction may be used to provide a deterministic setting to the replacement policy, providing predictability to the entry selected to store a subsequent cache miss. In one embodiment, the replacement policy may be a pseudo-random replacement policy. In one embodiment, a direct access transaction also explicitly specifies a cache storage entry to be accessed in response to the transaction. The cache may access the cache storage entry (bypassing the normal tag comparisons and hit determination used for memory transactions) and either read the data from the cache storage entry (for read transactions) or write data from the transaction to the cache storage entry (for write transactions). Other embodiments may set the replacement policy based on other types of transactions.
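How a direct access transaction can make the next eviction deterministic is shown in the sketch below. The cache structure, dca_write(), and the use of rand() for the pseudo-random policy are assumptions; a real cache would implement this in hardware.

```c
/* Sketch of steering a replacement policy with a direct-access
 * transaction that explicitly names a way. Structures are illustrative. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_WAYS 4

typedef struct {
    uint32_t tag[NUM_WAYS];
    uint32_t data[NUM_WAYS];
    int      next_victim;      /* replacement-policy state */
} cache_set_t;

/* Direct access: write a specific way, bypassing tag compare, and set
 * the replacement state so the next miss evicts that same way. */
static void dca_write(cache_set_t *s, int way, uint32_t value)
{
    s->data[way] = value;
    s->next_victim = way;      /* deterministic setting of the policy */
}

/* Normal miss handling consumes the victim chosen by the policy. */
static int handle_miss(cache_set_t *s, uint32_t tag, uint32_t value)
{
    int way = s->next_victim;
    s->tag[way] = tag;
    s->data[way] = value;
    s->next_victim = rand() % NUM_WAYS;  /* pseudo-random policy resumes */
    return way;
}

int main(void)
{
    cache_set_t set = { .next_victim = 0 };
    dca_write(&set, 2, 0xAAAA);              /* pin the victim to way 2 */
    int w = handle_miss(&set, 0x123, 0xBBBB);
    printf("miss filled way %d (expected 2)\n", w);
    return 0;
}
```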
Abstract:
A method and apparatus for managing shared virtual storage in an information handling system in which each of a plurality of processes managed by an operating system has a virtual address space comprising a range of virtual addresses that are mapped to a corresponding set of real addresses representing addresses in real storage. The virtual address spaces are 64-bit address spaces requiring up to five levels of dynamic address translation (DAT) tables to map their virtual addresses to real addresses. One or more shared ranges of virtual addresses are defined that are mapped for each of a plurality of virtual address spaces to a common set of real addresses. The operating system manages these shared ranges using a system-level DAT table that references a shared set of DAT tables used by the sharing address spaces for address translation, but is not attached to the hardware address translation facilities or used for address translation. The shared range of virtual addresses straddles the 2^42-byte boundary between ranges served by different third-level DAT tables and is situated between a lower private range and an upper private range so that an individual address space can map both a lower private range and a shared range using only three levels of DAT tables. Each shared address range may be shared with either global access rights, in which each participating process has the same access rights, or local access rights, in which each participant may have different access rights to the given range. Access rights for each participant may be changed over the lifetime of the process.
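The five-level decomposition of a 64-bit address, and why a third-level table boundary falls at 2^42 bytes, can be shown with a small sketch using the z/Architecture-style index widths (11-bit table indices, 8-bit page index, 4 KB pages). This only illustrates how an address decomposes; real DAT also involves table origins, lengths, and protection bits.

```c
/* Simplified sketch of five-level dynamic address translation index
 * extraction. Each third-level (region third) table has 2048 entries,
 * each covering a 2 GB segment-table range, so one third-level table
 * spans 2^42 bytes -- the boundary the shared range straddles. */
#include <stdint.h>
#include <stdio.h>

static void decompose(uint64_t va)
{
    unsigned rfx = (va >> 53) & 0x7FF;  /* region first index  */
    unsigned rsx = (va >> 42) & 0x7FF;  /* region second index */
    unsigned rtx = (va >> 31) & 0x7FF;  /* region third index  */
    unsigned sx  = (va >> 20) & 0x7FF;  /* segment index       */
    unsigned px  = (va >> 12) & 0xFF;   /* page index          */
    printf("VA %#018llx: RFX=%u RSX=%u RTX=%u SX=%u PX=%u\n",
           (unsigned long long)va, rfx, rsx, rtx, sx, px);
}

int main(void)
{
    decompose((1ULL << 42) - 4096);  /* last page below the 2^42 line  */
    decompose(1ULL << 42);           /* first page above it: RSX steps,
                                        so a new third-level table serves it */
    return 0;
}
```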
Abstract:
A method, apparatus, system, and signal-bearing medium that, in an embodiment, pre-register buffers remotely and create tokens locally that represent the buffers prior to a data transfer operation that uses the tokens to access the buffers. In an embodiment, the buffers are pre-registered via a translation table, and the tokens are used as an offset into the translation table. In an embodiment, the pre-registration verifies that the buffer is within memory allocated to a logical partition, which protects against the risk of address corruption.
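A token-as-table-offset scheme with a partition bounds check might look like the following. The table size, token layout, and partition bounds are assumptions for illustration; the abstract describes the idea, not this exact layout.

```c
/* Illustrative sketch of token-based buffer pre-registration, where the
 * token is simply an offset into a translation table. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define TABLE_SLOTS 64

typedef struct {
    uint64_t real_addr;   /* registered buffer's real address */
    uint64_t length;
    bool     valid;
} xlate_entry_t;

static xlate_entry_t xlate[TABLE_SLOTS];

/* Partition memory bounds used to verify ownership at registration. */
static const uint64_t part_base = 0x100000, part_size = 0x400000;

/* Pre-register a buffer; the returned token is the table slot index. */
static int register_buffer(uint64_t addr, uint64_t len)
{
    if (addr < part_base || addr + len > part_base + part_size)
        return -1;                       /* outside the partition: reject */
    for (int t = 0; t < TABLE_SLOTS; t++) {
        if (!xlate[t].valid) {
            xlate[t] = (xlate_entry_t){ addr, len, true };
            return t;                    /* token = offset into table */
        }
    }
    return -1;                           /* table full */
}

/* A later data transfer resolves the token back to the buffer. */
static const xlate_entry_t *resolve(int token)
{
    if (token < 0 || token >= TABLE_SLOTS || !xlate[token].valid)
        return NULL;
    return &xlate[token];
}

int main(void)
{
    int tok = register_buffer(0x180000, 0x1000);
    const xlate_entry_t *e = resolve(tok);
    if (e) printf("token %d -> addr %#llx len %llu\n", tok,
                  (unsigned long long)e->real_addr,
                  (unsigned long long)e->length);
    return 0;
}
```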
Abstract:
Various techniques are described for improving the performance of a multiple node system by allocating, in two or more nodes of the system, partitions of a shared cache. A mapping is established between the data items managed by the system, and the various partitions of the shared cache. When a node requires a data item, the node first determines which partition of the shared cache corresponds to the required data item. If the data item does not currently reside in the corresponding partition, the data item is loaded into the corresponding partition even if the partition does not reside on the same node that requires the data item. The node then reads the data item from the corresponding partition of the shared cache.
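The data-item-to-partition mapping could be as simple as a hash over the item identifier, as in this sketch. The modulo placement and node count are assumptions; the abstract leaves the mapping function open.

```c
/* Minimal sketch of hashing a data item to a shared-cache partition that
 * may live on a remote node. Names and placement rule are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define NUM_NODES 4   /* one shared-cache partition per node */

/* Map a data item (e.g., a disk block number) to its partition. */
static unsigned partition_of(uint64_t block_id)
{
    return (unsigned)(block_id % NUM_NODES);
}

int main(void)
{
    unsigned self = 1;                      /* this node's id */
    uint64_t block = 12345;
    unsigned p = partition_of(block);
    if (p == self)
        printf("block %llu cached locally in partition %u\n",
               (unsigned long long)block, p);
    else
        printf("block %llu belongs to partition %u: load it there, "
               "then read it remotely\n", (unsigned long long)block, p);
    return 0;
}
```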
Abstract:
A set-associative I-cache that enables early cache hit prediction and correct way selection when the processor is executing instructions of multiple threads having similar EAs. Each way of the I-cache comprises an EA Directory (EA Dir), which includes a series of thread valid bits that are individually assigned to one of the multiple threads. Particular ones of the thread valid bits are set in each EA Dir to indicate when an instruction block of the thread is cached within the particular way with which the EA Dir is associated. When a cache line request for a particular thread is received, a cache hit is predicted when the EA of the request matches the EA in the EA Dir, and the cache line is selected from the way associated with the EA Dir that has the thread valid bit for that thread set. Early way selection is thus achieved, since the way selection only requires a check of the thread valid bits.
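The way-selection check can be sketched as below, under assumed sizes (4 ways, 4 hardware threads). The structures are illustrative; in the patent this logic is part of the I-cache hardware.

```c
/* Sketch of early way selection using per-way thread-valid bits. */
#include <stdint.h>
#include <stdio.h>

#define WAYS    4
#define THREADS 4

typedef struct {
    uint64_t ea;            /* effective address held in the EA Dir */
    uint8_t  thread_valid;  /* one valid bit per thread             */
} ea_dir_t;

static ea_dir_t ea_dir[WAYS];

/* Predict a hit and pick the way: match the EA and check the requesting
 * thread's valid bit; no full tag comparison is needed. */
static int predict_way(uint64_t ea, unsigned thread)
{
    for (int w = 0; w < WAYS; w++)
        if (ea_dir[w].ea == ea && (ea_dir[w].thread_valid & (1u << thread)))
            return w;
    return -1;              /* predicted miss */
}

int main(void)
{
    /* Two threads share the same EA but are cached in different ways. */
    ea_dir[0] = (ea_dir_t){ 0x4000, 1u << 0 };  /* thread 0 -> way 0 */
    ea_dir[2] = (ea_dir_t){ 0x4000, 1u << 1 };  /* thread 1 -> way 2 */

    printf("thread 0, EA 0x4000 -> way %d\n", predict_way(0x4000, 0));
    printf("thread 1, EA 0x4000 -> way %d\n", predict_way(0x4000, 1));
    return 0;
}
```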
Abstract:
A method and system for improving the performance of a processing system is disclosed. The processing system comprises a plurality of host computers and at least one control unit (CU) coupled to the host computers. The control unit comprises a cache, and a disk array is coupled to the CU. The method and system comprise querying an operating system of at least one host computer to determine the storage medium that contains an object to be cached and providing the data in the portion of the disk array to be cached. The method and system further comprise providing a channel command sequence and sending the channel command sequence to the CU via an I/O operation at predetermined time intervals until the object is deactivated. A method and system in accordance with the present invention instructs a control unit (CU) or a storage medium to keep some objects constantly in its cache, so as to improve the overall response time of transaction systems running on one or more host computers and accessing data on disk via the CU.
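The periodic re-touch loop can be sketched conceptually. Here issue_channel_program() is a hypothetical stand-in for building and sending the channel command sequence via an I/O operation, and the interval is an assumed demo value.

```c
/* Conceptual sketch of re-touching a cached object at fixed intervals so
 * the control unit keeps it resident in its cache. */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool object_active = true;        /* cleared when the object is
                                            deactivated                 */
static const unsigned interval_sec = 1;  /* demo value; a real system
                                            would use a longer period   */

/* Stand-in for sending the channel command sequence to the CU. */
static void issue_channel_program(unsigned volume, unsigned long extent)
{
    printf("touch volume %#x extent %lu to keep it in CU cache\n",
           volume, extent);
}

int main(void)
{
    int touches = 0;
    /* Re-issue the touch at each interval until the object is
     * deactivated (bounded here so the demo terminates). */
    while (object_active && touches++ < 3) {
        issue_channel_program(0x1234, 42);
        sleep(interval_sec);
    }
    return 0;
}
```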
Abstract:
Data in less than all of the pages of a non-volatile memory block are updated by programming the new data in unused pages of either the same or another block. In order to prevent having to copy unchanged pages of data into the new block, or to program flags into superseded pages of data, the pages of new data are identified by the same logical address as the pages of data which they supersede, and a time stamp is added to note when each page was written. When reading the data, the most recent pages of data are used and the older superseded pages of data are ignored. This technique is also applied to metablocks that include one block from each of several different units of a memory array, by directing all page updates to a single unused block in one of the units.
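Resolving superseded pages at read time reduces to picking the copy with the newest time stamp, as this sketch shows. The flat page array is an assumption standing in for the original block plus its update block.

```c
/* Sketch of resolving superseded pages by time stamp. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t logical_page;  /* same logical address as the page it supersedes */
    uint32_t time_stamp;    /* advanced on every write */
    char     data[16];
} page_t;

/* Return the newest copy of a logical page, ignoring older versions. */
static const page_t *read_page(const page_t *pages, int n, uint32_t lp)
{
    const page_t *best = NULL;
    for (int i = 0; i < n; i++)
        if (pages[i].logical_page == lp &&
            (!best || pages[i].time_stamp > best->time_stamp))
            best = &pages[i];
    return best;
}

int main(void)
{
    page_t pages[] = {
        { 7, 1, "old data" },   /* original write             */
        { 8, 2, "page 8"   },
        { 7, 3, "new data" },   /* supersedes the first entry */
    };
    const page_t *p = read_page(pages, 3, 7);
    if (p) printf("logical page 7 -> \"%s\" (ts=%u)\n", p->data, p->time_stamp);
    return 0;
}
```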