Abstract:
In a network of computer nodes interfacing to a globally addressable memory system that provides persistent storage of data, the nodes exchange periodic connectivity information. The exchanged connectivity information informs the other nodes in the system of node failures, and the surviving nodes use the information to determine which node, if any, has ceased functioning. Various processes are used to recover the portion of the global address space for which the failed node was responsible, including RAM directory, disk directory, or file system information. Additionally, nodes may be subdivided into groups, with connectivity information exchanged between the nodes belonging to each group. The groups then exchange group-wise connectivity information, and failures may be recovered.
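As an illustration only, the timeout-based failure detection described above can be approximated in a few lines of Python. The names here (Node, HEARTBEAT_TIMEOUT, recover) are assumptions, and the actual recovery via RAM directory, disk directory, or file system information is not modeled:

    import time

    HEARTBEAT_TIMEOUT = 5.0  # seconds without a connectivity report before a peer is suspect

    class Node:
        def __init__(self, node_id, owned_ranges):
            self.node_id = node_id
            self.owned_ranges = owned_ranges   # portions of the global address space this node serves
            self.last_seen = {}                # peer id -> timestamp of its last connectivity report

        def receive_report(self, peer_id):
            # Called whenever periodic connectivity information arrives from a peer.
            self.last_seen[peer_id] = time.time()

        def failed_peers(self):
            now = time.time()
            return [p for p, t in self.last_seen.items() if now - t > HEARTBEAT_TIMEOUT]

        def recover(self, registry):
            # Reclaim the failed node's portion of the global address space.
            # Replaying its RAM directory, disk directory, or file system
            # information is outside the scope of this sketch.
            for peer in self.failed_peers():
                self.owned_ranges.extend(registry.pop(peer, []))

Grouping works the same way one level up: each group elects a representative that exchanges group-wise reports with the other groups.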
Abstract:
Distributed shared memory systems and processes that can connect into each node of a computer network to encapsulate the memory management operations of the connected nodes and thereby provide an abstraction of a shared virtual memory that can span each node of the network and, optionally, each memory device connected to the network. Accordingly, each node on the network having the distributed shared memory system of the invention can access the shared memory.
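A minimal sketch of the shared-virtual-memory abstraction, assuming page-granularity storage and a placeholder placement policy (neither is specified by the abstract):

    PAGE_SIZE = 4096

    class DistributedSharedMemory:
        def __init__(self, node_count):
            self.nodes = [{} for _ in range(node_count)]   # one local page table per node

        def _locate(self, address):
            page = address // PAGE_SIZE
            return page % len(self.nodes), page            # placeholder placement policy

        def read(self, address):
            owner, page = self._locate(address)
            data = self.nodes[owner].get(page, bytes(PAGE_SIZE))
            return data[address % PAGE_SIZE]

        def write(self, address, value):
            owner, page = self._locate(address)
            buf = bytearray(self.nodes[owner].get(page, bytes(PAGE_SIZE)))
            buf[address % PAGE_SIZE] = value
            self.nodes[owner][page] = bytes(buf)

Every node resolves the same global address to the same owning node, which is what makes the memory appear shared.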
Abstract:
A shared client-side Web cache is provided by implementing a file system shared between nodes. Each browser application stores its cached data in files kept in a globally addressable data store. Because the file system is shared, the client-side Web caches are shared as well.
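A rough sketch of the idea, assuming the shared file system is visible as an ordinary mounted path; the path and helper names are hypothetical:

    import hashlib, pathlib

    SHARED_CACHE_ROOT = pathlib.Path("/global/webcache")   # assumed shared, globally addressable mount

    def cache_path(url):
        digest = hashlib.sha256(url.encode()).hexdigest()
        return SHARED_CACHE_ROOT / digest[:2] / digest

    def lookup(url):
        path = cache_path(url)
        return path.read_bytes() if path.exists() else None

    def store(url, body):
        path = cache_path(url)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(body)

Once any browser on any node stores a page, every other browser's lookup finds it.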
Abstract:
In a network of computer nodes, a structured storage system interfaces to a globally addressable memory system that provides persistent storage of data. The globally addressable memory system may be a distributed shared memory (DSM) system. A control program resident on each network node can direct the memory system to map file and directory data into the shared memory space. The memory system can include functionality to share data, coherently replicate data, and create log-based transaction data to allow for recovery. In one embodiment, the memory system provides memory device services to the data control program. These services can include read, write, allocate, flush, or any other similar or additional service suitable for providing low-level control of a memory storage device. The data control program employs these memory system services to allocate and access portions of the shared memory space for creating and manipulating a structured store of data such as a file system, a database system, or a Web page system for storing, retrieving, and delivering objects such as files, database records or information, and Web pages.
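A minimal sketch of a data control program layered on the memory system's read/write/allocate/flush services; the class names and the in-memory GlobalMemory stand-in are assumptions rather than the patented implementation:

    class GlobalMemory:
        def __init__(self):
            self._space, self._next = {}, 0

        def allocate(self, size):
            addr = self._next
            self._next += size
            self._space[addr] = bytearray(size)
            return addr

        def write(self, addr, data):
            self._space[addr][:len(data)] = data

        def read(self, addr):
            return bytes(self._space[addr])

        def flush(self, addr):
            pass   # persistence and log-based recovery are not modeled here

    class FileStore:
        """A tiny structured store built only from the services above."""
        def __init__(self, memory):
            self.memory, self.directory = memory, {}   # file name -> global address

        def create(self, name, data):
            addr = self.memory.allocate(len(data))
            self.memory.write(addr, data)
            self.memory.flush(addr)
            self.directory[name] = addr

        def open(self, name):
            return self.memory.read(self.directory[name])

The same pattern would carry over to a database or Web page store: only the layout of what is written into the allocated regions changes.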
Abstract:
In a network of computer nodes, a directory service provides both the physical location of directory information around the network and the directory information itself in a single data structure. This single data structure is distributed throughout the network, and continuously redistributed, so as to create a directory service that is both more flexible, and more robust, than prior art directory services.
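One way to picture a single structure that carries both the directory information and its physical location; the field names are illustrative, not taken from the patent:

    from dataclasses import dataclass, field

    @dataclass
    class DirectoryEntry:
        name: str
        attributes: dict                 # the directory information itself
        home_node: str                   # where this entry currently lives in the network
        replicas: list = field(default_factory=list)

    def redistribute(entry, new_node):
        # Moving the entry moves the data and its location record in one step,
        # which is what makes continuous redistribution straightforward.
        entry.replicas.append(entry.home_node)
        entry.home_node = new_node
        return entry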
Abstract:
A computer system employs a globally addressable storage environment that allows a plurality of networked computers to access data by address even when the data is stored on a persistent storage device such as a computer hard disk or another traditionally non-addressable data storage device. The computers can be located on a single computer network or on a plurality of interconnected computer networks, such as two local area networks (LANs) coupled by a wide area network (WAN). The globally addressable storage environment allows data to be accessed and shared by and among the various computers on the plurality of networks.
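A sketch of one possible global-address layout spanning networks, nodes, and per-node storage; the field widths shown are assumptions, not values from the abstract:

    def split_global_address(addr):
        offset  =  addr        & 0xFFFFFFFF   # low 32 bits: byte offset on the node's storage
        node_id = (addr >> 32) & 0xFFFF       # next 16 bits: node within the network
        net_id  = (addr >> 48) & 0xFFFF       # high 16 bits: network (e.g. which LAN)
        return net_id, node_id, offset

    def make_global_address(net_id, node_id, offset):
        return (net_id << 48) | (node_id << 32) | offset

With a layout of this kind, a computer on one LAN can name data held on a disk attached to a node on another LAN simply by using its global address.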
Abstract:
A multiprocessor system has a plurality of processing cells, each including a processor and memory, interconnected via a network. The memories respond to data access requests from the processors by accessing the requested data and, absent fault, transmitting it in response packets at least to the requesting processors. A fault containment element responds to at least certain faults during access or transmission of a datum by including within the respective response packet a fault signal that prevents the requestor from accessing the datum. If a fault is detected in a datum not previously detected as faulty, a marking element can include a "marked fault" signal in the response packet. In contrast, it can include an "unmarked fault" signal when it detects a fault associated with a requested datum but not specifically isolated to that datum. When a request is made for a datum that had previously been detected as faulty, the marking element can include a "descriptor fault" signal in the response packet. This facilitates identification of the particular source of an error and prevents that error from propagating to other processing cells.
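A minimal sketch of the three response-packet fault signals; the enum and handler names are assumptions, and the real mechanism is hardware rather than code like this:

    from enum import Enum, auto

    class Fault(Enum):
        NONE = auto()
        MARKED = auto()        # fault newly detected in this datum
        UNMARKED = auto()      # fault detected nearby but not isolated to this datum
        DESCRIPTOR = auto()    # the datum was already recorded as faulty

    class ResponsePacket:
        def __init__(self, data, fault=Fault.NONE):
            self.data, self.fault = data, fault

    def handle_response(packet):
        # The requesting cell refuses to consume a datum that arrives with any
        # fault signal, which keeps the error from propagating further.
        if packet.fault is not Fault.NONE:
            raise RuntimeError(f"access denied, fault reported: {packet.fault.name}")
        return packet.data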
Abstract:
A digital data processing apparatus has plural processing cells, each with a memory element that stores data pages made up of plural subpages. At least one of the cells includes a CPU that can request access to a data subpage. A memory manager responds to selected data access requests by (i) allocating, within the memory local to the requesting CPU, exclusive physical storage space for the data page associated with the requested subpage, and (ii) storing the requested subpage in that allocated space. The apparatus recombines data pages and deallocates them on the basis of usage and access state. The apparatus also accesses data asynchronously with respect to execution of instructions by the CPU.
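A sketch of page-granularity allocation driven by subpage accesses, with usage-based deallocation; the structure and names are illustrative only:

    SUBPAGES_PER_PAGE = 16

    class MemoryManager:
        def __init__(self):
            self.pages = {}   # page id -> list of subpages (None while unfetched)

        def access_subpage(self, page_id, subpage_idx, fetch):
            if page_id not in self.pages:
                # Allocate exclusive local physical space for the whole page.
                self.pages[page_id] = [None] * SUBPAGES_PER_PAGE
            page = self.pages[page_id]
            if page[subpage_idx] is None:
                page[subpage_idx] = fetch(page_id, subpage_idx)
            return page[subpage_idx]

        def deallocate_unused(self):
            # Free pages whose subpages have all been dropped; recombination of
            # partially held pages would follow the same usage information.
            for pid in [p for p, sp in self.pages.items()
                        if all(s is None for s in sp)]:
                del self.pages[pid]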
Abstract:
A caching system for a shared-bus multiprocessor that includes several processors, each having its own private cache memory. Each private cache is connected to a first bus to which a second, higher-level cache memory is also connected. The second, higher-level cache in turn is connected either to another bus and higher-level cache memory or to main system memory through a global bus. Each higher-level cache includes enough memory space to hold a copy of every memory location in the caches on the level immediately below it. In turn, main memory includes enough space for a copy of each memory location of the highest level of cache memories. The caching can be used with either write-through or write-deferred cache coherency management schemes.
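A small sketch of the inclusion property in such a hierarchy: every line held at a lower level is also held at the next level up, and a miss walks upward toward main memory. The lookup path is illustrative only, not the bus protocol itself:

    class Cache:
        def __init__(self, name, parent=None):
            self.name, self.parent, self.lines = name, parent, {}

        def read(self, address, memory):
            if address in self.lines:
                return self.lines[address]
            # Miss: fetch from the next level (or main memory at the top).
            value = self.parent.read(address, memory) if self.parent else memory[address]
            self.lines[address] = value   # inclusion: a copy is kept at this level too
            return value

    # Example hierarchy: two private L1 caches below one shared higher-level cache.
    memory = {0x100: 42}
    l2 = Cache("L2")
    l1a, l1b = Cache("L1a", parent=l2), Cache("L1b", parent=l2)
    assert l1a.read(0x100, memory) == 42 and 0x100 in l2.lines

Because the higher-level cache sees every line cached below it, it can answer coherence traffic on behalf of the private caches under either write-through or write-deferred management.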