Abstract:
An addressing method and system for accessing a very large physical buffer from a number of processes. The novel system is applicable within a computer system having an n-bit operating system (e.g., where n is 16, 32, 64, etc.). The addressing method allocates a relatively small window of virtual address space for each software process, which is used to access the very large physical buffer with a relatively small amount of operating system memory overhead. A page frame number (PFN) table in the system address space maintains a listing of the physical memory pages that define the very large physical buffer. The PFN table is used by each process to translate between a relative page number (RPN) and the address of the physical memory page containing a given record. The virtual address space ("window") of each process is used to access the physical memory buffer and contains a hash table, a virtual access control block (VACB) free list, and a VACB table. Entries of the VACB table indicate addresses of virtual memory for the process. Each process also has an associated private page table entry (PTE) table which maintains a mapping between its virtual pages and the physical pages. To map a record, its RPN is determined and used to obtain the address of the physical page(s) in which the record resides. The free list supplies an entry of the VACB table containing a virtual address for the record. The virtual address and the physical address are then mapped into the PTE table.
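The lookup path can be pictured with a minimal C sketch, assuming hypothetical names (pfn_table, vacb_free_list, hash_lookup, hash_insert, pte_set) that stand in for the structures described above; empty-free-list handling and multi-page records are omitted.

/* Minimal sketch of the record-mapping path; all names are assumptions. */
#include <stdint.h>

#define PAGE_SIZE 8192u          /* assumed page size */

typedef struct vacb {            /* one page of this process's small window */
    void        *virt_addr;      /* virtual address inside the window       */
    uint64_t     rpn;            /* relative page number currently mapped   */
    struct vacb *next;           /* free-list / hash-chain link             */
} vacb_t;

extern uint64_t pfn_table[];     /* system space: RPN -> physical page frame */
extern vacb_t  *vacb_free_list;  /* per-process list of unused VACB entries  */
extern vacb_t  *hash_lookup(uint64_t rpn);          /* per-process hash table */
extern void     hash_insert(vacb_t *v);
extern void     pte_set(void *virt, uint64_t pfn);  /* write private PTE      */

/* Map the buffer page holding `record_offset` into this process's window
 * and return a usable virtual address for the record. */
void *map_record(uint64_t record_offset)
{
    uint64_t rpn = record_offset / PAGE_SIZE;       /* relative page number  */
    vacb_t  *v   = hash_lookup(rpn);                /* already windowed?     */

    if (v == NULL) {
        v = vacb_free_list;                         /* take a free VACB      */
        vacb_free_list = v->next;
        v->rpn = rpn;
        pte_set(v->virt_addr, pfn_table[rpn]);      /* map virtual->physical */
        hash_insert(v);
    }
    return (char *)v->virt_addr + record_offset % PAGE_SIZE;
}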
Abstract:
An addressing method and computer system for sharing a large memory address space using address space within an operating system's virtual address space. The system allows the SSB to be shared by many processes without the disadvantages associated with process-based global sections. For instance, the novel system does not require that each process maintain its own dedicated page table entries (PTEs) in order to access the SSB, thereby requiring less operating system virtual memory to maintain the PTE data structures. A process switches to kernel mode and identifies those sections of the operating system virtual memory space that are not being used; in some cases the unused address space can be 1.5 to 1.8 gigabytes in size. The unused address space is linked together to form the SSB. The system alters the privileges of the PTEs corresponding to the SSB so that user-mode processes can access this usually protected operating system virtual memory space. The result is a statically mapped large memory address buffer (SSB) that can be immediately shared by all processes within the computer system while consuming only a single statically mapped set of PTEs which all processes can use. In one example, 500 processes mapping to a 2-gigabyte SSB require only 2 megabytes of memory storage for the corresponding PTEs, assuming conventional memory page sizes. In one example, the SSBs are allocated from a system space virtual memory map which is 2 gigabytes in size in a 32-bit VMS operating system.
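The 2-megabyte figure follows from simple arithmetic; the short C check below assumes 8 KB pages with 8-byte PTEs (4 KB pages with 4-byte PTEs give the same total), which is an assumption about "conventional" page sizes rather than a detail from the abstract.

/* Back-of-envelope check of the shared-PTE storage cost (assumed sizes). */
#include <stdio.h>

int main(void)
{
    unsigned long long ssb_bytes = 2ULL << 30;        /* 2 GB SSB            */
    unsigned long long page_size = 8ULL << 10;        /* assumed 8 KB pages  */
    unsigned long long pte_size  = 8;                 /* assumed bytes/PTE   */

    unsigned long long ptes      = ssb_bytes / page_size;  /* 262,144 PTEs      */
    unsigned long long pte_bytes = ptes * pte_size;        /* 2 MB, mapped once */

    printf("%llu PTEs, %llu MB of PTE storage shared by all processes\n",
           ptes, pte_bytes >> 20);
    /* With per-process PTEs, 500 processes would instead need ~500 x 2 MB. */
    return 0;
}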
Abstract:
A method for selectively caching data in a computer network. Initially, data objects that are anticipated to be accessed only once or seldom accessed are designated as exempt from being cached. When a read request is generated, the cache controller reads the requested data object from the cache memory if it currently resides there. However, if the requested data object cannot be found in the cache memory, it is read from a mass storage device. Thereupon, the cache controller determines whether the requested data object is to be cached or is exempt from being cached. If the data object is exempt, it is loaded directly into a local memory and is not stored in the cache. This improves cache utilization because only objects that are used multiple times are entered in the cache. Furthermore, processing overhead is minimized by reducing unnecessary cache insertion and purging operations. In addition, I/O operations are minimized by increasing the likelihood that hot objects are retained in the cache longer at the expense of infrequently used objects.
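A minimal sketch of this read path follows, assuming hypothetical helpers (cache_lookup, cache_insert, storage_read, is_exempt) rather than any actual cache interface.

/* Sketch only: the helpers below are assumed, not a real cache API. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef struct { int id; } object_key_t;

extern void *cache_lookup(object_key_t key);            /* NULL on a miss    */
extern void  cache_insert(object_key_t key, const void *buf, size_t len);
extern void  storage_read(object_key_t key, void *local_buf, size_t len);
extern bool  is_exempt(object_key_t key);  /* designated once-only or seldom */

/* Satisfy a read request, placing the object in the caller's local memory. */
void read_object(object_key_t key, void *local_buf, size_t len)
{
    const void *cached = cache_lookup(key);
    if (cached != NULL) {                     /* hit: serve from the cache   */
        memcpy(local_buf, cached, len);
        return;
    }
    storage_read(key, local_buf, len);        /* miss: read mass storage     */
    if (!is_exempt(key))                      /* only reusable objects enter */
        cache_insert(key, local_buf, len);    /* the cache; exempt ones skip */
}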
Abstract:
A method for performing a checkpointing operation in a client/server computer system to safeguard data in case of a failure. The records of a database are stored in a mass storage device, such as a hard disk drive array. A separate disk drive is dedicated for use only in conjunction with checkpointing. Periodically, when a checkpoint process is initiated, the server writes a number of its modified records to checkpoint files stored on the dedicated checkpoint disk drive. The write operation is performed through one or more sequential I/O operations; thus, the modified records are stored in consecutive sectors of the disk drive. If the server becomes disabled, the data can be recovered by reading the contents of the most recent checkpoint files and loading them sequentially back into the server's main memory.
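A sketch of the checkpoint write and recovery read using standard POSIX file I/O; the record size and file layout are assumptions for illustration only.

/* Illustrative sketch; RECORD_SIZE and the single-file layout are assumed. */
#include <fcntl.h>
#include <unistd.h>
#include <stddef.h>

#define RECORD_SIZE 512u

/* Write `count` modified records to a checkpoint file on the dedicated
 * checkpoint drive with one sequential I/O, so they land in consecutive
 * sectors rather than being scattered by random writes. */
int write_checkpoint(const char *path, const void *records, size_t count)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, records, count * RECORD_SIZE);  /* sequential write */
    int rc = (n == (ssize_t)(count * RECORD_SIZE)) ? fsync(fd) : -1;
    close(fd);
    return rc;
}

/* Recovery: stream the most recent checkpoint file back into main memory. */
ssize_t read_checkpoint(const char *path, void *records, size_t count)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = read(fd, records, count * RECORD_SIZE);   /* sequential read */
    close(fd);
    return n;
}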
Abstract:
In a relational database management system (RDBMS), a method of issuing input/output tasks (I/Os) which store record information from a buffer to an after-image journal (AIJ) file on a durable disk (the AIJ device), in which the group commit size is dynamically adapted to the workload of the AIJ device and to the character and volume of data written to the AIJ file. Record information contains data records (including rollback records) and/or commit records that together form database transactions. A commit record written to the AIJ file indicates that the data modifications and/or additions embodied in the data records associated with the commit record are durable in the RDBMS, in that they are stored on durable media and are recoverable. Rather than issuing I/Os to the disk based on a fixed timer or a fixed record count, the system writes to the AIJ file based on three workload characteristics: 1) the character of the record information received (data or commit record); 2) the AIJ file throughput, measured from the buffer contents; and 3) the workload of the I/O device (busy or idle). The present invention avoids making a data-dependent server process wait unnecessarily when the AIJ device is not busy, thereby improving response time without overloading the AIJ, and minimizes AIJ I/O under heavy workloads by making the group commit size as large as possible without idling the AIJ device. The system adapts to a changing workload to provide improved response time and throughput.
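The flush decision can be sketched as follows; the state fields and thresholds are assumptions chosen to illustrate the three workload characteristics, not the actual RDBMS data structures.

/* Hedged sketch of the adaptive group-commit decision; names are assumed. */
#include <stdbool.h>
#include <stddef.h>

typedef enum { DATA_RECORD, COMMIT_RECORD } record_kind_t;

typedef struct {
    size_t bytes_buffered;    /* current AIJ buffer contents (throughput proxy) */
    size_t buffer_capacity;   /* flush unconditionally when nearly full         */
    bool   device_busy;       /* is a previous AIJ I/O still outstanding?       */
    bool   commit_pending;    /* a commit record is waiting in the buffer       */
} aij_state_t;

/* Decide, after appending one record, whether to issue the group I/O now.
 * A real implementation would also re-evaluate when an outstanding AIJ I/O
 * completes (device transitions from busy to idle). */
bool should_issue_io(aij_state_t *s, record_kind_t kind)
{
    if (kind == COMMIT_RECORD)
        s->commit_pending = true;

    /* A waiting committer should not stall while the device sits idle. */
    if (s->commit_pending && !s->device_busy)
        return true;

    /* While the device is busy, keep growing the group so the eventual
     * write is as large as possible; flush early only to avoid overflow. */
    return s->bytes_buffered >= s->buffer_capacity;
}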
Abstract:
In a client/server computer system, a method for writing modified data in a cache memory back to a database residing on a hard disk drive. Rather than writing back all of the modified data as part of a checkpointing operation, the present invention designates an amount of the cache memory that is to be cleared over a pre-determined time interval. The amount of cache memory to be cleared is based on an estimate of how much new data is anticipated to be cached. In this way, just enough memory is cleared and made available to accommodate the storage of new data. For the modified data which are to be written back to the database, a lazy writeback operation is utilized.
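The sizing step can be expressed as a small function; the inputs (an estimate of incoming data plus current free and dirty byte counts) are assumptions used to illustrate the idea.

/* Sketch of sizing the per-interval writeback target; names are assumed. */
#include <stddef.h>

/* Choose how much dirty cache to clean this interval: just enough free
 * space to absorb the data expected to arrive, rather than flushing all
 * modified pages at once during a checkpoint. */
size_t writeback_target(size_t expected_new_bytes,  /* estimate for interval */
                        size_t free_bytes,          /* already-clean space   */
                        size_t dirty_bytes)         /* modified, unwritten   */
{
    if (free_bytes >= expected_new_bytes)
        return 0;                                   /* nothing to clean      */
    size_t shortfall = expected_new_bytes - free_bytes;
    return shortfall < dirty_bytes ? shortfall : dirty_bytes;
}

A lazy writeback thread would then trickle this amount back to the database over the interval instead of issuing a single burst of writes.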
Abstract:
An arbitration procedure allowing processes and their associated processors to perform useful work while they have pending service requests for access to shared resources within a multi-processor system environment. The arbitration procedure of the present invention is implemented within a multi-processor system (e.g., a symmetric multi-processor system) wherein multiple processes can simultaneously request "locks" which control access to shared resources, such that access to these shared resources is globally synchronized among the many processes. Rather than assigning arbitration to the operating system, the present invention provides an arbitration procedure that is application-specific. This arbitration procedure provides a reservation mechanism for contending processes such that any given process issues a lock call to the operating system only when a lock is available for that process, thereby avoiding spinlocks in the operating system. During the period between a lock request and a lock grant, the respective process is allowed to perform other useful work that does not need access to the shared resource. Alternatively, during this period the processor executing the respective process can execute another process that performs useful work not needing the shared resource. Each process requesting a lock grant is informed of the expected delay period, placed on a reservation queue, and assigned a reservation identifier. After releasing the lock, a process uses the reservation queue to locate the next pending process to receive the lock.
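A conceptual sketch of the reservation mechanism follows, with assumed helper routines standing in for the application-level queue and the operating system lock service.

/* Conceptual sketch; reserve, lock_available_for, pass_lock_to_next,
 * do_other_work, os_lock, and os_unlock are assumptions for illustration. */
#include <stdbool.h>

typedef struct {
    int id;              /* reservation identifier handed to the process    */
    int expected_delay;  /* e.g. number of reservations ahead of this one   */
} reservation_t;

extern reservation_t reserve(void);          /* join the reservation queue  */
extern bool lock_available_for(int id);      /* set when this id is next    */
extern void pass_lock_to_next(int id);       /* holder hands off via queue  */
extern void do_other_work(void);             /* work not needing the lock   */
extern void os_lock(void);                   /* called only when available  */
extern void os_unlock(void);

void access_shared_resource(void (*critical_section)(void))
{
    reservation_t r = reserve();             /* informed of expected delay  */

    while (!lock_available_for(r.id))        /* instead of spinning in the  */
        do_other_work();                     /* OS, do other useful work    */

    os_lock();                               /* lock is known to be free    */
    critical_section();
    os_unlock();
    pass_lock_to_next(r.id);                 /* locate next pending process */
}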
Abstract:
Methods for accurate transmission of an ELIN (Emergency Location Identification Number)/callback number from an emergency caller calling from behind a PBX/MLTS (private branch exchange/multi-line telephone system) include prioritizing the emergency call, assigning a port equipment number to each device/trunk of the PBX/MLTS, and associating ports/devices with ELINs and callback numbers. The apparatus of the invention detects an emergency number, assigns the call priority, and uses the port/device number to determine the ELIN/callback number and properly transmit it.
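A small table keyed by port equipment number is one way to picture the port-to-ELIN association; the layout and the example entries below are illustrative assumptions, not any vendor's actual provisioning data.

/* Sketch of a port-to-ELIN/callback association; all data is made up. */
#include <stddef.h>

typedef struct {
    int         port;          /* port equipment number of device/trunk */
    const char *elin;          /* Emergency Location Identification No. */
    const char *callback;      /* callback number for that device       */
} port_record_t;

static const port_record_t port_table[] = {
    { 1017, "4085550100", "4085551017" },   /* example entries only */
    { 1018, "4085550100", "4085551018" },
};

/* Given the originating port of a detected emergency call, return the
 * record whose ELIN/callback number should be transmitted with the call. */
const port_record_t *lookup_elin(int port)
{
    for (size_t i = 0; i < sizeof port_table / sizeof port_table[0]; i++)
        if (port_table[i].port == port)
            return &port_table[i];
    return NULL;  /* no association: fall back to a default ELIN policy */
}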
Abstract:
An apparatus for sensing current in a switching device having resistive voltage-current characteristics includes: a first and a second power terminal (typically common or ground) for the application therebetween of an operating potential (or alternatively, power); an impedance connected between the first power terminal and a node; a switching device having its main conduction path connected between the node and the second power terminal for controlling the flow of current through the impedance; at least one sense device coupled to the node and operative to sense or divide the potential at the node to thereby provide a sensed potential, where at least one of the sense devices is switched only during at least a portion of the period when the switching device is turned on; and a voltage reference generating circuit operative to generate a reference voltage for comparison with the sensed potential. A method for sensing current in a switching device having resistive voltage-current characteristics samples the potential at a node connecting the switching device to the impedance, only during at least a portion of the period when the switching device is turned on, to generate a sensed potential, and generates a reference potential for comparison with the sensed potential.
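Because the switching device is resistive while conducting, the sensing relationship can be summarized by a short worked relation; the on-resistance R_on, the sense-divider ratio k, and the threshold reading are assumptions made for illustration.

\[
V_{\text{node}} = I_{\text{load}}\,R_{\text{on}}, \qquad
V_{\text{sense}} = k\,V_{\text{node}} = k\,I_{\text{load}}\,R_{\text{on}}, \qquad
V_{\text{sense}} > V_{\text{ref}} \;\Longleftrightarrow\; I_{\text{load}} > \frac{V_{\text{ref}}}{k\,R_{\text{on}}}.
\]

Since the node is sampled only while the switch is on, the sensed potential reflects the conduction drop across the switch rather than the supply potential present when the switch is off.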