High-speed database checkpointing through sequential I/O to disk
    1.
    Invention Grant (Expired)

    Publication No.: US5996088A

    Publication Date: 1999-11-30

    Application No.: US787551

    Filing Date: 1997-01-22

    IPC Classes: G06F11/14 G06F11/00

    CPC Classes: G06F11/1451 G06F2201/80

    Abstract: A method for performing a checkpointing operation in a client/server computer system to safeguard data in case of a failure. The records of a database are stored in a mass storage device, such as a hard disk drive array. A separate disk drive is dedicated for use only in conjunction with checkpointing. Periodically, when a checkpoint process is initiated, the server writes a number of its modified records to checkpoint files stored on the dedicated checkpoint disk drive. The write is performed through one or more sequential I/O operations, so the modified records occupy consecutive sectors of the disk. If the server becomes disabled, the data can be recovered by reading the contents of the most recent checkpoint files and loading them sequentially back into the server's main memory.
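
    The abstract above describes a sequential-write checkpoint; the following is a minimal C sketch of that idea, assuming a fixed-size record layout and a single dedicated checkpoint file (the record structure, the path, and the function names are illustrative, not taken from the patent):

        /* Gather dirty records into one contiguous buffer and flush them to a
         * dedicated checkpoint file with a single sequential write. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        #define NRECORDS 1024
        #define RECSIZE  256

        struct record {
            int  dirty;                 /* set when the record is modified */
            char data[RECSIZE];
        };

        static struct record db[NRECORDS];   /* in-memory database image */

        static int checkpoint(const char *path)
        {
            char *buf = malloc((size_t)NRECORDS * RECSIZE);
            size_t used = 0;
            if (!buf) return -1;

            for (int i = 0; i < NRECORDS; i++) {
                if (db[i].dirty) {                     /* copy modified records only */
                    memcpy(buf + used, db[i].data, RECSIZE);
                    used += RECSIZE;
                    db[i].dirty = 0;
                }
            }

            int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { free(buf); return -1; }

            ssize_t n = write(fd, buf, used);          /* one sequential I/O */
            fsync(fd);
            close(fd);
            free(buf);
            return (n == (ssize_t)used) ? 0 : -1;
        }

        int main(void)
        {
            strcpy(db[7].data, "modified row");
            db[7].dirty = 1;
            if (checkpoint("/tmp/ckpt.dat") == 0)
                puts("checkpoint written sequentially");
            return 0;
        }

    Recovery would be the mirror image: read the most recent checkpoint file front to back and copy the records into memory in the same order.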

    Method and apparatus for selective data caching implemented with noncacheable and cacheable data for improved cache performance in a computer networking system
    2.
    Invention Grant (Expired)

    Publication No.: US6021470A

    Publication Date: 2000-02-01

    Application No.: US819673

    Filing Date: 1997-03-17

    IPC Classes: G06F12/08 G06F12/00 G06F13/00

    Abstract: A method for selectively caching data in a computer network. Initially, data objects that are expected to be accessed only once, or seldom accessed, are designated as exempt from caching. When a read request is generated, the cache controller reads the requested data object from the cache memory if it currently resides there. However, if the requested data object cannot be found in the cache memory, it is read from a mass storage device. The cache controller then determines whether the requested data object is to be cached or is exempt from caching. If the data object is exempt, it is loaded directly into a local memory and is not stored in the cache. This improves cache utilization because only objects that are used multiple times are entered into the cache. Processing overhead is also reduced by eliminating unnecessary cache insertion and purging operations, and I/O operations are minimized by increasing the likelihood that hot objects remain in the cache longer at the expense of infrequently used objects.
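
    The abstract above turns on a cacheability flag consulted after a miss; a minimal C sketch of that decision follows, assuming a simple fixed-slot cache (the cache array, read_from_disk, and get_object are illustrative names, not from the patent):

        /* Selective caching: objects marked exempt are fetched straight into
         * the caller's buffer and are never inserted into the cache. */
        #include <stdbool.h>
        #include <stdio.h>
        #include <string.h>

        #define CACHE_SLOTS 8
        #define OBJ_SIZE    64

        struct cache_entry {
            int  key;                    /* -1 marks an empty slot */
            char data[OBJ_SIZE];
        };

        static struct cache_entry cache[CACHE_SLOTS];

        /* Stand-in for a mass-storage read. */
        static void read_from_disk(int key, char *out)
        {
            snprintf(out, OBJ_SIZE, "object-%d", key);
        }

        /* Return the object for key; insert it into the cache only if it is
         * not marked exempt (e.g. expected to be accessed just once). */
        static void get_object(int key, bool exempt, char *out)
        {
            for (int i = 0; i < CACHE_SLOTS; i++)
                if (cache[i].key == key) {            /* cache hit */
                    memcpy(out, cache[i].data, OBJ_SIZE);
                    return;
                }

            read_from_disk(key, out);                 /* cache miss */
            if (exempt)
                return;                               /* bypass cache insertion */

            for (int i = 0; i < CACHE_SLOTS; i++)     /* insert into a free slot */
                if (cache[i].key == -1) {
                    cache[i].key = key;
                    memcpy(cache[i].data, out, OBJ_SIZE);
                    return;
                }
            /* a fuller implementation would evict an entry here */
        }

        int main(void)
        {
            for (int i = 0; i < CACHE_SLOTS; i++) cache[i].key = -1;

            char buf[OBJ_SIZE];
            get_object(1, false, buf);   /* reusable object: cached */
            get_object(2, true,  buf);   /* one-shot object: never cached */
            printf("%s\n", buf);
            return 0;
        }

    Because exempt objects never enter the cache, they cannot displace hot objects, which is where the improved hit rate described in the abstract comes from.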

    Addressing method and system for providing access of a very large size physical memory buffer to a number of processes
    3.
    Invention Grant (Expired)

    Publication No.: US5860144A

    Publication Date: 1999-01-12

    Application No.: US695027

    Filing Date: 1996-08-09

    IPC Classes: G06F12/10

    CPC Classes: G06F12/109 G06F12/1009

    Abstract: An addressing method and system for accessing a very large physical buffer by a number of processes. The system is applicable within a computer system having an n-bit operating system (e.g., where n is 16, 32, 64, etc.). The addressing method allocates a relatively small window of virtual address space for each software process, which is used to access the very large physical buffer with a relatively small amount of operating system memory overhead. A page frame number (PFN) table in the system address space maintains a list of the physical memory pages that make up the very large physical buffer. Each process uses the PFN table to translate between a relative page number (RPN) and the address of the physical memory page containing a record. The virtual address space ("window") of each process is used to access the physical memory buffer and contains a hash table, a virtual access control block (VACB) free list, and a VACB table. Entries of the VACB table indicate addresses of virtual memory for the process. Each process also has an associated private page table entry (PTE) table which maintains a mapping between its virtual pages and the physical pages. To map a record, its RPN is determined and used to obtain the address of the physical page(s) in which the record resides. The free list supplies an entry of the VACB table containing a virtual address for the record, and the virtual address and the physical address are mapped into the PTE table.
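
    To make the window mechanism in the abstract concrete, here is a minimal C sketch that simulates mapping a record's relative page number through a shared PFN table into one of a few per-process window slots (all table sizes and names are illustrative assumptions, and a real implementation would update hardware PTEs rather than a plain array):

        /* A small per-process "window" of slots is mapped on demand onto pages
         * of a much larger physical buffer via a shared PFN table. */
        #include <stdio.h>

        #define PHYS_PAGES   1024     /* pages backing the very large buffer */
        #define WINDOW_SLOTS 4        /* small per-process virtual window */

        static unsigned long pfn_table[PHYS_PAGES];  /* RPN -> physical frame */

        struct window_slot {                         /* one VACB-like entry */
            int           in_use;
            int           rpn;                       /* relative page number */
            unsigned long pfn;                       /* frame it currently maps */
        };

        static struct window_slot window[WINDOW_SLOTS];

        /* Map a record's relative page number into a free window slot, as a
         * process would do before touching the record; returns the slot or -1. */
        static int map_rpn(int rpn)
        {
            for (int i = 0; i < WINDOW_SLOTS; i++) {
                if (!window[i].in_use) {
                    window[i].in_use = 1;
                    window[i].rpn    = rpn;
                    window[i].pfn    = pfn_table[rpn];  /* the PTE would be written here */
                    return i;
                }
            }
            return -1;   /* window full: a slot must be unmapped and reused */
        }

        int main(void)
        {
            for (int i = 0; i < PHYS_PAGES; i++)
                pfn_table[i] = 0x10000UL + (unsigned long)i;   /* fake frame numbers */

            int slot = map_rpn(37);
            if (slot >= 0)
                printf("RPN 37 mapped through slot %d to frame 0x%lx\n",
                       slot, window[slot].pfn);
            return 0;
        }

    The point of the window is that each process only ever needs PTEs for its few slots rather than for the entire buffer, which is what keeps the operating system memory overhead small.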

    Addressing method and system for sharing a large memory address space using a system space global memory section
    4.
    Invention Grant (Expired)

    Publication No.: US5893166A

    Publication Date: 1999-04-06

    Application No.: US847046

    Filing Date: 1997-05-01

    IPC Classes: G06F12/10 G06F12/14

    CPC Classes: G06F12/109 G06F12/1491

    Abstract: An addressing method and computer system for sharing a large memory address space using address space within an operating system's virtual address space. The system allows the SSB to be shared by many processes without the disadvantages associated with process-based global sections. For instance, the system does not require that each process maintain its own dedicated page table entries (PTEs) in order to access the SSB, and therefore needs less operating system virtual memory to maintain the PTE data structures. The system uses a process that switches to kernel mode and identifies those sections of the operating system virtual memory space that are not being used; in some cases the unused address space can be 1.5-1.8 gigabytes in size. The unused address space is linked together to form the SSB. The system alters the privileges of the PTEs corresponding to the SSB so that user-mode processes can access this normally protected operating system virtual memory space. The result is a statically mapped large memory address buffer (SSB) that can be immediately shared by all processes within the computer system while consuming only a single, statically mapped set of PTEs that all processes can use. In one example, 500 processes mapping a 2 gigabyte SSB require only 2 megabytes of memory for the corresponding PTEs, assuming conventional memory page sizes. In one example, the SSBs are allocated from a system space virtual memory map which is 2 gigabytes in size in a 32-bit VMS operating system.
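
    The PTE-overhead figures in the abstract can be checked with a little arithmetic; the sketch below assumes 8 KB pages and 8-byte PTEs (the abstract only says "conventional memory page sizes", so these constants are assumptions):

        /* Back-of-the-envelope check: PTE storage for a shared, statically
         * mapped SSB versus per-process PTEs for the same 2 GB region. */
        #include <stdio.h>

        int main(void)
        {
            const unsigned long ssb_bytes = 2UL << 30;   /* 2 GB SSB */
            const unsigned long page_size = 8UL << 10;   /* assumed 8 KB pages */
            const unsigned long pte_size  = 8;           /* assumed bytes per PTE */
            const unsigned long nprocs    = 500;

            unsigned long pages    = ssb_bytes / page_size;   /* 262,144 pages */
            unsigned long shared   = pages * pte_size;        /* one shared set of PTEs */
            unsigned long per_proc = shared * nprocs;         /* if every process kept its own */

            printf("shared system-space PTEs : %lu MB\n", shared >> 20);    /* 2 MB */
            printf("per-process PTEs (500x)  : %lu MB\n", per_proc >> 20);  /* 1000 MB */
            return 0;
        }

    With those assumptions the shared mapping costs about 2 MB of PTEs in total, versus roughly 1 GB if each of the 500 processes carried its own page table entries for the same region.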

    Method of sequencing lock call requests to an O/S to avoid spinlock contention within a multi-processor environment
    5.
    Invention Grant (Expired)

    Publication No.: US5790851A

    Publication Date: 1998-08-04

    Application No.: US843332

    Filing Date: 1997-04-15

    IPC Classes: G06F9/46 G06F9/40

    CPC Classes: G06F9/52

    Abstract: An arbitration procedure that allows processes and their associated processors to perform useful work while they have pending service requests for access to shared resources within a multi-processor system environment. The arbitration procedure of the present invention is implemented within a multi-processor system (e.g., a symmetric multi-processor system) in which multiple processes can simultaneously request "locks" that control access to shared resources, so that access to these shared resources is globally synchronized among the many processes. Rather than assigning arbitration to the operating system, the present invention provides an arbitration procedure that is application-specific. This arbitration process provides a reservation mechanism for contending processes such that any given process only issues a lock call to the operating system when the lock is available for that process, thereby avoiding spinlock contention in the operating system. During the period between a lock request and a lock grant, the process is allowed to perform other useful work that does not need access to the shared resource; alternatively, the processor executing that process can run another process that performs useful work not needing the shared resource. Each process requesting a lock grant is informed of the expected delay period, placed on a reservation queue, and assigned a reservation identifier. After releasing the lock, the process uses the reservation queue to locate the next pending process to receive the lock.
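
    The reservation scheme in the abstract can be sketched with a ticket counter: a process takes a reservation, keeps doing unrelated work until its turn arrives, and only then issues the lock call, so the lock is uncontended when it is finally requested. The C sketch below uses a pthread mutex as a stand-in for the O/S lock; the ticket counters and function names are illustrative assumptions, not from the patent:

        /* Reservation-based locking: no process spins on the O/S lock itself. */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        static atomic_int next_ticket = 0;    /* reservation identifiers */
        static atomic_int now_serving = 0;    /* whose reservation is up */
        static pthread_mutex_t os_lock = PTHREAD_MUTEX_INITIALIZER;
        static int shared_counter = 0;        /* the protected shared resource */

        static void do_other_useful_work(void) { /* e.g. process local data */ }

        static void *worker(void *arg)
        {
            (void)arg;
            int ticket = atomic_fetch_add(&next_ticket, 1);   /* join the reservation queue */

            /* Until our reservation comes up, do useful work instead of
             * spinning inside the operating system. */
            while (atomic_load(&now_serving) != ticket)
                do_other_useful_work();

            pthread_mutex_lock(&os_lock);      /* the lock is free for us by now */
            shared_counter++;
            pthread_mutex_unlock(&os_lock);

            atomic_fetch_add(&now_serving, 1); /* hand the lock to the next reservation */
            return NULL;
        }

        int main(void)
        {
            pthread_t t[4];
            for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
            for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
            printf("counter = %d\n", shared_counter);   /* always 4 */
            return 0;
        }

    Compile with -pthread. Because only the process whose reservation is current ever calls pthread_mutex_lock, the lock call itself never contends, mirroring the abstract's claim of avoiding spinlock contention in the operating system.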
