Abstract:
A system for concurrent processing of queries and transactions against a shared database. The system includes multiple processors wherein a processor is available for processing queries and another processor is available for concurrently processing transactions. A query buffer is established for performing the query search while the data accessed by transactions is available in a database cache. Control logic in a database management system distinguishes between transactions and queries and initiates file-read control for reading the file containing the database. File-read control contains logic for logical sequential reading and logical non-sequential reading of the file. Control structures provide a means for control over the load that the query is allowed to place on the system.
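A minimal sketch of the control logic described above, assuming hypothetical names (ControlLogic, FileReadControl, QUERY_IO_LIMIT) rather than the patented implementation: queries are staged through a separate query buffer under a load limit, while transactions operate against the database cache.

```python
import threading

# Hypothetical sketch: control logic that routes queries and transactions to
# separate paths and caps the load a query may place on the system.

QUERY_IO_LIMIT = 64          # assumed cap on concurrent query reads


class FileReadControl:
    """Reads the database file either logically sequentially or non-sequentially."""
    def __init__(self, db_file):
        self.db_file = db_file

    def read_sequential(self, start, count):
        return [self.db_file[i] for i in range(start, start + count)]

    def read_non_sequential(self, record_ids):
        return [self.db_file[i] for i in record_ids]


class ControlLogic:
    def __init__(self, db_file, db_cache):
        self.file_read = FileReadControl(db_file)
        self.db_cache = db_cache                      # shared by transactions
        self.query_buffer = []                        # separate buffer for query scans
        self.query_io = threading.BoundedSemaphore(QUERY_IO_LIMIT)

    def submit(self, request):
        if request["kind"] == "query":
            # Query: stage records into the query buffer under the load limit.
            with self.query_io:
                self.query_buffer = self.file_read.read_sequential(
                    request["start"], request["count"])
            return self.query_buffer
        # Transaction: operate against the database cache.
        rec_id = request["record"]
        self.db_cache[rec_id] = request["new_value"]
        return self.db_cache[rec_id]
```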
Abstract:
A system for parallel reading and processing of a file. The system includes multiple disks for storing the file. The disks are coupled to a data processing system via multiple input-output channels. A file buffer is established in the memory of the data processing system, wherein the file buffer is shared by an instruction processor that initiates a parallel read request and manipulates the file data once it is read, and multiple input-output processors that are coupled to the input-output channels. Multiple input requests are issued to the multiple disks to be processed in parallel. The input-output processors write file data to a first portion of the file buffer in parallel with the reading of a second portion of the file buffer by the instruction processor. Control structures provide a means for control over the input processing demands that the parallel read request is allowed to place on the system.
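A possible sketch of the shared file buffer, using threads as stand-ins for the input-output processors and the instruction processor; the segment size, the outstanding-request cap, and all names are assumptions, not the patented design.

```python
import threading
import queue

SEGMENT_SIZE = 4            # records per buffer segment (assumed)
MAX_OUTSTANDING = 2         # cap on filled-but-unprocessed segments (load control)


def parallel_read(disks, process_record):
    """I/O workers fill one portion of the buffer while the consumer reads another."""
    filled = queue.Queue(maxsize=MAX_OUTSTANDING)    # segments ready for processing

    def io_worker(disk_id, data):
        # Each worker reads its disk's portion and writes it into the file buffer.
        for start in range(0, len(data), SEGMENT_SIZE):
            filled.put((disk_id, data[start:start + SEGMENT_SIZE]))
        filled.put((disk_id, None))                  # end-of-data marker for this disk

    workers = [threading.Thread(target=io_worker, args=(d, data))
               for d, data in enumerate(disks)]
    for w in workers:
        w.start()

    finished = 0
    while finished < len(disks):
        disk_id, segment = filled.get()              # consume one filled portion
        if segment is None:
            finished += 1
        else:
            for record in segment:
                process_record(disk_id, record)
    for w in workers:
        w.join()


# Example: two "disks", each holding a few records.
parallel_read([["a1", "a2", "a3"], ["b1", "b2"]], lambda d, r: print(d, r))
```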
Abstract:
A multiprocessor data processing system is implemented with processors, each of which may request a temporary exclusive lock on an object stored in a database. To achieve this, a lock processor synchronizes the locking and unlocking of the objects. The requesting processor directs the storage of the object from the database into a selected high performance storage unit, where it has exclusive rights to modify or write into the object until the object is unlocked by the processor. An audit tape or disk records all modifications made to any object during a transaction. A non-volatile cache memory is inserted in the audit trail to store a before-look image of the object that resides in the high performance storage unit. Data compaction occurs by comparing the before-look image with an after-look image to produce a difference image, which is supplied to an audit buffer coupled to the audit tape. The locking processor may unlock the secured object once the after-look image has been committed to the database disk from either a stored version in the non-volatile cache or from a high performance main memory unit. The difference image and its associated after-look image may then be stored in the non-volatile cache, and provided to the audit tape or disk and the database disk in a sequence that is independent of the operating sequence of the requesting processor.
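A brief sketch of the compaction step, under the assumption that the difference image is a list of changed byte runs; the function name and representation are illustrative, not taken from the patent.

```python
# Hypothetical sketch: compare the before-look image captured in the
# non-volatile cache with the after-look image and keep only the bytes
# that changed, so the audit buffer receives a compact difference image.

def difference_image(before_look: bytes, after_look: bytes):
    """Return a list of (offset, new_bytes) runs describing what changed."""
    diffs, run_start = [], None
    length = max(len(before_look), len(after_look))
    for i in range(length):
        b = before_look[i] if i < len(before_look) else None
        a = after_look[i] if i < len(after_look) else None
        if a != b:
            if run_start is None:
                run_start = i
        elif run_start is not None:
            diffs.append((run_start, after_look[run_start:i]))
            run_start = None
    if run_start is not None:
        diffs.append((run_start, after_look[run_start:length]))
    return diffs


# The difference image, rather than the full after-look, is what would be
# appended to the audit buffer feeding the audit tape.
print(difference_image(b"ACCT=0100 BAL=0500", b"ACCT=0100 BAL=0725"))
```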
Abstract:
A data processing system including a first and second host, a first and second outboard file cache connected to the first host, and a first and second secondary storage device connected to the first host. The system operation includes the first host reading file data from the first or second secondary storage device after the data is cached on both the first and second outboard file caches. File data is updated by writing to both the first and second outboard file caches. File data is destaged by writing data from the first outboard file cache only, to the first and second secondary storage devices. Failure of a single outboard file cache is handled by the first host no longer reading from or writing to the failed outboard file cache. Site-wide failure of the first host, first outboard file cache, and first secondary storage device is handled by establishing communication from the second host to both the second outboard file cache and the second secondary storage device and resuming processing.
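One possible reading of the caching discipline above, expressed as a sketch: data is staged into both outboard caches before a read completes, updates go to both caches, and destage flows from the first cache to both secondary storage devices. Class and method names are illustrative assumptions.

```python
class MirroredFileCache:
    """Hypothetical sketch of dual outboard file caches over dual secondary storage."""

    def __init__(self, cache_a, cache_b, disk_a, disk_b):
        self.caches = [cache_a, cache_b]          # first and second outboard file caches
        self.disks = [disk_a, disk_b]             # first and second secondary storage
        self.failed = set()                       # indices of failed caches

    def read(self, key):
        # Stage from secondary storage into both healthy caches, then read.
        for i, cache in enumerate(self.caches):
            if i not in self.failed and key not in cache:
                cache[key] = self.disks[0].get(key, self.disks[1].get(key))
        source = 0 if 0 not in self.failed else 1
        return self.caches[source][key]

    def write(self, key, value):
        # File data is updated by writing to both caches.
        for i, cache in enumerate(self.caches):
            if i not in self.failed:
                cache[key] = value

    def destage(self, key):
        # Destage from the first cache only, to both secondary storage devices.
        source = 0 if 0 not in self.failed else 1
        value = self.caches[source][key]
        for disk in self.disks:
            disk[key] = value

    def mark_failed(self, index):
        # Single-cache failure: the host simply stops reading and writing it.
        self.failed.add(index)
```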
Abstract:
A record lock processor provides a common facility for control of the locking and unlocking of mass storage objects (for example, records, files, pages or any other logical entity) that is shared by a number of loosely-coupled data processors. The terms "record" or "records" wherever they are used in this document are intended to refer to all such objects, including records, files, pages or any other logical grouping or entity into which the mass storage may be partitioned. Each of the data processors has access to all of the shared mass storage. Three Lock Modules all receive the same requests, and majority voting techniques are used to determine the result. A fourth Lock Module is included as a Hot Spare Module. A Maintenance Module receives the same requests as the voting Lock Modules and is therefore able to interpret results on-line based on user requests. Programmable Channel Interfaces provide the operational interface to the host processors. The Lock Modules are also programmable; they hold Lock Requests and Queued Lock Requests and execute locking and unlocking algorithms in response thereto.
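A short sketch of the majority-voting step: the same request goes to the three voting Lock Modules and the answer returned by at least two of them is taken as the result. The module behaviour here is simulated and all names are assumptions.

```python
from collections import Counter


def voted_lock_result(lock_modules, request):
    """Send one request to every voting module and return the majority answer."""
    answers = [module(request) for module in lock_modules]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority among lock modules")
    return answer


# Example with three simulated modules, one of which disagrees.
modules = [lambda r: "GRANTED", lambda r: "GRANTED", lambda r: "QUEUED"]
print(voted_lock_result(modules, ("lock", "record-42", "host-A")))  # -> GRANTED
```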
Abstract:
A Record Lock Processor is utilized in a multi-host data processing system to control the locking of Objects, upon request of each of the multiple host data processors, in a non-conflicting manner. The Record Lock Processor has storage provisions which include a Lock List for storing bits that identify the Objects and bits that identify the requesting processor; a Queue List that stores entries formatted like a Lock List entry whenever a prior Lock List entry has already been made for the same Object; and a Cache List for each requesting processor that stores Cache List entries identifying each Object held in that processor's cache memory, wherein each Cache List entry includes validity bits that indicate whether the Object it identifies has a Valid or an Invalid status.
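A minimal sketch of the three storage provisions named above: a Lock List keyed by Object, a Queue List of waiting requests for Objects already locked, and a per-host Cache List whose validity bits are cleared when another host releases (after possibly modifying) a cached Object. All names and the exact invalidation policy are assumptions.

```python
class RecordLockProcessor:
    """Hypothetical sketch of Lock List, Queue List, and per-host Cache Lists."""

    def __init__(self, hosts):
        self.lock_list = {}                       # object_id -> holding host
        self.queue_list = {}                      # object_id -> [waiting hosts]
        self.cache_list = {h: {} for h in hosts}  # host -> {object_id: validity bit}

    def request_lock(self, host, object_id):
        if object_id not in self.lock_list:
            self.lock_list[object_id] = host
            self.cache_list[host][object_id] = True        # cached copy is Valid
            return "GRANTED"
        # A prior Lock List entry exists for this Object: queue the request.
        self.queue_list.setdefault(object_id, []).append(host)
        return "QUEUED"

    def release_lock(self, host, object_id):
        assert self.lock_list.get(object_id) == host
        # Other hosts' cached copies of this Object may now be stale.
        for other, entries in self.cache_list.items():
            if other != host and object_id in entries:
                entries[object_id] = False                  # validity bit -> Invalid
        waiters = self.queue_list.get(object_id, [])
        if waiters:
            nxt = waiters.pop(0)
            self.lock_list[object_id] = nxt
            self.cache_list[nxt][object_id] = True
            return ("GRANTED", nxt)
        del self.lock_list[object_id]
        return ("FREE", None)
```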
Abstract:
Multi-processor computer systems with multiple levels of cache memories are slowed down when processing software locks for common functions. This invention obviates the problem for the vast majority of transactions by providing an alternate procedure for handling so-called communal locks differently from ordinary software locks. The alternate procedure is not used for ordinary (non-communal software lock) data and instruction transfers. The function of the CSWL (Communal SoftWare Lock) is actually performed at the specific cache to which an individual CSWL is mapped, rather than by sending the lock to the requester, which further enhances efficiency.
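A small sketch of the idea of mapping a communal lock to one specific cache and performing the lock operation there, instead of migrating the lock to whichever requester asks for it. The hash, the cache count, and all names are illustrative assumptions.

```python
NUM_CACHES = 8                              # assumed number of caches


def owning_cache(lock_address: int) -> int:
    """Every CSWL is statically mapped to exactly one cache."""
    return lock_address % NUM_CACHES


class CommunalLockCache:
    """One per cache; performs lock operations locally for the CSWLs it owns."""

    def __init__(self):
        self.locks = {}                     # lock_address -> holder or None

    def try_acquire(self, lock_address, requester):
        holder = self.locks.get(lock_address)
        if holder is None:
            self.locks[lock_address] = requester
            return True                     # granted without moving the lock
        return False                        # requester retries; lock never migrates

    def release(self, lock_address, requester):
        if self.locks.get(lock_address) == requester:
            self.locks[lock_address] = None


caches = [CommunalLockCache() for _ in range(NUM_CACHES)]
cache = caches[owning_cache(0x5F40)]
print(cache.try_acquire(0x5F40, "processor-3"))   # -> True
```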
Abstract:
Multi-processor computer systems with multiple levels of cache memories are given an alternate pathway for handling highly contended-for locks. These are called communal locks. The alternate pathway allows alternate processing schemes that do not impede the performance of the overall system, as otherwise happens in such computer systems when contended-for locks bounce back and forth between contending caches, crimping storage bus bandwidth and system performance. The alternate pathway is not used for ordinary (non-communal software lock) data and instruction transfers.
Abstract:
A method of and apparatus for efficiently providing video on demand services to a cable television subscriber. The provider system consists of two major subsystems. The first subsystem, called a video server, streams video to video on demand subscribers through the cable television network. The second subsystem, called the transaction server, performs virtually all remaining provider functions, including security, accounting, and storage and spooling of video data. The video server preferably uses a Unisys CMP memory platform into which the transaction server spools requested video programs. One or more industry standard processors operating under a standard operating system stream the video data from the memory platform to the subscriber.
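A rough sketch of the division of labour described above: the transaction server spools a requested program into shared memory, and the video server streams it out in fixed-size chunks. The chunk size, names, and the stubbed send function are illustrative assumptions, not details from the patent.

```python
STREAM_CHUNK = 188 * 7          # assumed chunk size (MPEG transport packets per send)


def spool_program(memory_platform, program_id, video_bytes):
    # Transaction-server side: place the requested program in shared memory.
    memory_platform[program_id] = video_bytes


def stream_program(memory_platform, program_id, send):
    # Video-server side: push the spooled program to the subscriber connection.
    data = memory_platform[program_id]
    for offset in range(0, len(data), STREAM_CHUNK):
        send(data[offset:offset + STREAM_CHUNK])


memory_platform = {}
spool_program(memory_platform, "movie-001", b"\x00" * 10_000)
stream_program(memory_platform, "movie-001", lambda chunk: None)  # send() stubbed out
```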
Abstract:
The present invention overcomes many of the disadvantages associated with the prior art by providing an automated, real time performance monitoring facility which runs periodically as a background process in a computer system. This invention preferably uses performance data collection sites already present in the computer system's hardware, microcode and/or operating system software. At a user selectable period of time, a sampling of key performance factors is taken from the performance data collection sites. The performance monitor then analyzes the sampled results by comparing them against two or more performance threshold levels (such as early warning or actual performance limiters) for each performance criterion. If either an actual or an early warning performance limiter is detected, an easy-to-understand, color coded informational message is provided to a computer operator, identifying subsystems that are performance inhibitors along with suggestions of specific upgrade solutions that will address the identified performance problems.
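A small sketch of the periodic check: each sampled performance factor is compared against an early-warning threshold and an actual-limiter threshold, and a color-coded message names the offending subsystem together with a suggested upgrade. The factors, thresholds, and messages are illustrative assumptions.

```python
THRESHOLDS = {
    # factor:           (early_warning, actual_limit, suggested upgrade)
    "cpu_busy_pct":     (80.0, 95.0, "add instruction processors"),
    "io_queue_len":     (4.0,  8.0,  "add I/O channels or disks"),
    "memory_used_pct":  (85.0, 97.0, "add main storage"),
}


def check_samples(samples):
    """Return (color, message) tuples for any factor past a threshold."""
    messages = []
    for factor, value in samples.items():
        warn, limit, upgrade = THRESHOLDS[factor]
        if value >= limit:
            messages.append(("RED", f"{factor} is a performance limiter; {upgrade}"))
        elif value >= warn:
            messages.append(("YELLOW", f"{factor} nearing its limit; consider: {upgrade}"))
    return messages


# Example run of one sampling period.
print(check_samples({"cpu_busy_pct": 91.0, "io_queue_len": 9.0, "memory_used_pct": 60.0}))
```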