Abstract:
A video on demand computer system includes a plurality of storage media each storing a plurality of videos. The storage media are disks attached to a computer system. The computer system plays the videos on demand by reading out the videos from the disks as data streams to play selected ones of the videos for users responsive to received user performance requests. The computer system is programmed to monitor the number of videos being performed for each of the disks. Based on this monitoring function, the computer system performs a load balancing function by transferring the performance of a video in progress from one of the disks to another disk having a copy of the video in progress. The computer system periodically performs a reassignment function to transfer videos between the disks to optimize load balancing based on the user performance requests for each of the videos. There are two phases to the load balancing performed by the computer system: a static phase and a dynamic phase. In the static phase, videos are assigned to memory and disks; in the dynamic phase, a scheme is provided for playing videos with minimal and balanced loads on the disks. The static phase supports the dynamic phase, which ensures optimal real-time operation of the system. Dynamic phase load balancing is accomplished by a process of baton passing.
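The dynamic phase can be pictured roughly as follows. This is a minimal illustrative sketch in Python, not the patented implementation; the class name, the replica map, and the one-transfer-per-tick rule are assumptions made for illustration.

```python
# Illustrative sketch of dynamic-phase load balancing ("baton passing"):
# a stream in progress is handed from a heavily loaded disk to a lightly
# loaded disk that holds a copy of the same video. Names are assumptions.

from collections import defaultdict

class VodBalancer:
    def __init__(self, replicas):
        # replicas: video -> set of disks holding a copy (static-phase assignment)
        self.replicas = replicas
        self.streams = defaultdict(list)   # disk -> list of (user, video) in progress

    def start(self, user, video):
        # Play from the replica disk currently serving the fewest streams.
        disk = min(self.replicas[video], key=lambda d: len(self.streams[d]))
        self.streams[disk].append((user, video))
        return disk

    def rebalance(self):
        # Dynamic phase: pass one stream ("baton") from the busiest disk to a
        # less busy disk that also stores the video, if that reduces imbalance.
        busiest = max(self.streams, key=lambda d: len(self.streams[d]), default=None)
        if busiest is None:
            return
        for user, video in list(self.streams[busiest]):
            candidates = self.replicas[video] - {busiest}
            if not candidates:
                continue
            target = min(candidates, key=lambda d: len(self.streams[d]))
            if len(self.streams[target]) + 1 < len(self.streams[busiest]):
                self.streams[busiest].remove((user, video))
                self.streams[target].append((user, video))
                return  # one baton pass per rebalance tick
```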
Abstract:
A plurality of queries (jobs), each consisting of a set of tasks with precedence constraints between them, are optimally scheduled in two stages of scheduling for processing on a parallel processing system. In a first stage of scheduling, multiple optimum schedules are created for each job, one optimum schedule for each possible number of processors which might be used to execute that job, and an estimated job execution time is determined for each of the optimum schedules created for each job, thereby producing a set of estimated job execution times for each job as a function of the number of processors used for the job execution. Precedence constraints between tasks in each job are respected in creating all of these optimum schedules. Any known optimum scheduling method for parallel processing tasks that have precedence constraints among tasks may be used, but a novel preferred method is also disclosed. The second stage of scheduling utilizes the estimated job execution times determined in the first stage to create an overall optimum schedule for the jobs. The second stage of scheduling does not involve precedence constraints because the precedence constraints are between tasks within the same job and not between tasks in separate jobs, so jobs may be scheduled without observing any precedence constraints. Any known optimum scheduling method for the parallel processing of jobs that have no precedence constraints may be used, but a novel preferred method is also disclosed.
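The two-stage structure can be sketched as below. The `estimate_makespan` callable stands in for any optimal task-level scheduler, and the greedy second stage is a placeholder assumption, not the disclosed preferred method.

```python
# Illustrative two-stage sketch: stage 1 tabulates an estimated execution
# time per job for every possible processor count; stage 2 schedules whole
# jobs using only that table (no inter-job precedence constraints).

def stage_one(jobs, total_procs, estimate_makespan):
    # jobs: {job_id: task_graph}; returns {job_id: {p: estimated_time}}
    return {j: {p: estimate_makespan(g, p) for p in range(1, total_procs + 1)}
            for j, g in jobs.items()}

def stage_two(times, total_procs):
    # Greedy placeholder for the second-stage scheduler: give each job the
    # processor count minimizing total work (time * procs), then order by time.
    plan = []
    for job, by_p in times.items():
        p = min(by_p, key=lambda q: by_p[q] * q)
        plan.append((job, p, by_p[p]))
    plan.sort(key=lambda e: e[2])           # shortest estimated time first
    return plan
```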
Abstract:
A multi-processor computer system in which each processor is under the control of separate system software and accesses a common database. A two-level lock management system is used to prevent data corruption due to unsynchronized data access by the multiple processors. By this system, subsets of data in the database are assigned respectively different lock entities. Before a task running on one of the processors accesses data in the database, it first requests permission to access the data in a given mode with reference to the appropriate lock entity. A first level lock manager handles these requests synchronously, using a simplified model of the locking system having shared and exclusive lock modes to either grant or deny the request. All requests are then forwarded to a second level lock manager which grants or denies the request based on a more robust model of the locking system and queues denied requests. The denied requests are granted, in turn, as the tasks which have been granted access finish processing data in the database.
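A minimal sketch of the two-level idea, assuming a single lock entity with shared ("S") and exclusive ("X") modes; the class and method names are illustrative assumptions, not the described lock managers.

```python
# Sketch: a fast first-level grant/deny using a simplified shared/exclusive
# model, with denied requests queued and granted later as holders finish.

from collections import deque

class LockEntity:
    def __init__(self):
        self.holders = {}          # task -> "S" or "X"
        self.waiters = deque()     # queued (task, mode) requests (second level)

    def _compatible(self, mode):
        if not self.holders:
            return True
        if mode == "S":
            return all(m == "S" for m in self.holders.values())
        return False               # exclusive conflicts with any holder

    def request(self, task, mode):
        # First level: synchronous grant or deny using the simplified model.
        if self._compatible(mode):
            self.holders[task] = mode
            return True
        # Second level: deny now and queue the request for a later grant.
        self.waiters.append((task, mode))
        return False

    def release(self, task):
        self.holders.pop(task, None)
        # Queued requests are granted in turn as holders finish their work.
        while self.waiters and self._compatible(self.waiters[0][1]):
            t, m = self.waiters.popleft()
            self.holders[t] = m
```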
Abstract:
A protocol for a switching system that establishes multiple parallel paths between users through multiple autonomous switching planes by having a user desiring a connection issue connection requests to each of the switching planes. According to the invention, the user monitors the number of connections that have been successfully completed and, if only some of the connections have been completed because of conflicting requests, it follows a conflict protocol to issue retry requests to the planes on which the connection request was unsuccessful. Each switching plane follows the conflict protocol to respond to the retry request by disconnecting existing connections and completing at most one retried connection request.
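One way to picture the request/retry exchange is sketched below; the `Plane.connect` and `Plane.retry` interfaces and the endpoint-based conflict rule are assumptions made for illustration, not the protocol as claimed.

```python
# Sketch: a user asks every autonomous plane for a connection, then retries
# only on the planes that refused because of a conflict.

class Plane:
    def __init__(self):
        self.busy = set()

    def connect(self, src, dst):
        # Refuse if either endpoint is already in use on this plane (conflict).
        if src in self.busy or dst in self.busy:
            return False
        self.busy.update((src, dst))
        return True

    def retry(self, src, dst):
        # Conflict protocol: disconnect existing conflicting connections, then
        # complete at most one retried connection request.
        self.busy.discard(src)
        self.busy.discard(dst)
        return self.connect(src, dst)

def establish(planes, src, dst):
    granted = [p.connect(src, dst) for p in planes]
    if not all(granted):                      # only some planes succeeded
        granted = [ok or p.retry(src, dst) for p, ok in zip(planes, granted)]
    return granted
```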
Abstract:
Various embodiments for maintaining security and confidentiality of data and operations within a fraud detection system. Each of these embodiments utilizes a secure architecture in which: (1) access to data is limited to only approved or authorized entities; (2) confidential details in received data can be readily identified and concealed; and (3) confidential details that have become non-confidential can be identified and exposed.
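The three properties might be illustrated roughly as follows; the record store, the authorization set, and the redaction marker are all assumptions for the sketch, not the disclosed architecture.

```python
# Sketch of (1) access limited to approved entities, (2) concealment of
# confidential details, and (3) exposure of details once declassified.

class SecureRecordStore:
    def __init__(self, authorized, confidential_fields):
        self.authorized = set(authorized)             # (1) approved entities only
        self.confidential = set(confidential_fields)  # (2) fields to conceal
        self.records = []

    def add(self, record):
        self.records.append(dict(record))

    def declassify(self, field):
        # (3) a detail that is no longer confidential becomes visible again.
        self.confidential.discard(field)

    def read(self, entity):
        if entity not in self.authorized:
            raise PermissionError(f"{entity} is not an approved entity")
        return [{k: ("<concealed>" if k in self.confidential else v)
                 for k, v in r.items()} for r in self.records]
```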
Abstract:
Streaming environments typically dictate incomplete or approximate algorithm execution, in order to cope with sudden surges in the data rate. Such limitations are even more accentuated in mobile environments (such as sensor networks) where computational and memory resources are typically limited. Introduced herein is a novel “resource adaptive” algorithm for spectrum and periodicity estimation on a continuous stream of data. The formulation is based on the derivation of a closed-form incremental computation of the spectrum, augmented by an intelligent load-shedding scheme that can adapt to available CPU resources. Experimentation indicates that the proposed technique can be a viable and resource efficient solution for real-time spectrum estimation.
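A simplified sketch of the two ingredients, an incremental (sliding) DFT update plus a bin-count load-shedding policy, is shown below; the budget rule is an assumption for illustration, not the proposed load-shedding scheme.

```python
# Sketch: closed-form incremental spectrum update per frequency bin, with
# naive load shedding that updates only as many bins as the CPU budget allows.

import cmath

class SlidingSpectrum:
    def __init__(self, window):
        self.N = len(window)
        self.buf = list(window)
        # Initial DFT coefficients, one per frequency bin.
        self.X = [sum(x * cmath.exp(-2j * cmath.pi * k * n / self.N)
                      for n, x in enumerate(window)) for k in range(self.N)]

    def update(self, x_new, budget):
        # Incremental update: drop the oldest sample, add the newest, and
        # rotate each coefficient by its bin-dependent phase factor.
        x_old = self.buf.pop(0)
        self.buf.append(x_new)
        bins = range(min(budget, self.N))   # load shedding: update only `budget` bins
        for k in bins:
            self.X[k] = (self.X[k] - x_old + x_new) * cmath.exp(2j * cmath.pi * k / self.N)
        return self.X
```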
Abstract:
One embodiment of the present method and apparatus for adaptive in-operator load shedding includes receiving at least two data streams (each comprising a plurality of tuples, or data items) into respective sliding windows of memory. A throttling fraction is then calculated based on input rates associated with the data streams and on currently available processing resources. Tuples are then selected for processing from the data streams in accordance with the throttling fraction, where the selected tuples represent a subset of all tuples contained within the sliding windows.
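A minimal sketch of the throttling idea, assuming a proportional capacity/demand fraction and random tuple selection; both are assumptions, not the disclosed selection rule.

```python
# Sketch: derive a throttling fraction from input rates versus available
# capacity, then keep only that fraction of tuples from a sliding window.

import random

def throttling_fraction(input_rates, capacity):
    # Fraction of arriving tuples the system can afford to process (<= 1.0).
    demand = sum(input_rates)
    return 1.0 if demand <= capacity else capacity / demand

def shed(window, fraction, rng=random.Random(0)):
    # Select a subset of the window's tuples according to the fraction.
    return [t for t in window if rng.random() < fraction]
```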
Abstract:
A system and method are provided for optimizing component composition in a distributed stream-processing environment having a plurality of nodes capable of being associated with one or more of a plurality of stream processing components. The system includes an adaptive composition probing (ACP) module and a hierarchical state manager. The ACP module probes a subset of the plurality of stream processing components to determine the optimal component composition in response to a stream processing request. The hierarchical state manager manages local and global information for use by said ACP module in determining the optimal component composition.
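The probing step might look roughly like this; the scoring function, the probe ratio, and random sampling are illustrative assumptions rather than the ACP algorithm itself.

```python
# Sketch: evaluate a sampled subset of candidate component compositions and
# keep the best-scoring one (e.g. lowest end-to-end latency).

import itertools
import random

def probe_compositions(components_per_stage, score, probe_ratio=0.2,
                       rng=random.Random(0)):
    # All possible compositions: one component chosen per processing stage.
    candidates = list(itertools.product(*components_per_stage))
    k = max(1, int(len(candidates) * probe_ratio))
    probed = rng.sample(candidates, k)        # probe only a subset
    return min(probed, key=score)
```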
Abstract:
A computer-implemented method, system, and a computer readable article of manufacture identify local patterns in at least one time series data stream. A data stream is received that comprises at least one set of time series data. The at least one set of time series data is formed into a set of multiple ordered levels of time series data. Multiple ordered levels of hierarchical approximation functions are generated directly from the multiple ordered levels of time series data. A set of approximating functions are created for each level. A current window with a current window length is selected from a set of varying window lengths. The set of approximating functions created at one level in the multiple ordered levels is passed to a subsequent level as a set of time series data. The multiple ordered levels of hierarchical approximation functions are stored into memory after being generated.
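A simplified sketch of the level-to-level flow, assuming window means as the approximating functions and a fixed window length; both are assumptions, since the method works with a set of varying window lengths.

```python
# Sketch: each level splits its series into windows, fits a simple
# approximating function (here, the window mean) per window, and passes
# those values up as the next level's time series data.

def build_hierarchy(series, window_length, num_levels):
    levels = [list(series)]                   # level 0: the raw time series data
    for _ in range(1, num_levels):
        current = levels[-1]
        if len(current) < window_length:
            break
        approx = [sum(current[i:i + window_length]) / window_length
                  for i in range(0, len(current) - window_length + 1, window_length)]
        levels.append(approx)                 # passed to the next level as data
    return levels
```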