Abstract:
A new scheduling method and policy for shared (server) resources, such as the CPU or disk memory of a multiprogrammed data processor. The scheduling is referred to as Move-To-Rear List Scheduling, and it provides a cumulative service guarantee as well as more traditional guarantees such as fairness (proportional sharing) and bounded delay. In typical operation, a list of processes seeking service from a server is maintained for that server. Processes are admitted to the list only when maximum capacity constraints are not violated and, once on the list, are served in front-to-back order. After receiving service, or upon the occurrence of other events, the position of a process on the list may be changed.
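As a minimal illustration only (the abstract does not specify the service quantum or the other events that reposition a process), the following Python sketch models a capacity-limited list that is served front to back, with a process moved to the rear after it has received service; all names are illustrative.

    from collections import deque

    class MoveToRearScheduler:
        """Toy model: processes are admitted to a list subject to a capacity
        limit, served front to back, and moved to the rear of the list after
        receiving service."""

        def __init__(self, capacity):
            self.capacity = capacity      # maximum number of admitted processes
            self.order = deque()          # service list, front to back

        def admit(self, pid):
            # Admit only while the capacity constraint is not violated.
            if len(self.order) >= self.capacity:
                return False
            self.order.append(pid)
            return True

        def next_to_serve(self):
            # The process at the front of the list is served next.
            return self.order[0] if self.order else None

        def serviced(self, pid):
            # After receiving service, the process moves to the rear.
            self.order.remove(pid)
            self.order.append(pid)

    sched = MoveToRearScheduler(capacity=8)
    for p in ("A", "B", "C"):
        sched.admit(p)
    sched.serviced(sched.next_to_serve())   # "A" moves to the rear
    print(list(sched.order))                # ['B', 'C', 'A']
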
Abstract:
A multimedia on-demand server including a randomly accessible library of multimedia programs (such as movies stored on magnetic or optical disks), a limited amount of RAM to buffer and store selected portions of programs retrieved from the library, and an interface that switchably routes program material from the library and RAM buffers to an audience of viewers. The server employs a restricted retrieval strategy and a novel storage allocation scheme that enable different portions of one or more programs to be continuously retrieved and selectively routed to a large number of on-demand viewers, while at the same time minimizing the amount of RAM required to effect this service. The on-demand server also responds to viewer-generated commands to control the viewing of a program. In a particular embodiment, these commands include video tape player-like operations such as fast-forward, rewind and pause.
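As an illustrative sketch only (the restricted retrieval strategy and the storage allocation scheme themselves are not reproduced here), the Python fragment below shows the basic routing idea of serving a requested program segment from a RAM buffer when an earlier retrieval already holds it, and from the disk library otherwise; the class and parameter names are assumptions.

    class OnDemandServer:
        """Toy routing model: serve a program segment from RAM if buffered,
        otherwise retrieve it from the disk library and buffer it so that
        other viewers can be served from RAM."""

        def __init__(self, library, ram_capacity_segments):
            self.library = library                    # {(program, segment): data}
            self.ram = {}                             # buffered segments
            self.ram_capacity = ram_capacity_segments

        def serve(self, viewer, program, segment):
            key = (program, segment)
            if key in self.ram:
                return viewer, "RAM buffer", self.ram[key]
            data = self.library[key]
            if len(self.ram) < self.ram_capacity:
                self.ram[key] = data                  # keep for later viewers
            return viewer, "disk library", data

    library = {("movie", 0): "<frames 0>", ("movie", 1): "<frames 1>"}
    server = OnDemandServer(library, ram_capacity_segments=1)
    print(server.serve("viewer-1", "movie", 0))   # routed from the disk library
    print(server.serve("viewer-2", "movie", 0))   # same segment now routed from RAM
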
Abstract:
A computer operating system that allows legacy applications to be run automatically with quality of service (QoS) guarantees matching required QoS performance levels. In accordance with the invention, files have QoS requirement attributes that can be set by users. Additionally, users may interpose a requirement broker between a given legacy application and the operating system. The requirement broker may be in the form of a modified version of a library that is dynamically linked with applications at load time. The requirement broker intercepts certain system calls and automatically requests from the system QoS guarantees in accordance with the QoS requirement attributes of the accessed files, whether local or remote.
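A minimal sketch of the interposition idea, using Python in place of a dynamically linked library; the attribute store (QOS_ATTRIBUTES) and the request_qos call are hypothetical stand-ins, not the interfaces of the invention.

    import builtins

    # Hypothetical per-file QoS requirement attributes (e.g. a required rate).
    QOS_ATTRIBUTES = {"/media/clip.mpg": {"rate_mbps": 6.0}}

    def request_qos(path, requirement):
        # Stand-in for requesting a QoS guarantee from the operating system.
        print(f"requesting {requirement} for {path}")

    _original_open = builtins.open

    def broker_open(path, *args, **kwargs):
        """Intercept open(): if the file carries a QoS requirement attribute,
        request the corresponding guarantee automatically, then forward the
        call to the real open() unchanged."""
        requirement = QOS_ATTRIBUTES.get(str(path))
        if requirement is not None:
            request_qos(path, requirement)
        return _original_open(path, *args, **kwargs)

    builtins.open = broker_open        # interpose; legacy code now calls broker_open
    # ... legacy application runs unmodified ...
    builtins.open = _original_open     # restore the original call
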
Abstract:
A system and method for discovering association rules that display regular cyclic variation over time are disclosed. Such association rules may apply over daily, weekly, monthly or other cycles of sales data or the like. A first technique, referred to as the sequential algorithm, treats association rules and cycles relatively independently. Based on the interaction between association rules and time, a new technique called cycle pruning is employed, which reduces the amount of time needed to find cyclic association rules. A second algorithm, the interleaved algorithm, uses cycle pruning and other optimization techniques to discover cyclic association rules with reduced overhead.
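As a hedged illustration of the cyclic idea only (not the sequential or interleaved algorithms themselves), the Python sketch below treats a rule's occurrences as a binary sequence over time units and keeps a cycle (length, offset) only if the rule holds in every time unit of that offset; a single miss prunes all candidate cycles that would require it, which is the spirit of cycle pruning.

    def cycles_of(holds, max_length):
        """Return the (length, offset) pairs such that the rule holds in
        every time unit t with t % length == offset, given a binary
        sequence holds[t] of the rule's occurrences."""
        candidates = {(l, o) for l in range(2, max_length + 1) for o in range(l)}
        for t, h in enumerate(holds):
            if not h:
                # A miss at time t eliminates every candidate cycle that
                # would require the rule to hold at t.
                candidates = {(l, o) for (l, o) in candidates if t % l != o}
        return candidates

    # A rule that holds in every third time unit, starting at offset 1.
    holds = [t % 3 == 1 for t in range(12)]
    print(sorted(cycles_of(holds, max_length=4)))   # [(3, 1)]
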
Abstract:
A continuous media server that provides support for the storage and retrieval of continuous media data at guaranteed rates using one of two fault-tolerant approaches, both of which rely on admission control to meet rate guarantees in the event of a failure of the data storage medium that renders part of the continuous media inaccessible. In the first approach, a declustered parity storage scheme is used to distribute the additional load caused by a disk failure uniformly across the disks, and contingency bandwidth for a certain number of clips is reserved on each disk in order to retrieve the additional blocks. In the second approach, data blocks in a parity group are prefetched, so that in the event of a disk failure only one additional parity block is retrieved for every data block to be reconstructed. While the second approach generates less additional load in the event of a failure, it has higher buffer requirements. For the second approach, parity blocks can either be stored on a separate parity disk or distributed among the disks with contingency bandwidth reserved on each disk.
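As a simplified sketch only (the declustered parity layout itself is not modeled, and all names and figures are illustrative), the following admission check shows how reserving contingency bandwidth on each disk leaves headroom for the additional blocks retrieved after a failure:

    def admit_clip(disk_loads, clip_rate, disk_capacity, contingency_rate):
        """Admit a clip on the least-loaded disk only if its rate fits under
        the disk capacity minus the bandwidth reserved as a contingency for
        failure-time reconstruction; otherwise reject it."""
        usable = disk_capacity - contingency_rate
        disk = min(range(len(disk_loads)), key=lambda d: disk_loads[d])
        if disk_loads[disk] + clip_rate <= usable:
            disk_loads[disk] += clip_rate
            return disk
        return None                    # rejection keeps the rate guarantees safe

    loads = [0.0, 0.0, 0.0, 0.0]       # bandwidth (MB/s) already committed per disk
    print(admit_clip(loads, clip_rate=4.0, disk_capacity=20.0, contingency_rate=4.0))
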
Abstract:
A method and an apparatus are disclosed for providing enhanced pay-per-view in a video server. Specifically, the present invention periodically schedules a group of non-preemptible tasks corresponding to videos in a video server having a predetermined number of processors, wherein each task is defined by a computation time and a period. To schedule the group of tasks, the present invention divides the tasks into two groups according to whether they may be scheduled on less than one processor, and schedules each group separately. For the group of tasks schedulable on less than one processor, the present invention conducts a first determination of schedulability. If the first determination deems the group of tasks not schedulable, then the present invention conducts a second determination of schedulability. If the second determination also deems the group of tasks not schedulable, then the present invention recursively partitions the group of tasks into subsets and re-performs the second determination on each subset. Recursive partitioning continues until the group of tasks is deemed schedulable or is no longer partitionable; in the latter case, the group of tasks is deemed not schedulable.
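Only the recursive control flow is sketched below, with a single assumed stand-in test (accept a group for one processor when all computation times fit within the shortest period) in place of the first and second determinations described in the abstract; tasks and numbers are illustrative.

    def frame_fits(tasks):
        # Stand-in test (not the patent's): accept the group for one processor
        # when all computation times fit within the shortest period.
        return sum(c for c, _ in tasks) <= min(p for _, p in tasks)

    def schedulable(tasks, processors):
        """Test the whole group first; if that fails, partition the group and
        retry each subset on a share of the processors, until a partition
        fits or the group can no longer be partitioned."""
        if frame_fits(tasks):
            return True
        if len(tasks) <= 1 or processors <= 1:
            return False               # no longer partitionable: not schedulable
        mid = len(tasks) // 2
        return (schedulable(tasks[:mid], processors // 2) and
                schedulable(tasks[mid:], processors - processors // 2))

    tasks = [(30, 100), (5, 12), (6, 12)]     # (computation time, period) per video
    print(schedulable(tasks, processors=1))   # False: the whole group does not fit
    print(schedulable(tasks, processors=2))   # True after recursive partitioning
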
Abstract:
Two methods are disclosed for storing multimedia data that reduce the amount of disk I/O required by the system and the cache misses it experiences. The first method determines the future access of each data buffer in a cache memory; once the future of each data buffer is determined, the data buffer with the maximum future is allocated to store new blocks of data. The method approximates an optimal data buffer allocation by calculating the future of a data buffer relative to the clients that will access the data buffers. The second method orders the clients by the increasing distance of each client from the previous client; clients release their buffers in this order into a LIFO queue, and when a buffer is needed to load a new block of data, the buffer at the head of the LIFO queue is selected.
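For the first method, a minimal sketch under the simplifying assumptions that each client reads the clip sequentially and each buffer caches one block; the "future" of a buffer is taken here to be the distance from the nearest client that has not yet passed its block, and the buffer with the maximum future is reused first. Names are illustrative.

    def future(block, client_positions):
        """Distance until some client next needs this block; a block with no
        client behind it will not be referenced again (infinite future)."""
        behind = [block - p for p in client_positions if p <= block]
        return min(behind) if behind else float("inf")

    def buffer_to_reuse(buffers, client_positions):
        # buffers maps buffer id -> block number it currently caches; reuse
        # the buffer whose block lies farthest in the future.
        return max(buffers, key=lambda b: future(buffers[b], client_positions))

    buffers = {0: 12, 1: 40, 2: 75}           # cached block per buffer
    clients = [10, 38, 90]                    # current block position of each client
    print(buffer_to_reuse(buffers, clients))  # 2: its block is needed farthest ahead
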
Abstract:
Buffer space and disk bandwidth resources in a continuous media server are continuously re-allocated in order to optimize the number of continuous media requests which may be concurrently serviced at guaranteed transfer rates using on-demand paging. Disk scheduling is provided to ensure that whenever an admitted request references a page of data, the page is available in a buffer for transfer to a client. Data for continuous media files are stored on disk or held in the buffer so that, provided buffer space is sufficient, disk bandwidth limitations do not constrain the number or combination of requests that can be serviced concurrently. Multiple requests for continuous media files are selectively grouped for servicing so that buffer and disk bandwidth requirements are kept to a minimum and within the available resource capacities.
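Purely as an illustrative sketch of the trade-off (not the grouping or re-allocation scheme of the invention), the fragment below admits a new request either into an existing group, by spending buffer space to bridge the gap to a stream already in progress, or as a new disk stream, by spending disk bandwidth; all names and figures are assumptions.

    def admit(request_file, request_rate, active_streams,
              free_disk_bandwidth, free_buffer, buffer_to_bridge):
        """Group the request with an existing stream of the same file when
        enough buffer space is available, otherwise start a new disk stream
        when enough disk bandwidth is available; reject it otherwise."""
        if request_file in active_streams and free_buffer >= buffer_to_bridge:
            return "grouped with existing stream (uses buffer space)"
        if free_disk_bandwidth >= request_rate:
            return "new disk stream (uses disk bandwidth)"
        return "rejected: neither resource is sufficient"

    active_streams = {"news.mpg"}
    print(admit("news.mpg", request_rate=4.0, active_streams=active_streams,
                free_disk_bandwidth=2.0, free_buffer=64.0, buffer_to_bridge=32.0))
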
Abstract:
A system for the effective resource scheduling of composite multimedia objects involves a sequence packing formulation of the composite object scheduling problem and associated efficient algorithms using techniques from pattern matching and multiprocessor scheduling. An associated method of scheduling the provision of composite multimedia objects, each comprising one or more continuous media streams of audio data, video data and other data of varying bandwidth requirements and durations, comprises the steps of generating composite multimedia objects from the continuous media streams and determining a run-length compressed form for each of the generated composite multimedia objects.
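As one assumed illustration of a run-length compressed form (the pattern matching and packing algorithms themselves are not shown), the sketch below aggregates the bandwidth needed by a composite object's component streams per time slot and collapses equal consecutive slots into (bandwidth, run-length) pairs:

    from itertools import groupby

    def bandwidth_profile(streams, length):
        # Aggregate bandwidth needed in each time slot, given component
        # streams as (start_slot, duration, bandwidth) tuples.
        profile = [0] * length
        for start, duration, bw in streams:
            for t in range(start, start + duration):
                profile[t] += bw
        return profile

    def run_length_compress(profile):
        # Collapse consecutive slots with equal bandwidth into (value, run) pairs.
        return [(value, len(list(run))) for value, run in groupby(profile)]

    # A composite object: video over slots 0-9, audio over slots 0-9,
    # and a short image burst over slots 4-5 (bandwidths in Mb/s).
    streams = [(0, 10, 4), (0, 10, 1), (4, 2, 2)]
    print(run_length_compress(bandwidth_profile(streams, length=10)))
    # [(5, 4), (7, 2), (5, 4)]
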
Abstract:
A method for managing a buffer pool containing a plurality of queues is based on consideration of both (a) when to drop a packet and (b) from which queue the packet should be dropped. According to the method, a packet drop is signaled when the global average queue occupancy exceeds a maximum threshold, and is signaled on a probabilistic basis when the global occupancy is between a minimum threshold and the maximum threshold. Each queue has a particular local threshold value associated with it and is considered to be “offending” when its buffer occupancy exceeds its local threshold. When a packet drop is signaled, one of the offending queues is selected using a hierarchical, unweighted round robin selection scheme which ensures that offending queues are selected in a fair manner. A packet is then dropped from the selected offending queue.
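A minimal sketch of the two decisions, with illustrative thresholds, a single flat round robin in place of the hierarchical, unweighted scheme, and the averaging of queue occupancy omitted:

    import random

    def should_drop(global_avg, min_th, max_th, max_prob=0.1):
        """When to drop: always above the maximum threshold, never below the
        minimum, and with a probability that grows linearly in between."""
        if global_avg >= max_th:
            return True
        if global_avg <= min_th:
            return False
        return random.random() < max_prob * (global_avg - min_th) / (max_th - min_th)

    def pick_offending_queue(occupancy, local_thresholds, last_picked):
        """From which queue to drop: scan round robin, starting after the last
        queue picked, and return the first queue whose occupancy exceeds its
        own local threshold."""
        n = len(occupancy)
        for i in range(1, n + 1):
            q = (last_picked + i) % n
            if occupancy[q] > local_thresholds[q]:
                return q
        return None

    occupancy = [10, 80, 5, 120]           # buffered packets per queue
    local_thresholds = [50, 50, 50, 50]
    if should_drop(global_avg=sum(occupancy) / 4, min_th=40, max_th=50):
        victim = pick_offending_queue(occupancy, local_thresholds, last_picked=0)
        print(f"drop one packet from queue {victim}")   # queue 1, the next offender
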