Abstract:
A cache management scheme is disclosed for buffering one or more continuous media files being simultaneously accessed from a continuous media server by a plurality of media clients. The continuous media server stores, in a cache or buffer, pages of data that are likely to be accessed. The continuous media server implements a cache management strategy that exploits the sequential page access patterns of continuous media data in order to determine the buffer pages to be replaced from the cache. The cache management strategy initially identifies unpinned pages as potential victims for replacement. Each unpinned page is evaluated by the continuous media server and assigned a weight. Generally, the assigned weight ensures that a buffer page with a larger weight will be accessed by a client later in time than a buffer page with a smaller weight. A page associated with a larger weight will be accessed later and is therefore replaced earlier. A current buffer list is preferably allocated to monitor the buffer pages associated with a given continuous media file. The current buffer list is a data structure pointing to a set of buffer pages in the cache buffer containing the currently buffered pages of the associated continuous media file. Each buffer page in the buffer cache is preferably represented by a buffer header. The current buffer list (CBL) data structure preferably stores, among other things, a pointer to the buffer pages associated with the CBL, identifier information for the CBL and the related continuous media file, and information regarding the number of buffered pages associated with the CBL and the number of clients currently accessing the associated continuous media file. The buffer header is a data structure containing information describing the state of the corresponding page. The buffer header preferably includes, among other things, a pointer to the actual area of the buffer cache storing a page of data, a number of pointers that create various relationships among the pages in a CBL, and a fix count indicating the number of clients currently accessing the corresponding page of the continuous media file.
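As an illustration of the data structures described above, the following Python sketch models a CBL and its buffer headers and selects a replacement victim by weight. The field names (page_no, fix_count), the weight function based on each client's current page position, and the choose_victim helper are assumptions made for the example; the abstract does not fix these details.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BufferHeader:
    page_no: int                  # page of the continuous media file held here
    data: Optional[bytes] = None  # pointer to the actual buffer-cache area
    fix_count: int = 0            # number of clients currently accessing the page
    next: Optional["BufferHeader"] = None  # link to the next page in the CBL

@dataclass
class CurrentBufferList:
    file_id: int                                         # identifier of the continuous media file
    clients: List[int] = field(default_factory=list)     # current client positions (page numbers)
    pages: List[BufferHeader] = field(default_factory=list)

    def weight(self, header: BufferHeader) -> float:
        """Larger weight => the page will be accessed later by any client."""
        ahead = [header.page_no - pos for pos in self.clients if pos <= header.page_no]
        return min(ahead) if ahead else float("inf")

def choose_victim(cbls: List[CurrentBufferList]) -> Optional[BufferHeader]:
    """Pick the unpinned page with the largest weight for replacement."""
    best, best_w = None, -1.0
    for cbl in cbls:
        for h in cbl.pages:
            if h.fix_count == 0:           # only unpinned pages are candidates
                w = cbl.weight(h)
                if w > best_w:
                    best, best_w = h, w
    return best

Under this model, a page that no client will read again has infinite weight and is therefore the first candidate for eviction.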
Abstract:
Buffer space and disk bandwidth resources in a continuous media server are continuously re-allocated in order to maximize the number of continuous media requests that can be concurrently serviced at guaranteed transfer rates using on-demand paging. Disk scheduling is provided to ensure that whenever an admitted request references a page of data, the page is available in a buffer for transfer to a client. Continuous media data files are either stored on disk or held in the buffer so as to eliminate the disk bandwidth limitations associated with concurrently servicing any number or combination of requests, provided the buffer space is sufficient. Multiple requests for continuous media data files are selectively grouped for servicing so that buffer and disk bandwidth requirements are kept to a minimum and within the available resource capacities.
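A minimal sketch of the request-grouping idea, assuming a simplified cost model in which each request carries a fixed buffer requirement and a fixed disk-bandwidth requirement. The Request fields and the greedy form_service_groups helper are illustrative assumptions, not the patented formulation.

from dataclasses import dataclass
from typing import List

@dataclass
class Request:
    client_id: int
    buffer_pages: int   # buffer needed while the request is serviced
    disk_bw: float      # disk bandwidth needed if served from disk

def form_service_groups(requests: List[Request],
                        buffer_capacity: int,
                        bw_capacity: float) -> List[List[Request]]:
    """Greedily pack requests into groups whose combined buffer and
    disk-bandwidth requirements fit the available resources."""
    groups: List[List[Request]] = []
    cur: List[Request] = []
    buf_used, bw_used = 0, 0.0
    for r in requests:
        if buf_used + r.buffer_pages <= buffer_capacity and bw_used + r.disk_bw <= bw_capacity:
            cur.append(r)
            buf_used += r.buffer_pages
            bw_used += r.disk_bw
        else:
            if cur:
                groups.append(cur)
            cur, buf_used, bw_used = [r], r.buffer_pages, r.disk_bw
    if cur:
        groups.append(cur)
    return groups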
Abstract:
A multimedia on-demand server including a randomly accessible library of multimedia programs (such as movies stored on magnetic or optical disks), a limited amount of RAM to buffer and store selected portions of programs retrieved from the library, and an interface that switchably routes program material from the library and the RAM buffers to an audience of viewers. The server employs a restricted retrieval strategy and a novel storage allocation scheme that enable different portions of one or more programs to be continuously retrieved and selectively routed to a large number of on-demand viewers, while minimizing the amount of RAM required to provide this service. The on-demand server also responds to viewer-generated commands to control the viewing of a program. In a particular embodiment, these commands include video-tape-player-like operations such as fast-forward, rewind and pause.
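The sketch below illustrates only the generic routing idea: a viewer's next program portion is served from RAM if it is already buffered, and otherwise retrieved from the library and buffered. The OnDemandServer class and its ram_limit parameter are assumptions made for the example; the actual restricted retrieval strategy and storage allocation scheme are not detailed in the abstract and are not reproduced here.

from typing import Dict, Tuple

class OnDemandServer:
    def __init__(self, library: Dict[Tuple[str, int], bytes], ram_limit: int):
        self.library = library          # (program, portion) -> data held in the library
        self.ram: Dict[Tuple[str, int], bytes] = {}
        self.ram_limit = ram_limit      # maximum number of buffered portions

    def route(self, program: str, portion: int) -> bytes:
        """Switchably route a portion from RAM if buffered, else from the library."""
        key = (program, portion)
        if key in self.ram:
            return self.ram[key]        # serve the viewer from the RAM buffer
        data = self.library[key]        # retrieve the portion from the library
        if len(self.ram) < self.ram_limit:
            self.ram[key] = data        # buffer it for later viewers
        return data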
Abstract:
A method and apparatus are disclosed for providing enhanced pay-per-view in a video server. Specifically, the present invention periodically schedules a group of non-pre-emptible tasks corresponding to videos in a video server having a predetermined number of processors, wherein each task begins at predetermined periods and has a set of sub-tasks separated by predetermined intervals. To schedule the group of tasks, the present invention divides the tasks into two groups according to whether they may be scheduled on a single processor. The present invention schedules each group separately. For the group of tasks not schedulable on a single processor, the present invention determines the number of processors required to schedule the group and schedules its tasks to start at a predetermined time. For the group of tasks schedulable on a single processor, the present invention determines whether the tasks are schedulable on the available processors using an array of time slots. If the present invention determines that this group of tasks is not schedulable on the available processors, it recursively partitions the group of tasks into subsets and re-performs the second determination of schedulability. Recursive partitioning continues until the group of tasks is deemed schedulable or is no longer partitionable. In the latter case, the group of tasks is deemed not schedulable.
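The time-slot check for the group of tasks schedulable on a single processor might be sketched as follows. The slot granularity, the hyperperiod horizon, and the greedy search over start offsets are assumptions made for illustration; the abstract does not specify them.

from functools import reduce
from math import gcd
from typing import List, Optional

def lcm(values: List[int]) -> int:
    return reduce(lambda a, b: a * b // gcd(a, b), values, 1)

def schedulable_on_one_processor(periods: List[int],
                                 intervals: List[int],
                                 subtasks: List[int]) -> Optional[List[int]]:
    """Try to assign a start slot to every task so that no two sub-tasks
    ever occupy the same time slot; return the start offsets or None."""
    horizon = lcm(periods)
    slots = [False] * horizon                 # array of time slots over one hyperperiod
    starts: List[int] = []
    for period, interval, count in zip(periods, intervals, subtasks):
        placed = False
        for offset in range(period):
            # slots needed by every instance of this task within the hyperperiod
            needed = [(offset + base + k * interval) % horizon
                      for base in range(0, horizon, period)
                      for k in range(count)]
            if all(not slots[s] for s in needed):
                for s in needed:
                    slots[s] = True
                starts.append(offset)
                placed = True
                break
        if not placed:
            return None                       # not schedulable on this processor
    return starts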
Abstract:
A continuous media server that provides support for the storage and retrieval of continuous media data at guaranteed rates, using one of two fault-tolerant approaches that rely on admission control in order to meet rate guarantees in the event of a failure of the data storage medium that renders part of the continuous media inaccessible. In the first approach, a declustered parity storage scheme is used to distribute the additional load caused by a disk failure uniformly across the disks. Contingency bandwidth for a certain number of clips is reserved on each disk in order to retrieve the additional blocks. In the second approach, data blocks in a parity group are prefetched, and thus in the event of a disk failure only one additional parity block is retrieved for every data block to be reconstructed. While the second approach generates less additional load in the event of a failure, it has higher buffer requirements. For the second approach, parity blocks can either be stored on a separate parity disk or distributed among the disks with contingency bandwidth reserved on each disk.
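The following sketch illustrates the first approach under simplifying assumptions: parity groups are laid out round-robin (an assumed placement rule), and the extra reads needed to reconstruct a failed disk's blocks are counted per surviving disk to show that the additional load spreads roughly evenly.

from typing import List

def decluster_parity_groups(num_disks: int, num_groups: int,
                            group_size: int) -> List[List[int]]:
    """Assign each parity group of group_size blocks to a rotating subset
    of disks so the groups touching any one disk spread over the others."""
    layout = []
    for g in range(num_groups):
        start = (g * group_size) % num_disks
        layout.append([(start + i) % num_disks for i in range(group_size)])
    return layout

def extra_reads_after_failure(layout: List[List[int]], failed: int) -> List[int]:
    """Count, per surviving disk, the additional block reads needed to
    reconstruct the failed disk's blocks from their parity groups."""
    num_disks = max(max(group) for group in layout) + 1
    extra = [0] * num_disks
    for group in layout:
        if failed in group:
            for d in group:
                if d != failed:
                    extra[d] += 1
    return extra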
Abstract:
A method and an apparatus are disclosed for providing enhanced pay-per-view in a video server. Specifically, the present invention periodically schedules a group of non-pre-emptible tasks corresponding to videos in a video server having a predetermined number of processors, wherein each task is defined by a computation time and a period. To schedule the group of tasks, the present invention divides the tasks into two groups according to whether they may be scheduled on less than one processor. The present invention schedules each group separately. For the group of tasks schedulable on less than one processor, the present invention conducts a first determination of schedulability. If the first determination deems the group of tasks not schedulable, then the present invention conducts a second determination of schedulability. If the second determination also deems the group of tasks not schedulable, then the present invention recursively partitions the group of tasks into subsets and re-performs the second determination of schedulability. Recursive partitioning continues until the group of tasks is deemed schedulable or is no longer partitionable. In the latter case, the group of tasks is deemed not schedulable.
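As a hedged illustration, a determination of schedulability for tasks defined by a computation time and a period could be approximated by a utilization test, with recursive partitioning splitting a failing group until every subset passes or a subset can no longer be partitioned. Both the utilization test and the halving strategy below are stand-ins chosen for the sketch, not the patent's actual tests.

from typing import List, Optional, Tuple

Task = Tuple[float, float]   # (computation_time, period)

def utilization_test(tasks: List[Task]) -> bool:
    """A simple schedulability check: total utilization at most one processor."""
    return sum(c / p for c, p in tasks) <= 1.0

def recursively_partition(tasks: List[Task]) -> Optional[List[List[Task]]]:
    """Split the group into subsets until every subset passes the test;
    returns the subsets, or None once a subset is no longer partitionable."""
    if utilization_test(tasks):
        return [tasks]
    if len(tasks) <= 1:
        return None                      # no longer partitionable: not schedulable
    mid = len(tasks) // 2
    left = recursively_partition(tasks[:mid])
    right = recursively_partition(tasks[mid:])
    if left is None or right is None:
        return None
    return left + right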
Abstract:
Two methods are disclosed for storing multimedia data that reduce the amount of disk I/O required by the system and the number of cache misses it experiences. The first method determines the future access of each data buffer in a cache memory. Once the future of each data buffer is determined, the data buffer with the maximum future is allocated to store new blocks of data. The method approximates an optimal method of data buffer allocation by calculating the future of a data buffer relative to the clients that will access the data buffers. The second method orders the clients by the increasing distance of each client from the previous client; the clients release their buffers in this order into a LIFO queue, and when a buffer is selected to load a new block of data, the buffer at the head of the LIFO queue is chosen.
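A minimal sketch of the first method, assuming clients are modelled as forward-moving block positions: the "future" of a buffer is the distance until the nearest client reaches its block, and the buffer with the maximum future is reused. The function names and the distance-based future are assumptions for illustration; the second method would instead push released buffers onto a LIFO stack in order of increasing inter-client distance and reuse from the top.

from typing import Dict, List, Optional

def buffer_future(block_no: int, client_positions: List[int]) -> float:
    """Distance until the nearest client reaches this block; infinite if
    every client has already moved past it."""
    distances = [block_no - pos for pos in client_positions if pos <= block_no]
    return min(distances) if distances else float("inf")

def pick_buffer_with_max_future(buffers: Dict[int, int],
                                client_positions: List[int]) -> Optional[int]:
    """buffers maps buffer_id -> block number currently cached.
    Returns the buffer_id to reuse for a new block of data."""
    if not buffers:
        return None
    return max(buffers, key=lambda b: buffer_future(buffers[b], client_positions))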
Abstract:
A method for retrieving video data that has been striped across a plurality of disks using a coarse-grained striping technique. Specifically, and in accordance with an illustrative embodiment of the present invention, the method comprises scheduling the retrieval of a video in response to an incoming request and based on the availability of bandwidth on the disks, and then rescheduling the retrieval of that video to occur at an earlier time, the rescheduling being based on a change (i.e., an increase) in the availability of bandwidth on the disks which results from the retrieval of another video being completed. The scheduling and rescheduling may, for example, comprise assigning a disk to the video, in which case the method further comprises advancing the disk assigned to the video as each round occurs and beginning the retrieval of the video when the disk assigned to it is the disk on which the data for that video begins.
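The round-by-round behaviour might be sketched as follows, with bandwidth bookkeeping reduced to a per-disk stream count. The PendingVideo fields, the rotation rule, and the max_streams_per_disk parameter are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class PendingVideo:
    video_id: int
    start_disk: int      # disk on which the video's data begins
    assigned_disk: int   # disk currently assigned while the request waits

def advance_round(pending: List[PendingVideo],
                  active_per_disk: List[int],
                  max_streams_per_disk: int) -> List[int]:
    """Advance one round; return the ids of videos whose retrieval begins."""
    started: List[int] = []
    num_disks = len(active_per_disk)
    for v in pending:
        v.assigned_disk = (v.assigned_disk + 1) % num_disks   # advance each round
        if (v.assigned_disk == v.start_disk
                and active_per_disk[v.start_disk] < max_streams_per_disk):
            active_per_disk[v.start_disk] += 1                # bandwidth now available
            started.append(v.video_id)
    for v in list(pending):
        if v.video_id in started:
            pending.remove(v)
    return started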
Abstract:
Retrieval of both continuous and non-continuous media data is performed concurrently for multiple requests, where servicing of continuous media data requests at varying rate requirements is guaranteed within a common retrieval period. The common period is selected with respect to the available buffer space and the total disk retrieval time required for servicing the multiple requests. Servicing of requests is re-commenced immediately after all admitted requests have been serviced, regardless of whether the common period has elapsed. High throughput is obtained and transfer rates for a large number of real-time requests are guaranteed by reducing seek latency and eliminating rotational latency, so that the buffering requirements for requests are reduced. Disk scheduling techniques are applied for disks having transfer rates that vary from one track to another. Non-continuous media requests are serviced concurrently with continuous media requests by reserving a certain portion of the disk bandwidth for non-continuous media requests in order to provide low response times. Changing workloads are accommodated by dynamically varying the buffer space allocated to continuous media and non-continuous media requests and by using the time allocated to, but not utilized by, continuous media requests for servicing non-continuous media requests. The disk scheduling techniques are also applicable to disks having varying track sizes and to clips stored non-contiguously on tracks.
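A simplified sketch of selecting the common retrieval period: a candidate period is feasible if the data retrieved per period fits the buffer (double-buffered here, an assumption) and the total retrieval time, including a per-request seek overhead, fits within the period. The feasibility condition and the linear search are illustrative, not the patented admission test.

from typing import List

def period_is_feasible(rates: List[float], period: float,
                       buffer_bytes: float, disk_rate: float,
                       seek_overhead: float) -> bool:
    """rates: required playback rates (bytes/s) of the admitted requests."""
    per_request_bytes = [r * period for r in rates]      # data needed per period
    buffer_needed = 2 * sum(per_request_bytes)           # double-buffering assumption
    retrieval_time = sum(b / disk_rate + seek_overhead for b in per_request_bytes)
    return buffer_needed <= buffer_bytes and retrieval_time <= period

def smallest_feasible_period(rates: List[float], buffer_bytes: float,
                             disk_rate: float, seek_overhead: float,
                             step: float = 0.1, max_period: float = 60.0) -> float:
    """Linearly search for the smallest feasible common period; -1 if none."""
    period = step
    while period <= max_period:
        if period_is_feasible(rates, period, buffer_bytes, disk_rate, seek_overhead):
            return period
        period += step
    return -1.0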
Abstract:
A method for retrieving video data from a video server, the video data having been stored on a plurality of disks based on a disk striping technique. In accordance with one illustrative embodiment, the method comprises the steps of retrieving a predetermined number of bits from the plurality of disks in the video server, and storing that predetermined number of bits in a buffer memory, wherein the number of bits retrieved and stored is based on the number of disks and on the capacity of the buffer memory. These steps, which together may illustratively constitute one round of the video retrieval process, may be repeated until the entire video has been retrieved and, for example, transmitted to the intended recipient(s) at a required transmission rate.
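A minimal sketch of the retrieval rounds described above, assuming one block is fetched from each disk per round and that no more than half the buffer is filled so the previously retrieved data can still be transmitted; the bits_per_round formula is an assumption made for illustration.

from typing import List

def bits_per_round(num_disks: int, bits_per_block: int,
                   buffer_capacity_bits: int) -> int:
    """Bits retrieved per round, bounded by the disks and the buffer."""
    return min(num_disks * bits_per_block, buffer_capacity_bits // 2)

def retrieve_video(total_bits: int, num_disks: int, bits_per_block: int,
                   buffer_capacity_bits: int) -> List[int]:
    """Return the number of bits retrieved in each successive round."""
    chunk = bits_per_round(num_disks, bits_per_block, buffer_capacity_bits)
    if chunk <= 0:
        raise ValueError("buffer too small to hold even one round")
    rounds, remaining = [], total_bits
    while remaining > 0:
        rounds.append(min(chunk, remaining))
        remaining -= rounds[-1]
    return rounds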