Abstract:
A method for handing off to a second server, in either a fixed or mobile streaming media system, a multiple description streaming session between a first server and either a fixed or mobile client. In one embodiment, the present invention recites selecting a second server to receive a handoff of a multiple description streaming media session between the first server and the client. In this embodiment, the multiple description streaming media session comprises a first multiple description bitstream and a second multiple description bitstream. The present embodiment further recites receiving, at the second server, the second multiple description bitstream for streaming to the client. This embodiment further recites sending the second multiple description bitstream from the second server to the client.
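As a rough, non-authoritative sketch of this embodiment, the Python below uses hypothetical names (`Server`, `select_second_server`, `handoff_second_description`) and a made-up least-loaded selection policy to show the first server retaining the first description while a selected second server receives and streams the complementary second description to the same client.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Server:
    name: str
    load: float                       # fraction of capacity in use (hypothetical metric)
    streams: List[str] = field(default_factory=list)

    def stream(self, bitstream: str, client: str) -> None:
        # Send the given multiple description (MD) bitstream to the client.
        self.streams.append(bitstream)
        print(f"{self.name} -> {client}: streaming {bitstream}")

def select_second_server(candidates: List[Server]) -> Server:
    # Pick the least-loaded candidate as the handoff target (one possible policy).
    return min(candidates, key=lambda s: s.load)

def handoff_second_description(first: Server, candidates: List[Server],
                               client: str, md1: str, md2: str) -> Server:
    # The first server keeps streaming description 1; the selected second
    # server receives description 2 and streams it to the same client.
    second = select_second_server(candidates)
    first.stream(md1, client)         # session continues from the first server
    second.stream(md2, client)        # handed-off description from the second server
    return second

if __name__ == "__main__":
    s1 = Server("server-A", load=0.7)
    pool = [Server("server-B", load=0.3), Server("server-C", load=0.5)]
    handoff_second_description(s1, pool, "client-1", "MD-bitstream-1", "MD-bitstream-2")
```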
Abstract:
A method for performing a soft-handoff in a mobile streaming media system and a method for performing a hard-handoff in a mobile streaming media system are disclosed. In the soft-handoff embodiment, the present invention detects that a channel quality between a mobile client and a first base station remains above a drop threshold and that a channel quality between the mobile client and a second base station increases from below to above an add threshold. The present embodiment then sends a first multiple description bitstream from the first base station to the mobile client and sends a complementary second multiple description bitstream from the second base station to the mobile client. This method thereby provides improved utilization of wireless bandwidth during soft-handoffs, in contrast to conventional systems where the same bitstream is transmitted from each base station. In the case of both soft-handoffs and hard-handoffs, when a mobile client enters a cell whose base station has no free capacity, the base station may prevent call dropping by reducing the number of descriptions being served to the existing clients, thereby providing capacity (at least one description) for the new client. These methods provide improved utilization of wireless bandwidth during soft-handoffs and a reduced probability of service disruption during both soft-handoffs and hard-handoffs.
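A minimal sketch of the soft-handoff decision and the capacity-shedding idea described above, assuming signal-strength channel-quality measurements in dBm and hypothetical add/drop threshold values; all names and constants are illustrative, not taken from the patent.

```python
ADD_THRESHOLD = -85.0    # dBm, hypothetical add threshold
DROP_THRESHOLD = -95.0   # dBm, hypothetical drop threshold

def assign_descriptions(q_first: float, q_second: float) -> dict:
    """Decide which base station streams which MD description to the client.

    q_first / q_second are channel-quality measurements between the mobile
    client and the first / second base station.
    """
    if q_first >= DROP_THRESHOLD and q_second >= ADD_THRESHOLD:
        # Soft-handoff region: complementary descriptions from both stations.
        return {"first_bs": ["MD-1"], "second_bs": ["MD-2"]}
    if q_first >= DROP_THRESHOLD:
        return {"first_bs": ["MD-1", "MD-2"]}     # single-station service
    return {"second_bs": ["MD-1", "MD-2"]}        # client has left the first cell

def admit_new_client(clients: dict, max_descriptions: int) -> bool:
    """If the cell is full, shed one description per existing client until at
    least one description's worth of capacity is free for the new client."""
    used = sum(len(descs) for descs in clients.values())
    for descs in clients.values():
        if used < max_descriptions:
            break
        if len(descs) > 1:
            descs.pop()              # existing client drops to fewer descriptions
            used -= 1
    return used < max_descriptions   # False => new client still cannot be served
```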
Abstract:
A method for assigning servers to provide multiple description bitstreams to a mobile client (in a mobile client environment) or to a fixed client (in a fixed client environment). In one embodiment, the present invention, upon receiving a request from a mobile client to have media data streamed thereto, analyzes a plurality of servers to determine a first candidate server for providing a first multiple description bitstream to a base station along a first path. The present method also determines a second candidate server for providing a second multiple description bitstream to the base station along a second path. The present method then sends a request to the first candidate server to provide the first multiple description bitstream to the mobile client through the base station along the first path, and also sends a request to the second candidate server to provide the second multiple description bitstream to the mobile client through the same base station along the second path. In another embodiment, there are two separate paths from two separate servers to two separate base stations, and from each base station there is a separate path to the mobile client. In still another embodiment, there are two paths from a single server to two separate base stations, and from each base station there is a separate path to the mobile client. In one fixed client embodiment, the present invention is able to assign a plurality of servers to provide a plurality of multiple description (MD) bitstreams to the fixed client.
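The following sketch illustrates, under assumed names (`path_cost`, `choose_candidate_servers`, `request_md_streams`) and a simplistic lowest-cost ranking, how two candidate servers might be chosen and asked to deliver complementary MD bitstreams to the same base station; it is not the claimed selection algorithm.

```python
from typing import Iterable, List, Tuple

def path_cost(server: str, base_station: str, topology: dict) -> float:
    # Hypothetical cost (e.g. hop count or measured latency) of the path
    # from a server to a base station.
    return topology.get((server, base_station), float("inf"))

def choose_candidate_servers(servers: Iterable[str], base_station: str,
                             topology: dict) -> Tuple[str, str]:
    """Analyze the server pool and pick two candidates, one per description,
    preferring the two lowest-cost paths to the base station."""
    ranked = sorted(servers, key=lambda s: path_cost(s, base_station, topology))
    if len(ranked) < 2:
        raise ValueError("need at least two candidate servers")
    return ranked[0], ranked[1]

def request_md_streams(servers: List[str], base_station: str, client: str,
                       topology: dict) -> None:
    first, second = choose_candidate_servers(servers, base_station, topology)
    # In a real system these would be control-plane messages to the servers;
    # here they are just illustrative prints.
    print(f"request: {first} -> {base_station} -> {client} : MD bitstream 1")
    print(f"request: {second} -> {base_station} -> {client} : MD bitstream 2")

if __name__ == "__main__":
    topo = {("srv-1", "bs-A"): 3, ("srv-2", "bs-A"): 5, ("srv-3", "bs-A"): 2}
    request_md_streams(["srv-1", "srv-2", "srv-3"], "bs-A", "mobile-1", topo)
```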
Abstract:
A computer system is provided including a processor, a persistent storage device, and a main memory connected to the processor and the persistent storage device. The main memory includes an operating system and a compressed cache for storing, in compressed form, data retrieved from the persistent storage device. The operating system includes a plurality of interconnected software modules for accessing the persistent storage device and a filter driver interconnected between two of the plurality of software modules for managing the memory capacity of the compressed cache and of a buffer cache.
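As an illustration only, the sketch below uses Python's `zlib` to stand in for whatever compression the system actually employs, and a deliberately naive eviction rule, to show the read-path role such a filter driver could play: cache hits are decompressed from main memory, misses go to the persistent store and are inserted in compressed form.

```python
import zlib

class CompressedCacheFilter:
    """Sits between two storage-stack modules: reads first check an in-memory
    compressed cache and only fall back to the persistent store on a miss."""

    def __init__(self, backing_store, capacity_bytes: int):
        self.backing_store = backing_store     # dict-like persistent store (assumed)
        self.capacity_bytes = capacity_bytes
        self.cache = {}                        # block_id -> compressed bytes
        self.used = 0

    def read(self, block_id):
        if block_id in self.cache:             # hit: decompress and return
            return zlib.decompress(self.cache[block_id])
        data = self.backing_store[block_id]    # miss: go to persistent storage
        self._insert(block_id, data)
        return data

    def _insert(self, block_id, data):
        compressed = zlib.compress(data)
        # Simplistic capacity management: evict arbitrary entries until it fits.
        while self.used + len(compressed) > self.capacity_bytes and self.cache:
            _, evicted = self.cache.popitem()
            self.used -= len(evicted)
        if self.used + len(compressed) <= self.capacity_bytes:
            self.cache[block_id] = compressed
            self.used += len(compressed)

if __name__ == "__main__":
    disk = {"block-0": b"hello world" * 100}
    f = CompressedCacheFilter(disk, capacity_bytes=4096)
    assert f.read("block-0") == disk["block-0"]   # miss, then cached compressed
    assert f.read("block-0") == disk["block-0"]   # served from the compressed cache
```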
Abstract:
A scheduler for a grid computing system includes a node information repository and a node scheduler. The node information repository is operative at a node of the grid computing system and stores node information associated with resource utilization of the node. The node scheduler is also operative at the node and is configured to determine whether to accept jobs assigned to the node. Further, the node scheduler includes an input job queue for accepted jobs, wherein each accepted job is launched at a time determined by the node scheduler using the node information.
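A minimal sketch of the node-side pieces described above, with hypothetical class names and a single CPU-load figure standing in for the node information; the accept/launch rule is an assumption chosen only to make the idea concrete.

```python
from collections import deque

class NodeInformationRepository:
    """Stores node-local resource utilization (here just a CPU-load number)."""
    def __init__(self):
        self.cpu_load = 0.0              # 0.0 .. 1.0, updated by some local monitor

class NodeScheduler:
    def __init__(self, info: NodeInformationRepository, accept_threshold: float = 0.8):
        self.info = info
        self.accept_threshold = accept_threshold
        self.input_queue = deque()       # input job queue for accepted jobs

    def offer(self, job) -> bool:
        # Accept the job only if local utilization leaves room for it.
        if self.info.cpu_load < self.accept_threshold:
            self.input_queue.append(job)
            return True
        return False                     # rejected; the grid scheduler tries elsewhere

    def run_once(self) -> None:
        # Launch a queued job when the node information says the node is idle enough.
        if self.input_queue and self.info.cpu_load < self.accept_threshold:
            job = self.input_queue.popleft()
            job()                        # launch at a time chosen by the node scheduler

if __name__ == "__main__":
    repo = NodeInformationRepository()
    sched = NodeScheduler(repo)
    sched.offer(lambda: print("job 1 launched"))
    sched.run_once()
```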
Abstract:
A method and system for resource allocation. Specifically, in one embodiment, a method begins by receiving a request for an interactive session from a user. The request also comprises a resource requirement profile. Then, the method continues by selecting a computing resource having an affinity to the user. The computing resource is selected from a plurality of computing resources that are available to the user. The selected computing resource is implemented to support the interactive session. Thereafter, the selected computing resource is assigned to the user for use in the interactive session.
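The sketch below illustrates one way (under assumed names such as `select_with_affinity` and an invented affinity table) that a resource with affinity to the requesting user could be preferred over other available resources; the resource-requirement profile is carried but not evaluated in this simplified version.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Resource:
    name: str
    assigned_to: Optional[str] = None

def select_with_affinity(user: str, available: List[Resource],
                         affinity: Dict[str, List[str]]) -> Resource:
    """Prefer a resource the user has used before (affinity), e.g. because the
    user's data or environment is already staged there; otherwise take any
    available resource."""
    preferred = set(affinity.get(user, []))
    for r in available:
        if r.assigned_to is None and r.name in preferred:
            return r
    for r in available:
        if r.assigned_to is None:
            return r
    raise RuntimeError("no resource available")

def start_interactive_session(user: str, profile: dict,
                              available: List[Resource],
                              affinity: Dict[str, List[str]]) -> Resource:
    # `profile` carries the resource-requirement profile from the request;
    # it is not evaluated in this minimal sketch.
    resource = select_with_affinity(user, available, affinity)
    resource.assigned_to = user          # assigned for the duration of the session
    return resource
```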
Abstract:
A system and method for controlling access in an interactive grid environment is disclosed. Embodiments of the present invention include a method for controlling remote desktop access provided by an interactive grid computing system, comprising determining user policies based on a classification of the user and providing a dynamic user account to the user, wherein the dynamic user account is customized based on the user policies to limit access to resources accessible through a remote desktop.
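As a hedged illustration, the sketch below uses an invented policy table keyed by user classification to show how a dynamic account could be customized to limit which applications and quotas are reachable through the remote desktop; none of the names or policy values come from the patent.

```python
# Hypothetical policy table keyed by user classification.
POLICIES = {
    "guest":    {"home_quota_mb": 100,  "allowed_apps": ["viewer"]},
    "employee": {"home_quota_mb": 1000, "allowed_apps": ["viewer", "editor"]},
    "admin":    {"home_quota_mb": 5000, "allowed_apps": ["viewer", "editor", "console"]},
}

def create_dynamic_account(user: str, classification: str) -> dict:
    """Build a temporary account customized by the user's classification; the
    remote desktop only exposes the applications and quota the policy allows."""
    policy = POLICIES.get(classification, POLICIES["guest"])
    return {
        "login": f"dyn-{user}",
        "classification": classification,
        "home_quota_mb": policy["home_quota_mb"],
        "allowed_apps": list(policy["allowed_apps"]),
    }

def can_launch(account: dict, app: str) -> bool:
    # Access control at the remote desktop: only policy-listed apps may start.
    return app in account["allowed_apps"]

if __name__ == "__main__":
    acct = create_dynamic_account("alice", "employee")
    print(can_launch(acct, "editor"))    # True
    print(can_launch(acct, "console"))   # False: not permitted for this classification
```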
Abstract:
A method and system for adaptively prefetching objects from a network is disclosed. The invention includes adaptively tuning a prefetch engine to prefetch a plurality of objects from within the network. Because the prefetch engine is adaptively tuned, the prefetch process is optimized, thereby reducing the number of idle cycles that would otherwise be required to retrieve objects from the network. The method and system include monitoring at least one proxy server within the network, the at least one proxy server comprising a prefetch engine, and adaptively tuning the prefetch engine to prefetch a plurality of objects from within the network.
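A small, assumption-laden sketch of what "adaptively tuning a prefetch engine" could look like: a prefetch depth that grows when prefetched objects are actually used and the monitored proxy has idle cycles to spare, and shrinks when prefetches are wasted. The tuning rule and thresholds are illustrative only.

```python
class PrefetchEngine:
    def __init__(self, depth: int = 2):
        self.depth = depth               # how many candidate objects to prefetch ahead

    def prefetch(self, candidates):
        return candidates[: self.depth]  # fetch only the first `depth` candidates

def tune(engine: PrefetchEngine, hit_rate: float, idle_fraction: float) -> None:
    """Adaptive tuning rule (illustrative): prefetch more aggressively when
    prefetched objects are being used and the proxy has idle cycles, and back
    off when prefetches are mostly wasted."""
    if hit_rate > 0.6 and idle_fraction > 0.3:
        engine.depth = min(engine.depth + 1, 16)
    elif hit_rate < 0.2:
        engine.depth = max(engine.depth - 1, 1)

if __name__ == "__main__":
    engine = PrefetchEngine()
    tune(engine, hit_rate=0.7, idle_fraction=0.5)   # figures observed at the proxy
    print(engine.depth)                             # depth grows from 2 to 3
```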
Abstract:
A special-purpose appliance (SPA) works in conjunction with a server farm consisting of multiple caching server appliances (CSAs) to supervise a local storage medium (i.e., a shared cache) that is accessible by all the CSAs for storing at least some of the remote objects, such as web pages and their embedded objects and/or streaming media objects, that have been and/or will be served by one or more of the CSAs to their respective clients. The SPA preferably also determines when to prefetch remote objects, such as web pages and their embedded objects and/or streaming media objects, that are not currently stored in the shared cache but which the SPA has determined are likely to be requested in the future by one or more of the CSAs on behalf of one or more of the CSAs' respective clients. In that regard, the SPA (and/or PSA) does not merely monitor the file requests from each CSA to the remote servers; rather, it monitors and aggregates the individual requests from each client to its respective CSA, for example by monitoring the access logs of each CSA, and uses that data to decide what to prefetch into the shared cache from the remote server or servers, what is still of value and needs to be updated, and what is no longer of value and can be replaced. What it prefetches can be based, for example, on links present in an already requested web page, on patterns of recent accesses to web pages and streaming media objects, on user profiles, and on past trends.
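The sketch below illustrates the aggregation idea with hypothetical functions (`aggregate_client_requests`, `plan_shared_cache`): per-client requests gathered from each CSA's access logs are combined into farm-wide popularity counts, which then drive what to prefetch, what to refresh, and what to replace in the shared cache. The popularity-based policy is an assumption, not the SPA's actual decision logic.

```python
from collections import Counter
from typing import Dict, Iterable, List, Set

def aggregate_client_requests(csa_access_logs: Dict[str, Iterable[str]]) -> Counter:
    """Aggregate the per-client requests seen by every CSA (taken here from
    their access logs) into one popularity count across the whole server farm."""
    counts = Counter()
    for _csa, urls in csa_access_logs.items():
        counts.update(urls)
    return counts

def plan_shared_cache(counts: Counter, cached: Set[str],
                      capacity: int) -> Dict[str, List[str]]:
    """Decide what to prefetch into the shared cache, which cached objects are
    still of value (refresh), and which can be replaced."""
    wanted = [url for url, _ in counts.most_common(capacity)]
    return {
        "prefetch": [u for u in wanted if u not in cached],
        "refresh":  [u for u in wanted if u in cached],
        "replace":  [u for u in cached if u not in wanted],
    }

if __name__ == "__main__":
    logs = {"csa-1": ["/a", "/b", "/a"], "csa-2": ["/a", "/c"]}
    plan = plan_shared_cache(aggregate_client_requests(logs),
                             cached={"/b", "/d"}, capacity=2)
    print(plan)   # prefetch /a, refresh /b, replace /d
```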
Abstract:
In brief, the invention provides a method and system for admission control in a grid computing environment. When a user request for a global session is received from a submission node, applications to be launched through the global session are identified and resource requirements are determined. An execution node is then allocated, and the global session is established between the execution node and the submission node. The user then requests an application session through the established global session, and the application session is established with the execution node.
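A minimal sketch of this admission-control flow under assumed names and a made-up CPU-count requirement model: the applications behind a global-session request are costed, an execution node with enough free capacity is allocated, and application sessions are then only granted through the established global session.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ExecutionNode:
    name: str
    free_cpus: int

@dataclass
class GlobalSession:
    user: str
    node: ExecutionNode
    applications: List[str]
    app_sessions: List[str] = field(default_factory=list)

def admit_global_session(user: str, requested_apps: List[str],
                         requirements: Dict[str, int],
                         nodes: List[ExecutionNode]) -> GlobalSession:
    """Admission control: identify the applications to be launched, add up
    their (hypothetical) CPU requirements, and allocate an execution node that
    can hold them before establishing the global session."""
    needed = sum(requirements.get(app, 1) for app in requested_apps)
    for node in nodes:
        if node.free_cpus >= needed:
            node.free_cpus -= needed
            return GlobalSession(user=user, node=node, applications=requested_apps)
    raise RuntimeError("request rejected: no execution node satisfies the requirements")

def open_application_session(session: GlobalSession, app: str) -> str:
    # Application sessions are only granted through an established global session.
    if app not in session.applications:
        raise PermissionError(f"{app} was not admitted with this global session")
    handle = f"{session.node.name}:{app}:{len(session.app_sessions)}"
    session.app_sessions.append(handle)
    return handle
```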