Abstract:
A system, method, and computer program are provided for a control server in a client/server environment in which an API framework facilitates scalable, network-transparent, integrated multimedia content loading and data streaming. Concurrent real-time content loading and data streaming are supported, and techniques are included for admitting new streams only when they can be serviced without negatively affecting current system performance.
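The admission test can be illustrated with a minimal sketch. Assuming the control server tracks aggregate network and disk load per stream (the class names and capacity figures below are invented for illustration, not the patented API), a new stream is admitted only if its requirements fit within the remaining headroom:

# Minimal sketch of stream admission control: a new stream is admitted only
# when remaining network and disk throughput can cover its requirements.
# All names and capacity figures are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Stream:
    name: str
    bitrate_mbps: float          # network bandwidth the stream needs
    disk_mbps: float             # disk read throughput the stream needs

@dataclass
class ControlServer:
    net_capacity_mbps: float
    disk_capacity_mbps: float
    active: List[Stream] = field(default_factory=list)

    def can_admit(self, s: Stream) -> bool:
        used_net = sum(x.bitrate_mbps for x in self.active)
        used_disk = sum(x.disk_mbps for x in self.active)
        return (used_net + s.bitrate_mbps <= self.net_capacity_mbps and
                used_disk + s.disk_mbps <= self.disk_capacity_mbps)

    def admit(self, s: Stream) -> bool:
        if self.can_admit(s):
            self.active.append(s)
            return True
        return False             # reject rather than degrade current streams

if __name__ == "__main__":
    server = ControlServer(net_capacity_mbps=100.0, disk_capacity_mbps=80.0)
    print(server.admit(Stream("movie-1", 40.0, 30.0)))   # True
    print(server.admit(Stream("movie-2", 40.0, 30.0)))   # True
    print(server.admit(Stream("movie-3", 40.0, 30.0)))   # False: would overload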
Abstract:
A method of delivering data on time across a communications environment, such as multimedia data in a network or broadcast environment. The data is transmitted from a data pump at a revised transmission time, which is a function of a base transmission time and a delay value. The delay value is calibrated by monitoring one or more processes between the data pump and an associated controller that receives requests from clients. The controller may include an application server, which handles the requests, and a control server, which processes commands from the application server and provides corresponding control functions to the data pump.
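A minimal sketch of the timing relationship, assuming the delay value is a moving average of latencies observed while monitoring the processes between the controller and the data pump; the names and the subtraction convention are illustrative assumptions, since the abstract only states that the revised time is a function of the base time and the delay:

# Minimal sketch: the revised transmission time adjusts the base time by a
# delay value calibrated from observed controller/data-pump latencies.

from collections import deque

class DelayCalibrator:
    """Tracks recent monitored latencies (seconds) between controller and pump."""
    def __init__(self, window: int = 16):
        self.samples = deque(maxlen=window)

    def record(self, observed_latency: float) -> None:
        self.samples.append(observed_latency)

    def delay_value(self) -> float:
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

def revised_transmission_time(base_time: float, calibrator: DelayCalibrator) -> float:
    """Revised time = base transmission time adjusted by the calibrated delay."""
    return base_time - calibrator.delay_value()   # send early so data arrives on time

if __name__ == "__main__":
    cal = DelayCalibrator()
    for latency in (0.040, 0.035, 0.045):          # monitored process delays
        cal.record(latency)
    print(revised_transmission_time(base_time=10.0, calibrator=cal))  # ~9.96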
Abstract:
A method and implementing computer system are provided, including a multimedia server connected in a network configuration with client computer systems. The multimedia server includes various functional units which are selectively operable for delivering and effecting the presentation of multimedia files to the client, such that a plurality of multimedia files are seamlessly concatenated on the fly to enable a continuous and uninterrupted presentation to the client. In one example, client-selected video files are seamlessly joined together at the server just prior to file delivery from the server. The methodology includes analog-to-digital encoding of multimedia segments followed by commonization processing to ensure that all of the multimedia segments have common operating characteristics. A seamless sequential playlist, or a dynamically created playlist, is assembled from the selected and commonized segments, and the resources needed to deliver and play the playlist are reserved in advance to assure resource availability for continuous transmission and execution of the playlist. At a predetermined point prior to the end point of each selected multimedia segment, the next selected segment is initialized and aligned in memory in preparation for a seamless switch to the next segment at the end of the previous segment, thereby providing a seamless flow of data and a continuous presentation of a plurality of selected multimedia files to a client system.
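A minimal sketch of the switch-over logic, assuming a fixed preload threshold and segments that have already been commonized; the class names, threshold, and print statements are illustrative stand-ins for the server's actual delivery machinery:

# Minimal sketch of seamless playlist playback: once playback reaches a
# predetermined point before the current segment's end, the next segment is
# initialized so the switch at the segment boundary introduces no gap.

from dataclasses import dataclass
from typing import List, Optional

PRELOAD_THRESHOLD = 2.0   # seconds before a segment ends (assumed value)

@dataclass
class Segment:
    name: str
    duration: float       # seconds; assumed already commonized (same bitrate, etc.)

class SeamlessPlayer:
    def __init__(self, playlist: List[Segment]):
        self.playlist = playlist
        self.index = 0
        self.next_ready: Optional[Segment] = None

    def tick(self, position: float) -> None:
        """Called periodically with the play position inside the current segment."""
        current = self.playlist[self.index]
        remaining = current.duration - position
        if remaining <= PRELOAD_THRESHOLD and self.next_ready is None:
            self._preload_next()
        if remaining <= 0:
            self._switch()

    def _preload_next(self) -> None:
        if self.index + 1 < len(self.playlist):
            self.next_ready = self.playlist[self.index + 1]
            print(f"preloaded {self.next_ready.name}")

    def _switch(self) -> None:
        if self.next_ready is not None:
            self.index += 1
            print(f"seamless switch to {self.next_ready.name}")
            self.next_ready = None

if __name__ == "__main__":
    player = SeamlessPlayer([Segment("intro", 5.0), Segment("feature", 10.0)])
    for pos in (1.0, 3.5, 5.0):      # positions within "intro"
        player.tick(pos)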
Abstract:
A method and implementing apparatus are provided for transferring data from a first device to a second device through a system coupling call. The coupling methodology effects a direct coupling between a data-producing device and a data-receiving device such that data transfers between devices are passed more directly, with only minimal copying of the data during the transfer process. The coupling subsystem enables the construction of coupling modules which provide services, afford the opportunity to optimize the transfer of data between two devices, and permit the dynamic construction of coupling modules to provide a coupling service between any pair of devices. In one example, video calls have been created to interface within the new data coupling environment.
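A minimal sketch of a coupling module, assuming the producing device can expose its buffer as a memory view that the receiving device works on directly; the producer, consumer, and coupling names are illustrative, not the patent's interfaces:

# Minimal sketch: the coupling hands the consumer a view of the producer's
# buffer instead of copying it, so the transfer involves minimal data copying.

class FileProducer:
    def __init__(self, data: bytes):
        self._buffer = bytearray(data)

    def expose(self) -> memoryview:
        return memoryview(self._buffer)    # zero-copy view of producer memory

class NetworkConsumer:
    def consume(self, view: memoryview) -> int:
        # Works directly on the producer's buffer; no intermediate copy is made.
        print(f"sending {len(view)} bytes")
        return len(view)

class CouplingModule:
    """Dynamically constructed binding between one producer and one consumer."""
    def __init__(self, producer, consumer):
        self.producer = producer
        self.consumer = consumer

    def transfer(self) -> int:
        return self.consumer.consume(self.producer.expose())

if __name__ == "__main__":
    coupling = CouplingModule(FileProducer(b"frame-data" * 1024), NetworkConsumer())
    coupling.transfer()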
Abstract:
In order to increase the number of datastreams provided by a multimedia system, a cluster of clusters of multimedia A/V server subsystems is provided. Each cluster in turn comprises a plurality of A/V servers; a plurality of data storage devices in a shared-loop architecture, interconnected to the A/V servers so that any storage device is substantially equally accessible by any of the servers in the cluster; and a highly available control server subsystem interconnected to the A/V servers and the data storage devices for controlling them. Each of the clusters is interconnected to a high-speed switch for delivery of datastreams from the cluster to the end user. One of the control server subsystems also serves as a master control server, assigning a request for a datastream to one of the clusters.
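A minimal sketch of the master control server's assignment step, assuming it picks the least-loaded cluster that still has headroom; the cluster names, capacities, and load metric are illustrative assumptions:

# Minimal sketch: the master control server assigns each datastream request
# to one of the clusters, here the least-loaded cluster with spare capacity.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Cluster:
    name: str
    capacity: int                 # maximum concurrent datastreams (assumed metric)
    active: int = 0

    def has_headroom(self) -> bool:
        return self.active < self.capacity

@dataclass
class MasterControlServer:
    clusters: Dict[str, Cluster] = field(default_factory=dict)

    def assign(self, title: str) -> str:
        candidates = [c for c in self.clusters.values() if c.has_headroom()]
        if not candidates:
            raise RuntimeError("no cluster can service the request")
        chosen = min(candidates, key=lambda c: c.active / c.capacity)
        chosen.active += 1
        return f"{title} -> {chosen.name}"

if __name__ == "__main__":
    master = MasterControlServer({
        "cluster-a": Cluster("cluster-a", capacity=2),
        "cluster-b": Cluster("cluster-b", capacity=2),
    })
    for title in ("movie-1", "movie-2", "movie-3"):
        print(master.assign(title))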
Abstract:
The present invention provides for authenticating code and/or data and providing a protected environment for execution. The present invention provides for dynamically partitioning and un-partitioning a local store for the authentication of code or data. The local store is partitioned into isolated and non-isolated sections. Code or data is loaded into the isolated section, where it is authenticated. After authentication, the code is executed. After execution, the memory within the isolated section of the attached processor unit is erased, and the attached processor unit un-partitions the isolated section within the local store.
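A minimal sketch of the partition / load / authenticate / execute / erase / un-partition sequence, using a hash comparison as a stand-in for the authentication step; all names are illustrative, and in practice the isolation is enforced by the hardware rather than by a Python object:

# Minimal sketch of the isolated local-store life cycle described above.

import hashlib

class LocalStore:
    def __init__(self, size: int):
        self.memory = bytearray(size)
        self.isolated_from = None          # start offset of the isolated section

    def partition(self, offset: int) -> None:
        self.isolated_from = offset        # [offset:] now inaccessible externally

    def load_isolated(self, code: bytes) -> None:
        self.memory[self.isolated_from:self.isolated_from + len(code)] = code

    def erase_isolated(self) -> None:
        for i in range(self.isolated_from, len(self.memory)):
            self.memory[i] = 0

    def un_partition(self) -> None:
        self.isolated_from = None

def run_authenticated(store: LocalStore, code: bytes, expected_digest: str) -> bool:
    store.partition(offset=1024)
    store.load_isolated(code)
    ok = hashlib.sha256(code).hexdigest() == expected_digest   # authenticate
    if ok:
        print("authenticated: executing code in isolated section")
    store.erase_isolated()                 # wipe before releasing the section
    store.un_partition()
    return ok

if __name__ == "__main__":
    blob = b"\x90" * 64
    print(run_authenticated(LocalStore(4096), blob, hashlib.sha256(blob).hexdigest()))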
Abstract:
A system and method for speculative assistance to a thread in a heterogeneous processing environment are provided. A first set of instructions suitable for speculative execution is identified in a source code representation (e.g., a source code file). The identified set of instructions is analyzed to determine its processing requirements. Based on the analysis, a processor type is identified that will be used to execute the identified first set of instructions. The processor type is selected from the multiple processor types included in the heterogeneous processing environment, which comprises multiple heterogeneous processing cores on a single silicon substrate. The various processing cores can utilize different instruction set architectures (ISAs). An object code representation is then generated for the identified first set of instructions, with the object code adapted to execute on the determined processor type.
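A minimal sketch of the processor-type selection, assuming a simple heuristic that routes vector-heavy regions to a SIMD-style core and branch-heavy regions to the general-purpose core; the core names, instruction categories, and threshold are illustrative assumptions, not the claimed analysis:

# Minimal sketch: analyze a speculative code region, choose a processor type,
# and (conceptually) generate object code for that type's ISA.

from dataclasses import dataclass
from typing import List

@dataclass
class Region:
    name: str
    vector_ops: int
    branchy_ops: int

def choose_processor(region: Region) -> str:
    total = region.vector_ops + region.branchy_ops
    if total and region.vector_ops / total > 0.6:
        return "SPU"     # SIMD-style core with its own ISA (assumed name)
    return "PPU"         # general-purpose core (assumed name)

def compile_speculative(regions: List[Region]) -> None:
    for r in regions:
        target = choose_processor(r)
        # A real compiler would emit object code for the target ISA here.
        print(f"region {r.name}: generate object code for {target}")

if __name__ == "__main__":
    compile_speculative([
        Region("dct_loop", vector_ops=900, branchy_ops=100),
        Region("parser", vector_ops=50, branchy_ops=400),
    ])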
Abstract:
A computer system's multiple processors are managed as devices. The operating system accesses the multiple processors using processor device modules loaded into the operating system to facilitate communication between an application requesting access to a processor and the processor. Device-like access is provided for each of the processors, similar to the device-like access used for other devices in the system, such as disk drives and printers. An application seeking access to a processor issues device-oriented instructions for processing data and, in addition, provides the processor with the data to be processed. The processor processes the data according to the instructions provided by the application.
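A minimal sketch of device-like access to a processor, assuming an open / write-instructions / write-data / read-result interface modeled on ordinary device I/O; the module name, the /dev/spu0 path, and the methods are hypothetical illustrations, not an actual operating system API:

# Minimal sketch: an application talks to a processor through a device module
# the way it would talk to a disk or printer driver.

class ProcessorDeviceModule:
    """Stand-in for a processor device module loaded into the operating system."""
    def __init__(self, name: str):
        self.name = name
        self._program = None
        self._data = None

    def open(self) -> "ProcessorDeviceModule":
        print(f"opened {self.name}")
        return self

    def write_instructions(self, program) -> None:
        self._program = program            # device-oriented instruction submission

    def write_data(self, data) -> None:
        self._data = data                  # application supplies the data to process

    def read_result(self):
        return self._program(self._data)   # processor runs the provided program

if __name__ == "__main__":
    dev = ProcessorDeviceModule("/dev/spu0").open()   # hypothetical device path
    dev.write_instructions(lambda xs: [x * x for x in xs])
    dev.write_data([1, 2, 3, 4])
    print(dev.read_result())               # [1, 4, 9, 16]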
Abstract:
Managing a computer system's multiple processors as devices. The operating system accesses the multiple processors using processor device modules loaded into the operating system to facilitate communication between an application requesting access to a processor and the processor. Device-like access is provided for each of the processors, similar to the device-like access used for other devices in the system, such as disk drives and printers. An application seeking access to a processor issues device-oriented instructions for processing data and, in addition, provides the processor with the data to be processed. The processor processes the data according to the instructions provided by the application.
Abstract:
A system and method are provided to dedicate one or more processors in a multiprocessing system to performing encryption functions. When the system initializes, one of the synergistic processing unit (SPU) processors is configured to run in a secure mode in which the local memory included with the dedicated SPU is not shared with the other processors. One or more encryption keys are stored in the local memory during initialization. During initialization, the SPUs receive nonvolatile data, such as the encryption keys, from nonvolatile register space. This information is made available to the SPU during initialization, before the SPU's local storage might be mapped to a common memory map. In one embodiment, the mapping is performed by another processing unit (PU) that maps the shared SPUs' local storage to a common memory map.
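A minimal sketch of the dedicated encryption SPU, assuming the key lives only in the secure SPU's local store and that store is simply skipped when the common memory map is built; the XOR cipher and all names are illustrative stand-ins for real hardware behavior:

# Minimal sketch: one processor is dedicated to encryption, its key held in
# local memory that is never added to the shared memory map.

class SecureSPU:
    def __init__(self, key: bytes):
        self._key = key                    # lives only in this SPU's local store
        self._secure_mode = True           # local store excluded from shared mapping

    def encrypt(self, plaintext: bytes) -> bytes:
        # Toy XOR cipher standing in for a real algorithm run inside the SPU.
        return bytes(b ^ self._key[i % len(self._key)]
                     for i, b in enumerate(plaintext))

class SharedMemoryMap:
    """Other SPUs' local stores get mapped here; the secure SPU's does not."""
    def __init__(self):
        self.regions = {}

    def map_spu(self, name: str, spu) -> None:
        if getattr(spu, "_secure_mode", False):
            print(f"{name}: secure mode, local store not mapped")
        else:
            self.regions[name] = spu

if __name__ == "__main__":
    crypto_spu = SecureSPU(key=b"\x5a\xa5")       # key loaded during initialization
    SharedMemoryMap().map_spu("spu0", crypto_spu)
    print(crypto_spu.encrypt(b"hello").hex())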