Abstract:
Provided are computer-implemented methods and systems for performing media resource storage and management. The computer-implemented method and system, implemented as a request manager, are capable of monitoring requests for media resources in a content delivery network. For each monitored request, the request manager determines whether to generate a multifile for the requested media resource. For example, the request manager can first determine whether the media resource is eligible for multifile generation. If eligible, the request manager then determines whether the media resource has reached a popularity threshold. If the media resource has reached the popularity threshold, the request manager initiates generation of the multifile for the requested media resource. Generally, the generated multifile is stored in a storage system associated with the content delivery network.
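As a rough illustration of the request-manager flow above, here is a minimal Python sketch assuming a simple per-resource request counter; the names RequestManager, is_eligible, POPULARITY_THRESHOLD, and generate_multifile are illustrative assumptions, not taken from the source.

from collections import defaultdict

POPULARITY_THRESHOLD = 100  # assumed request count before a multifile is built

class RequestManager:
    def __init__(self, storage):
        self.request_counts = defaultdict(int)
        self.storage = storage  # storage system associated with the CDN

    def is_eligible(self, resource):
        # Assumption: only segmented media benefits from being coalesced
        # into a single multifile.
        return resource.get("segmented", False)

    def on_request(self, resource):
        # Called for every monitored request for a media resource.
        self.request_counts[resource["id"]] += 1
        if not self.is_eligible(resource):
            return
        if self.request_counts[resource["id"]] >= POPULARITY_THRESHOLD:
            if resource["id"] not in self.storage:
                self.storage[resource["id"]] = self.generate_multifile(resource)

    def generate_multifile(self, resource):
        # Placeholder: coalesce the resource's chunk names into one blob.
        return b"".join(name.encode() for name in resource.get("chunks", []))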
Abstract:
A method for transcoding a file on a distributed file system is described. The distributed file system stores portions of the file across a plurality of distinct physical storage locations. A request to transcode the file is received. The file is transcoded from a first format to a second format using a processing unit of at least one of the physical storage locations.
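A hedged Python sketch of the idea, under the assumption that the transcode is routed to a storage location that already holds a portion of the file; the StorageNodeStub interface (holds_portion, current_load, transcode) is hypothetical, not taken from the source.

class StorageNodeStub:
    # Stand-in for a physical storage location with its own processing unit.
    def __init__(self, files, load):
        self.files, self.load = files, load

    def holds_portion(self, file_id):
        return file_id in self.files

    def current_load(self):
        return self.load

    def transcode(self, file_id, src, dst):
        return f"{file_id}: {src} -> {dst} transcoded locally"

def handle_transcode_request(file_id, src_format, dst_format, nodes):
    # Route the transcode to a node that already stores part of the file.
    candidates = [n for n in nodes if n.holds_portion(file_id)]
    if not candidates:
        raise LookupError(f"no storage location holds {file_id}")
    node = min(candidates, key=lambda n: n.current_load())  # assumed tie-break by load
    return node.transcode(file_id, src_format, dst_format)

nodes = [StorageNodeStub({"movie.ts"}, load=3), StorageNodeStub({"movie.ts"}, load=1)]
print(handle_transcode_request("movie.ts", "mpeg2", "h264", nodes))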
Abstract:
A method is described for processing data frames (Fi) received over a communication channel, each data frame (Fi) comprising a data section (Si) for forming part of a data table and location information (Mi) associated with the data section and designating a location of said section within the data table. The data table and a metadata table (MT) are built and stored as the data frames are received, to be made available to a host processor (2). The method comprises the following steps for each received data frame:
- buffering (10, 12) the data section of the received frame and the associated location information;
- determining (14) an address for the data section in a table memory (8) based on the location information;
- writing the data section at the determined address into the table memory; and
- writing an entry of the metadata table (MT), wherein said entry comprises the location information.
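The per-frame steps lend themselves to a short sketch; the following Python is illustrative only, with an assumed fixed section size and a simple address = location * SECTION_SIZE formula standing in for the address determination (14).

SECTION_SIZE = 188                              # assumed fixed section size in bytes
table_memory = bytearray(SECTION_SIZE * 256)    # stands in for the table memory (8)
metadata_table = []                             # stands in for the metadata table (MT)

def process_frame(section: bytes, location: int):
    buffered = bytes(section)                   # buffering step (10, 12)
    address = location * SECTION_SIZE           # address determination (14), assumed formula
    table_memory[address:address + len(buffered)] = buffered
    metadata_table.append({"location": location, "address": address})

process_frame(b"\x47" + b"\x00" * 187, location=5)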
Abstract:
A data stream recorder system (11) for multi-stream recording and retrieval utilizes a number of gateways (23, 24, 25), each for sending and receiving packets containing streaming multimedia content data at real-time rates via a packet data network. A session manager communicates via the network with source client devices and receiver client devices to establish and control recording and retrieval sessions. The manager assigns sessions to the gateways for the sending and receiving of the packets to and from client devices. Content is distributed across storage devices associated in storage nodes (27, 28, 29). Each of the gateways (23, 24, 25) receives packets containing content data at real-time rates during a recording session and distributes the received packets from the session across all of the storage nodes (27, 28, 29). A scheduler of each respective storage node distributes the content data from packets received by that node across all of the digital storage devices of the node.
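A small Python sketch of the two-level distribution described above, assuming round-robin scheduling (the abstract only says packets are distributed across all nodes and all devices); all class and method names are illustrative.

from itertools import cycle

class StorageNode:
    def __init__(self, devices):
        self.devices = devices                    # digital storage devices of this node
        self._sched = cycle(range(len(devices)))  # the node's scheduler

    def store(self, packet):
        self.devices[next(self._sched)].append(packet)

class Gateway:
    def __init__(self, nodes):
        self._rr = cycle(nodes)

    def on_packet(self, packet):
        next(self._rr).store(packet)              # spread session packets over all nodes

nodes = [StorageNode([[], [], []]) for _ in range(3)]   # e.g. nodes 27, 28, 29
gateway = Gateway(nodes)
for seq in range(12):
    gateway.on_packet({"seq": seq, "payload": b"..."})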
Abstract:
The invention relates to a memory system, in particular for network broadcasting applications such as video/audio applications. The system comprises at least one memory (PM, PM1 to PMi), which is subdivided into several addressable memory units (M1 to Mx), each with its own output (A1 to Ax) for exchanging data. Each input (I1 to Ix) of a matrix switch is connected to a respective output (A1 to Ax) of a different memory unit. The matrix switch is operated in such a way that several of the memory units (M1 to Mx) are connected in sequential order to its outputs (OP1 to OPy), whereby a first sequence of memory units and a second sequence of memory units are connected independently to its outputs. The invention thus provides a memory system which can service a number of requests to the same server in a deferred manner, whereby the interaction of the individual memory units (M1 to Mx) and the matrix switch (MS) enables higher data throughput and short access times.
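As a rough software analogue of the matrix switch behaviour, the following Python sketch lets two outputs step through independent sequences of memory units; the scheduling policy shown is an assumption for illustration, not the invention's mechanism.

memory_units = [f"block-{i}" for i in range(8)]   # M1..Mx, each with its own output

def serve(start):
    # Yield memory-unit outputs in sequential order, starting at `start`.
    for i in range(len(memory_units)):
        yield memory_units[(start + i) % len(memory_units)]

op1 = list(serve(0))   # first sequence of memory units routed to output OP1
op2 = list(serve(3))   # second, independent sequence routed to output OP2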
Abstract:
Requests are received for retrieving data from and storing data to a plurality of storage devices (100). A processor (300) is designated for handling each request, based, e.g., upon the load of each processor. A request for retrieving data is forwarded directly from the designated processor to the storage device via a switch (250). Responses from the storage devices are routed directly to the designated processor via the switch (250). The switch (250) independently routes the request for retrieving data and the responses between the storage devices (100) and the processor, based on information obtained by the processor. Data provided by a designated processor is stored on the storage devices (100) via the switch (250). The switch (250) independently routes the data to be stored directly from the designated processor to the storage devices (100), based on information created by the processor. Requests and responses are exchanged between the switch (250) and the storage devices (100) via at least one high-speed network channel.
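A minimal Python sketch of the designation step only, assuming least-loaded selection (the abstract says the choice may be based, e.g., upon processor load); the data structures are illustrative.

def designate_processor(processors):
    # Choose the least-loaded processor to handle an incoming request.
    return min(processors, key=lambda p: p["load"])

processors = [{"id": 0, "load": 4}, {"id": 1, "load": 1}, {"id": 2, "load": 7}]
chosen = designate_processor(processors)
routing_info = {"processor": chosen["id"], "device": 100}   # handed to the switch (250)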
Abstract:
A system and method are described for stream caching content that is being provided to one or more users over a network. The streaming content may be split into a plurality of file sections (F1, F30), which may then be further split into a number of subfiles (S1, S4). These subfiles may then be streamed to a plurality of users over a network. In one embodiment, file sections may be cached on a common caching server and accessed by subsequent users without having to wait until the entire requested file has been streamed to a preceding user.
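A hedged Python sketch of the splitting and caching scheme, with assumed section and subfile sizes; the split_stream helper and the cache dictionary are illustrative, not part of the source.

SECTION_SIZE = 1 << 20      # 1 MiB per file section (assumed)
SUBFILES_PER_SECTION = 4    # e.g. S1..S4 in the abstract

def split_stream(data: bytes):
    # Return a list of file sections, each section a list of subfiles.
    sections = []
    for off in range(0, len(data), SECTION_SIZE):
        section = data[off:off + SECTION_SIZE]
        step = max(1, -(-len(section) // SUBFILES_PER_SECTION))   # ceiling division
        sections.append([section[i:i + step] for i in range(0, len(section), step)])
    return sections

cache = {}   # common caching server: section index -> list of subfiles
for idx, subfiles in enumerate(split_stream(b"\x00" * (3 * SECTION_SIZE + 500))):
    cache.setdefault(idx, subfiles)   # later users can read finished sections from here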
Abstract:
Multi-media servers (10) provide clients (12) with streaming data requiring soft real-time guarantees and static data requiring a large amount of storage space. The servers (10) use a pull-mode protocol to communicate with clients through a real-time network. Separate data and control channels enhance the soft real-time capability of the server. The data channel (23) conforms to an open standard protocol. A switched data link layer for the control channel (25) permits separate intrahost control messages that may be multicast and broadcast. The distributed file system (26) selects a specific data block size, based upon the compression technique employed, to enhance the soft real-time guarantee. A hierarchical data structure combined with merging of empty data blocks minimizes disk fragmentation. Data blocks are striped across multiple disks to improve disk utilization. A local buffer and a queue for both read and write requests provide support for simultaneous read and write data streams.
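Two of the ideas above, codec-dependent block sizing and striping across disks, can be sketched briefly in Python; the block-size table and disk count are illustrative assumptions.

BLOCK_SIZE_BY_CODEC = {     # assumed mapping; the source only says the choice depends
    "mpeg2": 256 * 1024,    # on the compression technique employed
    "mpeg4": 128 * 1024,
}

def stripe(data: bytes, codec: str, disks):
    # Split data into codec-sized blocks and stripe them round-robin over the disks.
    block = BLOCK_SIZE_BY_CODEC.get(codec, 64 * 1024)
    for i, off in enumerate(range(0, len(data), block)):
        disks[i % len(disks)].append(data[off:off + block])

disks = [[] for _ in range(4)]
stripe(b"\x00" * (1024 * 1024), "mpeg2", disks)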
Abstract:
Techniques are described herein to dynamically control a number of hybrid automatic repeat request (HARQ) transmissions based at least in part on flow characteristics of a data flow. Data included in a transport block may be grouped into one or more data flows based on a variety of factors. A set of performance benchmarks may be associated with each data flow. Flow characteristics for each data flow may be measured. A network entity may determine a number of HARQ transmissions to be transmitted during a HARQ procedure based on the measured flow characteristics satisfying the performance benchmarks. For example, if a performance benchmark is not satisfied by its associated flow characteristic, the network entity may request additional HARQ transmissions during the HARQ procedure.
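A minimal Python sketch of the per-flow decision, assuming one extra HARQ transmission per missed benchmark up to a cap; the benchmark names, thresholds, and step size are illustrative assumptions, not from the source.

BASE_HARQ_TX = 1
MAX_HARQ_TX = 4

def harq_transmissions(benchmarks: dict, measured: dict) -> int:
    # Return how many HARQ transmissions to use for one data flow.
    tx = BASE_HARQ_TX
    for name, target in benchmarks.items():
        # Assumption: a benchmark is satisfied when the measured value does not
        # exceed the target (e.g. latency or block error rate).
        if measured.get(name, float("inf")) > target:
            tx += 1                     # request an additional transmission
    return min(tx, MAX_HARQ_TX)

flow_benchmarks = {"latency_ms": 10.0, "bler": 0.01}
flow_measured = {"latency_ms": 12.5, "bler": 0.008}
print(harq_transmissions(flow_benchmarks, flow_measured))   # -> 2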