Abstract:
Technology is disclosed for limiting the number of central data store accesses required when performing a series of steps, such as a workflow. A local data store is coupled between a central data store and a system carrying out a workflow. Alternatively, a Transfer Engine is coupled between the local data store and the central data store to transfer data between them; the Transfer Engine allows the data formats in the central data store and the local data store to be independent of each other. During a workflow step, the system stores attributes related to the workflow in the local data store, updating modified attribute values and creating entries for newly added attributes. The system determines whether any attributes in the central data store need to be updated with attribute information from the local data store, and updates the central data store with local attribute values for new and modified attributes only when necessary, avoiding a central data store update after every workflow step.
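As an illustration of the caching pattern this abstract describes, the following sketch keeps workflow attributes in a local store, tracks which ones were added or modified, and pushes only those to the central store when a sync is actually needed. It is a minimal sketch in Python, not the disclosed implementation; the class and method names (LocalAttributeStore, CentralStore, sync) are illustrative assumptions, and the Transfer Engine's format translation is omitted.

```python
# Minimal sketch (not the patented implementation): a local store that tracks
# new and modified workflow attributes and flushes only those to the central
# store when a sync is actually needed, instead of after every workflow step.

class CentralStore:
    def __init__(self):
        self.data = {}

    def update(self, changes):
        self.data.update(changes)       # one bulk access instead of many


class LocalAttributeStore:
    def __init__(self, central):
        self.central = central          # hypothetical central-store interface
        self.attributes = {}            # attribute name -> value
        self.dirty = set()              # names added or modified locally

    def set(self, name, value):
        """Update or create an attribute during a workflow step."""
        if self.attributes.get(name) != value:
            self.attributes[name] = value
            self.dirty.add(name)        # remember it needs a central update

    def needs_sync(self):
        """Does the central store need updating with local changes?"""
        return bool(self.dirty)

    def sync(self):
        """Push only new/modified attributes, then clear the dirty set."""
        if self.needs_sync():
            self.central.update({n: self.attributes[n] for n in self.dirty})
            self.dirty.clear()


# Usage: several workflow steps touch only the local store; the central store
# is accessed once at the end rather than after each step.
central = CentralStore()
local = LocalAttributeStore(central)
local.set("order_status", "validated")   # step 1
local.set("approver", "j.doe")           # step 2
local.set("order_status", "approved")    # step 3
local.sync()                             # single central-store access
print(central.data)
```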
Abstract:
Techniques are described for a memory device. In various embodiments, a scheduler/controller is configured to manage data as it is read from or written to a memory. The memory is partitioned into a group of sub-blocks, a parity block is associated with the sub-blocks, and the sub-blocks are accessed to read data as needed. A pending write buffer is added to the group of memory sub-blocks. Such a buffer may be sized to be equal to the group of memory sub-blocks. The pending write buffer handles collisions for write accesses to the same block.
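The sketch below illustrates one plausible reading of this arrangement: a memory split into sub-blocks with a parity block maintained as the XOR of the sub-blocks, and a pending write buffer that defers a write when another access already occupies the target sub-block in the same cycle. The block sizes, cycle model, and names are assumptions for illustration, not the claimed design.

```python
# Illustrative sketch only: a toy scheduler over a memory split into sub-blocks,
# with a parity block (XOR across sub-blocks at each offset) and a pending
# write buffer that defers a write when another access already targets the
# same sub-block in the current cycle.

from collections import deque

NUM_SUB_BLOCKS = 4
BLOCK_WORDS = 8

class BankedMemory:
    def __init__(self):
        self.sub_blocks = [[0] * BLOCK_WORDS for _ in range(NUM_SUB_BLOCKS)]
        self.parity = [0] * BLOCK_WORDS           # XOR of all sub-blocks
        self.pending = deque()                    # deferred (block, offset, value)
        self.busy = set()                         # sub-blocks accessed this cycle

    def _locate(self, addr):
        return addr // BLOCK_WORDS, addr % BLOCK_WORDS

    def read(self, addr):
        block, off = self._locate(addr)
        self.busy.add(block)
        return self.sub_blocks[block][off]

    def write(self, addr, value):
        block, off = self._locate(addr)
        if block in self.busy:
            # Collision: another access already uses this sub-block this cycle,
            # so park the write in the pending write buffer instead.
            self.pending.append((block, off, value))
            return
        self._do_write(block, off, value)

    def _do_write(self, block, off, value):
        old = self.sub_blocks[block][off]
        self.sub_blocks[block][off] = value
        self.parity[off] ^= old ^ value           # keep parity consistent

        self.busy.add(block)

    def end_cycle(self):
        """Start a new cycle and retire pending writes whose sub-block is free."""
        self.busy.clear()
        for _ in range(len(self.pending)):
            block, off, value = self.pending.popleft()
            if block in self.busy:
                self.pending.append((block, off, value))
            else:
                self._do_write(block, off, value)


mem = BankedMemory()
mem.write(3, 42)        # lands in sub-block 0
mem.write(5, 7)         # same sub-block -> deferred to pending write buffer
mem.end_cycle()         # deferred write retires in the next cycle
print(mem.read(5))      # 7
```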
Abstract:
A system and method are described for managing an application on a home user equipment, preferably a set-top box of a television. The method includes the steps of: a) dividing the application into at least one separate executable application part, b) determining, for each separate executable application part, whether it is to be executed on the home user equipment or on a computational entity located in the Internet, c) transferring application parts determined for execution on the computational entity according to step b) to the computational entity, d) executing the transferred application parts on the computational entity, e) returning results of the executed application parts to the home user equipment, and f) synchronizing the returned results with the results of the separate application parts executed on the home user equipment.
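A minimal sketch of the described flow follows, assuming a simple CPU-budget policy for the placement decision in step b) and simulating remote execution with a stub; all function names and the offload criterion are illustrative assumptions rather than the described system.

```python
# Minimal sketch, not the described system: split an application into
# independently executable parts, decide per part whether to run it locally
# on the set-top box or on a remote computational entity, and merge the
# results. The remote call is simulated; in practice it would be a network
# call to the Internet-hosted entity (an assumption of this sketch).

def run_locally(part):
    return f"local result of {part['name']}"

def run_remotely(part):
    # Placeholder for transferring the part and executing it remotely.
    return f"remote result of {part['name']}"

def should_offload(part, local_cpu_budget):
    # Simple illustrative policy: offload parts that exceed the local budget.
    return part["estimated_cpu"] > local_cpu_budget

def execute_application(parts, local_cpu_budget=10):
    results = {}
    for part in parts:                                  # step a) app already split
        if should_offload(part, local_cpu_budget):      # step b) decide placement
            results[part["name"]] = run_remotely(part)  # steps c)-e)
        else:
            results[part["name"]] = run_locally(part)
    return results                                      # step f) merged result set

app_parts = [
    {"name": "ui_render", "estimated_cpu": 3},
    {"name": "video_transcode", "estimated_cpu": 50},
]
print(execute_application(app_parts))
```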
Abstract:
The invention relates to a method for executing processes, preferably media processes, on a worker machine of a distributed computing system with a plurality of worker machines, comprising the steps of: a) selecting one of the worker machines out of the plurality of worker machines for execution of a process to be executed in the distributed computing system and transferring said process to the selected worker machine, b) executing the transferred process on the selected worker machine, and c) removing the executed process from the selected worker machine after the execution of the process has finished, wherein statistical information on the resource usage of the process to be executed on one of the worker machines is collected, and the selection of the worker machine is based on a probability resource usage qualifier, wherein the probability resource usage qualifier is extracted from the combined statistical information of the process to be executed and of processes already executed and/or executing on the worker machine. The invention also relates to a system and a use.
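The abstract does not define how the probability resource usage qualifier is computed, so the sketch below assumes one plausible formulation: per-process resource usage is summarized by a mean and standard deviation, the combined usage on a worker is treated as approximately normal, and the qualifier is the probability that the combined usage stays within the worker's capacity. The worker with the highest qualifier is selected. All names and the statistical model are assumptions for illustration.

```python
# Hedged sketch: one possible probability resource usage qualifier, assuming
# per-process CPU usage is summarized by (mean, std) and combined usage on a
# worker is approximately normal.

import math

def normal_cdf(x, mean, std):
    if std == 0:
        return 1.0 if x >= mean else 0.0
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2))))

def usage_qualifier(worker, new_process):
    """Probability that the worker stays within capacity after adding the process."""
    mean = new_process["mean_cpu"] + sum(p["mean_cpu"] for p in worker["running"])
    var = new_process["std_cpu"] ** 2 + sum(p["std_cpu"] ** 2 for p in worker["running"])
    return normal_cdf(worker["capacity"], mean, math.sqrt(var))

def select_worker(workers, new_process):
    """Pick the worker with the highest probability resource usage qualifier."""
    return max(workers, key=lambda w: usage_qualifier(w, new_process))

workers = [
    {"name": "w1", "capacity": 100, "running": [{"mean_cpu": 60, "std_cpu": 10}]},
    {"name": "w2", "capacity": 100, "running": [{"mean_cpu": 30, "std_cpu": 5}]},
]
job = {"mean_cpu": 40, "std_cpu": 8}   # collected statistics for the media process
print(select_worker(workers, job)["name"])   # prints "w2", the less loaded worker
```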
Abstract:
A method for data transmission to a receiving host, the transmitted data being coded for forward error correction, includes providing a pre-defined set Xk of symbols, having k symbols, at the transmitting host. An individual subset Xnh of the pre-defined set Xk, comprising nh symbols, is provided at each receiving host. An encoded symbol is calculated by the transmitting host based on a pre-defined rateless code. The calculated encoded symbol, together with information indicating which symbols of the set Xk it is associated with, is transmitted to each of the receiving hosts. The encoded symbol is decoded by each receiving host using a decoding algorithm based on the pre-defined rateless code. These steps are repeated until each receiving host has retrieved its respective difference set of symbols from the received encoded symbols.
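To make the encode/transmit/decode loop concrete, the following sketch uses a toy random-XOR rateless code: the transmitting host XORs a random subset of the k symbols and sends the result together with the covered indices, and a receiving host that already holds a subset XORs out its known symbols and recovers a symbol whenever exactly one unknown remains. This is a simplified peeling-style decoder for illustration, not the specific pre-defined rateless code of the method.

```python
# Illustrative sketch under assumptions: a toy random-XOR rateless code with
# receiver side information (the receiver's subset X_nh of the sender's X_k).

import random

def encode(symbols, rng):
    """Pick a random non-empty index set and XOR the corresponding symbols."""
    k = len(symbols)
    degree = rng.randint(1, k)
    indices = rng.sample(range(k), degree)
    value = 0
    for i in indices:
        value ^= symbols[i]
    return indices, value

def try_decode(known, indices, value):
    """XOR out known symbols; recover the symbol if exactly one is unknown."""
    unknown = [i for i in indices if i not in known]
    for i in indices:
        if i in known:
            value ^= known[i]
    if len(unknown) == 1:
        known[unknown[0]] = value
        return True
    return False

rng = random.Random(1)
X_k = [rng.randrange(256) for _ in range(8)]          # transmitting host's symbol set
known = {i: X_k[i] for i in (0, 2, 5)}                # receiving host's subset X_nh

sent = 0
while len(known) < len(X_k):                          # repeat until the difference set is retrieved
    indices, value = encode(X_k, rng)                 # encoded symbol + covered indices
    sent += 1
    try_decode(known, indices, value)

print(f"recovered all {len(X_k)} symbols after {sent} encoded symbols")
assert all(known[i] == X_k[i] for i in range(len(X_k)))
```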
Abstract:
Systems and methods for previewing edited video. In general, in one implementation, a method includes generating a video sequence from a plurality of video segments; identifying an inability to output at least one video segment in the video sequence in substantially real time; and adjusting an output level associated with the at least one video segment to enable the at least one video segment to be output in substantially real time. The output level may include a video quality or a frame rate.
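As a concrete reading of the adjustment step, the sketch below assumes each segment carries an estimated per-frame render cost, treats a segment as real-time if its cost times its frame rate fits in a one-second budget, and lowers video quality first and frame rate second until the segment fits. The cost model, scaling factors, and names are illustrative assumptions, not the described method.

```python
# Minimal sketch under stated assumptions: adjust a segment's output level
# (quality, then frame rate) until it can be previewed in substantially real time.

BUDGET_MS_PER_SECOND = 1000.0
QUALITY_COST_SCALE = {"high": 1.0, "medium": 0.6, "low": 0.35}

def is_real_time(segment):
    cost = segment["frame_cost_ms"] * QUALITY_COST_SCALE[segment["quality"]]
    return cost * segment["frame_rate"] <= BUDGET_MS_PER_SECOND

def adjust_output_level(segment):
    """Reduce quality, then frame rate, until the segment can be output live."""
    for quality in ("high", "medium", "low"):
        if QUALITY_COST_SCALE[quality] <= QUALITY_COST_SCALE[segment["quality"]]:
            segment["quality"] = quality            # only ever lower the quality
            if is_real_time(segment):
                return segment
    while segment["frame_rate"] > 10 and not is_real_time(segment):
        segment["frame_rate"] -= 5                  # then drop frame rate in steps
    return segment

sequence = [
    {"name": "title", "frame_cost_ms": 10, "frame_rate": 30, "quality": "high"},
    {"name": "effects", "frame_cost_ms": 60, "frame_rate": 30, "quality": "high"},
]
for seg in sequence:
    if not is_real_time(seg):                       # identify the inability
        adjust_output_level(seg)                    # adjust the output level
print(sequence)
```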