Abstract:
The present invention provides a compilation system for compiling and linking an integrated executable adapted to execute on a heterogeneous parallel processor architecture. The compiler and linker compile different segments of the source code for a first and a second processor architecture, and generate appropriate stub functions directed at loading code and data onto remote nodes so as to cause them to perform, on that data, the operations described by the transmitted code. The compiler and linker generate stub objects to represent remote execution capability, and these stub objects encapsulate the transfers necessary to execute code in such an environment.
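A minimal Python sketch of the stub-object idea, using hypothetical names (`RemoteNode`, `StubFunction`, `load_code`, `load_data`) that are not taken from the patent: the stub looks like a local call but encapsulates the code and data transfers needed to execute a compiled segment on a remote node of the second architecture.

```python
# Hypothetical sketch of a stub object that encapsulates the transfers
# needed to run a code segment, compiled for a second architecture, on a
# remote node. All names are illustrative, not the patented interfaces.

class RemoteNode:
    """Stand-in for a remote node that accepts transmitted code and data."""
    def __init__(self, name):
        self.name, self.code, self.data = name, {}, {}

    def load_code(self, symbol, binary):
        self.code[symbol] = binary          # transmit the compiled segment

    def load_data(self, key, payload):
        self.data[key] = payload            # transmit the operands

    def run(self, symbol, key):
        # A real system would trigger execution on the remote architecture;
        # here we only record that the operation took place.
        return f"{self.name}: executed {symbol} on {key}"


class StubFunction:
    """Generated stub: looks like a local call, performs remote execution."""
    def __init__(self, node, symbol, binary):
        self.node, self.symbol = node, symbol
        node.load_code(symbol, binary)      # code shipped once at link/load time

    def __call__(self, key, payload):
        self.node.load_data(key, payload)   # ship the data for this invocation
        return self.node.run(self.symbol, key)


node = RemoteNode("accel0")
kernel = StubFunction(node, "vec_add", b"\x90\x90")   # placeholder binary
print(kernel("input0", [1, 2, 3]))
```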
Abstract:
Processes are automatically allocated to processors in a processor array, and corresponding communications resources are assigned at compile time, using information provided by the programmer. The processing tasks in the array are therefore allocated in such a way that the resources required to communicate data between the different processors are guaranteed.
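An illustrative sketch, not the patented allocator: each process carries a programmer-supplied bandwidth requirement, and a process is only placed on a processor whose link still has that much capacity left to reserve, so the communication resources are guaranteed at allocation time. The `allocate` function and its inputs are assumptions for illustration.

```python
# Illustrative compile-time allocation: reserve link capacity as processes
# are placed, refusing the allocation if a requirement cannot be guaranteed.

def allocate(processes, link_capacity):
    """processes: dict name -> required bandwidth (programmer-provided).
    link_capacity: dict processor -> available link bandwidth."""
    remaining = dict(link_capacity)
    placement = {}
    for proc, need in sorted(processes.items(), key=lambda kv: -kv[1]):
        # pick the processor with the most spare capacity; fail loudly if
        # even that cannot guarantee the requested bandwidth
        best = max(remaining, key=remaining.get)
        if remaining[best] < need:
            raise ValueError(f"cannot guarantee bandwidth for {proc}")
        placement[proc] = best
        remaining[best] -= need             # reserve the resource up front
    return placement

print(allocate({"fft": 40, "mix": 25, "log": 5},
               {"P0": 50, "P1": 50}))
```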
Abstract:
Processing of a non-sequential instruction stream on a processor with multiple compute units is accelerated by broadcasting to a plurality of compute units a generic instruction stream derived from a sequence of instructions. The generic instruction stream includes an index section and a compute section; the index section is applied to localized data stored in each compute unit to select one of a plurality of stored local parameter sets, and each compute unit then applies the selected set of parameters to its local data according to the compute section to produce that compute unit's localized solution to the generic instruction.
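A hedged Python sketch of the broadcast model, with illustrative names (`ComputeUnit`, `index_section`, `compute_section`): every unit receives the same generic instruction, uses the index section on its local data to select one of its locally stored parameter sets, and applies the compute section with the selected parameters to produce its own localized result.

```python
# Toy model of broadcasting one generic instruction to several compute units.

class ComputeUnit:
    def __init__(self, local_data, parameter_sets):
        self.local_data = local_data              # data private to this unit
        self.parameter_sets = parameter_sets      # locally stored parameter sets

    def execute(self, index_section, compute_section):
        # index section picks a local parameter set; compute section uses it
        params = self.parameter_sets[index_section(self.local_data)]
        return compute_section(self.local_data, params)


units = [ComputeUnit(3.0, {"lo": (2, 1), "hi": (10, 5)}),
         ComputeUnit(9.0, {"lo": (2, 1), "hi": (10, 5)})]

index_section = lambda x: "hi" if x > 5 else "lo"      # selects a parameter set
compute_section = lambda x, p: p[0] * x + p[1]         # generic computation

# Broadcast the same generic instruction to every unit; each produces its
# own localized solution.
print([u.execute(index_section, compute_section) for u in units])  # [7.0, 95.0]
```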
Abstract:
In a processor allocating apparatus employed in a multiprocessor system capable of executing a plurality of tasks in a parallel manner, a compiler compiles a source program constructed of parallel tasks to produce a target program 3, and also produces a communication amount table for tasks, which holds the data amount of the communication operations executed among the respective tasks of the parallel tasks. Referring both to the communication amount table for tasks and to a processor communication cost table, which defines the data communication time per unit of data for every pair of processors employed in the multiprocessor system, the scheduler decides to allocate, to each task of the parallel tasks, the processor for which the communication time among the tasks becomes minimum, and registers this decision in a processor management table.
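A minimal sketch of this allocation decision (illustrative, not the patented scheduler): communication time is estimated as the data amount from the per-task communication amount table multiplied by the per-unit-data cost from the processor communication cost table, and each task is placed on the processor that minimizes this time with respect to the tasks already placed; the resulting placement plays the role of the processor management table. The table contents and names below are assumed for the example.

```python
# Assumed example tables: data exchanged between task pairs, and the
# communication time per unit of data for each processor pair.
comm_amount = {("t1", "t2"): 100, ("t2", "t3"): 40}
comm_cost = {("P0", "P0"): 0, ("P0", "P1"): 2,
             ("P1", "P0"): 2, ("P1", "P1"): 0}

def amount(a, b):
    return comm_amount.get((a, b), 0) + comm_amount.get((b, a), 0)

def allocate(tasks, processors):
    placement = {}                                      # the processor management table
    for task in tasks:
        # choose the processor minimizing communication time to placed tasks
        best = min(processors, key=lambda p: sum(
            amount(task, other) * comm_cost[(p, placement[other])]
            for other in placement))
        placement[task] = best
    return placement

print(allocate(["t1", "t2", "t3"], ["P0", "P1"]))
```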
Abstract:
An incremental method is described for distributing the instructions of an execution sequence among a plurality of processing elements for execution in parallel. The distribution is based upon anticipated availability times of the needed input values for each instruction as well as the anticipated availability times of each processing element for handling each instruction. A self-parallelizing computer system and method are also described for asynchronously processing the distributed instructions in two modes of execution on a set of processing elements which communicate with each other.
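A short list-scheduling sketch of the distribution rule described above: each instruction in the execution sequence is assigned to the processing element on which it can start earliest, i.e. the later of the anticipated availability of its input values and the anticipated availability of that element. The one-cycle latency and the data layout are assumptions for illustration.

```python
# Distribute an execution sequence across processing elements based on
# anticipated availability of inputs and of each element.

def distribute(instructions, num_elements, latency=1):
    """instructions: list of (name, [input_names]) in execution-sequence order."""
    value_ready = {}                          # anticipated availability of each result
    element_free = [0] * num_elements         # anticipated availability of each PE
    assignment = {}
    for name, inputs in instructions:
        inputs_ready = max((value_ready[i] for i in inputs), default=0)
        # choose the element that lets this instruction start earliest
        pe = min(range(num_elements),
                 key=lambda e: max(element_free[e], inputs_ready))
        start = max(element_free[pe], inputs_ready)
        element_free[pe] = start + latency
        value_ready[name] = start + latency
        assignment[name] = pe
    return assignment

prog = [("a", []), ("b", []), ("c", ["a", "b"]), ("d", ["a"])]
print(distribute(prog, num_elements=2))       # e.g. {'a': 0, 'b': 1, 'c': 0, 'd': 1}
```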
Abstract:
A plurality of queries (jobs), each consisting of a set of tasks with precedence constraints between them, are optimally scheduled in two stages for processing on a parallel processing system. In the first stage of scheduling, multiple optimum schedules are created for each job, one for each possible number of processors that might be used to execute the job, and an estimated job execution time is determined for each of these optimum schedules, thereby producing for each job a set of estimated job execution times as a function of the number of processors used for the job execution. Precedence constraints between tasks in each job are respected in creating all of these optimum schedules. Any known optimum scheduling method for parallel processing tasks that have precedence constraints among tasks may be used, but a novel preferred method is also disclosed. The second stage of scheduling utilizes the estimated job execution times determined in the first stage to create an overall optimum schedule for the jobs. The second stage does not involve precedence constraints, because precedence constraints hold between tasks within the same job and not between tasks in separate jobs, so the jobs may be scheduled without observing any precedence constraints. Any known optimum scheduling method for the parallel processing of jobs that have no precedence constraints may be used, but a novel preferred method is also disclosed.
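A compact two-stage sketch in the spirit of the abstract; the per-stage methods below are simple stand-ins (a work/critical-path bound and a greedy choice), not the patent's preferred methods. Stage one estimates each job's execution time for every possible processor count; stage two uses those per-count estimates to pick a processor count for each job.

```python
# Stage 1: estimated execution time of one job for each processor count,
# respecting precedence structure via the critical-path lower bound.
def stage1_estimates(total_work, critical_path, max_procs):
    return {p: max(critical_path, total_work / p) for p in range(1, max_procs + 1)}

# Stage 2: use the per-count estimates to choose a processor count per job
# (greedy stand-in for an overall optimum schedule without precedence).
def stage2_schedule(jobs, max_procs):
    """jobs: dict name -> (total_work, critical_path)."""
    schedule = []
    for name, (work, cp) in jobs.items():
        est = stage1_estimates(work, cp, max_procs)
        best_t = min(est.values())
        # smallest processor count within 10% of the best estimated time
        p = min(k for k, t in est.items() if t <= 1.1 * best_t)
        schedule.append((name, p, est[p]))
    return schedule

jobs = {"q1": (100.0, 20.0), "q2": (30.0, 25.0)}
print(stage2_schedule(jobs, max_procs=8))
```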
Abstract:
In a model-based dynamically configured system, various processing components are created dynamically, interfaced to each other, and scheduled upon demand. A combination of data-driven and demand-driven scheduling techniques is used to enhance the effectiveness of the dynamically configured system.
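A hedged sketch of combining the two scheduling styles: components are created and evaluated only when their output is demanded (demand-driven), while newly arriving data re-triggers already-created components that consume it (data-driven). The class and method names are illustrative, not the patented system.

```python
class Component:
    def __init__(self, name, inputs, fn):
        self.name, self.inputs, self.fn = name, inputs, fn

class System:
    def __init__(self, specs):
        self.specs = specs          # name -> (input names, function)
        self.live = {}              # dynamically created components
        self.values = {}            # produced data

    def demand(self, name):
        """Demand-driven: create and evaluate a component only when needed."""
        if name in self.values:
            return self.values[name]
        inputs, fn = self.specs[name]
        self.live.setdefault(name, Component(name, inputs, fn))
        args = [self.demand(i) for i in inputs]          # pull inputs recursively
        self.values[name] = fn(*args)
        return self.values[name]

    def push(self, name, value):
        """Data-driven: new data re-triggers live components that consume it."""
        self.values[name] = value
        for comp in self.live.values():
            if name in comp.inputs and all(i in self.values for i in comp.inputs):
                self.values[comp.name] = comp.fn(*[self.values[i] for i in comp.inputs])

specs = {"raw": ([], lambda: 1),
         "scaled": (["raw"], lambda x: 10 * x),
         "report": (["scaled"], lambda x: f"value={x}")}
s = System(specs)
print(s.demand("report"))   # demand-driven evaluation: value=10
s.push("raw", 5)            # data-driven update of live consumers of "raw"
print(s.values["scaled"])   # 50
```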
Abstract:
A multiprogramming data processing system comprises a plurality of data processing devices P1, P2, P3, P4, each having local storage 110-116, and furthermore has an interconnecting standard bus 100. The program is divided into program segments S1-S4, and the program segments are grouped into program portions (k, m, n). The respective program portions are each stored in one of the local memory sections. When an extended branch instruction calls an address in a different program portion, a portion change interrupt signal (26) is generated, whereby dynamic allocation of the execution of program segments may be realized. When a privileged portion (0) is called, the portion change interrupt is nullified, both at the call to (28) and the return (23) from the privileged program portion.
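An illustrative simulation of the portion-change rule: an extended branch that targets a different program portion raises a portion change interrupt so execution can be reallocated dynamically, except when the call or the return involves the privileged portion 0, in which case the interrupt is nullified. The function below is a toy model of that decision, not the patented hardware.

```python
PRIVILEGED_PORTION = 0

def extended_branch(current_portion, target_portion):
    """Return True if a portion change interrupt should be generated."""
    if target_portion == current_portion:
        return False            # branch stays within the same portion
    if PRIVILEGED_PORTION in (current_portion, target_portion):
        return False            # call to / return from portion 0: interrupt nullified
    return True                 # cross-portion branch: interrupt, allow reallocation

print(extended_branch(2, 2))    # False: same portion
print(extended_branch(2, 3))    # True:  portion change interrupt
print(extended_branch(2, 0))    # False: calling the privileged portion
print(extended_branch(0, 2))    # False: returning from the privileged portion
```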
Abstract:
The invention relates to distributed ledger technologies such as consensus-based blockchains. A blockchain transaction may include digital resources that are encumbered by a locking script that encodes a set of conditions that must be fulfilled before the encumbered resources may be used (e.g., transferring ownership/control of encumbered resources). A worker (e.g., a computer system) performs one or more computations to generate a proof, which is encoded as part of an unlocking script. A verification algorithm may utilize the proof, a verification key, and additional data such as cryptographic material associated with the worker (e.g., a digital signature) to verify that digital assets of the transaction should be transferred. As a result of the validation of this transaction, any third party is able to check that the contract was executed correctly rather than re-executing the contract, thus saving computational power.
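A highly simplified, hypothetical sketch of this flow: the worker runs the contract once off-chain, attaches a proof to the unlocking script, and any verifier checks the proof against a verification key instead of re-executing the contract. No real proving-system or blockchain library is used here; `make_proof` and `verify` are HMAC-based stand-ins for illustration only.

```python
import hashlib, hmac

VERIFICATION_KEY = b"shared-verification-key"          # assumption: symmetric toy key

def make_proof(contract_inputs, contract_output):
    """Worker side: 'prove' the output was derived from the inputs (toy stand-in)."""
    msg = repr((contract_inputs, contract_output)).encode()
    return hmac.new(VERIFICATION_KEY, msg, hashlib.sha256).hexdigest()

def verify(contract_inputs, claimed_output, proof):
    """Verifier side: check the proof without re-executing the contract."""
    expected = make_proof(contract_inputs, claimed_output)
    return hmac.compare_digest(expected, proof)

# Worker executes the contract once and builds the unlocking script.
inputs, output = (3, 4), 12                            # e.g. a contract computing 3 * 4
unlocking_script = {"output": output, "proof": make_proof(inputs, output)}

# Any third party validates the transaction by checking the proof.
print(verify(inputs, unlocking_script["output"], unlocking_script["proof"]))  # True
print(verify(inputs, 13, unlocking_script["proof"]))                          # False
```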
Abstract:
A vehicle master device includes a rewrite specification data acquisition unit that is configured to acquire rewrite specification data from outside, a rewrite specification data analysis unit that is configured to analyze the rewrite specification data acquired by the rewrite specification data acquisition unit, a group generation unit that is configured to divide a plurality of rewrite target ECUs into a plurality of groups based on the rewrite specification data analyzed by the rewrite specification data analysis unit, and an instruction execution unit that is configured to instruct the rewrite target ECUs, for each of the plurality of groups generated by the group generation unit, to perform at least one of installation, rollback, and activation.
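A sketch of the grouping-and-instruction flow, with hypothetical field names for the rewrite specification data (a `targets` list with `id` and `bus` fields, which are assumptions): the rewrite target ECUs are divided into groups according to the analyzed specification, and each group is then instructed to install, roll back, or activate.

```python
def generate_groups(rewrite_spec):
    """Group rewrite-target ECUs by a key taken from the analyzed spec data
    (here an assumed 'bus' field); returns {group_key: [ecu ids]}."""
    groups = {}
    for ecu in rewrite_spec["targets"]:
        groups.setdefault(ecu["bus"], []).append(ecu["id"])
    return groups

def instruct(groups, operation):
    """operation is one of 'install', 'rollback', 'activation'."""
    assert operation in ("install", "rollback", "activation")
    for key, ecus in groups.items():
        print(f"group {key}: {operation} -> {', '.join(ecus)}")

spec = {"targets": [{"id": "ECU-A", "bus": "CAN1"},
                    {"id": "ECU-B", "bus": "CAN1"},
                    {"id": "ECU-C", "bus": "CAN2"}]}
instruct(generate_groups(spec), "install")
```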