Abstract:
A system for an agnostic runtime architecture. The system includes a system emulation/virtualization converter, an application code converter, and a system converter, wherein the system emulation/virtualization converter and the application code converter implement a system emulation process, and wherein the system converter implements a system and application conversion process for executing code from a guest image through the system converter or the system emulator. The system further includes a reordering process, implemented through JIT (just-in-time) optimization, that ensures loads do not dispatch ahead of other loads to the same address, wherein a load checks subsequent loads from the same thread for a same address, and a thread-checking process that enables other-thread store checks against the entire load queue and a monitor extension.
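Below is a minimal Python sketch of the load-queue checks this abstract describes, under a deliberately simplified model: each queue entry records its thread, address, and program order, a load may not dispatch ahead of an undispatched same-thread load to the same address, and a store from another thread is checked against the entire queue. The class and method names (LoadQueue, can_dispatch, store_check) and the flat-list representation are illustrative assumptions, not the patented hardware design.

    # Simplified software model of the load-queue checks described in the
    # abstract; all names and data structures are illustrative assumptions.
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class LoadEntry:
        thread_id: int
        address: int
        seq: int                 # program order within the thread
        dispatched: bool = False


    class LoadQueue:
        def __init__(self) -> None:
            self.entries: List[LoadEntry] = []

        def add(self, entry: LoadEntry) -> None:
            self.entries.append(entry)

        def can_dispatch(self, load: LoadEntry) -> bool:
            """Same-address ordering check: a load may not dispatch ahead of an
            earlier, still-pending load from the same thread to the same address."""
            for other in self.entries:
                if (other.thread_id == load.thread_id
                        and other.address == load.address
                        and other.seq < load.seq
                        and not other.dispatched):
                    return False
            return True

        def store_check(self, store_thread: int, store_address: int) -> List[LoadEntry]:
            """Check a store from another thread against the entire load queue;
            return the conflicting loads that would need to be replayed."""
            return [e for e in self.entries
                    if e.thread_id != store_thread and e.address == store_address]


    # Usage sketch
    q = LoadQueue()
    q.add(LoadEntry(thread_id=0, address=0x1000, seq=1))
    q.add(LoadEntry(thread_id=0, address=0x1000, seq=2))
    print(q.can_dispatch(q.entries[1]))                          # False: older same-address load pending
    print(q.store_check(store_thread=1, store_address=0x1000))  # conflicting loads from other threads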
Abstract:
A computer-implemented system and method facilitate dynamically allocating server resources. The system and method include determining a current queue distribution, referencing historical information associated with execution of at least one task, and predicting, based on the current queue distribution and the historical information, a total number of tasks of various task types that are to be executed during a future time period. Based on this prediction, a resource manager determines the number of servers that should be instantiated for use during that time period.
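The following Python sketch illustrates the two steps this abstract names: projecting per-type task counts from the current queue distribution plus historical information, then deriving a server count. The rate-based historical model, the function names, and the tasks-per-server capacity are assumptions for illustration only.

    # Illustrative sketch: predict task counts from queue distribution and
    # historical rates, then size the server pool. All parameters are assumed.
    from collections import Counter
    from math import ceil


    def predict_task_counts(current_queue, historical_rates, horizon_minutes):
        """Combine the current queue distribution with historical arrival rates
        to project how many tasks of each type will run during the horizon."""
        queued = Counter(task["type"] for task in current_queue)
        projected = {}
        for task_type, rate_per_minute in historical_rates.items():
            projected[task_type] = queued.get(task_type, 0) + round(rate_per_minute * horizon_minutes)
        return projected


    def servers_needed(projected_counts, tasks_per_server):
        """Resource-manager step: convert projected task counts into a server count."""
        total = sum(projected_counts.values())
        return max(1, ceil(total / tasks_per_server))


    # Usage sketch
    queue = [{"type": "transcode"}, {"type": "transcode"}, {"type": "report"}]
    rates = {"transcode": 2.5, "report": 0.5}              # historical tasks per minute
    counts = predict_task_counts(queue, rates, horizon_minutes=30)
    print(counts, servers_needed(counts, tasks_per_server=20))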
Abstract:
An approach is provided for load balancing in multi-level distributed computations. A distributed computation control platform determines closure capability data associated with respective levels of a computational architecture, wherein the respective levels include, at least in part, a device level, an infrastructure level, and a cloud computing level. The distributed computation control platform also determines functional flow information of the respective levels, one or more nodes of the respective levels, or a combination thereof with respect to at least one set of one or more computation closures. The distributed computation control platform further causes, at least in part, processing of at least the closure capability data, the functional flow information, or a combination thereof to determine: (a) a distribution of the one or more computation closures among the respective levels, (b) the one or more nodes, or (c) a combination thereof.
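A schematic Python sketch of the distribution decision follows: each computation closure is placed at the first level (device, infrastructure, cloud) whose capability data satisfies its requirements. The capability fields, the scoring rule, and all names are assumptions used only to make the idea concrete.

    # Schematic placement of computation closures across device, infrastructure,
    # and cloud levels based on assumed closure capability data.
    LEVELS = ["device", "infrastructure", "cloud"]

    capabilities = {
        "device":         {"cpu": 1.0,  "memory_mb": 512,   "energy_ok": False},
        "infrastructure": {"cpu": 8.0,  "memory_mb": 8192,  "energy_ok": True},
        "cloud":          {"cpu": 64.0, "memory_mb": 65536, "energy_ok": True},
    }


    def place_closure(closure, capabilities):
        """Return the first level whose capability data meets the closure's
        requirements, falling back to the cloud level."""
        for level in LEVELS:
            cap = capabilities[level]
            if (cap["cpu"] >= closure["cpu"]
                    and cap["memory_mb"] >= closure["memory_mb"]
                    and (cap["energy_ok"] or not closure["energy_sensitive"])):
                return level
        return "cloud"


    closures = [
        {"name": "sensor_filter", "cpu": 0.5,  "memory_mb": 64,    "energy_sensitive": False},
        {"name": "aggregate",     "cpu": 4.0,  "memory_mb": 2048,  "energy_sensitive": True},
        {"name": "train_model",   "cpu": 32.0, "memory_mb": 32768, "energy_sensitive": True},
    ]

    distribution = {c["name"]: place_closure(c, capabilities) for c in closures}
    print(distribution)  # e.g. {'sensor_filter': 'device', 'aggregate': 'infrastructure', 'train_model': 'cloud'}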
Abstract:
Embodiments of the present invention provide a resource processing method, an operating system, and a device. The method is applied to a multi-core operating system, where the multi-core operating system includes a management operating system and multiple load operating systems that run on a host machine, and the host machine includes a physical resource pool. The method includes: allocating, by the management operating system to each load operating system, a physical resource set exclusively used by that load operating system; constructing a startup mirror for each load operating system; setting, for each load operating system, a mapping relationship from a virtual memory address to a physical memory address that is required for executing the startup mirror; determining, among the processor cores allocated to a first load operating system, a startup processor core that starts up the first load operating system; instructing the startup processor core to read the mapping relationship from a virtual memory address to a physical memory address that is required for executing the startup mirror of the first load operating system; and instructing the startup processor core to execute the startup mirror pre-constructed for the first load operating system.
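The Python sketch below walks through the sequence of steps this abstract lists (exclusive resource allocation, startup-mirror construction, virtual-to-physical mapping, startup-core selection, execution). The classes, method names, and print-based "instructions" are illustrative assumptions, not the described operating-system implementation.

    # Simplified, illustrative walk-through of the startup flow in the abstract.
    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class ResourceSet:
        cores: List[int]             # processor cores exclusively allocated
        memory_pages: List[int]      # physical pages exclusively allocated


    @dataclass
    class LoadOS:
        name: str
        resources: ResourceSet
        startup_mirror: bytes = b""                               # boot image built by the management OS
        page_table: Dict[int, int] = field(default_factory=dict)  # virtual -> physical mapping


    class ManagementOS:
        def allocate(self, name: str, cores: List[int], pages: List[int]) -> LoadOS:
            """Allocate an exclusive physical resource set to a load OS."""
            return LoadOS(name, ResourceSet(cores, pages))

        def build_startup_mirror(self, os_: LoadOS, image: bytes) -> None:
            """Construct the startup mirror and the VA->PA mapping needed to run it."""
            os_.startup_mirror = image
            os_.page_table = {va: pa for va, pa in enumerate(os_.resources.memory_pages)}

        def start(self, os_: LoadOS) -> None:
            """Pick a startup core, hand it the mapping, and have it run the mirror."""
            startup_core = os_.resources.cores[0]
            print(f"core {startup_core}: read page table {os_.page_table}")
            print(f"core {startup_core}: execute startup mirror ({len(os_.startup_mirror)} bytes)")


    mgmt = ManagementOS()
    guest = mgmt.allocate("load-os-1", cores=[2, 3], pages=[0x200, 0x201, 0x202])
    mgmt.build_startup_mirror(guest, image=b"\x7fELF...")
    mgmt.start(guest)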
Abstract:
Embodiments are directed to determining an optimal number of concurrently running cloud resource instances and to providing an interactive interface that shows projected operational metric measurements. In one scenario, a computer system accesses a first portion of metric information that identifies operational metric measurements for cloud resource instances over a first period of time, and further accesses a second portion of metric information that identifies operational metric measurements for the cloud resource instances over a second period of time. The computer system then calculates projected operational metric measurements based on the identified operational metric measurements over the first period of time (e.g. for reactive tuning) and further based on the identified operational metric measurements over the second period of time (e.g. for predictive tuning). The computer system then determines, based on the projected operational metric measurements, the number of cloud resource instances that are to be concurrently running at a specified future point in time.
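A hedged Python sketch of the projection step follows: measurements from the first period drive reactive tuning, measurements from the second period drive predictive tuning, and their blend is converted into an instance count. The blending weight, per-instance capacity, and function names are assumptions, not values from the described embodiments.

    # Illustrative projection of operational metrics from two measurement
    # periods, then sizing of concurrently running instances. Parameters assumed.
    from math import ceil


    def project_metric(first_period, second_period, weight_first=0.6):
        """Blend the average of the first-period measurements (reactive tuning)
        with the average of the second-period measurements (predictive tuning)."""
        reactive = sum(first_period) / len(first_period)
        predictive = sum(second_period) / len(second_period)
        return weight_first * reactive + (1 - weight_first) * predictive


    def instances_needed(projected_load, capacity_per_instance):
        """Determine how many instances should run concurrently at the future time."""
        return max(1, ceil(projected_load / capacity_per_instance))


    # Usage sketch: requests per second measured over two periods
    first = [420, 450, 480, 510]            # first period, e.g. the last hour
    second = [300, 320, 600, 650, 310]      # second period, e.g. the same hour last week
    load = project_metric(first, second)
    print(load, instances_needed(load, capacity_per_instance=100))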
Abstract:
Example resource management systems and methods are described. In one implementation, a resource manager is configured to manage data processing tasks associated with multiple data elements. An execution platform is coupled to the resource manager and includes multiple execution nodes configured to store data retrieved from multiple remote storage devices. Each execution node includes a cache and a processor, where the cache and processor are independent of the remote storage devices. A metadata manager is configured to access metadata associated with at least a portion of the multiple data elements.
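The following Python sketch mirrors the architecture this abstract outlines: execution nodes with their own cache and processing that pull data from remote storage on a cache miss, a metadata manager, and a resource manager that assigns tasks to nodes. The class names, the round-robin assignment, and the cache policy are illustrative assumptions rather than the described system's actual design.

    # Schematic model of the resource manager, execution platform, and metadata
    # manager described in the abstract; all names and policies are assumed.
    from typing import Dict, List


    class RemoteStorage:
        def __init__(self, data: Dict[str, bytes]) -> None:
            self.data = data

        def read(self, key: str) -> bytes:
            return self.data[key]


    class ExecutionNode:
        """An execution node with its own cache and processing, independent of
        the remote storage devices it reads from."""
        def __init__(self, storage: RemoteStorage) -> None:
            self.storage = storage
            self.cache: Dict[str, bytes] = {}

        def process(self, key: str) -> int:
            if key not in self.cache:               # cache miss: fetch from remote storage
                self.cache[key] = self.storage.read(key)
            return len(self.cache[key])             # stand-in for real processing


    class MetadataManager:
        """Tracks metadata for data elements (here, just their storage location)."""
        def __init__(self) -> None:
            self.metadata: Dict[str, str] = {}

        def register(self, key: str, location: str) -> None:
            self.metadata[key] = location


    class ResourceManager:
        """Assigns data processing tasks to execution nodes round-robin."""
        def __init__(self, nodes: List[ExecutionNode]) -> None:
            self.nodes = list(nodes)

        def run(self, keys: List[str]) -> List[int]:
            return [self.nodes[i % len(self.nodes)].process(k) for i, k in enumerate(keys)]


    storage = RemoteStorage({"t1": b"alpha", "t2": b"beta"})
    meta = MetadataManager()
    meta.register("t1", "remote://bucket/t1")
    rm = ResourceManager([ExecutionNode(storage), ExecutionNode(storage)])
    print(rm.run(["t1", "t2", "t1"]))   # second read of t1 is served from the node's cache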