Abstract:
A cooling device for a heat source, such as an electronic component, has one or more nano- and/or micro-sized channels connected to one or more reservoirs. The heat source causes nucleation within a channel, and a vapor bubble forms, removing heat from the heat source via evaporation of liquid into vapor within the bubble and condensation of the generated vapor at the cooler ends of the bubble. Thus, the channel operates as a passive heat pipe and removes heat from the source by passively circulating the cooling fluid between the vapor bubble and the reservoir(s).
Abstract:
Illustrative embodiments include a system and computer program product for creating a virtual machine using a preprovisioned mutated template. A template to use for creating the virtual machine is identified, the template including data usable to create the virtual machine on a data processing system. A block of data is selected in the mutated template for reconstructing the template from the mutated template. The block of data is included in the mutated template at a location specified in a manifest associated with the mutated template. A data structure of the template is populated with the block of data such that the block of data occupies a predetermined position in the template, thereby reconstructing the template from the mutated template. The virtual machine is created on the data processing system using the template.
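The reconstruction step could look roughly like the following Python sketch. The manifest layout, block representation, and function names are assumptions made for illustration, not the patented implementation.

```python
# Illustrative sketch only: the manifest format, block layout, and function
# names are assumptions, not the patented implementation.

def reconstruct_template(mutated_template: bytes, manifest: list[dict], template_size: int) -> bytearray:
    """Rebuild a template by copying each block from its location in the
    mutated template to its predetermined position in the template."""
    template = bytearray(template_size)
    for entry in manifest:
        src, dst, length = entry["mutated_offset"], entry["template_offset"], entry["length"]
        block = mutated_template[src:src + length]      # select the block in the mutated template
        template[dst:dst + length] = block              # place it at its predetermined position
    return template

# Example: two blocks stored out of order in the mutated template.
mutated = b"WORLDHELLO "
manifest = [
    {"mutated_offset": 5, "template_offset": 0, "length": 6},   # "HELLO "
    {"mutated_offset": 0, "template_offset": 6, "length": 5},   # "WORLD"
]
print(reconstruct_template(mutated, manifest, 11))  # bytearray(b'HELLO WORLD')
```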
Abstract:
A method of provisioning in a cloud compute environment having a set of cloud hosts associated with one another. The method begins by forming a distributed, cooperative cache across the set of cloud hosts by declaring a portion of a data store associated with a cloud host as a cache, and storing template images and patches in the cache. Caching activity across the distributed, cooperative cache is coordinated by having the caches share information about their respective contents. A control routine at a cache receives requests for template images or patches, responds to the requests if the requested artifacts are available or, upon a cache miss, forwards the request to another one of the caches. Periodically, the composition of the distributed, cooperative cache is computed, and the template images and patches are populated into the caches using the computed cache composition.
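The control routine's request path might be sketched as below. The class, peer list, and forwarding policy are simplifying assumptions; peer discovery, eviction, and the periodic composition computation are omitted.

```python
# Illustrative sketch: class and method names are assumptions, not the claimed method.

class CooperativeCache:
    def __init__(self, host_id: str):
        self.host_id = host_id
        self.store = {}          # artifact name -> template image or patch bytes
        self.peers = []          # CooperativeCache instances on the other cloud hosts

    def put(self, name: str, artifact: bytes) -> None:
        self.store[name] = artifact

    def get(self, name: str, visited=None) -> bytes | None:
        """Serve a request locally, or forward it to another cache on a miss."""
        visited = visited or set()
        visited.add(self.host_id)
        if name in self.store:                       # local hit
            return self.store[name]
        for peer in self.peers:                      # cache miss: forward the request
            if peer.host_id not in visited:
                found = peer.get(name, visited)
                if found is not None:
                    return found
        return None                                  # miss across the whole cooperative cache

# Usage: two hosts sharing the cooperative cache.
a, b = CooperativeCache("host-a"), CooperativeCache("host-b")
a.peers, b.peers = [b], [a]
b.put("rhel-template.img", b"...image bytes...")
print(a.get("rhel-template.img") is not None)        # True, served via host-b
```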
Abstract:
A first provider edge (PE) device is configured to: receive a Label Distribution Protocol (LDP) MAC Flush message from a PE device via an input port; flush a routing table in response to the LDP MAC Flush message; determine whether the LDP MAC Flush message comprises a PE identifier corresponding to the PE device; generate a Topology Change Notification (TCN) message based on the LDP MAC Flush message when the LDP MAC Flush message comprises the PE identifier corresponding to the PE device; and output the TCN message.
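The decision logic in this abstract could be sketched as follows. The message fields, table model, and class names are simplified assumptions invented for illustration; real LDP and TCN encodings are far more involved.

```python
# Illustrative sketch only: message fields and the flush model are simplified
# assumptions, not an actual LDP or spanning-tree implementation.
from dataclasses import dataclass

@dataclass
class LdpMacFlush:
    pe_identifiers: set          # PE identifiers carried in the MAC Flush message

@dataclass
class TopologyChangeNotification:
    origin_pe: str

class ProviderEdgeDevice:
    def __init__(self, pe_id: str):
        self.pe_id = pe_id
        self.routing_table = {}

    def handle_ldp_mac_flush(self, msg: LdpMacFlush):
        self.routing_table.clear()                   # flush the table in response to the message
        if self.pe_id in msg.pe_identifiers:         # message carries this PE's identifier
            return TopologyChangeNotification(origin_pe=self.pe_id)   # generate a TCN for output
        return None

pe1 = ProviderEdgeDevice("PE-1")
print(pe1.handle_ldp_mac_flush(LdpMacFlush(pe_identifiers={"PE-1", "PE-3"})))
```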
Abstract:
At a client, a video is received. The video includes one or more advertisement slots. The video is played back to a user. During the playback of the video, an impending advertisement slot is detected. One or more advertisements are requested for placement in the advertisement slot. The one or more advertisements are received and placed in the advertisement slot.
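A bare-bones client-side flow might look like the sketch below. The helper functions, the slot representation, and the ad-server URL are hypothetical placeholders; buffering and rendering are stubbed out.

```python
# Illustrative sketch: the ad-server protocol and playback engine are stubbed out.

def request_advertisements(slot_id: int, count: int = 1) -> list[str]:
    """Stand-in for a request to an ad server; returns placeholder ad URLs."""
    return [f"https://ads.example.com/creative/{slot_id}/{i}" for i in range(count)]

def play_video(video_segments: list[str], ad_slots: set[int]) -> None:
    """Play back a video and fill each impending advertisement slot."""
    for position, segment in enumerate(video_segments):
        if position + 1 in ad_slots:                      # detect an impending ad slot
            ads = request_advertisements(position + 1)    # request ads for that slot
            print(f"placing {ads} in slot at position {position + 1}")
        print(f"rendering segment: {segment}")

play_video(["intro", "scene-1", "scene-2", "outro"], ad_slots={2})
```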
Abstract:
Techniques for creating a virtual machine super template for use in creating a user-requested virtual machine template. A method includes identifying at least one virtual machine super template to be created by analyzing at least one existing template in a repository and/or a user-defined combination of software, creating the super template by installing the software requested by the user within the super template, and creating a user-requested virtual machine template by uninstalling software from the super template that is not required in the user-requested template and/or adding software that is required in the user-requested template but not present in the super template.
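As a toy sketch of the idea, software packages can be modeled as plain strings and "installing" as set arithmetic; the function names and the union-based heuristic below are assumptions, not the claimed method.

```python
# Illustrative sketch: packages are strings, installation is set arithmetic.

def create_super_template(existing_templates: list[set[str]]) -> set[str]:
    """Identify a super template as the union of software seen in existing templates."""
    super_template = set()
    for template in existing_templates:
        super_template |= template
    return super_template

def derive_user_template(super_template: set[str], requested: set[str]) -> set[str]:
    """Uninstall software not required by the request; add requested software the super template lacks."""
    to_uninstall = super_template - requested
    to_add = requested - super_template
    return (super_template - to_uninstall) | to_add

existing = [{"os", "jdk", "tomcat"}, {"os", "jdk", "db2"}]
super_t = create_super_template(existing)                      # os, jdk, tomcat, db2
print(derive_user_template(super_t, {"os", "jdk", "nginx"}))   # os, jdk, nginx
```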
Abstract:
According to one aspect of the present disclosure, a system and technique for preprovisioning virtual machines is disclosed. The system includes a processing system configured to receive requests for network computing resources and having a virtual machine (VM) manager configured to: analyze the requests and identify each different virtual machine configuration, each VM configuration having a plurality of configuration attributes; determine a request frequency corresponding to each requested VM configuration; determine a configuration of each provisioned VM on the network; responsive to determining the configuration of each provisioned VM, predict a configuration for a preprovisioned VM likely to be requested, based on the frequency of the requested VM configurations and the configurations of the provisioned VMs; and create the preprovisioned VM on the network.
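One way to picture the prediction step is the sketch below, which scores each configuration by request frequency minus already-provisioned instances. That heuristic and all names are assumptions made for demonstration, not the disclosed technique.

```python
# Illustrative sketch: the shortfall heuristic is an assumption for demonstration.
from collections import Counter

def predict_preprovision(requested_configs: list[tuple], provisioned_configs: list[tuple]):
    """Pick the configuration most likely to be requested next but under-provisioned."""
    demand = Counter(requested_configs)              # request frequency per configuration
    supply = Counter(provisioned_configs)            # currently provisioned VMs per configuration
    shortfall = {cfg: demand[cfg] - supply.get(cfg, 0) for cfg in demand}
    return max(shortfall, key=shortfall.get)         # configuration with the largest unmet demand

requests = [("2vcpu", "4GB", "rhel")] * 5 + [("4vcpu", "8GB", "windows")] * 2
provisioned = [("2vcpu", "4GB", "rhel")] * 4
print(predict_preprovision(requests, provisioned))   # ('4vcpu', '8GB', 'windows')
```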
Abstract:
A system and associated method for automatically pipeline parallelizing a nested loop in sequential code over a predefined number of threads. Pursuant to the task dependencies of the nested loop, each subloop of the nested loop is allocated to a respective thread. Combinations of stage partitions executing the nested loop are configured for parallel execution of a subloop where permitted. For each combination of stage partitions, a respective bottleneck is calculated, and the combination with the minimum bottleneck is selected for parallelization.
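The minimum-bottleneck selection can be illustrated with the sketch below, which assumes toy subloop costs, contiguous stage partitions, and one thread per stage, and takes a pipeline's bottleneck to be its slowest stage. These modeling choices are assumptions, not the claimed method.

```python
# Illustrative sketch: subloop costs and the thread model are toy assumptions.
from itertools import combinations

def bottleneck(stage_costs: list[float], threads_per_stage: list[int]) -> float:
    """A pipeline's throughput is limited by its slowest stage."""
    return max(cost / threads for cost, threads in zip(stage_costs, threads_per_stage))

def best_partition(subloop_costs: list[float], num_threads: int):
    """Try every contiguous partition of subloops into stages and keep the
    combination with the minimum bottleneck (one thread per stage here)."""
    n = len(subloop_costs)
    best = None
    for cuts in combinations(range(1, n), num_threads - 1):      # stage boundaries
        bounds = (0, *cuts, n)
        stages = [sum(subloop_costs[bounds[i]:bounds[i + 1]]) for i in range(num_threads)]
        b = bottleneck(stages, [1] * num_threads)
        if best is None or b < best[0]:
            best = (b, bounds)
    return best

print(best_partition([4.0, 1.0, 1.0, 2.0], num_threads=2))   # (4.0, (0, 1, 4))
```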
Abstract:
A framework instantiates an application from its disk snapshots. The disk snapshots are taken from a different network environment and migrated to a virtualized environment. Modifications to operating systems and hypervisors are avoided, and no special network isolation support is required. The framework is extensible and plug-in based, allowing product experts to provide knowledge about discovering, updating, starting, and stopping software components. This knowledge base is compiled into a plan that executes various interleaved configuration discovery, update, and start tasks such that the required configuration model can be discovered with minimal start and update task execution. The plan generation automatically stitches together knowledge for the various products, thus significantly simplifying the knowledge specification. Once discovery is complete, the framework uses the discovered model to update stale network configurations across the software stack and to customize configurations beyond network settings.
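A rough sketch of how per-product plug-in knowledge might be stitched into one interleaved plan appears below. The plug-in interface, the dependency-ordered plan, and all names are assumptions for illustration; the actual framework's plan generation and configuration model are not reproduced here.

```python
# Illustrative sketch: the plug-in interface and plan generation are simplified
# assumptions; real knowledge modules would also cover stopping components.

class ProductPlugin:
    """Knowledge about one software component, supplied by a product expert."""
    def __init__(self, name, depends_on=()):
        self.name, self.depends_on = name, list(depends_on)

    def discover(self, model):  model[self.name] = {"config": "discovered"}
    def update(self, model):    model[self.name]["config"] = "network settings refreshed"
    def start(self, model):     model[self.name]["state"] = "started"

def generate_plan(plugins):
    """Stitch per-product knowledge into one interleaved discover/update/start plan,
    respecting declared dependencies (simple topological order)."""
    by_name = {p.name: p for p in plugins}
    ordered, seen = [], set()
    def visit(p):
        if p.name in seen:
            return
        seen.add(p.name)
        for dep in p.depends_on:
            visit(by_name[dep])
        ordered.append(p)
    for p in plugins:
        visit(p)
    return [(step, p) for p in ordered for step in ("discover", "update", "start")]

model = {}
plugins = [ProductPlugin("app-server", depends_on=["database"]), ProductPlugin("database")]
for step, plugin in generate_plan(plugins):
    getattr(plugin, step)(model)                      # execute the interleaved plan
print(model)
```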