Abstract:
A method and system for determining a device identifier assigned to a device within an installation of devices connected via a network is provided. A system determines the device identifier of a device that has been repaired and reinstalled so that the device can be placed in service. Upon receiving an indication that a repaired device has been reinstalled, the system requests and receives a possible device identifier of the repaired device from an interconnect device that connects the repaired device to the network. To verify that the possible device identifier is the actual device identifier, the system directs the repaired device to reboot. When the repaired device reboots, it broadcasts its device identifier. Upon receiving the broadcast device identifier, the system verifies that the possible device identifier matches the broadcast device identifier.
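The verification step described above can be sketched as a small comparison routine. This is a minimal illustration only: the class and function names (`Interconnect`, `possible_identifier`, `verify_reinstalled_device`) are hypothetical and not taken from the abstract.

```python
class Interconnect:
    """Stand-in for the interconnect device that connects the repaired
    device to the network (hypothetical API)."""

    def __init__(self, port_to_id):
        # Mapping from physical port to the identifier the interconnect
        # believes is attached there.
        self._port_to_id = port_to_id

    def possible_identifier(self, port):
        """Return the possible device identifier for a port, or None."""
        return self._port_to_id.get(port)


def verify_reinstalled_device(interconnect, port, broadcast_id):
    """Verify that the interconnect's possible identifier matches the
    identifier the repaired device broadcasts after it reboots."""
    possible = interconnect.possible_identifier(port)
    return possible is not None and possible == broadcast_id
```

A match confirms the possible identifier is the actual identifier; a mismatch (or an unknown port) indicates the device cannot yet be placed in service.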
Abstract:
A power control system for saving power by powering on only enough application servers to satisfy the current workload, plus any required reserve capacity based on administrative settings, is disclosed. As the load increases, more servers are powered on; as the load decreases, some servers are powered off. The power control system provides a reasonable end-user experience at the least cost based on the power consumption of the servers.
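The sizing policy described above can be sketched as a simple capacity calculation. The function name, the reserve-as-a-fraction parameterization, and the per-server capacity unit are all illustrative assumptions, not details from the abstract.

```python
import math


def servers_needed(current_load, reserve_fraction, per_server_capacity):
    """Number of servers to keep powered on: enough for the current
    workload plus an administrator-configured reserve.

    current_load        -- offered load, in the same units as capacity
    reserve_fraction    -- e.g. 0.25 for 25% headroom (administrative setting)
    per_server_capacity -- load one powered-on server can handle
    """
    target = current_load * (1.0 + reserve_fraction)
    # Keep at least one server on so new requests can always be served.
    return max(1, math.ceil(target / per_server_capacity))
```

As the load rises the returned count grows and more servers are powered on; as the load falls the count shrinks and the excess servers can be powered off.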
Abstract:
An input-output (IO) virtualization system connectable to a network is disclosed. The system can include a second virtual switch connected to a memory bus and configured to receive network packets from a first virtual switch, and an offload processor module supporting the second virtual switch, the offload processor module further comprising at least one offload processor configured to modify network packets and direct the modified network packets to the first virtual switch through the memory bus.
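The packet path described above (first virtual switch, over the memory bus, to the second virtual switch, through the offload processor, and back) can be modeled in software. This is only a behavioral sketch; the class names and the `modify`/`handle` interfaces are hypothetical, and the real system is a hardware module on a memory bus.

```python
class OffloadProcessor:
    """Models the offload processor that modifies network packets
    (hypothetical software stand-in for the hardware module)."""

    def __init__(self, modify):
        self.modify = modify  # packet-transformation function


class SecondVirtualSwitch:
    """Receives packets from the first virtual switch via the memory
    bus, has the offload processor modify them, and directs the
    modified packets back to the first virtual switch."""

    def __init__(self, offload):
        self.offload = offload

    def handle(self, packet, send_to_first_switch):
        # `send_to_first_switch` stands in for the return path over
        # the memory bus.
        send_to_first_switch(self.offload.modify(packet))
```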
Abstract:
A system is disclosed that includes at least one processor module having an in-line module connector configured to physically connect the processor module to at least one in-line memory slot of a system memory bus; and at least one integrated circuit device (IC) mounted on the module. The IC includes at least one offload processor comprising a central processing unit and cache memory, at least one context memory coupled to the offload processor, and logic coupled to the offload processor and context memory and configured to detect predetermined write operations over the system memory bus.
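The "detect predetermined write operations over the system memory bus" behavior can be sketched as a filter over observed bus operations. The address window, the operation record format, and the function name are all illustrative assumptions; the abstract describes dedicated logic, not software.

```python
# Illustrative address window the logic is configured to watch.
PREDETERMINED_RANGE = range(0x1000, 0x2000)


def detect_writes(bus_ops):
    """Model of logic watching system-memory-bus traffic: flag write
    operations that target the predetermined address window.

    Each op is a dict like {"kind": "write", "addr": 0x1004}.
    """
    return [
        op for op in bus_ops
        if op["kind"] == "write" and op["addr"] in PREDETERMINED_RANGE
    ]
```

In the described system, such detected writes would be handed to the offload processor, with the context memory preserving per-flow state between them.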
Abstract:
Cloud computing platforms having computer-readable media that perform methods to manage virtual hard drives as blobs are provided. The cloud computing platform includes fabric computers and blob stores. The fabric computers execute virtual machines that implement one or more applications that access virtual hard drives. The data in the virtual hard drives is accessed, via a blob interface, from blobs in the blob stores. The blob stores interface with a driver that translates some application input/output (I/O) requests destined to the virtual hard drives to blob commands when accessing data in the virtual hard drives.
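The driver's role of translating virtual-hard-drive I/O into blob commands can be sketched as offset-range reads and writes against a blob. The `BlobStore`/`VhdDriver` classes and the `put_range`/`get_range` command names are hypothetical stand-ins for the blob interface, not the platform's actual API.

```python
class BlobStore:
    """Minimal in-memory stand-in for a blob store (hypothetical)."""

    def __init__(self):
        self._blobs = {}

    def put_range(self, name, offset, data):
        """Write `data` into the named blob at a byte offset."""
        blob = bytearray(self._blobs.get(name, b""))
        end = offset + len(data)
        if len(blob) < end:
            blob.extend(b"\x00" * (end - len(blob)))  # zero-fill gap
        blob[offset:end] = data
        self._blobs[name] = bytes(blob)

    def get_range(self, name, offset, length):
        """Read `length` bytes from the named blob at a byte offset."""
        return self._blobs.get(name, b"")[offset:offset + length]


class VhdDriver:
    """Translates virtual-hard-drive I/O requests into blob commands."""

    def __init__(self, store, blob_name):
        self.store, self.blob_name = store, blob_name

    def write(self, offset, data):
        self.store.put_range(self.blob_name, offset, data)

    def read(self, offset, length):
        return self.store.get_range(self.blob_name, offset, length)
```

An application's read or write against the virtual drive thus becomes a range operation on the backing blob, which is what lets the drive's data live in the blob store.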
Abstract:
"An affordable, highly trustworthy, survivable and available, operationally efficient distributed supercomputing infrastructure for processing, sharing and protecting both structured and unstructured information." A primary objective of the SHADOWS infrastructure is to establish a highly survivable, essentially maintenance-free shared platform for extremely high-performance computing (i.e., supercomputing), with "high performance" defined both in terms of total throughput and in terms of very low latency (although not every problem or customer necessarily requires very low latency), while achieving unprecedented levels of affordability. At its simplest, the idea is to use distributed "teams" of nodes in a self-healing network as the basis for managing and coordinating both the work to be accomplished and the resources available to do the work. The SHADOWS concept of "teams" is responsible for its ability to "self-heal" and "adapt" its distributed resources in an "organic" manner. Furthermore, the "teams" themselves are at the heart of decision-making, processing, and storage in the SHADOWS infrastructure. Everything that is important is handled under the auspices and stewardship of a team.
Abstract:
A system and method of dynamically controlling a reservation of compute resources within a compute environment is disclosed. The method aspect of the invention comprises receiving a request from a requestor for a reservation of resources within the compute environment, reserving a first group of resources, evaluating resources within the compute environment to determine whether a more efficient use of the compute environment is available and, if so, canceling the reservation for the first group of resources and reserving a second group of resources of the compute environment according to the evaluation.
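The evaluate-then-swap step described above can be sketched as comparing candidate resource groups under a cost function. The function name and the idea of expressing "more efficient use" as a numeric cost are illustrative assumptions, not details from the abstract.

```python
def rebalance_reservation(current_group, candidate_groups, cost):
    """Re-evaluate a reservation: if some candidate group is a more
    efficient use of the compute environment than the currently
    reserved group, switch to it (i.e., cancel the first reservation
    and reserve the second group); otherwise keep the current one.

    `cost` maps a resource group to a number; lower is more efficient.
    """
    best = min(candidate_groups, key=cost, default=None)
    if best is not None and cost(best) < cost(current_group):
        return best  # cancel `current_group`, reserve `best`
    return current_group  # no more efficient use found
```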
Abstract:
A system, a computer-readable medium and a method for performing intelligent data pre-staging for a job submitted to a cluster environment are disclosed. The method aspect comprises determining the availability of compute resources, including availability timeframes, to process the submitted job, determining data requirements for processing the job, and determining a co-allocation in time reservation.
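The co-allocation in time step can be sketched as finding the earliest moment at which both conditions hold: compute resources are available and the job's input data has finished pre-staging. The function name, the window representation, and the single staging-completion time are illustrative assumptions.

```python
def earliest_coallocation(compute_windows, staging_done_at):
    """Earliest start time at which compute resources are available
    and data pre-staging has completed.

    compute_windows -- iterable of (start, end) availability timeframes
    staging_done_at -- time at which the required data is fully staged
    Returns the start time, or None if no window can accommodate the job.
    """
    for start, end in sorted(compute_windows):
        candidate = max(start, staging_done_at)
        if candidate < end:
            return candidate
    return None
```

Reserving compute and data movement together this way is what lets the job start as soon as both its resources and its data are in place.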