Abstract:
Embodiments of the approaches disclosed herein include a subsystem with an access tracking mechanism configured to monitor access operations directed to a first memory and a second memory. The access tracking mechanism detects an access operation generated by a processor for accessing a first memory page residing on the second memory. The access tracking mechanism further determines that the first memory page is included in a first subset of memory pages residing on the second memory. The access tracking mechanism further locates, within a reference vector, a reference bit that corresponds to the first memory page, and sets the reference bit. One advantage of the present invention is that memory pages in a hybrid memory system migrate as needed to increase overall memory performance.
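To make the reference-vector mechanism concrete, the following is a minimal sketch of tracking accesses to a subset of pages with a reference-bit vector. The class name ReferenceTracker, the byte-packed bit layout, and the method names are illustrative assumptions, not details taken from the disclosure.

```python
class ReferenceTracker:
    def __init__(self, tracked_pages):
        # tracked_pages: page numbers of the subset residing on the second memory
        self.index = {page: i for i, page in enumerate(tracked_pages)}
        self.reference_bits = bytearray((len(tracked_pages) + 7) // 8)

    def record_access(self, page):
        """Set the reference bit for an accessed page if it is in the tracked subset."""
        i = self.index.get(page)
        if i is None:
            return False  # not in the tracked subset; nothing to record
        self.reference_bits[i // 8] |= 1 << (i % 8)
        return True

    def is_referenced(self, page):
        i = self.index[page]
        return bool(self.reference_bits[i // 8] & (1 << (i % 8)))

tracker = ReferenceTracker(tracked_pages=[0x2000, 0x2001, 0x2002])
tracker.record_access(0x2001)
print(tracker.is_referenced(0x2001))  # True
```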
Abstract:
A computer program product for automatically gauging a benefit of a tuning action. The computer program product includes a computer readable storage medium having computer readable program code embodied therewith. The computer readable program code includes computer readable program code configured to collect a plurality of observations of a running state of a plurality of threads in a computer system; computer readable program code configured to identify a plurality of resources of the computer system and a capacity of each resource of the plurality of resources; computer readable program code configured to map an observation of the running state of each thread of the plurality of threads to the resource that the thread uses; and computer readable program code configured to apply the tuning action to a first resource of the plurality of resources to determine an impact on the performance of the computer system.
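As an illustration of the described flow, here is a hedged sketch that counts running-state observations per resource and compares a performance measure before and after tuning the busiest resource. The observation fields and the apply_tuning and measure_performance hooks are assumptions, not elements of the claimed program code.

```python
from collections import Counter

def resource_usage(observations):
    """Count running-state samples attributed to each resource."""
    usage = Counter()
    for obs in observations:
        # obs example: {"thread": 12, "state": "running", "resource": "cpu0"}
        if obs["state"] == "running":
            usage[obs["resource"]] += 1
    return usage

def gauge_tuning_benefit(observations, resources, apply_tuning, measure_performance):
    """Tune the busiest resource and report the change in the performance measure."""
    usage = resource_usage(observations)
    target = max(resources, key=lambda r: usage.get(r, 0))
    before = measure_performance()
    apply_tuning(target)
    after = measure_performance()
    return target, after - before
```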
Abstract:
A technology is described for a software container recommendation service. An example method may include collecting utilization metrics for an application hosted on a computing instance. The utilization metrics may be a measure of computing resources used by the application. The utilization metrics may be analyzed to determine a level of computing resources for the computing instance used by the application. A software container configuration for the application may be determined based at least in part on the utilization metrics when analysis of the utilization metrics indicates an underutilization of computing resources by the application. The specifications of the software container configuration may then be provided to a customer.
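One possible reading of the underutilization check is sketched below; the 50% utilization threshold, the 20% headroom factor, and the field names are assumptions made for illustration rather than the service's actual rules.

```python
def recommend_container(metrics, instance_capacity, headroom=1.2):
    """metrics: samples like {"cpu": vcpus_used, "mem_mb": mb_used};
    instance_capacity: {"cpu": ..., "mem_mb": ...} for the hosting instance."""
    peak_cpu = max(m["cpu"] for m in metrics)
    peak_mem = max(m["mem_mb"] for m in metrics)
    underutilized = (peak_cpu < 0.5 * instance_capacity["cpu"]
                     and peak_mem < 0.5 * instance_capacity["mem_mb"])
    if not underutilized:
        return None  # keep the current computing instance
    # Size the recommended container from observed peaks plus headroom.
    return {"cpu": round(peak_cpu * headroom, 2), "mem_mb": int(peak_mem * headroom)}
```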
Abstract:
Described herein is a system and method for dynamically managing service-level objectives (SLOs) for workloads of a cluster storage system. Proposed states/solutions of the cluster may be produced and evaluated to select one that achieves the SLOs for each workload. A planner engine may produce a state tree comprising nodes, each node representing a proposed state/solution. New nodes may be added to the state tree based on new solution types that are permitted, or nodes may be removed based on a received time constraint for executing a proposed solution or a client certification of a solution. The planner engine may call an evaluation engine to evaluate proposed states, the evaluation engine using an evaluation function that considers SLO, cost, and optimization goal characteristics to produce a single evaluation value for each proposed state. The planner engine may call a modeler engine that is trained using machine learning techniques.
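One way such an evaluation function could fold SLO attainment, cost, and an optimization goal into a single value per proposed state is sketched below. The weights and the shape of the state dictionary are assumptions, not the engine's actual model.

```python
def evaluate_state(state, w_slo=1.0, w_cost=0.3, w_goal=0.2):
    """Return a single evaluation value for a proposed cluster state."""
    # Fraction of workloads whose predicted latency meets its SLO.
    met = sum(1 for wl in state["workloads"]
              if wl["predicted_latency_ms"] <= wl["slo_latency_ms"])
    slo_score = met / len(state["workloads"])
    # Normalize migration cost into [0, 1) so it acts as a bounded penalty.
    cost_penalty = state["migration_cost"] / (1.0 + state["migration_cost"])
    goal_score = state["load_balance"]  # e.g. 1.0 means perfectly balanced
    return w_slo * slo_score - w_cost * cost_penalty + w_goal * goal_score
```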
Abstract:
A non-transitory computer readable storage medium storing a program of instructions executable by a machine to perform a method of gauging a benefit of a tuning action. The method includes acquiring, using a processor, a set of time series data sampled from an environment of an application.
Abstract:
The sizing of virtual machines is optimized based on projected performance metrics. All virtual machine configuration resources are normalized by a processing device, and the normalized resources for the virtual machine configurations are stored in a catalogue. An application is then profiled to obtain resource demand estimates for each virtual machine configuration, and a base performance is calculated for the application. The base performance is used to predict performance estimates for all virtual machine configurations in the catalogue. Accordingly, the virtual machine configuration having the lowest response time is selected.
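A minimal sketch of catalogue-based selection follows, assuming a simple linear scaling model in which response time varies inversely with normalized CPU capacity; the field names, normalization scheme, and memory-fit check are illustrative assumptions.

```python
def select_vm_config(catalogue, base_perf, demand_norm_mem):
    """catalogue: list of {"name", "norm_cpu", "norm_mem"} with resources normalized
    to a reference configuration; base_perf: measured response time (seconds) on the
    reference configuration (norm_cpu == 1.0)."""
    candidates = [c for c in catalogue if c["norm_mem"] >= demand_norm_mem]
    # Assumed model: response time scales inversely with normalized CPU capacity.
    return min(candidates, key=lambda c: base_perf / c["norm_cpu"])

catalogue = [{"name": "small", "norm_cpu": 0.5, "norm_mem": 0.5},
             {"name": "medium", "norm_cpu": 1.0, "norm_mem": 1.0},
             {"name": "large", "norm_cpu": 2.0, "norm_mem": 2.0}]
print(select_vm_config(catalogue, base_perf=0.8, demand_norm_mem=0.75)["name"])
```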
Abstract:
Embodiments of the present invention provide systems, methods, and computer program products for configuring auto-scaling parameters of a computing environment, as well as alerting a user when auto-scaling operations are not attainable given current operating configurations.
Abstract:
Performance thresholds are defined for operators in a flow graph for a streaming application. A streams manager deploys the flow graph to one or more virtual machines (VMs). The performance of each portion of the flow graph on each VM is monitored. A VM is selected. When the performance of the portion of the flow graph in the selected VM does not satisfy the defined performance threshold(s), a determination is made regarding whether the portion of the flow graph is underperforming or overperforming. When the portion of the flow graph is underperforming, the portion of the flow graph is split into multiple portions that are implemented on multiple VMs. When the portion of the flow graph is overperforming, a determination is made of whether a neighbor VM is also overperforming. When a neighbor VM is also overperforming, the two VMs may be coalesced into a single VM.
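The split-or-coalesce decision for a selected VM could look roughly like the sketch below; the throughput metric, threshold names, and string return values are assumptions chosen only to make the control flow concrete.

```python
def plan_vm_action(vm, neighbor, low_threshold, high_threshold):
    """vm, neighbor: dicts with a measured 'throughput' for their flow-graph portion.
    Returns the action to take for the selected VM."""
    if vm["throughput"] < low_threshold:
        # Underperforming: split this portion of the flow graph across more VMs.
        return "split"
    if vm["throughput"] > high_threshold:
        # Overperforming: coalesce only if a neighbor VM is also overperforming.
        if neighbor is not None and neighbor["throughput"] > high_threshold:
            return "coalesce"
    return "keep"
```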
Abstract:
According to one aspect of the present disclosure, a method and technique for capacity forecasting is disclosed. The method includes: storing, in a memory, resource data associated with an environment, the resource data comprising inventory information of applications, processing resources and storage resources of the environment; and providing a ledger module executable by a processor unit to: create a capacity-associated transaction; identify and link at least one of an application, processing resource and storage resource to the transaction from the resource data; determine an initiation time and duration associated with the transaction; and forecast a change in capacity of at least one linked storage resource for the transaction and a time of the change in capacity.
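A hedged sketch of a ledger-style forecast is given below, in which each transaction yields a projected capacity change for a linked storage resource at a computed effective time; the field names, GB unit, and duration handling are assumptions for illustration.

```python
from datetime import datetime, timedelta

def forecast_capacity(resource_free_gb, transactions, now=None):
    """resource_free_gb: current free capacity per storage resource, in GB.
    transactions: items like {"resource", "capacity_change_gb", "start", "duration_days"}.
    Returns {resource: [(effective_time, projected_free_gb), ...]}."""
    now = now or datetime.now()
    forecast = {}
    for tx in sorted(transactions, key=lambda t: t["start"]):
        res = tx["resource"]
        series = forecast.setdefault(res, [(now, resource_free_gb[res])])
        # The change takes effect once the transaction's duration has elapsed.
        effective = tx["start"] + timedelta(days=tx["duration_days"])
        series.append((effective, series[-1][1] + tx["capacity_change_gb"]))
    return forecast
```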
Abstract:
Methods and arrangements for characterizing software base-station workloads. Input system parameters are mapped to work-determining parameters that determine the computational requirements of a dynamic workload. A synthetic experiment is undertaken to measure the computational requirements determined by the work-determining parameters.
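To illustrate the flow, here is a small sketch that maps assumed input parameters (number of users, bandwidth) to an assumed work-determining parameter (resource blocks) and times a synthetic per-block kernel; the mapping and the kernel are placeholders, not the described characterization method.

```python
import time

def work_determining_params(num_users, bandwidth_mhz):
    # Assumed mapping from input system parameters to a work-determining parameter.
    return {"resource_blocks": int(num_users * bandwidth_mhz * 0.5)}

def synthetic_experiment(params, per_block_ops=10_000):
    """Run a synthetic per-block kernel and return elapsed CPU seconds."""
    start = time.process_time()
    acc = 0
    for _ in range(params["resource_blocks"]):
        for i in range(per_block_ops):
            acc += i * i  # stand-in for per-resource-block signal processing
    return time.process_time() - start

print(synthetic_experiment(work_determining_params(num_users=20, bandwidth_mhz=10)))
```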