Abstract:
A method and system for creating service instances in a computing grid. The method can include scheduling a service in the computing grid to process at least a portion of a requested transaction. At least one additional service related to the scheduled service can be identified, and a load condition can be assessed in the at least one additional service related to the scheduled service. A new instance of the at least one additional service can be created if the load condition exceeds a threshold load. In this way, an enhanced capacity for processing transactions can be established in the related services in advance of a predicted increase in load in the grid.
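A minimal sketch of the load-triggered scaling described above, assuming hypothetical names (GridService, LOAD_THRESHOLD) and a simple fractional load metric that the abstract does not specify:

# Hypothetical sketch: create new instances of related services when their
# load exceeds a threshold, ahead of a predicted increase in grid load.
LOAD_THRESHOLD = 0.8  # assumed threshold load (fraction of capacity)

class GridService:
    def __init__(self, name, load=0.0):
        self.name = name
        self.load = load          # current load condition
        self.instances = 1        # running instances of this service

def schedule_transaction(scheduled_service, related_services):
    """Schedule a service, then pre-scale its related services under heavy load."""
    for related in related_services:
        if related.load > LOAD_THRESHOLD:
            related.instances += 1   # create a new instance ahead of demand
            print(f"Created new instance of {related.name} "
                  f"(load {related.load:.2f} > {LOAD_THRESHOLD})")

# Usage: billing and inventory services are related to the scheduled order service.
order = GridService("order")
schedule_transaction(order, [GridService("billing", 0.9), GridService("inventory", 0.4)])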
Abstract:
Techniques are disclosed for programmatically allocating memory among competing services in a distributed computing environment. Characteristics of web request streams and formulas for cache hit rates and client response times are used to create an objective function for memory allocation, such that maximum benefit can be realized from the memory allocations. When a particular service is allocated more memory, it can store more of its objects in cache, which improves client response time. Optionally, information from service level agreements may be used as input to the memory allocation computations.
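The abstract does not give the hit-rate or response-time formulas; the sketch below substitutes a simple diminishing-returns hit-rate model and a greedy maximization of the objective, with SLA weights standing in for service-level-agreement input:

# Hypothetical sketch of benefit-driven memory allocation among services.
# The concave hit-rate curve and greedy allocation loop are assumptions.
import math

def hit_rate(mem_mb, working_set_mb):
    # Assumed diminishing-returns model: more cache memory -> higher hit rate.
    return 1.0 - math.exp(-mem_mb / working_set_mb)

def allocate(total_mb, services, step_mb=16):
    """Greedily give each chunk of memory to the service that benefits most."""
    alloc = {name: 0 for name in services}
    for _ in range(total_mb // step_mb):
        def gain(name):
            ws, weight = services[name]          # working-set size, SLA weight
            return weight * (hit_rate(alloc[name] + step_mb, ws) - hit_rate(alloc[name], ws))
        best = max(services, key=gain)
        alloc[best] += step_mb
    return alloc

# Usage: (working-set MB, SLA weight) per service; the weight could come from an SLA.
print(allocate(1024, {"catalog": (512, 1.0), "search": (2048, 2.0), "cart": (128, 1.5)}))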
Abstract:
A flexible data mirroring system and method are adapted for use in a data processing system having first and second data storage devices. Upon receiving notification of a file update to be written to the first data storage device, a mirror mode and mirror event associated with the updated file are determined from mirror information that has been provisioned on a per-file, per-directory, per-volume, or similar basis. The file update is mirrored to the second data storage device according to the provisioned mirror mode and mirror event. If the mirror mode is continuous, the mirror operation proceeds immediately. If the mirror mode is discrete, the file update is noted and the mirror operation proceeds following occurrence of the file's mirror event.
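A sketch of the continuous/discrete dispatch, assuming an illustrative provisioning table and event model that the abstract does not detail:

# Hypothetical sketch of per-file mirror provisioning. Mode names and the
# event model are illustrative; the abstract only states that continuous
# updates mirror immediately and discrete updates wait for the mirror event.
MIRROR_INFO = {                      # provisioned per file, directory, or volume
    "/db/ledger.dat": ("continuous", None),
    "/logs/":         ("discrete", "midnight"),
}
pending = []                         # discrete updates awaiting their event

def lookup_mirror_info(path):
    for prefix, info in MIRROR_INFO.items():
        if path.startswith(prefix):
            return info
    return ("discrete", "midnight")  # assumed default provisioning

def on_file_update(path, data, mirror_write):
    mode, event = lookup_mirror_info(path)
    if mode == "continuous":
        mirror_write(path, data)             # mirror to second device immediately
    else:
        pending.append((event, path, data))  # note update; mirror on its event

def on_mirror_event(event, mirror_write):
    for ev, path, data in [p for p in pending if p[0] == event]:
        mirror_write(path, data)             # mirror operation proceeds on the event
        pending.remove((ev, path, data))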
Abstract:
A method and system for processing Service Level Agreement (SLA) terms in a caching component in a storage system. The method can include monitoring cache performance for groups of data in the cache, each group having a corresponding SLA. Overfunded SLAs can be identified according to the monitored cache performance. Consequently, an entry can be evicted from one of the groups corresponding to an identified overfunded SLA. In one aspect of the present invention, the most overfunded SLA can be identified, and an entry can be evicted from the group which corresponds to the most overfunded SLA.
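A sketch of eviction from the most overfunded SLA group, modeling "overfunding" as measured hit rate minus the SLA target since the abstract does not name the performance metric:

# Hypothetical sketch of SLA-aware cache eviction.
class SlaGroup:
    def __init__(self, name, target_hit_rate):
        self.name = name
        self.target = target_hit_rate
        self.hits = self.lookups = 0
        self.entries = []                      # cache entries, oldest first

    def hit_rate(self):
        return self.hits / self.lookups if self.lookups else 0.0

    def overfunding(self):
        return self.hit_rate() - self.target   # > 0 means the SLA is overfunded

def evict_one(groups):
    overfunded = [g for g in groups if g.overfunding() > 0 and g.entries]
    if overfunded:
        victim_group = max(overfunded, key=SlaGroup.overfunding)
        return victim_group.entries.pop(0)     # evict from the most overfunded SLA
    return None

# Usage: bronze exceeds its target by 0.20, gold misses its target, so bronze loses an entry.
gold = SlaGroup("gold", 0.90)
gold.hits, gold.lookups, gold.entries = 80, 100, ["g1", "g2"]
bronze = SlaGroup("bronze", 0.50)
bronze.hits, bronze.lookups, bronze.entries = 70, 100, ["b1"]
print(evict_one([gold, bronze]))   # -> "b1"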
Abstract:
Techniques are disclosed for storing document content in a manner which improves efficiency and/or speed of servicing content requests. Expected and/or observed popularity of stored objects is used to determine where a particular object should be physically placed in a distributed computing network. The disclosed techniques may be used for initially placing objects and/or for subsequently placing objects at different and/or additional locations in the network.
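A sketch of popularity-driven placement, assuming a two-tier edge/origin topology and a request-count cutoff that the abstract leaves unspecified:

# Hypothetical sketch: place objects on edge nodes by expected or observed popularity.
EDGE_NODES = ["edge-us", "edge-eu", "edge-ap"]
ORIGIN = "origin-store"
POPULARITY_CUTOFF = 1000           # assumed request count separating hot from cold objects

def place(object_id, observed_requests, expected_popular=False):
    """Return the locations where the object should be physically stored."""
    if expected_popular or observed_requests >= POPULARITY_CUTOFF:
        return EDGE_NODES + [ORIGIN]   # replicate popular objects close to clients
    return [ORIGIN]                    # keep unpopular objects only at the origin

# Usage: a newly published press release is expected to be popular; an old archive is not.
print(place("press-release-42", observed_requests=0, expected_popular=True))
print(place("archive-1998-q3", observed_requests=12))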
Abstract:
Techniques are disclosed for improving the serving of large objects (equivalently, large files) in distributed computing networks which include network-attached storage ("NAS"). Existing features of Hypertext Transfer Protocol ("HTTP") and of Web server implementations are leveraged to achieve performance improvements in a novel way, and thereby greatly facilitate introduction of the present invention into existing networking environments. In particular, objects meeting certain criteria may be served using "redirect files" in which a redirect status code is used to cause content retrieval requests to be automatically redirected from the requesting client device to the NAS, such that the requested content is served from the NAS rather than through a Web server from a Web server farm.
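A sketch of the redirect idea using Python's standard HTTP server: requests for objects meeting an assumed size criterion receive a 302 pointing at the NAS, so the content is fetched from network-attached storage rather than through the Web server. Host names and the object list are illustrative.

# Hypothetical sketch: redirect large-object requests to the NAS.
from http.server import BaseHTTPRequestHandler, HTTPServer

NAS_BASE = "http://nas.example.com"                          # assumed NAS host
LARGE_OBJECTS = {"/video/launch.mpg", "/iso/install.iso"}    # objects meeting the criteria

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in LARGE_OBJECTS:
            self.send_response(302)                          # redirect status code
            self.send_header("Location", NAS_BASE + self.path)
            self.end_headers()                               # client re-fetches from the NAS
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"served by the Web server farm")

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectHandler).serve_forever()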
Abstract:
The present invention is a grid quorum system, method and apparatus. In a cluster of resources in a computing grid, a resource locking method can include acquiring a temporally limited lock on a grid service in the computing grid. Upon expiration of the temporally limited lock, a renewal of the temporally limited lock can be requested. Subsequently, the temporally limited lock can be renewed if a renewal has been granted by the grid service in response to the request. Notably, the renewing step can include determining whether the cluster has been partitioned into a plurality of sub-clusters. If the cluster has been partitioned, a parent sub-cluster can be identified and the temporally limited lock can be renewed only if a quorum exists in the parent sub-cluster.
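A sketch of lease renewal under partition, assuming the parent sub-cluster is the largest partition and a quorum is a majority of the original cluster; the abstract does not define either rule:

# Hypothetical sketch of renewing a temporally limited lock with a quorum check.
import time

LEASE_SECONDS = 30

class GridLock:
    def __init__(self, cluster_size):
        self.cluster_size = cluster_size
        self.expires_at = time.time() + LEASE_SECONDS   # temporally limited lock

    def renew(self, sub_cluster_sizes):
        """Renew only if the parent sub-cluster still holds a quorum."""
        if len(sub_cluster_sizes) > 1:                  # cluster has been partitioned
            parent = max(sub_cluster_sizes)             # assume parent = largest sub-cluster
            if parent <= self.cluster_size // 2:        # no quorum in the parent sub-cluster
                return False
        self.expires_at = time.time() + LEASE_SECONDS   # renewal granted by the grid service
        return True

# Usage: a 7-node cluster split 4/3 can still renew; split 3/2/2 it cannot.
lock = GridLock(cluster_size=7)
print(lock.renew([4, 3]))       # True: parent sub-cluster of 4 holds a quorum
print(lock.renew([3, 2, 2]))    # False: no sub-cluster holds a quorum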
Abstract:
Under the present invention a biometric reading, an audit point identity and transaction information are collected for each electronic transaction. Upon collection, the biometric reading, audit point identity and transaction information are packaged into an audit packet, which is then encrypted and stored in a log or the like. One or more of the electronic transactions can then be audited using this stored information. Specifically, for the electronic transactions that are to be audited, the corresponding audit packets are retrieved from storage and decrypted. Once decrypted, the biometric readings are compared to each other to determine whether a set (e.g., one or more) of the electronic transactions is potentially fraudulent. Typically, a set of electronic transactions is potentially fraudulent if a plurality of the biometric readings are identical or too similar to each other.
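A sketch of the collect/encrypt/audit flow. The XOR cipher stands in for a real encryption scheme purely to keep the example self-contained, and similarity checking is reduced to exact-match counting; both are assumptions not taken from the abstract.

# Hypothetical sketch of audit-packet collection and fraud screening.
import json
from collections import Counter

KEY = b"demo-key"                      # placeholder secret
audit_log = []

def xor_crypt(data: bytes) -> bytes:
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

def record_transaction(biometric_reading, audit_point_id, txn_info):
    packet = json.dumps({"bio": biometric_reading,
                         "audit_point": audit_point_id,
                         "txn": txn_info})
    audit_log.append(xor_crypt(packet.encode()))        # encrypt the audit packet and store it

def audit():
    readings = [json.loads(xor_crypt(p).decode())["bio"] for p in audit_log]
    # Flag sets of transactions whose biometric readings are identical.
    return {bio: n for bio, n in Counter(readings).items() if n > 1}

record_transaction("fingerprint-hash-A", "ATM-17", {"amount": 200})
record_transaction("fingerprint-hash-A", "ATM-99", {"amount": 450})
print(audit())   # {'fingerprint-hash-A': 2} -> potentially fraudulent set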
Abstract:
Under the present invention, the performance of a set of system resources is monitored in response to incoming request traffic. When a system resource is approaching an overload condition, a corrective action is identified and implemented. Overload thresholds for each system resource and appropriate corrective actions are contained within a management policy. Based on a performance history of the corrective actions, the management policy can be changed/revised.
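A sketch of policy-driven overload handling, with illustrative thresholds, corrective actions, and a simple history-based revision rule that the abstract does not specify:

# Hypothetical sketch: monitor resources against a management policy and
# revise the policy from the performance history of its corrective actions.
management_policy = {
    "cpu":    {"threshold": 0.85, "action": "add_worker"},
    "memory": {"threshold": 0.90, "action": "shrink_cache"},
}
history = []     # (resource, action, helped) records used to revise the policy

def monitor(metrics, apply_action):
    for resource, usage in metrics.items():
        rule = management_policy.get(resource)
        if rule and usage >= rule["threshold"]:          # approaching an overload condition
            helped = apply_action(resource, rule["action"])
            history.append((resource, rule["action"], helped))

def revise_policy():
    """Raise the threshold for actions that rarely help (assumed revision rule)."""
    for resource, rule in management_policy.items():
        outcomes = [helped for res, _, helped in history if res == resource]
        if outcomes and sum(outcomes) / len(outcomes) < 0.5:
            rule["threshold"] = min(0.99, rule["threshold"] + 0.05)

# Usage: CPU usage of 0.92 triggers the corrective action; memory does not.
monitor({"cpu": 0.92, "memory": 0.40}, lambda res, act: True)
revise_policy()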
Abstract:
The present invention generally relates to a method, system and program product for distributing portal content processing. Specifically, a request for portal content is received on a surrogate system and then passed to a portal system. The portal system will obtain and aggregate a first type of the requested content, and then package the aggregated content into a response. The response will also include placeholders that correspond to the remaining type of the requested content. The response will then be transmitted to the surrogate system, which will, based upon the placeholders, obtain the remaining type of portal content. Once obtained, the remaining type of portal content will replace the placeholders in the response, and the response will be rendered for the requesting portal user.
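A sketch of the portal/surrogate split, with illustrative placeholder markup and fragment names standing in for the two content types:

# Hypothetical sketch: the portal aggregates one content type and leaves
# placeholders; the surrogate fills them in before rendering the response.
def portal_system(request):
    """Aggregate the first type of content; leave placeholders for the rest."""
    return ("<portal-page>"
            "<section>aggregated portal content for " + request + "</section>"
            "<placeholder id='news'/><placeholder id='weather'/>"
            "</portal-page>")

def surrogate_system(request, fetch_remaining):
    response = portal_system(request)                   # pass the request to the portal system
    for pid in ("news", "weather"):
        fragment = fetch_remaining(pid)                 # surrogate obtains the remaining content
        response = response.replace(f"<placeholder id='{pid}'/>", fragment)
    return response                                     # rendered for the requesting portal user

# Usage with a stand-in fetcher for the remaining content type.
print(surrogate_system("home", lambda pid: f"<section>{pid} fragment</section>"))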