Abstract:
A system and method can handle storage events in a distributed data grid. The distributed data grid cluster includes a plurality of cluster nodes storing data partitions distributed throughout the cluster, each cluster node being responsible for a set of partitions. A service thread, executing on at least one of said cluster nodes in the distributed data grid, is responsible for handling one or more storage events. The service thread can use a worker thread to accomplish synchronous event handling without blocking the service thread.
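As a rough illustration of the idea, the hypothetical Java sketch below shows a service thread handing a storage event to a worker-thread pool and receiving a future, so the event can be handled with respect to the triggering operation without blocking the service thread. StorageEvent, onStorageEvent and the pool are invented names for illustration, not an actual data grid API.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch only: StorageEvent and the handler below are hypothetical
// names, not an actual data grid API.
public class StorageEventExample {

    /** A hypothetical storage event raised when a partition's contents change. */
    record StorageEvent(int partitionId, String description) {}

    /** Worker pool used so the service thread is never blocked by event handling. */
    private final ExecutorService workerPool = Executors.newFixedThreadPool(4);

    /**
     * Called on the service thread. The event is handed to a worker thread and a
     * future is returned, so the service thread can keep processing requests while
     * the event is handled; the operation that raised the event completes when the
     * future completes.
     */
    public CompletableFuture<Void> onStorageEvent(StorageEvent event) {
        return CompletableFuture.runAsync(() -> {
            // Event-handling logic (e.g., index maintenance, auditing) runs here.
            System.out.println("Handling " + event + " on " + Thread.currentThread().getName());
        }, workerPool);
    }

    public static void main(String[] args) {
        StorageEventExample node = new StorageEventExample();
        node.onStorageEvent(new StorageEvent(7, "partition transferred")).join();
        node.workerPool.shutdown();
    }
}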
Abstract:
A server-side event model provides a general-purpose event framework which simplifies the server-side programming model in a distributed data grid storing data partitions distributed throughout a cluster of nodes. A system provides event interceptors which handle events associated with operations and maps the event interceptors to event dispatchers placed in the cluster. Advantageously, the system supports handling critical path events without the need for interaction from the client side, thereby avoiding unnecessary delays while waiting for client responses. Additionally, the system can defer completion of an operation in the distributed data grid pending completion of event handling by an event interceptor. The system enables the data grid to employ more types of events and to define different event interceptors for handling the events while avoiding client interaction overhead.
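The hypothetical sketch below illustrates the general shape of such a model: interceptors are mapped to a dispatcher, and the operation that raised the event does not complete until every interceptor has handled it. EventInterceptor, EventDispatcher and EntryEvent are invented names for illustration and are not an actual product API.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: hypothetical names showing the general shape of a
// server-side event model, not an actual product API.
public class EventModelSketch {

    /** A hypothetical event raised by an operation (e.g., an entry insert). */
    record EntryEvent(String type, String key, Object value) {}

    /** Interceptors handle events on the server side, with no client round trip. */
    interface EventInterceptor {
        void onEvent(EntryEvent event);
    }

    /** A dispatcher to which interceptors are mapped; it invokes them in order. */
    static class EventDispatcher {
        private final List<EventInterceptor> interceptors = new ArrayList<>();

        void addInterceptor(EventInterceptor interceptor) {
            interceptors.add(interceptor);
        }

        /** The calling operation does not complete until every interceptor returns. */
        void dispatch(EntryEvent event) {
            for (EventInterceptor interceptor : interceptors) {
                interceptor.onEvent(event);
            }
        }
    }

    public static void main(String[] args) {
        EventDispatcher dispatcher = new EventDispatcher();
        dispatcher.addInterceptor(e -> System.out.println("audit: " + e));
        // Completion of the insert is deferred until dispatch() returns.
        dispatcher.dispatch(new EntryEvent("INSERTING", "order-1", "payload"));
        System.out.println("operation completed");
    }
}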
Abstract:
A processing pattern is described for dispatching and executing tasks in a distributed computing grid, such as a cluster network. The grid includes a plurality of computer nodes that store a set of data and perform operations on that data. The grid provides an interface that allows clients to submit tasks to the cluster for processing. The interface can be used to establish a session between the client and the cluster, and this session is then used to submit a task for processing by the plurality of computer nodes of the cluster. A dispatcher receives the submission of the task over the interface and routes the task to at least one node in the cluster that is designated to process the task. A task processor then processes the task on the designated node(s), generates a submission outcome, and indicates to the client that the submission outcome is available.
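A minimal sketch of the flow, with invented names (Node, Dispatcher), is shown below: a client submits a task, a dispatcher routes it to a designated node, the node's task processor executes it, and the outcome is made available to the client as a future. This is an assumption-laden illustration of the submit/route/process/outcome idea, not the described interface itself.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch only: Node, Dispatcher and the task below are hypothetical
// names showing the submit -> route -> process -> outcome flow, not an actual grid API.
public class ProcessingPatternSketch {

    /** Stands in for one cluster node that can process submitted tasks. */
    static class Node {
        final String name;
        final ExecutorService taskProcessor = Executors.newSingleThreadExecutor();
        Node(String name) { this.name = name; }
    }

    /** Routes each submitted task to a node designated to process it. */
    static class Dispatcher {
        private final Node[] nodes;
        private int next;
        Dispatcher(Node... nodes) { this.nodes = nodes; }

        <T> Future<T> submit(Callable<T> task) {
            Node target = nodes[next++ % nodes.length];  // simple round-robin designation
            return target.taskProcessor.submit(task);    // outcome made available as a Future
        }
    }

    public static void main(String[] args) throws Exception {
        Node node1 = new Node("node-1");
        Node node2 = new Node("node-2");
        Dispatcher dispatcher = new Dispatcher(node1, node2);

        // The client submits a task and later retrieves the submission outcome.
        Future<Integer> outcome = dispatcher.submit(() -> 6 * 7);
        System.out.println("submission outcome: " + outcome.get());

        node1.taskProcessor.shutdown();
        node2.taskProcessor.shutdown();
    }
}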
Abstract:
Push replication techniques are described for use in an in-memory data grid. When applications on a cluster perform insert, update, or delete operations in the cache, a push replication provider asynchronously pushes the updates from the source cluster to one or more remote destination clusters. The push replication provider includes a pluggable internal transport to send the updates to the destination cluster; this pluggable transport can be switched to employ a different communication service or protocol. A publishing transformer can chain multiple filters and apply them to the stream of updates from the source cluster to the destination cluster. A batch publisher can be used to receive batches of multiple updates and replicate those batches to the destination cluster. XML-based configuration can be provided to configure the push replication techniques on a cluster. A number of cluster topologies can be utilized, including active/passive, active/active, multi-site active/passive, multi-site active/active, and centralized replication arrangements.
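The hypothetical sketch below illustrates the main moving parts described above: a pluggable transport interface, a chain of filters applied to a stream of updates, and batch publication to a destination cluster. All names and structure are assumptions made for illustration only.

import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch only: the provider, Transport and filters below are hypothetical,
// not an actual product API.
public class PushReplicationSketch {

    /** A single cache update (insert/update/delete) to be replicated. */
    record Update(String key, Object value, String operation) {}

    /** Pluggable transport; implementations could use different services or protocols. */
    interface Transport {
        void send(List<Update> batch);
    }

    private final Transport transport;
    private final List<Predicate<Update>> filterChain;

    PushReplicationSketch(Transport transport, List<Predicate<Update>> filterChain) {
        this.transport = transport;
        this.filterChain = filterChain;
    }

    /** Applies every filter in the chain, then pushes the surviving updates as one batch. */
    void replicate(List<Update> updates) {
        List<Update> batch = updates.stream()
                .filter(u -> filterChain.stream().allMatch(f -> f.test(u)))
                .toList();
        if (!batch.isEmpty()) {
            transport.send(batch);   // asynchronous sending omitted for brevity
        }
    }

    public static void main(String[] args) {
        PushReplicationSketch provider = new PushReplicationSketch(
                batch -> System.out.println("sending batch to destination cluster: " + batch),
                List.of(u -> !"delete".equals(u.operation()),     // drop deletes
                        u -> u.key().startsWith("order-")));      // only replicate orders
        provider.replicate(List.of(
                new Update("order-1", "A", "insert"),
                new Update("cart-9", "B", "insert"),
                new Update("order-2", null, "delete")));
    }
}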
Abstract:
An event distribution pattern is described for use with a distributed data grid. The grid can be composed of a cluster of computer devices having a cache for storing data entries. An event distributor residing on at least one of those computer devices provides a domain for sending events to a desired end point destination and also provides store-and-forward semantics for ensuring asynchronous delivery of those events. An event channel controller resides as an entry in the cache on at least one of the computers in the cluster. This event channel controller receives the events defined by an application from the event distributor and provides the events to a set of event channels. Each event channel controller can include multiple event channel implementations for distributing the events to different destinations. The destinations can include local caches, remote caches, standard streams, files, and JMS components.
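A minimal sketch of the distributor-to-channels fan-out is shown below; EventDistributor, EventChannelController and EventChannel are invented names, and the store-and-forward behaviour is omitted for brevity.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: hypothetical names showing the distributor -> controller
// -> channels fan-out, not an actual product API.
public class EventDistributionSketch {

    record Event(String name, Object payload) {}

    /** A channel delivers events to one kind of destination (cache, file, JMS, ...). */
    interface EventChannel {
        void distribute(Event event);
    }

    /** Receives events from the distributor and forwards them to each channel. */
    static class EventChannelController {
        private final List<EventChannel> channels = new ArrayList<>();
        void addChannel(EventChannel channel) { channels.add(channel); }
        void onEvent(Event event) { channels.forEach(c -> c.distribute(event)); }
    }

    /** Accepts application events and hands them to the controller. */
    static class EventDistributor {
        private final EventChannelController controller;
        EventDistributor(EventChannelController controller) { this.controller = controller; }
        void send(Event event) { controller.onEvent(event); }
    }

    public static void main(String[] args) {
        EventChannelController controller = new EventChannelController();
        controller.addChannel(e -> System.out.println("to remote cache: " + e));
        controller.addChannel(e -> System.out.println("to standard out: " + e));
        new EventDistributor(controller).send(new Event("order-created", "order-1"));
    }
}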
Abstract:
A live object pattern is described that enables a distributed cache to store live objects as data entries thereon. A live object is a data entry stored in the distributed cache which represents a particular function or responsibility. When a live object arrives in the cache on a particular cluster server, a set of interfaces is called back to inform the live object that it has arrived at that server and that it should begin performing its functions. A live object thus differs from “dead” data entries because a live object performs a set of functions, can be started and stopped, and can interact with other live objects in the distributed cache. Because live objects are backed up across the cluster just like normal data entries, the functional components of the system are more highly available and are easily transferred to another server's cache in case of failure.
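The hypothetical sketch below shows the callback idea in miniature: a toy cache calls onArrived when a live object is inserted and onDeparting when it is removed, so the entry can start and stop performing its function. The interface and cache here are illustrative assumptions, not the described implementation.

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: LiveObject and LiveObjectCache are hypothetical names
// showing how a cache might call back into an entry when it arrives on a server.
public class LiveObjectSketch {

    /** A data entry that also carries behaviour: it is started when it arrives. */
    interface LiveObject {
        void onArrived(String serverName);    // begin performing its function
        void onDeparting(String serverName);  // stop before being moved or removed
    }

    /** A toy cache that invokes the live-object callbacks on insert and remove. */
    static class LiveObjectCache {
        private final String serverName;
        private final Map<String, LiveObject> entries = new HashMap<>();
        LiveObjectCache(String serverName) { this.serverName = serverName; }

        void put(String key, LiveObject object) {
            entries.put(key, object);
            object.onArrived(serverName);     // entry is told it now lives here
        }

        void remove(String key) {
            LiveObject object = entries.remove(key);
            if (object != null) object.onDeparting(serverName);
        }
    }

    public static void main(String[] args) {
        LiveObjectCache cache = new LiveObjectCache("server-1");
        cache.put("scheduler", new LiveObject() {
            public void onArrived(String server) { System.out.println("started on " + server); }
            public void onDeparting(String server) { System.out.println("stopped on " + server); }
        });
        cache.remove("scheduler");
    }
}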
Abstract:
A system and method can provide a server-side event model in a distributed data grid with a plurality of cluster nodes storing data partitions distributed throughout the cluster, each cluster node being responsible for a set of partitions. The system can map one or more event interceptors to an event dispatcher placed in the cluster. The one or more event interceptors can handle at least one event dispatched from the event dispatcher, wherein the at least one event is associated with an operation in the distributed data grid. The system can defer completion of the operation in the distributed data grid pending completion of the handling of the at least one event by said one or more event interceptors.
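As a rough illustration of the deferral described above, the sketch below completes a put only after every interceptor mapped to the dispatcher has finished handling the event; all names and the futures-based wiring are hypothetical assumptions, not an actual product API.

import java.util.List;
import java.util.concurrent.CompletableFuture;

// Illustrative sketch only: the operation's result is not made visible until every
// mapped interceptor has handled the event, i.e. completion is deferred on handling.
public class DeferredCompletionSketch {

    record Event(String type, String key) {}

    interface EventInterceptor {
        CompletableFuture<Void> onEvent(Event event);
    }

    /** Runs all interceptors mapped to this dispatcher, then completes the operation. */
    static CompletableFuture<String> putWithEvents(String key, String value,
                                                   List<EventInterceptor> interceptors) {
        Event event = new Event("INSERTING", key);
        CompletableFuture<?>[] handled = interceptors.stream()
                .map(i -> i.onEvent(event))
                .toArray(CompletableFuture[]::new);
        // The put does not complete until the interceptors have finished handling the event.
        return CompletableFuture.allOf(handled).thenApply(ignored -> "stored " + key + "=" + value);
    }

    public static void main(String[] args) {
        EventInterceptor audit = e -> CompletableFuture.runAsync(
                () -> System.out.println("audit " + e));
        System.out.println(putWithEvents("k1", "v1", List.of(audit)).join());
    }
}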