Abstract:
An in-memory data grid (IMDG) server includes an I/O interface for transmitting and receiving data over a network. A random access memory (RAM) is configured with the IMDG, which stores plural different data sets that may be requested for retrieval by applications that can connect to the IMDG server through the network. A processor receives requests for retrieval of data from the IMDG data sets and sends the requested data to the requesting application. The IMDG server also includes a request prioritizer that determines, when two or more applications are competing applications, defined as applications having requests co-pending at the IMDG server to retrieve one or more of the data sets, which of the competing applications has the highest priority relative to the other competing applications. The request prioritizer causes a data set requested by the application with the highest priority to be handled before requests from the other competing applications.
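The following minimal sketch illustrates the prioritization behavior described above, assuming a priority-ordered queue of co-pending requests; all identifiers (RequestPrioritizer, Request, applicationId, dataSetKey) are hypothetical and not drawn from the abstract.

```java
import java.util.concurrent.PriorityBlockingQueue;

/** Minimal sketch: co-pending retrieval requests are queued, and the request
 *  from the highest-priority application is handled first. All names here
 *  are illustrative, not taken from the described embodiments. */
public class RequestPrioritizer {
    /** A pending retrieval request from a connected application. */
    record Request(String applicationId, int priority, String dataSetKey)
            implements Comparable<Request> {
        @Override public int compareTo(Request other) {
            return Integer.compare(other.priority, this.priority); // higher priority first
        }
    }

    private final PriorityBlockingQueue<Request> pending = new PriorityBlockingQueue<>();

    public void submit(Request r) { pending.add(r); }

    /** Returns the co-pending request whose application has the highest priority. */
    public Request next() throws InterruptedException { return pending.take(); }

    public static void main(String[] args) throws InterruptedException {
        RequestPrioritizer p = new RequestPrioritizer();
        p.submit(new Request("app-A", 1, "orders"));
        p.submit(new Request("app-B", 5, "inventory"));
        System.out.println(p.next()); // app-B's request is served before app-A's
    }
}
```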
Abstract:
Certain example embodiments relate to memory management techniques that enable users to “pin” elements to particular storage tiers (e.g., RAM, SSD, HDD, tape, or the like). Once pinned, elements are not moved from tier-to-tier during application execution. A memory manager, working with at least one processor, receives requests to store and retrieve data during application execution. Each request is handled using a non-transitory computer readable storage medium (rather than a transitory computer readable storage medium), if the associated data is part of a data cache that is pinned to the non-transitory computer readable storage medium, or if the associated data itself is pinned to the non-transitory computer readable storage medium. If neither condition applies, the memory manager determines which one of the non-transitory and the transitory computer readable storage mediums should be used in handling the respective received request, and handles the request accordingly.
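As a rough illustration of the pinning checks, here is a sketch assuming just two tiers (RAM standing in for the transitory medium, SSD for the non-transitory one); the names MemoryManagerSketch, pinnedCaches, and pinnedKeys are invented for this example, and the fallback placement heuristic is a deliberate placeholder.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal sketch of tier selection with "pinning", under assumed names.
 *  Real tiered stores expose considerably richer configuration. */
public class MemoryManagerSketch {
    enum Tier { RAM, SSD }  // transitory vs. non-transitory, simplified to two tiers

    private final Set<String> pinnedCaches = ConcurrentHashMap.newKeySet();
    private final Set<String> pinnedKeys   = ConcurrentHashMap.newKeySet();

    /** Decide which tier handles a store/retrieve request for key in the given cache. */
    Tier tierFor(String cache, String key) {
        // Pinned data never migrates: honor cache-level pins, then element-level pins.
        if (pinnedCaches.contains(cache) || pinnedKeys.contains(key)) {
            return Tier.SSD;            // pinned to the non-transitory medium
        }
        return heuristicTier(key);      // otherwise the manager chooses freely
    }

    private Tier heuristicTier(String key) {
        return key.hashCode() % 2 == 0 ? Tier.RAM : Tier.SSD; // placeholder policy
    }

    public static void main(String[] args) {
        MemoryManagerSketch m = new MemoryManagerSketch();
        m.pinnedKeys.add("session-42");
        System.out.println(m.tierFor("users", "session-42")); // SSD: the element pin wins
    }
}
```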
Abstract:
Certain example embodiments described herein relate to techniques for processing XML documents of potentially very large sizes. For instance, certain example embodiments parse a potentially large XML document, store the parsed data and some associated metadata in multiple independent blocks or partitions, and instantiate only the particular object model object requested by a program. By including logical references rather than physical memory addresses in such pre-parsed partitions, certain example embodiments make it possible to move the partitions through a caching storage hierarchy without necessarily having to adjust or encode memory references, thereby advantageously enabling dynamic usage of the created partitions and making it possible to cache an arbitrarily large document while consuming a limited amount of program memory. Such techniques may be extended to enable atomic updates to be processed efficiently, e.g., by maintaining commit level information in a partition list and optionally implementing document shadowing.
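A minimal sketch of the logical-reference idea follows, assuming a reference is a (partition id, slot) pair resolved through a cache that can reload partitions on demand; all names (PartitionedXmlSketch, NodeRef, loadFromTier) are illustrative rather than taken from the described embodiments.

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of logical (position-independent) references: a node is
 *  addressed by (partitionId, slot), not by a memory pointer, so partitions
 *  can be evicted, cached, and reloaded without rewriting any references. */
public class PartitionedXmlSketch {
    record NodeRef(int partitionId, int slot) {}

    static class Partition {
        final String[] nodes;  // pre-parsed node payloads (greatly simplified)
        Partition(String... nodes) { this.nodes = nodes; }
    }

    private final Map<Integer, Partition> cache = new HashMap<>();

    /** Resolve a logical reference, loading the partition on demand. */
    String resolve(NodeRef ref) {
        Partition p = cache.computeIfAbsent(ref.partitionId(), this::loadFromTier);
        return p.nodes[ref.slot()];  // only the requested node is materialized
    }

    private Partition loadFromTier(int id) {
        // Stand-in for fetching a serialized partition from a lower storage tier.
        return new Partition("<order id='1'/>", "<order id='2'/>");
    }

    public static void main(String[] args) {
        System.out.println(new PartitionedXmlSketch().resolve(new NodeRef(0, 1)));
    }
}
```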
Abstract:
A computer system, a computer-readable non-transitory medium, and/or a computer-implemented method generates analytics applicable to data of an undetermined structure and type. A processor device receives data formatted in an undetermined structure. In a cross filter model processor, the processor device dynamically discovers, in response to receiving the data, the structure and the data type of the data that was received in the undetermined structure. In response to the structure and the data type discovered by the cross filter model processor, the processor device determines which of a plurality of analytic queries are applicable to the data.
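To make the discovery step concrete, here is a small sketch, under the assumption that structure discovery can be approximated by inspecting the field names and value types of incoming records; the names (SchemaDiscoverySketch, discover) are hypothetical, and a real cross filter model processor would be considerably richer.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Minimal sketch: infer a coarse schema from untyped records, then filter
 *  the analytic queries that apply to the discovered types. */
public class SchemaDiscoverySketch {
    /** Infer a coarse type per field by inspecting received values. */
    static Map<String, String> discover(List<Map<String, Object>> records) {
        Map<String, String> schema = new LinkedHashMap<>();
        for (Map<String, Object> rec : records) {
            rec.forEach((field, v) -> schema.merge(field,
                    v instanceof Number ? "numeric" : "categorical",
                    (a, b) -> a.equals(b) ? a : "mixed"));
        }
        return schema;
    }

    public static void main(String[] args) {
        var schema = discover(List.of(
                Map.of("price", 9.99, "region", "EU"),
                Map.of("price", 12, "region", "US")));
        System.out.println(schema);  // e.g. {price=numeric, region=categorical}
        // A numeric aggregate (e.g., an average) applies only to "numeric" fields:
        schema.forEach((f, t) -> {
            if (t.equals("numeric")) System.out.println("AVG(" + f + ") applies");
        });
    }
}
```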
Abstract:
Certain example embodiments provide efficient policy-based access to data stored in memory tiers, including volatile local in-process (L1) cache memory of an application and at least one managed (e.g., non-volatile) in-memory (L2) cache. Operations include receiving a request from a user for access to a data element in L2; detecting whether a copy of the data element is in L1; if not, copying the data element and its access policy from L2 to L1 and providing the user with access to the copy of the data element from L1 if the access policy allows access to the user; and if so, determining, by referring to the copy of the access policy stored in L1, whether the user is allowed to access the data element, and, if the user is allowed to access the data element, providing the user with access to the copy of the data element from the L1 cache memory.
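A minimal sketch of this read path follows, assuming a per-entry access policy that travels with the data element from L2 to L1; the Entry record and the role-based policy check are invented simplifications of the access policies described above.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal sketch of policy-checked reads through an L1 (in-process) cache
 *  backed by a managed L2 cache. Names and the policy model are illustrative. */
public class TieredPolicyCacheSketch {
    record Entry(String value, String allowedRole) {}  // element plus its access policy

    private final Map<String, Entry> l1 = new ConcurrentHashMap<>();
    private final Map<String, Entry> l2 = new ConcurrentHashMap<>();

    String read(String key, String userRole) {
        Entry e = l1.get(key);
        if (e == null) {                 // L1 miss: copy element AND policy from L2
            e = l2.get(key);
            if (e == null) return null;
            l1.put(key, e);
        }
        // The policy travels with the element, so the check is local to L1.
        return e.allowedRole().equals(userRole) ? e.value() : "ACCESS DENIED";
    }

    public static void main(String[] args) {
        TieredPolicyCacheSketch c = new TieredPolicyCacheSketch();
        c.l2.put("salary:7", new Entry("92000", "hr"));
        System.out.println(c.read("salary:7", "hr"));      // 92000 (copied L2 -> L1)
        System.out.println(c.read("salary:7", "intern"));  // ACCESS DENIED (checked in L1)
    }
}
```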
Abstract:
Certain example embodiments described herein relate to techniques for dynamically selecting rule processing modes. The processing mode does not need to be specified during rule design/authoring. Two sets of artifacts may be generated to support a desired processing mode. This may occur in the designer's local workspace, e.g., so that rule invocation can be tested locally. Additionally, or alternatively, both sets of artifacts may be installed on the rule engine running on a remote server when the project is deployed. The designer need not be aware that both sets of artifacts are being generated. In certain example embodiments, the designer may have the ability to sequence rules within metaphors (or decision entities such as decision tables), and/or the ability to sequence metaphors within rule sets. During rule invocation, a parameter may be provided to indicate the processing mode (e.g., sequential or inferential) to be used by the rule engine.
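The following sketch illustrates how a single rule set might honor a processing-mode parameter supplied at invocation time, with sequential mode firing each rule once in authored order and inferential mode re-firing rules until a fixed point; the Mode, Fact, and invoke names are hypothetical, and real inferential engines use far more sophisticated matching (e.g., Rete networks) than this loop.

```java
import java.util.List;
import java.util.function.UnaryOperator;

/** Minimal sketch: one rule set, two execution behaviors, and a mode
 *  parameter provided at invocation time. All names are illustrative. */
public class RuleModeSketch {
    enum Mode { SEQUENTIAL, INFERENTIAL }

    record Fact(int amount, boolean vip, double discount) {}

    static Fact invoke(List<UnaryOperator<Fact>> rules, Fact f, Mode mode) {
        return switch (mode) {
            case SEQUENTIAL -> {
                for (var r : rules) f = r.apply(f);   // fire once, in authored order
                yield f;
            }
            case INFERENTIAL -> {
                Fact prev;                            // re-fire until a fixed point
                do { prev = f; for (var r : rules) f = r.apply(f); } while (!f.equals(prev));
                yield f;
            }
        };
    }

    public static void main(String[] args) {
        List<UnaryOperator<Fact>> rules = List.of(
                f -> f.vip() && f.discount() == 0.05 ? new Fact(f.amount(), f.vip(), 0.10) : f,
                f -> f.amount() > 100 && f.discount() == 0.0 ? new Fact(f.amount(), f.vip(), 0.05) : f);
        Fact start = new Fact(150, true, 0.0);
        System.out.println(invoke(rules, start, Mode.SEQUENTIAL).discount());  // 0.05: one pass
        System.out.println(invoke(rules, start, Mode.INFERENTIAL).discount()); // 0.1: chained firing
    }
}
```

Note how the same two rules yield different results: in sequential mode the first rule never sees the discount set by the second, while inferential mode re-evaluates until no rule fires, so rule chaining completes.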
Abstract:
Certain example embodiments provide a generic integration framework for connecting on-premises applications with software as a service (SaaS) applications, and/or for integrating the same. The framework of certain example embodiments involves a layered approach (including a Connector Development Kit, connection factory, metadata handlers, and connector services) that helps to, among other things, allow customization of applications in multi-tenant architectures. Design-time wizards help create runtime artifacts and, during runtime, the connector service helps serve as an intermediary between the on-premises application and the cloud service, thereby hiding the complexity of the specific cloud providers. Certain example embodiments advantageously provide a generic and well-integrated solution for connecting an on-premises application to a cloud service in connection with existing containers.
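As an architectural sketch, the intermediary role of the connector service might look like the following, where the on-premises application codes against a uniform interface and a factory resolves a provider-specific implementation by connection alias; every name here (ConnectorService, ExampleSaasConnector, connectionFactory) is invented for illustration and does not reflect the framework's actual API.

```java
import java.util.Map;

/** Minimal sketch of a connector service acting as an intermediary between
 *  an on-premises caller and a specific cloud/SaaS provider. Real frameworks
 *  add metadata handlers, connection pooling, authentication, and more. */
public class ConnectorSketch {
    /** Uniform contract the on-premises application codes against. */
    interface ConnectorService {
        Map<String, String> invoke(String operation, Map<String, String> payload);
    }

    /** One provider-specific implementation; others can be swapped in freely. */
    static class ExampleSaasConnector implements ConnectorService {
        @Override public Map<String, String> invoke(String op, Map<String, String> payload) {
            // Real code would translate the call to the provider's REST/SOAP API here,
            // hiding that complexity from the caller.
            return Map.of("status", "ok", "op", op);
        }
    }

    /** Hypothetical factory: resolves a connector by logical connection alias. */
    static ConnectorService connectionFactory(String connectionAlias) {
        return new ExampleSaasConnector();
    }

    public static void main(String[] args) {
        ConnectorService crm = connectionFactory("tenant-a/crm");
        System.out.println(crm.invoke("createLead", Map.of("name", "Ada")));
    }
}
```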
Abstract:
The example embodiments disclosed herein relate to application integration techniques and, more particularly, to application integration techniques built around the publish-and-subscribe model (or one of its variants). In certain example embodiments, a publishing application and first and second broker clusters are provided. Each broker cluster comprises a plurality of brokers, and each broker is configured to relay messages from the publishing application to at least one subscribing application. A composite cluster connection is associated with the publishing application, and cluster connections are associated with the composite cluster connection. A message generated by the publishing application is sent to the broker clusters in accordance with a user-defined composite policy. The message is routed from the composite cluster connection to at least one cluster connection based on a first policy layer, and from the at least one cluster connection to at least one broker based on a second policy layer.
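A minimal sketch of the two policy layers follows, with a trivial fan-out policy at the composite level and a hash-based broker pick at the cluster-connection level; both policies and all names are illustrative stand-ins for the user-defined policies described above.

```java
import java.util.List;

/** Minimal sketch of two-layer, policy-driven routing: a composite policy
 *  selects cluster connection(s); each cluster connection's own policy then
 *  selects a broker. Both policies here are trivial placeholders. */
public class CompositeRoutingSketch {
    record Broker(String name) {
        void relay(String msg) { System.out.println(name + " relays: " + msg); }
    }

    record ClusterConnection(String name, List<Broker> brokers) {
        void send(String msg) {
            // Second policy layer: e.g., round-robin or failover; here, pick by hash.
            brokers.get(Math.floorMod(msg.hashCode(), brokers.size())).relay(msg);
        }
    }

    public static void main(String[] args) {
        var east = new ClusterConnection("east", List.of(new Broker("e1"), new Broker("e2")));
        var west = new ClusterConnection("west", List.of(new Broker("w1")));
        List<ClusterConnection> composite = List.of(east, west);

        String msg = "order#17";
        // First policy layer: e.g., "publish to all clusters" (a fan-out policy).
        for (ClusterConnection cc : composite) cc.send(msg);
    }
}
```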
Abstract:
Certain example embodiments relate to techniques for detecting anomalies in streaming data. More particularly, certain example embodiments use an approach that combines both unsupervised and supervised machine learning techniques to create a shared anomaly detection model in connection with a modified k-means clustering algorithm and advantageously also enables concept drift to be taken into account. The number of clusters k need not be known in advance, and it may vary over time. Models are continually trainable as a result of the dynamic reception of data over an unknown and potentially indefinite time period, and clusters can be built incrementally and in connection with an updatable distance threshold that indicates when a new cluster is to be created. Distance thresholds also are dynamic and adjustable over time.
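To illustrate the incremental, threshold-driven clustering, here is a one-dimensional sketch: a point within the distance threshold of its nearest centroid updates that centroid online, while a point outside every threshold spawns a new cluster and can be flagged as anomalous. The fixed threshold and simple online-mean update are simplifications of the adjustable thresholds and drift handling described above, and all names are invented for this example.

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal sketch of threshold-driven incremental clustering for streaming
 *  anomaly detection: k is not fixed in advance, centroids update online,
 *  and a point far from every cluster spawns a new one (and is flagged). */
public class StreamingClusterSketch {
    static class Cluster { double centroid; long n = 1; Cluster(double c) { centroid = c; } }

    private final List<Cluster> clusters = new ArrayList<>();
    private final double threshold;  // a real system would adjust this over time

    StreamingClusterSketch(double threshold) { this.threshold = threshold; }

    /** Returns true if the point looked anomalous (no cluster within threshold). */
    boolean observe(double x) {
        Cluster best = null; double bestDist = Double.MAX_VALUE;
        for (Cluster c : clusters) {
            double d = Math.abs(x - c.centroid);
            if (d < bestDist) { bestDist = d; best = c; }
        }
        if (best == null || bestDist > threshold) {
            clusters.add(new Cluster(x));   // build clusters incrementally
            return best != null;            // far from all known behavior: flag it
        }
        best.n++;                           // online mean update (tracks slow drift)
        best.centroid += (x - best.centroid) / best.n;
        return false;
    }

    public static void main(String[] args) {
        var model = new StreamingClusterSketch(1.0);
        for (double x : new double[]{10.1, 10.3, 9.8, 10.0}) model.observe(x);
        System.out.println(model.observe(55.0));  // true: likely anomaly, new cluster formed
    }
}
```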