Abstract:
A framework or infrastructure (extensibility framework/infrastructure) for extending the indexing capabilities of an event processing system. The capabilities of an event processing system may be extended to support indexing schemes, including related data types and operations, which are not natively supported by the event processing system. The extensibility is enabled by one or more plug-in extension components called data cartridges.
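The following is a minimal Java sketch of the kind of indexing contract such a plug-in data cartridge might expose to the event processing engine; the interface and method names are illustrative assumptions rather than an actual product API.

// Hypothetical contract through which a data cartridge could supply an index
// scheme (with its own key type and operations) that the engine does not
// support natively. Names here are assumptions for illustration only.
import java.util.List;

public interface IndexExtension<K, E> {
    /** Build the index structure for the extension-defined key type
        (for example, a geometry or full-text key). */
    void create();

    /** Insert an event into the index as it arrives on the stream. */
    void insert(K key, E event);

    /** Remove an event when it leaves the stream window. */
    void delete(K key, E event);

    /** Evaluate an extension-defined operation (for example, "contains" or
        "nearestNeighbor") against the index and return the matching events. */
    List<E> scan(String operation, K queryKey);
}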
Abstract:
A framework for extending the capabilities of an event processing system using one or more plug-in components referred to herein as data cartridges. In one set of embodiments, the data cartridge framework described herein can enable an event processing system to support one or more extension languages that are distinct from the native event processing language supported by the system. For example, certain “extension language” data cartridges can be provided that enable an event processing system to support complex data types and associated methods/operations that are common in object-oriented languages, but are not common in event processing languages. In these embodiments, an event processing system can access an extension language data cartridge to compile and execute queries that are written using a combination of the system's native event processing language and the extension language.
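As a rough illustration, the sketch below shows one way an extension-language cartridge could be modeled in Java, with the compiler delegating type resolution and expression compilation to the cartridge; the interface and method names are assumptions made for this example, not the framework's actual API.

// Hypothetical contract a compiler might use to hand off the portions of a
// query written in an extension language (e.g., method calls on complex,
// object-oriented types). Names are illustrative assumptions.
public interface ExtensionLanguageCartridge {

    /** Return true if this cartridge handles the given language tag found in a query. */
    boolean supportsLanguage(String languageTag);

    /** Resolve a complex type referenced in the query, such as a class name. */
    Class<?> resolveType(String typeName);

    /** Compile an extension-language expression (e.g., "loc.distance(p)") into a
        form the event processing engine can evaluate once per event. */
    CompiledExpression compile(String expressionText);

    /** Executable form of a compiled extension-language expression. */
    interface CompiledExpression {
        Object evaluate(Object... arguments);
    }
}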
Abstract:
A framework for extending the capabilities of an event processing system using one or more plug-in components referred to herein as data cartridges. Generally speaking, a data cartridge is a self-contained unit of data that can be registered with an event processing system and can store information pertaining to one or more objects (referred to herein as extensible objects) that are not natively supported by the system. Examples of such extensible objects can include data types, functions, indexes, data sources, and others. By interacting with a data cartridge, an event processing system can compile and execute queries that reference extensible objects defined in the data cartridge, thereby extending the system beyond its native capabilities.
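A minimal Java sketch of this registration model follows, assuming a hypothetical DataCartridge interface and registry; the names are illustrative and do not reflect the actual framework API.

// Hypothetical view of a data cartridge as a self-contained, registrable unit
// that describes extensible objects (types, functions, indexes, data sources).
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public interface DataCartridge {
    /** Unique name under which the cartridge is registered, e.g., "spatial". */
    String getName();

    /** Metadata for an extensible object (a type, function, index scheme, or
        data source) referenced from a query, or null if the name is unknown. */
    Object getExtensibleObjectMetadata(String objectName);
}

// Registry the event processing system might consult while compiling a query
// that references objects defined outside its native capabilities.
class DataCartridgeRegistry {
    private final Map<String, DataCartridge> cartridges = new ConcurrentHashMap<>();

    void register(DataCartridge cartridge) {
        cartridges.put(cartridge.getName(), cartridge);
    }

    DataCartridge lookup(String cartridgeName) {
        return cartridges.get(cartridgeName);
    }
}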
Abstract:
One embodiment of the present invention provides a system that automatically sends a notification about a database event. The system operates by receiving a number of items, including a registration of a specified event type, a subscription specifying a protocol for the notification, a format for the notification, and a list of recipients for the notification. The system then configures the database to send the notification about the specified event type to the specified list of recipients in the specified format via the specified protocol. Adding this notification capability at the database level enhances the functionality and interoperability of many applications and provides more robust and timely information to the appropriate audiences.
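A compact Java sketch of the registration data described above follows; the class, enum, and method names are assumptions for illustration and are not drawn from the patent or any particular database API.

// Hypothetical container for the items the system receives before configuring
// the database to send notifications: event type, protocol, format, recipients.
import java.util.List;

public final class EventNotificationRegistration {
    public enum Protocol { SMTP, HTTP, SMS }
    public enum Format { PLAIN_TEXT, XML, JSON }

    private final String eventType;        // the registered event type, e.g., a DML or system event
    private final Protocol protocol;       // how the notification is delivered
    private final Format format;           // how the notification payload is rendered
    private final List<String> recipients; // who receives the notification

    public EventNotificationRegistration(String eventType, Protocol protocol,
                                         Format format, List<String> recipients) {
        this.eventType = eventType;
        this.protocol = protocol;
        this.format = format;
        this.recipients = recipients;
    }
    // The database would then be configured (for example, via a hypothetical
    // database.configureNotification(registration) call) to deliver the
    // notification accordingly; accessors are omitted for brevity.
}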
Abstract:
A buffered message queue architecture for managing messages in a database management system is disclosed. A “buffered message queue” refers to a message queue implemented in volatile memory, such as RAM. The volatile memory may be a shared volatile memory that is accessible by a plurality of processes. The buffered message queue architecture supports a publish-and-subscribe communication mechanism in which message producers and message consumers may be decoupled from and independent of each other. The buffered message queue architecture provides all the functionality of a persistent publish-subscribe messaging system without ever having to store the messages in persistent storage. The architecture provides better performance and scalability since no persistence operations are needed and no UNDO/REDO logs need to be maintained. Messages published to the buffered message queue are delivered to all eligible subscribers at least once, even in the event of failures, as long as the application is “repeatable.” The architecture also includes management mechanisms for performing buffered message queue cleanup and for providing unlimited-size buffered message queues when only limited amounts of shared memory are available. The architecture further includes “zero copy” buffered message queues and provides for transaction-based enqueue of messages.
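The short Java sketch below illustrates the core idea of a volatile, in-memory publish/subscribe queue with decoupled producers and consumers; it is a simplified, assumption-based example and omits the patent's cleanup, spill-over, zero-copy, and transactional-enqueue mechanisms.

// Minimal in-memory (non-persistent) publish/subscribe queue: one volatile
// queue per subscriber, no persistent storage, no UNDO/REDO logging.
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class BufferedMessageQueue<M> {
    private final Map<String, Queue<M>> subscribers = new ConcurrentHashMap<>();

    /** Register a subscriber so it receives every subsequently published message. */
    public void subscribe(String subscriberId) {
        subscribers.putIfAbsent(subscriberId, new ConcurrentLinkedQueue<>());
    }

    /** Publish a message to all current subscribers; producers never block on consumers. */
    public void publish(M message) {
        for (Queue<M> q : subscribers.values()) {
            q.offer(message);
        }
    }

    /** Dequeue the next pending message for a subscriber, or null if none is waiting. */
    public M dequeue(String subscriberId) {
        Queue<M> q = subscribers.get(subscriberId);
        return (q == null) ? null : q.poll();
    }
}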