Abstract:
An approach for reducing transport of messages between nodes of a multi-node system is presented, wherein a message queue is associated with a queue service and, based on the node on which the message queue resides, one of the nodes is registered as hosting the associated queue service. In response to a client attempting to connect and requesting a particular queue service, the client is caused to connect to the node on which the queue service resides.
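The abstract specifies behavior rather than an interface, so the following Python sketch only illustrates the registration-and-routing idea; all names (QueueServiceRegistry, register_queue, connect) are invented for illustration.

class QueueServiceRegistry:
    """Maps each queue service to the node hosting its message queue."""

    def __init__(self):
        self._service_to_node = {}

    def register_queue(self, queue_name, node):
        # The node on which the queue resides is registered as the host
        # of the queue's associated service.
        service = "svc_" + queue_name
        self._service_to_node[service] = node
        return service

    def connect(self, service):
        # A client requests a service by name and is directed to the node
        # that owns the underlying queue, avoiding inter-node transport.
        node = self._service_to_node.get(service)
        if node is None:
            raise LookupError("no node hosts " + service)
        return node

registry = QueueServiceRegistry()
svc = registry.register_queue("orders", node="node2")
assert registry.connect(svc) == "node2"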
Abstract:
Techniques are provided for maintaining high propagation availability for non-persistent messages. Destination-to-instance mapping information is provided to a global listener process for a cluster database. The destination-to-instance mapping indicates the current owner instance of each single-instance destination within the cluster database. To establish a connection to a single-instance destination, a sending process sends a connection request to the global listener. The connection request identifies the desired destination queue, but not the owner instance of the queue. The global listener for the cluster database uses the destination-to-instance mapping to determine which instance is the current owner of the specified queue, and establishes a connection between the sending process and the appropriate owner instance.
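A minimal sketch of the lookup the global listener performs, assuming invented names (GlobalListener, handle_connect); the abstract describes the behavior, not the API.

class GlobalListener:
    def __init__(self, destination_to_instance):
        # Mapping of each single-instance destination queue to the
        # instance that currently owns it.
        self.destination_to_instance = dict(destination_to_instance)

    def handle_connect(self, request):
        # The request names only the destination queue, not its owner;
        # the listener resolves the current owner instance and connects
        # the sender to it.
        owner = self.destination_to_instance[request["queue"]]
        return {"sender": request["sender"], "instance": owner}

listener = GlobalListener({"events_q": "inst1", "alerts_q": "inst3"})
conn = listener.handle_connect({"sender": "proc42", "queue": "alerts_q"})
assert conn["instance"] == "inst3"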
Abstract:
A method and apparatus are provided for propagating and managing data, transactions, and events either within a database or from one database to another. In one embodiment, messages are propagated from a source to a first queue and a second queue, with both queues associated with the same database. The connection from the source to each queue maintains its own propagation job. This method can also be employed with cluster databases.
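A sketch of per-connection propagation jobs in Python, under assumed names; the point is that each source-to-queue connection runs its own job, so the two deliveries proceed and fail independently.

import queue, threading

def propagation_job(messages, dest_q):
    # One propagation job per source-to-queue connection: each job
    # delivers the source's messages to its single destination.
    for msg in messages:
        dest_q.put(msg)

source_messages = ["m1", "m2", "m3"]               # the message source
first_q, second_q = queue.Queue(), queue.Queue()   # queues in one database

jobs = [threading.Thread(target=propagation_job, args=(source_messages, q))
        for q in (first_q, second_q)]
for j in jobs:
    j.start()
for j in jobs:
    j.join()
assert first_q.qsize() == second_q.qsize() == 3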
Abstract:
Embodiments of the present invention provide one or more hardware-friendly data structures that enable efficient hardware acceleration of database operations. In particular, the present invention employs a column-store format for the database. In the database, column groups are stored with implicit row ids (RIDs), and a RID-to-primary-key column provides both column-store and row-store benefits via column hopping and a heap structure for adding new data. Fixed-width column compression allows for easy hardware database processing directly on the compressed data. A global database virtual address space is utilized that allows for arithmetic derivation of any physical address of the data regardless of its location. A word compression dictionary with token compare and sort index is also provided to allow for efficient hardware-based searching of text. A tuple reconstruction process is provided as well that allows hardware to reconstruct a row by stitching together data from multiple column groups.
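A worked sketch of the arithmetic address derivation and tuple reconstruction described above: with fixed-width compressed columns, any value's address follows from its RID by arithmetic alone. The layout constants and function names here are invented for illustration.

COLUMN_BASE = {            # virtual base address of each column group
    "price": 0x10000000,
    "name_token": 0x20000000,
}
COLUMN_WIDTH = {"price": 4, "name_token": 2}   # fixed widths in bytes

def address_of(column, rid):
    # Because every entry of a column has the same width, the address
    # of RID r is base + r * width; no per-row lookup is needed.
    return COLUMN_BASE[column] + rid * COLUMN_WIDTH[column]

def reconstruct_tuple(rid, columns):
    # Tuple reconstruction stitches a row together by visiting the same
    # RID in each column group ("column hopping").
    return {c: address_of(c, rid) for c in columns}

assert address_of("price", 10) == 0x10000000 + 40
row = reconstruct_tuple(7, ["price", "name_token"])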
Abstract:
Embodiments of the present invention provide a hardware accelerator that assists a host database system in processing its queries. The hardware accelerator comprises special-purpose processing elements that are capable of receiving database query/operation tasks in the form of machine code database instructions, executing them in hardware without software, and returning the query/operation result to the host system.
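A sketch of handing a query to the accelerator as machine code database instructions. The opcode set and instruction layout below are assumptions; the abstract states only that tasks arrive as machine code instructions and that results are returned to the host.

import struct

OP_SCAN, OP_FILTER_GT = 0x01, 0x02   # hypothetical opcodes

def encode_instruction(opcode, column_id, operand):
    # Pack one fixed-size instruction word that processing elements
    # could decode directly in hardware.
    return struct.pack("<BHI", opcode, column_id, operand)

program = [
    encode_instruction(OP_SCAN, 3, 0),          # scan column 3
    encode_instruction(OP_FILTER_GT, 3, 100),   # keep values > 100
]

def submit(program):
    # Stand-in for the host-to-accelerator hand-off: the host sends the
    # instruction stream and receives the query result back.
    blob = b"".join(program)
    return {"bytes_sent": len(blob)}   # placeholder result

result = submit(program)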
Abstract:
Embodiments of the present invention provide for batch and incremental loading of data into a database. In the present invention, the loader infrastructure utilizes machine code database instructions and hardware acceleration to parallelize the load operations with the I/O operations. A large hardware-accelerator memory is used as a staging cache for the load process. The load process also comprises an index-profiling phase that enables balanced partitioning of the created indexes to allow for a pipelined load. The online incremental loading process may also be performed while serving queries.
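A sketch of overlapping I/O with load work and of the index-profiling phase; the two-phase flow (profile the keys, then partition so the pipelined load stays balanced) is modeled with invented helper names.

from concurrent.futures import ThreadPoolExecutor

def read_batch(batch_id):
    # I/O phase: stand-in for reading raw rows into the staging cache.
    return [(batch_id, i) for i in range(1000)]

def profile_keys(rows, partitions):
    # Index-profiling phase: sample the keys so that each index
    # partition receives comparable work.
    keys = sorted(k for _, k in rows)
    step = max(1, len(keys) // partitions)
    return keys[step::step][: partitions - 1]    # partition boundaries

def load_batch(rows, boundaries):
    # Load phase: route each row to its index partition.
    counts = [0] * (len(boundaries) + 1)
    for _, k in rows:
        p = sum(k > b for b in boundaries)
        counts[p] += 1
    return counts

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(read_batch, b) for b in range(4)]  # I/O in flight
    for f in futures:                   # loading overlaps the remaining I/O
        rows = f.result()
        bounds = profile_keys(rows, partitions=4)
        load_batch(rows, bounds)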