Abstract:
A system and method of processing a message in an asynchronous architecture is provided. In the method, a determination is made that a response to a message sent by an instance of software code is to be received, where the response indicates whether the message succeeded or failed. Another determination is made as to whether the response has been received. If the response has not been received, the instance of the software code is stored in memory, thereby suspending the instance. When the response is received, the instance is resumed and the response is processed.
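To make the suspend/resume flow concrete, the following Python sketch is an illustration only, not the patented implementation; OrderProcess, run, on_response, and the in-memory dictionaries are hypothetical names. It suspends an instance by serializing it when no response is available yet and resumes it once the response arrives.

import pickle
import uuid

class OrderProcess:
    """Hypothetical instance of software code that sends a message and awaits a success/failure response."""
    def __init__(self, order_id):
        self.order_id = order_id
        self.state = "sent"

    def handle_response(self, response):
        self.state = "succeeded" if response["ok"] else "failed"

# in-memory stand-ins for durable storage and the response channel
suspended = {}          # instance_id -> serialized (suspended) instance
pending_responses = {}  # instance_id -> response, filled in asynchronously

def run(instance, instance_id):
    response = pending_responses.pop(instance_id, None)
    if response is None:
        # response not yet received: persist the instance, thereby suspending it
        suspended[instance_id] = pickle.dumps(instance)
        return None
    instance.handle_response(response)
    return instance.state

def on_response(instance_id, response):
    # response arrived: restore the suspended instance, resume it, and process the response
    instance = pickle.loads(suspended.pop(instance_id))
    instance.handle_response(response)
    return instance.state

# usage
iid = str(uuid.uuid4())
proc = OrderProcess("order-42")
assert run(proc, iid) is None          # no response yet -> instance suspended
print(on_response(iid, {"ok": True}))  # prints "succeeded"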
Abstract:
A transport-neutral in-order delivery in a distributed environment is provided. Typically, in-order delivery guarantees that sequential orders received by a transport engine are sent out in the same order in which they are received. Such delivery may be forwarded either to another transport engine or to some application. If delivery of messages in a stream fails, the messages are either resubmitted, suspended, or moved to backup. A user or administrator can configure the desired action. Additionally, any stream can be manually aborted, or a specified port can be unenlisted. Deliverable streams of messages are locked on to back-end transport engines or applications and dequeued sequentially unless one of the above-mentioned failure scenarios occurs.
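The Python sketch below illustrates, under simplified assumptions, how a stream might be locked to one back-end and dequeued sequentially, with a configurable resubmit/suspend/backup action on delivery failure. Stream, drain, ON_FAILURE, and flaky_send are hypothetical names, not the patented engine.

from collections import deque

ON_FAILURE = "resubmit"   # configurable failure action: "resubmit", "suspend", or "backup"
MAX_RESUBMITS = 3

class Stream:
    def __init__(self, name, messages):
        self.name = name
        self.queue = deque(messages)
        self.locked = False
        self.aborted = False   # a stream can be manually aborted

suspended, backup = [], []

def drain(stream, send):
    """Lock the stream on to one back-end send function and dequeue it sequentially."""
    stream.locked = True
    try:
        while stream.queue and not stream.aborted:
            msg = stream.queue[0]
            attempts = 1 + MAX_RESUBMITS if ON_FAILURE == "resubmit" else 1
            if any(send(msg) for _ in range(attempts)):
                stream.queue.popleft()   # delivered: advance to the next message
                continue
            if ON_FAILURE == "suspend":
                suspended.append(stream.queue.popleft())
            elif ON_FAILURE == "backup":
                backup.append(stream.queue.popleft())
            break   # stop so later messages never overtake the one that failed
    finally:
        stream.locked = False

# usage: a back-end that fails its first call, then recovers
calls = {"n": 0}
def flaky_send(msg):
    calls["n"] += 1
    return calls["n"] > 1

s = Stream("orders", ["m1", "m2", "m3"])
drain(s, flaky_send)
print(list(s.queue))   # [] -- all delivered in order after one resubmit of "m1"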
Abstract:
Systems and methods for reducing the latency incurred during the publication of a message in a message publication system are provided. In a message publication system wherein the publishing component and the receiving component are located within the same processing space, several of the latency components that are usually unavoidably incurred may be eliminated. In such a system, the messaging queue is not used as a medium between the two components but is instead used as a secondary back-up storage. This results in the elimination of one latency component as the message is directly published from the publishing component to the receiving component. Further time reductions or optimizations occur when the durability, or reliability, of the message publication is not a concern and the messaging queue can be completely disregarded. Yet another optimization occurs when the identity of the subscriber is known in advance by the publisher.
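The sketch below illustrates the general idea in Python under simplified assumptions: when publisher and subscriber share a process, the message is handed over directly and the queue is only a secondary backup, which can be skipped entirely when durability is not required or when the subscriber is known in advance. InProcBus and its members are hypothetical names, not the patented system.

import queue

class InProcBus:
    """Publisher and subscriber share one process, so the message is handed over
    directly and the queue serves only as an optional durable backup."""
    def __init__(self, durable=True):
        self.durable = durable
        self.backup = queue.Queue()      # stands in for the durable messaging queue
        self.subscribers = []

    def subscribe(self, predicate, handler):
        self.subscribers.append((predicate, handler))

    def publish(self, message, known_subscriber=None):
        if self.durable:
            self.backup.put(message)     # backup only, not the delivery path
        if known_subscriber is not None:
            known_subscriber(message)    # subscriber known in advance: skip matching
            return
        for predicate, handler in self.subscribers:
            if predicate(message):
                handler(message)         # direct in-process hand-off, no queue hop

# usage
bus = InProcBus(durable=False)           # durability not a concern: queue skipped entirely
bus.subscribe(lambda m: m["type"] == "order", lambda m: print("received", m["id"]))
bus.publish({"type": "order", "id": 7})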
Abstract:
The subject invention provides a system and/or a method that facilitates enhancing an adapter utilizing a locking mechanism between a receive location and a process. An interface component can receive a message related to a receive location that is an endpoint. A lock component binds the receive location to the process such that the process exclusively receives the messages from the endpoint at a single instance in real time. Moreover, the lock component can provide a replacement/switching technique, wherein a process that participates in a locking relationship can be switched with another process based at least in part upon the health of the process.
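A minimal Python sketch of such a locking relationship is given below, assuming hypothetical ReceiveLocation and Process classes; it shows one process holding exclusive ownership of an endpoint and a second process taking over once the first is reported unhealthy. It is illustrative only, not the subject invention's implementation.

import threading

class ReceiveLocation:
    """Hypothetical endpoint whose messages must go to exactly one process at a time."""
    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()
        self.owner = None                # the process currently bound to this endpoint

    def try_bind(self, process):
        with self._lock:
            if self.owner is None or not self.owner.healthy:
                self.owner = process     # free, or current owner unhealthy: switch ownership
            return self.owner is process

class Process:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def receive(self, location, message):
        if location.try_bind(self):      # exclusive: only the bound process handles messages
            print(f"{self.name} handled {message} from {location.name}")

# usage: p2 takes over once p1 is reported unhealthy
loc = ReceiveLocation("ftp://inbound")   # hypothetical receive location
p1, p2 = Process("p1"), Process("p2")
p1.receive(loc, "msg-1")                 # p1 binds and handles
p2.receive(loc, "msg-2")                 # ignored: the location is locked to p1
p1.healthy = False
p2.receive(loc, "msg-3")                 # p2 replaces the unhealthy p1 and handles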
Abstract:
A transport-agnostic, pull-mode messaging service enables clients of diverse types to send messages to and receive messages from one another while guaranteeing delivery of messages. Client-specific adapters connect to a server and pull messages waiting for them in a queue. Clients may specify themselves as the recipients of the pulled messages, or specify another client as a recipient. This allows users of diverse types of clients to communicate and provides users with greater flexibility regarding how, when, and where they view their messages.
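The Python sketch below illustrates the pull model under simplified assumptions: per-recipient queues on a server, and client-specific adapters that enqueue outbound messages and pull inbound ones. Server, ClientAdapter, and the client identifiers are hypothetical names, not the patented service.

from collections import defaultdict, deque

class Server:
    """One queue per recipient; adapters pull rather than being pushed to,
    so a message waits until the recipient's adapter asks for it."""
    def __init__(self):
        self.queues = defaultdict(deque)

    def enqueue(self, sender, recipient, body):
        # the recipient may be the sender itself or another client
        self.queues[recipient].append({"from": sender, "body": body})

    def pull(self, recipient, max_messages=10):
        out, q = [], self.queues[recipient]
        while q and len(out) < max_messages:
            out.append(q.popleft())      # removed only when pulled, so nothing is lost
        return out

class ClientAdapter:
    """Client-specific adapter: bridges a client's native transport to pulls against the server."""
    def __init__(self, server, client_id):
        self.server, self.client_id = server, client_id

    def send(self, body, to=None):
        self.server.enqueue(self.client_id, to or self.client_id, body)

    def poll(self):
        return self.server.pull(self.client_id)

# usage: an SMS-style client messages an e-mail-style client
srv = Server()
sms, mail = ClientAdapter(srv, "alice-sms"), ClientAdapter(srv, "bob-mail")
sms.send("meeting at 3", to="bob-mail")
print(mail.poll())   # [{'from': 'alice-sms', 'body': 'meeting at 3'}]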
Abstract:
The present invention provides a novel technique for Web-based asynchronous processing of synchronous requests. The systems and methods of the present invention utilize a synchronous interface in order to couple with systems that synchronously communicate (e.g., to submit queries and receive results). The interface enables reception of synchronous requests, which are queued and parsed amongst subscribed processing servers within a server farm. Respective servers can serially and/or concurrently process the request and/or portions thereof via a dynamic balancing approach. Such approach distributes the request to servers based on server load, wherein respective portions can be re-allocated as server load changes. Results can be correlated with the request, aggregated, and returned such that it appears to the requester that the request was synchronously serviced. The foregoing mitigates the need for clients to perform client-side aggregation of asynchronous results.
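The following Python sketch is an illustrative reading of that flow, not the claimed implementation: a blocking call splits the work among servers according to a toy load measure, processes the portions concurrently, and aggregates the results before returning, so the caller never performs client-side aggregation. split_by_load, process_on_server, and synchronous_query are hypothetical names.

from concurrent.futures import ThreadPoolExecutor

def split_by_load(work_items, server_loads):
    """Toy dynamic balancing: assign each item to the currently least-loaded server."""
    assignments = {name: [] for name in server_loads}
    loads = dict(server_loads)
    for item in work_items:
        target = min(loads, key=loads.get)
        assignments[target].append(item)
        loads[target] += 1
    return assignments

def process_on_server(name, items):
    # stands in for a subscribed processing server handling its portion of the request
    return [f"{name}:{item}" for item in items]

def synchronous_query(work_items, server_loads):
    """Synchronous interface: the caller blocks while the farm processes portions
    concurrently; results are correlated and aggregated before returning."""
    assignments = split_by_load(work_items, server_loads)
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(process_on_server, name, items)
                   for name, items in assignments.items() if items]
        results = []
        for future in futures:
            results.extend(future.result())   # aggregation happens here, not at the client
    return results

# usage: the caller sees one ordinary blocking call
print(synchronous_query(["q1", "q2", "q3", "q4"], {"srv-a": 2, "srv-b": 0}))
# prints all four items, each tagged with the server that processed it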