Abstract:
A new routing protocol for routing service requests in a contact center is provided that takes into account agent preferences. Agents identify their preferences for handling particular types of service requests. The routing protocol takes account of those preferences while still routing calls in a systematic, coordinated and efficient manner. Additionally, management may dynamically communicate incentives to agents, encouraging them to change their preferences in ways that correspond to management priorities. Management may further influence routing by adjusting management preferences, which may be taken into account along with agent preferences when routing calls. By incorporating agent preferences in the routing scheme, agents are given more control over their work, thus tending to increase job satisfaction and therefore agent retention and contact-center performance.
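A minimal sketch of how such preference-aware routing could look: when an agent becomes idle, the agent's own preferences and management's current weights are blended into a score for each waiting request type. The function name, the linear blend, and the 0-to-1 preference scale are illustrative assumptions, not details from the patent.

```python
def pick_request_type(agent_prefs, mgmt_weights, waiting_types, alpha=0.5):
    """When an agent becomes idle, choose which waiting request type to pull.

    Blends the agent's own preference with a management weight for each type;
    the linear blend and the 0..1 preference scale are illustrative assumptions.
    """
    def score(t):
        return alpha * agent_prefs.get(t, 0.0) + (1 - alpha) * mgmt_weights.get(t, 0.0)
    return max(waiting_types, key=score) if waiting_types else None

# Example: the agent prefers billing, but management is currently incentivizing tech support.
print(pick_request_type({"billing": 0.9, "tech": 0.4},
                        {"billing": 0.2, "tech": 1.0},
                        ["billing", "tech"]))
```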
Abstract:
Techniques for (a) controlling admission of customers to a shared resource, (b) adjusting the capacity of a resource in light of new customer demand, and (c) diverting usage from a failed resource to alternative resources each use a "blocking probability computer" (BPC) to solve a resource-sharing model that has a product-form steady-state distribution. The techniques allow each customer to obtain an appropriate grade of service and protection against overloads from other customers. Each customer is a source of a series of requests, and is assigned "upper-limit" (UL) and "guaranteed-minimum" (GM) "bounds" on its requests. The upper-limit bound puts an upper limit on the number of requests from that customer that can be in service at any time. The guaranteed-minimum bound guarantees that there will always be available resource units in the resources to serve a specified number of requests from that customer. The desired blocking probabilities are directly expressed in terms of normalization constants appearing in the product-form steady-state distribution. The BPC computes the normalization constants by first constructing the generating function (or z-transform) of the normalization constant and then numerically inverting the generating function.
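The admission side of the UL/GM bounds can be illustrated with a small sketch. It assumes single-unit requests and shows one way the bounds could be enforced; it is not the blocking-probability computation itself (the generating-function inversion performed by the BPC is outside this sketch).

```python
def admit(customer, usage, upper_limit, guaranteed_min, capacity):
    """Decide whether to admit one single-unit request from `customer`.

    `usage[k]` is the number of customer-k requests currently in service.
    This is an illustrative enforcement of the UL/GM bounds, assuming each
    request needs one resource unit; the patent's BPC computes the resulting
    blocking probabilities, which is not reproduced here.
    """
    in_use = sum(usage.values())
    free = capacity - in_use
    # Units that must stay available so every *other* customer can still
    # reach its guaranteed minimum.
    reserved = sum(max(0, guaranteed_min[k] - usage[k]) for k in usage if k != customer)
    within_upper_limit = usage[customer] + 1 <= upper_limit[customer]
    leaves_guarantees_intact = free - 1 >= reserved
    return within_upper_limit and leaves_guarantees_intact

usage = {"A": 3, "B": 1}
print(admit("A", usage, {"A": 5, "B": 4}, {"A": 2, "B": 3}, capacity=8))  # True
```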
Abstract:
A method determines, as a function of time, the number of servers required for a finite-server queueing system based on a projected load. The number of servers is chosen subject to the constraint that the probability of delay before beginning service does not exceed a target probability at all times. The finite-server queueing system is first advantageously modeled as an infinite-server system, the mean and variance of the number of busy servers at time t are determined, and a distribution is approximated. For any time, the number of servers is chosen to be the least number such that the probability that all of them are busy is less than the target probability. As an optional refinement, the infinite-server target tail probability is then related to the actual delay probability in the finite-server system.
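A compact sketch of the staffing rule, assuming a normal approximation to the infinite-server occupancy distribution at a given time; the function names and the starting point of the search are illustrative choices.

```python
import math

def normal_tail(x, mean, var):
    """P(N >= x) under a normal approximation."""
    if var <= 0:
        return 0.0 if x > mean else 1.0
    z = (x - mean) / math.sqrt(var)
    return 0.5 * math.erfc(z / math.sqrt(2))

def servers_needed(mean_busy, var_busy, target_prob):
    """Least s with P(all s servers busy) <= target, using the infinite-server
    mean and variance and a normal approximation to the busy-server distribution."""
    s = max(0, int(mean_busy))
    while normal_tail(s, mean_busy, var_busy) > target_prob:
        s += 1
    return s

# Example: at some time t the infinite-server model gives mean 100, variance 100.
print(servers_needed(100.0, 100.0, target_prob=0.1))  # -> 113
```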
Abstract:
A system for, and method of operation of, estimating a blocking probability for at least a portion of a network, and a model of the same, are provided. The blocking probability represents the likelihood that a transmitted signal will be blocked before reaching at least the portion of the network. The system includes an estimator controller and a processing controller. The estimator controller derives both a direct and an indirect estimator. The direct estimator is derived as a function of a number of losses and arrivals occurring with respect to at least the portion of the network during a period of time, and the indirect estimator is derived as a function of an offered load with respect to at least the portion of the network during the period of time. The processing controller, which is associated with the estimator controller, applies a weighting factor to the direct and indirect estimators to derive the blocking probability. The weighting factor is chosen to minimize the variance of the combined estimate.
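A sketch of the weighted combination, assuming the two estimators are uncorrelated so the variance-minimizing weight takes the familiar inverse-variance form; how the indirect (offered-load based) estimate and the variances are obtained is outside this sketch.

```python
def combined_blocking_estimate(losses, arrivals, indirect_estimate,
                               var_direct, var_indirect):
    """Blend a direct and an indirect blocking estimator.

    direct   = losses / arrivals observed over the measurement period
    indirect = a model-based estimate from the offered load (passed in here;
               how it is computed is outside this sketch)
    The weight below minimizes the variance of the combination under the
    simplifying assumption that the two estimators are uncorrelated.
    """
    direct = losses / arrivals if arrivals > 0 else 0.0
    w = var_indirect / (var_direct + var_indirect)  # weight on the direct estimator
    return w * direct + (1.0 - w) * indirect_estimate

# Example: 12 losses out of 1000 arrivals, indirect (load-based) estimate 0.010.
print(combined_blocking_estimate(12, 1000, 0.010, var_direct=4e-5, var_indirect=1e-5))
```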
Abstract:
A method is provided for admitting new requests for service in a shared resource having a capacity. Each new request has a service priority level associated therewith. In one embodiment of the invention, for example, the shared resource may be a packet communications network and the service request may be a request to admit a new connection. The method proceeds as follows. First, for each service priority level on said shared resource, a total effective bandwidth is generated which is represented by a sum of individual effective bandwidths of previously admitted requests for service. Subsequent to receiving a new request for service having a specified service priority level, a plurality of effective bandwidths are accessed for the new request. The plurality of effective bandwidths are respectively associated with the specified service priority level and the service priority levels therebelow. The new request is admitted if, for the specified service priority level and for each service priority level therebelow, the sum of (i) said total effective bandwidth for the given service priority level and (ii) the effective bandwidth of said new request at the given service priority level is less than the capacity.
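The admission test itself can be written compactly; the level numbering (0 = lowest priority) and the data layout below are assumptions made for illustration.

```python
def admit_request(new_eb_by_level, specified_level, total_eb_by_level, capacity):
    """Priority-aware effective-bandwidth admission test.

    total_eb_by_level[l]: sum of effective bandwidths of already-admitted
                          requests as accounted at priority level l.
    new_eb_by_level[l]:   the new request's effective bandwidth at level l
                          (a request generally has a different effective
                          bandwidth at each level).
    Level numbering assumes 0 is the lowest priority; admit only if the
    check passes at the specified level and every level below it.
    """
    for level in range(specified_level + 1):
        if total_eb_by_level[level] + new_eb_by_level[level] >= capacity:
            return False
    return True

# Example with three priority levels and a capacity of 100 bandwidth units.
total = [60.0, 40.0, 25.0]   # aggregate effective bandwidth at levels 0..2
new   = [12.0, 8.0, 5.0]     # the new request's effective bandwidth at levels 0..2
print(admit_request(new, specified_level=1, total_eb_by_level=total, capacity=100.0))  # True
```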
Abstract:
A new routing protocol for routing service requests in a contact center is provided that takes into account the results of games played by the agents and the game actions of the agents. The system communicates the game to the agent. Agents in turn select game actions. The results of the agent games and the agent actions in these games help determine which agents handle the different types of service requests. The routing protocol takes account of those game results and actions while still routing calls in a systematic, coordinated and efficient manner. Additionally, by dynamically restructuring the game, management may communicate incentives to agents, encouraging them to change their game actions in ways that align call routing with management priorities. Management may further influence routing by adjusting management preferences, which may be taken into account along with agent game results and game actions when routing calls. By incorporating agent game results and game actions in the routing scheme, agents are engaged and entertained, so that their work is less boring and monotonous. The agents are also given more control over their work, thus tending to increase job satisfaction and therefore agent retention and contact-center performance.
Abstract:
Apparatus and method for predicting wait times for queuing customers. Upon arrival of a new customer to the queue, or at any other desired time, a system classifies each customer in service according to one or more attributes. The system generates a probability distribution of the remaining service time for each customer based on the attributes. Preferably, the system classifies each customer in queue according to one or more attributes and generates a probability distribution of service time based on the attributes. From the probability distributions of the customers in service and the customers in queue, the system estimates a wait time for the new customer. The estimated wait time may be communicated to the customers or to a system administrator and may include information on the full waiting time distribution or a summary of the distribution.
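A Monte Carlo sketch of the estimation step, under the simplifying assumption of a single server serving in FIFO order; the sampling functions stand in for the attribute-based remaining-service-time and service-time distributions the system builds.

```python
import random

def predicted_wait_distribution(in_service_remaining, queued_service, n_samples=10000):
    """Monte Carlo sketch of the new customer's waiting-time distribution.

    Simplifying assumption: a single server serving customers in FIFO order,
    so the new arrival waits for the customer in service to finish and then
    for every queued customer ahead of it. Each argument is a list of
    sampling functions (one per customer) drawn from the class-dependent
    distributions built from customer attributes.
    """
    samples = []
    for _ in range(n_samples):
        wait = sum(draw() for draw in in_service_remaining)
        wait += sum(draw() for draw in queued_service)
        samples.append(wait)
    samples.sort()
    return samples

# Example: one customer in service (exponential remaining time, mean 3 min),
# two queued customers (exponential service times, mean 5 min each).
dist = predicted_wait_distribution(
    [lambda: random.expovariate(1 / 3.0)],
    [lambda: random.expovariate(1 / 5.0), lambda: random.expovariate(1 / 5.0)],
)
mean_wait = sum(dist) / len(dist)
p90 = dist[int(0.9 * len(dist))]
print(f"mean wait ~ {mean_wait:.1f} min, 90th percentile ~ {p90:.1f} min")
```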
Abstract:
Because it is difficult to provide adequate quality of service to large-bandwidth calls in integrated-services networks, service providers may allow some customers to book ahead their calls. The present invention provides a scheme for resource sharing among book-ahead calls (which announce their call initiation and holding times upon arrival) and non-book-ahead calls (which do not announce their holding times and enter service immediately, if admitted). The basis for this sharing is an admission control algorithm in which admission is allowed if an approximate interrupt probability (computed in real time) is below a threshold. Simulation experiments show that the proposed admission control methodology is superior to alternative schemes that do not allow interruption, such as a strict partitioning of resources.
Abstract:
A fair and efficient admission scheme enables sharing of a common resource among N traffic classes, such that each class is guaranteed (on a suitable long-term basis) a contracted minimum use of the resource, and each class can go beyond the contract when extra resource capacity becomes temporarily available. The scheme operates in an open loop mode, and thus does not require information describing the current status of the resource. For the purposes of description, one form of the invention is best described in terms of "tokens" and "token banks" with finite sizes. Our scheme uses one token bank per class (bank `i`, i = 1, . . . , N), plus one spare bank. Class `i` is preassigned a rate, rate(i), i = 1, . . . , N, that represents the "guaranteed throughput" or contracted admission rate for class `i`. Tokens are sent to bank `i` at rate(i). Tokens that find a full bank are sent to the spare bank. When the spare bank is also full, the token is lost. Every admitted arrival consumes a token. A class `i` arrival looks first for a token at bank `i`. If bank `i` is empty, it looks for a token at the spare bank. If the latter is also empty, the arrival is blocked.
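A direct transcription of the token and bank rules above into code, with token generation modeled as explicit add_token calls rather than timed arrivals at rate(i); the bank sizes used in the example are illustrative.

```python
class TokenBankAdmission:
    """Sketch of the multi-class token-bank scheme described above."""

    def __init__(self, bank_sizes, spare_size):
        self.bank_sizes = list(bank_sizes)   # capacity of bank i
        self.banks = [0] * len(bank_sizes)   # tokens currently in bank i
        self.spare_size = spare_size
        self.spare = 0

    def add_token(self, i):
        """A token generated for class i: overflow goes to the spare bank, then is lost."""
        if self.banks[i] < self.bank_sizes[i]:
            self.banks[i] += 1
        elif self.spare < self.spare_size:
            self.spare += 1
        # else: the token is lost

    def admit(self, i):
        """A class-i arrival consumes a token from bank i, else the spare bank, else blocks."""
        if self.banks[i] > 0:
            self.banks[i] -= 1
            return True
        if self.spare > 0:
            self.spare -= 1
            return True
        return False

# Two classes with small banks and a shared spare bank.
sched = TokenBankAdmission(bank_sizes=[2, 2], spare_size=1)
for _ in range(4):
    sched.add_token(0)   # class 0 earns 4 tokens: 2 fill its bank, 1 goes to the spare, 1 is lost
print(sched.admit(1))    # class 1's bank is empty, but the spare token admits it -> True
print(sched.admit(1))    # nothing left for class 1 -> False
```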
Abstract:
A method for alleviating congestion problems in prior art networks delays the provision of dial tone signals to terminals that are likely carrying out a re-dial attempt in excess of a preselected number of re-dial attempts. A determination that the terminal seeking to establish a connection is likely carrying out a re-dial attempt may be based on numerous factors, such as the time since the terminal last attempted to establish a call, the duration of the last call, etc. The delay that is imposed is, advantageously, sensitive to the number of times the terminal has attempted a re-dial and to other conditions, such as the cause of the failure to establish a connection, network congestion conditions, etc. In imposing the dial tone delay, the identities of the terminals that are to receive a delayed dial tone are placed in a FIFO queue.
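An illustrative rendering of the delay decision and the FIFO queue; the 30-second recency window, the short-call test, and the linear growth of the delay are example choices, not values from the patent.

```python
import collections
import time

def dial_tone_delay(terminal, history, max_free_redials=2, base_delay=1.0):
    """Return the number of seconds to hold back dial tone for a terminal
    judged to be re-dialing beyond the allowed count, and 0.0 otherwise.

    `history[terminal]` holds (last_attempt_time, last_call_duration, attempts)
    gathered by the switch; the thresholds and the linear growth of the delay
    below are example choices.
    """
    last_time, last_duration, attempts = history.get(terminal, (0.0, 0.0, 0))
    recent = (time.time() - last_time) < 30.0   # attempt shortly after the last one
    failed_short_call = last_duration < 1.0     # previous attempt never really connected
    likely_redial = recent and failed_short_call
    if likely_redial and attempts > max_free_redials:
        return base_delay * (attempts - max_free_redials)   # grows with each further re-dial
    return 0.0

# Terminals owed a delayed dial tone are served in arrival order (FIFO).
delay_queue = collections.deque()
history = {"t1": (time.time() - 5.0, 0.2, 4)}
wait = dial_tone_delay("t1", history)
if wait > 0:
    delay_queue.append(("t1", wait))
print(delay_queue)   # deque([('t1', 2.0)])
```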