Abstract:
A method and apparatus are disclosed for selectively accessing and independently sizing connection identification tables within a network switch. Upon receipt of a data unit, such as an asynchronous transfer mode cell, at an input port of an Input/Output Module, a plurality of selection bits are inserted into the cell header and are used to select one of a plurality of connection identifiers. The VPI/VCI addresses are mapped into a connection identifier, and the connection identifier is also inserted within the respective cell header. The connection identifier may comprise a unicast connection identifier, a multicast connection identifier, or a redundant unicast identifier. In one embodiment, the unicast, multicast and redundant identifiers are stored in separate tables at call setup, and the tables are independently selectable via use of the selection bits stored within the cell header. Alternatively, the selection bits are employed to specify a register which points to the first address for the respective type of identifiers. Thus, identifiers of a particular connection type are accessible via a base address plus an offset address. In the described manner, tables or portions of tables containing connection identifiers may be flexibly and independently sized.
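The base-plus-offset addressing scheme described above can be sketched as follows. This is an illustrative model only; the selection-bit values, register contents, and names (`base_registers`, `lookup_connection_id`) are assumptions, not taken from the patent.

```python
# Hypothetical selection-bit values for the three identifier types.
SEL_UNICAST, SEL_MULTICAST, SEL_REDUNDANT = 0b00, 0b01, 0b10

# Each register points at the first address of its identifier region,
# so each region can be sized independently at call setup.
base_registers = {
    SEL_UNICAST: 0x0000,
    SEL_MULTICAST: 0x4000,
    SEL_REDUNDANT: 0x6000,
}

def lookup_connection_id(table, selection_bits, offset):
    """Resolve a connection identifier via base address plus offset."""
    base = base_registers[selection_bits]
    return table[base + offset]

# Example: a single flat table holding all three identifier regions.
table = {0x0000 + 5: "ucid-5", 0x4000 + 2: "mcid-2"}
```

Because each region's bounds come only from its base register, resizing one identifier region at call setup does not disturb the others.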
Abstract:
A massively-parallel computer includes a plurality of processing nodes and at least one control node interconnected by a network. The network facilitates the transfer of data among the processing nodes and of commands from the control node to the processing nodes. Each processing node includes an interface for transmitting data over, and receiving data and commands from, the network, at least one memory module for storing data, a node processor and an auxiliary processor. The node processor receives commands from the interface and processes data in response thereto, in the process generating memory access requests for facilitating the retrieval of data from, or storage of data in, the memory module. The node processor further controls the transfer of data over the network by the interface. The auxiliary processor is connected to the memory module and the node processor. In response to memory access requests from the node processor, the auxiliary processor performs a memory access operation to store data received from the node processor in the memory module, or to retrieve data from the memory module for transfer to the node processor. In response to auxiliary processing instructions from the node processor, the auxiliary processor performs data processing operations in connection with data in the memory module.
Abstract:
A system and method provision dynamic call models within a network having a serving node for providing session control for user endpoint (UE) devices. A user endpoint device (UE) with agent logic expresses the dynamic context of the UE in a message and sends said dynamic context message to a serving node. The dynamic context includes a subset of devices that could be used as UEs or associated devices, network connections that terminate at or emanate from said devices, and capabilities of said devices. The serving node logic receives said context message from the UE and constructs a dynamic call model in response. The dynamic call model has filter codes to associate service codes with application servers (ASs) in communication with the network, each AS having service logic to provide a service.
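The filter-code mechanism described above can be sketched as a simple mapping from service codes to application servers. This is a minimal illustration under assumed names (`DynamicCallModel`, the service codes, and the AS hostnames are all invented), not the patented implementation.

```python
class DynamicCallModel:
    """Illustrative call model: filter codes route service codes to ASs."""

    def __init__(self):
        self.filters = {}  # service code -> application server identifier

    def add_filter(self, service_code, app_server):
        # A filter code associates one service code with one AS.
        self.filters[service_code] = app_server

    def route(self, service_code):
        # Return the AS whose service logic should handle this code,
        # or None if the call model carries no matching filter.
        return self.filters.get(service_code)

# Example: a call model the serving node might construct from UE context.
model = DynamicCallModel()
model.add_filter("presence", "as-1.example.net")
model.add_filter("messaging", "as-2.example.net")
```

Because the model is built per-UE from the reported context, the serving node can install a different filter set for each session rather than a static call model.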
Abstract:
A card cage for mounting printed circuit cards of at least two sizes is disclosed. The card cage includes an insert removably mounted to a mounting bar positioned between first and second ends of the card cage. The insert extends along a portion of the width of the card cage. When the insert is mounted to the mounting bar, a printed circuit card of a first length may be disposed between the insert and one of the card cage ends. When the insert is removed, a printed circuit card of a second length greater than the first length may be mounted between the first and second ends of the card cage. Plural mounting bars may be located between the respective card cage ends to accommodate printed circuit cards of different lengths.
Abstract:
A queue control system is disclosed for use in connection with the transfer of information, in the form of information transfer units, in a digital network. The network provides a plurality of service rate classes, based on, for example, transmission rates for the various paths. The information buffer control subsystem includes an information transfer unit receiver, an information transfer unit buffer and a group controller. The information transfer unit receiver receives the information transfer units, and the buffer is provided to buffer the received information transfer units prior to transmission. The group controller controls the buffering of information transfer units received by the information transfer unit receiver in the buffer. In that operation, the group controller aggregates the information transfer units for each path in the buffer according to respective service rate classes, in particular aggregating the information transfer units for each path in a queue and further aggregating the queues for the paths associated with each service rate class in a queue. A transmission scheduler is also disclosed for use in transferring information, in the form of information transfer units, each associated with a path, in a digital network. The network provides a plurality of service rate classes, based on, for example, transmission rates for the various paths. The information transfer units for each path are aggregated in a path queue, and the path queues for the paths associated with each service rate class are aggregated in a service rate queue. The transmission scheduler includes an information transfer unit selector for selecting, from among the service rate queues, one path queue to provide an information transfer unit for transmission, and an information transfer unit transmitter for transmitting the information transfer unit provided by the selected path queue.
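The two-level aggregation described above (units in per-path queues, path queues in per-class queues) can be sketched as follows. The class names and the round-robin service order within a class are illustrative assumptions, not taken from the abstract.

```python
from collections import deque, defaultdict

class GroupController:
    """Illustrative two-level buffering: path queues inside class queues."""

    def __init__(self):
        self.path_queues = defaultdict(deque)   # path -> queued units
        self.class_queues = defaultdict(deque)  # class -> paths with work

    def enqueue(self, unit, path, rate_class):
        # A path joins its service rate class queue when it first has work.
        if not self.path_queues[path]:
            self.class_queues[rate_class].append(path)
        self.path_queues[path].append(unit)

    def dequeue(self, rate_class):
        """Pop one unit from the head path queue of the given class."""
        path = self.class_queues[rate_class][0]
        unit = self.path_queues[path].popleft()
        if self.path_queues[path]:
            # Assumed round-robin: rotate the path behind its class peers.
            self.class_queues[rate_class].rotate(-1)
        else:
            self.class_queues[rate_class].popleft()
        return unit
```

Keeping the per-path queues separate from the per-class queues lets the selector pick a class first and a path second, which is the structure the transmission scheduler above relies on.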
Abstract:
First and second control processor cards are employed in conjunction with first and second switch fabric cards to interconnect Input/Output cards in a telecommunications switch. The control processor cards provide a portion of the functionality previously associated with switch fabric cards, such as exertion of control over allocation of bandwidth within the switch. The control processor cards also provide new functionality. In particular, each control processor card can configure both switch fabric cards. Redundant control processor cards and redundant switch fabric cards are employed to provide a switch that is less susceptible to failure than switches with only redundant switch fabric cards. Hence, failure of a control processor card and a switch fabric card can be sustained without resulting in switch failure. Timing control functions may also be provided by a separate timing module card.
Abstract:
A packet scheduler is disclosed which provides a high degree of fairness in scheduling packets associated with different sessions. The scheduler also minimizes packet delay for packet transmission from a plurality of sessions which may have different requirements and may operate at different transfer rates. When a packet is received by the scheduler, the packet is assigned its own packet virtual start time based on whether the session has any pending packets, the virtual finish time of the previous packet in the session, and the packet's arrival time. The scheduler then determines a virtual finish time for the packet by determining the transfer time required for the packet based upon its length and rate and adding the transfer time to the packet's virtual start time. The packet with the smallest virtual finish time is then scheduled for transfer. By selecting packets for transmission in the above-described manner, the available bandwidth may be shared in pro rata proportion to the guaranteed session rate, thereby providing a scheduler with a high degree of fairness while also minimizing the amount of time a packet waits in the scheduler before being served.
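The virtual start/finish bookkeeping described above can be sketched as follows. This is a minimal model under assumptions: the start-time rule is taken to be the usual max of the session's previous finish time and the packet's arrival time, and all names (`Scheduler`, the session rates) are invented for illustration.

```python
import heapq

class Scheduler:
    """Illustrative virtual-finish-time scheduler."""

    def __init__(self):
        self.last_finish = {}  # session -> virtual finish of previous packet
        self.heap = []         # (virtual_finish, seq, session, packet)
        self.seq = 0           # tie-breaker to keep heap ordering stable

    def enqueue(self, session, packet, length, rate, arrival_time):
        # Virtual start: continue from the previous packet's finish if the
        # session is backlogged, otherwise start at the arrival time.
        start = max(self.last_finish.get(session, 0.0), arrival_time)
        # Virtual finish: start plus the transfer time (length over rate).
        finish = start + length / rate
        self.last_finish[session] = finish
        heapq.heappush(self.heap, (finish, self.seq, session, packet))
        self.seq += 1

    def dequeue(self):
        # Serve the packet with the smallest virtual finish time.
        _, _, session, packet = heapq.heappop(self.heap)
        return session, packet
```

With equal rates, a shorter packet finishes earlier in virtual time and is served first; with unequal rates, each session's share of the link tracks its guaranteed rate, which is the pro rata fairness property claimed above.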
Abstract:
A digital computer comprises a plurality of processing elements, a communications router, and a control network. Each processing element performs data processing operations in connection with commands, at least some of the processing elements performing the data processing operations in connection with the commands in messages they receive over the control network. Each processing element also generates and receives data transfer messages, each including an address portion containing an address, for transfer to another processing element as identified by the address. At least one of the processing elements further generates the control network messages for transfer over the communications router. The communications router comprises router nodes interconnected in the form of a "fat-tree," and the control network comprises control network nodes interconnected in the form of a tree, with the processing elements being connected at the leaf nodes of the respective communications router and control network.
Abstract:
A method of associating multiple user endpoints (UEs) with a single IMS session in an IMS network having a serving node for controlling at least one IMS session for a user and at least a first access network for providing access to UEs. The method involves associating a first UE with the user and with an IMS session; discovering a second UE in proximity to the first UE; discovering information about the second UE; communicating the information about the second UE to the serving node; the serving node utilizing computer-implemented policy logic to determine whether to associate the second UE with the user and the IMS session; and, if the policy logic determines that the second UE is to be associated, the serving node associating the second UE with the IMS session while retaining the association with the first UE.