Abstract:
A fixed-length cell multiplexing/distributing process is disclosed. The apparatus for this process includes a plurality of fixed-length cell transmitting apparatuses, each of which sends a fixed-length cell, and a fixed-length cell multiplexing apparatus which multiplexes the fixed-length cells received from the fixed-length cell transmitting apparatuses and outputs the multiplexed cells. One of the plurality of fixed-length cell transmitting apparatuses is configured as a highest-priority fixed-length cell transmitting apparatus. Further, the highest-priority fixed-length cell transmitting apparatus comprises a fixed-length cell storage section and a fixed-length cell read control section. The fixed-length cell multiplexing apparatus comprises a plurality of FIFO storage sections, a FIFO control section, and a multiplexing section. With this structure, information to be notified is divided into fixed-length cells for transmission. The fixed-length cell from the highest-priority fixed-length cell transmitting apparatus is output directly without being buffered, thereby accurately performing first-come, first-served processing for fixed-length cells.
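The bypass behavior described above can be sketched as follows. This is an illustrative model, not the patent's implementation: ordinary sources are buffered in per-source FIFOs, while cells from the designated highest-priority source skip buffering and are emitted first. All class and method names here are assumptions for illustration.

```python
from collections import deque

class CellMultiplexer:
    """Toy model of a fixed-length cell multiplexer with a priority bypass."""

    def __init__(self, num_sources, priority_source):
        self.fifos = [deque() for _ in range(num_sources)]
        self.priority_source = priority_source
        self.bypass = deque()  # holding slot for priority cells awaiting output

    def receive(self, source, cell):
        if source == self.priority_source:
            # Priority cells are not queued behind other traffic.
            self.bypass.append(cell)
        else:
            self.fifos[source].append(cell)

    def output_one(self):
        # Priority cells are always emitted ahead of buffered cells.
        if self.bypass:
            return self.bypass.popleft()
        for fifo in self.fifos:
            if fifo:
                return fifo.popleft()
        return None
```

Within each ordinary FIFO, cells still leave in arrival order, preserving first-come, first-served processing among non-priority sources.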
Abstract:
In an ATM cell multiplexer which multiplexes ATM cells output from a plurality of cards, a sequential cell arrival number, common across the ATM cells output from the plurality of cards, is added to each cell as it is written into the memories. The read side checks the sequentiality of the cell arrival numbers so that the ATM cells are read in the order of arrival, and the cell arrival number read at that time is used as the number to be checked at the next read. Also, when any of the cards is pulled out, the cards not pulled out continue to operate normally, and only the address counter of the pulled-out card is cleared. Even when the cell arrival numbers become non-sequential because of the cleared address counter, the maximum cell arrival number is set to a value which enables the oldest cells to be read first. Furthermore, by dividing the storage memories according to priority degrees, read control can distinguish the ATM cells of a real-time system from the ATM cells of a data system.
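The arrival-number mechanism can be illustrated with a simplified sketch. A global counter tags each written cell, and the reader always takes the head cell with the smallest arrival number, which restores arrival order across per-card memories; pulling out a card clears only that card's memory. Class and method names are illustrative assumptions.

```python
from collections import deque

class ArrivalOrderedMux:
    """Toy model of reading ATM cells in arrival order across per-card memories."""

    def __init__(self, num_cards):
        self.queues = [deque() for _ in range(num_cards)]
        self.arrival_counter = 0  # sequential number common to all cards

    def write(self, card, cell):
        self.queues[card].append((self.arrival_counter, cell))
        self.arrival_counter += 1

    def clear_card(self, card):
        # Emulates pulling out one card: only its memory is cleared,
        # the remaining cards keep operating normally.
        self.queues[card].clear()

    def read(self):
        heads = [(q[0][0], i) for i, q in enumerate(self.queues) if q]
        if not heads:
            return None
        _, card = min(heads)  # oldest (smallest arrival number) first
        return self.queues[card].popleft()[1]
```

The real hardware checks sequentiality against the previously read number rather than scanning all heads, but the observable order is the same in this simplified model.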
Abstract:
An optical network unit and an optical line terminal which efficiently control the data receiving and dechurning processes in a passive optical network. In a churning parameter memory subsystem, a first memory bank stores churning parameters that are currently used, while a second memory bank stores updates made to the churning parameters. Under the control of the churning parameter memory subsystem, those first and second memory banks change their roles with each other at a churning key updating time point. A data dechurning unit receives a data stream consisting of a plurality of frames and dechurns the information contained in the data stream, according to the stored churning parameters. When an update is done to the parameters in a certain frame, the data dechurning unit makes the update effective at the next frame, thus starting data dechurning operations from the next frame.
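The double-bank memory scheme can be sketched as follows. This is a much-simplified model: dechurning reads from the active bank, updates land in the standby bank, and the banks swap roles at the key-updating point, so a key received in one frame takes effect from the next frame. The toy XOR stands in for the actual churning function; all names are illustrative.

```python
class ChurningParamBanks:
    """Toy model of a two-bank churning-parameter memory."""

    def __init__(self, initial_key):
        self.banks = [initial_key, initial_key]
        self.active = 0  # index of the bank used for dechurning

    def update(self, new_key):
        # Updates always land in the standby bank; the current
        # frame keeps dechurning with the old key.
        self.banks[1 - self.active] = new_key

    def swap(self):
        # Called at the churning-key updating time point (frame boundary):
        # the banks exchange roles, making the update effective.
        self.active = 1 - self.active

    def dechurn(self, data):
        key = self.banks[self.active]
        return bytes(b ^ key for b in data)  # stand-in for real dechurning
```

The benefit of the role swap is that the update never disturbs the bank being read, so dechurning of the in-flight frame stays consistent.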
Abstract:
A delay adjustment unit and optical network unit which efficiently adjust the delay time of upstream data to improve the quality of data transmission services. Upstream transmission data is segmented into small data blocks. A downstream receiver receives control information indicating which upstream data blocks are granted or not granted to the unit. It also receives an equalization delay update request that commands the unit to update its current equalization delay parameter. When this equalization delay update request is received, a delay adjustment controller controls the delay of each data block, depending on whether the transmission of each subsequent data block is granted. An upstream transmitter then transmits the granted data blocks with the adjusted delay times.
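One way to picture this mechanism is the following sketch, which simplifies the patent's timing rules considerably: a pending equalization-delay update is applied per data block, and only granted blocks are transmitted, each tagged with the delay in effect at its turn. The class, its methods, and the grant representation are all assumptions for illustration.

```python
class DelayAdjuster:
    """Toy model of per-block equalization delay adjustment under grants."""

    def __init__(self, delay):
        self.delay = delay          # current equalization delay parameter
        self.pending_delay = None   # set by an update request, not yet applied

    def request_update(self, new_delay):
        self.pending_delay = new_delay

    def transmit(self, blocks, grants):
        sent = []
        for block, granted in zip(blocks, grants):
            if self.pending_delay is not None:
                # Apply the requested update as the blocks are processed.
                self.delay = self.pending_delay
                self.pending_delay = None
            if granted:
                sent.append((block, self.delay))  # ungranted blocks are held
        return sent
```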
Abstract:
An optical subscriber line terminal unit and a state transition control method are provided which are capable of stabilizing operation. State information storing means, which is a nonvolatile memory, stores state information about the unit. Startup preparatory state shifting means causes a shift to a startup preparatory state at startup. Flag setting means sets an emergency stop state flag during the period of the startup preparatory state if it is judged based on the state information that a state before the startup is an emergency stop state. State transition control means causes a shift from the startup preparatory state to the emergency stop state if the emergency stop state flag is set, and causes a shift from the startup preparatory state to an initial state if the emergency stop state flag is not set.
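The startup decision rule above can be condensed into a small sketch. State names and the class structure are illustrative, not from the patent: nonvolatile state information is examined during the startup preparatory state, the emergency stop flag is set there if the pre-startup state was an emergency stop, and the flag alone decides the next transition.

```python
EMERGENCY_STOP = "emergency_stop"
STARTUP_PREP = "startup_preparatory"
INITIAL = "initial"

class StateController:
    """Toy model of the startup state-transition rule."""

    def __init__(self, stored_state):
        self.stored_state = stored_state  # read from nonvolatile memory
        self.state = None
        self.emergency_flag = False

    def start(self):
        # Shift to the startup preparatory state first.
        self.state = STARTUP_PREP
        # Flag setting: judge the pre-startup state from stored information.
        if self.stored_state == EMERGENCY_STOP:
            self.emergency_flag = True
        # State transition control: the flag decides the next state.
        self.state = EMERGENCY_STOP if self.emergency_flag else INITIAL
        return self.state
```

Because the decision keys off persisted state rather than volatile runtime state, the unit returns to the emergency stop state even after a power cycle, which is what stabilizes operation.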
Abstract:
A packet processing apparatus includes a packet buffer with a queue for storing packets. An actual queue length/position discriminator acquires, at every sampling period, the latest actual queue length indicating the occupancy status of the queue, determines the positional relationship of the actual queue length to a random early detection interval, and outputs the positional relationship as position information. A discard probability computation processor calculates, at every sampling period, a packet discard probability based on the position information. A packet discard processor discards, at every sampling period and in accordance with the discard probability, packets that are not yet stored in the queue. If it is judged from the position information that the actual queue length is within the random early detection interval, the discard probability computation processor calculates an average queue length, and then calculates the discard probability from the ratio of a discard target to a reception target.
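The per-sampling-period computation resembles classic random early detection (RED), and a standard RED sketch can illustrate it. Note one simplification: the patent derives the probability from the ratio of a discard target to a reception target, whereas this sketch substitutes the conventional linear interpolation over the RED interval; the threshold and weight names follow standard RED notation, not the patent.

```python
class RedDropper:
    """Simplified RED-style discard probability computation."""

    def __init__(self, min_th, max_th, max_p, weight=0.002):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.weight = weight  # EWMA weight for the average queue length
        self.avg = 0.0

    def discard_probability(self, actual_len):
        if actual_len < self.min_th:
            return 0.0   # below the RED interval: accept everything
        if actual_len >= self.max_th:
            return 1.0   # above the interval: discard everything
        # Inside the interval: update the average queue length...
        self.avg += self.weight * (actual_len - self.avg)
        # ...and interpolate the discard probability linearly over it.
        frac = (self.avg - self.min_th) / (self.max_th - self.min_th)
        return max(0.0, min(self.max_p * frac, self.max_p))
```

Computing the average only inside the interval, as the abstract describes, saves work when the queue is clearly empty or clearly full, where the decision is fixed anyway.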