Abstract:
The invention concerns a receiving circuit of a communications link comprising: a first data buffer (336) configured to input, under control of a first clock signal (CLK_V"), data of a first data stream transmitted by a transmitting circuit (302), and to generate a credit trigger signal indicating when a data value is read from the first data buffer (336), wherein data is read from the first data buffer (336), or from a further data buffer (338) coupled to the output of the first data buffer (336), under control of a second clock signal (CLK_R); and a credit generation circuit (342) configured to generate, based on the credit trigger signal, a credit signal for transmission to the transmitting circuit (302) under control of the first clock signal (CLK_V"), the credit signal indicating that one or more further data values of the first data stream can be transmitted by the transmitting circuit.
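The credit mechanism above can be sketched in software. The following is a minimal single-threaded Python model, with all class and method names invented here and the two clock domains collapsed into ordinary method calls:

```python
from collections import deque

class CreditedReceiver:
    """Toy model of credit-based flow control: each read from the
    receive buffer triggers a credit, which is later returned to the
    transmitter so it may send one more value."""

    def __init__(self, depth):
        self.buffer = deque()
        self.depth = depth
        self.credits_to_send = depth  # initial credits grant the full buffer

    def take_credits(self):
        """Credits handed back to the transmitter (the credit signal)."""
        credits, self.credits_to_send = self.credits_to_send, 0
        return credits

    def receive(self, value):
        assert len(self.buffer) < self.depth, "transmitter overran its credits"
        self.buffer.append(value)

    def read(self):
        """Read under the second clock; the pop is the credit trigger."""
        value = self.buffer.popleft()
        self.credits_to_send += 1
        return value

class Transmitter:
    def __init__(self, rx):
        self.rx, self.credits = rx, 0

    def try_send(self, value):
        self.credits += self.rx.take_credits()
        if self.credits == 0:
            return False  # no credit: receiver buffer may be full
        self.credits -= 1
        self.rx.receive(value)
        return True
```

The transmitter can never overflow the receiver because it only sends against credits, and new credits exist only once buffer slots have been freed by reads.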
Abstract:
A convolutional interleaver, included in a time interleaver and performing convolutional interleaving, includes: a first switch that switches a connection destination of an input of the convolutional interleaver to one end of one of a plurality of branches; FIFO memories provided in the plurality of branches except one branch, wherein the number of FIFO memories differs among the plurality of branches; and a second switch that switches a connection destination of an output of the convolutional interleaver to another end of one of the plurality of branches. The first and second switches switch the connection destination when as many cells as there are codewords per frame have passed, by switching the corresponding branch of the connection destination sequentially and repeatedly among the plurality of branches.
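The branch structure above follows the classic convolutional interleaver: branch 0 has no FIFO and each subsequent branch has progressively more delay. A short Python sketch of that classic form (rotating the switches once per cell; the patent's switches instead advance after each block of cells equal to the codewords per frame, which would wrap this loop in a per-block counter):

```python
from collections import deque

def convolutional_interleave(cells, num_branches, depth_step=1, fill=None):
    """Sketch of a convolutional interleaver: branch i carries a FIFO of
    i * depth_step cells (branch 0 has none), and the input and output
    switches rotate in lockstep over the branches."""
    branches = [deque([fill] * (i * depth_step)) for i in range(num_branches)]
    out = []
    for k, cell in enumerate(cells):
        branch = branches[k % num_branches]  # switches select branches round-robin
        branch.append(cell)                  # cell enters one end of the branch
        out.append(branch.popleft())         # oldest cell leaves the other end
    return out
```

Branch 0's empty deque passes its cell straight through, while deeper branches delay theirs, spreading adjacent cells apart in time.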
Abstract:
Prolonged inability to reproduce data in a receiving apparatus is avoided. A buffer (1b) temporarily stores data received from a network (2a) by a receiving means (1a). An output mode switching means (1c) switches the mode in which the data received by the receiving means (1a) is output to the buffer (1b) between FIFO and FILO, in accordance with the amount of data temporarily stored in the buffer (1b). For example, if the amount of data temporarily stored in the buffer (1b) falls below a given threshold value of the buffer (1b), data is stored in the buffer (1b) in FIFO order. If it exceeds the threshold value, data is stored in the buffer (1b) in FILO order. A sending means (1d) outputs data taken from the buffer (1b), in FIFO or FILO order, to a network (2b).
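A minimal sketch of the threshold-driven mode switch, assuming a simple deque-backed model (the class name and the choice to apply the mode at take-out time, which is equivalent to the storing-order description above, are illustrative):

```python
from collections import deque

class ModeSwitchingBuffer:
    """Buffer that serves data in FIFO order while lightly loaded and
    switches to FILO once occupancy reaches a threshold, so the freshest
    data is sent first under congestion."""

    def __init__(self, threshold):
        self.buf = deque()
        self.threshold = threshold

    @property
    def mode(self):
        # Below the threshold: FIFO (preserve arrival order).
        # At or above it: FILO (newest data takes priority).
        return "FIFO" if len(self.buf) < self.threshold else "FILO"

    def store(self, item):
        self.buf.append(item)

    def take(self):
        # FIFO pops the oldest item; FILO pops the newest.
        return self.buf.popleft() if self.mode == "FIFO" else self.buf.pop()
```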
Abstract:
Scheduler methods and apparatus utilize a weight limited FIFO (WLF) method to provide weighted per-connection queuing while maximizing preservation of cell arrival order, thus minimizing additional cell delay variation (CDV) added during scheduling. The invention minimizes additional CDV of a connection until the connection exceeds its fair share of resource utilization.
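The weight-limited FIFO idea can be illustrated with a small Python sketch. The accounting below (a sliding window of recent service slots) is an invented simplification, not the patent's method: cells leave in arrival order while every connection stays within its weighted share, and cells of an over-limit connection are held back so compliant connections keep their arrival order and accrue no extra CDV.

```python
from collections import deque

def wlf_schedule(arrivals, weights, window):
    """Serve (connection_id, cell) pairs in arrival order, deferring any
    connection that has exceeded its weighted share of the last `window`
    service slots."""
    fifo = list(arrivals)          # cells in arrival order
    served = deque(maxlen=window)  # recent service history
    out = []
    while fifo:
        over_limit = set()
        chosen = None
        for idx, (conn, _cell) in enumerate(fifo):
            if conn in over_limit:
                continue           # skip all cells of an over-limit connection
            if served.count(conn) < weights[conn] * window:
                chosen = idx       # first compliant cell, in arrival order
                break
            over_limit.add(conn)
        if chosen is None:
            chosen = 0             # everyone over limit: plain FIFO fallback
        conn, cell = fifo.pop(chosen)
        served.append(conn)
        out.append((conn, cell))
    return out
```

Skipping every cell of an over-limit connection (not just its head cell) preserves per-connection cell order while letting compliant connections overtake it.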
Abstract:
Disclosed is a data caching method, comprising: storing a cell, according to its input port number, in a corresponding first-in first-out queue; determining that a cell to be dequeued can be dequeued in the current Kth cycle; scheduling the cell to be dequeued; acquiring the actual number of splicing units occupied by the cell to be dequeued; and storing the cell to be dequeued, in a cell-splicing manner, in a register having the same bit width as the bus. Determining that the cell to be dequeued can be dequeued is based on a first back-pressure count value of the (K-1)th cycle being less than or equal to a first preset threshold value, where the first back-pressure count value of the (K-1)th cycle is obtained from an estimated number of splicing units occupied when the previous cell to be dequeued was dequeued, the number of splicing units the bus can transmit per cycle, and the first back-pressure count value of the (K-2)th cycle. Also disclosed are a data caching device and a storage medium.
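The back-pressure recurrence above can be written out as two small functions. The flooring at zero is an assumption (an empty splicing register cannot owe transfers); the function names are invented:

```python
def can_dequeue(bp_prev, threshold):
    """A cell may be dequeued in cycle K when the back-pressure count of
    cycle K-1 is at or below the preset threshold."""
    return bp_prev <= threshold

def next_backpressure(bp_two_ago, est_units_prev_cell, units_per_cycle):
    """Back-pressure count for cycle K-1: the estimated splicing units
    occupied by the previously dequeued cell, plus the count from cycle
    K-2, minus the units the bus can transmit each cycle."""
    return max(0, bp_two_ago + est_units_prev_cell - units_per_cycle)
```

Intuitively, the count tracks how many splicing units are still queued in the register ahead of the bus; dequeuing is gated so the register never accumulates more than the threshold allows.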
Abstract:
The present disclosure relates to a data processing device, a receiving device, a data processing method, and a program capable of suppressing degradation in quality in a case of reproducing data. Packet selection units select, from a multiplexed stream obtained by multiplexing a plurality of service streams, the packets constituting the respective service streams and generate one service stream. Insertion units insert null packets with time information, in which predetermined time information is given to the payloads, into the spaces that become empty when a predetermined number of the packet selection units generate the one service stream. Thereafter, in the streams that have been demultiplexed after being multiplexed, the timing for outputting the null packets with time information is adjusted with reference to the time information given to them. The present technology can be applied to, for example, a receiving device that can receive a plurality of streams.
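A sketch of the insertion step, assuming a dict-based packet model (the function name, slot clock, and use of the MPEG-TS null PID are illustrative assumptions):

```python
NULL_PID = 0x1FFF  # MPEG-TS null-packet PID, used here for illustration

def build_service_stream(mux, service_id, slots, clock):
    """Pull one service's packets out of a multiplexed stream and pad every
    empty slot with a null packet whose payload carries the time at which
    the slot occurred, so downstream stages can restore output timing."""
    out = []
    selected = [p for p in mux if p["service"] == service_id]
    for slot in range(slots):
        if slot < len(selected):
            out.append(selected[slot])
        else:
            # empty slot: insert a null packet stamped with time information
            out.append({"service": service_id, "pid": NULL_PID,
                        "time": clock(slot)})
    return out
```

After demultiplexing, a reader can release each timestamped null packet at the recorded time, recovering the original packet spacing.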
Abstract:
An annular optical buffer (100) and methods for storing and reading an optical signal are disclosed. The optical buffer (100) includes: a first bent straight-through waveguide (101a), functioning as a transmission bus of an optical signal; multiple optical delay waveguide loops (103), configured to temporarily store optical signals; multiple pairs of optical switches (102), whose quantity is the same as that of the multiple optical delay waveguide loops (103), where each pair of optical switches (102) is configured to control on and off of an optical path that is on two arms of the first bent straight-through waveguide (101a) and two sides of the optical delay waveguide loop (103) corresponding to that pair of optical switches (102); a beam splitter (106), configured to obtain a part of the optical signal by splitting the optical signal that is input from an input end and transfer that part to a controller (105) through a second bent straight-through waveguide (101c); a slow light effect waveguide (104a), configured to slow a transmission rate of an optical signal transmitted within the slow light effect waveguide; and the controller (105), configured to control storage and reading of the optical signal. By means of the foregoing annular optical buffer (100), monolithic integration of an optical buffer and unordered random storage and reading of an optical signal can be implemented.
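The controller's switching logic can be caricatured in software. The model below is purely illustrative (invented names, and none of the optical timing a real controller (105) must handle): each delay loop has a pair of switches that either pass the bus straight through or divert light into/out of the loop, and loops can be written and read in any order.

```python
class OpticalBufferController:
    """Toy model of the controller: pick any free delay loop to store a
    signal, and read any occupied loop back out, in arbitrary order."""

    def __init__(self, num_loops):
        # Switch-pair state per loop: False = bar (bus passes straight
        # through), True = cross (light is coupled into/out of the loop).
        self.cross = [False] * num_loops
        self.stored = [None] * num_loops

    def store(self, signal):
        loop = self.stored.index(None)  # any free loop (unordered storage)
        self.cross[loop] = True         # cross state diverts light into the loop
        self.stored[loop] = signal
        self.cross[loop] = False        # bar state keeps it circulating
        return loop

    def read(self, loop):
        self.cross[loop] = True         # cross state couples the loop back out
        signal, self.stored[loop] = self.stored[loop], None
        self.cross[loop] = False
        return signal
```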
Abstract:
A data caching method for an Ethernet device is provided. The method includes: receiving data frames from various Ethernet interfaces and converting the Ethernet data frames received from the Ethernet interfaces into data frames having a uniform bit width and a uniform encapsulation format; maintaining, in a cache, the cache addresses to which data has already been written and the currently idle cache addresses; receiving a currently idle cache address, generating a write instruction and/or a read instruction for the cache, and performing a write operation and/or a read operation so as to write the data received and processed by an IPC into the currently idle cache address or to read data from the cache; and performing bit conversion and format encapsulation on the data read according to a read request and outputting the converted and encapsulated data through a corresponding Ethernet interface. A data caching system for an Ethernet device is also provided. By means of the data caching method and system provided herein, the expandability and the high-bandwidth storage capacity of a network switching device can be improved, a high bandwidth utilization rate is achieved, and bandwidth utilization can be further improved on the basis of traffic management.
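The address bookkeeping described above amounts to a free list of idle cache addresses plus a map of written addresses. A minimal Python sketch (class and method names are invented; the patent implements this in switching hardware):

```python
from collections import deque

class CacheManager:
    """Track idle and written cache addresses: writes consume an idle
    address, reads return it to the free list."""

    def __init__(self, num_slots):
        self.free = deque(range(num_slots))  # currently idle addresses
        self.cache = {}                      # address -> normalized frame

    def write(self, frame):
        if not self.free:
            raise MemoryError("cache full")
        addr = self.free.popleft()           # hand out an idle address
        self.cache[addr] = frame
        return addr

    def read(self, addr):
        frame = self.cache.pop(addr)         # the address becomes idle again
        self.free.append(addr)
        return frame
```

Recycling addresses this way keeps the cache fully utilized regardless of which interface's frames occupy it, which is what enables the high bandwidth utilization claimed above.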