Abstract:
A multilayer parallel processing apparatus. The multilayer parallel processing apparatus includes two or more hierarchical parallel processing units, each configured to process flow data of the hierarchy allocated thereto when flow data configured with two or more hierarchies are input, and a common database configured to be accessed by the two or more hierarchical parallel processing units and to store the processing results of each of the hierarchical parallel processing units.
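As a rough illustration only (not the patented implementation), the sketch below models two hierarchical processing units that each handle one hierarchy of an input flow record and write their results to a shared store; all class names, field names, and the "l2"/"l3" hierarchy labels are hypothetical.

```python
# Minimal sketch of hierarchy-partitioned parallel processing with a shared
# result store. Names (HierarchicalUnit, CommonDatabase, "l2"/"l3") are
# illustrative assumptions, not taken from the abstract.
from concurrent.futures import ThreadPoolExecutor
from threading import Lock


class CommonDatabase:
    """Shared store accessed by every hierarchical processing unit."""
    def __init__(self):
        self._lock = Lock()
        self._results = {}

    def put(self, flow_id, hierarchy, result):
        with self._lock:
            self._results.setdefault(flow_id, {})[hierarchy] = result

    def get(self, flow_id):
        with self._lock:
            return dict(self._results.get(flow_id, {}))


class HierarchicalUnit:
    """Processes only the hierarchy allocated to it."""
    def __init__(self, hierarchy, db):
        self.hierarchy = hierarchy
        self.db = db

    def process(self, flow):
        fields = flow["layers"].get(self.hierarchy)
        if fields is not None:                      # only handle own hierarchy
            self.db.put(flow["id"], self.hierarchy, f"processed {fields}")


if __name__ == "__main__":
    db = CommonDatabase()
    units = [HierarchicalUnit("l2", db), HierarchicalUnit("l3", db)]
    flow = {"id": 1, "layers": {"l2": "eth header", "l3": "ip header"}}
    with ThreadPoolExecutor(max_workers=len(units)) as pool:
        for unit in units:
            pool.submit(unit.process, flow)
    print(db.get(1))   # results from both hierarchies for flow 1
```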
Abstract:
A parallel processing system determines whether to drive all or only some of its processors to process input data, based on the capacity or the time required for processing the input data. The system also temporarily stores the data processed and output by the respective processors, and outputs them when an output time calculated from the traffic processing time for the input data is reached.
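The toy sketch below illustrates the two ideas in the abstract under assumed numbers: driving only as many processors as the input load requires, and releasing each buffered result at a calculated output time. The capacity constant and the timing formula are assumptions for illustration, not values from the patent.

```python
# Hypothetical sketch: pick how many processors to drive from the input load,
# then release each processed item at arrival time + processing time.
import heapq

TOTAL_PROCESSORS = 4
CAPACITY_PER_PROCESSOR = 100   # units of data one processor handles per interval (assumed)


def processors_to_drive(input_load):
    """Drive only as many processors as the input load requires."""
    needed = -(-input_load // CAPACITY_PER_PROCESSOR)   # ceiling division
    return min(max(needed, 1), TOTAL_PROCESSORS)


def schedule_outputs(arrivals, processing_time):
    """Buffer processed data and emit it at its calculated output time."""
    buffer = [(t + processing_time, data) for t, data in arrivals]
    heapq.heapify(buffer)
    while buffer:
        out_time, data = heapq.heappop(buffer)
        yield out_time, data


if __name__ == "__main__":
    print(processors_to_drive(250))          # -> 3 of 4 processors driven
    for when, item in schedule_outputs([(0, "a"), (1, "b")], processing_time=5):
        print(when, item)                     # "a" released at 5, "b" at 6
```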
Abstract:
A time stamping apparatus and method for network timing synchronization are provided. A receiving apparatus receives data from a transmitting apparatus and generates a synchronization pulse signal synchronized with a local clock of the transmitting apparatus based on the received data, where the received data include information on the transmission time of the data measured using the local clock of the transmitting apparatus. The receiving apparatus measures the reception time of the data using the synchronization pulse signal. Therefore, accurate network timing synchronization may be achieved.
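A hedged numerical sketch of the timestamping idea follows: the receiver counts pulses of the recovered synchronization signal to express the reception time in the transmitter's timebase and compares it with the transmit timestamp carried in the data. The pulse period and all numbers are assumptions for illustration.

```python
# Illustrative sketch (not the patented circuit): count recovered sync pulses
# to timestamp reception against the transmitter's clock.
PULSE_PERIOD_NS = 8          # assumed period of the recovered synchronization pulse


def reception_timestamp(pulse_count, pulse_period_ns=PULSE_PERIOD_NS):
    """Reception time measured by counting recovered sync pulses."""
    return pulse_count * pulse_period_ns


def one_way_delay(tx_time_ns, rx_pulse_count):
    """Delay estimate: reception timestamp minus transmission timestamp,
    both expressed against the transmitter's local clock."""
    return reception_timestamp(rx_pulse_count) - tx_time_ns


if __name__ == "__main__":
    # the transmitter stamped the frame at 1000 ns; the receiver saw it at pulse 150
    print(one_way_delay(tx_time_ns=1000, rx_pulse_count=150))   # -> 200 ns
```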
Abstract:
Provided are a method and an apparatus for synchronizing the time of day (TOD) in a convergent network, wherein the TOD is received from a time server connected in the convergent network and is provided to a terminal connected in a wired or wireless network, specifically a terminal connected in a heterogeneous network, that requires TOD information. The apparatus includes a time server that provides standard TOD information, a gateway or a host personal computer (PC) that provides the standard TOD information of the time server to the terminal at the 3rd layer or lower of the open system interconnection (OSI) 7-layer model rather than at an upper layer, and the terminal, which adjusts its local clock according to the provided standard TOD information. According to the method and apparatus, the terminal not only maintains a very precise TOD by obtaining the TOD information of the time server periodically or when required, but also obtains the TOD information without using application software for processing it. Accordingly, power consumption of the terminal is decreased.
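As a minimal sketch of only the terminal-side clock adjustment, assuming the TOD arrives in a low-layer frame and is applied as a simple offset to the local clock (the frame format, delay compensation, and names below are assumptions, not the patent's design):

```python
# Hedged sketch: the terminal aligns its local clock to the standard TOD
# received from the time server by keeping a correction offset.
import time


class TerminalClock:
    def __init__(self):
        self._offset = 0.0                    # correction applied to the local clock

    def adjust(self, server_tod, local_receive_time):
        """Align local time to the standard TOD provided by the time server."""
        self._offset = server_tod - local_receive_time

    def now(self):
        return time.time() + self._offset


if __name__ == "__main__":
    clk = TerminalClock()
    rx = time.time()
    clk.adjust(server_tod=rx + 0.25, local_receive_time=rx)   # server 250 ms ahead
    print(round(clk.now() - time.time(), 2))                  # ~0.25
```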
Abstract:
Provided are a time synchronization method allowing a fixed time delay and a bridge that is interposed between a master and a slave, according to the method. In the bridge, a predetermined time after the synchronization packet is set as an output time of the synchronization packet and the synchronization packet is output at the output time. Accordingly, it is possible to delay synchronization packets in the bridge for the same time, thereby increasing the time synchronization precision.
Abstract:
A method for scheduling an input and output buffered ATM or packet switch and, more particularly, a method for cell-scheduling an input and output buffered switch adapted to a high-speed, large-scale switch is provided. The input and output buffered switch has multiple switching planes, and this structure is used to compensate for the performance degradation of an input buffered switch caused by head-of-line (HOL) blocking. The input and output buffered switch consists of input buffer modules, each grouping several input ports, and output buffer modules, each grouping several output ports, and each input buffer module has a FIFO queue for each associated output buffer module. In the input and output buffered switch having multiple switching planes, cell scheduling is carried out using a simple iterative matching (SIM) method. The SIM method consists of three operations, namely a request operation, a grant operation, and an accept operation, and these operations are iterated several times in one cell period, so that matching efficiency can be increased. Each input buffer module simultaneously determines multiple FIFO queues to be served in one cell period, so the SIM method, with its multiple-selection capability, operates faster and performs better than conventional scheduling methods.
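A rough sketch of one request/grant/accept iteration between input and output modules is shown below. The lowest-index tie-breaking is a simplifying assumption; the abstract does not specify the arbitration rule or the per-plane bookkeeping, so this only conveys how repeated iterations grow the match.

```python
# Sketch of iterative request/grant/accept matching (one switching plane,
# lowest-index arbitration assumed for simplicity).
def sim_iteration(requests, matched_in, matched_out):
    """requests[i] is the set of output modules input module i has cells for."""
    # Request: every unmatched input requests all outputs it has traffic for.
    grants = {}
    for i, outs in requests.items():
        if i in matched_in:
            continue
        for j in outs:
            if j not in matched_out:
                grants.setdefault(j, []).append(i)
    # Grant: each unmatched output grants one requesting input.
    accepts = {}
    for j, inputs in grants.items():
        accepts.setdefault(min(inputs), []).append(j)
    # Accept: each input accepts one grant; the pair becomes matched.
    for i, outs in accepts.items():
        j = min(outs)
        matched_in[i], matched_out[j] = j, i


if __name__ == "__main__":
    reqs = {0: {0, 1}, 1: {0, 2}, 2: {1, 2}}
    m_in, m_out = {}, {}
    for _ in range(3):                     # a few iterations per cell period
        sim_iteration(reqs, m_in, m_out)
    print(m_in)                            # {0: 0, 1: 2, 2: 1} -- a full match
```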
Abstract:
A two-dimensional round-robin scheduling method with multiple selection is provided. The two-dimensional round-robin scheduling method in accordance with an embodiment of the present invention includes the following steps. The first step checks whether a request is received from the input buffer module and builds an m×m request matrix r(i,j), i,j=1, . . . , m. The second step sets an m×m search pattern matrix d(i,j), i,j=1, . . . , m; the search pattern matrix describes the search sequence, S=1, . . . , m. The third step initializes the elements of an m×m allocation matrix a(i,j), i,j=1, . . . , m; the allocation matrix contains information on whether a transmission request is accepted and on which switching plane the accepted request uses in transmission. The fourth step examines the request matrix in accordance with the search sequence S and finds the r(i,j) that sent a request. The fifth step sets a(i,j) for all (i,j) pairs found in the fourth step so that the elements of the allocation matrix in the ith row have different values in the range from 1 to n and the elements of the allocation matrix in the jth column have different values in the range from 1 to n. The sixth step repeats the fourth and fifth steps as the search sequence S is increased from 1 to m by 1.
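The simplified sketch below walks through these steps with a diagonal search pattern: requests are scanned diagonal by diagonal, and an accepted request is assigned the lowest switching-plane number not yet used in its row or column, so rows and columns keep distinct plane values. The diagonal pattern, the plane-selection rule, and the example sizes are assumptions; the abstract does not fix them.

```python
# Simplified 2D round-robin sketch: m input/output modules, n switching planes.
def two_dimensional_rr(request, m, n):
    alloc = [[0] * m for _ in range(m)]              # 0 = not allocated
    used_row = [set() for _ in range(m)]             # planes used per row
    used_col = [set() for _ in range(m)]             # planes used per column
    for s in range(m):                               # search sequence S = 1..m
        for i in range(m):
            j = (i + s) % m                          # assumed diagonal search pattern
            if not request[i][j]:
                continue
            free = set(range(1, n + 1)) - used_row[i] - used_col[j]
            if free:                                 # accept on a free plane
                plane = min(free)
                alloc[i][j] = plane
                used_row[i].add(plane)
                used_col[j].add(plane)
    return alloc


if __name__ == "__main__":
    req = [[1, 1, 0],
           [1, 0, 1],
           [0, 1, 1]]
    for row in two_dimensional_rr(req, m=3, n=2):
        print(row)        # rows and columns never repeat a plane number
```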
Abstract:
A method for using a nibble (partial bits of a word) inversion code in a network system includes the steps of: a) adding one redundancy bit to n-bit source data and generating a pre-code, n being an even number of 2 or greater; b) determining the number of transitions in the generated pre-code; c) determining the pre-code as the code word if the number of transitions in the pre-code is greater than or equal to 1+n/2 in the determining result; d) inverting alternate bits, including the redundancy bit, among the bits constituting the pre-code and generating the code word, if the number of transitions in the pre-code is less than n/2 in the determining result; e) determining the pre-code as the code word in case the number of transitions in the pre-code is equal to n/2 and, simultaneously, the source data is not in-band signaling and not a special word in the determining result; and f) inverting the nibble among the bits constituting the pre-code and generating the code word, in case the number of transitions in the pre-code is equal to n/2 and, simultaneously, the source data is in-band signaling or a special word in the determining result.
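The hedged sketch below implements the selection logic of steps a) through f). The value of the redundancy bit, which bits count as the "alternate" bits, and which nibble is inverted are assumptions made here for illustration; the abstract does not specify them.

```python
# Sketch of the pre-code / code-word selection rules (assumed bit conventions).
def transitions(bits):
    """Number of adjacent-bit transitions in a bit list."""
    return sum(a != b for a, b in zip(bits, bits[1:]))


def encode(source, in_band_or_special=False, redundancy_bit=0):
    n = len(source)                                   # n even, n >= 2
    pre = [redundancy_bit] + list(source)             # step a): add one redundancy bit
    t = transitions(pre)                              # step b): count transitions
    if t >= 1 + n // 2:                               # step c): keep the pre-code
        return pre
    if t < n // 2:                                    # step d): invert alternate bits
        return [b ^ 1 if i % 2 == 0 else b for i, b in enumerate(pre)]
    if not in_band_or_special:                        # step e): t == n/2, ordinary word
        return pre
    half = 1 + n // 2                                 # step f): invert one (assumed) nibble
    return pre[:half] + [b ^ 1 for b in pre[half:]]


if __name__ == "__main__":
    print(encode([0, 0, 0, 0]))        # few transitions -> alternate bits inverted
    print(encode([0, 1, 0, 1]))        # enough transitions -> pre-code kept
```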
Abstract:
A voltage controlled ring oscillator having a reduced voltage controlled oscillator (VCO) gain, achieved by controlling only the fall time of the VCO period using integrated circuits and logic circuits. The VCO includes a mixer/inverter circuit, a logic circuit, a delay/inverter circuit, a first delay circuit, a second delay circuit, and a third delay circuit. The VCO gain is reduced by controlling only one pulse width of the logic-level High and one pulse width of the logic-level Low of the oscillation period. Furthermore, the VCO can be logically controlled by using a simple logic circuit as a component of the VCO.
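A toy numerical illustration of why tuning only part of the period lowers the gain follows; the delay values and voltage step are invented for the example and do not model the actual circuit.

```python
# Assumed numbers: if only the Low half-period responds to the control voltage,
# the frequency sensitivity (VCO gain) is roughly half of the fully tuned case.
def freq_hz(t_high_ns, t_low_ns):
    return 1e9 / (t_high_ns + t_low_ns)


def gain_hz_per_volt(tune_both):
    """Approximate VCO gain from a small control-voltage step."""
    dv = 0.1                                   # volts (assumed step)
    d_ns = 0.2                                 # assumed delay change per 0.1 V
    t_h, t_l = 5.0, 5.0                        # nominal half-periods in ns (assumed)
    if tune_both:
        f1, f2 = freq_hz(t_h, t_l), freq_hz(t_h - d_ns, t_l - d_ns)
    else:
        f1, f2 = freq_hz(t_h, t_l), freq_hz(t_h, t_l - d_ns)
    return (f2 - f1) / dv


if __name__ == "__main__":
    print(gain_hz_per_volt(tune_both=True))    # larger gain
    print(gain_hz_per_volt(tune_both=False))   # roughly half as large
```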
Abstract:
Provided are a data flow parallel processing apparatus and method. The data flow parallel processing apparatus may include a flow discriminating unit to discriminate the flow of input first data, a processor allocating unit to allocate, to the first data, a processor that is not operating among a plurality of processors, a sequence determining unit to determine a sequence number of the first data when second data having the same flow as the discriminated flow is being processed by any one of the plurality of processors, and an alignment unit to receive the first data processed by the allocated processor and to output the received first data based on the determined sequence number.
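An illustrative sketch of only the alignment stage follows: data of the same flow carry increasing sequence numbers, may finish on different processors out of order, and are released strictly in sequence order per flow. The class name and data layout are assumptions for this sketch.

```python
# Sketch of per-flow reordering by sequence number at the alignment stage.
from collections import defaultdict


class Aligner:
    def __init__(self):
        self._pending = defaultdict(dict)     # flow -> {sequence number: result}
        self._next = defaultdict(int)         # flow -> next sequence number to release

    def push(self, flow, seq, result):
        """Buffer a processed item and emit everything now releasable in order."""
        self._pending[flow][seq] = result
        out = []
        while self._next[flow] in self._pending[flow]:
            out.append(self._pending[flow].pop(self._next[flow]))
            self._next[flow] += 1
        return out


if __name__ == "__main__":
    aligner = Aligner()
    # seq 1 of flow "A" finishes before seq 0 (processed on a different processor)
    print(aligner.push("A", 1, "second"))     # [] -> held until seq 0 arrives
    print(aligner.push("A", 0, "first"))      # ['first', 'second']
```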