Abstract:
A method includes assigning each of a plurality of disk write and disk read requests to respective ones of a plurality of queues. Each queue has an occupancy level and a weight. A score is assigned to each of the plurality of queues, based on the occupancy and weight of the respective queue. An operation type is selected to be granted a next disk access. The selection is from the group consisting of disk write, disk read, and processor request. One of the queues is selected based on the score assigned to each queue, if the selected operation type is disk write request or disk read request. The next disk access is granted to the selected operation type and, if the selected operation type is disk write or disk read, to the selected queue.
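A minimal C sketch of this kind of score-based queue selection follows. The scoring rule (occupancy multiplied by weight) and all structure and function names are illustrative assumptions; the method described above only requires that the score be based on each queue's occupancy and weight.

    #include <stdio.h>

    /* Illustrative sketch only: score = occupancy * weight is an assumed
     * scoring rule. */
    enum op_type { OP_DISK_WRITE, OP_DISK_READ, OP_PROCESSOR };

    struct queue {
        int occupancy;   /* number of pending requests in this queue */
        int weight;      /* relative weight of this queue            */
    };

    static int score(const struct queue *q)
    {
        return q->occupancy * q->weight;          /* assumed scoring function */
    }

    /* Select the queue with the highest score; returns -1 if all are empty. */
    static int select_queue(const struct queue *queues, int n)
    {
        int best = -1, best_score = -1;
        for (int i = 0; i < n; i++) {
            if (queues[i].occupancy > 0 && score(&queues[i]) > best_score) {
                best = i;
                best_score = score(&queues[i]);
            }
        }
        return best;
    }

    int main(void)
    {
        struct queue write_queues[3] = { {4, 1}, {2, 5}, {0, 9} };
        enum op_type next = OP_DISK_WRITE;        /* assume writes won arbitration */
        if (next == OP_DISK_WRITE || next == OP_DISK_READ)
            printf("grant next disk access to queue %d\n",
                   select_queue(write_queues, 3));
        return 0;
    }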
Abstract:
A method comprises providing a free buffer pool in a memory including a non-negative number of free buffers that are not allocated to a queue for buffering data. A request is received to add one of the free buffers to the queue. One of the free buffers is allocated to the queue in response to the request, if the queue has fewer than a first predetermined number of buffers associated with a session type of the queue. One of the free buffers is allocated to the queue, if a number of buffers in the queue is at least as large as the first predetermined number and less than a second predetermined number associated with the session type, and the number of free buffers is greater than zero.
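A minimal C sketch of the two-threshold allocation test follows. The names guaranteed_min and session_max stand for the first and second predetermined numbers associated with the session type; the struct layout and names are assumptions made for illustration.

    #include <stdbool.h>

    struct buffer_queue {
        int num_buffers;      /* buffers currently linked into this queue         */
        int guaranteed_min;   /* first predetermined number for the session type  */
        int session_max;      /* second predetermined number for the session type */
    };

    /* Returns true if a free buffer should be allocated to the queue. */
    bool may_allocate(const struct buffer_queue *q, int free_buffers)
    {
        if (q->num_buffers < q->guaranteed_min)
            return true;                          /* below the guaranteed share */
        if (q->num_buffers < q->session_max && free_buffers > 0)
            return true;                          /* borrow from the free pool  */
        return false;
    }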
Abstract:
A method includes storing video data in a disk by way of a first queue comprising a linked list of buffers. Video data are received into the first queue by way of a tail buffer. The tail buffer is at one end of the linked list of buffers in the first queue. Video data are copied from a head buffer to the disk. The head buffer is at another end of the linked list of buffers in the first queue. The video data are displayed in real-time directly from the buffers in the queue, without retrieving the displayed video data from the disk, and without interrupting the storing step.
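The queue described above can be pictured in C as a singly linked list with head and tail pointers: data enter at the tail, are copied to disk from the head, and remain readable in place for display in the meantime. The buffer size, field names, and the write_to_disk callback are assumptions of this sketch.

    #include <stdlib.h>
    #include <string.h>

    #define BUF_SIZE 4096

    struct vbuf {
        unsigned char data[BUF_SIZE];
        size_t len;
        struct vbuf *next;
    };

    struct vqueue {
        struct vbuf *head;   /* oldest buffer: next to be copied to disk */
        struct vbuf *tail;   /* newest buffer: receives incoming video   */
    };

    /* Receive video data into a new tail buffer. */
    void enqueue(struct vqueue *q, const unsigned char *src, size_t len)
    {
        if (len > BUF_SIZE)
            return;
        struct vbuf *b = calloc(1, sizeof(*b));
        if (!b)
            return;
        memcpy(b->data, src, len);
        b->len = len;
        if (q->tail)
            q->tail->next = b;
        else
            q->head = b;
        q->tail = b;
    }

    /* Copy the head buffer to disk and unlink it; display code may read any
     * buffer between head and tail without retrieving data from the disk. */
    void flush_head(struct vqueue *q,
                    void (*write_to_disk)(const unsigned char *, size_t))
    {
        struct vbuf *b = q->head;
        if (!b)
            return;
        write_to_disk(b->data, b->len);
        q->head = b->next;
        if (!q->head)
            q->tail = NULL;
        free(b);
    }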
Abstract:
A hardware accelerated streaming arrangement, especially for RTP (Real-time Transport Protocol) streaming, employs a directing file determining the pointers, header lengths and offsets of a block of one or more data packets to be sent out through a network accelerated streaming system. The directing file is established by a control processor, for example working in the background, and is stored so as to make it possible to determine certain information, including header sizes and pointers to RTP payload and other data, without the need during egress of the data for analysis related to the type of media or protocol concerned. This relieves the control processor of functions that would otherwise require attention, and permits the egress process to proceed in a repetitive manner, preferably relying as far as possible on hardware elements for speed and reserving the control processor's computational capacity for control functions that may be more complex but are infrequent and/or not time sensitive for streaming in real time.
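One way to picture a single directing-file entry is the C structure below. The field names and widths are assumptions; the abstract states only that the file records pointers, header lengths and offsets so that the egress path can assemble RTP packets without per-packet analysis by the control processor.

    #include <stdint.h>

    /* Illustrative directing-file entry for one outgoing packet in a block. */
    struct directing_entry {
        uint64_t header_ptr;      /* pointer to the prebuilt header template     */
        uint16_t header_len;      /* length of the prebuilt headers, in bytes    */
        uint64_t payload_ptr;     /* pointer to the RTP payload in buffer memory */
        uint32_t payload_offset;  /* offset of the payload within its buffer     */
        uint16_t payload_len;     /* number of payload bytes in this packet      */
    };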
Abstract:
A hardware accelerated streaming arrangement, especially for RTP (Real-time Transport Protocol) streaming, directs data packets for one or more streams between sources and destinations, using addressing and handling criteria that are determined in part from control packets and are used to alter or supplement headers associated with the stream content packets. A programmed control processor responds to control packets in RTCP or RTSP format, whereby the handling or direction of RTP packets can be changed. The control processor stores data for the new addressing and handling criteria in a memory accessible to a hardware accelerator, arranged to store the criteria for multiple ongoing streams at the same time. When a content packet is received, its addressing and handling criteria are found in the memory and applied, by action of the network accelerator, without the need for computation by the control processor. The network accelerator operates repetitively to continue to apply the criteria to the packets for a given stream as the stream continues, and can operate as a high data rate pipeline. The processor can be programmed to revise the criteria in a versatile manner, including using extensive computation if necessary, because the processor is relieved of repetitive processing duties accomplished by the network accelerator.
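A rough C sketch of the per-stream criteria memory follows: the control processor installs or revises entries when RTCP or RTSP control packets change a stream's handling, and the accelerator looks up and applies the entry for every RTP content packet of that stream. The table layout, the use of the RTP SSRC as the lookup key, and all names are assumptions made for illustration.

    #include <stdint.h>

    struct stream_criteria {
        uint32_t ssrc;       /* stream identifier used as the lookup key */
        uint32_t dst_ip;     /* current destination address              */
        uint16_t dst_port;   /* current destination port                 */
        uint8_t  valid;
    };

    #define MAX_STREAMS 64
    static struct stream_criteria table[MAX_STREAMS];

    /* Control-processor path: install or revise the criteria for one stream. */
    void update_criteria(uint32_t ssrc, uint32_t ip, uint16_t port)
    {
        int slot = -1;
        for (int i = 0; i < MAX_STREAMS; i++) {
            if (table[i].valid && table[i].ssrc == ssrc) { slot = i; break; }
            if (!table[i].valid && slot < 0)
                slot = i;                    /* remember the first free entry */
        }
        if (slot >= 0)
            table[slot] = (struct stream_criteria){ ssrc, ip, port, 1 };
    }

    /* Accelerator fast path: look up criteria for an incoming content packet. */
    const struct stream_criteria *lookup_criteria(uint32_t ssrc)
    {
        for (int i = 0; i < MAX_STREAMS; i++)
            if (table[i].valid && table[i].ssrc == ssrc)
                return &table[i];
        return NULL;   /* unknown stream: hand the packet to the control processor */
    }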
Abstract:
In some examples, a protocol accelerator extracts a queue identifier from an incoming packet, for identifying a first buffer queue in which the packet is to be stored for transport layer processing. A packet having an error or other condition that prevents the accelerator from performing that processing is identified. A processor is interrupted. The identified packet is stored in a second buffer queue. The processor performs transport layer processing in response to the interrupt, while the accelerator continues transport layer processing of packets in the first buffer queue. In some examples, a TCP congestion window size is adjusted. A programmable congestion window increment value is provided. The window size is set to an initial value at the beginning of a TCP data transmission. The window size is increased by the increment value when an acknowledgement is received.
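The congestion-window behaviour described in the last four sentences can be sketched in C as a clamped additive increase. The names and the upper bound are assumptions; the abstract specifies only the initial value and the programmable increment applied per acknowledgement.

    #include <stdint.h>

    struct tcp_cwnd {
        uint32_t window;       /* current congestion window size, in bytes */
        uint32_t increment;    /* programmable increment applied per ACK   */
        uint32_t max_window;   /* assumed upper clamp for the sketch       */
    };

    void cwnd_init(struct tcp_cwnd *c, uint32_t initial,
                   uint32_t increment, uint32_t max_window)
    {
        c->window = initial;          /* set at the start of a TCP transmission */
        c->increment = increment;
        c->max_window = max_window;
    }

    /* Called each time an acknowledgement is received for outstanding data. */
    void cwnd_on_ack(struct tcp_cwnd *c)
    {
        uint32_t headroom = c->max_window - c->window;
        c->window += (c->increment < headroom) ? c->increment : headroom;
    }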
Abstract:
A scheduler for shared network resources implements a plurality of user-selectable data scheduling schemes within a single hardware device. The schemes include strict priority, priority for one class plus smooth deficit weighted round robin for the other classes, bandwidth-limited strict priority, and smooth deficit weighted round robin for all user classes. The network operator selects one of the four schemes by enabling or disabling certain bits in the hardware device.
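A hedged C sketch of how such scheme-selection bits might be decoded follows. The bit assignments and enum names are assumptions, since the abstract does not specify the register layout, only that certain bits select among the four schemes.

    #include <stdint.h>

    enum sched_scheme {
        SCHED_STRICT_PRIORITY,            /* strict priority for all classes        */
        SCHED_PRIORITY_PLUS_SDWRR,        /* priority for one class, SDWRR for rest */
        SCHED_BW_LIMITED_STRICT_PRIORITY, /* bandwidth-limited strict priority      */
        SCHED_SDWRR_ALL                   /* smooth deficit weighted round robin    */
    };

    #define CFG_SDWRR_ENABLE   (1u << 0)  /* assumed bit: enable SDWRR                   */
    #define CFG_PRIORITY_CLASS (1u << 1)  /* assumed bit: keep one strict-priority class */
    #define CFG_BW_LIMIT       (1u << 2)  /* assumed bit: apply bandwidth limiting       */

    enum sched_scheme decode_scheme(uint32_t cfg)
    {
        if (!(cfg & CFG_SDWRR_ENABLE))
            return (cfg & CFG_BW_LIMIT) ? SCHED_BW_LIMITED_STRICT_PRIORITY
                                        : SCHED_STRICT_PRIORITY;
        return (cfg & CFG_PRIORITY_CLASS) ? SCHED_PRIORITY_PLUS_SDWRR
                                          : SCHED_SDWRR_ALL;
    }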
Abstract:
An apparatus and method are provided for extracting connection information from a traffic header in a communications network. The apparatus includes a first storage element containing a first look-up table for determining a first data packet header offset and a first data size for extracting a communications protocol type from the header, and a second storage element containing a second look-up table for determining, from the communications protocol type, a second data packet header offset and a second data size for extracting a connection address from the header. The storage elements may be in the form of content-addressable memories. Exception handling and hardware initialization can be controlled by a system processor.
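The two-stage lookup can be sketched in C with plain tables standing in for the content-addressable memories. The example offsets assume untagged Ethernet/IPv4 framing, and all names and values are illustrative rather than taken from the disclosure.

    #include <stdint.h>
    #include <string.h>

    struct field_rule {
        uint16_t offset;   /* byte offset into the packet header */
        uint16_t size;     /* field size in bytes                */
    };

    /* Stage 1: where to find the protocol type (assumed IPv4 protocol byte). */
    static const struct field_rule proto_rule = { 23, 1 };

    /* Stage 2: where to find the connection address, keyed by protocol type. */
    static struct field_rule addr_rule_for(uint8_t proto)
    {
        switch (proto) {
        case 6:  return (struct field_rule){ 26, 8 };  /* assumed: TCP src/dst IPs */
        case 17: return (struct field_rule){ 26, 8 };  /* assumed: UDP src/dst IPs */
        default: return (struct field_rule){ 0, 0 };   /* exception case            */
        }
    }

    /* Extract the connection address; returns the number of bytes copied. */
    int extract_connection(const uint8_t *pkt, uint8_t *out)
    {
        uint8_t proto = pkt[proto_rule.offset];       /* stage 1: protocol type */
        struct field_rule r = addr_rule_for(proto);   /* stage 2: address rule  */
        if (r.size == 0)
            return 0;                      /* exception handled by system processor */
        memcpy(out, pkt + r.offset, r.size);
        return r.size;
    }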
Abstract:
In one embodiment, the present invention is a method for performing incremental preamble detection in a wireless communication network. The method processes non-overlapping chunks of incoming antenna data, where each chunk is smaller than the preamble length, to detect the signature of the transmitted preamble. For each chunk processed, chips of the chunk are correlated with possible signatures employed by the wireless network to update a set of correlation profiles, each profile comprising a plurality of profile values. Further, an intermediate detection is performed by comparing the updated profile values to an intermediate threshold that is also updated for each chunk. Upon receiving the final chunk, the correlation profiles are updated, and a final preamble detection is made by comparing the updated profile values to a final threshold. Detections are performed on an incremental basis to meet latency requirements of the wireless network.
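An incremental accumulation step of this kind might look like the C sketch below. It assumes real-valued chips, that the caller supplies the segment of each candidate signature aligned with the current chunk, and a threshold that scales linearly with the number of chips processed so far; none of these details are specified by the abstract.

    #include <stddef.h>

    #define NUM_SIGNATURES 16

    void process_chunk(const double *chunk, size_t chunk_len,
                       const double *const sig_segment[NUM_SIGNATURES],
                       double profile[NUM_SIGNATURES],
                       size_t *chips_seen,
                       double per_chip_threshold,
                       int detected[NUM_SIGNATURES])
    {
        /* Correlate this chunk with each signature and update the profiles. */
        for (int s = 0; s < NUM_SIGNATURES; s++) {
            double corr = 0.0;
            for (size_t i = 0; i < chunk_len; i++)
                corr += chunk[i] * sig_segment[s][i];
            profile[s] += corr;
        }
        *chips_seen += chunk_len;

        /* Intermediate (or, after the final chunk, final) detection against a
         * threshold that grows with the number of chips processed so far. */
        double threshold = per_chip_threshold * (double)*chips_seen;
        for (int s = 0; s < NUM_SIGNATURES; s++)
            detected[s] = (profile[s] >= threshold);
    }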
Abstract:
A digital signal processor is provided having an instruction set with an x^K function that uses a reduced look-up table. The disclosed digital signal processor evaluates an x^K function for an input value, x, by computing Log(x) in hardware; multiplying the Log(x) value by K; and determining the x^K function by applying an exponential function in hardware to a result of the multiplying step. One or more of the computation of Log(x) and the exponential function employ at least one look-up table having entries with a fewer number of bits than a number of bits in the input value, x.
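The evaluation path rests on the identity x^K = 2^(K * log2(x)), so a table-assisted logarithm and a table-assisted exponential are sufficient. A small C sketch follows, with the standard-library log2() and exp2() standing in for the hardware, table-driven units; that substitution, and the function name, are assumptions of the sketch.

    #include <math.h>
    #include <stdio.h>

    /* Compute x^K as exp2(K * log2(x)); log2()/exp2() stand in for the
     * hardware units that would use reduced look-up tables. */
    double pow_via_log_exp(double x, double k)
    {
        double lg = log2(x);      /* hardware step: table-assisted logarithm   */
        double scaled = k * lg;   /* multiply the logarithm by the exponent K  */
        return exp2(scaled);      /* hardware step: table-assisted exponential */
    }

    int main(void)
    {
        printf("2.0^10 = %f\n", pow_via_log_exp(2.0, 10.0));   /* ~1024 */
        return 0;
    }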