Abstract:
An application programming interface (API) implements and manages isochronous and asynchronous data transfer operations between an application and a bus structure. During an asynchronous transfer, the API can transfer any amount of data between one or more local data buffers within the application and a range of addresses over the bus structure using one or more asynchronous transactions. An automatic transaction generator may be used to automatically generate the transactions necessary to complete the data transfer. The API also supports isochronous transfer of data between the application and another node on the bus structure over a dedicated channel. During an isochronous data transfer, a buffer management scheme manages a linked list of data buffer descriptors. This descriptor list can form a circular list of buffers, with each buffer holding a forward pointer to the next buffer in the list and a backward pointer to the previous buffer. Alternatively, the descriptor list may form a linear list to which the application can append additional buffers or from which it can remove existing buffers. During isochronous transfers, the API supports a resynchronisation event in the stream of data, allowing the application to resynchronise to a specific point within the data. A callback routine can also be provided for each buffer in the list, calling the application at a predetermined point during the transfer of data.
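A minimal C sketch of the doubly linked buffer-descriptor list with per-buffer callbacks described above; all identifiers (buf_desc, buf_callback_t, list_append) are illustrative and not taken from the patent.

    #include <stddef.h>
    #include <stdint.h>

    struct buf_desc;                         /* forward declaration */
    typedef void (*buf_callback_t)(struct buf_desc *done, void *app_ctx);

    /* One descriptor per application data buffer. */
    struct buf_desc {
        struct buf_desc *next;               /* forward pointer */
        struct buf_desc *prev;               /* backward pointer */
        uint8_t         *data;               /* application buffer */
        size_t           len;
        buf_callback_t   callback;           /* called when this buffer completes */
        void            *app_ctx;
    };

    /* Append a descriptor to a circular list; 'head' may be NULL for an empty
     * list.  Returns the (possibly new) head. */
    static struct buf_desc *list_append(struct buf_desc *head, struct buf_desc *d)
    {
        if (head == NULL) {                  /* first element points at itself */
            d->next = d->prev = d;
            return d;
        }
        struct buf_desc *tail = head->prev;
        tail->next = d;
        d->prev    = tail;
        d->next    = head;
        head->prev = d;
        return head;
    }

    /* Called by the transfer engine when a buffer has been filled or emptied. */
    static void buffer_completed(struct buf_desc *d)
    {
        if (d->callback)
            d->callback(d, d->app_ctx);      /* notify the application */
    }

The same descriptor layout also covers the linear-list case: the application simply leaves the ends unlinked instead of closing the circle.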
Abstract:
This invention relates to the design of an efficient buffer management model that increases the efficiency of data exchange between two process threads, e.g. when implementing a network transport protocol stack. The invention proposes an interconnected system of different kinds of memory buffers (100, 101, 102), implemented as asynchronous read/write ring buffers (ARWRB). These buffers are organized so that data can be stored into or fetched from a buffer while essentially avoiding synchronization means such as mutexes or semaphores. In contrast to the conventional buffer management model, three ring buffers, namely a send ring buffer (100), a send token ring buffer (101) and a receive ring buffer (102), are used within the transport protocol stack.
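A sketch of one such ring, assuming a single-producer/single-consumer discipline and C11 atomics as the mechanism that avoids mutexes and semaphores; the patent does not prescribe this particular implementation, and the names (arwrb, arwrb_put, arwrb_get) are invented for illustration.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define RING_SIZE 1024                   /* must be a power of two */

    struct arwrb {
        uint8_t        slot[RING_SIZE];
        _Atomic size_t head;                 /* written only by the producer */
        _Atomic size_t tail;                 /* written only by the consumer */
    };

    /* Producer side: returns false if the ring is full. */
    static bool arwrb_put(struct arwrb *r, uint8_t byte)
    {
        size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
        size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
        if (head - tail == RING_SIZE)
            return false;                    /* full */
        r->slot[head & (RING_SIZE - 1)] = byte;
        atomic_store_explicit(&r->head, head + 1, memory_order_release);
        return true;
    }

    /* Consumer side: returns false if the ring is empty. */
    static bool arwrb_get(struct arwrb *r, uint8_t *out)
    {
        size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
        size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
        if (head == tail)
            return false;                    /* empty */
        *out = r->slot[tail & (RING_SIZE - 1)];
        atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
        return true;
    }

One instance of this structure would stand in for each of the send ring buffer (100), send token ring buffer (101) and receive ring buffer (102).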
Abstract:
A method of ensuring that data sent to a handheld wireless communications device is written to non-volatile memory is disclosed. In a device where data is initially written to a first volatile memory and then written to a second volatile memory before being written from the second volatile memory to a non-volatile memory, software code is implemented that causes the data to be written to non-volatile memory concurrently with the writing of the data to the second volatile memory. The software code may incorporate commands of an operating system such as Windows.
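An illustrative sketch only: the helper functions write_to_ram_cache() and write_to_flash() are hypothetical stand-ins for the device's second volatile memory and non-volatile store; the abstract mentions operating system commands but names no specific API.

    #include <stddef.h>

    extern void write_to_ram_cache(const void *data, size_t len);  /* 2nd volatile memory */
    extern void write_to_flash(const void *data, size_t len);      /* non-volatile memory */

    /* Persist incoming data at the same point it is cached, instead of
     * deferring the non-volatile write to a later flush. */
    void store_incoming(const void *data, size_t len)
    {
        write_to_ram_cache(data, len);
        write_to_flash(data, len);           /* concurrent with the cache write */
    }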
Abstract:
Roughly described, a network interface device receiving data packets from a computing device for transmission onto a network, the data packets having a certain characteristic, transmits a packet only if the sending queue has authority to send packets having that characteristic. The data packet characteristics can include, for example, the transport protocol number, source and destination port numbers, and source and destination IP addresses. Authorizations can be programmed into the NIC by a kernel routine upon establishment of the transmit queue, based on the privilege level of the process for which the queue is being established. In this way, a user process can use an untrusted user-level protocol stack to initiate data transmission onto the network, while the NIC protects the remainder of the system or network from certain kinds of compromise.
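A sketch of the per-transmit-queue authorization check described above; the field and function names are invented for illustration and are not the NIC's actual interface.

    #include <stdbool.h>
    #include <stdint.h>

    struct tx_authorization {                /* programmed by the kernel at queue setup */
        uint8_t  ip_protocol;                /* e.g. 6 = TCP, 17 = UDP */
        uint16_t src_port;                   /* 0 = any source port */
        uint32_t src_ip;                     /* source address the queue may use */
    };

    struct packet_hdr {                      /* fields parsed from the outgoing packet */
        uint8_t  ip_protocol;
        uint16_t src_port;
        uint32_t src_ip;
    };

    /* NIC-side gate: transmit only packets matching the queue's authorization. */
    static bool may_transmit(const struct tx_authorization *auth,
                             const struct packet_hdr *pkt)
    {
        if (pkt->ip_protocol != auth->ip_protocol)
            return false;
        if (auth->src_port != 0 && pkt->src_port != auth->src_port)
            return false;
        if (pkt->src_ip != auth->src_ip)
            return false;
        return true;                         /* queue is authorized for this packet */
    }

Packets failing the check are simply not transmitted, so a misbehaving user-level stack cannot inject traffic outside its authorized characteristics.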
Abstract:
A system and method for data transmission comprise at least one wired connection (4) between a server (1), operating with the TCP/IP protocol, and an exchange (3), and at least one mono-directional radio connection (5) between the exchange (3) and a client (2), also operating with the TCP/IP protocol. On transmission loss of a data packet over the mono-directional radio connection (5), a temporary additional connection (24) between the client (2) and the exchange (3) is established at the IP level for a repeated transmission of the lost data packet. Transmission losses can thus be recognised directly and a repeat transmission initiated. When transmission is error-free, the system introduces no additional run time for data buffering.
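A sketch of the client-side recovery path, assuming sequence-numbered packets; open_temporary_connection(), request_retransmit() and close_connection() are hypothetical helpers standing in for the temporary IP-level connection (24), not functions defined by the patent.

    #include <stdint.h>

    extern void *open_temporary_connection(void);            /* IP-level link to the exchange */
    extern void  request_retransmit(void *conn, uint32_t seq);
    extern void  close_connection(void *conn);

    /* Called for every packet received on the mono-directional radio link. */
    void on_packet(uint32_t seq, uint32_t *expected_seq)
    {
        if (seq > *expected_seq) {           /* gap in sequence => packets were lost */
            void *conn = open_temporary_connection();
            for (uint32_t s = *expected_seq; s < seq; s++)
                request_retransmit(conn, s); /* ask the exchange to resend each lost packet */
            close_connection(conn);
        }
        *expected_seq = seq + 1;             /* next packet we expect */
    }

When no packets are lost, the gap check never fires and no extra connection or buffering delay is introduced, matching the error-free case described above.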
Abstract:
A method manages a free list and a ring data structure, which may be used to store journaling information, by storing and modifying information describing the structure of the free list or ring data structure in a cache memory that may also be used to store information describing the structure of a queue of buffers.
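A minimal sketch of keeping the descriptors of a free list and a ring in one small cached structure, so that updates touch only that cached description; all names and field layouts are illustrative, not taken from the patent.

    #include <stdint.h>

    struct ring_desc {                       /* cached description of a ring of entries */
        uint32_t base;                       /* base address of the ring in memory */
        uint32_t head;                       /* index of next entry to remove */
        uint32_t tail;                       /* index of next free slot */
        uint32_t count;                      /* entries currently in the ring */
        uint32_t size;                       /* total slots */
    };

    struct freelist_desc {                   /* cached description of the buffer free list */
        uint32_t head_buffer;                /* address of first free buffer */
        uint32_t count;
    };

    /* Both descriptors share one cache entry, so managing the free list and
     * the journaling ring modifies the same cached structure used for queues
     * of buffers. */
    struct queue_cache_entry {
        struct freelist_desc freelist;
        struct ring_desc     ring;
    };

    /* Enqueue a journaling record by updating only the cached ring descriptor;
     * the write of the record itself into ring memory is elided here. */
    static int ring_enqueue(struct ring_desc *r)
    {
        if (r->count == r->size)
            return -1;                       /* ring full */
        r->tail = (r->tail + 1) % r->size;
        r->count++;
        return 0;
    }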
Abstract:
A packet multiplexing apparatus is presented for multiplexing packets to be transmitted from a number of user facilities to a local service node in such a way as to assure equal access to one output port for all the users. The apparatus is provided with input ports (1-0~1-(N-1)) for inputting a packet at a respective input port; buffer memories (3-0~3-(N-1)), provided for each input port, for temporary storage of a packet; an output signal transmission circuit (4) for retrieving a packet from each buffer memory in a specific sequence; an output port (2) for transmitting packets output from the output signal transmission circuit; and a retrieval sequencing section (10 or 20 or 30) for controlling the specific sequence by changing the retrieving order of packets from the buffer memories for each complete round of packet retrieval, so that each input port occupies each position in the retrieving order with uniform frequency.
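A sketch of one retrieval round that rotates the starting port on every complete round, so each input port occupies each position in the order equally often; buffer_has_packet() and transmit_from_buffer() are hypothetical stand-ins for the buffer memories and the output signal transmission circuit.

    #define NUM_PORTS 4

    extern int  buffer_has_packet(int port);       /* hypothetical buffer query   */
    extern void transmit_from_buffer(int port);    /* hypothetical output circuit */

    /* 'round' is incremented after each complete round of packet retrieval. */
    void retrieval_round(unsigned round)
    {
        for (int i = 0; i < NUM_PORTS; i++) {
            int port = (int)((round + i) % NUM_PORTS);  /* rotated retrieving order */
            if (buffer_has_packet(port))
                transmit_from_buffer(port);
        }
    }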
Abstract:
A circular buffer, i.e., a chain of buffers forming a circle, is provided for managing packet loss detection in Internet streaming. The detection latency is determined by the size of the buffer chain, which can be dynamically adapted to network conditions and application requirements. The present invention can achieve reasonable detection accuracy.
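A sketch of loss detection with such a chain, assuming sequence-numbered stream packets; a packet is declared lost once the stream has advanced a full chain length past it, so the chain length directly sets the detection latency. The names and the fixed chain length are illustrative.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CHAIN_LEN 32                     /* detection latency = CHAIN_LEN packets */

    static uint32_t slot_seq[CHAIN_LEN];     /* sequence number held by each buffer */
    static uint32_t next_expected = 1;       /* first sequence number not yet decided */

    /* Record an arriving packet in the circular chain of buffers. */
    void on_packet_arrival(uint32_t seq)
    {
        slot_seq[seq % CHAIN_LEN] = seq;

        /* Once the stream is CHAIN_LEN packets ahead of 'next_expected', its
         * slot can no longer be filled late: decide lost or received. */
        while (seq >= next_expected + CHAIN_LEN) {
            if (slot_seq[next_expected % CHAIN_LEN] != next_expected)
                printf("packet %" PRIu32 " lost\n", next_expected);
            next_expected++;
        }
    }

Adapting CHAIN_LEN at run time trades detection latency against tolerance of reordering, as the abstract suggests.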
Abstract:
A method and system for routing network-based data arranged in frames is disclosed. A host processor analyzes transferred bursts of data and initiates an address lookup algorithm for dispatching the frame to a desired destination. A shared system memory between the host processor and a network device, e.g., an HDLC controller, receives data, including any preselected address fields. The network device includes a plurality of ports. Each port includes a FIFO receive memory for receiving at least a first portion of a frame. The first portion of the frame includes data having the preselected address fields. A direct memory access unit transfers a burst of data from the FIFO receive memory to the shared system memory. A communications processor selects the amount of data to be transferred from the FIFO receive memory based on the address fields to be analyzed by the host processor.
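A sketch of transferring only the first, address-bearing portion of a frame from a port's receive FIFO into shared memory for the host's lookup; dma_copy(), the struct layout and ADDR_FIELD_BYTES are illustrative and not the HDLC controller's real interface.

    #include <stddef.h>
    #include <stdint.h>

    #define ADDR_FIELD_BYTES 16              /* portion carrying the preselected address fields */

    extern void dma_copy(void *dst, const volatile void *src, size_t len);

    struct port {
        volatile uint8_t *fifo;              /* per-port FIFO receive memory */
        uint8_t          *shared_mem;        /* shared system memory region  */
    };

    /* Communications processor: burst the address-bearing first portion of the
     * frame to shared memory so the host processor can run its address lookup
     * and dispatch the frame. */
    void burst_first_portion(struct port *p)
    {
        dma_copy(p->shared_mem, p->fifo, ADDR_FIELD_BYTES);
    }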
Abstract:
A variable-size jitter buffer is used to store information associated with a voice signal, facsimile signal or other received signal in a receiver of a packet-based communication system. The receiver determines an appropriate adjustment time for making an adjustment to the size of the buffer based at least in part on a result of a signal detection operation performed on the received signal. For example, in the case of a received voice signal, the determined adjustment time may be a time at which a state machine associated with a speech detector is in a "no speech" state. If the actual buffer size at the determined adjustment time is not within a designated range of a target computed at least in part based on one or more jitter measurements, the buffer size is adjusted at the determined adjustment time, e.g., by an amount representative of the difference between the actual buffer size and the target. The invention provides low-delay and low-complexity jitter buffering particularly well suited for use in an Internet Protocol (IP) receiver of a voice-over-IP system.
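A sketch of the adjustment decision, assuming a target size derived from jitter measurements and a tolerance band around it; the names, the packet-count units and the tolerance parameter are illustrative.

    enum speech_state { SPEECH, NO_SPEECH };

    /* Returns the new buffer size (in packets).  The size is adjusted only when
     * the speech detector's state machine is in the NO_SPEECH state and the
     * actual size falls outside the designated range around the target. */
    int maybe_adjust_jitter_buffer(int current_size, int target_size,
                                   int tolerance, enum speech_state state)
    {
        if (state != NO_SPEECH)
            return current_size;             /* wait for a safe adjustment time */

        int diff = current_size - target_size;
        if (diff > tolerance || diff < -tolerance)
            return current_size - diff;      /* adjust by the size/target difference */
        return current_size;
    }

Deferring the change to a "no speech" interval keeps the adjustment inaudible, which is what makes the scheme low-delay and low-complexity for a voice-over-IP receiver.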