Abstract:
A method for controlling access to computer memory, the method including communicating work queue elements with an application layer and with a verb layer, and indicating completion of the work queue elements, wherein both the application layer and the verb layer are capable of checking if at least one of the work queue elements is completed, independently of each other.
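One way to read this claim is as a completion flag that either layer may poll without coordinating with the other. Below is a minimal, hedged sketch of that idea; the class and method names (`WorkQueue`, `post`, `indicate_completion`, `is_completed`) are invented for illustration and do not come from the patent.

```python
import threading

class WorkQueue:
    """Minimal in-memory work queue whose completions can be polled
    independently by an application layer and a verb layer."""

    def __init__(self):
        self._completed = {}          # wqe_id -> completion flag
        self._lock = threading.Lock()

    def post(self, wqe_id):
        # Communicate a work queue element; initially not completed.
        with self._lock:
            self._completed[wqe_id] = False

    def indicate_completion(self, wqe_id):
        with self._lock:
            self._completed[wqe_id] = True

    def is_completed(self, wqe_id):
        # Both the application layer and the verb layer may call this
        # at any time, independently of one another.
        with self._lock:
            return self._completed[wqe_id]

wq = WorkQueue()
wq.post(1)
app_view = wq.is_completed(1)    # application layer checks before completion
wq.indicate_completion(1)
verb_view = wq.is_completed(1)   # verb layer checks after completion
```

The lock only guards the shared flag table; neither layer ever waits for the other to observe a completion.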
Abstract:
A method for receiving data in a network acceleration architecture for use with TCP (Transmission Control Protocol), iSCSI (Internet Small Computer System Interface) and RDMA (Remote Direct Memory Access) over TCP, including providing a hardware acceleration engine, called a streamer, adapted for communication with and processing data from a consumer application in a system that supports TCP, iSCSI and RDMA over TCP, providing a software protocol processor adapted for carrying out TCP implementation, the software protocol processor being called a TCE (TCP Control Engine), wherein the streamer and the TCE are adapted to operate asynchronously and independently of one another, and receiving an inbound TCP segment with the streamer.
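The split described above — a fast hardware data path (streamer) that hands protocol bookkeeping to a software engine (TCE) running on its own schedule — can be sketched as follows. This is an illustrative model only, assuming an in-memory event queue between the two; `Streamer`, `TCE`, `receive_segment` and `run_once` are names invented for the sketch.

```python
from collections import deque

class TCE:
    """Software TCP protocol processor (TCP Control Engine) sketch:
    drains events queued by the streamer whenever it is scheduled."""
    def __init__(self):
        self.event_queue = deque()   # filled by the streamer
        self.processed = []

    def run_once(self):
        # Runs asynchronously; the streamer never waits for this.
        while self.event_queue:
            self.processed.append(self.event_queue.popleft())

class Streamer:
    """Hardware acceleration engine sketch: receives an inbound TCP
    segment, moves the payload toward the consumer immediately, and
    queues the header for deferred TCP processing by the TCE."""
    def __init__(self, tce):
        self.tce = tce
        self.delivered_payloads = []

    def receive_segment(self, header, payload):
        self.delivered_payloads.append(payload)   # fast data path
        self.tce.event_queue.append(header)       # deferred protocol work

tce = TCE()
streamer = Streamer(tce)
streamer.receive_segment({"seq": 1000, "ack": 1}, b"data-bytes")
# Payload is already delivered; the TCE catches up on its own schedule.
tce.run_once()
```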
Abstract:
A network acceleration architecture for use with TCP, iSCSI and/or RDMA over TCP, including a hardware acceleration engine adapted for communication with and processing data from a consumer application in a system that supports TCP, iSCSI and RDMA over TCP, a software protocol processor adapted for carrying out TCP implementation, and an asynchronous dual-queue interface for exchanging information between the hardware acceleration engine and the software protocol processor, wherein the hardware acceleration engine and the software protocol processor are adapted to operate asynchronously and independently of one another.
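The asynchronous dual-queue interface — one queue per direction, so neither side ever blocks on the other — can be modeled minimally as below. The class and method names are illustrative, not from the patent.

```python
from collections import deque

class DualQueueInterface:
    """Sketch of an asynchronous dual-queue interface: one queue per
    direction between the hardware acceleration engine and the
    software protocol processor."""
    def __init__(self):
        self.event_queue = deque()     # hardware engine -> protocol processor
        self.command_queue = deque()   # protocol processor -> hardware engine

    # Hardware-engine side: post and return immediately.
    def post_event(self, event):
        self.event_queue.append(event)

    # Protocol-processor side: drain whenever it is scheduled.
    def poll_event(self):
        return self.event_queue.popleft() if self.event_queue else None

    def post_command(self, command):
        self.command_queue.append(command)

    def poll_command(self):
        return self.command_queue.popleft() if self.command_queue else None

iface = DualQueueInterface()
iface.post_event(("segment_received", 42))   # engine continues at once
ev = iface.poll_event()                      # processor catches up later
iface.post_command(("advance_window", ev[1]))
cmd = iface.poll_command()
```

Because each direction has its own queue, each side only ever appends to one queue and pops from the other, which is what lets the two run asynchronously and independently.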
Abstract:
A virtualized system including a processing sub-system including a plurality of partitions and operating systems and a virtualization layer, each partition running its own operating system and having been assigned its own partition ID, and an I/O emulation entity connected to the processing sub-system through a bus and connected to a network to which is connected at least one computer that hosts at least one remote I/O peripheral, the I/O emulation entity being adapted to execute an I/O-emulation transaction for any of the operating systems in accordance with that operating system's partition ID.
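The core of the isolation claim is that the I/O emulation entity selects per-partition state by partition ID, so a transaction for one OS can never touch another partition's state. A minimal sketch, with all names (`IOEmulationEntity`, `register_partition`, `emulate`) invented for illustration:

```python
class IOEmulationEntity:
    """Sketch: keeps per-partition emulated-device state keyed by
    partition ID and executes each I/O-emulation transaction only
    against the requesting OS's own state."""
    def __init__(self):
        self._contexts = {}   # partition_id -> per-partition transaction log

    def register_partition(self, partition_id):
        self._contexts[partition_id] = []

    def emulate(self, partition_id, transaction):
        # Executed in accordance with the requesting OS's partition ID;
        # state belonging to other partitions is never reachable here.
        self._contexts[partition_id].append(transaction)
        return len(self._contexts[partition_id])

entity = IOEmulationEntity()
entity.register_partition(1)
entity.register_partition(2)
entity.emulate(1, "write disk block 7")
```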
Abstract:
A method and a network architecture for isolation of the network protocol stack from the operating system are provided. The network architecture may include an IO interface arranged to receive and transfer messages from/to the consumer application. The messages may carry high-level generic network device commands, targeted for execution by a particular protocol layer, to which protocol the messages pertain. The network architecture may further include an isolated network protocol stack arranged to process the high-level commands for execution and further arranged to generate device-specific commands from the high-level commands, and an IO component arranged to execute the device-specific commands.
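The high-level-to-device-specific translation step can be illustrated as below. The translation table, command names, and device-command strings are all hypothetical stand-ins; the patent does not specify any particular command set.

```python
class IsolatedProtocolStack:
    """Sketch: translates high-level generic network device commands
    (addressed to a particular protocol layer) into device-specific
    commands; the IO component only ever sees the device-specific form."""

    # Hypothetical translation table, keyed by (protocol layer, command).
    TRANSLATION = {
        ("ethernet", "set_mtu"): "NIC_REG_WRITE mtu={value}",
        ("ethernet", "enable_promisc"): "NIC_REG_WRITE promisc=1",
    }

    def translate(self, layer, command, **params):
        template = self.TRANSLATION[(layer, command)]
        return template.format(**params)

class IOComponent:
    """Sketch: executes device-specific commands produced by the stack."""
    def __init__(self):
        self.executed = []

    def execute(self, device_cmd):
        self.executed.append(device_cmd)

stack = IsolatedProtocolStack()
io = IOComponent()
io.execute(stack.translate("ethernet", "set_mtu", value=9000))
```

The point of the split is that the consumer application and OS deal only in generic commands, while everything device-specific stays behind the isolated stack.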
Abstract:
A method, system and computer program product that allows a System Image within a multiple System Image Virtual Server to maintain isolation from the other system images while directly exposing a portion, or all, of its associated System Memory to a shared PCI Adapter without the need for each I/O operation to be analyzed and verified by a component trusted by the LPAR manager.
Abstract:
A method, computer program product, and distributed data processing system that allows a single physical I/O adapter to validate that a memory mapped I/O address referenced by an incoming I/O operation is associated with a virtual host that initiated the incoming memory mapped I/O operation is provided. Specifically, the present invention is directed to a mechanism for sharing a PCI family I/O adapter and, in general, any I/O adapter that uses a memory mapped I/O interface for communications. A mechanism is provided that allows a single physical I/O adapter to validate that a memory mapped I/O address referenced by an incoming memory mapped I/O operation used to initiate an I/O transaction is associated with a virtual host that initiated the incoming memory mapped I/O operation.
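The validation mechanism — the adapter checking that an incoming memory mapped I/O address lies inside the MMIO window assigned to the issuing virtual host — can be sketched with a simple per-host range check. The flat base/limit window model and all names here are illustrative assumptions, not the patent's implementation.

```python
class SharedIOAdapter:
    """Sketch: the physical adapter records one MMIO window per
    virtual host and validates that every incoming memory mapped I/O
    address falls inside the window of the host that issued it."""
    def __init__(self):
        self._windows = {}   # virtual_host_id -> (base, limit)

    def assign_window(self, vh_id, base, size):
        self._windows[vh_id] = (base, base + size)

    def validate(self, vh_id, mmio_addr):
        # True only if the address belongs to this virtual host's window.
        base, limit = self._windows[vh_id]
        return base <= mmio_addr < limit

adapter = SharedIOAdapter()
adapter.assign_window("vh0", 0x1000, 0x100)
adapter.assign_window("vh1", 0x2000, 0x100)
```

An operation referencing another host's window (or any unmapped address) fails validation, which is what lets the single physical adapter be shared safely.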
Abstract:
A method, computer program product, and distributed data processing system for directly sharing an I/O adapter that directly supports adapter virtualization and does not require an LPAR manager or other intermediary to be invoked on every I/O transaction is provided. The present invention also provides a method, computer program product, and distributed data processing system for directly creating and initializing a virtual adapter and associated resources on a physical adapter, such as a PCI, PCI-X, or PCI-E adapter. Specifically, the present invention is directed to a mechanism for sharing conventional PCI (Peripheral Component Interconnect) I/O adapters, PCI-X I/O adapters, PCI-Express I/O adapters, and, in general, any I/O adapter that uses a memory mapped I/O interface for communications. A mechanism is provided for directly creating and initializing a virtual adapter and associated resources within a physical adapter, such as a PCI, PCI-X, or PCI-E adapter. Additionally, each virtual adapter has an associated set of host side resources, such as memory addresses and interrupt levels, and adapter side resources, such as adapter memory addresses and processing queues, and each virtual adapter is isolated from accessing the host side resources and adapter resources that belong to another virtual or physical adapter.
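The direct creation of virtual adapters with mutually isolated resources can be illustrated by carving disjoint slices of adapter memory out of the physical adapter, one per virtual adapter. The bump allocator and all names below are invented for the sketch; a real PCI-family adapter would also partition interrupt levels and host-side addresses.

```python
class PhysicalAdapter:
    """Sketch: creates and initializes a virtual adapter directly on
    the physical adapter, giving it a disjoint slice of adapter memory
    and its own processing queue; no virtual adapter can address
    another's slice."""
    def __init__(self, adapter_mem_size):
        self._size = adapter_mem_size
        self._next_free = 0
        self.virtual_adapters = {}

    def create_virtual_adapter(self, va_id, mem_size):
        if self._next_free + mem_size > self._size:
            raise MemoryError("adapter memory exhausted")
        region = (self._next_free, self._next_free + mem_size)
        self._next_free += mem_size
        self.virtual_adapters[va_id] = {"mem": region, "queue": []}
        return region

    def access_allowed(self, va_id, addr):
        # A virtual adapter may only touch its own memory slice.
        base, limit = self.virtual_adapters[va_id]["mem"]
        return base <= addr < limit

pa = PhysicalAdapter(adapter_mem_size=0x1000)
r0 = pa.create_virtual_adapter("va0", 0x400)
r1 = pa.create_virtual_adapter("va1", 0x400)
```

Because creation happens on the adapter itself, no LPAR manager or other intermediary needs to be invoked on the per-transaction path.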
Abstract:
A method of offloading, from a host data processing unit (205), the generation of data corruption-detection digests for iSCSI PDUs to be transmitted as TCP segments over respective TCP connections. An iSCSI layer processing software (310) executed by the host data processing unit provides a command descriptor list (320) containing command descriptors adapted to identify portions of at least one iSCSI PDU to be transmitted, and data corruption-detection digest descriptors (CRC DESC(PDUa); CRC DESC(PDUb)), each one associated with a respective PDU data corruption-detection digest. An iSCSI processing offload engine (223) transmits the iSCSI PDU over the respective TCP connection, based on the descriptors in the command descriptor list; during the transmission, the iSCSI PDU data corruption-detection digests are calculated, and each calculated digest is saved in the corresponding data corruption-detection digest descriptor in the command descriptor list.
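The descriptor-list handshake can be sketched as follows: the host builds data descriptors plus an empty digest descriptor per PDU, and the offload engine fills the digest in as it walks the list. This is an illustrative model; `zlib.crc32` stands in for the CRC32C that iSCSI actually specifies, and the function and field names are invented.

```python
import zlib

def build_descriptor_list(pdus):
    """Host-side sketch: one data descriptor per PDU portion, plus one
    digest descriptor per PDU whose 'digest' field is left for the
    offload engine to fill in."""
    descs = []
    for name, portions in pdus:
        for p in portions:
            descs.append({"type": "data", "pdu": name, "bytes": p})
        descs.append({"type": "crc", "pdu": name, "digest": None})
    return descs

def offload_engine_transmit(descs):
    """Offload-engine sketch: walks the descriptor list, accumulating
    the digest over each PDU's portions during 'transmission', then
    writes the result back into that PDU's digest descriptor.
    (zlib.crc32 is a stand-in for iSCSI's CRC32C.)"""
    crc = 0
    for d in descs:
        if d["type"] == "data":
            crc = zlib.crc32(d["bytes"], crc)   # incremental digest
        else:
            d["digest"] = crc   # save into the digest descriptor
            crc = 0             # next PDU starts a fresh digest

descs = build_descriptor_list([("PDUa", [b"hdr", b"payload"]),
                               ("PDUb", [b"hdr2"])])
offload_engine_transmit(descs)
```

The incremental `crc32(data, running_crc)` call mirrors how the engine can compute the digest on the fly during transmission instead of making a separate pass over the PDU.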