Abstract:
Provided are techniques for writing doorbell information. In accordance with certain techniques, one or more protection domains are created. One or more data structures are created, wherein each of the data structures is associated with at least one protection domain. One of the data structures is updated. A doorbell structure address for a doorbell structure associated with the updated data structure is computed. Doorbell information is written at the computed doorbell structure address. In accordance with certain other techniques, doorbell information is received. A doorbell structure address is decoded from the doorbell information. A first protection domain identifier is determined from the doorbell structure address. A resource context of a data structure is determined from the doorbell information. The resource context at the doorbell structure address is read to determine a second protection domain identifier. The first protection domain identifier and the second protection domain identifier are compared to determine whether to update the resource context of the doorbell structure. Other embodiments are described and claimed.
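The doorbell flow above can be sketched in C. All names here are illustrative assumptions, not the patented layout: the protection-domain (PD) identifier is assumed to be folded into a page-sized offset of the doorbell address, and doorbell structures are assumed to sit at a fixed stride.

```c
#include <stdint.h>

#define DB_PAGE_SHIFT 12u   /* assumed: one doorbell page per protection domain */
#define DB_STRIDE     8u    /* assumed: bytes per doorbell structure */

/* Sender side: compute the doorbell structure address for a queue. */
static inline uint64_t db_addr(uint64_t base, uint32_t pd_id, uint32_t queue_id)
{
    return base + ((uint64_t)pd_id << DB_PAGE_SHIFT) + (uint64_t)queue_id * DB_STRIDE;
}

/* Receiver side: decode the first PD identifier from the address alone. */
static inline uint32_t db_decode_pd(uint64_t base, uint64_t addr)
{
    return (uint32_t)((addr - base) >> DB_PAGE_SHIFT);
}

/* A resource context records the PD its data structure was created under. */
struct resource_ctx {
    uint32_t pd_id;
    uint32_t tail;   /* producer index the doorbell is advertising */
};

/* Compare the PD decoded from the address (first identifier) with the PD
 * stored in the resource context (second identifier); update only when
 * they match. Returns 1 when the context was updated. */
static int db_process(uint64_t base, uint64_t addr,
                      struct resource_ctx *ctx, uint32_t new_tail)
{
    if (db_decode_pd(base, addr) != ctx->pd_id)
        return 0;   /* protection-domain mismatch: drop the doorbell */
    ctx->tail = new_tail;
    return 1;
}
```

The PD check is what keeps one process from ringing a doorbell into another process's queue: a forged address decodes to the wrong domain and the write is ignored.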
Abstract:
In one embodiment, a method is provided. The method of this embodiment provides determining whether a management queue can be created and, if so, allocating virtually contiguous memory to a management queue associated with a device, registering the management queue, and creating a management queue context.
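A minimal sketch of that sequence, with hypothetical names throughout: `calloc()` stands in for a virtually contiguous allocator (e.g. a kernel `vmalloc()`), and "registering" is reduced to a flag rather than a real device handshake.

```c
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical management queue context created at the end of the flow. */
struct mq_context {
    void    *ring;       /* virtually contiguous entry memory */
    uint32_t entries;
    int      registered; /* stand-in for handing the region to the device */
};

/* Step 1: the device advertises how many management queues it supports;
 * creation fails once that count is exhausted. */
static int mq_can_create(uint32_t existing, uint32_t supported)
{
    return existing < supported;
}

/* Steps 2-4: allocate, register, and build the context.
 * Returns NULL when a management queue cannot be created. */
static struct mq_context *mq_create(uint32_t existing, uint32_t supported,
                                    uint32_t entries, size_t entry_size)
{
    struct mq_context *ctx;

    if (!mq_can_create(existing, supported))
        return NULL;
    ctx = calloc(1, sizeof(*ctx));
    if (!ctx)
        return NULL;
    ctx->ring = calloc(entries, entry_size);
    if (!ctx->ring) {
        free(ctx);
        return NULL;
    }
    ctx->entries = entries;
    ctx->registered = 1;
    return ctx;
}
```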
Abstract:
Provided are techniques for interrupt processing. An Input/Output device determines that an event has occurred. The Input/Output device determines a state of an event data structure. The Input/Output device writes an event entry into the event data structure in response to determining that the event has occurred. After writing the event entry, the Input/Output device determines whether to generate an interrupt or not based on the state of the event data structure. Additionally provided are techniques for interrupt processing in which an I/O device driver determines that an interrupt has occurred. The I/O device driver reads an event entry in an event data structure in response to determining that the interrupt has occurred. The I/O device driver updates a state of a structure state indicator to enable/disable interrupts.
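Both halves of that abstract can be sketched together. The armed/unarmed state indicator and the one-interrupt-per-arming policy are assumptions chosen to illustrate the write-entry-then-check-state ordering; they are not claimed details.

```c
#include <stdint.h>

/* Structure state indicator: set by the driver, read by the device. */
enum eq_state { EQ_ARMED, EQ_UNARMED };

struct event_queue {
    uint32_t      entries[16];
    uint32_t      tail;   /* device-owned producer index */
    enum eq_state state;
};

/* Device side: write the event entry first, then decide whether to raise
 * an interrupt based on the state the driver left behind.
 * Returns 1 when an interrupt would be generated. */
static int device_post_event(struct event_queue *eq, uint32_t event)
{
    eq->entries[eq->tail++ % 16] = event;
    if (eq->state == EQ_ARMED) {
        eq->state = EQ_UNARMED;  /* further events coalesce, no new interrupt */
        return 1;
    }
    return 0;
}

/* Driver side: after an interrupt, read event entries until the queue is
 * drained, then update the state indicator to re-enable interrupts.
 * Returns the new head index. */
static uint32_t driver_handle_irq(struct event_queue *eq, uint32_t head)
{
    while (head != eq->tail)
        head++;              /* process eq->entries[head % 16] here */
    eq->state = EQ_ARMED;    /* enable interrupts again */
    return head;
}
```

Leaving the indicator unarmed is how the driver disables interrupts while it polls; re-arming hands interrupt generation back to the device.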
Abstract:
Provided are a method, system, and program for managing memory options for a device such as an I/O device. Private addresses provided by logic blocks within the device may be transparently routed to either an optional external memory or to system memory, depending upon the memory to which the private address has been mapped.
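A table-driven routing step of that kind might look as follows; the region map layout and function names are assumptions for illustration.

```c
#include <stdint.h>
#include <stddef.h>

enum mem_target { MEM_SYSTEM, MEM_EXTERNAL };

/* Each private-address region is mapped at configuration time to either
 * optional external memory or host system memory. */
struct region_map {
    uint64_t        priv_base;
    uint64_t        size;
    uint64_t        phys_base;
    enum mem_target target;
};

/* Route a private address from an internal logic block to whichever
 * memory it was mapped to; the logic block never sees the difference.
 * Returns 0 on success, -1 for an unmapped private address. */
static int route_private(const struct region_map *map, size_t n,
                         uint64_t priv, enum mem_target *tgt, uint64_t *phys)
{
    for (size_t i = 0; i < n; i++) {
        if (priv >= map[i].priv_base &&
            priv <  map[i].priv_base + map[i].size) {
            *tgt  = map[i].target;
            *phys = map[i].phys_base + (priv - map[i].priv_base);
            return 0;
        }
    }
    return -1;
}
```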
Abstract:
Provided are a method, system, and program for updating a cache in which, in one aspect of the description provided herein, changes to data structure entries in the cache are selectively written back to the source data structure table maintained in the host memory. In one embodiment, translation and protection table (TPT) contents of an identified cache entry are written to a source TPT in host memory as a function of an identified state transition of the cache entry in connection with a memory operation and the memory operation. Other embodiments are described and claimed.
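The selective write-back can be reduced to a small state machine. A single clean/dirty transition stands in here for the richer set of state transitions the abstract alludes to; names are illustrative.

```c
#include <stdint.h>

enum tpt_state { TPT_CLEAN, TPT_DIRTY };

/* One cached translation-and-protection-table (TPT) entry. */
struct tpt_entry {
    uint32_t      key;          /* index into the source TPT */
    uint64_t      translation;
    enum tpt_state state;
};

/* A memory operation modifies the cached entry: clean -> dirty. */
static void cache_update(struct tpt_entry *cached, uint64_t new_translation)
{
    cached->translation = new_translation;
    cached->state = TPT_DIRTY;
}

/* Write back selectively: only an entry whose state transitioned to dirty
 * is copied to the source TPT in host memory; clean entries generate no
 * host traffic. Returns 1 when a write-back occurred. */
static int cache_writeback(struct tpt_entry *cached, uint64_t *source_tpt)
{
    if (cached->state != TPT_DIRTY)
        return 0;
    source_tpt[cached->key] = cached->translation;
    cached->state = TPT_CLEAN;
    return 1;
}
```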
Abstract:
Provided are techniques for interrupt processing. An Input/Output device determines that an event has occurred. The Input/Output device determines a processor identifier and determines an event data structure identifier for an event data structure into which data for the event is stored using the processor identifier. The Input/Output device also determines a vector identifier for an interrupt message vector into which an interrupt message for the event is written. Then, interrupt message data is written to the interrupt message vector to generate an interrupt.
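The processor-to-queue-to-vector lookup chain reads naturally as two small tables. The per-CPU layout and vector numbers below are assumptions, loosely in the style of MSI-X, not values from the source.

```c
#include <stdint.h>

#define NCPU 4

/* Hypothetical per-processor tables: each CPU gets its own event data
 * structure and its own interrupt-message vector. */
static const uint32_t eq_for_cpu[NCPU]    = { 0, 1, 2, 3 };
static const uint32_t vector_for_eq[NCPU] = { 32, 33, 34, 35 };

struct irq_plan {
    uint32_t eq_id;   /* event data structure the event entry goes into  */
    uint32_t vector;  /* vector the interrupt message is written to      */
};

/* Device side: from the processor identifier, determine the event data
 * structure identifier, then the vector identifier for the interrupt
 * message that will be written to generate the interrupt. */
static struct irq_plan plan_interrupt(uint32_t cpu)
{
    struct irq_plan p;
    p.eq_id  = eq_for_cpu[cpu % NCPU];
    p.vector = vector_for_eq[p.eq_id];
    return p;
}
```

Steering the event data and the interrupt to the same processor keeps the completion handling cache-warm on the CPU that will consume it.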
Abstract:
A method and system for transmitting packets. Packets may be transmitted when a protocol control block is copied from a host processing system to a network protocol offload engine. Message information that contains packet payload addresses may be provided to the network protocol offload engine to generate a plurality of message contexts in the offload engine. With the message contexts, protocol processing may be performed at the offload engine while leaving the packet payload in the host memory. Thus, packet payloads may be transmitted directly from the host memory to a network communication link during transmission of the packets by the offload engine. Other embodiments are also described.
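The key property above is that the message context holds only host payload addresses, never payload bytes. A toy transmit path under that assumption (the 4-byte header and names are invented for illustration):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Message context built on the offload engine from message information:
 * the payload stays in host memory; only its address and length move. */
struct msg_ctx {
    const uint8_t *host_payload;  /* payload address in host memory */
    size_t         len;
    size_t         sent;
};

/* Build one packet: a protocol header produced by the engine, plus a
 * segment copied straight from "host memory" onto the wire, with no
 * intermediate payload buffer on the engine. Returns bytes emitted. */
static size_t engine_tx(struct msg_ctx *m, uint8_t *wire,
                        size_t mss, uint32_t seq)
{
    size_t remaining = m->len - m->sent;
    size_t seg = remaining < mss ? remaining : mss;

    memcpy(wire, &seq, sizeof(seq));                  /* toy 4-byte header */
    memcpy(wire + 4, m->host_payload + m->sent, seg); /* direct from host  */
    m->sent += seg;
    return 4 + seg;
}
```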
Abstract:
In one embodiment, a method is provided. The method of this embodiment provides storing a packet header at a set of at least one page of memory allocated to storing packet headers, and storing the packet header and a packet payload at a location not in the set of at least one page of memory allocated to storing packet headers.
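This is a header-splitting arrangement: headers are concentrated in a dedicated page set while full packets land elsewhere. A sketch under assumed pool sizes and slot layout:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define HDR_POOL_SIZE 4096  /* page(s) allocated to storing headers only */
#define HDR_SLOT      128   /* assumed fixed header slot size            */

static uint8_t hdr_pool[HDR_POOL_SIZE]; /* the header-only page set       */
static uint8_t pkt_bufs[8][2048];       /* locations outside that set     */

/* Store the packet header in the header page set, and the packet header
 * plus payload together at a location not in that set. Packing many
 * headers per page lets the protocol stack parse them with few fetches. */
static void rx_split(const uint8_t *pkt, size_t len,
                     size_t hdr_len, unsigned slot)
{
    memcpy(&hdr_pool[slot * HDR_SLOT], pkt, hdr_len);
    memcpy(pkt_bufs[slot], pkt, len);
}
```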
Abstract:
Provided are a method, system, and program for caching a virtualized data structure table. In one embodiment, an input/output (I/O) device has a cache subsystem for a data structure table which has been virtualized. As a consequence, the data structure table cache may be addressed using a virtual address or index. For example, a network adapter may maintain an address translation and protection table (TPT) which has virtually contiguous data structures but not necessarily physically contiguous data structures in system memory. TPT entries may be stored in a cache and addressed using a virtual address or index. Mapping tables may be stored in the cache as well and addressed using a virtual address or index.
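Because the TPT is virtually contiguous, a cache in front of it can be tagged by virtual index alone. A minimal fully associative sketch, with an assumed way count and no replacement policy:

```c
#include <stdint.h>

#define CACHE_WAYS 4

/* Cache lines are tagged with the virtual index directly; no physical
 * address is needed to look an entry up. */
struct tpt_line {
    uint32_t vindex;
    uint64_t data;
    int      valid;
};

static struct tpt_line cache[CACHE_WAYS];

/* Returns 1 on a hit and writes the cached entry to *out. */
static int tpt_lookup(uint32_t vindex, uint64_t *out)
{
    for (int i = 0; i < CACHE_WAYS; i++) {
        if (cache[i].valid && cache[i].vindex == vindex) {
            *out = cache[i].data;
            return 1;
        }
    }
    return 0;
}

/* Fill a way after a miss, e.g. once the entry has been fetched from the
 * (possibly physically discontiguous) TPT pages in system memory. */
static void tpt_fill(int way, uint32_t vindex, uint64_t data)
{
    cache[way] = (struct tpt_line){ .vindex = vindex, .data = data, .valid = 1 };
}
```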
Abstract:
According to one embodiment, an apparatus is disclosed. The apparatus includes a port having a plurality of lanes and a plurality of protocol engines. Each protocol engine is associated with one of the plurality of lanes and processes tasks to be forwarded to a plurality of remote nodes. The apparatus also includes a first port task scheduler (PTS) to manage the tasks to be forwarded to one or more of the plurality of protocol engines. The first PTS includes a register to indicate which of the plurality of protocol engines the first PTS is to support.
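The support register reads naturally as a per-engine bitmask gating dispatch. A sketch under that assumption (mask encoding and names are illustrative):

```c
#include <stdint.h>

#define NUM_ENGINES 8

/* Port task scheduler: the register's bits name the protocol engines
 * (one per lane) this PTS is to support. */
struct pts {
    uint8_t  engine_mask;          /* bit i set => engine i is supported */
    uint32_t queued[NUM_ENGINES];  /* tasks forwarded per engine         */
};

/* Forward a task to an engine only when the register says this PTS
 * supports it. Returns 0 on success, -1 otherwise. */
static int pts_dispatch(struct pts *p, unsigned engine)
{
    if (engine >= NUM_ENGINES || !(p->engine_mask & (1u << engine)))
        return -1;
    p->queued[engine]++;
    return 0;
}
```

A second PTS with a disjoint mask could then share the same engine pool without the two schedulers contending for the same lanes.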