Abstract:
Apparatus, systems, methods, and articles may operate to restrict an order of processing of frames associated with a task context stored in at least one context cache memory location. The order of processing may be restricted by selectively locking the context for exclusive use by a selected lane in a multi-lane serial-attached small computer system interface (SAS) hardware protocol engine while the selected lane processes a selected one of the frames.
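As a rough illustration of the context-locking idea described above, the following C sketch (hypothetical names and structure, not the disclosed implementation) shows how a lane might claim exclusive use of a cached task context before processing a frame and release it afterward.

```c
/* Minimal sketch of per-context lane locking (hypothetical names; not the
 * patented design). A lane must acquire exclusive ownership of a cached
 * task context before processing a frame that references that context. */
#include <stdatomic.h>
#include <stdbool.h>

#define NO_OWNER (-1)

struct task_context {
    atomic_int owner_lane;   /* lane currently holding the context, or NO_OWNER */
    /* ... protocol state for the task ... */
};

static void context_init(struct task_context *ctx)
{
    atomic_init(&ctx->owner_lane, NO_OWNER);
}

/* Try to lock the context for exclusive use by `lane`. */
static bool context_lock(struct task_context *ctx, int lane)
{
    int expected = NO_OWNER;
    return atomic_compare_exchange_strong(&ctx->owner_lane, &expected, lane);
}

/* Release the context after the selected lane finishes its frame. */
static void context_unlock(struct task_context *ctx, int lane)
{
    int expected = lane;
    atomic_compare_exchange_strong(&ctx->owner_lane, &expected, NO_OWNER);
}
```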
Abstract:
A storage device is disclosed. The storage device includes a storage controller. The storage controller includes a direct memory access (DMA) Descriptor Manager (DM) to generate DMA descriptors by monitoring user data and a data integrity field (DIF) transferred between a host memory and a local memory based upon a function being performed.
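To make the descriptor-generation idea concrete, here is a small C sketch (field names and the DIF policy are assumptions, not taken from the disclosure) of a DMA descriptor that carries both the user-data transfer and a DIF operation chosen by the function being performed.

```c
/* Illustrative sketch (hypothetical layout) of a DMA descriptor for a
 * host<->local transfer, with a DIF operation selected per function. */
#include <stdint.h>

enum dif_op { DIF_NONE, DIF_INSERT, DIF_VERIFY, DIF_STRIP };

struct dma_descriptor {
    uint64_t host_addr;    /* host memory address        */
    uint64_t local_addr;   /* local (controller) address */
    uint32_t length;       /* bytes of user data         */
    uint8_t  dif_op;       /* DIF handling for this run  */
};

/* Build one descriptor; the DIF operation here depends on the transfer
 * direction and whether the medium stores protection information. */
static struct dma_descriptor make_descriptor(uint64_t host, uint64_t local,
                                             uint32_t len, int host_to_local,
                                             int media_has_dif)
{
    struct dma_descriptor d = { host, local, len, DIF_NONE };
    if (host_to_local)
        d.dif_op = media_has_dif ? DIF_INSERT : DIF_NONE;  /* write path */
    else
        d.dif_op = media_has_dif ? DIF_VERIFY : DIF_NONE;  /* read path  */
    return d;
}
```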
Abstract:
Methods of scheduling tasks in computer systems architectures are disclosed. In one aspect, a method may include comparing a connection address of a first node with a connection address of a second node, determining that the connection address of the first node matches the connection address of the second node, and scheduling tasks to the first and second nodes based, at least in part, on the determination. Apparatus to implement task scheduling, and systems including the apparatus are also disclosed.
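The comparison step at the heart of this scheduling method can be sketched in a few lines of C. The example below is only a sketch under assumed types (an 8-byte connection address, a simple scheduling decision) and is not the claimed method itself.

```c
/* Rough sketch (hypothetical types) of address-based task scheduling:
 * tasks whose target nodes share a connection address are grouped onto
 * the same open connection. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct node {
    uint8_t conn_addr[8];   /* e.g., an 8-byte device address */
};

static int same_connection(const struct node *a, const struct node *b)
{
    return memcmp(a->conn_addr, b->conn_addr, sizeof a->conn_addr) == 0;
}

int main(void)
{
    struct node n1 = { { 0x50, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 } };
    struct node n2 = { { 0x50, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 } };

    if (same_connection(&n1, &n2))
        puts("schedule tasks for both nodes on the shared connection");
    else
        puts("schedule tasks on separate connections");
    return 0;
}
```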
Abstract:
A frame based data transfer device includes a receive frame parser, a receive frame processor, and a DMA engine. The receive frame parser receives a frame, stores framing information from the frame in a receive header queue, and stores an information unit from the frame in an information unit buffer. The receive frame processor is coupled to the receive header queue. The receive frame processor reads a transport layer task context as determined by a tag field in the framing information, determines how to handle the frame from the transport layer task context and framing information, generates a DMA descriptor, and stores an updated transport layer task context. The DMA engine is coupled to the information unit buffer and receive frame processor. The DMA engine reads a DMA task context, transfers the information unit to a destination memory by processing the DMA descriptor, and stores an updated DMA task context.
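The parser/processor/DMA-engine pipeline above can be summarized with the following C sketch. All structure and field names are hypothetical; the sketch only shows the data flow: framing information keyed by a tag selects a transport-layer task context, which yields a DMA descriptor that the DMA engine executes against the information unit buffer.

```c
/* Simplified data-flow sketch (hypothetical structures) of the receive path:
 * frame processor looks up a task context by tag, emits a DMA descriptor,
 * and the DMA engine moves the information unit to destination memory. */
#include <stdint.h>
#include <string.h>

#define MAX_TASKS 64
#define IU_BUF_SZ 1024

struct framing_info  { uint16_t tag; uint32_t iu_offset; uint32_t iu_len; };
struct transport_ctx { uint8_t *dest; uint32_t bytes_done; };
struct dma_desc      { uint8_t *dst; const uint8_t *src; uint32_t len; };

static struct transport_ctx task_ctx[MAX_TASKS];
static uint8_t iu_buffer[IU_BUF_SZ];   /* information unit buffer */

/* Frame-processor step: consume one header-queue entry, update the task
 * context, and produce a descriptor for the DMA engine. */
static struct dma_desc process_frame(const struct framing_info *hdr)
{
    struct transport_ctx *ctx = &task_ctx[hdr->tag];
    struct dma_desc d = {
        .dst = ctx->dest + ctx->bytes_done,
        .src = &iu_buffer[hdr->iu_offset],
        .len = hdr->iu_len,
    };
    ctx->bytes_done += hdr->iu_len;    /* store updated task context */
    return d;
}

/* DMA-engine step: transfer the information unit to destination memory. */
static void dma_execute(const struct dma_desc *d)
{
    memcpy(d->dst, d->src, d->len);
}
```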
Abstract:
According to one embodiment, a storage device is disclosed. The storage device includes a port having one or more lanes and a direct memory access (DMA) Descriptor Manager (DM). The DM generates and tracks completion of descriptors. The DM includes a first completion lookup table to track one or more fields of an input/output (I/O) context received at a first lane.
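The completion lookup table can be pictured as a small tag-indexed structure, sketched below in C. The entry layout and the per-descriptor counting are assumptions made for illustration, not details taken from the disclosure.

```c
/* Illustrative sketch (hypothetical layout) of a per-lane completion lookup
 * table: when an I/O context arrives on a lane, selected fields are recorded
 * by tag, and the entry is retired once its descriptors complete. */
#include <stdbool.h>
#include <stdint.h>

#define TABLE_SIZE 32

struct completion_entry {
    bool     valid;
    uint16_t io_tag;            /* tag of the tracked I/O context */
    uint32_t descriptors_left;  /* descriptors not yet completed  */
};

static struct completion_entry lane0_table[TABLE_SIZE];

/* Record fields of an I/O context received at the first lane. */
static void track_io(uint16_t tag, uint32_t descriptor_count)
{
    struct completion_entry *e = &lane0_table[tag % TABLE_SIZE];
    e->valid = true;
    e->io_tag = tag;
    e->descriptors_left = descriptor_count;
}

/* Returns true when the last descriptor for the I/O has completed. */
static bool descriptor_done(uint16_t tag)
{
    struct completion_entry *e = &lane0_table[tag % TABLE_SIZE];
    if (!e->valid || e->io_tag != tag)
        return false;
    if (--e->descriptors_left == 0) {
        e->valid = false;
        return true;
    }
    return false;
}
```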
Abstract:
A device includes a task context controller, at least one transport engine connected to the task context controller, and at least one comparator connected to the transport engine. The comparator compares a data offset from a receive frame with a current data offset, and the result is used to determine the frame processing order.
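A minimal C sketch of that comparison, with assumed names and an assumed three-way outcome, is shown below; it is illustrative only.

```c
/* Small sketch (hypothetical names) of offset-based frame ordering: a frame
 * is processed only when its data offset matches the offset the task context
 * expects; otherwise it is held or treated as already-received data. */
#include <stdint.h>

enum frame_action { PROCESS_NOW, HOLD_FOR_LATER, DISCARD_DUPLICATE };

static enum frame_action order_frame(uint32_t frame_offset,
                                     uint32_t current_offset)
{
    if (frame_offset == current_offset)
        return PROCESS_NOW;          /* in-order: advance the context  */
    if (frame_offset > current_offset)
        return HOLD_FOR_LATER;       /* out-of-order: wait for the gap */
    return DISCARD_DUPLICATE;        /* offset already covered         */
}
```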
Abstract:
Disclosed is a target port that implements a transport layer retry (TLR) mechanism. The target port includes a circuit having a transmit transport layer and receive transport layer in which both the transmit and receive transport layers are coupled to a link. A transmit protocol processor of the transmit transport layer controls a TLR mechanism in a serialized protocol. A receive protocol processor of the receive transport layer is coupled to the transmit transport layer and likewise controls the TLR mechanism in the serialized protocol.
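The retry behavior can be illustrated schematically in C. The sketch below is not the disclosed TLR design; it only shows the general pattern of resending a frame a bounded number of times when the far end cannot accept it, with placeholder functions standing in for the link.

```c
/* Schematic sketch (not the patented design) of a transport layer retry:
 * the transmit side resends a frame up to a retry limit when the receive
 * side reports that it could not accept it. */
#include <stdbool.h>
#include <stdio.h>

#define TLR_MAX_RETRIES 3

/* Stand-in for the link: pretend the first two attempts are rejected. */
static bool send_frame_once(int attempt)
{
    return attempt >= 2;   /* accepted on the third attempt */
}

static bool send_with_tlr(void)
{
    for (int attempt = 0; attempt <= TLR_MAX_RETRIES; attempt++) {
        if (send_frame_once(attempt)) {
            printf("frame accepted on attempt %d\n", attempt + 1);
            return true;
        }
        printf("retrying frame (attempt %d failed)\n", attempt + 1);
    }
    return false;   /* retries exhausted; escalate to a higher layer */
}

int main(void)
{
    return send_with_tlr() ? 0 : 1;
}
```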
Abstract:
A method and apparatus for managing task context are provided. Upon initialization, a protocol engine provides context resources available for processing tasks to a task issuer. Based on available context resources, the task issuer creates and manages a free list of available task context indices and assigns an index to a task prior to storing task context in a context memory accessible to both the task issuer and the protocol engine and issuing the task to the protocol engine.
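The free-list management described above follows a familiar index-pool pattern, sketched below in C with assumed sizes and function names; the actual interface between the task issuer and the protocol engine is not reproduced here.

```c
/* Minimal sketch (hypothetical API) of the free-list idea: the task issuer
 * draws an index from a pool sized by the context resources the protocol
 * engine advertised, stores the task context at that index, and returns the
 * index when the task completes. */
#include <stdint.h>

#define MAX_CONTEXTS 128

static uint16_t free_list[MAX_CONTEXTS];
static int free_top;   /* number of free indices currently on the list */

static void free_list_init(int contexts_available)
{
    free_top = contexts_available;
    for (int i = 0; i < contexts_available; i++)
        free_list[i] = (uint16_t)i;
}

/* Assign an index to a new task; returns -1 if no context is available. */
static int context_index_alloc(void)
{
    return free_top > 0 ? free_list[--free_top] : -1;
}

/* Return the index once the protocol engine reports task completion. */
static void context_index_free(uint16_t index)
{
    free_list[free_top++] = index;
}
```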