Abstract:
A network transport layer accelerator accelerates packet processing so that packets can be forwarded at wire speed. To accelerate processing, the accelerator pre-processes the network transport layer header encapsulated in a received packet for a connection and performs in-line network transport layer checksum insertion before transmitting a packet. A timer unit in the accelerator schedules processing of received packets. The accelerator also includes a free pool allocator, which manages the buffers that store received packets, and a packet order unit, which synchronizes processing of received packets belonging to the same connection.
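As an illustration of the in-line checksum insertion step described above, the following is a minimal software sketch of the RFC 1071 one's-complement checksum that such an accelerator would compute and write into a TCP header before transmission; the function names and the hard-coded checksum offset are illustrative assumptions, not the accelerator's actual hardware interface.

```c
#include <stdint.h>
#include <stddef.h>

/* RFC 1071 one's-complement sum over a byte buffer (network byte order). */
static uint32_t ones_sum(const uint8_t *data, size_t len, uint32_t sum)
{
    while (len > 1) {
        sum += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len  -= 2;
    }
    if (len)                              /* odd trailing byte, zero-padded */
        sum += (uint32_t)data[0] << 8;
    return sum;
}

/*
 * Compute the TCP checksum for an IPv4 segment and write it into the TCP
 * header (bytes 16-17), as an in-line insertion step would do just before
 * the packet is handed to the wire.
 */
void tcp_checksum_insert(uint8_t *tcp, size_t tcp_len,
                         uint32_t src_ip, uint32_t dst_ip)
{
    uint8_t pseudo[12];

    /* IPv4 pseudo-header: source, destination, zero, protocol, TCP length */
    pseudo[0] = src_ip >> 24; pseudo[1] = src_ip >> 16;
    pseudo[2] = src_ip >> 8;  pseudo[3] = src_ip;
    pseudo[4] = dst_ip >> 24; pseudo[5] = dst_ip >> 16;
    pseudo[6] = dst_ip >> 8;  pseudo[7] = dst_ip;
    pseudo[8] = 0;            pseudo[9] = 6;          /* IPPROTO_TCP */
    pseudo[10] = tcp_len >> 8; pseudo[11] = tcp_len;

    tcp[16] = tcp[17] = 0;                            /* clear checksum field */

    uint32_t sum = ones_sum(pseudo, sizeof(pseudo), 0);
    sum = ones_sum(tcp, tcp_len, sum);
    while (sum >> 16)                                 /* fold carries */
        sum = (sum & 0xffff) + (sum >> 16);

    uint16_t csum = (uint16_t)~sum;
    tcp[16] = csum >> 8;
    tcp[17] = csum & 0xff;
}
```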
Abstract:
A network application executing on a host system provides a list of application buffers, stored in a queue in host memory, to a network services processor coupled to the host system. The application buffers store data transferred on a socket established between the network application and a remote network application executing on a remote host system. Data received by the network services processor over the network is transferred into these application buffers. After the transfer, a completion notification is written to one of two control queues in the host system. The completion notification includes the size of the transferred data and an identifier associated with the socket; the identifier identifies the thread associated with the transferred data and the location of the data in the host system.
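A minimal sketch of that data path is shown below, assuming hypothetical descriptor and completion layouts (field names such as host_addr and socket_id are illustrative, not the actual hardware format): the application posts buffer descriptors to a queue and later polls a control queue for completion notifications.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical descriptor the application posts for each host buffer. */
struct app_buffer_desc {
    uint64_t host_addr;    /* DMA address of the buffer in host memory */
    uint32_t length;       /* buffer size in bytes                     */
    uint32_t socket_id;    /* socket these buffers are posted for      */
};

/* Hypothetical completion notification written back by the processor. */
struct completion_note {
    uint32_t socket_id;    /* identifies the socket and waiting thread */
    uint32_t bytes;        /* size of the data transferred             */
    uint64_t host_addr;    /* where the data was placed in host memory */
};

/* Minimal ring used for both the buffer list and the control queues. */
struct ring { void *base; uint32_t size, head, tail; };

/* Post a list of free application buffers for the processor to fill. */
static int post_buffers(struct ring *bufq, const struct app_buffer_desc *descs,
                        uint32_t n)
{
    struct app_buffer_desc *slots = bufq->base;
    for (uint32_t i = 0; i < n; i++) {
        uint32_t next = (bufq->tail + 1) % bufq->size;
        if (next == bufq->head)
            return -1;                       /* queue full */
        slots[bufq->tail] = descs[i];
        bufq->tail = next;
    }
    return 0;                                /* a doorbell write would follow */
}

/* Poll one control queue for a completion; returns 1 if one was consumed. */
static int poll_completion(struct ring *cq, struct completion_note *out)
{
    struct completion_note *slots = cq->base;
    if (cq->head == cq->tail)
        return 0;                            /* nothing completed yet */
    *out = slots[cq->head];
    cq->head = (cq->head + 1) % cq->size;
    return 1;
}
```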
Abstract:
In one embodiment, a system includes a packet reception unit configured to receive a packet, create a header indicating how the packet is to be scheduled among a plurality of cores, and concatenate the header and the packet. The header is based on the content of the packet. In another embodiment, a system includes a transmit silo configured to store multiple fragments of a packet that have been sent to a destination but for which the transmit silo has not yet received an acknowledgement of receipt from the destination. The system further includes a restriction verifier coupled with the transmit silo. The restriction verifier is configured to receive the fragments and determine whether they can be sent and stored in the transmit silo.
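The sketch below illustrates, under assumed field names and a simple FNV-style hash, how a reception unit might derive a scheduling header from packet content (so packets of the same flow map to the same core and retain ordering) and concatenate it with the packet; it is not the actual hardware header format.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical scheduling header prepended to each received packet. */
struct sched_header {
    uint32_t tag;          /* flow tag: same flow -> same core, preserves order */
    uint8_t  group;        /* group of cores allowed to process this packet    */
    uint8_t  queue_class;  /* quality-of-service class                         */
    uint16_t pkt_len;      /* length of the packet that follows the header     */
};

/* Illustrative flow hash over the IPv4/TCP 5-tuple (FNV-1a style mix). */
static uint32_t flow_tag(uint32_t sip, uint32_t dip,
                         uint16_t sport, uint16_t dport, uint8_t proto)
{
    uint32_t h = 2166136261u;
    uint32_t words[3] = { sip, dip, ((uint32_t)sport << 16) | dport };
    for (int i = 0; i < 3; i++) {
        h ^= words[i];
        h *= 16777619u;
    }
    return h ^ proto;
}

/* Concatenate the header and the packet into one contiguous buffer. */
size_t prepend_sched_header(uint8_t *out, const uint8_t *pkt, uint16_t len,
                            const struct sched_header *hdr)
{
    memcpy(out, hdr, sizeof(*hdr));
    memcpy(out + sizeof(*hdr), pkt, len);
    return sizeof(*hdr) + len;
}
```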
Abstract:
In an embodiment, authenticated hardware and authenticated software are cryptographically bound using symmetric and asymmetric cryptography. Cryptographically binding the hardware and software ensures that original equipment manufacturer (OEM) hardware will run only OEM software. It also protects the OEM binary code so that it will run only on the OEM hardware and cannot be replicated or altered to operate on unauthorized hardware. This cryptographic binding technique is referred to herein as secure software and hardware association (SSHA).
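A rough sketch of how such a binding check might look is given below; the primitives (rsa_verify, sha256, aes_cmac) and the fused values are hypothetical stand-ins for the device's actual crypto engine and provisioning, used only to show the combined asymmetric (signature) and symmetric (chip-unique MAC) steps.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical primitives; a real device would use its crypto engine. */
bool rsa_verify(const uint8_t *pubkey, const uint8_t *msg, size_t msg_len,
                const uint8_t *sig, size_t sig_len);            /* asymmetric */
void sha256(const uint8_t *msg, size_t len, uint8_t out[32]);
void aes_cmac(const uint8_t key[16], const uint8_t *msg, size_t len,
              uint8_t mac[16]);                                  /* symmetric  */

/* Values assumed to be provisioned in on-chip fuses at manufacture. */
extern const uint8_t fused_pubkey_hash[32];   /* hash of the OEM public key */
extern const uint8_t fused_device_key[16];    /* chip-unique symmetric key  */

/*
 * Sketch of a boot-time check that binds OEM software to OEM hardware:
 * 1. confirm the public key shipped with the image is the OEM's key,
 * 2. verify the image signature with that key (asymmetric step),
 * 3. check a MAC computed with the chip-unique key (symmetric step),
 *    which fails on any chip that lacks the provisioned device key.
 */
bool ssha_boot_check(const uint8_t *pubkey, size_t pubkey_len,
                     const uint8_t *image, size_t image_len,
                     const uint8_t *sig, size_t sig_len,
                     const uint8_t image_mac[16])
{
    uint8_t h[32], mac[16];

    sha256(pubkey, pubkey_len, h);
    if (memcmp(h, fused_pubkey_hash, sizeof(h)) != 0)
        return false;                         /* not the OEM's key   */

    if (!rsa_verify(pubkey, image, image_len, sig, sig_len))
        return false;                         /* image was altered   */

    aes_cmac(fused_device_key, image, image_len, mac);
    return memcmp(mac, image_mac, sizeof(mac)) == 0;  /* bound to this HW */
}
```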
Abstract:
A content-aware application processing system allows directed access to data stored in a non-cache-coherent memory, bypassing the cache-coherent memory. The processor includes a system interface to cache-coherent memory and a low-latency memory interface to a non-cache-coherent memory. The system interface directs memory accesses for ordinary load/store instructions executed by the processor to the cache-coherent memory. The low-latency memory interface directs memory accesses for non-ordinary load/store instructions to the non-cache-coherent memory, bypassing the cache-coherent memory. A non-ordinary load/store instruction can be a coprocessor instruction. The memory can be a low-latency memory. The processor can include a plurality of processor cores.
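The sketch below contrasts an ordinary load, issued as a normal C pointer dereference through the coherent system interface, with a non-ordinary load issued through a hypothetical llm_load64 intrinsic standing in for the coprocessor instruction; the lookup-table example and its address layout are illustrative assumptions.

```c
#include <stdint.h>

/*
 * Hypothetical intrinsics standing in for the coprocessor load/store that
 * the low-latency memory interface handles; on real hardware these would
 * be dedicated instructions, not C functions.
 */
uint64_t llm_load64(uint64_t llm_addr);                    /* non-ordinary load  */
void     llm_store64(uint64_t llm_addr, uint64_t value);   /* non-ordinary store */

/* Ordinary load: goes through the system interface and the coherent caches. */
static inline uint64_t coherent_load64(const uint64_t *p)
{
    return *p;                         /* regular load/store instruction */
}

/*
 * Example: read one entry of a state-transition table kept in low-latency
 * memory (table base and 256-entry-per-state layout are assumptions).
 */
uint64_t table_next_state(uint64_t table_base, uint64_t state, uint8_t byte)
{
    uint64_t entry_addr = table_base + (state * 256 + byte) * sizeof(uint64_t);
    return llm_load64(entry_addr);     /* bypasses the coherent cache hierarchy */
}
```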
Abstract:
A method and apparatus optimize IPsec processing by providing execution units with windowing data during prefetch and by managing the coherency of security association data through ordered security association accesses. Providing execution units with windowing data allows initial parallel processing of IPsec packets. The security association access ordering apparatus serializes access to the dynamic section of the security association data according to packet arrival order, while otherwise allowing parallel processing of an IPsec packet by multiple execution units in a security processor.
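A minimal ticket-ordering sketch of that serialization idea is shown below, using C11 atomics; the split of the security association into static and dynamic sections and the field names are assumptions for illustration, not the processor's actual data layout.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative split of a security association into static and dynamic parts. */
struct sa_static  { uint32_t spi; uint8_t key[32]; };        /* read-only: parallel */
struct sa_dynamic { uint64_t seq; uint64_t replay_window; }; /* ordered updates     */

struct sa {
    struct sa_static     st;
    struct sa_dynamic    dyn;
    atomic_uint_fast64_t next_ticket;   /* arrival-order ticket dispenser      */
    atomic_uint_fast64_t now_serving;   /* ticket allowed into the dyn section */
};

/* Taken once per packet at arrival so ordering reflects arrival order. */
static uint64_t sa_take_ticket(struct sa *sa)
{
    return atomic_fetch_add(&sa->next_ticket, 1);
}

/*
 * Each execution unit does the parallel work (crypto, header parsing) first,
 * then waits until its arrival ticket is served before touching sa->dyn.
 */
static void sa_update_dynamic(struct sa *sa, uint64_t ticket, uint64_t pkt_seq)
{
    while (atomic_load(&sa->now_serving) != ticket)
        ;                                /* wait for earlier arrivals */

    /* serialized section: sequence number / replay-window update */
    if (pkt_seq > sa->dyn.seq)
        sa->dyn.seq = pkt_seq;

    atomic_store(&sa->now_serving, ticket + 1);   /* release the next packet */
}
```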