Abstract:
Apparatuses, methods, and systems for dynamic resource allocation based on quality-of-service prediction are disclosed. In embodiments, an apparatus includes quality-of-service prediction circuitry and a resource controller. The quality-of-service prediction circuitry is to make quality-of-service predictions using a model based at least in part on at least one performance counter measurement and at least one quality-of-service measurement. The resource controller is to allocate one or more shared resources based on the quality-of-service predictions and architectural performance counter measurements.
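The abstract describes the control loop only at block-diagram level. As a rough software analogue (not the claimed circuitry), the minimal C sketch below predicts a QoS score from a few performance-counter readings with a linear model and nudges a shared-resource allocation toward a target; the names, the linear model, and the cache-way resource are all illustrative assumptions.

```c
#include <stdio.h>

/* Hypothetical names throughout: a linear model maps performance-counter
 * readings to a predicted QoS score, and the allocation of a shared resource
 * (cache ways here) is nudged toward a QoS target. */
typedef struct {
    double weights[3];   /* model coefficients, assumed already trained */
    double bias;
} qos_model_t;

static double predict_qos(const qos_model_t *m, const double counters[3]) {
    double qos = m->bias;
    for (int i = 0; i < 3; i++)
        qos += m->weights[i] * counters[i];
    return qos;
}

static int allocate_cache_ways(double predicted_qos, double target_qos, int current_ways) {
    /* Grow the allocation when predicted QoS falls short of the target, shrink otherwise. */
    if (predicted_qos < target_qos && current_ways < 16) return current_ways + 1;
    if (predicted_qos > target_qos && current_ways > 1)  return current_ways - 1;
    return current_ways;
}

int main(void) {
    qos_model_t model = { { -0.40, -0.10, 0.02 }, 1.0 };
    double counters[3] = { 0.8, 1.2, 3.0 };   /* e.g. misses, stalls, IPC (illustrative) */
    double qos = predict_qos(&model, counters);
    printf("predicted QoS %.2f -> cache ways %d\n", qos, allocate_cache_ways(qos, 0.90, 8));
    return 0;
}
```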
Abstract:
An adaptive baseband processing system having a scalable architecture that allows scaling to support adaptive transmission and reception at different granularities (channel vs. subchannel) and for different numbers of antennas and/or users, including its components, is described herein. In various embodiments, the components include a front-end processor, an AAS processor, and a back-end processor.
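Purely as an illustration of the three-stage composition named above (front-end processor, AAS processor, back-end processor), the C sketch below chains three placeholder stages and runs the same pipeline for different antenna counts; the data types and per-stage work are invented for illustration and are not the patented design.

```c
#include <stdio.h>

/* Placeholder stage outputs; the real interfaces are not specified in the abstract. */
typedef struct { int samples; } fe_out_t;
typedef struct { int beams;   } aas_out_t;

/* Front-end processing of the antenna streams (placeholder work). */
static fe_out_t front_end(int antennas) { return (fe_out_t){ antennas * 4 }; }

/* Adaptive antenna system (AAS) processing across users (placeholder work). */
static aas_out_t aas_process(fe_out_t in, int users) {
    return (aas_out_t){ in.samples / (users > 0 ? users : 1) };
}

/* Back-end processing producing the final output (placeholder work). */
static int back_end(aas_out_t in) { return in.beams; }

int main(void) {
    /* The same three-stage pipeline scaled to different numbers of antennas. */
    for (int antennas = 2; antennas <= 8; antennas *= 2)
        printf("antennas=%d -> output=%d\n",
               antennas, back_end(aas_process(front_end(antennas), 4)));
    return 0;
}
```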
Abstract:
An arrangement is provided for performing MD5 digesting. The arrangement includes apparatuses and methods that pipeline the MD5 digesting process to produce a 128-bit digest for an input message of arbitrary length.
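The pipelining itself is a hardware matter, but the end result the abstract names, a 128-bit digest over an arbitrary-length message, can be reproduced in software for comparison. A minimal sketch using OpenSSL's EVP interface (assumed available) follows.

```c
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *msg = "any arbitrary-length input message";
    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int len = 0;

    /* One-shot MD5 over the input; the result is a 128-bit (16-byte) digest. */
    EVP_Digest(msg, strlen(msg), digest, &len, EVP_md5(), NULL);

    for (unsigned int i = 0; i < len; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}
```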
Abstract:
An arrangement is provided for performing RC4 ciphering. The arrangement includes apparatuses and methods that pipeline generation of a key stream based on a byte state array, called the S-box, which is initially generated from a secret key shared by a receiver and a transmitter in a network system. The S-box is stored in a storage device which may be a register file with two read ports and one write port. A cache is used to store a number of bytes read from the S-box storage device.
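For reference, the S-box initialization from the shared secret key and the byte-at-a-time key-stream generation that the abstract pipelines are the standard RC4 key-scheduling and pseudo-random generation algorithms; a straightforward, unpipelined C version is sketched below (the key "Key" is only an example input).

```c
#include <stdint.h>
#include <stdio.h>

/* Key-scheduling: build the 256-byte S-box from the shared secret key. */
static void rc4_init(uint8_t s[256], const uint8_t *key, size_t keylen) {
    for (int i = 0; i < 256; i++) s[i] = (uint8_t)i;
    uint8_t j = 0;
    for (int i = 0; i < 256; i++) {
        j = (uint8_t)(j + s[i] + key[i % keylen]);
        uint8_t t = s[i]; s[i] = s[j]; s[j] = t;   /* swap S[i] and S[j] */
    }
}

/* Pseudo-random generation: emit n key-stream bytes by permuting the S-box. */
static void rc4_keystream(uint8_t s[256], uint8_t *out, size_t n) {
    uint8_t i = 0, j = 0;
    for (size_t k = 0; k < n; k++) {
        i = (uint8_t)(i + 1);
        j = (uint8_t)(j + s[i]);
        uint8_t t = s[i]; s[i] = s[j]; s[j] = t;
        out[k] = s[(uint8_t)(s[i] + s[j])];
    }
}

int main(void) {
    uint8_t s[256], ks[8];
    const uint8_t key[] = "Key";               /* example key; trailing NUL excluded below */
    rc4_init(s, key, 3);
    rc4_keystream(s, ks, sizeof(ks));
    for (size_t k = 0; k < sizeof(ks); k++) printf("%02x", ks[k]);
    printf("\n");
    return 0;
}
```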
Abstract:
A method and apparatus are described for processing network data packets by a network processor having cipher processing cores and authentication processing cores that operate on data within the network data packets, in order to provide one-pass ciphering and authentication processing of the network data packets.
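The abstract does not name a particular cipher or authentication algorithm. Purely as an illustration of one-pass ciphering and authentication in software, the sketch below uses OpenSSL's AES-128-GCM, which encrypts a packet payload and produces an authentication tag in a single pass over the data; the key, nonce, and packet contents are placeholders.

```c
#include <openssl/evp.h>
#include <stdio.h>

int main(void) {
    unsigned char key[16] = {0};                 /* placeholder key   */
    unsigned char iv[12]  = {0};                 /* placeholder nonce */
    unsigned char hdr[]   = "packet-header";     /* authenticated-only data (AAD) */
    unsigned char pt[]    = "packet-payload";    /* data to encrypt and authenticate */
    unsigned char ct[sizeof(pt)], tag[16];
    int len = 0, ct_len = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, NULL, &len, hdr, sizeof(hdr) - 1);   /* feed the header as AAD */
    EVP_EncryptUpdate(ctx, ct, &len, pt, sizeof(pt) - 1);       /* encrypt + authenticate payload */
    ct_len = len;
    EVP_EncryptFinal_ex(ctx, ct + ct_len, &len);
    ct_len += len;
    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, sizeof(tag), tag);  /* authentication tag */
    EVP_CIPHER_CTX_free(ctx);

    printf("ciphertext bytes: %d, tag[0]=0x%02x\n", ct_len, tag[0]);
    return 0;
}
```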
Abstract:
A method of executing instructions on a processor includes receiving a first condition code produced by executing a first instruction during a first clock cycle on an array of engines included in the processor, receiving a second condition code produced by executing a second instruction during a second clock cycle on the array of engines included in the processor, and executing a logical operator on the first and second condition codes during the second clock cycle on the array of engines included in the processor.
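A toy C model of the described sequence may make it concrete: one condition code is produced per clock cycle, and a logical operator is applied to the pair in the second cycle. The engine behavior and the choice of AND as the operator are assumptions made only for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for one instruction executing on the engine array in a given cycle:
 * it simply flags whether its operand is nonzero. */
static bool execute_cycle(int cycle, int operand) {
    (void)cycle;                 /* the cycle number only labels when the code was produced */
    return operand != 0;
}

int main(void) {
    bool cc1 = execute_cycle(1, 7);   /* first instruction, first clock cycle   */
    bool cc2 = execute_cycle(2, 0);   /* second instruction, second clock cycle */
    /* Logical operator (AND here) applied to both condition codes in the second cycle. */
    bool combined = cc1 && cc2;
    printf("cc1=%d cc2=%d combined=%d\n", cc1, cc2, combined);
    return 0;
}
```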
Abstract:
Embodiments of systems and methods for distributed control relay architecture are described herein. Other embodiments may be described and claimed.
Abstract:
An arrangement is provided for performing modular exponentiations. A modular exponentiation may be performed by using multiple Montgomery multiplications. A Montgomery multiplication comprises a plurality of iterations of basic operations (e.g., carry-save additions), and is performed by a Montgomery multiplication engine (MME). Multiple MMEs of smaller sizes may be chained together to perform modular exponentiations of larger sizes. Additionally, a single MME of a smaller size may be scheduled to perform modular exponentiations of larger sizes. Moreover, the process of performing a Montgomery multiplication may be pipelined both horizontally and vertically. Furthermore, processes of performing two Montgomery multiplications may be interleaved and performed by the same MME or chained MMEs.
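As a software reference for the arithmetic being pipelined, the C sketch below performs a single-word Montgomery multiplication (REDC) and uses it for square-and-multiply modular exponentiation; it assumes an odd modulus below 2^31 so the 64-bit intermediates cannot overflow, and it does not model the chaining, scheduling, or interleaving of multiple MMEs.

```c
#include <stdint.h>
#include <stdio.h>

/* Montgomery parameters for an odd modulus n < 2^31, with R = 2^32.
 * Keeping n below 2^31 guarantees the 64-bit intermediate in mont_mul cannot overflow. */
typedef struct {
    uint32_t n;       /* odd modulus */
    uint32_t n_prime; /* -n^{-1} mod 2^32 */
    uint32_t r2;      /* R^2 mod n, used to enter Montgomery form */
} mont_ctx;

/* -n^{-1} mod 2^32 by Newton iteration (n must be odd). */
static uint32_t neg_inv32(uint32_t n) {
    uint32_t inv = n;             /* correct to 3 bits for odd n */
    for (int i = 0; i < 4; i++)   /* each step doubles the number of correct bits */
        inv *= 2 - n * inv;
    return (uint32_t)(0u - inv);
}

static void mont_init(mont_ctx *c, uint32_t n) {
    c->n = n;
    c->n_prime = neg_inv32(n);
    uint64_t r_mod_n = ((uint64_t)1 << 32) % n;
    c->r2 = (uint32_t)((r_mod_n * r_mod_n) % n);
}

/* One Montgomery multiplication (REDC): returns a*b*R^{-1} mod n, for a, b < n. */
static uint32_t mont_mul(const mont_ctx *c, uint32_t a, uint32_t b) {
    uint64_t t = (uint64_t)a * b;
    uint32_t m = (uint32_t)t * c->n_prime;         /* m = (t mod R) * n' mod R */
    uint64_t u = (t + (uint64_t)m * c->n) >> 32;   /* (t + m*n) / R, exact division */
    if (u >= c->n) u -= c->n;
    return (uint32_t)u;
}

/* base^exp mod n via square-and-multiply, entirely in Montgomery form. */
static uint32_t mont_pow(const mont_ctx *c, uint32_t base, uint32_t exp) {
    uint32_t x = mont_mul(c, base % c->n, c->r2);  /* base in Montgomery form          */
    uint32_t acc = mont_mul(c, 1, c->r2);          /* 1 in Montgomery form (= R mod n) */
    while (exp) {
        if (exp & 1) acc = mont_mul(c, acc, x);
        x = mont_mul(c, x, x);
        exp >>= 1;
    }
    return mont_mul(c, acc, 1);                    /* leave Montgomery form */
}

int main(void) {
    mont_ctx c;
    mont_init(&c, 1000003u);                       /* small odd modulus for illustration */
    printf("12345^6789 mod 1000003 = %u\n", mont_pow(&c, 12345u, 6789u));
    return 0;
}
```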
Abstract:
Methods and apparatus, including computer program products, implementing techniques for monitoring a state of a device of a switched fabric network, the device including on-chip queues to store queue descriptors and a data buffer to store data packets, each queue descriptor having a corresponding data packet; detecting a first trigger condition to transition the device from a first state to a second state; and recovering space in the data buffer in response to detecting the first trigger condition, the recovering comprising selecting one or more of the on-chip queues for discard, and removing from the data buffer the data packets corresponding to queue descriptors in the selected one or more on-chip queues.
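A minimal C sketch of the recovery step may make the bookkeeping concrete: on a trigger condition, queues are selected for discard (here by a hypothetical priority threshold) and the data-buffer slots referenced by their descriptors are released. The data structures and the selection policy are illustrative assumptions, not the claimed design.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_QUEUES  4
#define QUEUE_DEPTH 8
#define BUF_SLOTS   256

/* Hypothetical on-chip queue: each descriptor indexes one packet in the data buffer. */
typedef struct {
    int pkt_index[QUEUE_DEPTH];  /* queue descriptors: indices into the data buffer */
    int count;                   /* descriptors currently queued */
    int priority;                /* used here to pick discard victims */
} queue_t;

typedef struct {
    bool in_use[BUF_SLOTS];      /* data-buffer occupancy, one slot per packet */
    int  free_slots;
} data_buffer_t;

/* On a trigger condition, recover buffer space: select low-priority queues for
 * discard and release the buffer slots referenced by their descriptors. */
static void recover_space(queue_t q[NUM_QUEUES], data_buffer_t *buf, int prio_threshold) {
    for (int i = 0; i < NUM_QUEUES; i++) {
        if (q[i].priority >= prio_threshold) continue;   /* keep higher-priority queues */
        for (int d = 0; d < q[i].count; d++) {
            int slot = q[i].pkt_index[d];
            if (slot >= 0 && slot < BUF_SLOTS && buf->in_use[slot]) {
                buf->in_use[slot] = false;
                buf->free_slots++;
            }
        }
        q[i].count = 0;                                  /* queue emptied by the discard */
    }
}

int main(void) {
    queue_t q[NUM_QUEUES] = {
        { { 0, 1 }, 2, 0 },   /* low priority: selected for discard */
        { { 2 },    1, 5 },   /* high priority: kept                */
    };
    data_buffer_t buf = { { false }, 0 };
    buf.in_use[0] = buf.in_use[1] = buf.in_use[2] = true;
    recover_space(q, &buf, 3);
    printf("buffer slots recovered: %d\n", buf.free_slots);
    return 0;
}
```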