Abstract:
The present disclosure relates generally to electronic interconnects including one or more switches and, more particularly, to delay bound determination for electronic interconnects.
Abstract:
A master device has a buffer for storing data transferred from, or to be transferred to, a memory system. Control circuitry issues from time to time a group of one or more transactions to request transfer of a block of data between the memory system and the buffer. A hardware or software mechanism can be provided to detect at least one memory load parameter indicating how heavily loaded the memory system is, and the size of the block of data transferred per group can be varied based on that parameter. By adapting the size of the block of data transferred per group to the memory system load, a better balance between energy efficiency and quality of service can be achieved.
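As a rough illustration, the adaptation could be expressed as a lookup from a coarse load level to a block size. The thresholds, the sizes, and the read_memory_load() stub below are illustrative assumptions, not details from the disclosure:

```c
#include <stdint.h>
#include <stdio.h>

/* Placeholder load monitor: a real design would read this from a
 * hardware performance counter or a software-visible register. */
static int read_memory_load(void)
{
    return 1; /* stub: "medium" load */
}

/* Choose how many bytes to transfer per group of transactions.
 * Large blocks amortise per-group overhead and save energy when the
 * memory system is lightly loaded; small blocks keep other masters'
 * latency bounded (quality of service) when it is heavily loaded. */
static uint32_t select_group_size(void)
{
    switch (read_memory_load()) {
    case 0:  return 4096; /* light load: big, energy-efficient bursts */
    case 1:  return 1024; /* medium load */
    default: return 256;  /* heavy load: small bursts to protect QoS */
    }
}

int main(void)
{
    printf("group size: %u bytes\n", select_group_size());
    return 0;
}
```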
Abstract:
Examples of the present disclosure relate to an apparatus comprising interface circuitry to receive memory access commands directed to a memory device, each memory access command specifying a memory address to be accessed. The apparatus comprises scheduler circuitry to store a representation of a plurality of states accessible to the memory device and, based on the representation, determine an order for the received memory access commands. The apparatus further comprises dispatch circuitry to receive the memory access commands from the scheduler circuitry and issue them, in the determined order, to be performed by the memory device.
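A minimal sketch of one way such a scheduler might use its stored state: here the representation is the currently open row per bank, and commands that hit an open row are ordered first. The open-page-first policy, the bank count, and the data structures are assumptions made for illustration:

```c
#include <stdio.h>
#include <stddef.h>

#define NUM_BANKS 4

/* Stored representation of memory-device state: the row currently
 * open in each bank (-1 when the bank is precharged). */
static int open_row[NUM_BANKS] = { -1, -1, -1, -1 };

typedef struct { unsigned bank; int row; } mem_cmd_t;

/* A command targeting the already-open row of its bank can be issued
 * without a costly precharge/activate cycle. */
static int is_row_hit(const mem_cmd_t *c)
{
    return open_row[c->bank] == c->row;
}

/* Determine an order: row hits first, then the remaining commands in
 * arrival order (a much-simplified open-page-first policy). */
static void order_commands(const mem_cmd_t *cmds, size_t n, size_t *order)
{
    size_t k = 0;
    for (size_t i = 0; i < n; i++)
        if (is_row_hit(&cmds[i])) order[k++] = i;
    for (size_t i = 0; i < n; i++)
        if (!is_row_hit(&cmds[i])) order[k++] = i;
}

int main(void)
{
    open_row[1] = 7;                      /* bank 1 has row 7 open */
    mem_cmd_t cmds[3] = { {0, 3}, {1, 9}, {1, 7} };
    size_t order[3];
    order_commands(cmds, 3, order);
    for (size_t i = 0; i < 3; i++)        /* dispatch in that order */
        printf("issue cmd %zu (bank %u, row %d)\n",
               order[i], cmds[order[i]].bank, cmds[order[i]].row);
    return 0;
}
```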
Abstract:
There is provided an apparatus comprising scheduling circuitry, which selects a task as a selected task to be performed from a plurality of queued tasks, each having an associated priority, in dependence on the associated priority of each queued task. Escalating circuitry increases the associated priority of each of the plurality of queued tasks after a period of time. The plurality of queued tasks comprises a time-sensitive task having an associated deadline, and in response to the associated deadline being reached, the scheduling circuitry selects the time-sensitive task as the selected task to be performed.
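The selection and escalation logic could be sketched as follows; the priority encoding, the escalation step of one, and the queue layout are illustrative assumptions:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

typedef struct {
    int      priority;  /* higher value = selected sooner */
    uint64_t deadline;  /* 0 if the task is not time-sensitive */
} task_t;

/* Escalation: after a period of time, bump every queued task's
 * priority so that long-waiting tasks cannot starve. */
static void escalate(task_t *q, size_t n)
{
    for (size_t i = 0; i < n; i++)
        q[i].priority++;
}

/* Selection: a time-sensitive task whose deadline has been reached is
 * selected outright; otherwise the highest-priority task wins. */
static size_t select_task(const task_t *q, size_t n, uint64_t now)
{
    size_t best = 0;
    for (size_t i = 0; i < n; i++) {
        if (q[i].deadline != 0 && now >= q[i].deadline)
            return i;                     /* deadline reached */
        if (q[i].priority > q[best].priority)
            best = i;
    }
    return best;
}

int main(void)
{
    task_t q[3] = { {5, 0}, {1, 100}, {3, 0} };
    escalate(q, 3);
    printf("selected before deadline: task %zu\n", select_task(q, 3, 50));
    printf("selected after deadline:  task %zu\n", select_task(q, 3, 100));
    return 0;
}
```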
Abstract:
An apparatus and method are provided for opportunistically performing scrubbing operations on a memory device. The apparatus is used for accessing the memory device in response to access requests issued by at least one requesting device and comprises interface circuitry that is configured to access the memory device in response to the access requests. The apparatus also comprises activity monitoring circuitry which generates memory access activity data indicative of the memory access activity between the interface circuitry and the memory device. Scrubbing circuitry is also included and is configured to issue scrubbing access requests to perform the scrubbing operations, the scrubbing access requests being issued in response to the memory access activity data indicating a trigger condition. This allows scrubbing access requests to be issued taking into account the actual memory access activity between the interface circuitry and the memory device, so that scrubbing can be performed opportunistically and the performance cost and system power consumption needed to achieve a particular reliability can be reduced compared with known techniques.
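One plausible trigger condition is a run of idle cycles on the memory interface; the sketch below assumes exactly that, with the IDLE_TRIGGER threshold and the 64-byte scrub granule chosen arbitrarily:

```c
#include <stdint.h>
#include <stdio.h>

#define IDLE_TRIGGER 1000  /* assumed trigger condition: quiet cycles */

static uint32_t idle_cycles;      /* the memory access activity data */
static uint32_t next_scrub_addr;  /* next address to scrub */

/* Called once per cycle by the interface circuitry; 'busy' is nonzero
 * while a normal access is in flight to the memory device. */
static void monitor_cycle(int busy)
{
    idle_cycles = busy ? 0 : idle_cycles + 1;
    if (idle_cycles >= IDLE_TRIGGER) {
        /* Trigger condition met: traffic has been quiet long enough
         * that a scrubbing read can be slipped in opportunistically. */
        printf("scrub access at 0x%08x\n", next_scrub_addr);
        next_scrub_addr += 64;    /* advance by one line */
        idle_cycles = 0;
    }
}

int main(void)
{
    for (int cycle = 0; cycle < 2500; cycle++)
        monitor_cycle(0);         /* an idle bus triggers two scrubs */
    return 0;
}
```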
Abstract:
To protect the integrity of data stored in a protected area of memory, data in the protected area of memory is retrieved in data blocks and an authentication code is associated with a memory granule contiguously comprising a first data block and a second data block. Calculation of the authentication code comprises a cryptographic calculation based on a first hash value determined from the first data block and a second hash value determined from the second data block. A hash value cache is provided to store hash values determined from data blocks retrieved from the protected area of the memory. When the first data block and its associated authentication code are retrieved from memory, a lookup for the second hash value in the hash value cache is performed, and a verification authentication code is calculated for the memory granule to which that data block belongs. The integrity of the first data block is contingent on the verification authentication code matching the retrieved authentication code.
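A toy sketch of the verification flow, using deliberately non-cryptographic placeholder hash and MAC functions (a real design would use proper primitives); the direct-mapped cache geometry and 64-byte blocks are assumptions:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Placeholder hash (FNV-1a): shows the data flow only, not secure. */
static uint64_t hash_block(const uint8_t *blk, size_t len)
{
    uint64_t h = 1469598103934665603ull;
    for (size_t i = 0; i < len; i++) { h ^= blk[i]; h *= 1099511628211ull; }
    return h;
}

/* Placeholder MAC over the granule's two block hashes. */
static uint64_t mac_granule(uint64_t h_first, uint64_t h_second)
{
    return h_first * 31 ^ h_second;
}

/* Tiny direct-mapped hash value cache indexed by block address. */
#define HVC_SIZE 64
static struct { uint64_t addr; uint64_t hash; int valid; } hvc[HVC_SIZE];

static void hvc_insert(uint64_t addr, uint64_t hash)
{
    size_t i = (addr / 64) % HVC_SIZE;
    hvc[i].addr = addr; hvc[i].hash = hash; hvc[i].valid = 1;
}

static int hvc_lookup(uint64_t addr, uint64_t *hash_out)
{
    size_t i = (addr / 64) % HVC_SIZE;
    if (hvc[i].valid && hvc[i].addr == addr) { *hash_out = hvc[i].hash; return 1; }
    return 0;
}

/* Verify a retrieved block: if the sibling block's hash is cached,
 * the granule's verification MAC can be recomputed without re-reading
 * the sibling from memory. Returns 1 on pass, 0 on fail, -1 on miss. */
static int verify_block(const uint8_t *blk, size_t len,
                        uint64_t sibling_addr, uint64_t stored_mac)
{
    uint64_t h1 = hash_block(blk, len), h2;
    if (!hvc_lookup(sibling_addr, &h2))
        return -1;   /* miss: the sibling must be fetched and hashed */
    return mac_granule(h1, h2) == stored_mac;
}

int main(void)
{
    uint8_t first[64] = {1}, second[64] = {2};
    hvc_insert(64, hash_block(second, 64));       /* sibling hash cached */
    uint64_t mac = mac_granule(hash_block(first, 64), hash_block(second, 64));
    printf("verify: %d\n", verify_block(first, 64, 64, mac));   /* 1 */
    return 0;
}
```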
Abstract:
A cache to provide data caching in response to data access requests from at least one system device, and a method of operating such a cache, are provided. Allocation control circuitry of the cache is responsive to a cache miss for a requested data item to allocate one of the multiple entries of the cache's data caching storage circuitry in dependence on a cache allocation policy. Quality-of-service monitoring circuitry is responsive to a quality-of-service indication to modify the cache allocation policy with respect to allocation of the entry for the requested data item. The behaviour of the cache, in particular regarding allocation and eviction, can therefore be modified to seek to maintain a desired quality-of-service for the system in which the cache is found.
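A minimal sketch of how a quality-of-service indication might gate the allocation decision; the two-level QoS signal and the priority-based bypass rule are illustrative assumptions, not the specific policy of the disclosure:

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { QOS_OK, QOS_DEGRADED } qos_t;

/* Placeholder for the quality-of-service indication; a real system
 * might derive this from measured latency or bandwidth counters. */
static qos_t current_qos = QOS_DEGRADED;

/* Cache allocation policy hook: on a miss, allocate by default, but
 * when the QoS indication is degraded, stop allocating entries for
 * low-priority requests so that higher-priority data is not evicted. */
static bool should_allocate(bool high_priority_request)
{
    if (current_qos == QOS_DEGRADED && !high_priority_request)
        return false;   /* bypass the cache rather than evict */
    return true;        /* normal allocate-on-miss behaviour */
}

int main(void)
{
    printf("low-priority miss allocates:  %d\n", should_allocate(false));
    printf("high-priority miss allocates: %d\n", should_allocate(true));
    return 0;
}
```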
Abstract:
An apparatus includes encoding circuitry to generate code words for storage in a memory device. Decoding circuitry is responsive to a read transaction to decode one or more code words read from the memory device in order to generate read data for outputting in response to the read transaction. The decoding circuitry comprises error correction circuitry configured, for each read code word, to perform an error correction process to detect and correct errors in up to P symbols of the code word, where P is dependent on the number of ECC symbols in the code word. Error tracking circuitry determines error quantity indication data indicative of the errors detected by the error correction circuitry, and in response to the error quantity indication data indicating that an error threshold condition has been reached, the apparatus transitions from a normal mode of operation to a safety mode of operation.
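The tracking-and-transition step might look like the following sketch; the threshold value and the accumulate-over-all-code-words policy are assumptions (a real design might count per time window or per memory region):

```c
#include <stdint.h>
#include <stdio.h>

#define ERROR_THRESHOLD 16  /* assumed error threshold condition */

typedef enum { MODE_NORMAL, MODE_SAFETY } op_mode_t;

static op_mode_t mode = MODE_NORMAL;
static uint32_t corrected_total;    /* error quantity indication data */

/* Called after each code word is decoded, with the number of symbols
 * the error correction circuitry had to correct (0..P). */
static void track_errors(unsigned corrected_symbols)
{
    corrected_total += corrected_symbols;
    if (mode == MODE_NORMAL && corrected_total >= ERROR_THRESHOLD) {
        mode = MODE_SAFETY;  /* e.g. flag a fault, restrict operation */
        printf("error threshold reached: entering safety mode\n");
    }
}

int main(void)
{
    for (int word = 0; word < 10; word++)
        track_errors(2);    /* persistent 2-symbol corrections */
    return 0;
}
```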
Abstract:
A method and system for an enhanced weighted fair queuing technique for a resource are provided. A plurality of request streams is received at a requestor, each request stream including request messages from a process executing on the requestor. The request messages of each request stream are apportioned to an input queue associated with that request stream; each input queue has a weight. A virtual finish time is determined for each request message based, at least in part, on the weights of the input queues. A sequence of request messages is determined based, at least in part, on the virtual finish times of the request messages. The sequence of request messages is enqueued into an output queue and sent, over a connection, to a resource which provides a service for each process.
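The virtual finish time here follows the standard weighted fair queuing form F = max(V, F_prev) + size/weight. The sketch below keeps the system virtual time V fixed at zero for simplicity, an assumption that real WFQ implementations relax by advancing V as requests are served:

```c
#include <stdio.h>

#define NUM_QUEUES 3

static double weight[NUM_QUEUES] = { 4.0, 2.0, 1.0 };
static double vfinish[NUM_QUEUES];  /* last virtual finish per queue */
static double vtime;                /* system virtual time, fixed at 0
                                       in this simplified sketch */

/* Stamp a request of 'size' units arriving on queue q with a virtual
 * finish time: it starts no earlier than the queue's previous finish,
 * then runs for 'size' scaled inversely by the queue's weight.
 * Requests are dispatched in increasing virtual-finish-time order. */
static double stamp_request(int q, double size)
{
    double start = vfinish[q] > vtime ? vfinish[q] : vtime;
    vfinish[q] = start + size / weight[q];
    return vfinish[q];
}

int main(void)
{
    /* Two unit-size requests per queue: the weight-4 queue's requests
     * get the earliest finish times and so are sequenced first. */
    for (int q = 0; q < NUM_QUEUES; q++)
        for (int i = 0; i < 2; i++)
            printf("queue %d req %d -> F = %.2f\n",
                   q, i, stamp_request(q, 1.0));
    return 0;
}
```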