Abstract:
A hardware transactional memory 12, 14, 16, 18, 20 is provided within a multiprocessor 4, 6, 8, 10 system with coherency control and hardware transactional memory control circuitry 22 that serves to at least partially manage the scheduling of processing transactions in dependence upon conflict data 26, 28, 30. The conflict data characterizes previously encountered conflicts between processing transactions. The scheduling is performed such that a candidate processing transaction will not be scheduled if the conflict data indicates that one of the already running processing transactions has previously conflicted with the candidate processing transaction.
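To make the scheduling rule concrete, the following is a minimal C sketch, assuming a simple symmetric conflict matrix indexed by transaction identifiers; the data layout, sizes, and function names are illustrative assumptions, not details taken from the abstract.

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_TRANSACTIONS 32

/* Hypothetical conflict data: conflict[a][b] is true if transactions a and b
 * have previously been observed to conflict (layout is illustrative only). */
static bool conflict[MAX_TRANSACTIONS][MAX_TRANSACTIONS];

/* Return true if the candidate transaction may be scheduled, i.e. none of the
 * currently running transactions has previously conflicted with it. */
bool may_schedule(unsigned candidate,
                  const unsigned *running, size_t num_running)
{
    for (size_t i = 0; i < num_running; i++) {
        if (conflict[candidate][running[i]])
            return false;   /* defer: a running transaction previously conflicted */
    }
    return true;
}

/* On an abort caused by a detected conflict, record the pairing so future
 * scheduling decisions avoid running the pair concurrently again. */
void record_conflict(unsigned a, unsigned b)
{
    conflict[a][b] = true;
    conflict[b][a] = true;
}
```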
Abstract:
A cache device is provided for use in a data processing apparatus to store data values for access by an associated master device. Each data value has an associated memory location in a memory device, and the memory device is arranged as a plurality of blocks of memory locations, with each block having to be activated before any data value stored in that block can be accessed. The cache device comprises regular access detection circuitry for detecting occurrence of a sequence of accesses to data values whose associated memory locations follow a regular pattern. Upon detection of such a sequence of accesses by the regular access detection circuitry, an allocation policy employed by the cache to determine a selected cache line into which to store a data value is altered with the aim of increasing the likelihood that, when an evicted data value output by the cache is subsequently written to the memory device, the associated memory location resides within an already activated block of memory locations. Hence, by detecting regular access patterns and altering the allocation policy on detection of such patterns, already activated blocks within the memory device can be reused, significantly improving memory utilization and giving rise to both performance improvements and power consumption reductions.
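As an illustration of the detection step, here is a minimal C sketch of a stride-based detector, assuming a fixed confirmation threshold and a single flag that switches the allocation policy; all names and thresholds are assumptions for the sketch.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stride detector: after STRIDE_CONFIRM consecutive accesses
 * whose addresses differ by the same non-zero stride, a regular pattern is
 * assumed and the cache allocation policy is switched. */
#define STRIDE_CONFIRM 4

typedef struct {
    uint64_t last_addr;
    int64_t  last_stride;
    unsigned run_length;
    bool     regular_pattern;   /* when set, allocation favours lines whose
                                   eviction stays within an activated block */
} stride_detector_t;

void observe_access(stride_detector_t *d, uint64_t addr)
{
    int64_t stride = (int64_t)(addr - d->last_addr);

    if (stride != 0 && stride == d->last_stride) {
        if (++d->run_length >= STRIDE_CONFIRM)
            d->regular_pattern = true;   /* alter the allocation policy */
    } else {
        d->run_length = 0;
        d->regular_pattern = false;      /* revert to the default policy */
    }
    d->last_stride = stride;
    d->last_addr = addr;
}
```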
Abstract:
A data processing apparatus and method are provided for implementing a replacement scheme for entries of a storage unit. The data processing apparatus has processing circuitry for executing multiple program threads including at least one high priority program thread and at least one lower priority program thread. A storage unit is then shared between the multiple program threads and has multiple entries for storing information for reference by the processing circuitry when executing the program threads. A record is maintained identifying for each entry whether the information stored in that entry is associated with a high priority program thread or a lower priority program thread. Replacement circuitry is then responsive to a predetermined event in order to select a victim entry whose stored information is to be replaced. To achieve this, the replacement circuitry performs a candidate generation operation to identify a plurality of randomly selected candidate entries, and then references the record in order to preferentially select as the victim entry a candidate entry whose stored information is associated with a lower priority program thread. This improves the performance of the high priority program thread(s) by preferentially evicting from the storage unit entries associated with lower priority program threads.
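A hedged C sketch of the victim selection described above, assuming a fixed number of randomly generated candidates and a simple per-entry priority record; the fallback choice when every candidate is high priority is an assumption of the sketch.

```c
#include <stdbool.h>
#include <stdlib.h>

#define NUM_ENTRIES    256
#define NUM_CANDIDATES 4

/* Illustrative record: true marks an entry whose stored information is
 * associated with a high priority program thread. */
static bool is_high_priority[NUM_ENTRIES];

/* Candidate generation plus preferential selection: pick a handful of random
 * candidate entries, return the first one owned by a lower priority thread;
 * if all candidates are high priority, fall back to the first candidate. */
unsigned select_victim(void)
{
    unsigned candidates[NUM_CANDIDATES];

    for (unsigned i = 0; i < NUM_CANDIDATES; i++)
        candidates[i] = (unsigned)(rand() % NUM_ENTRIES);

    for (unsigned i = 0; i < NUM_CANDIDATES; i++) {
        if (!is_high_priority[candidates[i]])
            return candidates[i];    /* prefer a lower priority victim */
    }
    return candidates[0];            /* all candidates were high priority */
}
```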
Abstract:
Each of plural processing units has a cache, and each cache has indication circuitry containing segment filtering data. The indication circuitry responds to an address specified by an access request from an associated processing unit by referencing the segment filtering data to indicate, for a given segment of the cache, whether the data is either definitely not stored in that segment or is potentially stored in that segment. Cache coherency circuitry ensures that data accessed by each processing unit is up-to-date and has snoop indication circuitry whose content is derived from the already-provided segment filtering data. For certain access requests, the cache coherency circuitry initiates a coherency operation during which the snoop indication circuitry determines whether any of the caches requires a snoop operation. For each cache that does, the cache coherency circuitry issues a notification to that cache identifying the snoop operation to be performed.
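One plausible realization of segment filtering data is a small per-segment Bloom-filter-style bit vector giving "definitely not present" or "maybe present" answers. The C sketch below assumes that form; the hash, sizes, and names are illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative segment filter: a small bit vector per cache segment, with a
 * bit set when a line is allocated. A clear bit means the data is definitely
 * not in that segment; a set bit means it may be. */
#define NUM_SEGMENTS 4
#define FILTER_BITS  256

static uint8_t segment_filter[NUM_SEGMENTS][FILTER_BITS / 8];

static unsigned filter_hash(uint64_t addr)
{
    return (unsigned)((addr >> 6) % FILTER_BITS);   /* line-granular hash */
}

void filter_insert(unsigned segment, uint64_t addr)
{
    unsigned bit = filter_hash(addr);
    segment_filter[segment][bit / 8] |= (uint8_t)(1u << (bit % 8));
}

/* Used by the snoop indication logic: only segments reporting "maybe
 * present" need a snoop operation. */
bool maybe_present(unsigned segment, uint64_t addr)
{
    unsigned bit = filter_hash(addr);
    return (segment_filter[segment][bit / 8] & (1u << (bit % 8))) != 0;
}
```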
Abstract:
A data processing apparatus and method for generating access requests is provided. A bus master is provided which can operate either in a secure domain or a non-secure domain of the data processing apparatus, according to a signal received from a source external to the bus master. The signal is generated so as to be fixed during normal operation of the bus master. Control logic is provided which, when the bus master device is operating in a secure domain, is operable to generate a domain specifying signal associated with an access request generated by the bus master core, indicating either secure or non-secure access, in dependence on either a default memory map or securely defined memory region descriptors. Thus, the bus master operating in a secure domain can generate both secure and non-secure accesses, without itself being able to switch between secure and non-secure operation.
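A minimal C sketch of how the domain specifying signal might be derived, assuming a simple list of securely defined region descriptors with a secure-by-default fallback; the descriptor layout and the default rule are assumptions, not the patented format.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative region descriptor: an address range plus a flag saying whether
 * accesses within it should be marked non-secure. */
typedef struct {
    uint64_t base;
    uint64_t limit;
    bool     non_secure;
} region_desc_t;

/* While the bus master operates in the secure domain, derive the domain
 * specifying signal for an access from securely defined region descriptors,
 * falling back to a default (secure) mapping when no descriptor matches. */
bool access_is_non_secure(uint64_t addr,
                          const region_desc_t *regions, size_t num_regions)
{
    for (size_t i = 0; i < num_regions; i++) {
        if (addr >= regions[i].base && addr <= regions[i].limit)
            return regions[i].non_secure;
    }
    return false;   /* default memory map: treat unmatched addresses as secure */
}
```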
Abstract:
An integrated circuit 2 includes a transaction master 4 connected via interconnect circuitry 10 to a transaction slave 12. The transaction slave 12 generates a transfer-complete signal (RLAST or B) to indicate completion of a data transfer (either a read or a write). When this transfer-complete signal has been received by the transaction master 4, the transaction master 4 generates a complete-acknowledgement signal RACK, WACK, which is passed back to the transaction slave 12 so as to acknowledge receipt of the transfer-complete signal.
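The handshake can be pictured as a small master-side state machine: wait for the transfer-complete signal, then pulse the acknowledgement back to the slave. The C sketch below assumes that structure; the one-cycle pulse and the state names are illustrative assumptions.

```c
#include <stdbool.h>

/* Illustrative master-side handshake: after the slave signals transfer
 * completion (read-last or write response), the master raises an
 * acknowledgement that is routed back to the slave. */
typedef enum { IDLE, AWAIT_COMPLETE, SEND_ACK } xfer_state_t;

typedef struct {
    xfer_state_t state;
    bool         ack;       /* complete-acknowledgement (RACK/WACK style) */
} master_port_t;

void master_step(master_port_t *m, bool transfer_complete)
{
    switch (m->state) {
    case IDLE:
        m->ack = false;
        m->state = AWAIT_COMPLETE;      /* a transfer has been issued */
        break;
    case AWAIT_COMPLETE:
        if (transfer_complete) {        /* RLAST or B received from the slave */
            m->ack = true;              /* acknowledge receipt */
            m->state = SEND_ACK;
        }
        break;
    case SEND_ACK:
        m->ack = false;                 /* one-cycle acknowledgement pulse */
        m->state = IDLE;
        break;
    }
}
```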
Abstract:
A data processing system in the form of an integrated circuit 2 includes a general purpose programmable processor 4 and a hardware accelerator 6. A shared memory management unit 10 provides memory management operations on behalf of both the processor 4 and the hardware accelerator 6. The processor 4 and the hardware accelerator 6 share a memory system 8. A first communication channel 12 between the processor 4 and the hardware accelerator 6 communicates at least control signals therebetween. A second communication channel 14 coupling the hardware accelerator 6 and the memory system 8 allows the hardware accelerator 6 to perform its own data access operations upon the memory system 8.
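A hedged C sketch of the sharing arrangement, reducing the shared memory management unit to a single-level translation used both by the processor path and by the accelerator's own accesses; the structure and names are assumptions made for illustration.

```c
#include <stdint.h>

/* Illustrative stand-in for the shared memory management unit: one
 * translation structure consulted by both agents, so they observe the same
 * address mappings when accessing the shared memory system. */
typedef struct {
    uint64_t *page_table;    /* virtual page -> physical page */
    unsigned  page_shift;
} shared_mmu_t;

uint64_t mmu_translate(const shared_mmu_t *mmu, uint64_t vaddr)
{
    uint64_t vpage  = vaddr >> mmu->page_shift;
    uint64_t offset = vaddr & ((1ull << mmu->page_shift) - 1);
    return (mmu->page_table[vpage] << mmu->page_shift) | offset;
}

/* Processor accesses and the accelerator's own data accesses both translate
 * through the same shared structure. */
uint64_t cpu_access(const shared_mmu_t *mmu, uint64_t vaddr)
{
    return mmu_translate(mmu, vaddr);
}

uint64_t accelerator_access(const shared_mmu_t *mmu, uint64_t vaddr)
{
    return mmu_translate(mmu, vaddr);
}
```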
Abstract:
A data processing apparatus 2 includes a programmable general purpose processor 10 coupled to a hardware accelerator 12. A memory system 14, 6, 8 is shared by the processor 10 and the hardware accelerator 12. Memory system monitoring circuitry 16 is responsive to one or more predetermined operations performed by the processor 10 upon the memory system 14, 6, 8 to generate a trigger to the hardware accelerator 12 for it to halt its processing operations and clean any data values held as temporary variables within registers 20 of the hardware accelerator back to the memory system 14, 6, 8.
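A minimal C sketch of the halt-and-clean response, assuming a small register file with a known memory home for each temporary variable; the register count and the pointer-based memory model are assumptions of the sketch.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_ACCEL_REGS 16

/* Illustrative accelerator model: temporary variables live in registers and
 * are cleaned back to their memory homes when the monitoring logic observes
 * a predetermined processor operation on the shared memory system. */
typedef struct {
    uint64_t  regs[NUM_ACCEL_REGS];
    uint64_t *spill_addr[NUM_ACCEL_REGS];  /* memory home of each temporary */
    bool      running;
} accelerator_t;

/* Called when the monitoring circuitry raises the trigger. */
void accelerator_halt_and_clean(accelerator_t *acc)
{
    acc->running = false;                      /* halt processing operations */
    for (unsigned i = 0; i < NUM_ACCEL_REGS; i++)
        *acc->spill_addr[i] = acc->regs[i];    /* clean temporaries to memory */
}
```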
Abstract:
A data processing apparatus and method are provided for performing a cache lookup in an energy efficient manner. The data processing apparatus has at least one processing unit for performing operations and a cache having a plurality of cache lines for storing data values for access by that at least one processing unit when performing those operations. The at least one processing unit provides a plurality of sources from which access requests are issued to the cache, and each access request, in addition to specifying an address, further includes a source identifier indicating the source of the access request. A storage element is provided for storing for each source an indication as to whether the last access request from that source resulted in a hit in the cache, and cache line identification logic determines, for each access request, whether that access request is seeking to access the same cache line as the last access request issued by that source. The cache control logic is operable when handling an access request to constrain the lookup procedure to only a subset of the storage blocks within the cache if it is determined that the access request is to the same cache line as the last access request issued by the relevant source, and the storage element indicates that the last access request from that source resulted in a hit in the cache. This yields significant energy savings when accessing the cache.
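The constrained lookup can be sketched as a per-source check: the lookup is narrowed only when the new request falls in the same cache line as the previous request from that source and that previous request hit. The C code below assumes 64-byte lines and a small fixed number of sources; these details are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_SOURCES 4
#define LINE_SHIFT  6       /* 64-byte cache lines assumed */

/* Illustrative per-source state: the line address of the last access from
 * that source and whether it resulted in a hit. */
typedef struct {
    uint64_t last_line;     /* address >> LINE_SHIFT of the previous request */
    bool     last_hit;
} source_state_t;

static source_state_t sources[NUM_SOURCES];

/* Decide whether the lookup can be constrained to the single storage block
 * that hit last time, instead of probing every block in the cache. */
bool can_constrain_lookup(unsigned source_id, uint64_t addr)
{
    uint64_t line = addr >> LINE_SHIFT;
    source_state_t *s = &sources[source_id];

    bool constrain = (line == s->last_line) && s->last_hit;

    s->last_line = line;    /* last_hit is updated once the lookup resolves */
    return constrain;
}
```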
Abstract:
The present invention provides a data processing apparatus and method for handling corrupted data values. The method comprises the steps of: a) accessing a data value in a memory within a data processing apparatus; b) initiating processing of the data value within the data processing apparatus; c) whilst at least one of steps a) and b) is being performed, determining whether the data value accessed is corrupted; and d) when it is determined that the data value is corrupted, disabling an interface used to propagate data values between the data processing apparatus and a device coupled to the data processing apparatus, to prevent propagation of a corrupted data value to the device. When a data value is accessed, the data processing apparatus can begin processing that data value and, hence, the performance of the data processing apparatus is not reduced. If it is determined that the data value which was accessed is corrupted or contains an error, then the interface which couples the data processing apparatus to the device is disabled. Disabling the interface effectively quarantines any corrupted data values by preventing them from being propagated to the device. Preventing corrupted data values from being propagated to the device ensures that no change in state can occur in the device as a result of the corrupted data values.
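A hedged C sketch of the overall flow, using a simple parity check as a stand-in for whatever corruption detection the apparatus actually employs; the interface model, the parity convention, and all names are assumptions made for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative interface model: when disabled, no data is driven to the
 * coupled device, quarantining any corrupted value inside the apparatus. */
typedef struct {
    bool enabled;
} external_interface_t;

/* Stand-in corruption check: odd-parity convention, i.e. the parity bit is
 * set when the data word contains an odd number of ones. */
static bool parity_ok(uint64_t value, bool parity_bit)
{
    unsigned ones = 0;
    for (unsigned i = 0; i < 64; i++)
        ones += (unsigned)((value >> i) & 1u);
    return ((ones & 1u) != 0) == parity_bit;
}

void handle_access(external_interface_t *iface,
                   uint64_t value, bool parity_bit)
{
    /* a) / b): the access and its processing may already be under way here,
     * so performance is not reduced by waiting for the check. */

    /* c): corruption check performed alongside the access/processing */
    if (!parity_ok(value, parity_bit)) {
        /* d): quarantine - prevent propagation to the coupled device */
        iface->enabled = false;
    }
}
```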