Abstract:
A processor 6 is provided with a plurality of hardware resources, such as performance monitors 12 and context pointers 18. Boundary indicating circuitry 14, 20 stores a programmable boundary value which indicates a boundary position dividing the hardware resources into a first portion and a second portion. Resource control circuitry 16, 22 controls access to the hardware resources such that, when program execution circuitry 8 is executing a first program, a query as to how many of said plurality of hardware resources are present returns a first value, whereas when the program execution circuitry is executing a second program such a query returns a value corresponding to those hardware resources within the second portion.
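As an illustration only, the partitioning behaviour described above can be modelled in a few lines of C; the monitor count, the boundary value and the rule that the first program sees every resource are assumptions made for this sketch, not details taken from the abstract.

```c
#include <stdio.h>

#define NUM_MONITORS 8u                 /* total performance monitors        */

static unsigned boundary = 4u;          /* programmable boundary position    */
static int first_program_active = 1;    /* which program is executing        */

/* Respond to "how many monitors are present?" for the current program.    */
unsigned query_monitor_count(void)
{
    if (first_program_active)
        return NUM_MONITORS;            /* first program: the first value    */
    return NUM_MONITORS - boundary;     /* second program: only the second
                                           portion beyond the boundary       */
}

int main(void)
{
    printf("first program sees  %u monitors\n", query_monitor_count());
    first_program_active = 0;
    printf("second program sees %u monitors\n", query_monitor_count());
    return 0;
}
```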
Abstract:
A branch prediction mechanism 16, 18 within a multithreaded processor having hardware scheduling logic 6, 8, 10, 12 uses a shared global history table 18 which is indexed by respective branch history registers 20, 22 for each program thread. Different mappings are used between preceding branch behaviour and the value stored within the respective branch history registers 20, 22. These different mappings may be provided by inverters placed into the shift-in paths of the branch history registers 20, 22, by adders 40, 42, or in some other way. The different mappings help to equalise the probability of use of particular storage locations within the global history table 18, such that the plurality of program threads do not compete excessively for the same storage locations corresponding to the more commonly occurring patterns of preceding branch behaviour.
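A minimal C sketch of the indexing scheme follows; the table size, history length, counter width and the choice of an inverter in one thread's shift-in path (the adders 40, 42 are an alternative mapping) are illustrative assumptions, not details taken from the abstract.

```c
#include <stdint.h>

#define GHT_SIZE  1024u                 /* shared global history table       */
#define HIST_BITS 10u

static uint8_t  ght[GHT_SIZE];          /* 2-bit saturating counters         */
static uint16_t bhr[2];                 /* per-thread branch history regs    */

/* Shift the latest outcome into a thread's history register.  Thread 1
 * inverts the shifted-in bit, giving it a different mapping between
 * preceding branch behaviour and the stored history value.                */
void record_outcome(int thread, int taken)
{
    int bit = (thread == 1) ? !taken : taken;
    bhr[thread] = (uint16_t)(((bhr[thread] << 1) | bit)
                             & ((1u << HIST_BITS) - 1u));
}

/* Both threads index the same shared table with their (differently
 * mapped) history values.                                                 */
int predict_taken(int thread)
{
    return ght[bhr[thread] % GHT_SIZE] >= 2;
}

void train(int thread, int taken)
{
    uint8_t *c = &ght[bhr[thread] % GHT_SIZE];
    if (taken  && *c < 3) (*c)++;
    if (!taken && *c > 0) (*c)--;
    record_outcome(thread, taken);
}
```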
Abstract:
A data processing apparatus is provided wherein processing circuitry executes multiple program threads including at least one high priority thread and at least one lower priority thread. Instructions required by the threads are retrieved from a cache memory hierarchy comprising multiple cache levels. The cache memory hierarchy includes a bypass path for omitting a predetermined level of the cache memory hierarchy when performing a lookup procedure for a required instruction and for bypassing said predetermined level of the cache memory hierarchy when returning said required instruction to said processing circuitry. The bypass path is used by default when the required instruction is for a lower priority thread.
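The sketch below models the bypass behaviour in C under assumed details: tiny direct-mapped levels, a stand-in memory_fetch, and the rule that only high priority fetches look up and fill the bypassed level.

```c
#include <stdbool.h>
#include <stdint.h>

#define LINES 64u

/* A trivially small direct-mapped instruction cache level.                */
typedef struct {
    bool     valid[LINES];
    uint64_t tag[LINES];
    uint32_t insn[LINES];
} level_t;

static level_t l1, l2;                  /* l2 is the bypassed level          */

static bool lookup(level_t *c, uint64_t addr, uint32_t *out)
{
    unsigned i = (unsigned)(addr % LINES);
    if (c->valid[i] && c->tag[i] == addr) { *out = c->insn[i]; return true; }
    return false;
}

static void fill(level_t *c, uint64_t addr, uint32_t insn)
{
    unsigned i = (unsigned)(addr % LINES);
    c->valid[i] = true; c->tag[i] = addr; c->insn[i] = insn;
}

/* Stand-in for a fetch from the next level of the memory system.          */
static uint32_t memory_fetch(uint64_t addr)
{
    return (uint32_t)addr;
}

/* Lower priority threads skip the bypassed level on lookup and on refill. */
uint32_t fetch_instruction(uint64_t addr, bool high_priority)
{
    uint32_t insn;
    if (lookup(&l1, addr, &insn))
        return insn;
    if (high_priority && lookup(&l2, addr, &insn)) {
        fill(&l1, addr, insn);
        return insn;
    }
    insn = memory_fetch(addr);
    if (high_priority)
        fill(&l2, addr, insn);          /* bypassed for lower priority       */
    fill(&l1, addr, insn);
    return insn;
}
```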
Abstract:
A multi-threaded in-order superscalar processor 2 includes an issue stage 12 including issue circuitry 22, 24 for selecting instructions to be issued to execution units 14, 16 in dependence upon a currently selected issue policy. A plurality of different issue policies are provided by associated different policy circuitry 28, 30, 32, and policy selecting circuitry 34 selects which of these instances of the policy circuitry 28, 30, 32 is active in dependence upon detected dynamic behaviour of the processor 2.
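A possible software model of the policy selection is sketched below; the two example policies and the stall-rate threshold used to switch between them are assumptions, not details taken from the abstract.

```c
/* An issue policy picks which of the ready threads issues next.           */
typedef int (*issue_policy_t)(const int ready[], int nthreads);

/* Policy 0: round-robin among ready threads (state kept across calls).    */
int policy_round_robin(const int ready[], int nthreads)
{
    static int last = 0;
    for (int i = 1; i <= nthreads; i++) {
        int t = (last + i) % nthreads;
        if (ready[t]) { last = t; return t; }
    }
    return -1;                          /* nothing ready this cycle          */
}

/* Policy 1: always prefer the lowest-numbered ready thread.               */
int policy_fixed_priority(const int ready[], int nthreads)
{
    for (int t = 0; t < nthreads; t++)
        if (ready[t]) return t;
    return -1;
}

static issue_policy_t policies[] = { policy_round_robin, policy_fixed_priority };

/* Selection driven by observed dynamic behaviour: here a single
 * hypothetical metric (recent stall rate) switches the active policy.     */
issue_policy_t select_policy(unsigned stalls_per_1k_cycles)
{
    return policies[stalls_per_1k_cycles > 200u ? 1 : 0];
}
```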
Abstract:
A data processing apparatus and method are provided for generating prediction data. The data processing apparatus has processing circuitry for performing processing operations including high priority operations and low priority operations, with events occurring during the performance of those processing operations. Prediction circuitry is responsive to a received event to generate prediction data used by the processing circuitry. The prediction circuitry includes a history storage having a plurality of counter entries for storing count values, and index circuitry for identifying, dependent on the received event, at least one counter entry and for causing the history storage to output the count value stored in that at least one counter entry, with the prediction data being derived from the output count value. Update control circuitry is further provided for modifying at least one count value stored in the history storage in response to update data generated by the processing circuitry. The update control circuitry has a priority dependent modification mechanism such that the modification to the at least one count value is dependent on the priority of the processing operation with which that update data is associated. As a result, the prediction data output for a received event associated with a high priority operation is more accurate than the prediction data output for a received event associated with a low priority operation.
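One way to realise a priority dependent modification mechanism is sketched below in C; the hashing of events to counter entries and the choice to apply only a fraction of the low priority updates are illustrative assumptions rather than the claimed mechanism.

```c
#include <stdint.h>

#define ENTRIES   4096u
#define MAX_COUNT 3u                    /* 2-bit saturating counters         */

static uint8_t history[ENTRIES];

/* Index derivation from the received event (the hash is an assumption).   */
static unsigned index_for_event(uint32_t event)
{
    return (event ^ (event >> 12)) % ENTRIES;
}

/* Prediction data is derived from the output count value.                 */
int predict(uint32_t event)
{
    return history[index_for_event(event)] > MAX_COUNT / 2u;
}

/* Priority dependent modification: updates from high priority operations
 * are always applied; updates from low priority operations are applied
 * only every fourth time, so high priority operations dominate the
 * counters and receive the more accurate predictions.                     */
void update(uint32_t event, int outcome, int high_priority)
{
    static unsigned low_priority_updates;
    uint8_t *c = &history[index_for_event(event)];

    if (!high_priority && (++low_priority_updates % 4u) != 0u)
        return;                         /* drop 3 of every 4 LP updates      */

    if (outcome  && *c < MAX_COUNT) (*c)++;
    if (!outcome && *c > 0)         (*c)--;
}
```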
Abstract:
A data processing apparatus and method are provided for implementing a replacement scheme for entries of a storage unit. The data processing apparatus has processing circuitry for executing multiple program threads including at least one high priority program thread and at least one lower priority program thread. A storage unit is then shared between the multiple program threads and has multiple entries for storing information for reference by the processing circuitry when executing the program threads. A record is maintained identifying for each entry whether the information stored in that entry is associated with a high priority program thread or a lower priority program thread. Replacement circuitry is then responsive to a predetermined event in order to select a victim entry whose stored information is to be replaced. To achieve this, the replacement circuitry performs a candidate generation operation to identify a plurality of randomly selected candidate entries, and then references the record in order to preferentially select as the victim entry a candidate entry whose stored information is associated with a lower priority program thread. This improves the performance of the high priority program thread(s) by preferentially evicting from the storage unit entries associated with lower priority program threads.
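The candidate generation and victim selection can be sketched directly; the number of candidates and the fallback when every candidate is high priority are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdlib.h>

#define ENTRIES    512u
#define CANDIDATES 4u                   /* randomly selected candidates      */

/* Record identifying, per entry, whether the stored information belongs
 * to a high priority program thread.                                       */
static bool entry_is_high_priority[ENTRIES];

/* Candidate generation followed by preferential selection of a lower
 * priority victim; if every candidate is high priority, fall back to the
 * first candidate.                                                         */
unsigned select_victim(void)
{
    unsigned candidates[CANDIDATES];

    for (unsigned i = 0; i < CANDIDATES; i++)
        candidates[i] = (unsigned)rand() % ENTRIES;

    for (unsigned i = 0; i < CANDIDATES; i++)
        if (!entry_is_high_priority[candidates[i]])
            return candidates[i];       /* prefer a lower priority victim    */

    return candidates[0];               /* all candidates are high priority  */
}
```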
Abstract:
A data processing apparatus is provided comprising processing circuitry for performing data processing operations, a set associative storage device for storing data values for access by the processing circuitry when performing data processing operations, error detection circuitry for performing, for each access to the storage device, an error detection operation on the data value accessed, and maintenance circuitry associated with the storage device for performing one or more maintenance operations. The processing circuitry is arranged to issue an error detection maintenance request to the maintenance circuitry specifying at least one specific physical location within the storage device, and the maintenance circuitry is responsive to the error detection maintenance request to perform at least one dummy access to the at least one specific physical location within the storage device and to provide the processing circuitry with error status information derived from the error detection operation performed by the error detection circuitry in respect of said at least one dummy access.
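A software model of the dummy access is sketched below, assuming a parity-per-location error detection scheme; the way/set geometry and the parity code are illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>

#define WAYS 4u
#define SETS 256u

/* Each physical location holds a data value and a stored parity bit.      */
static uint32_t data[WAYS][SETS];
static uint8_t  parity[WAYS][SETS];

static uint8_t compute_parity(uint32_t v)
{
    v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return (uint8_t)(v & 1u);
}

/* Error detection maintenance request: perform a dummy access to one
 * specific physical location (way, set) and return error status derived
 * from the error detection operation, without returning the data itself.  */
bool dummy_access_has_error(unsigned way, unsigned set)
{
    uint32_t value = data[way][set];            /* the dummy access          */
    return compute_parity(value) != parity[way][set];
}
```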
Abstract:
A data processing system 2 incorporates a plurality of processing elements 4, 6, 8, 10, 12 which may exchange broadcast maintenance messages, such as local instruction cache invalidation messages. Behaviour modification circuitry disposed either at the request generator, at the request receiver or en route serves to modify the broadcast maintenance requests if one or more predetermined conditions are met. The predetermined conditions may include a message rate being exceeded, a message being redundant due to a preceding message, etc.
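One possible behaviour modification, suppressing a request when the predetermined conditions are met, is sketched below; the rate window and the redundancy test against only the immediately preceding request are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t address;                   /* e.g. line targeted by a local
                                           instruction cache invalidation   */
} maint_request_t;

#define RATE_LIMIT_PER_WINDOW 16u

static unsigned requests_this_window;
static uint64_t last_address;
static bool     have_last;

/* Decide whether a broadcast maintenance request should be sent as-is,
 * or suppressed because a predetermined condition is met (message rate
 * exceeded, or redundant with the immediately preceding request).         */
bool should_broadcast(const maint_request_t *req)
{
    if (requests_this_window >= RATE_LIMIT_PER_WINDOW)
        return false;                   /* message rate exceeded             */

    if (have_last && req->address == last_address)
        return false;                   /* redundant repeat of previous      */

    requests_this_window++;
    last_address = req->address;
    have_last = true;
    return true;
}

/* Called at the end of each rate-limiting window (timing is assumed).     */
void new_window(void)
{
    requests_this_window = 0;
}
```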
Abstract:
An apparatus for processing data 2 is provided with a time-to-digital converter 18 which serves to measure the signal processing delay through one or more signal paths through a processing stage. From these measurements, delay values are generated representing a plurality of instances of the signal processing delay which have been measured. Analysis is performed under software control to estimate a worst case signal processing delay through the processing stage based upon the delay values which have been generated. The operating parameters of the apparatus, such as supply voltage and clock frequency, are then adjusted to provide a timing margin through the processing stage sufficient to satisfy the worst case signal processing delay which has been estimated, without an excessive margin.
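The software-controlled analysis might resemble the following sketch; taking the maximum measured delay plus a fixed 10% margin as the worst case estimate is an assumption for illustration, not the estimation method claimed.

```c
#include <stddef.h>

/* Estimate a worst case signal processing delay from a set of measured
 * delay values; here the estimate is the maximum observed sample plus a
 * small safety margin (the 10% figure is an assumption).                  */
double estimate_worst_case_delay(const double samples_ns[], size_t n)
{
    double max = 0.0;
    for (size_t i = 0; i < n; i++)
        if (samples_ns[i] > max)
            max = samples_ns[i];
    return max * 1.10;
}

/* Pick a clock period that just covers the estimated worst case, so the
 * timing margin is sufficient without being excessive.                    */
double choose_clock_period(const double samples_ns[], size_t n,
                           double min_period_ns)
{
    double worst = estimate_worst_case_delay(samples_ns, n);
    return worst > min_period_ns ? worst : min_period_ns;
}
```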
Abstract:
A data processor processes data in a secure mode, having access to secure data that is not accessible to the data processor when processing data in a non-secure mode. A further processing device performs a task in response to a request from the data processor issued from the non-secure mode. The further processing device includes a secure data store that is not accessible to processes running on the data processor when in the non-secure mode. Prior to issuing requests, the data processor in the secure mode performs a set-up operation on the further processing device, storing secure data in the secure data store. In response to receipt of the request from the data processor operating in the non-secure mode, the further processing device performs the task using data stored in the secure data store to access any secure data required.
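A minimal sketch of the set-up operation and the later non-secure request follows; the 32-byte key store and the XOR stand-in for the real task are purely illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Secure data store inside the further processing device; it is never
 * read by code running on the data processor in the non-secure mode.      */
static uint8_t secure_store[32];
static bool    provisioned;

/* Set-up operation, performed by the data processor while in the secure
 * mode, that places secure data (e.g. a key) into the secure data store.  */
void secure_setup(const uint8_t key[32])
{
    memcpy(secure_store, key, sizeof secure_store);
    provisioned = true;
}

/* Task requested later from the non-secure mode: the further device uses
 * the previously stored secure data on the caller's behalf, without ever
 * exposing it to the non-secure caller.  (The XOR "cipher" is only a
 * stand-in for a real cryptographic operation.)                           */
bool perform_task(const uint8_t *in, uint8_t *out, size_t len)
{
    if (!provisioned)
        return false;                   /* set-up must happen first          */
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ secure_store[i % sizeof secure_store];
    return true;
}
```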