Abstract:
A method and apparatus are provided for prefetching in a data processing system (10). The data processing system (10) has a bus master (14) and a memory controller (16) coupled to a bus (12). A memory (18) is coupled to the memory controller (16). In the data processing system (10), an address is driven onto the bus (12). Before the address is qualified, data corresponding to the address is prefetched. Prefetching the data before the address is qualified allows prefetches to begin sooner.
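The timing idea can be sketched in C as follows; the bus-cycle structure, signal names, and memory model are all invented for illustration and are not taken from the patent:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of the bus cycle seen by the memory controller. */
struct bus_cycle {
    uint32_t address;     /* address driven by the bus master       */
    bool     qualified;   /* asserted later in the cycle            */
};

static uint32_t prefetch_buffer;
static bool     prefetch_valid;

/* Called as soon as an address is observed on the bus, before the
 * qualification signal arrives: start fetching speculatively.       */
void on_address_phase(const struct bus_cycle *cyc, const uint32_t *memory)
{
    prefetch_buffer = memory[cyc->address >> 2]; /* speculative read */
    prefetch_valid  = true;
}

/* Called once the qualification signal is resolved: keep or discard. */
bool on_qualification(const struct bus_cycle *cyc, uint32_t *data_out)
{
    if (cyc->qualified && prefetch_valid) {
        *data_out = prefetch_buffer;  /* data is ready one step earlier */
        return true;
    }
    prefetch_valid = false;           /* unqualified: drop the prefetch */
    return false;
}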
Abstract:
A flexible peripheral access protection mechanism is provided within a data processing system (10, 100). In one embodiment, each master (14, 15) within the data processing system (10) includes a corresponding privilege level modifier (70, 74) and corresponding trust attributes (71, 72, 75, 76) for particular bus access types (e.g., read and write accesses). Also, in one embodiment, each peripheral (22, 24) within the data processing system (10) includes a corresponding trust attribute (80, 84), write protect indicator (81, 85), and privilege protect indicator (82, 86). Therefore, in one embodiment, a bus access by a bus master to a peripheral is allowed when the bus master has the appropriate privilege level and appropriate level of trust required by the peripheral (and the peripheral is not write protected, if the bus access is a write access). Also, through the use of the privilege level modifiers, the bus master can be forced to a particular privilege level for a particular bus access.
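A rough C sketch of the permission check described above; the attribute names and the exact order of the checks are assumptions made for illustration:

#include <stdbool.h>

/* Hypothetical per-master attributes (one set per bus master). */
struct master_attr {
    bool force_user;      /* privilege level modifier: force user mode  */
    bool trusted_read;    /* trust attribute for read accesses          */
    bool trusted_write;   /* trust attribute for write accesses         */
};

/* Hypothetical per-peripheral protection indicators. */
struct periph_attr {
    bool require_trust;   /* trust attribute: only trusted masters      */
    bool write_protect;   /* write protect indicator                    */
    bool require_priv;    /* privilege protect: supervisor access only  */
};

/* Decide whether a single bus access is allowed. */
bool access_allowed(const struct master_attr *m, const struct periph_attr *p,
                    bool is_write, bool master_supervisor)
{
    /* The privilege level modifier may force the master to user privilege. */
    bool supervisor = master_supervisor && !m->force_user;

    if (p->require_priv && !supervisor)
        return false;                     /* insufficient privilege       */
    if (p->write_protect && is_write)
        return false;                     /* peripheral is write-protected */
    if (p->require_trust) {
        bool trusted = is_write ? m->trusted_write : m->trusted_read;
        if (!trusted)
            return false;                 /* master not trusted           */
    }
    return true;
}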
Abstract:
Task context information is transferred concurrently from a processor core to an accelerator and to a context memory. The accelerator performs an operation based on the task context information, and the context memory saves the task context information. The order of the transfers from the processor core is based upon a programmable indicator. During a context restore operation, information is concurrently provided to a data bus from both the accelerator and the processor core.
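A loose software model of the save/restore flow, assuming (for illustration only) a fixed-size context and using the order of two writes to stand in for the programmable transfer order:

#include <stdint.h>
#include <string.h>

#define CONTEXT_WORDS 8   /* hypothetical size of a task context */

struct task_context { uint32_t regs[CONTEXT_WORDS]; };

/* Hypothetical destinations for a context save. */
static struct task_context accelerator_ctx;  /* accelerator's working copy   */
static struct task_context context_memory;   /* saved copy for later restore */

/* Programmable indicator selecting which transfer is modeled first. */
enum xfer_order { ACCEL_FIRST, MEMORY_FIRST };

/* Context save: in hardware both copies come from the same transfer;
 * here the order of the two writes stands in for the programmable order. */
void context_save(const struct task_context *core, enum xfer_order order)
{
    if (order == ACCEL_FIRST) {
        accelerator_ctx = *core;
        context_memory  = *core;
    } else {
        context_memory  = *core;
        accelerator_ctx = *core;
    }
}

/* Context restore: the context is rebuilt from both sources; the split
 * between the two halves below is arbitrary and purely illustrative.     */
void context_restore(struct task_context *core)
{
    *core = context_memory;
    memcpy(core->regs, accelerator_ctx.regs,
           sizeof(uint32_t) * (CONTEXT_WORDS / 2));
}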
Abstract:
A data processing system includes a processor core and ordering scope manager circuitry. The processor core sends an indication of a first ordering scope identifier for a current ordering scope of a first task currently being executed by the processor core and a second ordering scope identifier for a next-in-order ordering scope of the task. The ordering scope manager receives the indication of the first and second ordering scope identifiers from the processor core and provides a no-task-switch indicator to the processor core in response to determining that the first task is a first-in-transition-order task for the first ordering scope identifier and that the processor core is authorized to execute the next-in-order ordering scope. The processor core transitions from executing in the current ordering scope to executing in the next-in-order ordering scope without performing a task switch in response to the no-task-switch indicator being provided.
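The decision the ordering scope manager makes can be sketched as a small C function; the structure fields and the way authorization is represented are assumptions for illustration:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical state the ordering scope manager keeps per ordering scope. */
struct scope_state {
    uint16_t first_in_transition_task;  /* task next allowed to leave scope */
    bool     next_scope_free;           /* core may enter next-in-order scope */
};

/* Assert the no-task-switch indicator only when the reporting task is the
 * first-in-transition-order task for its current scope and the core is
 * authorized to execute in the next-in-order scope.                       */
bool no_task_switch(const struct scope_state *current_scope, uint16_t task_id)
{
    return current_scope->first_in_transition_task == task_id &&
           current_scope->next_scope_free;
}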
Abstract:
Each task assigned to a core can be considered an “active” task. Sequential strobe signals of a watchdog signal are spaced apart in time by a duration that is longer than the expected duration of an active task. Because all monitored tasks are expected to complete within a known expected amount of time, the duration between strobe signals can be set to be longer than that expected amount of time. If a task has not transitioned to inactive by the next strobe, a watchdog error has occurred.
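A minimal C sketch of the strobe check, assuming a hypothetical per-task activity flag and an arbitrary 25% margin on the strobe period:

#include <stdbool.h>
#include <stdint.h>

#define NUM_TASKS 4

/* Per-task activity flags: set when a task is assigned to a core
 * ("active") and cleared when the task completes (hypothetical).        */
static bool task_active[NUM_TASKS];
static bool active_at_last_strobe[NUM_TASKS];

/* Choose the strobe period: longer than the longest expected runtime of
 * any monitored task (the 25% margin is an arbitrary example).          */
uint32_t strobe_period(uint32_t max_expected_task_cycles)
{
    return max_expected_task_cycles + (max_expected_task_cycles / 4);
}

/* On each strobe: a task that was active at the previous strobe and has
 * not transitioned to inactive by this strobe has overrun its expected
 * duration, so a watchdog error is reported.                            */
bool watchdog_strobe(void)
{
    bool error = false;
    for (int i = 0; i < NUM_TASKS; i++) {
        if (active_at_last_strobe[i] && task_active[i])
            error = true;
        active_at_last_strobe[i] = task_active[i];
    }
    return error;
}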
Abstract:
A floating point value can represent a number or something that is not a number (NaN). A floating point value that is a NaN can have a data field that stores information, such as a propagation count that indicates the number of times the NaN value has been propagated through instructions. A NaN evaluation instruction can determine whether one or more operands are NaN operands of a particular type, and if so can generate a result that is a NaN of a different type. An exception can be generated based upon the NaN of the different type being provided as a result.
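One way to picture a propagation count carried in a NaN's data field, sketched in C with an assumed 8-bit count packed into the low mantissa bits; the field layout and the evaluation rule are illustrative, not the patented encoding:

#include <math.h>
#include <stdint.h>
#include <string.h>

/* A NaN leaves most mantissa bits free to carry information; here a
 * hypothetical 8-bit propagation count occupies bits [7:0].             */
#define NAN_COUNT_MASK 0xFFu

static uint32_t float_bits(float f) { uint32_t u; memcpy(&u, &f, sizeof u); return u; }
static float bits_float(uint32_t u) { float f; memcpy(&f, &u, sizeof f); return f; }

/* Read the propagation count stored in a NaN's data field. */
uint8_t nan_propagation_count(float f)
{
    return isnan(f) ? (uint8_t)(float_bits(f) & NAN_COUNT_MASK) : 0;
}

/* Model of one evaluation step: if an operand is a NaN, propagate it with
 * its count incremented so downstream code can see how far it travelled. */
float nan_evaluate(float a, float b)
{
    if (isnan(a) || isnan(b)) {
        uint32_t bits  = float_bits(isnan(a) ? a : b);
        uint8_t  count = (uint8_t)((bits & NAN_COUNT_MASK) + 1);
        return bits_float((bits & ~NAN_COUNT_MASK) | count);
    }
    return a + b;  /* ordinary arithmetic when no NaN operand is present */
}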
Abstract:
A data processing system includes a processor core and a hardware module. The processor core performs tasks on data packets. The hardware module, an ordering scope manager, stores a first ordering scope identifier at a first storage location. The first ordering scope identifier indicates a first ordering scope that a first task is operating in. The ordering scope manager increments the first ordering scope identifier to create a new ordering scope identifier. In response to determining that the processor core is authorized to transition the first task from the first ordering scope to a second ordering scope associated with the new ordering scope identifier, the ordering scope manager provides hint information to the processor core. The processor core transitions from the first ordering scope to the second ordering scope without completing a task switch in response to the hint information.
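A small C sketch of the increment-and-hint step, with the storage entry layout and the authorization input assumed for illustration:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-task entry in the ordering scope manager's storage. */
struct os_entry {
    uint32_t ordering_scope_id;   /* scope the task is operating in */
};

/* Increment the stored identifier to form the next-in-order scope and,
 * if the transition is authorized, return a hint so the core can move
 * to the new scope without a task switch.                              */
bool advance_ordering_scope(struct os_entry *entry, bool transition_authorized,
                            uint32_t *hint_scope_out)
{
    uint32_t next_scope = entry->ordering_scope_id + 1;  /* new identifier */

    if (!transition_authorized)
        return false;              /* core must perform a normal task switch */

    entry->ordering_scope_id = next_scope;
    *hint_scope_out = next_scope;  /* hint: transition in place, no switch   */
    return true;
}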
Abstract:
A method includes: decoding an instruction a first time to obtain a first decoded instruction; decoding the instruction a second time to obtain a second decoded instruction; comparing at least a portion of the first decoded instruction to at least a portion of the second decoded instruction; and, when the compared portions match, executing the instruction.
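The method maps naturally onto a small C routine; the decoded-instruction layout and field positions below are invented for illustration:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical decoded form of an instruction. */
struct decoded {
    uint8_t  opcode;
    uint8_t  dest_reg;
    uint8_t  src_reg;
    uint32_t immediate;
};

/* Stand-in for the instruction decode stage (field layout is invented). */
static void decode(uint32_t raw, struct decoded *out)
{
    out->opcode    = (uint8_t)(raw >> 24);
    out->dest_reg  = (uint8_t)((raw >> 20) & 0xFu);
    out->src_reg   = (uint8_t)((raw >> 16) & 0xFu);
    out->immediate = raw & 0xFFFFu;
}

/* Decode the instruction twice and allow execution only when the two
 * decode results match, catching a transient fault in the decoder.      */
bool safe_to_execute(uint32_t raw_instruction)
{
    struct decoded first, second;

    decode(raw_instruction, &first);   /* first decode  */
    decode(raw_instruction, &second);  /* second decode */

    return first.opcode    == second.opcode &&
           first.dest_reg  == second.dest_reg &&
           first.src_reg   == second.src_reg &&
           first.immediate == second.immediate;
}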
Abstract:
Threads may be scheduled to be executed by one or more cores depending upon whether it is more desirable to minimize power or to maximize performance. If minimum power is desired, threads may be scheduled so that active devices are most shared; this minimizes the number of active devices at the expense of performance. On the other hand, if maximum performance is desired, threads may be scheduled so that active devices are least shared. As a result, threads have more active devices to themselves, resulting in greater performance at the expense of additional power consumption. Thread affinity with a core may also be taken into consideration when scheduling threads in order to improve the power consumption and/or performance of an apparatus.
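A simple C sketch of the placement heuristic, assuming per-core thread counts as the only input; a real scheduler would also weigh thread affinity, as noted above:

#include <stdbool.h>

#define NUM_CORES 4

/* Hypothetical per-core load: number of threads currently placed there. */
static int core_load[NUM_CORES];

/* Pick a core for a new thread.  For minimum power, pack threads onto the
 * most-loaded core so active devices are most shared; for maximum
 * performance, spread threads onto the least-loaded core so active
 * devices are least shared.                                              */
int pick_core(bool minimize_power)
{
    int best = 0;
    for (int c = 1; c < NUM_CORES; c++) {
        if (minimize_power) {
            if (core_load[c] > core_load[best])   /* prefer most sharing  */
                best = c;
        } else {
            if (core_load[c] < core_load[best])   /* prefer least sharing */
                best = c;
        }
    }
    core_load[best]++;
    return best;
}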
Abstract:
A method includes initializing a counter value of a hardware counter. The method further includes iteratively adjusting the counter value and storing an initialization value to a memory location using a memory address based on the counter value. The method also includes generating an interrupt request based on a comparison of the counter value to a waitpoint value, concurrently with the iterative adjusting and storing. A memory device includes a memory array and an initialization module. The initialization module includes a counter, a register to store a waitpoint value, write logic configured to write an initialization value to a memory location of the memory array associated with a memory address that is based on a counter value of the counter, and interrupt logic configured to generate an interrupt request based on a comparison of the counter value of the counter to the waitpoint value.
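A software model of the initialization module's behavior, assuming a word-addressed array and folding the concurrent comparison into the same loop iteration for illustration:

#include <stdbool.h>
#include <stdint.h>

#define MEM_WORDS 1024

static uint32_t memory_array[MEM_WORDS];
static bool     interrupt_requested;

/* Model of the initialization module: the counter walks the address range,
 * the initialization value is written at the address derived from the
 * counter, and an interrupt request is generated when the counter reaches
 * the programmed waitpoint value.                                         */
void initialize_memory(uint32_t init_value, uint32_t waitpoint)
{
    for (uint32_t counter = 0; counter < MEM_WORDS; counter++) {
        memory_array[counter] = init_value;   /* address based on counter */

        /* In hardware the comparison runs concurrently with the write;
         * here it simply follows each write.                             */
        if (counter == waitpoint)
            interrupt_requested = true;       /* generate interrupt request */
    }
}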