Abstract:
A design structure for performing cacheline polling utilizing a store-and-reserve instruction is disclosed. In accordance with one embodiment of the present invention, a first process initially requests an action to be performed by a second process. A reservation is set at a cacheable memory location via a store operation. The first process reads the cacheable memory location via a load operation to determine whether or not the requested action has been completed by the second process. The load operation of the first process is stalled until the reservation on the cacheable memory location is lost. After the requested action has been completed, the reservation on the cacheable memory location is reset by the second process.
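The abstract itself contains no code. Purely as an illustration, the C11 sketch below models the described handshake with standard atomics and threads: an ordinary store stands in for the store-and-reserve instruction, and a spin-wait stands in for the load that the hardware would stall until the reservation is lost. All names and the flag values are invented for this model.

```c
/* Functional model of the cacheline-polling handshake described above.
 * Standard C11 atomics and a spin-wait stand in for the store-and-reserve
 * instruction and the stalled load; names and values are illustrative only. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define FLAG_BUSY 1   /* "reservation held": requested action not yet done */
#define FLAG_DONE 0   /* "reservation lost": second process has finished   */

static atomic_int buffer_flag = FLAG_DONE;

static void *second_process(void *arg)
{
    (void)arg;
    /* ... perform the requested action here ... */
    /* Completing store: in the described mechanism this store clears the
     * reservation, which releases the first process's stalled load. */
    atomic_store_explicit(&buffer_flag, FLAG_DONE, memory_order_release);
    return NULL;
}

int main(void)
{
    pthread_t worker;

    /* First process: "store and reserve" modeled as an ordinary store;
     * the real instruction would also set a reservation on the line. */
    atomic_store_explicit(&buffer_flag, FLAG_BUSY, memory_order_release);

    pthread_create(&worker, NULL, second_process, NULL); /* request action */

    /* Modeled as a spin; the described mechanism instead stalls this load
     * in hardware until the reservation on the cacheline is lost. */
    while (atomic_load_explicit(&buffer_flag, memory_order_acquire) == FLAG_BUSY)
        ;

    pthread_join(worker, NULL);
    printf("requested action completed\n");
    return 0;
}
```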
Abstract:
A system comprises a memory module configured to store signed page table data and a selected processing element coupled to the memory module. The selected processing element is one of a plurality of processing elements, which together comprise a portion of a multiprocessor system. The selected processing element is configured to authenticate page table management code and, based on the authenticated page table management code, to sign page table data that is subsequently stored in the memory module, and to verify signed page table data that is read from the memory module.
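As a rough illustration of the sign-on-write / verify-on-read flow for a page table entry, the C sketch below attaches a keyed checksum to each entry before it is stored and checks it when the entry is read back. The keyed FNV-style checksum is only a stand-in for whatever cryptographic MAC the processing element would actually compute, and the entry layout and names are invented.

```c
/* Illustrative model of signing page table data before it is stored and
 * verifying it when it is read back.  The keyed 64-bit checksum is only a
 * placeholder for a real MAC; entry layout and names are hypothetical. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t virt_page;
    uint64_t phys_page;
    uint64_t flags;
    uint64_t signature;       /* computed over the three fields above */
} signed_pte_t;

static uint64_t keyed_checksum(const void *data, size_t len, uint64_t key)
{
    const uint8_t *p = data;
    uint64_t h = key ^ 0xcbf29ce484222325ull;   /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 0x100000001b3ull;                  /* FNV prime */
    }
    return h;
}

/* Sign page table data before it is written to the memory module. */
static void pte_sign(signed_pte_t *pte, uint64_t key)
{
    pte->signature = keyed_checksum(pte, offsetof(signed_pte_t, signature), key);
}

/* Verify page table data after it is read from the memory module. */
static int pte_verify(const signed_pte_t *pte, uint64_t key)
{
    return pte->signature ==
           keyed_checksum(pte, offsetof(signed_pte_t, signature), key);
}

int main(void)
{
    uint64_t key = 0x1234abcd5678ef90ull;       /* held by the processing element */
    signed_pte_t pte = { .virt_page = 0x42, .phys_page = 0x99, .flags = 0x3 };

    pte_sign(&pte, key);
    printf("verify (untampered): %d\n", pte_verify(&pte, key));

    pte.phys_page = 0x100;                      /* simulate tampering in memory */
    printf("verify (tampered):   %d\n", pte_verify(&pte, key));
    return 0;
}
```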
Abstract:
A design structure for performing cacheline polling utilizing store-and-reserve and load-when-reservation-lost instructions is disclosed. In one embodiment, a method is provided which comprises storing a buffer flag busy indicator data value within a first cacheable memory location and setting a load/store operation reservation on said first cacheable memory location via a store-and-reserve instruction. In the described embodiment, a data value stored within the first cacheable memory location is accessed via a conditional load instruction in response to a determination that the load/store operation reservation on the first cacheable memory location has been reset. Conversely, execution of the conditional load instruction is stalled in response to a determination that the load/store operation reservation on the first cacheable memory location has not been reset.
Abstract:
A system, method, and computer-usable medium for an isolated process to control address translation. According to a preferred embodiment of the present invention, an isolation region that is accessible only to a first processing unit in a data processing system is created. A loader is executed to load a secure process into the isolation region. If the secure process is determined to be allowed to issue real mode direct memory access commands, real mode direct memory access is enabled so that the secure process can issue non-translated direct memory access commands.
Abstract:
A system for communicating command parameters between a processor and a memory flow controller is provided. The system makes use of a channel interface as the primary mechanism for communicating between the processor and the memory flow controller. The channel interface provides channels for communicating with processor facilities, memory flow control facilities, machine state registers, and external processor interrupt facilities, for example. These channels may be designated as blocking or non-blocking. With blocking channels, when no data is available to be read from the corresponding registers, or there is no space available to write to the corresponding registers, the processor is placed in a low-power "stall" state. The processor is automatically awakened, via communication across the blocking channel, when data becomes available or space is freed. Thus, the channels of the present invention permit the processor to stay in a low-power state while it waits.
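As an illustration only, the following C sketch models the blocking-channel semantics with a mutex and condition variable: a read blocks ("stalls") while the channel is empty and a write blocks while it is full, and each side wakes the other when data or space becomes available. The single-entry channel, the thread names, and the command value are invented; nothing here is the actual channel hardware.

```c
/* Model of a blocking channel: reads block while the channel is empty and
 * writes block while it is full, standing in for the low-power "stall" and
 * wake-up behavior described above.  Single-entry channel; names invented. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    uint32_t        data;
    int             full;   /* 1 if data is waiting to be read */
} blocking_channel_t;

static void channel_init(blocking_channel_t *ch)
{
    pthread_mutex_init(&ch->lock, NULL);
    pthread_cond_init(&ch->cond, NULL);
    ch->full = 0;
}

/* Write blocks until there is space in the channel. */
static void channel_write(blocking_channel_t *ch, uint32_t value)
{
    pthread_mutex_lock(&ch->lock);
    while (ch->full)
        pthread_cond_wait(&ch->cond, &ch->lock);   /* "stall" */
    ch->data = value;
    ch->full = 1;
    pthread_cond_signal(&ch->cond);                /* wake a blocked reader */
    pthread_mutex_unlock(&ch->lock);
}

/* Read blocks until data is available. */
static uint32_t channel_read(blocking_channel_t *ch)
{
    pthread_mutex_lock(&ch->lock);
    while (!ch->full)
        pthread_cond_wait(&ch->cond, &ch->lock);   /* "stall" */
    uint32_t value = ch->data;
    ch->full = 0;
    pthread_cond_signal(&ch->cond);                /* wake a blocked writer */
    pthread_mutex_unlock(&ch->lock);
    return value;
}

static blocking_channel_t cmd_channel;

static void *mfc_side(void *arg)
{
    (void)arg;
    printf("MFC received command parameter: %u\n", channel_read(&cmd_channel));
    return NULL;
}

int main(void)
{
    pthread_t mfc;
    channel_init(&cmd_channel);
    pthread_create(&mfc, NULL, mfc_side, NULL);
    channel_write(&cmd_channel, 0xABCD);   /* processor writes a command parameter */
    pthread_join(mfc, NULL);
    return 0;
}
```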
Abstract:
Mechanisms for extracting data dependencies during runtime are provided. The mechanisms execute a portion of code having a loop and generate, for the loop, a first parallel execution group comprising a subset of the loop's iterations that is smaller than the total number of iterations. The mechanisms further execute the first parallel execution group and determine, for each iteration in the subset of iterations, whether the iteration has a data dependence. Moreover, the mechanisms commit store data to system memory only for stores performed by iterations in the subset for which no data dependence is determined. Store data from iterations in the subset for which a data dependence is determined is not committed to the system memory.
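A minimal C sketch of the group-commit idea follows: a small group of iterations runs speculatively with stores buffered, each iteration's addresses are checked against the earlier iterations in the group, and only stores from iterations with no detected dependence are committed. The loop shape, group size, index arrays, and all names are invented for illustration and are not taken from the abstract.

```c
/* Minimal model of runtime dependence extraction with group commit:
 * iterations in a small parallel execution group buffer their stores,
 * each iteration is checked against earlier iterations in the group,
 * and only the stores of dependence-free iterations are committed. */
#include <stdio.h>

#define N      8
#define GROUP  4

int main(void)
{
    int a[16]  = {0};
    int dst[N] = {1, 3, 5, 3, 7, 9, 11, 9};   /* iterations 3 and 7 collide */
    int src[N] = {0, 2, 4, 6, 8, 10, 12, 14};

    for (int base = 0; base < N; base += GROUP) {
        int buffered[GROUP];          /* store data, not yet in "system memory" */
        int has_dep[GROUP] = {0};

        /* Execute the group; detect cross-iteration dependences. */
        for (int k = 0; k < GROUP; k++) {
            int i = base + k;
            buffered[k] = a[src[i]] + 1;              /* speculative execution */
            for (int j = 0; j < k; j++)               /* earlier iterations    */
                if (src[i] == dst[base + j] || dst[i] == dst[base + j])
                    has_dep[k] = 1;                   /* data dependence found */
        }

        /* Commit stores only for dependence-free iterations; dependent
         * iterations would be re-executed sequentially in a later pass. */
        for (int k = 0; k < GROUP; k++)
            if (!has_dep[k])
                a[dst[base + k]] = buffered[k];
            else
                printf("iteration %d deferred (dependence)\n", base + k);
    }
    return 0;
}
```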
Abstract:
A mechanism for programming a direct memory access engine operating as a multithreaded processor is provided. A plurality of programs is received from a host processor into a local memory associated with the direct memory access engine. A request is then received in the direct memory access engine from the host processor indicating that the plurality of programs located in the local memory is to be executed. The direct memory access engine executes two or more of the plurality of programs without intervention by the host processor. As each of the two or more programs completes execution, the direct memory access engine sends a completion notification to the host processor indicating that the program has completed execution.
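As a rough model only, the sketch below treats each "program" as a short list of copy commands placed in the engine's local memory; one call to the engine executes every program without further host involvement and posts a per-program completion notification. The structures, sizes, and the flag-based notification are all hypothetical.

```c
/* Toy model of a DMA engine executing a list of programs from local memory
 * and notifying the host as each one completes.  Structures, sizes, and the
 * notification mechanism are hypothetical. */
#include <stdio.h>
#include <string.h>

typedef struct {              /* one DMA command: copy len bytes src -> dst */
    void  *dst;
    void  *src;
    size_t len;
} dma_cmd_t;

typedef struct {              /* one "program": a short list of commands */
    dma_cmd_t cmds[4];
    int       num_cmds;
} dma_program_t;

/* Host-visible completion notifications (modeled as a simple flag array). */
static int completion[8];

static void dma_engine_run(dma_program_t *local_memory, int num_programs)
{
    for (int p = 0; p < num_programs; p++) {
        for (int c = 0; c < local_memory[p].num_cmds; c++)
            memcpy(local_memory[p].cmds[c].dst,
                   local_memory[p].cmds[c].src,
                   local_memory[p].cmds[c].len);
        completion[p] = 1;    /* notify host: program p has completed */
        printf("program %d complete\n", p);
    }
}

int main(void)
{
    char srcbuf[16] = "hello, dma";
    char dstbuf[16] = {0};

    /* Host places two single-command programs in the engine's local memory
     * and then issues one "execute" request for the whole list. */
    dma_program_t local_memory[2] = {
        { .cmds = {{ dstbuf,     srcbuf,     6 }}, .num_cmds = 1 },
        { .cmds = {{ dstbuf + 6, srcbuf + 6, 5 }}, .num_cmds = 1 },
    };
    dma_engine_run(local_memory, 2);
    printf("dst = \"%s\"\n", dstbuf);
    return 0;
}
```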
Abstract:
A mechanism is provided for efficiently managing the operation of a translation buffer. The mechanism is utilized to pre-load a translation buffer to prevent the poor performance that results from slow warming of a cache. A software pre-load mechanism may be provided for preloading a translation lookaside buffer (TLB) via a hardware-implemented controller. Following preloading of the TLB, control of accessing the TLB may be handed over to the hardware-implemented controller. Upon an application context switch operation, the software pre-load mechanism may be utilized again to preload the TLB with new translation information for the newly active application instance.
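The following minimal C sketch illustrates the pre-load idea: on a context switch, software fills a small software-modeled TLB with the incoming application's translations before lookups are handed back to a (here simulated) hardware path. The TLB size, entry layout, and all names are invented for this model.

```c
/* Illustrative model of software pre-loading a TLB on a context switch.
 * The TLB here is a plain array; the entry format, the size, and the
 * "hardware" lookup are simulated, and all names are invented. */
#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 4

typedef struct {
    uint64_t vpn;      /* virtual page number   */
    uint64_t pfn;      /* physical frame number */
    int      valid;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Software pre-load: fill the TLB with the new context's hottest
 * translations so the first accesses after the switch do not miss. */
static void tlb_preload(const tlb_entry_t *translations, int count)
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        tlb[i] = (i < count) ? translations[i] : (tlb_entry_t){0};
}

/* After preloading, lookups are handled by the (simulated) hardware path. */
static int tlb_lookup(uint64_t vpn, uint64_t *pfn)
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pfn = tlb[i].pfn;
            return 1;                      /* hit */
        }
    return 0;                              /* miss: a hardware walk would run */
}

int main(void)
{
    tlb_entry_t app_translations[] = {     /* new application's working set */
        { .vpn = 0x10, .pfn = 0xA0, .valid = 1 },
        { .vpn = 0x11, .pfn = 0xB4, .valid = 1 },
    };
    uint64_t pfn;

    tlb_preload(app_translations, 2);      /* performed on the context switch */
    printf("vpn 0x11 -> %s\n", tlb_lookup(0x11, &pfn) ? "hit" : "miss");
    printf("vpn 0x20 -> %s\n", tlb_lookup(0x20, &pfn) ? "hit" : "miss");
    return 0;
}
```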
Abstract:
A method, system, apparatus, and article of manufacture for performing cacheline polling utilizing store-and-reserve and load-when-reservation-lost instructions is disclosed. In one embodiment, a method is provided which comprises storing a buffer flag busy indicator data value within a first cacheable memory location and setting a load/store operation reservation on said first cacheable memory location via a store-and-reserve instruction. In the described embodiment, a data value stored within the first cacheable memory location is accessed via a conditional load instruction in response to a determination that the load/store operation reservation on the first cacheable memory location has been reset. Conversely, execution of the conditional load instruction is stalled in response to a determination that the load/store operation reservation on the first cacheable memory location has not been reset.
Abstract:
An apparatus and method for efficient communication of producer/consumer buffer status are provided. With the apparatus and method, devices in a data processing system use their signal notification channels to notify each other of updates to the head and tail pointers of a shared buffer region whenever they perform operations on that region. Thus, when a producer device writes data to the shared buffer region, an update to the head pointer is written to a signal notification channel of a consumer device. When a consumer device reads data from the shared buffer region, the consumer device writes a tail pointer update to a signal notification channel of the producer device. In addition, channels may operate in a blocking mode so that the corresponding device is kept in a low-power state until an update is received over the channel.
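A compact C sketch of the pointer-update protocol over a shared ring buffer follows: after writing data the producer posts its new head pointer to the consumer's notification word, and after reading data the consumer posts its new tail pointer to the producer's notification word. Atomics and spin-waits stand in for the signal notification channels and their blocking mode; the ring size, item count, and names are invented.

```c
/* Model of producer/consumer buffer status exchange: after writing data the
 * producer posts its new head pointer to the consumer's "signal notification
 * channel", and after reading data the consumer posts its new tail pointer to
 * the producer's channel.  Atomics stand in for the channels; names invented. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define RING_SIZE 8
#define ITEMS     16

static int ring[RING_SIZE];
static atomic_uint consumer_sig_head;   /* consumer's channel: latest head */
static atomic_uint producer_sig_tail;   /* producer's channel: latest tail */

static void *producer(void *arg)
{
    (void)arg;
    unsigned head = 0;
    for (int i = 0; i < ITEMS; i++) {
        /* Wait for space: head may run at most RING_SIZE ahead of tail.
         * The spin models a blocking read of the producer's channel. */
        while (head - atomic_load(&producer_sig_tail) == RING_SIZE)
            ;
        ring[head % RING_SIZE] = i;
        head++;
        atomic_store(&consumer_sig_head, head);        /* notify consumer */
    }
    return NULL;
}

int main(void)
{
    pthread_t prod;
    unsigned tail = 0;
    pthread_create(&prod, NULL, producer, NULL);

    while (tail < ITEMS) {
        /* Wait for data: the spin models a blocking read of the consumer's
         * channel until the producer's head passes our tail. */
        while (atomic_load(&consumer_sig_head) == tail)
            ;
        printf("consumed %d\n", ring[tail % RING_SIZE]);
        tail++;
        atomic_store(&producer_sig_tail, tail);        /* notify producer */
    }
    pthread_join(prod, NULL);
    return 0;
}
```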