Abstract:
A design structure for a livelock resolution circuit is provided. When a bus unit detects a timeout condition, or a potential timeout condition, the bus unit activates a livelock resolution request signal. A livelock resolution unit receives livelock resolution requests from the bus units and signals an attention to a control processor. The control processor performs actions to attempt to resolve the livelock condition. Once a bus unit that issued a livelock resolution request successfully issues its command, it deactivates its livelock resolution request. If all livelock resolution request signals are deactivated, the control processor instructs the bus and all bus units to resume normal activity. If, on the other hand, a predetermined amount of time passes without any progress being made, the control processor determines that a hang condition has occurred.
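The following C sketch models this handshake behaviorally: bus units raise request flags, and a control-processor loop keeps applying resolution actions until every flag is deactivated or a time budget expires. All names (bus_unit_t, control_processor, HANG_THRESHOLD) are illustrative assumptions, not part of the design structure itself.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_BUS_UNITS   4
#define HANG_THRESHOLD  1000   /* stands in for the "predetermined amount of time" */

typedef struct {
    bool resolution_request;   /* raised on a (potential) timeout,
                                  cleared once the unit's command succeeds */
} bus_unit_t;

static bool any_request_active(const bus_unit_t u[], int n)
{
    for (int i = 0; i < n; i++)
        if (u[i].resolution_request)
            return true;
    return false;
}

/* Control processor: keep applying resolution actions while any request
 * is active; if nothing clears within the time budget, declare a hang. */
void control_processor(bus_unit_t units[], int n)
{
    int waited = 0;
    while (any_request_active(units, n)) {
        /* resolution actions (e.g. throttling or reordering bus traffic)
         * would be applied here; the requesting units clear their flags
         * asynchronously once their commands succeed */
        if (++waited > HANG_THRESHOLD) {
            printf("no progress: hang condition detected\n");
            return;
        }
    }
    printf("all requests deactivated: resume normal bus activity\n");
}
```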
Abstract:
Mechanisms are provided for decoding context-adaptive binary arithmetic coding (CABAC) encoded data. The mechanisms receive, in a first single instruction multiple data (SIMD) vector register of the data processing system, CABAC encoded data of a bit stream. The CABAC encoded data comprises a value to be decoded and bit stream state information. The mechanisms receive, in a second SIMD vector register of the data processing system, CABAC decoder context information. The mechanisms process the value, the bit stream state information, and the CABAC decoder context information in a non-recursive manner to generate a decoded value, updated bit stream state information, and updated CABAC decoder context information. The mechanisms store, in a third SIMD vector register, a result vector that combines the decoded value, updated bit stream state information, and updated CABAC decoder context information. The mechanisms use the decoded value to generate a video output on the data processing system.
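As a hedged illustration of the three-register dataflow, the C sketch below decodes a single bypass (equiprobable) CABAC bin, which needs no context tables; the struct layouts, field names, and the choice of bypass mode are assumptions made for brevity, not the mechanism's actual register formats.

```c
#include <stdint.h>

typedef struct {                 /* contents of the first SIMD register     */
    uint32_t offset;             /* value to be decoded (codIOffset)        */
    uint32_t range;              /* bit stream state (codIRange)            */
    const uint8_t *bits;         /* bit stream state: remaining source bits */
    uint32_t bit_pos;
} cabac_stream_t;

typedef struct {                 /* contents of the second SIMD register    */
    uint8_t state_idx;           /* probability state (unused in bypass)    */
    uint8_t mps;                 /* most probable symbol (unused in bypass) */
} cabac_context_t;

typedef struct {                 /* contents of the third SIMD register     */
    uint32_t bin;                /* decoded value                           */
    cabac_stream_t stream;       /* updated bit stream state                */
    cabac_context_t context;     /* updated decoder context                 */
} cabac_result_t;

static uint32_t read_bit(cabac_stream_t *s)
{
    uint32_t b = (s->bits[s->bit_pos >> 3] >> (7 - (s->bit_pos & 7))) & 1;
    s->bit_pos++;
    return b;
}

/* One non-recursive decode step: two input "registers" in, one combined
 * result vector out, with no data-dependent recursion. */
cabac_result_t cabac_decode_bypass(cabac_stream_t s, cabac_context_t c)
{
    cabac_result_t r = { .stream = s, .context = c };
    r.stream.offset = (r.stream.offset << 1) | read_bit(&r.stream);
    if (r.stream.offset >= r.stream.range) {   /* decoded a 1 */
        r.bin = 1;
        r.stream.offset -= r.stream.range;
    } else {                                   /* decoded a 0 */
        r.bin = 0;
    }
    return r;
}
```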
Abstract:
A mechanism is provided for configuring offline player behavior within a persistent world game. A player agent for an offline player includes an event monitor that monitors for events that occur in a persistent virtual world maintained by a game server. When a game event occurs that triggers an offline player rule, the player agent may generate game events on behalf of the offline player. The player agent may also receive messages from an offline player. The messages may include commands for adding, removing, or editing offline player rules. A message may also include a command to view a list of rules or fire a one-time execution of a rule upon receipt. Therefore, a player may contribute to the persistent virtual world even when offline by sending commands using a messaging client or Web browser.
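A minimal sketch of such a player agent is shown below; the rule structure, the on_world_event/on_player_message entry points, and the fixed rule table size are assumptions for illustration, not the mechanism's actual interfaces.

```c
#include <stdio.h>
#include <string.h>

#define MAX_RULES 16

typedef struct {
    char trigger_event[32];   /* game event that fires the rule              */
    char action[64];          /* game event generated on the player's behalf */
    int  enabled;
} offline_rule_t;

typedef struct {
    offline_rule_t rules[MAX_RULES];
    int rule_count;
} player_agent_t;

/* Event monitor: when a world event matches a rule's trigger, the agent
 * generates the rule's action as if the offline player had performed it. */
void on_world_event(player_agent_t *agent, const char *event)
{
    for (int i = 0; i < agent->rule_count; i++) {
        offline_rule_t *r = &agent->rules[i];
        if (r->enabled && strcmp(r->trigger_event, event) == 0)
            printf("agent emits game event: %s\n", r->action);
    }
}

/* Message handler: commands arrive from the offline player (for example,
 * via a messaging client or Web browser). Only "add" and "list" are shown;
 * remove, edit, and one-time fire would follow the same pattern. */
void on_player_message(player_agent_t *agent, const char *cmd,
                       const char *trigger, const char *action)
{
    if (strcmp(cmd, "add") == 0 && agent->rule_count < MAX_RULES) {
        offline_rule_t *r = &agent->rules[agent->rule_count++];
        snprintf(r->trigger_event, sizeof r->trigger_event, "%s", trigger);
        snprintf(r->action, sizeof r->action, "%s", action);
        r->enabled = 1;
    } else if (strcmp(cmd, "list") == 0) {
        for (int i = 0; i < agent->rule_count; i++)
            printf("rule %d: on %s do %s\n", i,
                   agent->rules[i].trigger_event, agent->rules[i].action);
    }
}
```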
Abstract:
A system for limiting the size of a local storage of a processor is provided. A facility is provided in association with a processor for setting a local storage size limit. This facility is a privileged facility and can only be accessed by the operating system running on a control processor in the multiprocessor system or by the associated processor itself. The operating system sets the value stored in the local storage limit register when it initializes a context switch in the processor. When the processor accesses the local storage using a request address, the local storage address corresponding to the request address is compared against the local storage limit size value to determine whether the local storage address itself, or a modulo of the local storage address, is used to access the local storage.
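The limit check itself reduces to a small comparison, sketched below; the register name lslr and the use of a plain modulo for wrapping are assumptions (a hardware implementation might use a mask instead), and a nonzero limit value is assumed.

```c
#include <stdint.h>

typedef struct {
    uint32_t lslr;          /* local storage limit register (privileged, nonzero) */
    uint8_t *local_store;   /* the local storage array itself                     */
} local_storage_t;

/* Translate a request address into a local storage address: if it falls
 * below the limit it is used directly; otherwise a modulo of the address
 * is used so accesses wrap within the configured local storage size. */
static inline uint32_t ls_effective_address(const local_storage_t *ls,
                                            uint32_t request_addr)
{
    return (request_addr < ls->lslr) ? request_addr
                                     : request_addr % ls->lslr;
}
```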
Abstract:
A buffer, a method, and a computer program product for DMA transfers are provided that are designed to save memory space within a local memory of a processor. The buffer is a return buffer with a portion reserved for DMA lists. A DMA controller accomplishes DMA transfers by: reading address elements from a DMA list located in the DMA list portion; reading the corresponding data from system memory; and copying the corresponding data to the return buffer portion. This buffer saves space because, as the buffer begins to fill up, the corresponding return data can overwrite the data in the DMA list. Accordingly, the DMA list is overlaid on top of the return buffer, such that the return data may destroy the DMA list, and the extra storage space for a separate DMA list is saved.
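The sketch below models the overlay in host software, with memcpy standing in for the DMA controller; placing the list at the tail of the buffer, and the assumption that each element transfers at least one list entry's worth of data (so return data never overtakes an unread entry), are illustrative choices rather than the claimed layout.

```c
#include <stdint.h>
#include <string.h>

typedef struct {
    uint64_t sys_addr;   /* system memory address of this element */
    uint32_t size;       /* bytes to transfer for this element    */
    uint32_t pad;
} dma_list_elem_t;

/* `buffer` is the return buffer; its last list_count entries hold the
 * DMA list. Each element is copied out before its return data is written,
 * so the return data may later overwrite (destroy) that list entry. */
void dma_list_transfer(uint8_t *buffer, size_t buffer_size, size_t list_count)
{
    const uint8_t *list =
        buffer + buffer_size - list_count * sizeof(dma_list_elem_t);
    size_t write_off = 0;

    for (size_t i = 0; i < list_count; i++) {
        dma_list_elem_t elem;
        memcpy(&elem, list + i * sizeof elem, sizeof elem);   /* read element  */
        memcpy(buffer + write_off,                            /* "DMA" the data */
               (const void *)(uintptr_t)elem.sys_addr, elem.size);
        write_off += elem.size;
    }
}
```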
Abstract:
A method for communicating with a processor event facility is provided. The method makes use of a channel interface as the primary mechanism for communicating with the processor event facility. The channel interface provides channels for communicating with processor facilities, memory flow control facilities, machine state registers, and external processor interrupt facilities, for example. These channels may be designated as blocking or non-blocking. With blocking channels, when no data is available to be read from the corresponding registers, or there is no space available to write to the corresponding registers, the processor is placed in a low power “stall” state. The processor is automatically awakened, via communication across the blocking channel, when data becomes available or space is freed. Thus, the channels of the present invention permit the processor to stay in a low power state.
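The blocking behavior can be pictured with the software analogy below, where a condition variable stands in for the hardware's low-power stall; the blocking_channel_t type and the single-entry channel depth are assumptions made for the sketch.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  ready;      /* "wake the processor" signal             */
    uint32_t        data;
    bool            has_data;   /* channel count, reduced to 0 or 1 here   */
} blocking_channel_t;

/* Blocking channel read: if no data is available, the caller sleeps (the
 * hardware analogue is the low power "stall" state) and is woken
 * automatically when the facility writes to the channel. */
uint32_t channel_read(blocking_channel_t *ch)
{
    pthread_mutex_lock(&ch->lock);
    while (!ch->has_data)
        pthread_cond_wait(&ch->ready, &ch->lock);   /* stall until data arrives */
    uint32_t v = ch->data;
    ch->has_data = false;
    pthread_mutex_unlock(&ch->lock);
    return v;
}

/* Facility side: making data available wakes the stalled processor. */
void channel_write(blocking_channel_t *ch, uint32_t v)
{
    pthread_mutex_lock(&ch->lock);
    ch->data = v;
    ch->has_data = true;
    pthread_cond_signal(&ch->ready);
    pthread_mutex_unlock(&ch->lock);
}
```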
Abstract:
A mechanism for communicating instructions and data between a processor and external devices is provided. The mechanism makes use of a channel interface as the primary mechanism for communicating between the processor and a memory flow controller. The channel interface provides channels for communicating with processor facilities, memory flow control facilities, machine state registers, and external processor interrupt facilities, for example. These channels may be designated as blocking or non-blocking. With blocking channels, when no data is available to be read from the corresponding registers, or there is no space available to write to the corresponding registers, the processor is placed in a low power “stall” state. The processor is automatically awakened, via communication across the blocking channel, when data becomes available or space is freed. Thus, the channels of the present invention permit the processor to stay in a low power state.
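Building on the same channel interface, the sketch below shows how a processor might queue one transfer command to its memory flow controller as a sequence of channel writes; the channel numbers, the write_channel() primitive, and the parameter ordering are assumptions for illustration, not a specific hardware ABI.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative channel IDs; the real assignment is hardware-specific. */
enum {
    CH_MFC_LOCAL_ADDR = 16,   /* local storage address of the transfer    */
    CH_MFC_EXT_ADDR   = 17,   /* external (system memory) address         */
    CH_MFC_SIZE       = 18,   /* transfer size in bytes                   */
    CH_MFC_CMD        = 21    /* command opcode; blocking when queue full */
};

/* Stand-in for the hardware channel write; a real blocking channel would
 * stall the processor in a low power state while the channel is full. */
static void write_channel(int channel, uint32_t value)
{
    printf("wrch ch%-2d <- 0x%08x\n", channel, value);
}

/* Queue a transfer: each parameter goes to its own channel, and the final
 * command write is the point at which the processor may stall. */
void mfc_queue_transfer(uint32_t ls_addr, uint32_t ext_addr,
                        uint32_t size, uint32_t opcode)
{
    write_channel(CH_MFC_LOCAL_ADDR, ls_addr);
    write_channel(CH_MFC_EXT_ADDR,   ext_addr);
    write_channel(CH_MFC_SIZE,       size);
    write_channel(CH_MFC_CMD,        opcode);
}
```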
Abstract:
A method and system for addressing memory of an information handling system in which the memory comprises a plurality of memory banks, each of which can support a plurality of different predetermined size memory modules. The sizes of the different modules are multiples of the module having the smallest size. In the embodiment described, two different sizes are employed, a 256K capacity module and a 1 Meg. capacity module, either of which can be installed in 1 of 4 memory banks. The maximum addressable range is therefore 4 Meg., while the minimum memory is 256K. The address range can be increased in increments of 256K, each corresponding to 1 segment, up to a total of 16 contiguous segments, or 4 Meg. A memory address bus comprising 22 lines is employed in the system. The 20 low order lines address each bank simultaneously. A converter converts the 4 high order address bits 22-19 to 16 sequentially ordered segment lines. A matrix of similar logic cells consisting of combinatorial logic processes each segment line to develop memory bank select signals in accordance with size signals obtained from the modules and supplied to the cells in the first row of the matrix, which then provide modified size signals to the remaining cells in the respective columns of the matrix. Contiguous address segments are provided from the minimum to the maximum range for every possible combination of memory modules installable in the four banks.
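A behavioral sketch of the segment-to-bank decode is given below; it collapses the matrix of logic cells into a simple loop, which is an assumption made for clarity rather than a model of the actual combinatorial array.

```c
#define NUM_BANKS 4

/* bank_segments[i] is the installed capacity of bank i in 256K segments:
 * 0 (empty), 1 (a 256K module), or 4 (a 1 Meg. module). Returns the bank
 * to select for the given 256K segment number, or -1 if the segment lies
 * beyond the installed memory, so that the address space stays contiguous
 * for every combination of installed modules. */
int select_bank(const int bank_segments[NUM_BANKS], int segment)
{
    int base = 0;
    for (int bank = 0; bank < NUM_BANKS; bank++) {
        if (segment < base + bank_segments[bank])
            return bank;
        base += bank_segments[bank];
    }
    return -1;   /* segment is outside the installed address range */
}
```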
Abstract:
A design structure for a circuit function that implements a load-when-reservation-lost instruction for performing cacheline polling is disclosed. Initially, a first process requests an action to be performed by a second process. The request is made via a store operation to a cacheable memory location. The first process then reads the cacheable memory location via a conditional load operation to determine whether or not the requested action has been completed by the second process, and the first process sets a reservation at the cacheable memory location if the requested action has not been completed by the second process. The conditional load operation of the first process is stalled until the reservation at the cacheable memory location has been lost. After the requested action has been completed, the reservation in the cacheable memory location is reset by the second process.
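The handshake can be modeled in software with C11 atomics, as in the hedged sketch below; the busy-wait loop stands in for the conditional load that stalls until the reservation is lost, and the mailbox_t layout and status values are assumptions for illustration.

```c
#include <stdatomic.h>
#include <stdint.h>

typedef struct {
    _Atomic uint32_t status;   /* cacheable location polled by the requester */
} mailbox_t;

enum { REQ_PENDING = 1, REQ_DONE = 2 };

/* First process: request the action, then poll the cacheable location.
 * With a load-when-reservation-lost instruction, the loop body would set
 * a reservation and stall until the second process's store clears it,
 * instead of spinning as this model does. */
void requester_wait(mailbox_t *mb)
{
    atomic_store(&mb->status, REQ_PENDING);          /* store the request     */
    while (atomic_load(&mb->status) != REQ_DONE)     /* conditional load/poll */
        ;                                            /* hardware would stall here */
}

/* Second process: complete the action and store the result, which resets
 * the reservation and wakes the stalled requester. */
void responder_complete(mailbox_t *mb)
{
    atomic_store(&mb->status, REQ_DONE);
}
```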
Abstract:
A mechanism for programming a direct memory access engine operating as a multithreaded processor is provided. A plurality of programs is received from a host processor in a local memory associated with the direct memory access engine. A request is received in the direct memory access engine from the host processor indicating that the plurality of programs located in the local memory is to be executed. The direct memory access engine executes two or more of the plurality of programs without intervention by the host processor. As each of the two or more programs completes execution, the direct memory access engine sends a completion notification to the host processor indicating that the program has completed execution.
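The host/engine split might look like the sketch below, where the program table, the single kick request, and the notify_host callback are assumptions for the example; concurrent execution is modeled as a simple loop.

```c
#include <stddef.h>

typedef struct {
    int id;
    void (*body)(void);          /* DMA program placed in local memory */
} dma_program_t;

typedef struct {
    dma_program_t programs[8];   /* local memory holding the programs  */
    size_t count;
    void (*notify_host)(int id); /* completion notification path       */
} dma_engine_t;

/* Host side: a single request tells the engine to run the programs that
 * are already resident in its local memory. The engine then executes them
 * without further host intervention; multithreaded execution is modeled
 * here as a plain loop with a per-program completion notification. */
void host_kick(dma_engine_t *eng)
{
    for (size_t i = 0; i < eng->count; i++) {
        eng->programs[i].body();
        eng->notify_host(eng->programs[i].id);
    }
}
```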