Abstract:
A method and circuit for a data processing system (200) provide a processor-based partitioned priority blocking mechanism by storing interrupt identifiers, partition identifiers, thread identifiers, and priority levels associated with accepted interrupt requests in special purpose registers (35-38) located at the processor core (26) to enable quick and efficient interrupt priority blocking on a partition basis.
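As a rough illustration of the per-partition blocking this abstract describes, the following C sketch models the special purpose registers as a small software table; the field layout, register count, and the irq_is_blocked helper are illustrative assumptions, not the patent's actual hardware interface.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative mirror of the special purpose registers the abstract
 * describes: one entry per accepted-but-not-yet-serviced interrupt. */
typedef struct {
    uint16_t interrupt_id;   /* identifier of the accepted interrupt   */
    uint8_t  partition_id;   /* logical partition that owns it         */
    uint8_t  thread_id;      /* hardware thread it was accepted on     */
    uint8_t  priority;       /* priority level of the accepted request */
    bool     valid;
} accepted_irq_spr_t;

#define NUM_IRQ_SPRS 4
static accepted_irq_spr_t irq_sprs[NUM_IRQ_SPRS];

/* Block a new request if an already-accepted interrupt belonging to the
 * same partition has equal or higher priority (partition-scoped blocking). */
bool irq_is_blocked(uint8_t partition_id, uint8_t priority)
{
    for (int i = 0; i < NUM_IRQ_SPRS; i++) {
        if (irq_sprs[i].valid &&
            irq_sprs[i].partition_id == partition_id &&
            irq_sprs[i].priority >= priority) {
            return true;   /* hold off the lower/equal priority request */
        }
    }
    return false;          /* no blocking entry for this partition */
}
```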
Abstract:
An integrated circuit device includes a processor core and a controller. The processor core issues a first command intended for a first thread of a plurality of threads. The controller de-allocates hardware resources of the controller that are allocated to the first thread during a thread reset process for the first thread, returns a specified value to the processor core in response to the first command intended for the first thread during the thread reset process, drops responses intended for the first thread from other devices during the thread reset process, completes the thread reset process in response to a determination that all expected responses intended for the first thread have been either received or dropped, and continues, during the thread reset process, to issue requests to other devices in response to commands from other threads of the plurality of threads and to process the corresponding responses.
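A minimal C sketch of how a controller might track the thread reset process described above; the per-thread bookkeeping, the RESET_DUMMY_VALUE placeholder for the "specified value", and the function names are assumptions made for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_THREADS       8
#define RESET_DUMMY_VALUE 0u   /* assumed "specified value" returned during reset */

typedef struct {
    bool     in_reset;      /* thread reset process in progress         */
    uint32_t outstanding;   /* responses still expected for this thread */
} thread_state_t;

static thread_state_t threads[NUM_THREADS];

/* A command arriving from the core while its thread is being reset gets
 * the specified value back instead of being forwarded to other devices. */
uint32_t controller_handle_command(uint8_t tid)
{
    if (threads[tid].in_reset)
        return RESET_DUMMY_VALUE;
    threads[tid].outstanding++;   /* request issued to another device */
    /* ... forward the request; real data is returned later ... */
    return 0;
}

/* A response from another device for a thread in reset is dropped; either
 * way it no longer counts as outstanding, and the reset completes once
 * every expected response has been received or dropped. */
void controller_handle_response(uint8_t tid)
{
    if (threads[tid].outstanding > 0)
        threads[tid].outstanding--;
    if (threads[tid].in_reset && threads[tid].outstanding == 0)
        threads[tid].in_reset = false;   /* thread reset process complete */
}
```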
Abstract:
In a data processing system having a plurality of resources and plurality of partitions, each partition including one or more resources of the plurality of resources, a method includes receiving an access request to a target resource of the plurality of resources; using a first set of transaction attributes of the access request to determine a partition identifier for the access request in which the partition identifier indicates a partition of the plurality of partitions which includes the target resource; using the partition identifier to determine access permissions for the partition indicated by the partition identifier; and based on the access permissions, determining whether or not the access request is permitted.
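The permission check can be pictured roughly as follows in C; the transaction attribute fields, the attribute-to-partition mapping, and the permission table layout are all hypothetical stand-ins for whatever the actual implementation uses.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_PARTITIONS 16

/* Assumed transaction attributes carried with each access request. */
typedef struct {
    uint8_t master_id;      /* which bus master issued the request */
    uint8_t address_space;  /* e.g. secure/non-secure              */
    bool    is_write;
} txn_attr_t;

/* Per-partition permissions, indexed by partition identifier. */
typedef struct {
    bool read_allowed;
    bool write_allowed;
} partition_perm_t;

static partition_perm_t perms[NUM_PARTITIONS];

/* Illustrative mapping from the first set of transaction attributes to
 * the partition that contains the target resource. */
static uint8_t lookup_partition_id(const txn_attr_t *attr)
{
    return (uint8_t)((attr->master_id ^ attr->address_space) % NUM_PARTITIONS);
}

/* Allow or deny the access based on the permissions of that partition. */
bool access_permitted(const txn_attr_t *attr)
{
    uint8_t pid = lookup_partition_id(attr);
    return attr->is_write ? perms[pid].write_allowed
                          : perms[pid].read_allowed;
}
```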
Abstract:
A method includes generating, by a first software process of a data processing system, a source partition descriptor for a DMA job which requires access to a first partition of a memory which is assigned to a second software process of the data processing system and not assigned to the first software process. The source partition descriptor comprises a partition identifier which identifies the first partition of the memory. A DMA unit receives the source partition descriptor and generates a destination partition descriptor for the DMA job. Generating the destination partition descriptor includes translating, by the DMA unit, the partition identifier to a buffer pool identifier which identifies a physical address within the first partition of the memory which is assigned to the second software process, and storing, by the DMA unit, the buffer pool identifier in the destination partition descriptor.
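A simplified C sketch of the descriptor translation step; the descriptor struct layouts and the pid_to_pool lookup table are assumptions for illustration, since the abstract does not specify how the DMA unit stores its translation state.

```c
#include <stdint.h>

/* Source descriptor built by the first software process: it names the
 * target partition only by its partition identifier. */
typedef struct {
    uint16_t partition_id;
    uint32_t length;
} src_partition_desc_t;

/* Destination descriptor produced by the DMA unit: the partition
 * identifier has been translated to a buffer pool identifier that
 * resolves to physical addresses inside the second process's partition. */
typedef struct {
    uint16_t buffer_pool_id;
    uint32_t length;
} dst_partition_desc_t;

/* Assumed translation table owned by the DMA unit. */
#define NUM_PARTITION_IDS 64
static uint16_t pid_to_pool[NUM_PARTITION_IDS];

dst_partition_desc_t dma_build_dst_descriptor(const src_partition_desc_t *src)
{
    dst_partition_desc_t dst;
    dst.buffer_pool_id = pid_to_pool[src->partition_id % NUM_PARTITION_IDS];
    dst.length = src->length;
    return dst;
}
```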
Abstract:
A method for controlling bandwidth in a direct memory access (DMA) unit of a computer processing system includes assigning a DMA job to a selected DMA engine, starting a source timer, and issuing a request to read a next section of data for the DMA job. If a sufficient amount of the data was not obtained, the DMA engine is allowed to wait until the source timer reaches a specified value before continuing to read additional data for the DMA job.
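One plausible way to express the source-timer throttling in C is sketched below; the per-engine fields and the dma_engine_may_read helper are hypothetical and only intended to show the wait-until-the-timer-expires behavior.

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed per-engine state for source-side bandwidth throttling. */
typedef struct {
    uint32_t source_timer;      /* ticks since the last read was issued   */
    uint32_t timer_threshold;   /* "specified value" the timer must reach */
    uint32_t bytes_requested;   /* data requested for the current section */
    uint32_t bytes_received;    /* data actually obtained so far          */
} dma_engine_t;

/* Called each clock tick: decide whether the engine may issue the next
 * read for its DMA job, or must keep waiting on the source timer. */
bool dma_engine_may_read(dma_engine_t *eng)
{
    eng->source_timer++;
    bool enough_data = eng->bytes_received >= eng->bytes_requested;
    if (!enough_data && eng->source_timer < eng->timer_threshold)
        return false;           /* throttle: keep waiting on the timer  */
    eng->source_timer = 0;      /* restart the source timer             */
    return true;                /* issue request for the next section   */
}
```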
Abstract:
A data processing system employs an improved arbitration process in selecting pending memory access requests, received from one or more processor cores, for servicing by the memory. The arbitration process uses memory timing and state information pertaining both to memory access requests already submitted to the memory for servicing and to pending memory access requests that have not yet been selected for servicing. The memory timing and state information may be predicted rather than exact: the component of the data processing system that implements the improved scheduling algorithm may not be able to determine the exact point in time at which a memory controller initiates a memory access for a corresponding memory access request, so the component maintains information that estimates or otherwise predicts the state of the memory at any given time.
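A toy C version of an arbiter working from predicted per-bank state might look like the following; the bank-state fields, the row-miss penalty constant, and the cost function are invented here purely to illustrate scheduling from estimated rather than observed memory state.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_BANKS    8
#define MAX_PENDING 16

/* Predicted (not measured) per-bank state: the arbiter cannot see exactly
 * when the memory controller starts each access, so it estimates. */
typedef struct {
    uint32_t predicted_ready_cycle;  /* cycle the bank is expected to be free */
    uint32_t open_row;               /* row predicted to be open in the bank  */
} bank_state_t;

typedef struct {
    uint8_t  bank;
    uint32_t row;
    bool     valid;
} mem_request_t;

/* Pick the pending request expected to complete soonest, preferring
 * row hits on banks predicted to be idle. */
int arbitrate(const mem_request_t pending[MAX_PENDING],
              const bank_state_t banks[NUM_BANKS],
              uint32_t now)
{
    int best = -1;
    uint32_t best_cost = UINT32_MAX;
    for (int i = 0; i < MAX_PENDING; i++) {
        if (!pending[i].valid)
            continue;
        const bank_state_t *b = &banks[pending[i].bank];
        uint32_t wait = (b->predicted_ready_cycle > now)
                            ? b->predicted_ready_cycle - now : 0;
        uint32_t cost = wait + ((b->open_row == pending[i].row) ? 0 : 10);
        if (cost < best_cost) {
            best_cost = cost;
            best = i;
        }
    }
    return best;   /* index of the selected request, or -1 if none pending */
}
```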
Abstract:
A processor includes a scheduler operative to schedule data blocks for transmission from a plurality of queues or other transmission elements, utilizing at least a first table and a second table. The first table may comprise at least first and second first-in first-out (FIFO) lists of entries corresponding to transmission elements for which data blocks are to be scheduled in accordance with a first scheduling algorithm, such as a weighted fair queuing scheduling algorithm. The scheduler maintains a first table pointer identifying at least one of the first and second lists of the first table as having priority over the other of the first and second lists of the first table. The second table includes a plurality of entries corresponding to transmission elements for which data blocks are to be scheduled in accordance with a second scheduling algorithm, such as a constant bit rate or variable bit rate scheduling algorithm.
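The first table and its priority pointer could be modeled in C roughly as below; the FIFO depth, the wfq_select and wfq_enqueue helpers, and the simple priority-flip policy are illustrative guesses, and the second (constant or variable bit rate) table is omitted for brevity.

```c
#include <stdint.h>
#include <stdbool.h>

#define LIST_DEPTH 64

/* First table: two FIFO lists of transmission-element identifiers served
 * by a weighted-fair-queuing style algorithm.  The pointer below marks
 * which of the two lists currently has priority. */
typedef struct {
    uint16_t entries[LIST_DEPTH];
    int      head, tail, count;
} fifo_list_t;

static fifo_list_t first_table[2];
static int priority_list = 0;   /* first table pointer */

/* Append a transmission element identifier to one of the two lists. */
void wfq_enqueue(int list, uint16_t elem)
{
    fifo_list_t *l = &first_table[list & 1];
    if (l->count == LIST_DEPTH)
        return;                 /* list full: drop (illustration only) */
    l->entries[l->tail] = elem;
    l->tail = (l->tail + 1) % LIST_DEPTH;
    l->count++;
}

/* Dequeue from the priority list first; when it empties, the pointer
 * flips so the other list takes priority for the next pass. */
bool wfq_select(uint16_t *elem)
{
    fifo_list_t *l = &first_table[priority_list];
    if (l->count == 0) {
        priority_list ^= 1;     /* swap which list has priority */
        l = &first_table[priority_list];
        if (l->count == 0)
            return false;       /* nothing scheduled in either list */
    }
    *elem = l->entries[l->head];
    l->head = (l->head + 1) % LIST_DEPTH;
    l->count--;
    return true;
}
```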
Abstract:
A processor includes scheduling circuitry for scheduling data blocks for transmission from multiple transmission elements, and traffic shaping circuitry coupled to the scheduling circuitry and operative to establish a traffic shaping requirement for the transmission of the data blocks from the transmission elements. The scheduling circuitry is configured for utilization of at least one time slot table which includes multiple locations, each corresponding to a transmission time slot. The scheduling circuitry is operative in conjunction with the time slot table to schedule the data blocks for transmission in a manner that substantially maintains the traffic shaping requirement established by the traffic shaping circuitry even in the presence of collisions between requests from the transmission elements for each of one or more of the time slots. In an illustrative embodiment, the scheduling circuitry utilizes a transmission element linking mechanism in conjunction with a set of pointers to accommodate multiple transmission elements which request the same time slot.
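A compact C sketch of a time slot table that absorbs collisions by chaining the colliding transmission elements onto a slot's head and tail pointers; the array sizes, sentinel value, and linking field are assumptions used only to illustrate the mechanism.

```c
#include <stdint.h>

#define NUM_SLOTS    1024
#define NUM_ELEMENTS  256
#define NO_ELEMENT  0xFFFF

/* One location per transmission time slot; head/tail pointers let several
 * colliding transmission elements be chained onto the same slot. */
typedef struct {
    uint16_t head;   /* first element that requested this slot  */
    uint16_t tail;   /* last element in the chain for this slot */
} slot_t;

static slot_t   slots[NUM_SLOTS];
static uint16_t next_link[NUM_ELEMENTS];   /* per-element linking field */

void slots_init(void)
{
    for (int i = 0; i < NUM_SLOTS; i++)
        slots[i].head = slots[i].tail = NO_ELEMENT;
}

/* Add a transmission element's request for a slot; a collision simply
 * appends the element to the slot's chain, so requests that collide are
 * deferred in order rather than discarded, and the shaped rate of the
 * elements already in the chain is substantially preserved. */
void slot_request(uint32_t slot, uint16_t elem)
{
    next_link[elem] = NO_ELEMENT;
    if (slots[slot].head == NO_ELEMENT) {
        slots[slot].head = slots[slot].tail = elem;
    } else {
        next_link[slots[slot].tail] = elem;   /* link behind current tail */
        slots[slot].tail = elem;
    }
}
```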
Abstract:
A processor includes scheduling circuitry for scheduling data blocks for transmission from a plurality of transmission elements. The scheduling circuitry has at least one time slot table accessible thereto, and is configured for utilization of the time slot table in scheduling the data blocks for transmission. The time slot table includes a plurality of locations, with each of the locations corresponding to a transmission time slot and being configurable for storing identifiers of at least two of the transmission elements. In an illustrative embodiment, a given one of the locations in the time slot table stores in a first portion thereof an identifier of a first one of the transmission elements that has requested transmission of a block of data in the corresponding time slot, and stores in a second portion thereof an identifier of a second one of the transmission elements that has requested transmission of a block of data in the corresponding time slot. Furthermore, additional transmission elements generating colliding requests for the given location can be linked between the first and second transmission elements using a linking mechanism. The use of multi-entry time slot table locations to accommodate collisions between transmission element requests considerably facilitates the maintenance of desired traffic shaping requirements.
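The two-identifier slot locations could be sketched in C as follows; the EMPTY sentinel, array sizes, and the policy of appending extra colliders to the chain between the first and second entries are illustrative assumptions.

```c
#include <stdint.h>

#define NUM_SLOTS    1024
#define NUM_ELEMENTS  256
#define EMPTY       0xFFFF

/* A time slot table location with two identifier fields, as the abstract
 * describes: the first and second elements that requested this slot. */
typedef struct {
    uint16_t first;    /* first colliding transmission element  */
    uint16_t second;   /* second colliding transmission element */
} slot_entry_t;

static slot_entry_t table[NUM_SLOTS];
static uint16_t     link_field[NUM_ELEMENTS];   /* chains extra colliders */

void table_init(void)
{
    for (int i = 0; i < NUM_SLOTS; i++)
        table[i].first = table[i].second = EMPTY;
}

/* Record a request for a slot.  The first two requesters occupy the two
 * fields of the location; any further collider is linked in between the
 * first and second elements via the per-element link field, so service
 * order becomes: first, chained elements, second. */
void request_slot(uint32_t s, uint16_t elem)
{
    link_field[elem] = EMPTY;
    if (table[s].first == EMPTY) {
        table[s].first = elem;
    } else if (table[s].second == EMPTY) {
        table[s].second = elem;
    } else {
        uint16_t cur = table[s].first;      /* walk the chain from first */
        while (link_field[cur] != EMPTY)
            cur = link_field[cur];
        link_field[cur] = elem;             /* append before second       */
    }
}
```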