Abstract:
A system and method for grouping processors is presented. A processing unit (PU) initiates an application and identifies the application's requirements. The PU assigns one or more synergistic processing units (SPUs) and a memory space to the application in the form of a group. The application specifies whether the task requires shared memory or private memory. Shared memory is a memory space that is accessible by both the SPUs and the PU. Private memory, however, is a memory space that is accessible only by the SPUs included in the group. When the application executes, the resources within the group are allocated to the application's execution thread. Each group has its own group properties, such as address space, policies (e.g. real-time, FIFO, run-to-completion) and priority (e.g. low or high). These group properties are used during thread execution to determine which groups take precedence over others.
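For illustration only, a minimal C sketch of the grouping idea; every type, field, and function name here is hypothetical and not taken from the described system:

    #include <stdio.h>

    /* Shared memory is visible to the PU and the SPUs; private memory only to
       the group's SPUs. Policies and priorities are per-group properties. */
    enum mem_kind { MEM_SHARED, MEM_PRIVATE };
    enum policy   { POL_REAL_TIME, POL_FIFO, POL_RUN_TO_COMPLETION };
    enum priority { PRIO_LOW, PRIO_HIGH };

    struct spu_group {
        int           spu_ids[8];   /* SPUs assigned to the application    */
        int           num_spus;
        enum mem_kind memory;       /* shared with the PU, or private      */
        enum policy   policy;       /* scheduling policy for the group     */
        enum priority priority;     /* decides precedence between groups   */
    };

    /* The PU builds a group from the application's stated requirements. */
    static struct spu_group make_group(int num_spus, enum mem_kind mem,
                                       enum policy pol, enum priority prio)
    {
        struct spu_group g = { .num_spus = num_spus, .memory = mem,
                               .policy = pol, .priority = prio };
        for (int i = 0; i < num_spus; i++)
            g.spu_ids[i] = i;       /* placeholder SPU assignment */
        return g;
    }

    int main(void)
    {
        struct spu_group g = make_group(4, MEM_PRIVATE, POL_FIFO, PRIO_HIGH);
        printf("group of %d SPUs, %s memory, %s priority\n", g.num_spus,
               g.memory == MEM_PRIVATE ? "private" : "shared",
               g.priority == PRIO_HIGH ? "high" : "low");
        return 0;
    }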
Abstract:
A system and method for virtualization of processor resources is presented. A thread is created on a processor and the processor's local memory is mapped into an effective address space. In doing so, the processor's local memory becomes accessible to other processors, regardless of whether the processor is running. Additional threads create additional local memory mappings into the effective address space. The effective address space corresponds to either a physical local memory or a “soft” copy area. When the processor is running, a different processor may access data located in the first processor's local memory directly from that processor's local storage area. When the processor is not running, a soft copy of the processor's local memory is stored in a memory location (e.g. locked cache memory, pinned system memory, virtual memory) so that other processors can continue accessing it.
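A minimal C sketch of the backing-store switch behind such a mapping; the structures and field names are hypothetical stand-ins for the effective-address machinery:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* One effective-address window per processor's local memory. While the
       processor runs, the window is backed by the physical local store; while
       it is suspended, it is backed by a soft copy held in system memory. */
    struct local_store_map {
        bool     running;
        uint8_t *physical_ls;    /* real local store (valid while running)   */
        uint8_t *soft_copy;      /* locked/pinned copy used when not running */
    };

    /* Other processors always resolve through the mapping, so the backing can
       change underneath them without changing the effective address. */
    static uint8_t *resolve(struct local_store_map *m, uint32_t offset)
    {
        uint8_t *base = m->running ? m->physical_ls : m->soft_copy;
        return base + offset;
    }

    int main(void)
    {
        uint8_t ls[256] = { [16] = 42 }, copy[256] = { [16] = 42 };
        struct local_store_map m = { .running = true,
                                     .physical_ls = ls, .soft_copy = copy };
        printf("running: %d\n", *resolve(&m, 16));
        m.running = false;                       /* processor descheduled */
        printf("suspended: %d\n", *resolve(&m, 16));
        return 0;
    }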
Abstract:
The page tables in the existing art are modified to allow virtual address resolution by mapping to multiple overlapping entries and resolving a physical address from the most specific entry. This enables more efficient use of system resources by allowing smaller frames to shadow larger frames. A page table is selected. When a virtual address in a request corresponds to an entry in the page table that identifies a next page table associated with a large frame, a determination is made that the virtual address corresponds to an entry in the next page table, that entry referencing a small frame overlay for the large frame. The virtual address is mapped to a physical address in the small frame overlay using data of the entry in the next page table. The physical address in a process-specific view of the large frame is returned.
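A minimal C sketch of the most-specific-entry rule; the frame sizes, table layout, and names are chosen only for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define SMALL_SHIFT 12   /* 4 KiB small frames (illustrative sizes) */
    #define LARGE_SHIFT 24   /* 16 MiB large frames                     */
    #define SMALL_PER_LARGE (1u << (LARGE_SHIFT - SMALL_SHIFT))

    struct small_entry { int valid; uint64_t frame_pa; };   /* overlay page  */
    struct next_table  { struct small_entry small[SMALL_PER_LARGE]; };
    struct large_entry {
        uint64_t           large_pa;   /* physical base of the large frame */
        struct next_table *next;       /* overlay table; may be NULL       */
    };

    /* Translate a virtual address: an overlapping small-frame entry in the
       next page table shadows the large frame; otherwise fall back to it. */
    static uint64_t translate(const struct large_entry *e, uint64_t va)
    {
        uint64_t idx = (va >> SMALL_SHIFT) & (SMALL_PER_LARGE - 1);
        uint64_t off = va & ((1u << SMALL_SHIFT) - 1);

        if (e->next && e->next->small[idx].valid)
            return e->next->small[idx].frame_pa | off;             /* most specific */
        return e->large_pa | (va & ((1u << LARGE_SHIFT) - 1));     /* large frame   */
    }

    int main(void)
    {
        static struct next_table overlay;
        overlay.small[3] = (struct small_entry){ .valid = 1, .frame_pa = 0x70000000 };
        struct large_entry le = { .large_pa = 0x10000000, .next = &overlay };
        printf("%#llx\n", (unsigned long long)translate(&le, 0x3010));  /* overlay hit */
        printf("%#llx\n", (unsigned long long)translate(&le, 0x5010));  /* large frame */
        return 0;
    }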
Abstract:
A technique for indicating a safe shared resource condition with respect to a disabled thread provides a mechanism for quickly indicating to other hardware threads that a temporarily disabled thread can no longer impact shared resources, such as shared special-purpose registers and translation look-aside buffers within the processor core. Signals from the pipelines within the core indicate whether any of the instructions pending in the pipeline impact the shared resources; if not, the thread disable status is presented to the other threads via a state change in a thread status register. Upon receiving an indication that a particular hardware thread is to be disabled, control logic halts the dispatch of instructions for that hardware thread and then waits until any indication that a shared resource is impacted by an instruction has cleared. The control logic then updates the thread status to indicate the thread is disabled.
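A minimal C model of the control-logic sequence; the pipeline signals and the status register are stubbed as ordinary variables, so this only illustrates the ordering, not the hardware:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_uint thread_status;      /* one "disabled" bit per hardware thread */
    static int         pending_impacts[4]; /* stand-in for pipeline signals          */

    static void halt_dispatch(int tid) { printf("dispatch halted for thread %d\n", tid); }

    /* Stand-in for the pipeline indication that an in-flight instruction still
       impacts a shared resource (shared SPRs, TLB entries, ...). */
    static bool shared_resource_impacted(int tid) { return pending_impacts[tid]-- > 0; }

    static void disable_thread(int tid)
    {
        halt_dispatch(tid);                      /* no new instructions dispatched     */
        while (shared_resource_impacted(tid))    /* wait for pending impacts to clear  */
            ;
        /* Safe: publish the disabled status so other threads can proceed. */
        atomic_fetch_or(&thread_status, 1u << tid);
    }

    int main(void)
    {
        pending_impacts[1] = 3;                  /* pretend three impacts are in flight */
        disable_thread(1);
        printf("status register: %#x\n", atomic_load(&thread_status));
        return 0;
    }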
Abstract:
An attribute of a descriptor associated with a task informs a runtime environment of which instructions a processor is to run to schedule a plurality of resources for completion of the task in accordance with a level of quality of service in a service level agreement.
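A minimal C sketch of dispatch driven by a descriptor attribute; the descriptor layout and the two service levels shown are hypothetical:

    #include <stdio.h>

    enum qos_level { QOS_BEST_EFFORT, QOS_GUARANTEED };

    struct task_descriptor {
        const char    *name;
        enum qos_level qos;       /* attribute read by the runtime environment */
    };

    static void schedule_best_effort(const struct task_descriptor *t)
    { printf("%s: scheduling resources on a best-effort basis\n", t->name); }

    static void schedule_guaranteed(const struct task_descriptor *t)
    { printf("%s: reserving resources to meet the agreed service level\n", t->name); }

    /* The attribute selects which scheduling instructions the processor runs. */
    static void schedule(const struct task_descriptor *t)
    {
        if (t->qos == QOS_GUARANTEED)
            schedule_guaranteed(t);
        else
            schedule_best_effort(t);
    }

    int main(void)
    {
        schedule(&(struct task_descriptor){ "encode", QOS_GUARANTEED });
        return 0;
    }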
Abstract:
Loading software on a plurality of processors is presented. A processing unit (PU) retrieves a file from system memory and loads it into its internal memory. The PU extracts a processor type from the file's header, which identifies whether the file should execute on the PU or on a synergistic processing unit (SPU). If an SPU should execute the file, the PU DMAs the file to the SPU for execution. In one embodiment, the file is a combined file that includes both PU and SPU code. In this embodiment, the PU identifies one or more section headers included in the file that indicate embedded SPU code within the combined file. The PU then extracts the SPU code from the combined file and DMAs the extracted code to an SPU for execution.
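A minimal C sketch of the dispatch decision, using an invented file layout (not the actual object-file format) and stubbed execution hooks:

    #include <stdint.h>
    #include <stdio.h>

    enum target { TARGET_PU, TARGET_SPU };

    struct section_header { enum target target; uint32_t offset; uint32_t size; };
    struct file_header    { uint32_t num_sections; };

    static void run_on_pu(uint32_t off, uint32_t len)
    { printf("executing %u bytes at offset %u on the PU\n", len, off); }

    static void dma_to_spu(uint32_t off, uint32_t len)
    { printf("DMAing %u bytes at offset %u to an SPU\n", len, off); }

    /* The PU walks the section headers of a combined file and routes each
       section: embedded SPU code is DMAed to an SPU, PU code runs locally. */
    static void load(const struct file_header *hdr, const struct section_header *sects)
    {
        for (uint32_t i = 0; i < hdr->num_sections; i++) {
            if (sects[i].target == TARGET_SPU)
                dma_to_spu(sects[i].offset, sects[i].size);
            else
                run_on_pu(sects[i].offset, sects[i].size);
        }
    }

    int main(void)
    {
        struct section_header sects[] = { { TARGET_PU, 64, 1024 },
                                          { TARGET_SPU, 1088, 2048 } };
        struct file_header hdr = { 2 };
        load(&hdr, sects);
        return 0;
    }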
Abstract:
A method and system for memory address translation and pinning are provided. The method includes attaching a memory address space identifier to a direct memory access (DMA) request, the DMA request being sent by a consumer and using a virtual address in a given address space. The method further includes looking up the memory address space identifier to find a translation of the virtual address in the given address space used in the DMA request to a physical page frame. Provided that the physical page frame is found, the method includes pinning the physical page frame for as long as the DMA request is in progress to prevent an unmapping operation of said virtual address in said given address space, and completing the DMA request, wherein the steps of attaching, looking up and pinning are centrally controlled by a host gateway.
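A minimal C sketch of the gateway-side steps; the lookup table and pin counter are placeholders for the actual translation and pinning machinery:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    struct frame { uint64_t paddr; int pin_count; };   /* unmappable only when pin_count == 0 */

    struct mapping { uint32_t asid; uint64_t vaddr; struct frame *frame; };

    static struct frame   frame0      = { 0x200000, 0 };
    static struct mapping mappings[]  = { { 7, 0x1000, &frame0 } };

    /* Gateway lookup: address-space identifier + virtual address -> page frame. */
    static struct frame *lookup(uint32_t asid, uint64_t vaddr)
    {
        for (size_t i = 0; i < sizeof mappings / sizeof *mappings; i++)
            if (mappings[i].asid == asid && mappings[i].vaddr == vaddr)
                return mappings[i].frame;
        return NULL;
    }

    /* The host gateway attaches the identifier, translates, and keeps the
       frame pinned for the whole duration of the DMA. */
    static void host_gateway_dma(uint32_t asid, uint64_t vaddr)
    {
        struct frame *f = lookup(asid, vaddr);
        if (!f)
            return;                                   /* no translation found */
        f->pin_count++;                               /* block unmapping       */
        printf("DMA against physical %#llx\n", (unsigned long long)f->paddr);
        f->pin_count--;                               /* DMA complete: unpin   */
    }

    int main(void)
    {
        host_gateway_dma(7, 0x1000);
        return 0;
    }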
Abstract:
A method for retrieving information from a storage unit is presented. The method includes: receiving, by an input output memory management unit, second-level translation information representative of a partition of a storage unit address space; receiving, by the input output memory management unit, a direct memory access request that comprises a consumer identifier and a second memory address that was first-level translated by a communication circuit translation entity; performing, by the input output memory management unit, a second-level translation of the second memory address so as to provide a third memory address, in response to the identity of the consumer; and accessing the storage unit using the third memory address.
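A minimal C sketch of the second-level step; a per-consumer partition table stands in for the IOMMU's real translation structures, and the names are hypothetical:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_CONSUMERS 16
    #define BAD_ADDR      UINT64_MAX

    /* Per-consumer view of the storage unit address space, loaded ahead of
       time as the second-level translation information. */
    struct partition { uint64_t base; uint64_t limit; };

    static struct partition second_level[NUM_CONSUMERS];

    /* Second memory address (already first-level translated by the
       communication circuit) -> third memory address used to access storage. */
    static uint64_t iommu_translate(uint32_t consumer_id, uint64_t second_addr)
    {
        const struct partition *p = &second_level[consumer_id % NUM_CONSUMERS];
        if (second_addr >= p->limit)
            return BAD_ADDR;                 /* outside this consumer's partition */
        return p->base + second_addr;
    }

    int main(void)
    {
        second_level[3] = (struct partition){ .base = 0x40000000, .limit = 0x100000 };
        printf("%#llx\n", (unsigned long long)iommu_translate(3, 0x2000));
        return 0;
    }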