Abstract:
A technique for grouping processors is presented. A processing unit (PU) initiates an application and identifies the application's requirements. The PU assigns one or more synergistic processing units (SPUs) and a memory space to the application in the form of a group. The application specifies whether the task requires shared memory or private memory. Shared memory is a memory space that is accessible by both the SPUs and the PU. Private memory, however, is a memory space that is accessible only by the SPUs included in the group. When the application executes, the resources within the group are allocated to the application's execution thread. Each group has its own group properties, such as address space, policies (e.g., real-time, FIFO, run-to-completion), and priority (e.g., low or high). These group properties are used during thread execution to determine which groups take precedence over others.
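The group abstraction above can be pictured as a small descriptor that records the assigned SPUs, the memory space, and the scheduling properties. The following C sketch is purely illustrative; the structure name, fields, and limits are assumptions, not taken from any actual implementation.

```c
/* Hypothetical sketch of a group descriptor as described above.
 * Names, fields, and limits are illustrative assumptions. */
#include <stdbool.h>
#include <stddef.h>

#define MAX_SPUS_PER_GROUP 8

enum group_policy   { POLICY_REAL_TIME, POLICY_FIFO, POLICY_RUN_TO_COMPLETION };
enum group_priority { PRIORITY_LOW, PRIORITY_HIGH };
enum memory_mode    { MEM_SHARED, MEM_PRIVATE };  /* shared: PU + SPUs; private: group SPUs only */

struct spu_group {
    int                 spu_ids[MAX_SPUS_PER_GROUP]; /* SPUs assigned to this group           */
    size_t              num_spus;
    void               *mem_base;                    /* memory space assigned to the group    */
    size_t              mem_size;
    enum memory_mode    mem_mode;                    /* shared or private, per the application */
    enum group_policy   policy;                      /* consulted during thread execution     */
    enum group_priority priority;                    /* determines precedence between groups  */
};

/* Assign SPUs and a memory space to an application, returning the group
 * descriptor whose properties the scheduler later consults. */
struct spu_group make_group(const int *spus, size_t n,
                            void *mem, size_t mem_size,
                            enum memory_mode mode,
                            enum group_policy policy,
                            enum group_priority prio)
{
    struct spu_group g = {0};
    g.num_spus = (n < MAX_SPUS_PER_GROUP) ? n : MAX_SPUS_PER_GROUP;
    for (size_t i = 0; i < g.num_spus; i++)
        g.spu_ids[i] = spus[i];
    g.mem_base = mem;
    g.mem_size = mem_size;
    g.mem_mode = mode;
    g.policy   = policy;
    g.priority = prio;
    return g;
}
```

In this picture, an application would obtain such a descriptor when it starts, and the scheduler would consult the policy and priority fields when deciding which group's execution thread runs next.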
Abstract:
A processing unit (PU) retrieves a file from system memory and loads it into its internal memory. The PU extracts a processor type from the file's header, which identifies whether the file should execute on the PU or on a synergistic processing unit (SPU). If an SPU should execute the file, the PU DMAs the file to the SPU for execution. In one embodiment, the file is a combined file that includes both PU and SPU code. In this embodiment, the PU identifies one or more section headers in the file that indicate embedded SPU code within the combined file, extracts the SPU code from the combined file, and DMAs the extracted code to an SPU for execution.
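The processor-type check can be illustrated with an ELF-style header inspection, since PU and SPU binaries are commonly ELF images. This is a minimal sketch; the SPU machine number is defined locally as an assumption rather than taken from an authoritative source.

```c
/* Sketch: read an ELF header and dispatch on the target machine, analogous
 * to the PU inspecting a processor type in the file header. */
#include <elf.h>
#include <stddef.h>
#include <string.h>

#define EM_SPU_ASSUMED 23   /* assumed machine number for Cell/B.E. SPU images */

enum target { TARGET_PU, TARGET_SPU, TARGET_UNKNOWN };

enum target classify_image(const unsigned char *image, size_t len)
{
    Elf32_Ehdr hdr;   /* e_machine sits at the same offset in 32- and 64-bit headers */

    if (len < sizeof hdr || memcmp(image, ELFMAG, SELFMAG) != 0)
        return TARGET_UNKNOWN;
    memcpy(&hdr, image, sizeof hdr);

    if (hdr.e_machine == EM_PPC || hdr.e_machine == EM_PPC64)
        return TARGET_PU;          /* execute on the PU itself       */
    if (hdr.e_machine == EM_SPU_ASSUMED)
        return TARGET_SPU;         /* DMA the image to an SPU        */
    return TARGET_UNKNOWN;
}
```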
Abstract:
A system and method for grouping processors is presented. A processing unit (PU) initiates an application and identifies the application's requirements. The PU assigns one or more synergistic processing units (SPUs) and a memory space to the application in the form of a group. The application specifies whether the task requires shared memory or private memory. Shared memory is a memory space that is accessible by both the SPUs and the PU. Private memory, however, is a memory space that is accessible only by the SPUs included in the group. When the application executes, the resources within the group are allocated to the application's execution thread. Each group has its own group properties, such as address space, policies (e.g., real-time, FIFO, run-to-completion), and priority (e.g., low or high). These group properties are used during thread execution to determine which groups take precedence over others.
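For comparison, the libspe2 library that shipped with Cell/B.E. Linux exposes a "gang" abstraction that similarly groups SPE contexts for scheduling. The sketch below follows the libspe2 API as documented, but the signatures are quoted from memory and should be verified against <libspe2.h>; the gang mechanism is a related construct, not necessarily the grouping described in this abstract.

```c
/* Illustrative use of libspe2 gang contexts to run one SPU program on a
 * group of SPEs.  Verify the API against <libspe2.h> before relying on it. */
#include <libspe2.h>
#include <stdio.h>

int run_in_gang(const char *spu_elf_path, int nspus)
{
    spe_gang_context_ptr_t gang = spe_gang_context_create(0);
    if (!gang) { perror("spe_gang_context_create"); return -1; }

    spe_program_handle_t *prog = spe_image_open(spu_elf_path);
    if (!prog) { perror("spe_image_open"); return -1; }

    for (int i = 0; i < nspus; i++) {
        spe_context_ptr_t spe = spe_context_create(0, gang); /* member of the group */
        unsigned int entry = SPE_DEFAULT_ENTRY;
        spe_stop_info_t stop;

        if (!spe || spe_program_load(spe, prog) != 0)
            return -1;
        spe_context_run(spe, &entry, 0, NULL, NULL, &stop);  /* blocking run */
        spe_context_destroy(spe);
    }
    spe_image_close(prog);
    spe_gang_context_destroy(gang);
    return 0;
}
```

In practice each spe_context_run call would be issued from its own thread so the group members run concurrently; the loop above runs them one after another only for brevity.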
Abstract:
A system and method for virtualization of processor resources is presented. A thread is created on a processor and the processor's local memory is mapped into an effective address space. As a result, the processor's local memory is accessible by other processors, regardless of whether the processor is running. Additional threads create additional local memory mappings in the effective address space. The effective address space corresponds either to the physical local memory or to a “soft” copy area. When the processor is running, a different processor may access data located in the first processor's local memory directly from that local storage area. When the processor is not running, a soft copy of the processor's local memory is stored in a memory location (e.g., locked cache memory, pinned system memory, virtual memory) so that other processors can continue accessing it.
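One way to picture the live-versus-soft-copy resolution is a small accessor that hands out either the mapped local store or the saved copy, depending on whether the owning processor is running. All names, the local store size, and the suspend path below are illustrative assumptions.

```c
/* Hypothetical sketch of the "live mapping or soft copy" idea: an effective
 * address for a processor's local store resolves either to the real local
 * store (owner running) or to a saved soft copy in system memory. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define LOCAL_STORE_SIZE (256 * 1024)       /* assumed local store size */

struct ls_mapping {
    bool     running;                       /* is the owning processor active?     */
    uint8_t *live_ls;                       /* mapping of the physical local store */
    uint8_t  soft_copy[LOCAL_STORE_SIZE];   /* pinned/locked backing copy          */
};

/* Resolve the effective address for this local store: other processors
 * always get a valid pointer, whether or not the owner is running. */
uint8_t *ls_resolve(struct ls_mapping *m)
{
    return m->running ? m->live_ls : m->soft_copy;
}

/* Called when the owning processor is suspended: preserve its local store so
 * other threads can keep reading through the same effective addresses. */
void ls_suspend(struct ls_mapping *m)
{
    memcpy(m->soft_copy, m->live_ls, LOCAL_STORE_SIZE);
    m->running = false;
}
```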
Abstract:
A method for loading software on a plurality of processors is presented. A processing unit (PU) retrieves a file from system memory and loads it into its internal memory. The PU extracts a processor type from the file's header, which identifies whether the file should execute on the PU or on a synergistic processing unit (SPU). If an SPU should execute the file, the PU DMAs the file to the SPU for execution. In one embodiment, the file is a combined file that includes both PU and SPU code. In this embodiment, the PU identifies one or more section headers in the file that indicate embedded SPU code within the combined file, extracts the SPU code from the combined file, and DMAs the extracted code to an SPU for execution.
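The combined-file case can be sketched as a scan over ELF section headers for an embedded SPU image. The section name ".spu.image" is a made-up marker (the abstract does not name one), and the memcpy stands in for the DMA transfer to the SPU.

```c
/* Sketch: scan a combined ELF image's section headers for embedded SPU code. */
#include <elf.h>
#include <stdlib.h>
#include <string.h>

/* Returns a heap copy of the embedded SPU code, or NULL if none is found. */
unsigned char *extract_spu_section(const unsigned char *file, size_t *out_len)
{
    const Elf64_Ehdr *eh = (const Elf64_Ehdr *)file;
    const Elf64_Shdr *sh = (const Elf64_Shdr *)(file + eh->e_shoff);
    const char *strtab   = (const char *)(file + sh[eh->e_shstrndx].sh_offset);

    for (int i = 0; i < eh->e_shnum; i++) {
        if (strcmp(strtab + sh[i].sh_name, ".spu.image") == 0) {
            unsigned char *code = malloc(sh[i].sh_size);
            if (!code)
                return NULL;
            memcpy(code, file + sh[i].sh_offset, sh[i].sh_size); /* stand-in for the DMA */
            *out_len = sh[i].sh_size;
            return code;
        }
    }
    return NULL;
}
```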
Abstract:
A mechanism is provided for automatic use of large pages. An operating system loader performs aggressive contiguous allocation, followed by demand paging of small pages into a best-effort contiguous and naturally aligned physical address range sized for a large page. The operating system detects when the large page is fully populated and switches the mapping to use large pages. If the operating system runs low on memory, it can free portions of the reserved range and degrade gracefully.
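The promotion step can be sketched as bookkeeping over the small pages inside each large-page-sized, naturally aligned range: once every small page is resident, the mapping is switched. The page sizes and the map_as_large_page() hook below are placeholders, not real OS interfaces.

```c
/* Hypothetical sketch of the large-page promotion decision. */
#include <stdbool.h>
#include <stdint.h>

#define SMALL_PAGE_SHIFT 12                       /* 4 KiB small pages            */
#define LARGE_PAGE_SHIFT 24                       /* 16 MiB large page (example)  */
#define PAGES_PER_LARGE  (1u << (LARGE_PAGE_SHIFT - SMALL_PAGE_SHIFT))

struct large_page_frame {
    uintptr_t base;                               /* naturally aligned base address  */
    uint32_t  populated;                          /* count of resident small pages   */
    bool      bitmap[PAGES_PER_LARGE];            /* which small pages are resident  */
    bool      promoted;
};

void map_as_large_page(uintptr_t base);           /* placeholder: OS remaps the range */

/* Called from the small-page fault path for addresses inside this frame. */
void note_small_page_fault(struct large_page_frame *f, uintptr_t addr)
{
    uint32_t idx = (uint32_t)((addr - f->base) >> SMALL_PAGE_SHIFT);

    if (!f->bitmap[idx]) {
        f->bitmap[idx] = true;
        f->populated++;
    }
    if (!f->promoted && f->populated == PAGES_PER_LARGE) {
        map_as_large_page(f->base);               /* switch the mapping to a large page */
        f->promoted = true;
    }
}
```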
Abstract:
A mechanism is provided that enables a processor in a multiprocessor complex to function as a coprocessor executing a specific function. The method includes a mechanism for activating a processor to function as a coprocessor, as well as a mechanism for executing a coprocessor request on the system. The present invention also provides a mechanism for efficient processor-to-processor communication for processors coupled to a common bus. Overall system performance is enhanced by significantly reducing the use of hardware interrupts for processor-to-processor communication.
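The interrupt-avoiding communication path can be sketched as a shared-memory mailbox that the coprocessor polls instead of waiting for a hardware interrupt. The mailbox layout, opcode field, and the work done by the coprocessor below are illustrative assumptions.

```c
/* Sketch of interrupt-free processor-to-processor communication over shared
 * memory on a common bus: the requester posts a coprocessor request and the
 * coprocessor polls a ready flag rather than taking an interrupt. */
#include <stdatomic.h>
#include <stdint.h>

struct mailbox {
    _Atomic uint32_t ready;     /* 0 = empty, 1 = request pending             */
    uint32_t         opcode;    /* which coprocessor function to execute      */
    uint64_t         arg;       /* argument or pointer to a request block     */
    uint64_t         result;
};

/* Requesting processor: publish the request, then raise the flag. */
void post_request(struct mailbox *mb, uint32_t opcode, uint64_t arg)
{
    mb->opcode = opcode;
    mb->arg    = arg;
    atomic_store_explicit(&mb->ready, 1, memory_order_release);
}

/* Coprocessor side: poll the flag, execute the request, clear the flag. */
uint64_t serve_one_request(struct mailbox *mb)
{
    while (atomic_load_explicit(&mb->ready, memory_order_acquire) == 0)
        ;                                        /* spin (could also yield)   */
    uint64_t r = mb->opcode + mb->arg;           /* stand-in for the real work */
    mb->result = r;
    atomic_store_explicit(&mb->ready, 0, memory_order_release);
    return r;
}
```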