Loading Software on a Plurality of Processors
    1.
    Invention Application
    Loading Software on a Plurality of Processors (Expired)

    Publication No.: US20080235679A1

    Publication Date: 2008-09-25

    Application No.: US12131348

    Filing Date: 2008-06-02

    IPC Class: G06F9/445

    CPC Class: G06F9/44557 G06F9/44526

    Abstract: Loading software on a plurality of processors is presented. A processing unit (PU) retrieves a file from system memory and loads it into its internal memory. The PU extracts a processor type from the file's header that identifies whether the file should execute on the PU or a synergistic processing unit (SPU). If an SPU should execute the file, the PU DMAs the file to the SPU for execution. In one embodiment, the file is a combined file that includes both PU and SPU code. In this embodiment, the PU identifies one or more section headers in the file that indicate embedded SPU code within the combined file, extracts the SPU code from the combined file, and DMAs the extracted code to an SPU for execution.
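
    A minimal C sketch of the dispatch step described in the abstract. The header layout (file_header_t), the processor-type values, and the helpers run_on_pu and dma_to_spu are illustrative assumptions, not the patent's file format or the Cell SDK API.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical header layout; the patent does not specify the real format. */
typedef struct {
    uint32_t processor_type;   /* PROC_TYPE_PU or PROC_TYPE_SPU */
    uint32_t code_offset;      /* byte offset of the executable image in the file */
    uint32_t code_size;
} file_header_t;

enum { PROC_TYPE_PU = 1, PROC_TYPE_SPU = 2 };

/* Stand-ins for the platform services the abstract assumes. */
static void run_on_pu(const uint8_t *image, uint32_t size)
{
    printf("PU executes %u bytes locally\n", size);
    (void)image;
}

static void dma_to_spu(int spu_id, const uint8_t *image, uint32_t size)
{
    printf("DMA %u bytes to SPU %d for execution\n", size, spu_id);
    (void)image;
}

/* PU-side loader: read the header from the file and dispatch the image. */
static void load_file(const uint8_t *file, int spu_id)
{
    file_header_t hdr;
    memcpy(&hdr, file, sizeof hdr);                 /* header sits at the start of the file */

    const uint8_t *image = file + hdr.code_offset;
    if (hdr.processor_type == PROC_TYPE_SPU)
        dma_to_spu(spu_id, image, hdr.code_size);   /* SPU should execute this file */
    else
        run_on_pu(image, hdr.code_size);            /* PU executes it itself */
}

int main(void)
{
    uint8_t file[64] = { 0 };
    file_header_t hdr = { PROC_TYPE_SPU, sizeof hdr, 16 };
    memcpy(file, &hdr, sizeof hdr);                 /* fake file containing an SPU image */
    load_file(file, 0);
    return 0;
}
```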


    Grouping processors and assigning shared memory space to a group in a heterogeneous computer environment
    2.
    Invention Application
    Grouping processors and assigning shared memory space to a group in a heterogeneous computer environment (In Force)

    Publication No.: US20080155203A1

    Publication Date: 2008-06-26

    Application No.: US12042254

    Filing Date: 2008-03-04

    IPC Class: G06F12/00

    CPC Class: G06F9/5061 G06F2209/5012

    Abstract: An approach to grouping processors is presented. A processing unit (PU) initiates an application and identifies the application's requirements. The PU assigns one or more synergistic processing units (SPUs) and a memory space to the application in the form of a group. The application specifies whether the task requires shared memory or private memory. Shared memory is a memory space that is accessible by both the SPUs and the PU. Private memory, however, is a memory space that is accessible only by the SPUs included in the group. When the application executes, the resources within the group are allocated to the application's execution thread. Each group has its own group properties, such as address space, policies (e.g. real-time, FIFO, run-to-completion), and priority (e.g. low or high). These group properties are used during thread execution to determine which groups take precedence over other tasks.
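
    A rough C sketch of the kind of group record the abstract describes. The structure, the field names, and the eight-SPU limit are assumptions for illustration only, not the patent's data layout.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical representation of a processor group and its properties. */
typedef enum { MEM_SHARED, MEM_PRIVATE } mem_type_t;   /* visibility of the group's memory space */
typedef enum { POLICY_REALTIME, POLICY_FIFO, POLICY_RUN_TO_COMPLETION } policy_t;
typedef enum { PRIO_LOW, PRIO_HIGH } priority_t;

typedef struct {
    int        spu_ids[8];     /* SPUs assigned to the group */
    int        num_spus;
    void      *memory;         /* memory space assigned to the group */
    size_t     memory_size;
    mem_type_t mem_type;       /* shared: PU + SPUs; private: group SPUs only */
    policy_t   policy;         /* scheduling policy used during thread execution */
    priority_t priority;       /* used to decide which group takes precedence */
} spu_group_t;

/* PU-side helper: build a group from an application's stated requirements. */
static spu_group_t make_group(int num_spus, size_t mem_size,
                              mem_type_t mem_type, policy_t policy,
                              priority_t priority)
{
    spu_group_t g = { .num_spus = num_spus, .memory_size = mem_size,
                      .mem_type = mem_type, .policy = policy, .priority = priority };
    for (int i = 0; i < num_spus && i < 8; i++)
        g.spu_ids[i] = i;                      /* placeholder SPU assignment */
    g.memory = malloc(mem_size);               /* the group's memory space */
    return g;
}

int main(void)
{
    /* e.g. a real-time, high-priority group of two SPUs with 64 KiB of shared memory */
    spu_group_t g = make_group(2, 64 * 1024, MEM_SHARED, POLICY_REALTIME, PRIO_HIGH);
    printf("group: %d SPUs, %s memory, priority %s\n",
           g.num_spus, g.mem_type == MEM_SHARED ? "shared" : "private",
           g.priority == PRIO_HIGH ? "high" : "low");
    free(g.memory);
    return 0;
}
```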


    Light Weight Task Switching When a Shared Memory Condition is Signaled
    3.
    Invention Application
    Light Weight Task Switching When a Shared Memory Condition is Signaled (Expired)

    Publication No.: US20080163241A1

    Publication Date: 2008-07-03

    Application No.: US12049317

    Filing Date: 2008-03-15

    IPC Class: G06F9/46 G06F12/08 G06F11/30

    CPC Class: G06F12/0842 G06F9/526

    Abstract: An approach is presented that uses a handler to detect asynchronous lock line reservation lost events and switches tasks based upon whether a condition is true or a mutex lock is acquired. A synergistic processing unit (SPU) invokes a first thread and, during execution, the first thread requests external data that is shared with other threads or processors in the system. This shared data may be protected with a mutex lock or other shared memory synchronization constructs. When the requested data is not available, the SPU switches to a second thread and monitors lock line reservation lost events in order to determine when the data becomes available. When the data is available, the SPU switches back to the first thread and processes the first thread's request.
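
    A control-flow sketch, in C, of the task-switching loop the abstract describes. C11 atomics and a plain flag stand in for the SPU's lock-line reservation and event facilities, and everything runs in one simulated loop rather than on real hardware; the helper names are hypothetical.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int shared_mutex = 1;            /* 1 = held elsewhere, 0 = free */
static bool reservation_lost = false;          /* set when the lock line changes */

static bool try_acquire_mutex(void)
{
    /* on an SPU this would be a load-with-reservation / conditional-store pair */
    int expected = 0;
    return atomic_compare_exchange_strong(&shared_mutex, &expected, 1);
}

static void run_second_thread(void)
{
    puts("mutex unavailable; switching to second thread");
    /* simulate the current owner releasing the mutex, which on the real
     * hardware would raise a "lock line reservation lost" event */
    atomic_store(&shared_mutex, 0);
    reservation_lost = true;
}

int main(void)
{
    for (;;) {
        if (try_acquire_mutex()) {             /* shared data is available */
            puts("mutex acquired; switching back to first thread");
            break;
        }
        run_second_thread();                   /* make progress elsewhere */
        if (reservation_lost)                  /* the handler fires here ... */
            reservation_lost = false;          /* ... so re-test the mutex */
    }
    return 0;
}
```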


    Asynchronous Linked Data Structure Traversal
    4.
    Invention Application
    Asynchronous Linked Data Structure Traversal (In Force)

    Publication No.: US20080263091A1

    Publication Date: 2008-10-23

    Application No.: US12147540

    Filing Date: 2008-06-27

    IPC Class: G06F17/00

    Abstract: Asynchronously traversing a disjoint linked data structure is presented. A synergistic processing unit (SPU) includes a handler that works in conjunction with a memory flow controller (MFC) to traverse a disjoint linked data structure. The handler compares a search value with a node value and, based upon the comparison, provides the MFC with the effective address of the next node to traverse. In turn, the MFC retrieves the corresponding node data from system memory and stores it in the SPU's local storage area. The MFC stalls processing and sends an asynchronous event interrupt to the SPU, which instructs the handler to retrieve the latest node data from the local storage area and compare it with the search value. The traversal continues until the handler matches the search value with a node value or determines that the search has failed.
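
    A minimal C sketch of the traversal loop described in the abstract, with memcpy standing in for the MFC's DMA transfer and an ordinary linked list standing in for the disjoint structure in system memory. The node layout and helper names are assumptions, and the asynchronous stall/interrupt hand-off is collapsed into a synchronous loop.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct node {
    int          value;
    struct node *next;     /* "effective address" of the next node, NULL at the tail */
} node_t;

/* MFC stand-in: copy one node from "system memory" into SPU local store. */
static void mfc_fetch(node_t *local_store, const node_t *effective_addr)
{
    memcpy(local_store, effective_addr, sizeof *local_store);
}

/* Handler: compare the freshly fetched node and hand the MFC the next
 * effective address until the value is found or the list ends. */
static bool traverse(const node_t *head, int search_value, node_t *local_store)
{
    const node_t *ea = head;
    while (ea != NULL) {
        mfc_fetch(local_store, ea);                 /* asynchronous DMA in the real design */
        if (local_store->value == search_value)
            return true;                            /* match found */
        ea = local_store->next;                     /* next node to traverse */
    }
    return false;                                   /* failed search */
}

int main(void)
{
    node_t c = { 30, NULL }, b = { 20, &c }, a = { 10, &b };
    node_t local;                                   /* SPU local storage area */
    printf("20 found: %d\n", traverse(&a, 20, &local));
    printf("99 found: %d\n", traverse(&a, 99, &local));
    return 0;
}
```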


    System and method for light weight task switching when a shared memory condition is signaled
    5.
    Invention Application
    System and method for light weight task switching when a shared memory condition is signaled (Pending, Published)

    Publication No.: US20070043916A1

    Publication Date: 2007-02-22

    Application No.: US11204424

    Filing Date: 2005-08-16

    IPC Class: G06F12/00

    CPC Class: G06F12/0842 G06F9/526

    Abstract: A system and method are presented for using a handler to detect asynchronous lock line reservation lost events and for switching tasks based upon whether a condition is true or a mutex lock is acquired. A synergistic processing unit (SPU) invokes a first thread and, during execution, the first thread requests external data that is shared with other threads or processors in the system. This shared data may be protected with a mutex lock or other shared memory synchronization constructs. When the requested data is not available, the SPU switches to a second thread and monitors lock line reservation lost events in order to determine when the data becomes available. When the data is available, the SPU switches back to the first thread and processes the first thread's request.
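
    A companion sketch to the one under entry 3, covering the "condition is true" case rather than mutex acquisition: the first thread waits for a flag in shared memory, and the SPU runs a second thread until a simulated reservation-lost event suggests the flag may have changed. All names are illustrative, and the event is simulated with a flag rather than real hardware.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int  data_ready = 0;            /* shared-memory condition word */
static atomic_bool reservation_lost = false;  /* set when the watched lock line changes */

static void producer_elsewhere(void)          /* e.g. the PU or another SPU */
{
    atomic_store(&data_ready, 1);             /* writing the lock line would raise the event */
    atomic_store(&reservation_lost, true);
}

static void second_thread(void)
{
    puts("condition not yet true; running second thread");
    producer_elsewhere();                     /* simulate external progress */
}

int main(void)
{
    while (!atomic_load(&data_ready)) {       /* first thread's condition */
        second_thread();                      /* lightweight switch away */
        if (atomic_exchange(&reservation_lost, false))
            continue;                         /* event fired: re-check the condition */
    }
    puts("condition true; switching back to first thread");
    return 0;
}
```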


    Processor Dedicated Code Handling in a Multi-Processor Environment
    6.
    Invention Application
    Processor Dedicated Code Handling in a Multi-Processor Environment (In Force)

    Publication No.: US20080276232A1

    Publication Date: 2008-11-06

    Application No.: US12173093

    Filing Date: 2008-07-15

    IPC Class: G06F9/45

    CPC Class: G06F9/5044 G06F2209/509

    Abstract: Code handling, such as interpreting language instructions or performing "just-in-time" compilation, is performed using a heterogeneous processing environment that shares a common memory. In a heterogeneous processing environment that includes a plurality of processors, one of the processors is programmed to perform a dedicated code-handling task, such as just-in-time compilation or interpretation of interpreted-language instructions (e.g. Java). The other processors request code handling that is performed by the dedicated processor. Speed is achieved by using a shared memory map so that the dedicated processor can quickly retrieve data provided by one of the other processors.
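
    A minimal C sketch of the shared-memory hand-off the abstract describes: a requesting processor publishes a code-handling request into a shared region, and the dedicated processor retrieves and interprets it in place. The request layout, the toy "interpreter", and the helper names are assumptions, not the patent's design.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical request record living in the shared memory map. */
typedef struct {
    const uint8_t *code;       /* e.g. bytecode to interpret or JIT-compile */
    size_t         length;
    int            result;     /* written back by the dedicated processor */
} code_request_t;

/* Shared memory map: both processors see the same request slot. */
static code_request_t shared_slot;

/* Dedicated processor: a toy interpreter that just sums the bytes. */
static void handle_request(code_request_t *req)
{
    int acc = 0;
    for (size_t i = 0; i < req->length; i++)
        acc += req->code[i];
    req->result = acc;
}

int main(void)
{
    /* Requesting processor: publish work into the shared map ... */
    static const uint8_t program[] = { 1, 2, 3, 4 };
    shared_slot = (code_request_t){ .code = program, .length = sizeof program };

    /* ... dedicated processor: retrieve the request directly from shared
     * memory, with no copying, which is where the speed comes from. */
    handle_request(&shared_slot);
    printf("interpreted result: %d\n", shared_slot.result);
    return 0;
}
```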


    System and method for grouping processors
    8.
    Invention Application
    System and method for grouping processors (In Force)

    Publication No.: US20050081201A1

    Publication Date: 2005-04-14

    Application No.: US10670833

    Filing Date: 2003-09-25

    IPC Class: G06F9/46 G06F9/50

    CPC Class: G06F9/5061 G06F2209/5012

    Abstract: A system and method for grouping processors is presented. A processing unit (PU) initiates an application and identifies the application's requirements. The PU assigns one or more synergistic processing units (SPUs) and a memory space to the application in the form of a group. The application specifies whether the task requires shared memory or private memory. Shared memory is a memory space that is accessible by both the SPUs and the PU. Private memory, however, is a memory space that is accessible only by the SPUs included in the group. When the application executes, the resources within the group are allocated to the application's execution thread. Each group has its own group properties, such as address space, policies (e.g. real-time, FIFO, run-to-completion), and priority (e.g. low or high). These group properties are used during thread execution to determine which groups take precedence over other tasks.
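
    A follow-on sketch to the one under entry 2, showing how group properties might be consulted during thread execution to decide precedence. The precedence rule here (priority first, real-time policy as tie-breaker) is an illustrative assumption, not the patent's algorithm.

```c
#include <stdio.h>

typedef enum { POLICY_REALTIME, POLICY_FIFO, POLICY_RUN_TO_COMPLETION } policy_t;
typedef enum { PRIO_LOW, PRIO_HIGH } priority_t;

typedef struct {
    const char *name;
    policy_t    policy;
    priority_t  priority;
} group_t;

/* Return nonzero if group a's threads should run before group b's:
 * high priority wins, and real-time policy breaks ties. */
static int takes_precedence(const group_t *a, const group_t *b)
{
    if (a->priority != b->priority)
        return a->priority == PRIO_HIGH;
    return a->policy == POLICY_REALTIME && b->policy != POLICY_REALTIME;
}

int main(void)
{
    group_t audio = { "audio", POLICY_REALTIME, PRIO_HIGH };
    group_t batch = { "batch", POLICY_FIFO,     PRIO_LOW  };
    printf("%s runs first\n", takes_precedence(&audio, &batch) ? audio.name : batch.name);
    return 0;
}
```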


    System and method for loading software on a plurality of processors
    9.
    Invention Application
    System and method for loading software on a plurality of processors (Expired)

    Publication No.: US20050086655A1

    Publication Date: 2005-04-21

    Application No.: US10670842

    Filing Date: 2003-09-25

    CPC Class: G06F9/44557 G06F9/44526

    Abstract: A system and method for loading software on a plurality of processors is presented. A processing unit (PU) retrieves a file from system memory and loads it into its internal memory. The PU extracts a processor type from the file's header that identifies whether the file should execute on the PU or a synergistic processing unit (SPU). If an SPU should execute the file, the PU DMAs the file to the SPU for execution. In one embodiment, the file is a combined file that includes both PU and SPU code. In this embodiment, the PU identifies one or more section headers in the file that indicate embedded SPU code within the combined file, extracts the SPU code from the combined file, and DMAs the extracted code to an SPU for execution.
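
    A complement to the sketch under entry 1, this time scanning a combined file's section headers for sections flagged as embedded SPU code. The loosely ELF-like section-header layout and the SEC_FLAG_SPU_CODE flag are hypothetical, not the patent's format.

```c
#include <stdint.h>
#include <stdio.h>

#define SEC_FLAG_SPU_CODE 0x1

typedef struct {
    uint32_t flags;      /* SEC_FLAG_SPU_CODE marks embedded SPU code */
    uint32_t offset;     /* byte offset of the section within the file */
    uint32_t size;
} section_header_t;

typedef struct {
    uint32_t         num_sections;
    section_header_t sections[4];
    /* ... section contents would follow in a real file ... */
} combined_file_t;

/* Stand-in for the DMA transfer of an extracted section to an SPU. */
static void dma_section_to_spu(const combined_file_t *f, const section_header_t *s)
{
    printf("DMA %u bytes at offset %u to an SPU\n", s->size, s->offset);
    (void)f;
}

int main(void)
{
    combined_file_t f = {
        .num_sections = 2,
        .sections = {
            { 0,                 64,  128 },   /* PU code: executed locally */
            { SEC_FLAG_SPU_CODE, 192, 256 },   /* embedded SPU code */
        },
    };

    for (uint32_t i = 0; i < f.num_sections; i++)
        if (f.sections[i].flags & SEC_FLAG_SPU_CODE)
            dma_section_to_spu(&f, &f.sections[i]);
    return 0;
}
```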


    System and method for virtualization of processor resources
    10.
    Invention Application
    System and method for virtualization of processor resources (In Force)

    Publication No.: US20060069878A1

    Publication Date: 2006-03-30

    Application No.: US10955093

    Filing Date: 2004-09-30

    IPC Class: G06F12/08 G06F12/00

    Abstract: A system and method for virtualization of processor resources is presented. A thread is created on a processor and the processor's local memory is mapped into an effective address space. In doing so, the processor's local memory is accessible by other processors, regardless of whether the processor is running. Additional threads create additional local memory mappings into the effective address space. The effective address space corresponds either to physical local memory or to a "soft" copy area. When the processor is running, a different processor may access data located in the first processor's local memory directly from that processor's local storage area. When the processor is not running, a soft copy of the processor's local memory is stored in a memory location (e.g. locked cache memory, pinned system memory, or virtual memory) so that other processors can continue accessing it.
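
    A minimal C sketch of the effective-address indirection the abstract describes: other processors always go through the mapping, which resolves to the physical local store while the owning processor runs and to a "soft" copy when it does not. The structures and names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LOCAL_STORE_SIZE 256

typedef struct {
    bool     running;                        /* is the owning processor scheduled? */
    uint8_t  local_store[LOCAL_STORE_SIZE];  /* physical local memory */
    uint8_t  softcopy[LOCAL_STORE_SIZE];     /* e.g. pinned system memory */
} ls_mapping_t;

/* Resolve the effective address space to whichever copy is current. */
static uint8_t *resolve(ls_mapping_t *m)
{
    return m->running ? m->local_store : m->softcopy;
}

/* Called when the owning processor stops running: preserve local memory. */
static void save_softcopy(ls_mapping_t *m)
{
    memcpy(m->softcopy, m->local_store, LOCAL_STORE_SIZE);
    m->running = false;
}

int main(void)
{
    ls_mapping_t m = { .running = true };
    resolve(&m)[0] = 42;                     /* another processor writes via the mapping */

    save_softcopy(&m);                       /* owner stops running; data stays reachable */
    printf("read after preemption: %d\n", resolve(&m)[0]);
    return 0;
}
```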
