System and method for compiling source code for multi-processor environments
    1.
    Invention Application
    Status: Pending (Published)

    Publication No.: US20050071828A1

    Publication Date: 2005-03-31

    Application No.: US10671056

    Filing Date: 2003-09-25

    IPC Class: G06F9/45

    CPC Class: G06F9/44547 G06F8/447

    Abstract: A system and method for compiling source code for multi-processor environments is presented. Source code is compiled into an object file that includes multiple object code subtasks. Source code subtasks are compiled into object code subtasks using one of three approaches: 1) a lowbrow approach, 2) a brute force approach, or 3) a program directive approach. Each object code subtask is formatted to run on a particular processor type with a particular architecture, such as a microprocessor-based architecture or a digital signal processor-based architecture. At runtime, each object code subtask is loaded onto its corresponding processor type for execution.

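    As a rough illustration of the load step described in the abstract, the sketch below walks an object file whose subtasks carry a target-processor tag and dispatches each one to its processor type. All names here (subtask_t, PROC_MPU, PROC_DSP, load_onto) are invented for illustration and do not come from the patent.

        /* Hypothetical model of an object file whose subtasks are tagged with a
         * target processor type and dispatched to matching processors at load time. */
        #include <stdio.h>

        typedef enum { PROC_MPU, PROC_DSP } proc_type_t;  /* microprocessor vs. DSP architecture */

        typedef struct {
            const char          *name;      /* subtask symbol name                    */
            proc_type_t          target;    /* architecture the subtask was built for */
            const unsigned char *text;      /* compiled object code image             */
            size_t               text_len;
        } subtask_t;

        /* Stand-in for the runtime loader: a real system would copy the image
         * into the target processor's memory and start it there. */
        static void load_onto(const subtask_t *s)
        {
            printf("loading %-10s (%zu bytes) onto %s\n",
                   s->name, s->text_len, s->target == PROC_MPU ? "MPU" : "DSP");
        }

        int main(void)
        {
            static const unsigned char img[] = { 0x90, 0x90 };   /* placeholder code bytes */
            subtask_t objfile[] = {                /* one object file, several subtasks   */
                { "ui_loop",    PROC_MPU, img, sizeof img },
                { "fir_filter", PROC_DSP, img, sizeof img },
            };
            for (size_t i = 0; i < sizeof objfile / sizeof objfile[0]; i++)
                load_onto(&objfile[i]);            /* runtime dispatch by processor type  */
            return 0;
        }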

    System and method for task queue management of virtual devices using a plurality of processors
    2.
    Invention Application
    Status: Expired

    Publication No.: US20050081202A1

    Publication Date: 2005-04-14

    Application No.: US10670838

    Filing Date: 2003-09-25

    IPC Class: G06F9/46

    CPC Class: G06F9/505

    Abstract: A task queue manager manages the task queues corresponding to virtual devices. When a virtual device function is requested, the task queue manager determines whether a synergistic processing unit (SPU) is currently assigned to the virtual device task. If an SPU is already assigned, the request is queued in a task queue being read by that SPU. If an SPU has not been assigned, the task queue manager assigns one of the SPUs to the task queue. The queue manager assigns the task based upon which SPU is least busy as well as whether one of the SPUs recently performed the virtual device function. If an SPU recently performed the virtual device function, it is more likely that the code used to perform the function is still in the SPU's local memory and will not have to be retrieved from shared common memory using DMA operations.

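    The selection policy in the abstract (reuse an already assigned SPU, otherwise weigh recency of the function against queue depth) can be sketched as below. The struct fields and the tie-breaking rule are assumptions made for illustration, not the patent's interfaces.

        /* Illustrative SPU selection: prefer an SPU whose local store likely still
         * holds the function's code, otherwise pick the least busy SPU. */
        #include <stdio.h>

        #define NUM_SPUS 4

        typedef struct {
            int queue_depth;     /* how busy the SPU currently is               */
            int last_function;   /* id of the virtual-device function it last ran */
        } spu_t;

        static int pick_spu(const spu_t spus[], int assigned_spu, int function_id)
        {
            if (assigned_spu >= 0)               /* already assigned: reuse that SPU */
                return assigned_spu;
            int best = 0;
            for (int i = 1; i < NUM_SPUS; i++) {
                int i_recent    = (spus[i].last_function == function_id);
                int best_recent = (spus[best].last_function == function_id);
                /* recency wins first; among equals, take the shorter queue */
                if ((i_recent != best_recent) ? i_recent
                                              : (spus[i].queue_depth < spus[best].queue_depth))
                    best = i;
            }
            return best;
        }

        int main(void)
        {
            spu_t spus[NUM_SPUS] = { {3, 7}, {1, 2}, {2, 5}, {4, 5} };
            printf("function 5 -> SPU %d\n", pick_spu(spus, -1, 5)); /* SPU 2: recent and not busiest */
            printf("function 9 -> SPU %d\n", pick_spu(spus, -1, 9)); /* SPU 1: least busy overall     */
            return 0;
        }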

    System and method for balancing computational load across a plurality of processors
    3.
    Invention Application
    Status: Expired

    Publication No.: US20050081182A1

    Publication Date: 2005-04-14

    Application No.: US10670826

    Filing Date: 2003-09-25

    IPC Class: G06F9/44 G06F9/50

    CPC Class: G06F9/5044

    Abstract: A system and method for balancing computational load across a plurality of processors. Source code subtasks are compiled into byte code subtasks, which are translated into processor-specific object code subtasks at runtime. The processor-type selection is based upon one of three approaches: 1) a brute force approach, 2) a higher-level approach, or 3) a processor availability approach. Each object code subtask is loaded onto its corresponding processor type for execution. In one embodiment, a compiler stores a pointer in a byte code file that references the location of a byte code subtask. In this embodiment, the byte code subtask is stored in a shared library and, at runtime, a runtime loader uses the pointer to identify the location of the byte code subtask in order to translate it.

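    A minimal sketch of the runtime processor-type selection follows, assuming simple semantics for the three approaches named in the abstract (always offload, honor a compile-time hint, or check availability); the enum and struct names are invented, and the actual semantics of each approach in the patent may differ.

        /* Toy processor-type selection for a byte code subtask before translation. */
        #include <stdio.h>

        typedef enum { PROC_PU, PROC_SPU } proc_type_t;
        typedef enum { SELECT_BRUTE_FORCE, SELECT_HIGHER_LEVEL, SELECT_AVAILABILITY } strategy_t;

        typedef struct {
            const char *name;
            proc_type_t preferred;   /* hint recorded at compile time          */
            int         spu_idle;    /* runtime availability of SPU-type cores */
        } subtask_t;

        static proc_type_t choose(const subtask_t *t, strategy_t s)
        {
            switch (s) {
            case SELECT_BRUTE_FORCE:  return PROC_SPU;               /* always offload     */
            case SELECT_HIGHER_LEVEL: return t->preferred;           /* honor compile hint */
            default:                  return t->spu_idle ? PROC_SPU : PROC_PU;  /* availability */
            }
        }

        int main(void)
        {
            subtask_t t = { "matmul_kernel", PROC_SPU, 0 };
            printf("availability approach -> %s\n",
                   choose(&t, SELECT_AVAILABILITY) == PROC_SPU ? "SPU" : "PU");
            /* The chosen type would then drive translation of the byte code
             * subtask into that processor's object code before loading. */
            return 0;
        }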

    EFFICIENT TRIANGULAR SHAPED MESHES
    4.
    Invention Application
    Status: Pending (Published)

    Publication No.: US20070188487A1

    Publication Date: 2007-08-16

    Application No.: US11548242

    Filing Date: 2006-10-10

    IPC Class: G06T15/00

    CPC Class: G06T17/20 G06T15/00

    Abstract: The present invention renders a triangular mesh for use in graphical displays. The triangular mesh comprises triangle-shaped graphics primitives that represent a subdivided triangular shape. Each triangle-shaped graphics primitive shares defined vertices with adjoining triangle-shaped graphics primitives. These shared vertices are transmitted and employed for the rendering of the triangle-shaped graphics primitives.

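    To make the shared-vertex idea concrete, the sketch below numbers the vertices of a subdivided triangle on a triangular grid and emits index triples, so each interior vertex is transmitted once and reused by up to six adjoining primitives. The grid numbering is a common convention assumed here, not taken from the patent.

        /* Indexed, subdivided triangle: adjoining triangle-shaped primitives share
         * vertex ids instead of duplicating vertex data. */
        #include <stdio.h>

        #define LEVELS 3   /* subdivision level: yields LEVELS*LEVELS small triangles */

        static int vid(int row, int col) { return row * (row + 1) / 2 + col; }

        int main(void)
        {
            int verts = (LEVELS + 1) * (LEVELS + 2) / 2;
            printf("%d shared vertices, %d triangles\n", verts, LEVELS * LEVELS);

            /* Emit index triples; shared vertex ids appear in several triples. */
            for (int r = 0; r < LEVELS; r++) {
                for (int c = 0; c <= r; c++)      /* upward-pointing triangles   */
                    printf("tri %d %d %d\n", vid(r, c), vid(r + 1, c), vid(r + 1, c + 1));
                for (int c = 0; c < r; c++)       /* downward-pointing triangles */
                    printf("tri %d %d %d\n", vid(r, c), vid(r + 1, c + 1), vid(r, c + 1));
            }
            return 0;
        }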

    System and method for partitioning processor resources based on memory usage
    5.
    Invention Application
    Status: Expired

    Publication No.: US20060095901A1

    Publication Date: 2006-05-04

    Application No.: US11050020

    Filing Date: 2005-02-03

    IPC Class: G06F9/45

    Abstract: A system and method for partitioning processor resources based on memory usage is provided. A compiler determines the extent to which a process is memory-bound and accordingly divides the process into a number of threads. When a first thread encounters a prolonged instruction, the compiler inserts a conditional branch to a second thread. When the second thread encounters a prolonged instruction, a conditional branch to a third thread is executed. This continues until the last thread conditionally branches back to the first thread. An indirect segmented register file is used so that the "return to" and "branch to" logical registers (e.g., R1 and R2) are the same for each thread. These logical registers are mapped to hardware registers that store actual addresses. The indirect mapping is altered to bypass completed threads. When the last thread completes, it may signal an external process.

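    The "bypass completed threads" behavior can be modeled with a small indirection table, as in the toy C sketch below; the ring of threads and the next_of array are illustrative stand-ins for the patented indirect segmented register file, not its actual design.

        /* Toy model of the round-robin thread chain and the indirection that
         * lets branches skip threads which have already completed. */
        #include <stdio.h>

        #define NTHREADS 4

        static int next_of[NTHREADS];   /* stands in for the indirect "branch to" mapping */
        static int done[NTHREADS];

        /* A thread that finishes is spliced out so later branches bypass it. */
        static void retire(int t)
        {
            done[t] = 1;
            for (int i = 0; i < NTHREADS; i++)
                if (!done[i] && next_of[i] == t)
                    next_of[i] = next_of[t];
        }

        int main(void)
        {
            for (int i = 0; i < NTHREADS; i++)
                next_of[i] = (i + 1) % NTHREADS;   /* ... and the last branches back to T0 */

            retire(2);                             /* thread 2 completes                    */
            for (int t = 0; t < NTHREADS; t++)
                if (!done[t])
                    printf("T%d now branches to T%d on a prolonged instruction\n",
                           t, next_of[t]);
            return 0;
        }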

    System and method for hiding memory latency
    6.
    Invention Application
    Status: Pending (Published)

    Publication No.: US20060080661A1

    Publication Date: 2006-04-13

    Application No.: US10960609

    Filing Date: 2004-10-07

    IPC Class: G06F9/46

    CPC Class: G06F9/322 G06F8/41 G06F9/3851

    Abstract: A system and method for hiding memory latency in a multi-thread environment is presented. Branch Indirect and Set Link (BISL) and/or Branch Indirect and Set Link if External Data (BISLED) instructions are placed in thread code during compilation at points that correspond to a prolonged instruction. A prolonged instruction is an instruction that instigates latency in a computer system, such as a DMA instruction. When a first thread encounters a BISL or a BISLED instruction, it passes control to a second thread while the first thread's prolonged instruction executes. In turn, the computer system masks the latency of the first thread's prolonged instruction. The system can be optimized for the memory latency by creating more threads and dividing a register pool amongst the threads, further hiding memory latency in operations that are highly memory bound.

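    The hand-off described above, where a thread that issues a long-latency (DMA-like) operation yields so another thread can run, is modeled very loosely in the sketch below. BISL/BISLED are the instruction names from the abstract, but the scheduler, the step counts, and the data structures here are invented for illustration.

        /* Rough scheduling model: a thread yields at its DMA-like step and the
         * other thread runs while the transfer is "in flight". */
        #include <stdio.h>

        typedef struct { const char *name; int pc; int dma_remaining; } thread_t;

        /* Run one step of a thread; returns 1 while it still has work to do. */
        static int step(thread_t *t)
        {
            if (t->dma_remaining > 0) {     /* prolonged instruction still in flight */
                t->dma_remaining--;         /* stands in for waiting on the transfer */
                return 1;
            }
            if (t->pc == 1) {               /* this step issues the DMA-like operation */
                printf("%s: start DMA, branch to the other thread (BISLED-like yield)\n", t->name);
                t->dma_remaining = 2;
            } else {
                printf("%s: compute step %d\n", t->name, t->pc);
            }
            return ++t->pc < 4;
        }

        int main(void)
        {
            thread_t a = { "thread A", 0, 0 }, b = { "thread B", 0, 0 };
            int alive = 2, turn = 0;
            while (alive > 0) {             /* alternate threads so latency overlaps work */
                thread_t *t = turn ? &b : &a;
                if (t->pc < 4 && !step(t))
                    alive--;
                turn = !turn;
            }
            return 0;
        }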

    Apparatus and method for efficient communication of producer/consumer buffer status
    7.
    Invention Application
    Status: Pending (Published)

    Publication No.: US20070174411A1

    Publication Date: 2007-07-26

    Application No.: US11340453

    Filing Date: 2006-01-26

    IPC Class: G06F15/167

    CPC Class: G06F15/17337

    Abstract: An apparatus and method for efficient communication of producer/consumer buffer status are provided. Devices in a data processing system use their signal notification channels to notify each other of updates to the head and tail pointers of a shared buffer region when they perform operations on that region. Thus, when a producer device writes data to the shared buffer region, an update to the head pointer is written to a signal notification channel of a consumer device. When a consumer device reads data from the shared buffer region, the consumer device writes a tail pointer update to a signal notification channel of the producer device. In addition, channels may operate in a blocking mode so that the corresponding device is kept in a low power state until an update is received over the channel.

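    A single-threaded toy model of the head/tail exchange is sketched below: each side's signal notification channel is reduced to a plain int that the other side reads, and the blocking/low-power mode is not modeled. All names are illustrative assumptions.

        /* Producer writes its head to the consumer's "channel"; consumer writes
         * its tail to the producer's "channel". */
        #include <stdio.h>

        #define RING 8

        static int ring[RING];
        static int consumer_signal_head = 0;   /* producer writes its head here */
        static int producer_signal_tail = 0;   /* consumer writes its tail here */

        static void produce(int *head, int value)
        {
            if ((*head + 1) % RING == producer_signal_tail)
                return;                              /* buffer full: drop for brevity */
            ring[*head] = value;
            *head = (*head + 1) % RING;
            consumer_signal_head = *head;            /* notify consumer of new head   */
        }

        static void consume(int *tail)
        {
            while (*tail != consumer_signal_head) {  /* data available?               */
                printf("consumed %d\n", ring[*tail]);
                *tail = (*tail + 1) % RING;
            }
            producer_signal_tail = *tail;            /* notify producer of new tail   */
        }

        int main(void)
        {
            int head = 0, tail = 0;
            produce(&head, 10);
            produce(&head, 20);
            consume(&tail);
            produce(&head, 30);
            consume(&tail);
            return 0;
        }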

    System and method for dynamically partitioning processing across plurality of heterogeneous processors
    8.
    Invention Application
    Status: Expired

    Publication No.: US20050081181A1

    Publication Date: 2005-04-14

    Application No.: US10670824

    Filing Date: 2003-09-25

    Abstract: A program is compiled into at least two object files: one object file for each of the supported processor environments. During compilation, code characteristics, such as data locality, computational intensity, and data parallelism, are analyzed and recorded in the object file. During run time, the code characteristics are combined with runtime considerations, such as the current load on the processors and the size of the data being processed, to arrive at an overall value. The overall value is then used to determine which of the processors will be assigned the task. The values are assigned based on the characteristics of the various processors. For example, if one processor is better at handling intensive computations against large streams of data, programs that are highly computationally intensive and process large quantities of data are weighted in favor of that processor. The corresponding object file is then loaded and executed on the assigned processor.

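    One way to picture the "overall value" is a weighted score, as in the sketch below. The weights and the linear formula are invented for illustration; the abstract only states that compile-time characteristics and runtime considerations are combined.

        /* Combine compile-time code characteristics with runtime conditions into
         * a single score used to pick which processor's object file to load. */
        #include <stdio.h>

        typedef struct {            /* recorded at compile time, per the abstract */
            double data_locality;
            double compute_intensity;
            double data_parallelism;
        } code_traits_t;

        typedef struct {            /* observed at run time */
            double current_load;    /* 0 = idle, 1 = saturated */
            double data_megabytes;
        } runtime_t;

        /* Positive score favors the SPU-like streaming processor; otherwise the PU. */
        static double score_for_spu(code_traits_t c, runtime_t r)
        {
            double fit  = 0.5 * c.compute_intensity + 0.3 * c.data_parallelism
                        + 0.2 * c.data_locality;
            double cost = 0.6 * r.current_load - 0.1 * (r.data_megabytes > 1.0);
            return fit - cost;      /* large data streams tilt toward the SPU */
        }

        int main(void)
        {
            code_traits_t traits = { 0.7, 0.9, 0.8 };
            runtime_t now = { 0.25, 64.0 };
            double s = score_for_spu(traits, now);
            printf("overall value %.2f -> load %s object file\n", s, s > 0 ? "SPU" : "PU");
            return 0;
        }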

    System and method for virtual devices using a plurality of processors
    9.
    Invention Application
    Status: Expired

    Publication No.: US20050071526A1

    Publication Date: 2005-03-31

    Application No.: US10670835

    Filing Date: 2003-09-25

    CPC Class: G06F9/4843 G06F9/544

    Abstract: A system and method is provided to allow virtual devices that use a plurality of processors in a multiprocessor system, such as the BE environment. Using this method, a synergistic processing unit (SPU) can either be dedicated to performing a particular function (e.g., audio, video, etc.) or a single SPU can be programmed to perform several functions on behalf of the other processors in the system. The application, preferably running in one of the primary (PU) processors, issues IOCTL commands through device drivers that correspond to SPUs. The kernel managing the primary processors responds by sending an appropriate message to the SPU that is performing the dedicated function. Using this method, an SPU can be virtualized to swap among multiple tasks or dedicated to performing a particular task.

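    From the application's side, the flow in the abstract reduces to an ioctl against a device driver that fronts an SPU-backed virtual device. The sketch below shows only that user-space call; the device path /dev/vdev_audio0 and the request code are hypothetical, and the kernel-to-SPU message passing is not shown.

        /* User-space view only: open a device node standing for an SPU-backed
         * virtual device and issue an IOCTL that the kernel would relay to the SPU. */
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>

        #define VDEV_START_DECODE 0x4001        /* illustrative request code only */

        int main(void)
        {
            int fd = open("/dev/vdev_audio0", O_RDWR);   /* assumed device node */
            if (fd < 0) {
                perror("open");                 /* expected outside the BE system */
                return 1;
            }
            int frames = 128;
            if (ioctl(fd, VDEV_START_DECODE, &frames) < 0)   /* kernel relays to the SPU */
                perror("ioctl");
            close(fd);
            return 0;
        }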

    System and method for ray tracing with depth buffered display
    10.
    Invention Application
    Status: Expired

    Publication No.: US20070035544A1

    Publication Date: 2007-02-15

    Application No.: US11201651

    Filing Date: 2005-08-11

    IPC Class: G06T15/40

    Abstract: A system and method for generating an image that includes ray traced pixel data and rasterized pixel data is presented. A synergistic processing unit (SPU) uses a rendering algorithm to generate ray traced data for objects that require high-quality image rendering. The ray traced data is fragmented, whereby each fragment includes a ray traced pixel depth value and a ray traced pixel color value. A rasterizer compares ray traced pixel depth values to corresponding rasterized pixel depth values and overwrites ray traced pixel data with rasterized pixel data when the corresponding rasterized fragment is "closer" to the viewing point, producing composite data. A display subsystem uses the resulting composite data to generate an image on the user's display.

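    The depth-compare/composite step can be illustrated with the short sketch below, which keeps, per pixel, whichever fragment is closer to the viewpoint (assuming smaller depth means closer); the fragment layout and buffer sizes are invented for illustration.

        /* Per-pixel compositing of ray traced and rasterized fragments by depth. */
        #include <stdio.h>
        #include <stdint.h>

        #define W 4
        #define H 1

        typedef struct { float depth; uint32_t color; } fragment_t;

        static void composite(fragment_t out[], const fragment_t ray[],
                              const fragment_t ras[], int n)
        {
            for (int i = 0; i < n; i++)                       /* keep the closer fragment */
                out[i] = (ras[i].depth < ray[i].depth) ? ras[i] : ray[i];
        }

        int main(void)
        {
            fragment_t ray[W * H] = { {0.3f, 0xFF0000}, {0.6f, 0x00FF00},
                                      {0.2f, 0x0000FF}, {0.9f, 0xFFFFFF} };
            fragment_t ras[W * H] = { {0.5f, 0x111111}, {0.4f, 0x222222},
                                      {0.7f, 0x333333}, {0.1f, 0x444444} };
            fragment_t composite_buf[W * H];

            composite(composite_buf, ray, ras, W * H);
            for (int i = 0; i < W * H; i++)                   /* feed to display subsystem */
                printf("pixel %d: color 0x%06X depth %.1f\n",
                       i, (unsigned) composite_buf[i].color, composite_buf[i].depth);
            return 0;
        }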