1. EFFICIENT TRIANGULAR SHAPED MESHES
    Invention Application, Pending (Published)

    Publication No.: US20070188487A1

    Publication Date: 2007-08-16

    Application No.: US11548242

    Filing Date: 2006-10-10

    IPC Class: G06T15/00

    CPC Class: G06T17/20 G06T15/00

    Abstract: The present invention renders a triangular mesh for use in graphical displays. The triangular mesh comprises triangle-shaped graphics primitives that represent a subdivided triangular shape. Each triangle-shaped graphics primitive shares defined vertices with adjoining triangle-shaped graphics primitives. These shared vertices are transmitted and used to render the triangle-shaped graphics primitives.
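
    As a rough illustration of the shared-vertex idea, the sketch below (in C) stores each vertex once and lets the triangle-shaped primitives reference it by index; the struct names, coordinates, and print-based "rendering" are illustrative only and are not taken from the patent.

        #include <stdio.h>

        /* One shared vertex: stored once, referenced by every triangle that uses it. */
        typedef struct { float x, y, z; } Vertex;

        /* A triangle-shaped primitive holds three indices into the shared vertex array. */
        typedef struct { int v[3]; } Triangle;

        int main(void)
        {
            /* A triangle subdivided into four smaller triangles: 6 unique vertices
               instead of the 12 that unshared triangles would need. */
            Vertex verts[6] = {
                {0.0f, 0.0f, 0.0f}, {1.0f, 0.0f, 0.0f}, {2.0f, 0.0f, 0.0f},
                {0.5f, 1.0f, 0.0f}, {1.5f, 1.0f, 0.0f}, {1.0f, 2.0f, 0.0f}
            };
            Triangle tris[4] = {
                {{0, 1, 3}}, {{1, 4, 3}}, {{1, 2, 4}}, {{3, 4, 5}}
            };

            /* "Rendering" here just prints each primitive from the shared data. */
            for (int i = 0; i < 4; i++) {
                printf("triangle %d:", i);
                for (int j = 0; j < 3; j++) {
                    Vertex *p = &verts[tris[i].v[j]];
                    printf(" (%.1f, %.1f, %.1f)", p->x, p->y, p->z);
                }
                printf("\n");
            }
            return 0;
        }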

2. Apparatus and method for efficient communication of producer/consumer buffer status
    Invention Application, Pending (Published)

    Publication No.: US20070174411A1

    Publication Date: 2007-07-26

    Application No.: US11340453

    Filing Date: 2006-01-26

    IPC Class: G06F15/167

    CPC Class: G06F15/17337

    Abstract: An apparatus and method for efficient communication of producer/consumer buffer status are provided. With the apparatus and method, devices in a data processing system notify each other of updates to head and tail pointers of a shared buffer region when the devices perform operations on the shared buffer region, using signal notification channels of the devices. Thus, when a producer device that produces data to the shared buffer region writes data to the shared buffer region, an update to the head pointer is written to a signal notification channel of a consumer device. When a consumer device reads data from the shared buffer region, the consumer device writes a tail pointer update to a signal notification channel of the producer device. In addition, channels may operate in a blocking mode so that the corresponding device is kept in a low power state until an update is received over the channel.
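
    The head/tail notification scheme can be pictured with an ordinary ring buffer in which each side learns the other side's pointer only through a stand-in for its signal notification channel. The C sketch below is a single-process simplification; the variable names and channel stand-ins are assumptions, not the patented hardware mechanism.

        #include <stdio.h>

        #define BUF_SIZE 8

        /* Shared buffer region plus the two "signal notification channels". In the
           patent the channels are per-device facilities; here they are plain
           variables standing in for them (an illustrative simplification). */
        static int buffer[BUF_SIZE];
        static unsigned head_signal;  /* producer posts its new head here for the consumer */
        static unsigned tail_signal;  /* consumer posts its new tail here for the producer */

        /* Producer: write one item if there is space, then notify the consumer. */
        static int produce(unsigned *head, int value)
        {
            unsigned tail = tail_signal;            /* last tail the consumer reported */
            if (*head - tail == BUF_SIZE)           /* buffer full */
                return 0;
            buffer[*head % BUF_SIZE] = value;
            (*head)++;
            head_signal = *head;                    /* "write to consumer's channel" */
            return 1;
        }

        /* Consumer: read one item if any is available, then notify the producer. */
        static int consume(unsigned *tail, int *value)
        {
            unsigned head = head_signal;            /* last head the producer reported */
            if (*tail == head)                      /* buffer empty */
                return 0;
            *value = buffer[*tail % BUF_SIZE];
            (*tail)++;
            tail_signal = *tail;                    /* "write to producer's channel" */
            return 1;
        }

        int main(void)
        {
            unsigned head = 0, tail = 0;
            for (int i = 0; i < 12; i++)
                produce(&head, i);                  /* only the first 8 fit */
            int v;
            while (consume(&tail, &v))
                printf("consumed %d\n", v);
            return 0;
        }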

3. System and method for partitioning processor resources based on memory usage
    Invention Application, Lapsed

    Publication No.: US20060095901A1

    Publication Date: 2006-05-04

    Application No.: US11050020

    Filing Date: 2005-02-03

    IPC Class: G06F9/45

    Abstract: A system and method for partitioning processor resources based on memory usage is provided. A compiler determines the extent to which a process is memory-bound and accordingly divides the process into a number of threads. When a first thread encounters a prolonged instruction, the compiler inserts a conditional branch to a second thread. When the second thread encounters a prolonged instruction, a conditional branch to a third thread is executed. This continues until the last thread conditionally branches back to the first thread. An indirect segmented register file is used so that the “return to” and “branch to” logical registers are the same (e.g., R1 and R2) for each thread. These logical registers are mapped to hardware registers that store actual addresses. The indirect mapping is altered to bypass completed threads. When the last thread completes, it may signal an external process.
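
    One way to picture the thread chaining and the bypassing of completed threads is a next-thread table that plays the role of the indirectly mapped "branch to" registers. The C sketch below is a simplified single-threaded analogue; the table-based rewiring is an illustrative stand-in for the register remapping described in the abstract.

        #include <stdio.h>

        #define NTHREADS 4

        /* next_thread[i] is the thread that thread i hands control to when it hits a
           prolonged instruction. Rewiring this table is the sketch's analogue of
           remapping the logical registers so completed threads are bypassed. */
        static int next_thread[NTHREADS];
        static int work_left[NTHREADS];

        /* Remove a finished thread from the chain so nobody branches to it again. */
        static void bypass(int t)
        {
            for (int i = 0; i < NTHREADS; i++)
                if (next_thread[i] == t)
                    next_thread[i] = next_thread[t];
        }

        int main(void)
        {
            for (int i = 0; i < NTHREADS; i++) {
                next_thread[i] = (i + 1) % NTHREADS;   /* round-robin chain */
                work_left[i] = i + 1;                  /* uneven amounts of work */
            }

            int current = 0, remaining = NTHREADS;
            while (remaining > 0) {
                /* Do one unit of work, then hit a "prolonged instruction" and yield. */
                printf("thread %d works (%d units left)\n", current, work_left[current]);
                if (--work_left[current] == 0) {
                    remaining--;
                    bypass(current);                   /* completed: unlink from the chain */
                }
                current = next_thread[current];        /* branch to the next live thread */
            }
            return 0;
        }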

4. System and method for hiding memory latency
    Invention Application, Pending (Published)

    Publication No.: US20060080661A1

    Publication Date: 2006-04-13

    Application No.: US10960609

    Filing Date: 2004-10-07

    IPC Class: G06F9/46

    CPC Class: G06F9/322 G06F8/41 G06F9/3851

    Abstract: A system and method for hiding memory latency in a multi-thread environment is presented. Branch Indirect and Set Link (BISL) and/or Branch Indirect and Set Link if External Data (BISLED) instructions are placed in thread code during compilation at points that correspond to a prolonged instruction. A prolonged instruction is an instruction that instigates latency in a computer system, such as a DMA instruction. When a first thread encounters a BISL or a BISLED instruction, the first thread passes control to a second thread while the first thread's prolonged instruction executes. In turn, the computer system masks the latency of the first thread's prolonged instruction. The system can be optimized based on the memory latency by creating more threads and further dividing a register pool amongst the threads to further hide memory latency in operations that are highly memory bound.
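
    The effect of a BISLED-style branch can be approximated in plain C as: if the external data has not arrived yet, hand control to another thread instead of stalling. The sketch below fakes DMA completion with a counter; the function names and the counter are illustrative assumptions, not the actual instructions.

        #include <stdio.h>

        /* A fake asynchronous transfer that "completes" after a fixed number of
           checks; it stands in for the DMA operation whose completion a
           BISLED-style branch would test. The counter is purely illustrative. */
        static int dma_ticks_remaining = 3;

        static int dma_complete(void)
        {
            if (dma_ticks_remaining > 0)
                dma_ticks_remaining--;
            return dma_ticks_remaining == 0;
        }

        static void thread_b_do_some_work(void)
        {
            printf("thread B: doing useful work while thread A's DMA is in flight\n");
        }

        int main(void)
        {
            printf("thread A: started a prolonged (DMA-like) operation\n");

            /* The compiler would place a BISLED-style test here: if the external
               data has not arrived, branch to another thread instead of stalling. */
            while (!dma_complete())
                thread_b_do_some_work();    /* control handed to the second thread */

            printf("thread A: data arrived, resuming after the prolonged operation\n");
            return 0;
        }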

5. System and method for compiling source code for multi-processor environments
    Invention Application, Pending (Published)

    Publication No.: US20050071828A1

    Publication Date: 2005-03-31

    Application No.: US10671056

    Filing Date: 2003-09-25

    IPC Class: G06F9/45

    CPC Class: G06F9/44547 G06F8/447

    Abstract: A system and method for compiling source code for multi-processor environments is presented. Source code is compiled into an object file that includes multiple object code subtasks. Source code subtasks are compiled into object code subtasks using one of three approaches: 1) a lowbrow approach, 2) a brute force approach, and 3) a program directive approach. Each object code subtask is formatted to run on a particular processor type with a particular architecture, such as a microprocessor-based architecture or a digital signal processor-based architecture. During runtime, each object code subtask is loaded onto its corresponding processor type for execution.
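
    A combined object file of this kind can be pictured as a list of subtask descriptors, each tagged with the processor type it was compiled for, from which a loader picks the matching entries at run time. The C sketch below is a schematic of that layout; the enum values, names, and sizes are invented for illustration.

        #include <stdio.h>

        /* Processor types a subtask can target (illustrative; the abstract contrasts
           a microprocessor-based and a digital signal processor-based architecture). */
        typedef enum { PROC_MICRO, PROC_DSP } ProcType;

        /* One object code subtask inside the combined object file: a target type plus
           an opaque blob of code, represented here only by a name and a size. */
        typedef struct {
            const char *name;
            ProcType    target;
            size_t      code_size;
        } Subtask;

        /* "Load" every subtask whose target matches the processor asking for work. */
        static void load_for(ProcType proc, const Subtask *tasks, int n)
        {
            for (int i = 0; i < n; i++)
                if (tasks[i].target == proc)
                    printf("loading %s (%zu bytes) onto %s\n",
                           tasks[i].name, tasks[i].code_size,
                           proc == PROC_MICRO ? "microprocessor" : "DSP");
        }

        int main(void)
        {
            Subtask object_file[] = {
                { "control_loop",  PROC_MICRO, 4096 },
                { "fft_kernel",    PROC_DSP,   2048 },
                { "filter_kernel", PROC_DSP,   1024 },
            };
            int n = sizeof(object_file) / sizeof(object_file[0]);

            load_for(PROC_MICRO, object_file, n);
            load_for(PROC_DSP,   object_file, n);
            return 0;
        }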

6. System and method for task queue management of virtual devices using a plurality of processors
    Invention Application, Lapsed

    Publication No.: US20050081202A1

    Publication Date: 2005-04-14

    Application No.: US10670838

    Filing Date: 2003-09-25

    IPC Class: G06F9/46

    CPC Class: G06F9/505

    Abstract: A task queue manager manages the task queues corresponding to virtual devices. When a virtual device function is requested, the task queue manager determines whether an SPU is currently assigned to the virtual device task. If an SPU is already assigned, the request is queued in a task queue being read by the SPU. If an SPU has not been assigned, the task queue manager assigns one of the SPUs to the task queue. The queue manager assigns the task based upon which SPU is least busy as well as whether one of the SPUs recently performed the virtual device function. If an SPU recently performed the virtual device function, it is more likely that the code used to perform the function is still in the SPU's local memory and will not have to be retrieved from shared common memory using DMA operations.
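
    The selection policy in the abstract (reuse an SPU already reading the queue; otherwise prefer the least busy SPU, breaking ties in favor of one that recently ran the same function) can be outlined as below in C; the struct fields and the exact tie-break rule are assumptions made for the sketch.

        #include <stdio.h>

        #define NUM_SPUS 4

        typedef struct {
            int assigned_queue;   /* task queue this SPU is reading, or -1 if none */
            int pending_tasks;    /* rough measure of how busy the SPU is */
            int last_function;    /* virtual device function it ran most recently */
        } Spu;

        /* Pick an SPU for a virtual device function: reuse the one already assigned
           to the queue; otherwise take the least busy SPU, preferring one whose local
           store likely still holds the code for this function. */
        static int pick_spu(Spu spus[], int queue_id, int function)
        {
            for (int i = 0; i < NUM_SPUS; i++)
                if (spus[i].assigned_queue == queue_id)
                    return i;                           /* already assigned */

            int best = 0;
            for (int i = 1; i < NUM_SPUS; i++) {
                int less_busy = spus[i].pending_tasks < spus[best].pending_tasks;
                int affinity  = spus[i].pending_tasks == spus[best].pending_tasks &&
                                spus[i].last_function == function &&
                                spus[best].last_function != function;
                if (less_busy || affinity)
                    best = i;
            }
            spus[best].assigned_queue = queue_id;
            return best;
        }

        int main(void)
        {
            Spu spus[NUM_SPUS] = { {-1, 3, 7}, {-1, 1, 2}, {-1, 1, 5}, {-1, 4, 5} };
            int spu = pick_spu(spus, 0, 5);
            printf("virtual device function 5 queued on SPU %d\n", spu);  /* expect SPU 2 */
            return 0;
        }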

7. System and method for dynamically partitioning processing across plurality of heterogeneous processors
    Invention Application, Lapsed

    Publication No.: US20050081181A1

    Publication Date: 2005-04-14

    Application No.: US10670824

    Filing Date: 2003-09-25

    Abstract: A program is compiled into at least two object files: one object file for each of the supported processor environments. During compilation, code characteristics, such as data locality, computational intensity, and data parallelism, are analyzed and recorded in the object file. During run time, the code characteristics are combined with runtime considerations, such as the current load on the processors and the size of the data being processed, to arrive at an overall value. The overall value is then used to determine which of the processors will be assigned the task. The values are assigned based on the characteristics of the various processors. For example, if one processor is better at handling intensive computations against large streams of data, programs that are highly computationally intensive and process large quantities of data are weighted in favor of that processor. The corresponding object file is then loaded and executed on the assigned processor.
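
    The way compile-time code characteristics and runtime considerations combine into an overall value can be sketched as a weighted score computed per processor, as below in C; the particular weights, 0..1 scales, and size-bias term are illustrative assumptions rather than the patent's actual formula.

        #include <stdio.h>

        /* Code characteristics recorded in the object file at compile time, each
           normalised to the range 0..1 (the values used below are illustrative). */
        typedef struct {
            double data_locality;
            double computational_intensity;
            double data_parallelism;
        } CodeProfile;

        /* Per-processor weights plus runtime state; a processor that is good at
           intensive computation over large data streams weights those terms highly. */
        typedef struct {
            const char *name;
            double w_locality, w_intensity, w_parallelism;
            double current_load;    /* 0 = idle, 1 = saturated */
        } Processor;

        /* Combine compile-time characteristics with runtime considerations
           (current load, data size) into one overall value; higher is a better fit. */
        static double overall_value(const Processor *p, const CodeProfile *c,
                                    double data_size_mb)
        {
            double fit = p->w_locality    * c->data_locality +
                         p->w_intensity   * c->computational_intensity +
                         p->w_parallelism * c->data_parallelism;
            double size_bias = data_size_mb > 1.0 ? p->w_parallelism : p->w_locality;
            return (fit + 0.1 * size_bias) * (1.0 - p->current_load);
        }

        int main(void)
        {
            CodeProfile profile = { 0.4, 0.9, 0.8 };   /* locality, intensity, parallelism */
            Processor procs[] = {
                { "general-purpose core", 0.6, 0.3, 0.2, 0.2 },
                { "streaming core",       0.2, 0.8, 0.9, 0.5 },
            };
            int n = sizeof(procs) / sizeof(procs[0]);

            int best = 0;
            for (int i = 1; i < n; i++)
                if (overall_value(&procs[i], &profile, 64.0) >
                    overall_value(&procs[best], &profile, 64.0))
                    best = i;
            printf("task assigned to: %s\n", procs[best].name);
            return 0;
        }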

8. System and method for virtual devices using a plurality of processors
    Invention Application, Lapsed

    Publication No.: US20050071526A1

    Publication Date: 2005-03-31

    Application No.: US10670835

    Filing Date: 2003-09-25

    CPC Class: G06F9/4843 G06F9/544

    Abstract: A system and method is provided to allow virtual devices that use a plurality of processors in a multiprocessor system, such as the BE environment. Using this method, a synergistic processing unit (SPU) can either be dedicated to performing a particular function (i.e., audio, video, etc.) or a single SPU can be programmed to perform several functions on behalf of the other processors in the system. The application, preferably running in one of the primary (PU) processors, issues IOCTL commands through device drivers that correspond to SPUs. The kernel managing the primary processors responds by sending an appropriate message to the SPU that is performing the dedicated function. Using this method, an SPU can be virtualized for swapping multiple tasks or dedicated to performing a particular task.
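
    From the application's side the mechanism looks like an ordinary ioctl on a device node whose driver forwards the request to an SPU. The C sketch below shows only that user-space side; the device path, request code, and request struct are hypothetical placeholders, not names from the patent or from a real driver.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        /* Hypothetical device node and request code: placeholders for whatever a
           kernel driver backing a virtual device would actually expose. */
        #define VIRTUAL_AUDIO_DEV   "/dev/virtual_audio0"
        #define VDEV_SUBMIT_REQUEST 0x1001

        struct vdev_request {
            unsigned int function;   /* which virtual device function to run */
            const void  *data;
            size_t       length;
        };

        int main(void)
        {
            const char samples[] = "example payload";
            struct vdev_request req = { .function = 1, .data = samples,
                                        .length = sizeof(samples) };

            /* The application only talks to the device driver; the kernel, in turn,
               messages whichever SPU is performing this virtual device's function. */
            int fd = open(VIRTUAL_AUDIO_DEV, O_RDWR);
            if (fd < 0) {
                perror("open");      /* expected here, since the device is hypothetical */
                return 1;
            }
            if (ioctl(fd, VDEV_SUBMIT_REQUEST, &req) < 0)
                perror("ioctl");
            close(fd);
            return 0;
        }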

9. Apparatus and method for performing externally assisted calls in a heterogeneous processing complex
    Invention Application, Lapsed

    Publication No.: US20070104204A1

    Publication Date: 2007-05-10

    Application No.: US11269290

    Filing Date: 2005-11-08

    IPC Class: H04L12/56

    CPC Class: G06F9/547

    Abstract: An apparatus and method are provided for accessing, by an application running on a first processor, operating system services from an operating system running on a second processor by performing an assisted call. A data plane processor first constructs a parameter area based on the input and output parameters of the function that requires control processor assistance. The current values of the input parameters are copied into the parameter area. An assisted call message is generated from a combination of a pointer to the parameter area and the specific library function opcode for the library function being called. The assisted call message is placed into the processor's stack immediately following a stop-and-signal instruction. By executing the stop-and-signal instruction, the data plane processor signals the control plane processor to perform the library function corresponding to the opcode on its behalf.
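
    The assisted-call handshake can be sketched as: fill a parameter area, pack the library-function opcode together with a pointer to that area into one message word, then signal the control processor. In the C sketch below the parameter layout, opcode value, and the stop_and_signal placeholder are all illustrative; the real stop-and-signal is a processor instruction, not a C function.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Illustrative parameter area for a library call that needs control-processor
           assistance; the layout and opcode value are made up for the sketch. */
        struct open_params {
            char     path[64];   /* input */
            uint32_t flags;      /* input */
            int32_t  result;     /* output, filled in by the control processor */
        };

        #define OPCODE_ASSISTED_OPEN 0x21u

        /* Combine the library-function opcode with a pointer to the parameter area
           into a single assisted-call message word. */
        static uint64_t make_assisted_call_message(uint32_t opcode, const void *params)
        {
            return ((uint64_t)opcode << 32) | (uint32_t)(uintptr_t)params;
        }

        /* Placeholder for the stop-and-signal step: on real hardware this is an
           instruction that halts the data plane processor and interrupts the control
           plane processor, which then reads the message placed after it. */
        static void stop_and_signal(uint64_t message)
        {
            printf("signalling control processor, message = 0x%016llx\n",
                   (unsigned long long)message);
        }

        int main(void)
        {
            struct open_params p = { .flags = 0 };
            strncpy(p.path, "/tmp/example.dat", sizeof(p.path) - 1);

            uint64_t msg = make_assisted_call_message(OPCODE_ASSISTED_OPEN, &p);
            stop_and_signal(msg);

            /* After the control processor performs the call, p.result would hold the
               return value and execution would resume here. */
            return 0;
        }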

10. System and method for managing position independent code using a software framework
    Invention Application, Lapsed

    Publication No.: US20060112368A1

    Publication Date: 2006-05-25

    Application No.: US10988288

    Filing Date: 2004-11-12

    IPC Class: G06F9/44

    CPC Class: G06F9/44526

    Abstract: A system and method for managing position independent code using a software framework is presented. A software framework provides the ability to cache multiple plug-ins which are loaded in a processor's local storage. A processor receives a command or data stream from another processor, which includes information corresponding to a particular plug-in. The processor uses the plug-in identifier to load the plug-in from shared memory into local memory before it is required, in order to minimize latency. When the data stream requests the processor to use the plug-in, the processor retrieves a location offset corresponding to the plug-in and applies the plug-in to the data stream. A plug-in manager manages an entry point table that identifies the memory location corresponding to each plug-in; therefore, plug-ins may be placed anywhere in a processor's local memory.
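
    The entry point table can be pictured as a small map from plug-in identifier to the plug-in's current location in local memory, filled in ahead of time when the command stream announces the plug-in. In the C sketch below, plug-ins are plain functions and the "DMA" prefetch is a pointer copy; both simplifications are assumptions made for illustration.

        #include <stdio.h>

        #define MAX_PLUGINS 4

        /* A plug-in here is just a function applied to the data stream; in the real
           framework it would be a block of position independent code copied into the
           processor's local storage. */
        typedef int (*plugin_fn)(int value);

        static int plugin_scale(int v)  { return v * 2; }
        static int plugin_offset(int v) { return v + 100; }

        /* Entry point table kept by the plug-in manager: plug-in id -> where the
           plug-in currently lives in local memory (NULL means not loaded yet). */
        static plugin_fn entry_points[MAX_PLUGINS];

        /* "Shared memory" copies of the plug-ins, indexed by id. */
        static const plugin_fn shared_store[MAX_PLUGINS] = { plugin_scale, plugin_offset };

        /* Load a plug-in into local memory ahead of time, so it is already resident
           when the data stream later asks for it. */
        static void prefetch_plugin(int id)
        {
            if (id >= 0 && id < MAX_PLUGINS && entry_points[id] == NULL)
                entry_points[id] = shared_store[id];   /* stands in for a DMA transfer */
        }

        /* Apply a plug-in to one element of the data stream via the entry point table. */
        static int apply_plugin(int id, int value)
        {
            return entry_points[id] ? entry_points[id](value) : value;
        }

        int main(void)
        {
            /* The incoming command stream names plug-in 1 before the data arrives... */
            prefetch_plugin(1);

            /* ...so by the time the data needs it, the lookup and call are immediate. */
            int stream[] = { 1, 2, 3 };
            for (int i = 0; i < 3; i++)
                printf("%d -> %d\n", stream[i], apply_plugin(1, stream[i]));
            return 0;
        }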
