System and method for providing a persistent function server
    1.
    Granted patent (expired)

    Publication No.: US07240182B2

    Publication date: 2007-07-03

    Application No.: US10942432

    Filing date: 2004-09-16

    IPC classification: G06F9/40

    CPC classification: G06F9/544 G06F8/52

    Abstract: A system and method for providing a persistent function server is provided. A multi-processor environment uses an interface definition language (IDL) file to describe a particular function, such as an “add” function. A compiler uses the IDL file to generate source code for use in marshalling and de-marshalling data between a main processor and a support processor. A header file is also created that corresponds to the particular function. The main processor includes parameters in the header file and sends the header file to the support processor. For example, a main processor may include two numbers in an “add” header file and send the “add” header file to a support processor that is responsible for performing math functions. In addition, the persistent function server capability of the support processor is programmable such that the support processor may be assigned to execute unique and complex functions.

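    Illustrative sketch (not from the patent): the C++ fragment below mimics the marshalling flow the abstract describes, with a header built for an “add” function on the main processor and de-marshalled on the support processor. The AddHeader layout, FUNC_ADD id, and function names are hypothetical stand-ins for the code an IDL compiler might generate.

    #include <cstdint>
    #include <cstring>
    #include <iostream>
    #include <vector>

    // Hypothetical header an IDL compiler might generate for an "add" function.
    struct AddHeader {
        std::uint32_t function_id;  // identifies the requested function
        std::int32_t  operand_a;    // first parameter
        std::int32_t  operand_b;    // second parameter
    };

    constexpr std::uint32_t FUNC_ADD = 1;

    // Main processor side: marshal the parameters into a byte buffer.
    std::vector<unsigned char> marshal_add(std::int32_t a, std::int32_t b) {
        AddHeader hdr{FUNC_ADD, a, b};
        std::vector<unsigned char> buf(sizeof hdr);
        std::memcpy(buf.data(), &hdr, sizeof hdr);
        return buf;
    }

    // Support processor side: de-marshal the header and execute the function.
    std::int32_t service_request(const std::vector<unsigned char>& buf) {
        AddHeader hdr;
        std::memcpy(&hdr, buf.data(), sizeof hdr);
        return hdr.function_id == FUNC_ADD ? hdr.operand_a + hdr.operand_b : 0;
    }

    int main() {
        auto request = marshal_add(2, 3);               // main processor builds the "add" header
        std::cout << service_request(request) << "\n";  // support processor replies with 5
    }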

    Task queue management of virtual devices using a plurality of processors
    2.
    Granted patent (expired)

    Publication No.: US07478390B2

    Publication date: 2009-01-13

    Application No.: US10670838

    Filing date: 2003-09-25

    IPC classification: G06F9/46 G06F13/00 G06F13/24

    CPC classification: G06F9/505

    Abstract: A task queue manager manages the task queues corresponding to virtual devices. When a virtual device function is requested, the task queue manager determines whether an SPU is currently assigned to the virtual device task. If an SPU is already assigned, the request is queued in a task queue being read by the SPU. If an SPU has not been assigned, the task queue manager assigns one of the SPUs to the task queue. The queue manager assigns the task based upon which SPU is least busy as well as whether one of the SPUs recently performed the virtual device function. If an SPU recently performed the virtual device function, it is more likely that the code used to perform the function is still in the SPU's local memory and will not have to be retrieved from shared common memory using DMA operations.

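    Illustrative sketch (not from the patent): a minimal C++ version of the assignment policy the abstract describes, preferring an SPU that recently ran the same virtual device function (its code is likely still in local store) and otherwise the least busy SPU. SpuState, assign_spu, and the four-SPU array are assumptions for illustration.

    #include <array>
    #include <cstddef>
    #include <deque>
    #include <iostream>
    #include <string>

    struct SpuState {
        std::deque<std::string> task_queue;  // requests queued for this SPU
        std::string last_function;           // virtual device function it ran most recently
    };

    // Pick an SPU for a virtual device function request: prefer a "warm" SPU whose
    // local store likely still holds the function's code, otherwise the least busy one.
    std::size_t assign_spu(std::array<SpuState, 4>& spus, const std::string& function) {
        std::size_t best = 0;
        bool best_warm = false;
        for (std::size_t i = 0; i < spus.size(); ++i) {
            bool warm = (spus[i].last_function == function);
            bool less_busy = spus[i].task_queue.size() < spus[best].task_queue.size();
            if ((warm && !best_warm) || (warm == best_warm && less_busy)) {
                best = i;
                best_warm = warm;
            }
        }
        spus[best].task_queue.push_back(function);  // queue the request for the chosen SPU
        return best;
    }

    int main() {
        std::array<SpuState, 4> spus{};
        spus[2].last_function = "decode_audio";
        std::cout << assign_spu(spus, "decode_audio") << "\n";  // prints 2: the warm SPU wins
    }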

    Task Queue Management of Virtual Devices Using a Plurality of Processors
    3.
    Patent application (pending, published)

    Publication No.: US20080162834A1

    Publication date: 2008-07-03

    Application No.: US12049295

    Filing date: 2008-03-15

    CPC classification: G06F9/505

    Abstract: A task queue manager manages the task queues corresponding to virtual devices. When a virtual device function is requested, the task queue manager determines whether an SPU is currently assigned to the virtual device task. If an SPU is already assigned, the request is queued in a task queue being read by the SPU. If an SPU has not been assigned, the task queue manager assigns one of the SPUs to the task queue. The queue manager assigns the task based upon which SPU is least busy as well as whether one of the SPUs recently performed the virtual device function. If an SPU recently performed the virtual device function, it is more likely that the code used to perform the function is still in the SPU's local memory and will not have to be retrieved from shared common memory using DMA operations.


    System and method for solving a large system of dense linear equations
    4.
    Granted patent (in force)

    Publication No.: US07236998B2

    Publication date: 2007-06-26

    Application No.: US10670837

    Filing date: 2003-09-25

    IPC classification: G06F7/38

    CPC classification: G06F17/16

    Abstract: A method and system for solving a large system of dense linear equations using a system having a processing unit and one or more secondary processing units that can access a common memory for sharing data. A set of coefficients corresponding to a system of linear equations is received, and the coefficients, after being placed in matrix form, are divided into blocks and loaded into the common memory. Each of the processors is programmed to perform matrix operations on individual blocks to solve the linear equations. A table containing a list of the matrix operations is created in the common memory to keep track of the operations that have been performed and the operations that are still pending. SPUs determine whether tasks are pending, access the coefficients by accessing the common memory, perform the required tasks, and store the result back in the common memory for the result to be accessible by the PU and the other SPUs.

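    Illustrative sketch (not from the patent): a minimal C++ picture of the shared operation table the abstract describes, where each worker claims a pending block operation, performs it, and marks it done. The BlockOp fields, OpStatus values, and claim_next helper are assumptions; the real system coordinates SPUs through common memory and DMA rather than a mutex.

    #include <cstddef>
    #include <iostream>
    #include <mutex>
    #include <optional>
    #include <vector>

    enum class OpStatus { Pending, InProgress, Done };

    struct BlockOp {
        std::size_t block_row;  // block coordinates within the coefficient matrix
        std::size_t block_col;
        OpStatus status;
    };

    std::mutex table_mutex;  // stand-in for the synchronization the real system would use

    // A worker (e.g., an SPU) claims the next pending block operation from the shared table.
    std::optional<std::size_t> claim_next(std::vector<BlockOp>& table) {
        std::lock_guard<std::mutex> lock(table_mutex);
        for (std::size_t i = 0; i < table.size(); ++i) {
            if (table[i].status == OpStatus::Pending) {
                table[i].status = OpStatus::InProgress;  // other workers now skip this entry
                return i;
            }
        }
        return std::nullopt;  // every listed operation is claimed or finished
    }

    int main() {
        std::vector<BlockOp> table = {{0, 0, OpStatus::Done}, {0, 1, OpStatus::Pending}};
        if (auto i = claim_next(table))
            std::cout << "claimed block operation " << *i << "\n";  // prints index 1
    }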

    Balancing computational load across a plurality of processors
    5.
    Granted patent (expired)

    Publication No.: US07694306B2

    Publication date: 2010-04-06

    Application No.: US12145709

    Filing date: 2008-06-25

    IPC classification: G06F9/46 G06F9/44 G06F9/45

    CPC classification: G06F9/5044

    Abstract: Computational load is balanced across a plurality of processors. Source code subtasks are compiled into byte code subtasks whereby the byte code subtasks are translated into processor-specific object code subtasks at runtime. The processor-type selection is based upon one of three approaches: 1) a brute force approach, 2) a higher-level approach, or 3) a processor availability approach. Each object code subtask is loaded in a corresponding processor type for execution. In one embodiment, a compiler stores a pointer in a byte code file that references the location of a byte code subtask. In this embodiment, the byte code subtask is stored in a shared library and, at runtime, a runtime loader uses the pointer to identify the location of the byte code subtask in order to translate the byte code subtask.

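    Illustrative sketch (not from the patent): a small C++ example of the runtime step the abstract describes, where a byte code subtask is translated into object code for a chosen processor type; the selection shown follows the processor availability idea. ProcessorType, translate_for, and pick_by_availability are hypothetical names, and translate_for is only a stub.

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    enum class ProcessorType { PU, SPU };

    struct ByteCodeSubtask {
        std::string name;
        std::vector<std::uint8_t> byte_code;  // portable intermediate form from the compiler
    };

    // Stub for the runtime loader step: byte code in, processor-specific object code out.
    std::vector<std::uint8_t> translate_for(ProcessorType /*target*/, const ByteCodeSubtask& task) {
        return task.byte_code;  // a real loader would emit PU or SPU object code here
    }

    // "Processor availability" style selection: choose the processor type with the lighter load.
    ProcessorType pick_by_availability(const std::map<ProcessorType, int>& pending_tasks) {
        return pending_tasks.at(ProcessorType::PU) <= pending_tasks.at(ProcessorType::SPU)
                   ? ProcessorType::PU
                   : ProcessorType::SPU;
    }

    int main() {
        ByteCodeSubtask subtask{"filter", {0x01, 0x02}};
        std::map<ProcessorType, int> load = {{ProcessorType::PU, 7}, {ProcessorType::SPU, 2}};
        ProcessorType target = pick_by_availability(load);  // SPU has the lighter load
        auto object_code = translate_for(target, subtask);  // translated at runtime, then loaded
        std::cout << (target == ProcessorType::SPU ? "SPU" : "PU")
                  << " receives " << object_code.size() << " bytes\n";
    }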

    Balancing Computational Load Across a Plurality of Processors
    6.
    Patent application (expired)

    Publication No.: US20080271003A1

    Publication date: 2008-10-30

    Application No.: US12145709

    Filing date: 2008-06-25

    IPC classification: G06F9/45

    CPC classification: G06F9/5044

    Abstract: Computational load is balanced across a plurality of processors. Source code subtasks are compiled into byte code subtasks whereby the byte code subtasks are translated into processor-specific object code subtasks at runtime. The processor-type selection is based upon one of three approaches: 1) a brute force approach, 2) a higher-level approach, or 3) a processor availability approach. Each object code subtask is loaded in a corresponding processor type for execution. In one embodiment, a compiler stores a pointer in a byte code file that references the location of a byte code subtask. In this embodiment, the byte code subtask is stored in a shared library and, at runtime, a runtime loader uses the pointer to identify the location of the byte code subtask in order to translate the byte code subtask.


    Balancing computational load across a plurality of processors
    7.
    Granted patent (expired)

    Publication No.: US07444632B2

    Publication date: 2008-10-28

    Application No.: US10670826

    Filing date: 2003-09-25

    IPC classification: G06F9/46 G06F9/44

    CPC classification: G06F9/5044

    Abstract: Source code subtasks are compiled into byte code subtasks whereby the byte code subtasks are translated into processor-specific object code subtasks at runtime. The processor-type selection is based upon one of three approaches: 1) a brute force approach, 2) a higher-level approach, or 3) a processor availability approach. Each object code subtask is loaded in a corresponding processor type for execution. In one embodiment, a compiler stores a pointer in a byte code file that references the location of a byte code subtask. In this embodiment, the byte code subtask is stored in a shared library and, at runtime, a runtime loader uses the pointer to identify the location of the byte code subtask in order to translate the byte code subtask.


    Adaptive sampling of a static data set
    8.
    Granted patent (in force)

    Publication No.: US07515152B2

    Publication date: 2009-04-07

    Application No.: US11948240

    Filing date: 2007-11-30

    IPC classification: G06T15/20

    CPC classification: G06T15/06 G06T15/40

    Abstract: A sampling module adjusts the sampling density of a static data set. Two or more rays are cast onto a surface from a single point of origin. The rays intersect the surface at various locations. The distance between the intersection points of each pair of adjacent rays is calculated. This distance is the current sample density. The current sample density is compared to the desired sample density. If the current sample density is not equal to the desired sample density, then the sample density of the next casting of rays is adjusted accordingly.

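    Illustrative sketch (not from the patent): a numeric C++ example of the adjustment loop the abstract describes, computing the spacing between the hit points of two adjacent rays on a flat surface and scaling the angular step for the next casting toward the desired spacing. The flat-surface geometry and the proportional adjust_step rule are assumptions, not the patent's formula.

    #include <cmath>
    #include <iostream>

    // Spacing between the surface hit points of two adjacent rays cast from a single
    // origin at angles a1 and a2, for a flat surface at distance `depth` from the origin.
    double hit_spacing(double a1, double a2, double depth) {
        double x1 = depth * std::tan(a1);
        double x2 = depth * std::tan(a2);
        return std::fabs(x2 - x1);  // current sample density along the surface
    }

    // Scale the angular step for the next casting of rays toward the desired spacing.
    double adjust_step(double angular_step, double current, double desired) {
        if (current == desired) return angular_step;  // already at the desired density
        return angular_step * (desired / current);
    }

    int main() {
        double step = 0.02, depth = 10.0, desired = 0.1;
        double current = hit_spacing(0.0, step, depth);  // spacing produced by this casting
        step = adjust_step(step, current, desired);      // densify or sparsify the next casting
        std::cout << "current spacing " << current << ", next angular step " << step << "\n";
    }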

    Adaptive sampling of a static data set
    9.
    Granted patent (in force)

    Publication No.: US07345687B2

    Publication date: 2008-03-18

    Application No.: US11204423

    Filing date: 2005-08-16

    IPC classification: G06T15/10

    CPC classification: G06T15/06 G06T15/40

    Abstract: A sampling module is provided. Two or more rays are cast onto a surface from a single point of origin. The rays intersect the surface at various locations. The distance between the intersection points of each pair of adjacent rays is calculated. This distance is the current sample density. The current sample density is compared to the desired sample density. If the current sample density is not equal to the desired sample density, then the sample density of the next casting of rays is adjusted accordingly.


    Ray tracing with depth buffered display
    10.
    Granted patent (expired)

    Publication No.: US07439973B2

    Publication date: 2008-10-21

    Application No.: US11201651

    Filing date: 2005-08-11

    IPC classification: G06T15/40

    Abstract: An image that includes ray traced pixel data and rasterized pixel data is generated. A synergistic processing unit (SPU) uses a rendering algorithm to generate ray traced data for objects that require high-quality image rendering. The ray traced data is fragmented, whereby each fragment includes a ray traced pixel depth value and a ray traced pixel color value. A rasterizer compares ray traced pixel depth values to corresponding rasterized pixel depth values, and overwrites ray traced pixel data with rasterized pixel data when the corresponding rasterized fragment is “closer” to a viewing point, which results in composite data. A display subsystem uses the resultant composite data to generate an image on a user's display.

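    Illustrative sketch (not from the patent): a per-pixel C++ compositing step matching the depth comparison the abstract describes, keeping whichever fragment is closer to the viewing point. The Fragment fields and the smaller-depth-is-closer convention are assumptions for illustration.

    #include <cstdint>
    #include <iostream>

    struct Fragment {
        float         depth;  // distance from the viewing point (smaller means closer)
        std::uint32_t color;  // packed RGBA
    };

    // Keep whichever fragment is closer to the viewer; the survivor becomes the
    // composite pixel handed to the display subsystem.
    Fragment composite(const Fragment& ray_traced, const Fragment& rasterized) {
        return rasterized.depth < ray_traced.depth ? rasterized : ray_traced;
    }

    int main() {
        Fragment traced{5.0f, 0xFF0000FFu};  // ray traced pixel for this screen position
        Fragment raster{2.5f, 0x00FF00FFu};  // rasterized pixel, closer to the viewer
        Fragment out = composite(traced, raster);
        std::cout << std::hex << out.color << "\n";  // the rasterized color wins here
    }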