System for iterative interactive ray tracing in a multiprocessor environment
    1.
    Invention grant (Expired)

    Publication number: US08525826B2

    Publication date: 2013-09-03

    Application number: US12188290

    Filing date: 2008-08-08

    CPC classification number: G06T15/06 G06T15/50 G06T2210/52

    Abstract: A method comprises receiving scene model data including a scene geometry model and a plurality of pixel data describing objects arranged in a scene. The method generates a primary ray based on a selected first pixel data. In the event the primary ray intersects an object in the scene, the method determines primary hit color data and generates a plurality of secondary rays. The method groups the secondary rays into packets and arranges the packets in a queue based on the octant of each direction vector in the secondary ray packet. The method generates secondary color data based on the secondary ray packets in the queue and generates a pixel color based on the primary hit color data and the secondary color data. The method generates an image based on the pixel color for the pixel data.
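    The octant-based grouping step can be illustrated with a small sketch. Everything below is an assumption for illustration, not the patented implementation: the octant encoding (one sign bit per axis of the direction vector), the `packet_size` parameter, and the ray representation are all invented.

```python
from collections import defaultdict

def octant(direction):
    """Encode a 3D direction vector's octant as 3 sign bits (0..7)."""
    x, y, z = direction
    return ((x < 0) << 2) | ((y < 0) << 1) | (z < 0)

def group_secondary_rays(rays, packet_size=4):
    """Bucket secondary rays by direction octant, then emit fixed-size
    packets so the rays in a packet traverse the scene coherently."""
    buckets = defaultdict(list)
    for ray in rays:
        buckets[octant(ray["dir"])].append(ray)
    queue = []
    for oct_id in sorted(buckets):
        bucket = buckets[oct_id]
        for i in range(0, len(bucket), packet_size):
            queue.append((oct_id, bucket[i:i + packet_size]))
    return queue
```

    Rays pointing in similar directions land in the same packet, which is what makes the queued packets cheap to trace together.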


    Task switching based on a shared memory condition associated with a data request and detecting lock line reservation lost events
    2.
    Invention grant (Expired)

    Publication number: US08458707B2

    Publication date: 2013-06-04

    Application number: US12049317

    Filing date: 2008-03-15

    CPC classification number: G06F12/0842 G06F9/526

    Abstract: An approach is presented that uses a handler to detect asynchronous lock line reservation lost events and switches tasks based upon whether a condition is true or a mutex lock is acquired. A synergistic processing unit (SPU) invokes a first thread and, during execution, the first thread requests external data that is shared with other threads or processors in the system. This shared data may be protected with a mutex lock or other shared memory synchronization constructs. When the requested data is not available, the SPU switches to a second thread and monitors lock line reservation lost events in order to check when the data becomes available. When the data is available, the SPU switches back to the first thread and processes the first thread's request.
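    The switching policy can be modelled with coroutines. This is a toy single-processor simulation, not the SPU's actual lock-line mechanism: `first_thread`, `second_thread`, and the event list are invented names, and a "reservation lost event" is modelled as a pending write to the shared dictionary.

```python
def first_thread(shared):
    # Request shared data; block (yield) while it is unavailable.
    while "data" not in shared:
        yield "blocked"                 # the SPU switches away here
    yield ("done", shared["data"])

def second_thread(shared):
    for i in range(3):
        yield ("work", i)               # useful work while thread 1 waits

def spu_run(shared, reservation_lost_events):
    """Run thread 1; when it blocks, run thread 2 and poll for a
    'reservation lost' event (a write touching the watched lock line)."""
    trace = []
    t1, t2 = first_thread(shared), second_thread(shared)
    while True:
        step = next(t1)
        if step != "blocked":
            trace.append(step)
            return trace
        trace.append(next(t2, ("idle", None)))
        if reservation_lost_events:     # the awaited write arrived
            key, value = reservation_lost_events.pop(0)
            shared[key] = value
```

    The point of the design is visible even in this toy: the first thread never spins on the condition; the event notification decides when it is worth switching back.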


    Efficient Multi-Level Software Cache Using SIMD Vector Permute Functionality
    3.
    Invention application (In force)

    Publication number: US20110161548A1

    Publication date: 2011-06-30

    Application number: US12648667

    Filing date: 2009-12-29

    Abstract: A cache manager receives a request for data, which includes a requested effective address. The cache manager determines whether the requested effective address matches a most recently used effective address stored in a mapped tag vector. When the most recently used effective address matches the requested effective address, the cache manager identifies a corresponding cache location and retrieves the data from the identified cache location. However, when the most recently used effective address fails to match the requested effective address, the cache manager determines whether the requested effective address matches a subsequent effective address stored in the mapped tag vector. When the cache manager determines a match to a subsequent effective address, the cache manager identifies a different cache location corresponding to the subsequent effective address and retrieves the data from the different cache location.
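    A scalar sketch of the lookup order described above. On the real hardware the tag comparisons are accelerated with a SIMD vector permute; the Python loop below is only the scalar equivalent, and the tag-vector layout (parallel lists of effective addresses and cache slots, with the MRU entry at index 0) is an assumption.

```python
def cache_lookup(tag_vector, cache_lines, requested_ea):
    """tag_vector[i] holds the effective address cached in cache_lines[i];
    index 0 is the most recently used entry."""
    if tag_vector and tag_vector[0] == requested_ea:   # fast MRU check
        return cache_lines[0]
    for i in range(1, len(tag_vector)):                # subsequent entries
        if tag_vector[i] == requested_ea:
            return cache_lines[i]
    return None                                        # miss: fetch from the next level
```

    The common case (repeated access to the same line) costs a single comparison; only on an MRU miss does the lookup scan the rest of the vector.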


    Loading software on a plurality of processors
    4.
    Invention grant (Expired)

    Publication number: US07748006B2

    Publication date: 2010-06-29

    Application number: US12131348

    Filing date: 2008-06-02

    CPC classification number: G06F9/44557 G06F9/44526

    Abstract: An approach to loading software on a plurality of processors is presented. A processing unit (PU) retrieves a file from system memory and loads it into its internal memory. The PU extracts a processor type from the file's header which identifies whether the file should execute on the PU or a synergistic processing unit (SPU). If an SPU should execute the file, the PU DMAs the file to the SPU for execution. In one embodiment, the file is a combined file which includes both PU and SPU code. In this embodiment, the PU identifies one or more section headers included in the file which indicate embedded SPU code within the combined file. The PU then extracts the SPU code from the combined file and DMAs the extracted code to an SPU for execution.
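    A sketch of the combined-file handling. The byte layout here (a `CMBF` magic, a one-byte processor type, then type/length-prefixed sections) is entirely made up for illustration; the patent does not specify a file format.

```python
import struct

PU, SPU = 0, 1   # hypothetical processor-type codes

def split_combined_file(blob):
    """Parse a combined image: read the target processor type from the
    header, then gather PU code and embedded SPU code from the sections."""
    magic, proc_type = struct.unpack_from("<4sB", blob, 0)
    if magic != b"CMBF":
        raise ValueError("not a combined file")
    offset, sections = 5, {PU: b"", SPU: b""}
    while offset < len(blob):
        sec_type, length = struct.unpack_from("<BI", blob, offset)
        offset += 5
        sections[sec_type] += blob[offset:offset + length]  # would be DMA'd to the SPU
        offset += length
    return proc_type, sections
```

    In the real system the extracted SPU sections would be transferred to an SPU's local store by DMA rather than returned to the caller.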


    Hiding memory latency
    5.
    Invention grant (Expired)

    Publication number: US07620951B2

    Publication date: 2009-11-17

    Application number: US12049293

    Filing date: 2008-03-15

    CPC classification number: G06F9/322 G06F8/41 G06F9/3851

    Abstract: An approach to hiding memory latency in a multi-thread environment is presented. Branch Indirect and Set Link (BISL) and/or Branch Indirect and Set Link if External Data (BISLED) instructions are placed in thread code during compilation at instances that correspond to a prolonged instruction. A prolonged instruction is an instruction that instigates latency in a computer system, such as a DMA instruction. When a first thread encounters a BISL or a BISLED instruction, the first thread passes control to a second thread while the first thread's prolonged instruction executes. In turn, the computer system masks the latency of the first thread's prolonged instruction. The system can be optimized based on the memory latency by creating more threads and further dividing a register pool amongst the threads to further hide memory latency in operations that are highly memory bound.
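    The switching behaviour can be approximated with Python generators, where each `yield` stands in for a BISL/BISLED point the compiler inserted at a prolonged (e.g. DMA) instruction. The scheduler and the thread bodies are illustrative, not the patented mechanism.

```python
def worker(name, n):
    """Each yield marks a compiler-inserted BISL/BISLED point: a prolonged
    instruction is issued and control passes to another thread."""
    for i in range(n):
        yield (name, i)

def interleave(threads):
    """Round-robin across yield points, so one thread's DMA latency is
    hidden behind another thread's compute."""
    trace, queue = [], list(threads)
    while queue:
        thread = queue.pop(0)
        try:
            trace.append(next(thread))
            queue.append(thread)
        except StopIteration:
            pass
    return trace
```

    Dividing the register pool among more such threads gives the scheduler more places to hide latency, which is the optimization the abstract describes for highly memory-bound operations.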


    Light weight context switching
    6.
    Invention grant (Expired)

    Publication number: US07565659B2

    Publication date: 2009-07-21

    Application number: US10891773

    Filing date: 2004-07-15

    CPC classification number: G06F9/461 G06F9/485

    Abstract: To alleviate at least some of the costs associated with context switching, additional fields, either with associated Application Program Interfaces (APIs) or coupled to application modules, can be employed to indicate points of light weight context during the operation of an application. Therefore, an operating system can preempt applications at points where the context is relatively light, reducing the costs of both storage and bus usage.
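    The preemption decision can be sketched as follows. The `Task` fields and the return values are hypothetical; the only idea carried over from the abstract is that the OS prefers to switch where the context to be saved is small.

```python
from dataclasses import dataclass

@dataclass
class Task:
    at_light_point: bool   # set through the added field/API in the abstract
    live_bytes: int        # context that must be saved if we switch now

def plan_switch(task, deadline_passed=False):
    """Preempt at a light-context point (cheap save); otherwise defer
    until the next one, unless a deadline forces a full heavy save."""
    if task.at_light_point or deadline_passed:
        return ("switch", task.live_bytes)
    return ("defer", 0)
```

    A switch taken at a light point saves only the small live set, which is where the storage and bus savings come from.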


    Managing a Plurality of Processors as Devices
    7.
    Invention application (In force)

    Publication number: US20080301695A1

    Publication date: 2008-12-04

    Application number: US12176375

    Filing date: 2008-07-19

    CPC classification number: G06F9/5027 G06F2209/509

    Abstract: A computer system's multiple processors are managed as devices. The operating system accesses the multiple processors using processor device modules loaded into the operating system to facilitate communication between a processor and an application requesting access to it. Device-like access is provided for each of the processors, similar to the device-like access used for other devices in the system such as disk drives and printers. An application seeking access to a processor issues device-oriented instructions for processing data and, in addition, provides the processor with the data to be processed. The processor processes the data according to the instructions provided by the application.
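    The device-like access pattern can be sketched as a tiny device module. The class name, the command string, and the doubling "computation" are all invented; only the open/ioctl/close shape mirrors the abstract.

```python
class ProcessorDevice:
    """Expose a processor through the open/ioctl/close interface the OS
    already uses for disks, printers, and other devices."""
    def __init__(self, proc_id):
        self.proc_id, self.busy = proc_id, False

    def open(self):
        if self.busy:
            raise RuntimeError("processor %d in use" % self.proc_id)
        self.busy = True
        return self

    def ioctl(self, command, data):
        # A device-oriented instruction plus the data to be processed.
        if command == "RUN":
            return [x * 2 for x in data]   # stand-in for real processing
        raise ValueError("unknown command: %s" % command)

    def close(self):
        self.busy = False
```

    Treating processors this way lets applications reuse the familiar device model instead of a processor-specific API.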


    Virtual Devices Using a Plurality of Processors
    8.
    Invention application (Expired)

    Publication number: US20080168443A1

    Publication date: 2008-07-10

    Application number: US12049179

    Filing date: 2008-03-14

    CPC classification number: G06F9/4843 G06F9/544

    Abstract: An approach is provided to allow virtual devices that use a plurality of processors in multiprocessor systems, such as the BE environment. Using this method, a synergistic processing unit (SPU) can either be dedicated to performing a particular function (i.e., audio, video, etc.) or a single SPU can be programmed to perform several functions on behalf of the other processors in the system. The application, preferably running in one of the primary (PU) processors, issues IOCTL commands through device drivers that correspond to SPUs. The kernel managing the primary processors responds by sending an appropriate message to the SPU that is performing the dedicated function. Using this method, an SPU can be virtualized for swapping multiple tasks or dedicated to performing a particular task.


    Grouping processors and assigning shared memory space to a group in a heterogeneous computer environment
    9.
    Invention application (In force)

    Publication number: US20080155203A1

    Publication date: 2008-06-26

    Application number: US12042254

    Filing date: 2008-03-04

    CPC classification number: G06F9/5061 G06F2209/5012

    Abstract: An approach to grouping processors is presented. A processing unit (PU) initiates an application and identifies the application's requirements. The PU assigns one or more synergistic processing units (SPUs) and a memory space to the application in the form of a group. The application specifies whether the task requires shared memory or private memory. Shared memory is a memory space that is accessible by the SPUs and the PU. Private memory, however, is a memory space that is accessible only by the SPUs that are included in the group. When the application executes, the resources within the group are allocated to the application's execution thread. Each group has its own group properties, such as address space, policies (e.g., real-time, FIFO, run-to-completion) and priority (e.g., low or high). These group properties are used during thread execution to determine which groups take precedence over others.
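    The group structure can be sketched directly from the abstract. The access rule in `can_access` (the PU sees only shared spaces; SPUs must belong to the group) restates the abstract's shared/private distinction; the class and field names are illustrative.

```python
class Group:
    """SPUs plus a memory space assigned to one application by the PU."""
    def __init__(self, spus, shared, policy="FIFO", priority="low"):
        self.spus = set(spus)       # SPUs assigned to this group
        self.shared = shared        # True: shared (PU-visible) memory; False: private
        self.policy = policy        # e.g. real-time, FIFO, run-to-completion
        self.priority = priority    # e.g. low or high

    def can_access(self, requester):
        if requester == "PU":
            return self.shared          # the PU sees only shared spaces
        return requester in self.spus   # SPUs must belong to the group
```

    The policy and priority fields are the group properties the scheduler would consult when deciding which group's thread runs next.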


    System and Method for Securely Saving a Program Context to a Shared Memory
    10.
    Invention application (In force)

    Publication number: US20080066074A1

    Publication date: 2008-03-13

    Application number: US11530937

    Filing date: 2006-09-12

    CPC classification number: G06F21/71 G06F21/52 G06F2221/2105

    Abstract: A system, method and program product for securely saving a program context to a shared memory is presented. A secured program running on a special purpose processor core in isolation mode is interrupted. The isolated special purpose processor core is included in a heterogeneous processing environment that includes special purpose processor cores and general purpose processor cores that each access a shared memory. In isolation mode, the special purpose processor core's local memory is inaccessible from the other heterogeneous processors. The secured program's context is securely saved to the shared memory using random persistent security data. The lines of code stored in the isolated special purpose processor core's local memory are read along with data values, such as register settings, set by the secured program. The lines of code and data values are encrypted using the persistent security data, and the encrypted code lines and data values are stored in the shared memory.
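    The save path can be sketched as encrypt-then-store. The XOR stream "cipher" below is for illustration only (a real design would use a proper cipher); the `|` separator between code and registers and the 16-byte key size are also assumptions, not details from the patent.

```python
import os

def _xor(data, key):
    """Toy keystream: XOR each byte with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def save_context(code, registers, key=None):
    """Encrypt the isolated core's code lines and register values with
    random persistent security data before writing to shared memory."""
    key = key or os.urandom(16)     # the random persistent security data
    return key, _xor(code + b"|" + registers, key)

def restore_context(key, blob):
    """Decrypt a saved context and split it back into code and registers."""
    code, registers = _xor(blob, key).split(b"|", 1)
    return code, registers
```

    Because only ciphertext reaches the shared memory, the other heterogeneous processors never see the secured program's code or register state in the clear.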

