Managing Position Independent Code Using a Software Framework
    61.
    Invention application
    Managing Position Independent Code Using a Software Framework (in force)

    Publication No.: US20080163155A1

    Publication Date: 2008-07-03

    Application No.: US12049202

    Filing Date: 2008-03-14

    CPC classification number: G06F9/44526

    Abstract: An approach for managing position independent code using a software framework is presented. A software framework provides the ability to cache multiple plug-ins that are loaded into a processor's local storage. A processor receives a command or data stream from another processor that includes information corresponding to a particular plug-in. The processor uses the plug-in identifier to load the plug-in from shared memory into local memory before it is required, in order to minimize latency. When the data stream requests that the processor use the plug-in, the processor retrieves a location offset corresponding to the plug-in and applies the plug-in to the data stream. A plug-in manager manages an entry point table that identifies memory locations corresponding to each plug-in; plug-ins may therefore be placed anywhere in a processor's local memory.

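The entry-point-table scheme the abstract describes can be sketched as follows. This is an illustrative Python model, not the patented implementation: a dict stands in for the processor's local store, and all class and method names are assumptions.

```python
class PluginManager:
    """Sketch: cache position-independent plug-ins in a local store and
    resolve them by identifier through an entry-point table, so a plug-in
    can live at any offset in local memory."""

    def __init__(self, shared_memory):
        self.shared_memory = shared_memory   # plug-in id -> code (here, a callable)
        self.local_store = {}                # offset -> cached plug-in
        self.entry_points = {}               # plug-in id -> offset in local store
        self._next_offset = 0

    def prefetch(self, plugin_id):
        """Load a plug-in from shared memory before it is needed."""
        if plugin_id in self.entry_points:
            return self.entry_points[plugin_id]
        code = self.shared_memory[plugin_id]
        offset = self._next_offset
        self.local_store[offset] = code
        self.entry_points[plugin_id] = offset
        self._next_offset += 1
        return offset

    def apply(self, plugin_id, data):
        """Resolve the plug-in through the entry-point table and run it."""
        offset = self.entry_points.get(plugin_id)
        if offset is None:
            offset = self.prefetch(plugin_id)   # cache miss: load on demand
        return self.local_store[offset](data)
```

Prefetching on receipt of the command stream means `apply` normally hits the table without touching shared memory, which is the latency win the abstract claims.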

    Task Queue Management of Virtual Devices Using a Plurality of Processors
    62.
    Invention application
    Task Queue Management of Virtual Devices Using a Plurality of Processors (pending, published)

    Publication No.: US20080162834A1

    Publication Date: 2008-07-03

    Application No.: US12049295

    Filing Date: 2008-03-15

    CPC classification number: G06F9/505

    Abstract: A task queue manager manages the task queues corresponding to virtual devices. When a virtual device function is requested, the task queue manager determines whether an SPU is currently assigned to the virtual device task. If an SPU is already assigned, the request is queued in a task queue being read by the SPU. If an SPU has not been assigned, the task queue manager assigns one of the SPUs to the task queue. The queue manager assigns the task based upon which SPU is least busy as well as whether one of the SPUs recently performed the virtual device function. If an SPU recently performed the virtual device function, it is more likely that the code used to perform the function is still in the SPU's local memory and will not have to be retrieved from shared common memory using DMA operations.

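The assignment policy above can be sketched in Python. This is a hedged model of the idea, not the claimed mechanism: an SPU that recently ran the same virtual-device function is preferred (its code is likely still in local store), with the least-busy SPU as the fallback; all names are illustrative.

```python
class TaskQueueManager:
    """Sketch: route virtual-device requests to SPUs, preferring a 'warm'
    SPU that last ran the requested function, else the least-busy one."""

    def __init__(self, num_spus):
        self.queues = {spu: [] for spu in range(num_spus)}
        self.assigned = {}        # device -> SPU currently serving it
        self.last_function = {}   # SPU -> function it most recently ran

    def request(self, device, function):
        spu = self.assigned.get(device)
        if spu is None:
            # prefer a warm SPU; break ties by shortest queue
            warm = [s for s, f in self.last_function.items() if f == function]
            candidates = warm or list(self.queues)
            spu = min(candidates, key=lambda s: len(self.queues[s]))
            self.assigned[device] = spu
        self.queues[spu].append((device, function))
        self.last_function[spu] = function
        return spu
```

Once a device is bound to an SPU, further requests simply queue on that SPU, matching the first branch of the abstract.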

    System and method for grouping processors and assigning shared memory space to a group in heterogeneous computer environment
    64.
    Granted patent
    System and method for grouping processors and assigning shared memory space to a group in heterogeneous computer environment (in force)

    Publication No.: US07389508B2

    Publication Date: 2008-06-17

    Application No.: US10670833

    Filing Date: 2003-09-25

    CPC classification number: G06F9/5061 G06F2209/5012

    Abstract: A system and method for grouping processors is presented. A processing unit (PU) initiates an application and identifies the application's requirements. The PU assigns one or more synergistic processing units (SPUs) and a memory space to the application in the form of a group. The application specifies whether the task requires shared memory or private memory. Shared memory is a memory space that is accessible by the SPUs and the PU. Private memory, however, is a memory space that is accessible only by the SPUs included in the group. When the application executes, the resources within the group are allocated to the application's execution thread. Each group has its own group properties, such as address space, policies (e.g., real-time, FIFO, run-to-completion), and priority (e.g., low or high). These group properties are used during thread execution to determine which groups take precedence over other tasks.

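The group abstraction can be modeled in a few lines. A minimal sketch, assuming illustrative field names; the point is the visibility rule the abstract states: shared memory is visible to the PU and the group's SPUs, private memory only to the group's SPUs.

```python
from dataclasses import dataclass

@dataclass
class Group:
    """Sketch: a group bundles SPUs with a memory space and scheduling
    properties (policy, priority), as the abstract describes."""
    spus: list
    memory: str            # "shared" (PU + SPUs) or "private" (SPUs only)
    policy: str = "FIFO"   # e.g. "real-time", "FIFO", "run-to-completion"
    priority: str = "low"

    def accessible_by(self, unit):
        # shared memory is visible to the PU as well; private memory only
        # to SPUs inside the group
        if self.memory == "shared":
            return unit == "PU" or unit in self.spus
        return unit in self.spus
```

A scheduler would consult `policy` and `priority` when deciding which group's thread runs next, per the last sentence of the abstract.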

    System and method for virtualization of processor resources
    65.
    Granted patent
    System and method for virtualization of processor resources (in force)

    Publication No.: US07290112B2

    Publication Date: 2007-10-30

    Application No.: US10955093

    Filing Date: 2004-09-30

    CPC classification number: G06F12/109 G06F12/0284 G06F12/1045

    Abstract: A system and method for virtualization of processor resources is presented. A thread is created on a processor and the processor's local memory is mapped into an effective address space. In doing so, the processor's local memory is accessible by other processors, regardless of whether the processor is running. Additional threads create additional local memory mappings into the effective address space. The effective address space corresponds to either a physical local memory or a “soft” copy area. When the processor is running, a different processor may access data located in the first processor's local memory from the processor's local storage area. When the processor is not running, a soft copy of the processor's local memory is stored in a memory location (e.g., locked cache memory, pinned system memory, or virtual memory) for other processors to continue accessing.

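The soft-copy idea can be sketched as follows; this is an illustrative model with assumed names, not the patented mechanism. Reads through the effective address go to the live local store while the processor runs, and to a snapshot once it stops.

```python
class VirtualLocalStore:
    """Sketch: back an effective-address range with either the live local
    store or a pinned 'soft' copy, so other processors can keep reading
    after the owning processor stops."""

    def __init__(self):
        self.local = {}        # live local store (addr -> value)
        self.soft_copy = None  # snapshot used while the processor is stopped
        self.running = True

    def write(self, addr, value):
        self.local[addr] = value

    def stop(self):
        # pin a snapshot before the live store becomes unavailable
        self.soft_copy = dict(self.local)
        self.running = False

    def read(self, addr):
        backing = self.local if self.running else self.soft_copy
        return backing[addr]
```

The caller never sees the switch: the same effective address resolves to whichever backing store is valid, which is the virtualization the abstract claims.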

    System and method for processor thread acting as a system service processor
    66.
    Granted patent
    System and method for processor thread acting as a system service processor (expired)

    Publication No.: US07146529B2

    Publication Date: 2006-12-05

    Application No.: US10670843

    Filing Date: 2003-09-25

    Abstract: A system and method for a processor thread acting as a system service provider is presented. A computer system boots up and initiates a service thread. The service thread is responsible for service-related tasks, such as ECC checks and hardware error log checks. The service provider invokes a second thread which is used as an operational thread. The operational thread loads an operating system and a kernel, and runs various applications. While the operational thread executes, the service thread monitors the operational thread for proper functionality as well as monitoring service events. When the service thread detects a problem with either a service event or the operational thread, the service thread may choose to store operational data corresponding to the operational thread and then terminate the operational thread.

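The monitoring loop can be sketched as a single check step. All names here (including the `"ECC_ERROR"` event) are illustrative assumptions; the point is the fault path the abstract describes: save operational state, then terminate the operational thread.

```python
class ServiceThread:
    """Sketch: monitor service events and the operational thread's health;
    on a fault, snapshot operational data and mark the thread terminated."""

    def __init__(self, operational):
        self.operational = operational  # dict standing in for thread state
        self.saved_state = None

    def check(self, service_events):
        """Run one monitoring pass; return True if all is well."""
        fault = any(e == "ECC_ERROR" for e in service_events) \
                or not self.operational["healthy"]
        if fault:
            self.saved_state = dict(self.operational)  # store operational data
            self.operational["terminated"] = True
        return not fault
```

A real service thread would run `check` periodically from boot; here it is exposed as a method so the fault path is easy to exercise.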

    Method and apparatus for graphics window clipping management in a data processing system
    67.
    Granted patent
    Method and apparatus for graphics window clipping management in a data processing system (expired)

    Publication No.: US06831660B1

    Publication Date: 2004-12-14

    Application No.: US09595349

    Filing Date: 2000-06-15

    CPC classification number: G06T11/00 G09G5/026

    Abstract: A method and apparatus in a data processing system for processing graphics data. A set of clip areas defining a window for use in clipping graphics data is identified, in which a portion of the graphics data is obscured. A clip area in a first hardware clipper is set, wherein the clip area encompasses the window to process the graphics data. The graphics data within the first clip area is graphics data to be displayed. A no clip area is set in a second hardware clipper, wherein the no clip area encompasses the obscured portion and graphics data in the no clip area is to remain undisplayed. The graphics data is sent to the first hardware clipper and the second hardware clipper.

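The two-clipper composition can be sketched in software. A minimal model under assumed conventions (inclusive `(x0, y0, x1, y1)` rectangles): the first clipper keeps pixels inside the window, the second suppresses pixels inside the obscured no-clip region.

```python
def clip_pixels(pixels, clip_rect, no_clip_rect):
    """Sketch: emulate a clip area (keep inside) composed with a no-clip
    area (discard inside). Rects are (x0, y0, x1, y1), inclusive."""
    def inside(p, r):
        x, y = p
        return r[0] <= x <= r[2] and r[1] <= y <= r[3]
    return [p for p in pixels
            if inside(p, clip_rect) and not inside(p, no_clip_rect)]
```

In the patent the two tests run in separate hardware clippers that both receive the graphics data; the software version just ANDs the two predicates.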

    SPE software instruction cache
    68.
    Granted patent
    SPE software instruction cache (in force)

    Publication No.: US08516230B2

    Publication Date: 2013-08-20

    Application No.: US12648741

    Filing Date: 2009-12-29

    CPC classification number: G06F9/3804 G06F9/30047 G06F12/0875

    Abstract: An application thread executes a direct branch instruction that is stored in an instruction cache line. Upon execution, the direct branch instruction branches to a branch descriptor that is also stored in the instruction cache line. The branch descriptor includes a trampoline branch instruction and a target instruction space address. Next, the trampoline branch instruction sends a branch descriptor pointer, which points to the branch descriptor, to an instruction cache manager. The instruction cache manager extracts the target instruction space address from the branch descriptor, and executes a target instruction corresponding to the target instruction space address. In one embodiment, the instruction cache manager generates a target local store address by masking off a portion of bits included in the target instruction space address. In turn, the application thread executes the target instruction located at the target local store address accordingly.

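The address-translation step in the embodiment above can be sketched as a mask. The mask width is an assumption (the abstract only says "a portion of bits" is masked off); a 256 KB local store, matching the Cell SPE, is assumed here for concreteness.

```python
# Assumed: an 18-bit local store (256 KB), so the manager keeps only the
# low 18 bits of the target instruction-space address.
LOCAL_STORE_MASK = 0x0003FFFF

def to_local_store_address(instruction_space_address):
    """Mask off the high bits to map an instruction-space address into
    the local store, as in the embodiment described in the abstract."""
    return instruction_space_address & LOCAL_STORE_MASK

def resolve_branch(branch_descriptor):
    """Sketch of the trampoline path: the manager receives a descriptor,
    extracts the target instruction-space address, and converts it."""
    return to_local_store_address(branch_descriptor["target"])
```

The branch descriptor in the patent also carries the trampoline branch instruction itself; only the address-extraction half is modeled here.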

    Dynamically partitioning processing across a plurality of heterogeneous processors
    70.
    Granted patent
    Dynamically partitioning processing across a plurality of heterogeneous processors (expired)

    Publication No.: US08091078B2

    Publication Date: 2012-01-03

    Application No.: US12116628

    Filing Date: 2008-05-07

    Abstract: A program is compiled into at least two object files: one object file for each of the supported processor environments. During compilation, code characteristics, such as data locality, computational intensity, and data parallelism, are analyzed and recorded in the object file. During run time, the code characteristics are combined with runtime considerations, such as the current load on the processors and the size of the data being processed, to arrive at an overall value. The overall value is then used to determine which of the processors will be assigned the task. The values are assigned based on the characteristics of the various processors. For example, if one processor is better at handling intensive computations against large streams of data, programs that are highly computationally intensive and process large quantities of data are weighted in favor of that processor. The corresponding object file is then loaded and executed on the assigned processor.

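The scoring step can be sketched as a weighted sum. The exact weighting is not specified by the abstract, so the formula below is an assumption: per-processor weights over the compile-time traits, minus a load penalty, plus a data-size term.

```python
def choose_processor(static_traits, runtime, weights):
    """Sketch: combine compile-time code characteristics with runtime
    state into one overall value per processor and pick the best."""
    scores = {}
    for proc, w in weights.items():
        # weight the recorded code characteristics for this processor
        score = sum(w.get(k, 0) * v for k, v in static_traits.items())
        score -= runtime["load"][proc]                   # penalize busy processors
        score += w.get("data_size", 0) * runtime["data_size"]
        scores[proc] = score
    return max(scores, key=scores.get)
```

With weights favoring one processor for compute-heavy, large-data work, such a program is steered to that processor, matching the example in the abstract; the loader would then pick that processor's object file.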
