1. METHODS OF PROCESSING CORE SELECTION FOR APPLICATIONS ON MANYCORE PROCESSORS
    Invention application (granted)

    Publication No.: US20140208331A1

    Publication Date: 2014-07-24

    Application No.: US13858036

    Filing Date: 2013-04-06

    CPC classification number: G06F9/5044 G06F9/5033 G06F2209/509

    Abstract: A runtime method is disclosed that dynamically sets up core containers and thread-to-core affinity for processes running on manycore coprocessors. The method is completely transparent to user applications and incurs low runtime overhead. The method is implemented within a user-space middleware that also performs scheduling and resource management for both offload and native applications using the manycore coprocessors.
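The core-container idea in the abstract can be illustrated with a minimal sketch: on Linux, `os.sched_setaffinity` restricts a process to a set of logical CPUs, which is the basic mechanism behind thread-to-core affinity. The container layout and function names below are illustrative assumptions, not the patented middleware.

```python
import os

def core_container(first_core: int, width: int) -> set[int]:
    """One 'container': a contiguous block of logical CPU ids (illustrative)."""
    return set(range(first_core, first_core + width))

def pin_to_container(pid: int, container: set[int]) -> None:
    """Restrict a process (pid 0 = the caller) to the container's cores.
    Linux-only system call."""
    os.sched_setaffinity(pid, container)

# Pin ourselves to a one-core container; the process can now run only on
# logical CPU 0, mimicking an exclusive core reservation.
pin_to_container(0, core_container(0, 1))
```

A real middleware would do this transparently at process launch, without the application calling any affinity API itself, which is the transparency the abstract claims.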


2. METHOD FOR SIMULTANEOUS SCHEDULING OF PROCESSES AND OFFLOADING COMPUTATION ON MANY-CORE COPROCESSORS
    Invention application (granted)

    Publication No.: US20140208327A1

    Publication Date: 2014-07-24

    Application No.: US13858039

    Filing Date: 2013-04-06

    CPC classification number: G06F9/5044 G06F9/5033 G06F2209/509

    Abstract: A method is disclosed to manage a multi-processor system with one or more manycore devices by managing real-time bag-of-tasks applications for a cluster, wherein each task runs on a single server node and uses the offload programming model, and wherein each task has a deadline and three specific resource requirements: total processing time, a certain number of manycore devices, and peak memory on each device; when a new task arrives, querying each node scheduler to determine which node can best accept the task, wherein each node scheduler responds with an estimated completion time and a confidence level, and the node schedulers use an urgency-based heuristic to schedule each task and its offloads; responding to an accept/reject query phase, wherein the cluster scheduler sends the task requirements to each node and queries whether the node can accept the task with an estimated completion time and confidence level; and scheduling tasks and offloads using an aging- and urgency-based heuristic, wherein aging guarantees fairness and urgency prioritizes tasks and offloads so that the maximum number of deadlines is met.
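An urgency-plus-aging ordering of this kind can be sketched as follows. Here "urgency" is taken to be negative slack (deadline minus current time minus remaining work) and "aging" is a waiting-time bonus; the weight and exact formula are illustrative assumptions, since the patented heuristic's details are not given in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float    # absolute deadline
    remaining: float   # estimated processing time still needed
    waiting: float     # time spent queued so far

def priority(task: Task, now: float, aging_weight: float = 0.1) -> float:
    """Higher value = schedule sooner. Shrinking slack raises urgency;
    long waits add an aging bonus so no task starves."""
    slack = task.deadline - now - task.remaining
    return -slack + aging_weight * task.waiting

def pick_next(tasks: list[Task], now: float) -> Task:
    """Dispatch the task with the highest combined priority."""
    return max(tasks, key=lambda t: priority(t, now))

tasks = [
    Task("a", deadline=100, remaining=10, waiting=0),
    Task("b", deadline=30, remaining=25, waiting=0),   # slack = 5: tight
    Task("c", deadline=200, remaining=5, waiting=50),  # old, but relaxed
]
print(pick_next(tasks, now=0.0).name)  # -> "b", the least-slack task
```

The aging term guarantees that even a low-urgency task like "c" eventually overtakes fresher tasks, which mirrors the fairness guarantee the abstract attributes to aging.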


3. COMPILER-GUIDED SOFTWARE ACCELERATOR FOR ITERATIVE HADOOP JOBS
    Invention application (granted)

    Publication No.: US20140047422A1

    Publication Date: 2014-02-13

    Application No.: US13923458

    Filing Date: 2013-06-21

    CPC classification number: G06F8/443 G06F9/52 G06F9/546

    Abstract: Various methods are provided that are directed to a compiler-guided software accelerator for iterative HADOOP jobs. A method includes identifying intermediate data that is generated by an iterative HADOOP application, is below a predetermined threshold size, and is used for less than a predetermined threshold time period. The intermediate data is stored in a memory device. The method further includes minimizing input, output, and synchronization overhead for the intermediate data by selectively using at any given time any one of a Message Passing Interface and Distributed File System as a communication layer. The Message Passing Interface is co-located with the HADOOP Distributed File System.
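The selection rule implied by the abstract can be sketched as a simple threshold test: small, short-lived intermediate data goes through the in-memory message-passing layer, and everything else persists to the distributed file system. The threshold values and names below are illustrative assumptions, not values from the patent.

```python
# Assumed thresholds; in the patent these are "predetermined" but unspecified.
SIZE_THRESHOLD_BYTES = 64 * 1024 * 1024   # 64 MiB
LIFETIME_THRESHOLD_S = 60.0               # one minute

def choose_layer(size_bytes: int, expected_lifetime_s: float) -> str:
    """Pick the communication layer for one piece of intermediate data."""
    if size_bytes < SIZE_THRESHOLD_BYTES and expected_lifetime_s < LIFETIME_THRESHOLD_S:
        return "mpi"   # small and short-lived: keep it in memory
    return "dfs"       # large or long-lived: persist it to HDFS

print(choose_layer(1024, 5.0))    # -> mpi
print(choose_layer(10**9, 5.0))   # -> dfs
```

In the patent a compiler identifies which data qualifies ahead of time; this runtime check just makes the two-way choice concrete.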


4. Compiler-guided software accelerator for iterative HADOOP® jobs
    Granted patent (in force)

    Publication No.: US09201638B2

    Publication Date: 2015-12-01

    Application No.: US13923458

    Filing Date: 2013-06-21

    CPC classification number: G06F8/443 G06F9/52 G06F9/546

    Abstract: Various methods are provided that are directed to a compiler-guided software accelerator for iterative HADOOP® jobs. A method includes identifying intermediate data that is generated by an iterative HADOOP® application, is below a predetermined threshold size, and is used for less than a predetermined threshold time period. The intermediate data is stored in a memory device. The method further includes minimizing input, output, and synchronization overhead for the intermediate data by selectively using at any given time any one of a Message Passing Interface and Distributed File System as a communication layer. The Message Passing Interface is co-located with the HADOOP® Distributed File System.


5. USER-LEVEL MANAGER TO HANDLE MULTI-PROCESSING ON MANY-CORE COPROCESSOR-BASED SYSTEMS
    Invention application (pending, published)

    Publication No.: US20140208072A1

    Publication Date: 2014-07-24

    Application No.: US13858034

    Filing Date: 2013-04-06

    CPC classification number: G06F9/5044 G06F9/5033 G06F2209/509

    Abstract: A method is disclosed to manage a multi-processor system with one or more multiple-core coprocessors by intercepting coprocessor offload infrastructure application program interface (API) calls; scheduling user processes to run on one of the coprocessors; scheduling offloads within user processes to run on one of the coprocessors; and affinitizing offloads to predetermined cores within one of the coprocessors by selecting and allocating cores to an offload, and obtaining a thread-to-core mapping from a user.
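The affinitizing step (selecting and allocating cores to an offload, then mapping threads onto them) can be sketched with toy data structures. The free-pool representation and the round-robin mapping are illustrative assumptions; the patent obtains the thread-to-core mapping from the user rather than computing it.

```python
def allocate_cores(free: list[int], k: int) -> list[int]:
    """Remove and return k cores from a coprocessor's free pool
    (lowest ids first), mutating the pool in place."""
    if k > len(free):
        raise RuntimeError("not enough free cores on this coprocessor")
    taken, free[:] = free[:k], free[k:]
    return taken

def map_threads(threads: list[str], cores: list[int]) -> dict[str, int]:
    """Assign offload threads round-robin onto the allocated cores."""
    return {t: cores[i % len(cores)] for i, t in enumerate(threads)}

free_pool = [0, 1, 2, 3]
cores = allocate_cores(free_pool, 2)            # takes [0, 1]; pool keeps [2, 3]
print(map_threads(["t0", "t1", "t2"], cores))   # -> {'t0': 0, 't1': 1, 't2': 0}
```

Because the pool is mutated, a second offload scheduled on the same coprocessor can only draw from the remaining cores, which is what keeps concurrent offloads from sharing cores.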


6. AUTOMATIC PIPELINING FRAMEWORK FOR HETEROGENEOUS PARALLEL COMPUTING SYSTEMS
    Invention application (granted)

    Publication No.: US20130298130A1

    Publication Date: 2013-11-07

    Application No.: US13887044

    Filing Date: 2013-05-03

    CPC classification number: G06F9/4887 G06F8/451

    Abstract: Systems and methods for automatic generation of software pipelines for heterogeneous parallel systems (AHP) include pipelining a program with one or more tasks on a parallel computing platform with one or more processing units and partitioning the program into pipeline stages, wherein each pipeline stage contains one or more tasks. The one or more tasks in the pipeline stages are scheduled onto the one or more processing units, and execution times of the one or more tasks in the pipeline stages are estimated. The above steps are repeated until a specified termination criterion is reached.
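The partition-schedule-estimate loop in the abstract can be sketched with a toy model in which each task has a known execution time and pipeline cost is the slowest stage's total work. The greedy partitioner and the "stop when the bottleneck stops improving" criterion are illustrative assumptions; the patented framework's actual algorithms are not given here.

```python
def bottleneck(stages: list[list[float]]) -> float:
    """Pipeline throughput is limited by the slowest stage."""
    return max(sum(stage) for stage in stages)

def partition(tasks: list[float], n_stages: int) -> list[list[float]]:
    """Greedy split: always append the next-largest task to the
    currently lightest stage."""
    stages: list[list[float]] = [[] for _ in range(n_stages)]
    for t in sorted(tasks, reverse=True):
        min(stages, key=sum).append(t)
    return stages

def tune(tasks: list[float], max_stages: int) -> tuple[int, float]:
    """Repeat partition + estimate, growing the stage count until the
    estimated bottleneck no longer improves (termination criterion)."""
    best_n, best_cost = 1, bottleneck(partition(tasks, 1))
    for n in range(2, max_stages + 1):
        cost = bottleneck(partition(tasks, n))
        if cost >= best_cost:       # no gain: stop iterating
            break
        best_n, best_cost = n, cost
    return best_n, best_cost

print(tune([4.0, 3.0, 2.0, 1.0], max_stages=4))  # -> (3, 4.0)
```

With task times 4, 3, 2, 1, three stages give a bottleneck of 4.0 (the largest single task), and a fourth stage cannot improve on that, so the loop terminates at three stages.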


7. Automatic pipelining framework for heterogeneous parallel computing systems
    Granted patent (in force)

    Publication No.: US09122523B2

    Publication Date: 2015-09-01

    Application No.: US13887044

    Filing Date: 2013-05-03

    CPC classification number: G06F9/4887 G06F8/451

    Abstract: Systems and methods for automatic generation of software pipelines for heterogeneous parallel systems (AHP) include pipelining a program with one or more tasks on a parallel computing platform with one or more processing units and partitioning the program into pipeline stages, wherein each pipeline stage contains one or more tasks. The one or more tasks in the pipeline stages are scheduled onto the one or more processing units, and execution times of the one or more tasks in the pipeline stages are estimated. The above steps are repeated until a specified termination criterion is reached.


8. Method for simultaneous scheduling of processes and offloading computation on many-core coprocessors
    Granted patent (in force)

    Publication No.: US09152467B2

    Publication Date: 2015-10-06

    Application No.: US13858039

    Filing Date: 2013-04-06

    CPC classification number: G06F9/5044 G06F9/5033 G06F2209/509

    Abstract: A method is disclosed to manage a multi-processor system with one or more manycore devices by managing real-time bag-of-tasks applications for a cluster, wherein each task runs on a single server node and uses the offload programming model, and wherein each task has a deadline and three specific resource requirements: total processing time, a certain number of manycore devices, and peak memory on each device; when a new task arrives, querying each node scheduler to determine which node can best accept the task, wherein each node scheduler responds with an estimated completion time and a confidence level, and the node schedulers use an urgency-based heuristic to schedule each task and its offloads; responding to an accept/reject query phase, wherein the cluster scheduler sends the task requirements to each node and queries whether the node can accept the task with an estimated completion time and confidence level; and scheduling tasks and offloads using an aging- and urgency-based heuristic, wherein aging guarantees fairness and urgency prioritizes tasks and offloads so that the maximum number of deadlines is met.

