    21. KNAPSACK-BASED SHARING-AWARE SCHEDULER FOR COPROCESSOR-BASED COMPUTE CLUSTERS
    Invention Application (Pending, Published)

    Publication Number: US20150113542A1

    Publication Date: 2015-04-23

    Application Number: US14506256

    Filing Date: 2014-10-03

    CPC classification number: G06F9/5066 H04L67/1023

    Abstract: A method is provided for controlling a compute cluster having a plurality of nodes. Each of the plurality of nodes has a respective computing device with a main server and one or more coprocessor-based hardware accelerators. The method includes receiving a plurality of jobs for scheduling. The method further includes scheduling the plurality of jobs across the plurality of nodes responsive to a knapsack-based sharing-aware schedule generated by a knapsack-based sharing-aware scheduler. The schedule is generated to co-locate on the same computing device those jobs that are mutually compatible based on a set of requirements, whose fulfillment is determined using a knapsack-based sharing-aware technique that uses memory as the knapsack capacity and minimizes makespan while adhering to coprocessor memory and thread resource constraints.
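    The knapsack formulation in the abstract can be sketched roughly as follows. This is a minimal illustration, not the patented method: the job names, sizes, and the count-maximizing objective are invented for the example, and the thread constraint is omitted for brevity. Jobs are packed into co-location rounds with coprocessor memory as the knapsack capacity; fewer rounds means a shorter makespan.

```python
# Hypothetical sketch of knapsack-style co-location (names and data are
# illustrative). Jobs that fit together within the coprocessor's memory
# capacity share one scheduling round; rounds run back to back.

def pack_round(jobs, mem_capacity):
    """0/1 knapsack over job memory footprints, maximizing the
    number of co-located jobs in one round."""
    # best[m] = (count, chosen job names) achievable with capacity m
    best = [(0, [])] * (mem_capacity + 1)
    for name, mem in jobs:
        for m in range(mem_capacity, mem - 1, -1):
            cand = best[m - mem][0] + 1
            if cand > best[m][0]:
                best[m] = (cand, best[m - mem][1] + [name])
    return best[mem_capacity][1]

def schedule(jobs, mem_capacity):
    """Repeatedly pack rounds until every job is placed."""
    remaining, rounds = list(jobs), []
    while remaining:
        chosen = pack_round(remaining, mem_capacity)
        if not chosen:  # a job larger than capacity: run it alone
            chosen = [remaining[0][0]]
        rounds.append(chosen)
        remaining = [j for j in remaining if j[0] not in chosen]
    return rounds

jobs = [("a", 4), ("b", 3), ("c", 5), ("d", 2)]  # (name, GB of device memory)
rounds = schedule(jobs, mem_capacity=8)         # two rounds instead of four
```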

    22. METHODS OF PROCESSING CORE SELECTION FOR APPLICATIONS ON MANYCORE PROCESSORS
    Invention Application (Granted)

    Publication Number: US20140208331A1

    Publication Date: 2014-07-24

    Application Number: US13858036

    Filing Date: 2013-04-06

    CPC classification number: G06F9/5044 G06F9/5033 G06F2209/509

    Abstract: A runtime method is disclosed that dynamically sets up core containers and thread-to-core affinity for processes running on manycore coprocessors. The method is completely transparent to user applications and incurs low runtime overhead. The method is implemented within a user-space middleware that also performs scheduling and resource management for both offload and native applications using the manycore coprocessors.
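    The core-container idea can be illustrated with a toy sketch (all names and numbers are invented; this is not the patented middleware). Cores are partitioned into disjoint containers, one per co-running process, and each process's threads are mapped round-robin within its own container so that co-located processes never contend for the same core:

```python
# Illustrative sketch: disjoint "core containers" plus a per-container
# thread-to-core map. Policy only; applying it is left to the OS.

def make_containers(num_cores, num_processes):
    """Split core IDs [0, num_cores) into near-equal disjoint sets."""
    base, extra = divmod(num_cores, num_processes)
    containers, start = [], 0
    for i in range(num_processes):
        size = base + (1 if i < extra else 0)
        containers.append(list(range(start, start + size)))
        start += size
    return containers

def thread_affinity(container, num_threads):
    """Round-robin threads over the cores of one container."""
    return {t: container[t % len(container)] for t in range(num_threads)}

containers = make_containers(num_cores=10, num_processes=3)
pinning = thread_affinity(containers[0], num_threads=6)
```

    On Linux, such a mapping could be enforced with `os.sched_setaffinity`; the sketch deliberately keeps the placement policy separate from that enforcement mechanism.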

    23. METHOD FOR SIMULTANEOUS SCHEDULING OF PROCESSES AND OFFLOADING COMPUTATION ON MANY-CORE COPROCESSORS
    Invention Application (Granted)

    Publication Number: US20140208327A1

    Publication Date: 2014-07-24

    Application Number: US13858039

    Filing Date: 2013-04-06

    CPC classification number: G06F9/5044 G06F9/5033 G06F2209/509

    Abstract: A method is disclosed to manage a multi-processor system with one or more manycore devices by managing real-time bag-of-tasks applications for a cluster, wherein each task runs on a single server node, uses the offload programming model, and has a deadline and three specific resource requirements: total processing time, a certain number of manycore devices, and peak memory on each device; when a new task arrives, querying each node scheduler to determine which node can best accept the task, with each node scheduler responding with an estimated completion time and a confidence level, wherein the node schedulers use an urgency-based heuristic to schedule each task and its offloads; responding to an accept/reject query phase, wherein the cluster scheduler sends the task requirements to each node and queries whether the node can accept the task with an estimated completion time and confidence level; and scheduling tasks and offloads using an aging- and urgency-based heuristic, wherein the aging guarantees fairness and the urgency prioritizes tasks and offloads so that a maximal number of deadlines are met.
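    A rough sketch of how an aging-plus-urgency priority might combine the two terms follows. The formula is invented purely for illustration (the abstract does not disclose the actual heuristic): urgency grows as a task's slack to its deadline shrinks, while an aging term grows with waiting time so that no task starves.

```python
# Hypothetical priority function: urgency handles deadlines, aging
# handles fairness. All weights and task data are made up.

def priority(now, arrival, deadline, work_left, aging_weight=0.1):
    slack = deadline - now - work_left       # time to spare before deadline
    urgency = 1.0 / (1.0 + max(slack, 0.0))  # tighter slack => more urgent
    aging = aging_weight * (now - arrival)   # longer wait => higher priority
    return urgency + aging

def pick_next(now, tasks):
    """tasks: list of (name, arrival, deadline, work_left)."""
    return max(tasks, key=lambda t: priority(now, t[1], t[2], t[3]))[0]

tasks = [("t1", 0.0, 100.0, 10.0),   # loose deadline
         ("t2", 0.0, 12.0, 10.0)]    # tight deadline: almost out of slack
```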

    System for application self-optimization in serverless edge computing environments

    Publication Number: US11847510B2

    Publication Date: 2023-12-19

    Application Number: US17964170

    Filing Date: 2022-10-12

    CPC classification number: G06F9/543 G06F9/505

    Abstract: A method for implementing application self-optimization in serverless edge computing environments is presented. The method includes requesting deployment of an application pipeline on data received from a plurality of sensors, the application pipeline including a plurality of microservices, enabling communication between a plurality of pods and a plurality of analytics units (AUs), each pod of the plurality of pods including a sidecar, determining whether each of the plurality of AUs maintains any state to differentiate between stateful AUs and stateless AUs, scaling the stateful AUs and the stateless AUs, enabling communication directly between the sidecars of the plurality of pods, and reusing and resharing common AUs of the plurality of AUs across different applications.
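    One way to picture the stateful/stateless distinction when scaling AUs is a toy routing model (the class and routing scheme are invented for illustration, not the patented system): stateless AUs can be scaled by plain replication with any load balancing, while a stateful AU needs requests for the same key to stick to one replica so its per-key state never has to move.

```python
# Illustrative sketch: sticky (hash-based) routing for stateful AUs,
# round-robin for stateless ones. zlib.crc32 is used because Python's
# built-in hash() of strings is salted per process.
import itertools
import zlib

class AU:
    """A minimal analytics-unit model (names invented)."""
    def __init__(self, name, stateful):
        self.name, self.stateful, self.replicas = name, stateful, 1
        self._rr = itertools.count()

    def scale(self, n):
        self.replicas = n

    def route(self, key):
        if self.stateful:
            # sticky: the same key always reaches the same replica
            return zlib.crc32(key.encode()) % self.replicas
        # stateless: any replica will do, so round-robin is safe
        return next(self._rr) % self.replicas

detector = AU("motion-detector", stateful=True)
detector.scale(3)
r1, r2 = detector.route("camera-7"), detector.route("camera-7")
```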

    Eco: edge-cloud optimization of 5G applications

    Publication Number: US11418618B2

    Publication Date: 2022-08-16

    Application Number: US17515875

    Filing Date: 2021-11-01

    Abstract: A method for optimal placement of microservices of a micro-services-based application in a multi-tiered computing network environment employing 5G technology is presented. The method includes accessing a centralized server or cloud to request a set of services to be deployed on a plurality of sensors associated with a plurality of devices, the set of services including launching an application on a device of the plurality of devices, modeling the application as a directed graph with vertices being microservices and edges representing communication between the microservices, assigning each of the vertices of the directed graph with two cost weights, employing an edge monitor (EM), an edge scheduler (ES), an alerts-manager at edge (AM-E), and a file transfer (FT) at the edge to handle partitioning of the microservices, and dynamically mapping the microservices to the edge or the cloud to satisfy application-specific response times.
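    The two-cost placement model in this abstract lends itself to a small sketch (service names, costs, and the brute-force search are invented for illustration; a real scheduler would not enumerate placements for large graphs): each vertex pays its edge-side or cloud-side cost, and each communication edge whose endpoints land on different tiers pays a transfer cost.

```python
# Hypothetical placement search over a microservice DAG with two cost
# weights per vertex, as a toy version of the edge/cloud mapping idea.
from itertools import product

def best_placement(nodes, edges):
    """nodes: {name: (edge_cost, cloud_cost)}
       edges: {(u, v): transfer_cost paid only if u, v are split}"""
    names = list(nodes)
    best_cost, best_place = float("inf"), None
    for tiers in product(("edge", "cloud"), repeat=len(names)):
        place = dict(zip(names, tiers))
        cost = sum(nodes[n][0] if place[n] == "edge" else nodes[n][1]
                   for n in names)
        cost += sum(c for (u, v), c in edges.items() if place[u] != place[v])
        if cost < best_cost:
            best_cost, best_place = cost, place
    return best_place

# Toy pipeline: decoding is cheap at the edge, detection cheap in the cloud.
nodes = {"decode": (1, 5), "detect": (9, 2), "alert": (1, 4)}
edges = {("decode", "detect"): 1, ("detect", "alert"): 1}
placement = best_placement(nodes, edges)
```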

    Specification and execution of real-time streaming applications

    Publication Number: US11169785B2

    Publication Date: 2021-11-09

    Application Number: US16812792

    Filing Date: 2020-03-09

    Abstract: Systems and methods to specify and execute real-time streaming applications are provided. The method includes specifying an application topology for an application including spouts, bolts, connections, a global hash table, and a topology manager. Each spout receives input data and each bolt transforms the input data; the global hash table allows in-memory communication among the spouts and bolts, and the topology manager manages the application topology. The method includes compiling the application into a shared or static library and exporting a special symbol associated with the application. The runtime system can retrieve the application topology from the shared or static library based on the special symbol and execute it on a single node or distribute it across multiple nodes.
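    A minimal sketch of the spout/bolt topology idea, with a plain dict standing in for the global hash table (the API below is invented for illustration and is not the patented runtime): a spout produces a stream, bolts transform it, and the shared table lets stages exchange results in memory on a single node.

```python
# Illustrative toy topology: generator pipeline as spout -> bolt -> sink,
# with a module-level dict playing the role of the global hash table.

GLOBAL_HT = {}  # stand-in for the in-memory global hash table

def number_spout(n):
    """Spout: emits the input stream 0..n-1."""
    yield from range(n)

def square_bolt(stream):
    """Bolt: transforms each tuple of the stream."""
    for x in stream:
        yield x * x

def sink_bolt(stream):
    """Terminal bolt: publishes its result via the shared table."""
    GLOBAL_HT["sum_of_squares"] = sum(stream)

def run_topology(n):
    sink_bolt(square_bolt(number_spout(n)))

run_topology(4)  # squares of 0..3 flow through the pipeline
```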
