Abstract:
Techniques are described herein for evaluating graph processing tasks using a multi-stage pipelining communication mechanism. In a multi-node system comprising a plurality of nodes, each node executes a respective communication agent object comprising a sender lambda function, an intermediate lambda function, and a final receiver lambda function. The sender lambda function is configured to perform one or more sending operations and to generate source messages based on the one or more sending operations, each source message being marked for a particular node of the plurality of nodes. The intermediate lambda function is configured to read source messages marked for and sent to the node, perform one or more intermediate operations based on the one or more source messages, and generate intermediate messages based on the one or more intermediate operations, each intermediate message being marked for a particular node of the plurality of nodes. The final receiver lambda function is configured to read intermediate messages marked for and sent to the node, perform one or more final operations based on the one or more intermediate messages, and generate a final result based on the one or more final operations. Executing the communication agent object on each node comprises executing the sender lambda function, the intermediate lambda function, and the final receiver lambda function.
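As an illustrative sketch only (not taken from the abstract), the three-stage agent could be modeled in Java roughly as follows. The class and member names (PipelinedAgentSketch, CommunicationAgent, Message) and the single-process message routing are assumptions made for demonstration; each stage reads only the messages marked for its own node, with routing by the destination mark between stages.

```java
import java.util.*;
import java.util.function.*;

// Single-process sketch of a three-stage communication agent. Messages are "marked"
// for a destination node and routed between stages. All names are illustrative.
public class PipelinedAgentSketch {

    // A message marked for a particular destination node.
    record Message(int destinationNode, long payload) {}

    static class CommunicationAgent {
        final int nodeId;
        final Supplier<List<Message>> senderLambda;                      // stage 1
        final Function<List<Message>, List<Message>> intermediateLambda; // stage 2
        final Consumer<List<Message>> finalReceiverLambda;               // stage 3

        CommunicationAgent(int nodeId,
                           Supplier<List<Message>> sender,
                           Function<List<Message>, List<Message>> intermediate,
                           Consumer<List<Message>> finalReceiver) {
            this.nodeId = nodeId;
            this.senderLambda = sender;
            this.intermediateLambda = intermediate;
            this.finalReceiverLambda = finalReceiver;
        }
    }

    public static void main(String[] args) {
        int nodeCount = 3;
        long[] finalResults = new long[nodeCount];

        List<CommunicationAgent> agents = new ArrayList<>();
        for (int n = 0; n < nodeCount; n++) {
            final int node = n;
            agents.add(new CommunicationAgent(
                node,
                // Sender lambda: perform sending operations and mark each source
                // message for a particular node.
                () -> {
                    List<Message> out = new ArrayList<>();
                    for (int dest = 0; dest < nodeCount; dest++) {
                        out.add(new Message(dest, node * 10L + dest));
                    }
                    return out;
                },
                // Intermediate lambda: read messages marked for this node, perform an
                // intermediate operation, emit newly marked intermediate messages.
                incoming -> {
                    List<Message> out = new ArrayList<>();
                    for (Message m : incoming) {
                        out.add(new Message((int) (m.payload() % nodeCount), m.payload() * 2));
                    }
                    return out;
                },
                // Final receiver lambda: aggregate messages marked for this node.
                incoming -> finalResults[node] =
                    incoming.stream().mapToLong(Message::payload).sum()));
        }

        // Stage 1 on every node, then route source messages by their destination mark.
        List<Message> sourceMessages = new ArrayList<>();
        agents.forEach(a -> sourceMessages.addAll(a.senderLambda.get()));

        // Stage 2 on every node, then route the intermediate messages.
        List<Message> intermediateMessages = new ArrayList<>();
        for (CommunicationAgent a : agents) {
            intermediateMessages.addAll(a.intermediateLambda.apply(
                sourceMessages.stream().filter(m -> m.destinationNode() == a.nodeId).toList()));
        }

        // Stage 3 on every node: produce the final per-node result.
        for (CommunicationAgent a : agents) {
            a.finalReceiverLambda.accept(
                intermediateMessages.stream().filter(m -> m.destinationNode() == a.nodeId).toList());
        }

        System.out.println(Arrays.toString(finalResults));
    }
}
```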
Abstract:
Techniques for generating and transferring bulk messages from one computing device to another computing device in a cluster are provided. Each computing device in a cluster is assigned a different set of nodes of a graph. A first computing device may be assigned a particular node that neighbors multiple other nodes assigned to one or more other computing devices in the cluster. When processing graph-related code at the first computing device, information about the neighbors may be required. The first computing device receives a bulk message from one of the other computing devices. The bulk message contains information about at least a subset of the neighbors. Therefore, the first computing device is not required to send multiple messages for information about the subset of neighbors; in fact, it is not required to send any message for that information.
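A minimal Java sketch of the bulk-message idea is shown below, under the assumption of simple modulo partitioning and with illustrative names (BulkMessageSketch, ownerOf, NUM_MACHINES): the sending machine groups all neighbor values destined for another machine into a single message, so the receiving machine never has to request them individually.

```java
import java.util.*;

// Sketch: pack neighbor values into one bulk message per destination machine.
// Names and the modulo partitioning are illustrative assumptions.
public class BulkMessageSketch {

    static final int NUM_MACHINES = 2;

    // Which machine is assigned a given vertex (simple modulo partitioning assumption).
    static int ownerOf(int vertex) { return vertex % NUM_MACHINES; }

    // A bulk message: many (vertex, value) pairs sent to one destination machine at once.
    record BulkMessage(int destinationMachine, Map<Integer, Double> vertexValues) {}

    public static void main(String[] args) {
        // Edges loaded on machine 0: vertex -> neighbors (some neighbors live on machine 1).
        Map<Integer, List<Integer>> adjacencyOnMachine0 = Map.of(
            0, List.of(1, 3, 2),
            2, List.of(1, 5));
        // Local property values for the vertices machine 0 owns.
        Map<Integer, Double> localValues = Map.of(0, 0.25, 2, 0.75);

        // Instead of answering per-neighbor requests, machine 0 pushes the values its
        // remote neighbors will need, grouped into one bulk message per machine.
        Map<Integer, Map<Integer, Double>> outgoing = new HashMap<>();
        adjacencyOnMachine0.forEach((vertex, neighbors) -> {
            for (int neighbor : neighbors) {
                int dest = ownerOf(neighbor);
                if (dest != 0) {  // neighbor lives on another machine
                    outgoing.computeIfAbsent(dest, d -> new HashMap<>())
                            .put(vertex, localValues.get(vertex));
                }
            }
        });

        outgoing.forEach((dest, values) ->
            System.out.println(new BulkMessage(dest, values)));
        // Machine 1 now has every value it needs without sending a single request.
    }
}
```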
Abstract:
Techniques are provided for dynamically self-balancing communication and computation. In an embodiment, each partition of application data is stored on a respective computer of a cluster. The application is divided into distributed jobs, each of which corresponds to a partition. Each distributed job is hosted on the computer that hosts the corresponding data partition. Each computer divides its distributed job into computation tasks. Each computer has a pool of threads that execute the computation tasks. During execution, one computer receives a data access request from another computer. The data access request is executed by a thread of the pool. Threads of the pool are bimodal and may be repurposed between communication and computation, depending on workload. Each computer individually detects completion of its computation tasks. Each computer informs a central computer that its distributed job has finished. The central computer detects when all distributed jobs of the application have terminated.
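One way to picture the bimodal thread pool is the following Java sketch, in which the same worker threads drain both a computation-task queue and a queue of (simulated) remote data-access requests, preferring communication when requests are pending. The queue-based structure and all names here are assumptions for illustration, not the patented implementation.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

// Sketch of a bimodal worker pool on one computer of the cluster.
public class BimodalPoolSketch {

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> computeTasks = new LinkedBlockingQueue<>();
        BlockingQueue<Runnable> remoteRequests = new LinkedBlockingQueue<>();
        AtomicInteger pendingComputeTasks = new AtomicInteger();

        // This computer's distributed job, divided into computation tasks.
        for (int i = 0; i < 8; i++) {
            final int taskId = i;
            pendingComputeTasks.incrementAndGet();
            computeTasks.add(() -> System.out.println("compute task " + taskId));
        }
        // Data access requests arriving from other computers (simulated as pre-queued work).
        for (int i = 0; i < 3; i++) {
            final int requestId = i;
            remoteRequests.add(() -> System.out.println("serve remote request " + requestId));
        }

        int poolSize = 4;
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        CountDownLatch workersDone = new CountDownLatch(poolSize);
        for (int t = 0; t < poolSize; t++) {
            pool.submit(() -> {
                // Bimodal loop: a thread serves communication when requests are pending,
                // otherwise it runs computation tasks.
                while (pendingComputeTasks.get() > 0 || !remoteRequests.isEmpty()) {
                    Runnable request = remoteRequests.poll();
                    if (request != null) {           // communication mode
                        request.run();
                        continue;
                    }
                    Runnable task = computeTasks.poll();
                    if (task != null) {              // computation mode
                        task.run();
                        pendingComputeTasks.decrementAndGet();
                    } else {
                        Thread.onSpinWait();         // other workers hold the last tasks
                    }
                }
                workersDone.countDown();
            });
        }
        workersDone.await();
        pool.shutdown();
        // Completion detected locally; the computer would now inform the central computer
        // that its distributed job has finished.
        System.out.println("distributed job on this computer finished");
    }
}
```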
Abstract:
Techniques herein provide job control and synchronization of distributed graph-processing jobs. In an embodiment, a computer system maintains an input queue of graph processing jobs. In response to de-queuing a graph processing job, a master thread partitions the graph processing job into distributed jobs. Each distributed job has a sequence of processing phases. The master thread sends each distributed job to a distributed processor. Each distributed job executes a first processing phase of its sequence of processing phases. To the master thread, the distributed job announces completion of its first processing phase. The master thread detects that all distributed jobs have announced finishing their first processing phase. The master thread broadcasts a notification to the distributed jobs that indicates that all distributed jobs have finished their first processing phase. Receiving that notification causes the distributed jobs to execute their second processing phase. Queues and barriers provide for faults and cancellation.
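The announce-and-broadcast barrier between processing phases behaves much like a java.util.concurrent.Phaser, as in the sketch below. Using a Phaser within a single process is only a stand-in for the master's message exchange, and the job and phase counts are illustrative.

```java
import java.util.concurrent.*;

// Sketch of phase synchronization: each distributed job announces completion of a phase
// and waits until every job has done so, then all jobs advance to the next phase.
public class PhaseBarrierSketch {

    public static void main(String[] args) {
        int numJobs = 3, numPhases = 2;
        // Stand-in for the master: trips once every registered job has announced the phase.
        Phaser master = new Phaser(numJobs);

        ExecutorService distributedProcessors = Executors.newFixedThreadPool(numJobs);
        for (int j = 0; j < numJobs; j++) {
            final int jobId = j;
            distributedProcessors.submit(() -> {
                for (int phase = 0; phase < numPhases; phase++) {
                    System.out.println("job " + jobId + " executing phase " + phase);
                    // Announce completion of this phase and wait for the broadcast that
                    // all distributed jobs have finished it.
                    master.arriveAndAwaitAdvance();
                }
            });
        }
        distributedProcessors.shutdown();
    }
}
```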
Abstract:
Techniques herein minimally communicate between computers to repartition a graph. In embodiments, each computer receives a partition of edges and vertices of the graph. For each of its edges or vertices, each computer stores an intermediate representation into an edge table (ET) or vertex table. Different edges of a vertex may be loaded by different computers, which may cause a conflict. Each computer announces that a vertex resides on the computer to a respective tracking computer. Each tracking computer makes assignments of vertices to computers and publicizes those assignments. Each computer that loaded conflicted vertices transfers those vertices to computers of the respective assignments. Each computer stores a materialized representation of a partition based on: the ET and vertex table of the computer, and the vertices and edges that were transferred to the computer. Edges stored in the materialized representation are stored differently than edges stored in the ET.
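As a rough illustration of the two edge representations, the following Java sketch builds an intermediate edge table of (source, destination) rows in load order and then materializes a CSR-style layout that groups each vertex's edges together. The CSR choice is an assumption used for demonstration, not a claim about the patented format.

```java
import java.util.*;

// Sketch: intermediate edge table vs. a materialized CSR representation.
public class EdgeTableSketch {

    public static void main(String[] args) {
        // Intermediate representation: edge table, one row per loaded edge, in load order.
        int[][] edgeTable = { {0, 2}, {1, 0}, {0, 1}, {2, 1} };
        int numVertices = 3;

        // Materialized representation: CSR arrays (row offsets plus a packed neighbor
        // list), storing edges differently than the edge table.
        int[] degree = new int[numVertices];
        for (int[] e : edgeTable) degree[e[0]]++;

        int[] rowBegin = new int[numVertices + 1];
        for (int v = 0; v < numVertices; v++) rowBegin[v + 1] = rowBegin[v] + degree[v];

        int[] neighbors = new int[edgeTable.length];
        int[] cursor = rowBegin.clone();
        for (int[] e : edgeTable) neighbors[cursor[e[0]]++] = e[1];

        System.out.println("rowBegin  = " + Arrays.toString(rowBegin));
        System.out.println("neighbors = " + Arrays.toString(neighbors));
    }
}
```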
Abstract:
Techniques minimize communication while loading a graph. In a distributed embodiment, each computer loads some edges of the graph. Each edge connects a source vertex (SV) to a destination vertex. For each SV of its edges, the computer hashes the SV to detect a tracking computer (TrC) that tracks on which computer the SV resides. Each computer informs the TrC that the SV originates an edge that resides on that computer. For each SV, the TrC detects that the SV originates edges that reside on multiple providing computers (PCs). The TrC selects a target computer (TaC) from the multiple PCs to host the SV. The TrC instructs each PC, excluding the TaC, to transfer the SV and the edges connected to the SV to the TaC. A vertex's internal identifier indicates which computer hosts the vertex. The TrC maintains a mapping between external and internal identifiers.
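A hedged Java sketch of the tracking protocol follows: the external vertex identifier is hashed to a tracking computer, the tracking computer picks one host among the providing computers that reported the vertex, and the host is encoded in the internal identifier. The hash function, the lowest-id selection policy, and the identifier bit layout are all illustrative assumptions.

```java
import java.util.*;

// Sketch of the tracking-computer protocol for conflicted source vertices.
public class VertexTrackingSketch {

    static final int NUM_COMPUTERS = 4;

    // Hash a source vertex's external identifier to its tracking computer.
    static int trackingComputerOf(long externalVertexId) {
        return (int) Math.floorMod(externalVertexId, (long) NUM_COMPUTERS);
    }

    // Internal identifier: host computer in the high bits, local index in the low bits
    // (an assumed layout; it lets any computer read the host from the identifier).
    static long internalId(int hostComputer, long localIndex) {
        return ((long) hostComputer << 48) | localIndex;
    }

    static int hostOf(long internalId) {
        return (int) (internalId >>> 48);
    }

    public static void main(String[] args) {
        // Reports received by tracking computers: external vertex id -> computers that
        // loaded an edge originating at that vertex (the "providing computers").
        Map<Long, Set<Integer>> reports = Map.of(
            7L, Set.of(0, 2, 3),   // vertex 7 has edges loaded on computers 0, 2 and 3
            11L, Set.of(1));       // vertex 11 was loaded on computer 1 only

        reports.forEach((externalId, providingComputers) -> {
            int tracker = trackingComputerOf(externalId);
            // Select a target computer among the providers (lowest id here, an
            // arbitrary illustrative policy).
            int target = Collections.min(providingComputers);
            long internal = internalId(target, externalId);
            System.out.println("tracker " + tracker + ": vertex " + externalId
                + " hosted on computer " + hostOf(internal)
                + "; computers instructed to transfer: "
                + providingComputers.stream().filter(c -> c != target).toList());
        });
    }
}
```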
Abstract:
Techniques herein perform workload-balanced graph partitioning. Each graph partition is distributed to a respective computer. Each computer applies a workload-estimation function to its partition to calculate a numeric workload-value that indicates how much computation the partition needs. Each computer sends its numeric workload-value to a master computer. The master compares the highest and lowest numeric workload-values. If the difference exceeds a threshold, the master determines how much work overloaded computers should offload to under-utilized computers. To each overloaded computer, the master sends a directive with a balancing numeric workload-value that indicates how much computation to offload and an identifier of an under-utilized computer to receive the offload. Based on this directive and the workload-estimation function, an overloaded computer selects a portion of its partition that corresponds to the balancing numeric workload-value, removes that portion from its partition, and transfers the portion to the under-utilized computer, which adds the portion to its partition.
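The rebalancing decision can be sketched in a few lines of Java: each computer reports a workload-value, the master compares the extremes, and if the gap exceeds a threshold it directs the most loaded computer to offload roughly half the gap to the least loaded one. The workload estimate, threshold, and half-the-gap policy are assumptions made for illustration.

```java
import java.util.*;

// Sketch of the master's rebalancing directive.
public class WorkloadBalanceSketch {

    public static void main(String[] args) {
        // Workload-value reported by each computer (e.g., an edge-count estimate).
        long[] workload = { 900, 400, 200 };
        long threshold = 300;

        int mostLoaded = 0, leastLoaded = 0;
        for (int c = 1; c < workload.length; c++) {
            if (workload[c] > workload[mostLoaded]) mostLoaded = c;
            if (workload[c] < workload[leastLoaded]) leastLoaded = c;
        }

        long difference = workload[mostLoaded] - workload[leastLoaded];
        if (difference > threshold) {
            // Directive: offload half of the gap so both computers move toward the mean.
            long amountToOffload = difference / 2;
            System.out.println("directive: computer " + mostLoaded + " offloads workload "
                + amountToOffload + " to computer " + leastLoaded);
            // The overloaded computer would now use the workload-estimation function to
            // select a portion of its partition matching amountToOffload and transfer it.
        } else {
            System.out.println("partitions are balanced; no directive sent");
        }
    }
}
```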