MEMORY-TRACKING RESOURCE MANAGER FOR ELASTIC DISTRIBUTED GRAPH-PROCESSING SYSTEM

    Publication No.: US20250094224A1

    Publication Date: 2025-03-20

    Application No.: US18369254

    Filing Date: 2023-09-18

    Abstract: A resource manager tracks the amount of available memory for a cluster of machines and for each machine in the cluster. The resource manager receives a reservation request from a job for a graph processing operation. The reservation request specifies an identification of the job, a type of reservation, and an amount of memory requested. The resource manager determines whether to grant the reservation request based on the type of reservation, the amount of memory requested, and the amount of available memory in the cluster or in one or more machines in the cluster. In response to determining to grant the reservation request, the resource manager sends a response to the job indicating an amount of memory reserved and adjusts the amount of available cluster memory and the amount of available machine memory for at least one machine in the cluster based on the amount of memory reserved.
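The reservation flow described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the reservation types ("strict" vs. "best-effort"), the most-free-memory placement policy, and all names are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class ReservationRequest:
    job_id: str   # identification of the requesting job
    kind: str     # reservation type; "strict" (all-or-nothing) and "best-effort" are assumed here
    amount: int   # bytes of memory requested

class ResourceManager:
    """Tracks available memory for the cluster and for each machine in it."""

    def __init__(self, machine_memory):
        self.machine_memory = dict(machine_memory)           # per-machine available memory
        self.cluster_available = sum(self.machine_memory.values())

    def reserve(self, req):
        """Return the number of bytes reserved for the job (0 means denied)."""
        # Place the reservation on the machine with the most free memory
        # (the actual placement policy is not specified in the abstract).
        machine = max(self.machine_memory, key=self.machine_memory.get)
        free = self.machine_memory[machine]
        if req.kind == "strict" and free < req.amount:
            return 0                         # deny: the request cannot be met in full
        granted = min(req.amount, free)      # best-effort grants whatever fits
        # Adjust both the machine-level and the cluster-level availability.
        self.machine_memory[machine] -= granted
        self.cluster_available -= granted
        return granted
```

A strict reservation is denied outright when no single machine can satisfy it, while a best-effort reservation is trimmed to the available amount; both bookkeeping counters are updated on a grant.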

    Multi-stage pipelining for distributed graph processing

    Publication No.: US11363093B2

    Publication Date: 2022-06-14

    Application No.: US15968637

    Filing Date: 2018-05-01

    Abstract: Techniques are described herein for evaluating graph processing tasks using a multi-stage pipelining communication mechanism. In a multi-node system comprising a plurality of nodes, each node executes a respective communication agent object. The communication agent object comprises: a sender lambda function configured to perform sending operations and generate source messages based on those operations; an intermediate lambda function configured to read source messages marked for a node, perform intermediate operations based on the source messages, and generate intermediate messages based on the intermediate operations; and a final receiver lambda function configured to read intermediate messages marked for each node, perform final operations based on the intermediate messages, and generate a final result based on the final operations.
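The three message-passing stages above can be sketched in a single process. This sketch only models the data flow (messages marked for a destination node and delivered between stages); the real mechanism runs the stages concurrently across machines, and all function names here are illustrative.

```python
from collections import defaultdict

def run_pipeline(nodes, sender, intermediate, final):
    """Run the three lambda stages; every stage emits (dest_node, payload) messages."""
    # Stage 1: each node's sender lambda generates source messages,
    # each marked for a particular destination node.
    source = defaultdict(list)
    for node in nodes:
        for dest, msg in sender(node):
            source[dest].append(msg)
    # Stage 2: each node's intermediate lambda reads the source messages
    # marked for it and generates new intermediate messages.
    inter = defaultdict(list)
    for node in nodes:
        for dest, msg in intermediate(node, source[node]):
            inter[dest].append(msg)
    # Stage 3: each node's final receiver lambda reduces the intermediate
    # messages marked for it into that node's final result.
    return {node: final(node, inter[node]) for node in nodes}
```

For example, with each node forwarding its own id one hop around a ring, doubling it at the intermediate stage, and summing at the final stage, the result per node is fully determined by the two hops of routing.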

    Optimizing graph queries by performing early pruning

    Publication No.: US11250059B2

    Publication Date: 2022-02-15

    Application No.: US16738972

    Filing Date: 2020-01-09

    Abstract: Techniques are described herein for early pruning of potential graph query results. Specifically, based on determining that property values of a path through graph data cannot affect results of a query, the path is pruned from a set of potential query solutions prior to fully exploring the path. Early solution pruning is performed on prunable queries that project prunable functions including MIN, MAX, SUM, and DISTINCT, the results of which are not tied to a number of paths explored for query execution. A database system implements early solution pruning for a prunable query based on intermediate results maintained for the query during query execution. Specifically, when a system determines that property values of a given potential solution path cannot affect the query results reflected in intermediate results maintained for the query, the path is discarded from the set of possible query solutions without further exploration of the path.
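For a MIN aggregate, the early-pruning idea reduces to branch-and-bound over paths: once a partial path's accumulated value can no longer improve the intermediate MIN maintained for the query, the path is discarded without further exploration. The sketch below assumes a shortest-cost query with nonnegative edge weights; it is an illustration of the pruning principle, not the patented query engine.

```python
def min_path_cost(graph, src, dst):
    """Minimum path cost from src to dst, pruning paths that cannot affect the MIN."""
    best = [float("inf")]        # intermediate MIN result maintained during execution

    def explore(node, cost, visited):
        # Early pruning: with nonnegative weights, extending this path can
        # only increase cost, so it cannot affect the query result.
        if cost >= best[0]:
            return
        if node == dst:
            best[0] = cost       # update the intermediate MIN
            return
        for nxt, weight in graph.get(node, []):
            if nxt not in visited:
                explore(nxt, cost + weight, visited | {nxt})

    explore(src, 0, {src})
    return best[0]
```

The same shape applies to MAX (prune when an upper bound falls below the intermediate MAX) and to DISTINCT (prune a path whose projected value is already in the result set).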

    Fast distributed graph query engine

    Publication No.: US10990595B2

    Publication Date: 2021-04-27

    Application No.: US16274210

    Filing Date: 2019-02-12

    Abstract: Techniques are described herein for asynchronous execution of queries on statically replicated graph data. In an embodiment, a graph is partitioned among a plurality of computers executing the graph querying engine. One or more high-degree vertices of the graph are each replicated in each graph partition. The partitions, including the replicated high-degree vertices, are loaded in memory of the plurality of computers. To execute a query, a query plan is generated based on the query. The query plan specifies a plurality of operators and an order for the plurality of operators. The order is such that if an operator requires data generated by another operator, then the other operator is ordered before it in the query plan. Replicated copies of a vertex are visited if matches made by subsequent operators are limited by data unique to the replicated vertices.
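The static-replication step can be sketched as follows: ordinary vertices are hash-partitioned across machines, while every vertex whose degree exceeds a threshold is replicated into every partition. The threshold, the hash placement, and the set representation are assumptions for illustration; the abstract does not specify them.

```python
def partition_graph(vertices, degree, num_partitions, high_degree_threshold):
    """Hash-partition vertices; replicate high-degree (hub) vertices everywhere."""
    partitions = [set() for _ in range(num_partitions)]
    for v in vertices:
        if degree[v] >= high_degree_threshold:
            for part in partitions:          # replicate the hub into every partition
                part.add(v)
        else:
            # Ordinary vertices live in exactly one partition (hash placement assumed).
            partitions[hash(v) % num_partitions].add(v)
    return partitions
```

Replicating hubs trades memory for communication: operators matching through a high-degree vertex can proceed locally instead of sending messages to the partition that owns it.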

    FAST DISTRIBUTED GRAPH QUERY ENGINE
    Invention Application

    Publication No.: US20190354526A1

    Publication Date: 2019-11-21

    Application No.: US16274210

    Filing Date: 2019-02-12

    Abstract: Techniques are described herein for asynchronous execution of queries on statically replicated graph data. In an embodiment, a graph is partitioned among a plurality of computers executing the graph querying engine. One or more high-degree vertices of the graph are each replicated in each graph partition. The partitions, including the replicated high-degree vertices, are loaded in memory of the plurality of computers. To execute a query, a query plan is generated based on the query. The query plan specifies a plurality of operators and an order for the plurality of operators. The order is such that if an operator requires data generated by another operator, then the other operator is ordered before it in the query plan. Replicated copies of a vertex are visited if matches made by subsequent operators are limited by data unique to the replicated vertices.

    MULTI-STAGE PIPELINING FOR DISTRIBUTED GRAPH PROCESSING

    Publication No.: US20190342372A1

    Publication Date: 2019-11-07

    Application No.: US15968637

    Filing Date: 2018-05-01

    Abstract: Techniques are described herein for evaluating graph processing tasks using a multi-stage pipelining communication mechanism. In a multi-node system comprising a plurality of nodes, each node executes a respective communication agent object comprising a sender lambda function, an intermediate lambda function, and a final receiver lambda function. The sender lambda function is configured to perform one or more sending operations and to generate source messages based on those operations, each source message being marked for a particular node of the plurality of nodes. The intermediate lambda function is configured to read the source messages marked for and sent to its node, perform one or more intermediate operations based on those source messages, and generate intermediate messages based on the intermediate operations, each intermediate message being marked for a particular node of the plurality of nodes. The final receiver lambda function is configured to read the intermediate messages marked for and sent to its node, perform one or more final operations based on those intermediate messages, and generate a final result based on the final operations. On each node of the plurality of nodes, executing the communication agent object comprises executing the sender lambda function, the intermediate lambda function, and the final receiver lambda function.
