Max-flow/min-cut solution algorithm for early terminating push-relabel algorithm

    Publication Number: US12223691B2

    Publication Date: 2025-02-11

    Application Number: US17798898

    Filing Date: 2021-09-22

    Abstract: A max-flow/min-cut solution algorithm for early terminating a push-relabel algorithm is provided. The max-flow/min-cut solution algorithm is used for applications that do not require an exact maximum flow, and includes: defining an early termination condition of the push-relabel algorithm by a separation condition and a stable condition; determining that the separation condition is satisfied if no source node s, s∈S, appears in the set T at any time during the operation of the push-relabel algorithm; determining that the stable condition is satisfied if there is no active node in the set T; and terminating the push-relabel algorithm if both the separation condition and the stable condition are satisfied. The proposed early termination technique greatly reduces redundant computations and ensures that the algorithm terminates correctly in all cases.
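
    The two conditions above reduce to a simple predicate over the node partition maintained by the push-relabel algorithm. The following sketch is illustrative only: it assumes the algorithm keeps a source-side set S, a sink-side set T, and a per-node excess map, and all names are hypothetical rather than taken from the patent.

def early_termination(S, T, excess):
    """Illustrative early-termination test for push-relabel (a sketch, not the patented implementation)."""
    # separation condition: no source node s, s in S, appears in the set T
    separated = S.isdisjoint(T)
    # stable condition: no active node (node with positive excess) remains in T
    stable = all(excess[v] <= 0 for v in T)
    return separated and stable

    When both conditions hold, the main push-relabel loop can return early for applications that do not need the exact maximum flow, as the abstract notes.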

    Ripple push method for graph cut

    Publication Number: US11934459B2

    Publication Date: 2024-03-19

    Application Number: US17799278

    Filing Date: 2021-09-22

    CPC classification number: G06F16/9024 G06T7/13 G06T7/162 G06T2207/20072

    Abstract: A ripple push method for a graph cut includes: obtaining an excess flow ef(v) of a current node v; traversing the four edges connecting the current node v in the top, bottom, left, and right directions, and determining whether each of the four edges is a pushable edge; calculating, according to different weight functions, a maximum push value of each of the four edges by efw = ef(v) * W, where W denotes a weight function; and traversing the four edges, recording a pushable flow of each of the four edges, and pushing out the calculated flow. The ripple push method explores different push weight functions and significantly improves the actual parallelism of the push-relabel algorithm.
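
    As an illustration of the push step described above, the sketch below performs one ripple-push pass over the four grid neighbors of a node. The data structures (an excess map, a residual map holding both directions of every edge, a height map) and the weight(v, u) function are assumptions made for this example; they are not taken from the patent.

def ripple_push(v, neighbors, excess, residual, height, weight):
    """One illustrative ripple-push pass from node v to its four neighbors (sketch only)."""
    ef_v = excess[v]                                   # obtain the excess flow ef(v)
    for u in neighbors(v):                             # top, bottom, left, right
        if ef_v <= 0:
            break
        # a pushable edge has residual capacity and leads one level downhill
        if residual[(v, u)] > 0 and height[v] == height[u] + 1:
            efw = ef_v * weight(v, u)                  # maximum push value efw = ef(v) * W
            delta = min(efw, ef_v, residual[(v, u)])   # never push more than is available
            residual[(v, u)] -= delta
            residual[(u, v)] += delta
            excess[u] += delta
            ef_v -= delta
    excess[v] = ef_v

    Because W scales how much flow leaves along each direction, different weight functions spread the excess across several neighbors instead of saturating a single edge, which is how the method aims to improve the actual parallelism noted in the abstract.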

    Method for Disseminating Scaling Information and Application Thereof in VLSI Implementation of Fixed-Point FFT

    Publication Number: US20230179315A1

    Publication Date: 2023-06-08

    Application Number: US18049932

    Filing Date: 2022-10-26

    CPC classification number: H04J11/00

    Abstract: Example embodiments relate to methods for disseminating scaling information and applications thereof in very large scale integration (VLSI) implementations of fixed-point fast Fourier transforms (FFTs). One embodiment includes a method for disseminating scaling information in a system. The system includes a linear decomposable transformation process and the inverse process of that transformation, which is defined, in time or space, as an inverse linear decomposable transformation process. The two processes are separated from each other: either one may be performed first and is then defined as linear decomposable transformation I, while the remaining process, performed subsequently, is defined as linear decomposable transformation II. The method for disseminating scaling information is used for a bit width-optimized and energy-saving hardware implementation.
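
    As a loose, hypothetical illustration of passing scaling information from the transformation performed first to the one performed second, the fragment below records the right-shifts applied by a block-scaling fixed-point stage and lets the later stage compensate for them. The function names and the shift-based scaling are assumptions made for this sketch, not the patented method.

def transformation_I(samples, bit_width=16):
    """Scale a block of fixed-point samples into range and record the scaling applied (sketch)."""
    limit = 1 << (bit_width - 1)
    shifts = 0
    while samples and max(abs(x) for x in samples) >= limit:
        samples = [x >> 1 for x in samples]   # scale the whole block down by 2
        shifts += 1
    return samples, shifts                    # shifts is the scaling information to disseminate

def transformation_II(samples, shifts):
    """Consume the disseminated scaling information and fold it into the final output (sketch)."""
    return [x << shifts for x in samples]     # compensate the scaling recorded earlier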

    Efficient parallel computing method for box filter

    Publication Number: US11094071B1

    Publication Date: 2021-08-17

    Application Number: US17054169

    Filing Date: 2020-06-17

    Abstract: An efficient parallel computing method for a box filter includes: step 1, with respect to a given degree of parallelism N and a radius r of the filter kernel, establishing a first architecture without an extra register and a second architecture with an extra register; step 2, building a first adder tree for the first architecture and a second adder tree for the second architecture, respectively; step 3, searching the first adder tree and the second adder tree from top to bottom, calculating the pixel average corresponding to each filter kernel by using the first adder tree and the second adder tree, respectively, and counting the resources consumed by the first architecture and the second architecture, respectively; and step 4, selecting, from the first architecture and the second architecture, the architecture that consumes fewer resources for computing the box filter.
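
    Both adder-tree architectures evaluate the same quantity: the average of the (2r+1) samples covered by each kernel position. The sketch below shows that underlying box-filter computation in one dimension using a sliding sum; it is for illustration only and does not reproduce the patented adder-tree construction or its resource counting, and the function name is hypothetical.

def box_filter_1d(pixels, r):
    """Average of each (2*r + 1)-wide window over a 1-D signal (illustrative sketch)."""
    width = 2 * r + 1
    window_sum = sum(pixels[:width])                      # sum of the first full window
    out = [window_sum / width]
    for i in range(r + 1, len(pixels) - r):
        window_sum += pixels[i + r] - pixels[i - r - 1]   # slide the window by one pixel
        out.append(window_sum / width)
    return out

    For example, box_filter_1d([1, 2, 3, 4, 5, 6, 7], r=1) returns [2.0, 3.0, 4.0, 5.0, 6.0]; the overlap between neighboring windows is the kind of redundancy an adder-tree construction can exploit when computing N outputs in parallel.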
