Method and system for managing network-to-network interconnection

    Publication Number: US11575540B2

    Publication Date: 2023-02-07

    Application Number: US17671265

    Application Date: 2022-02-14

    Abstract: This disclosure describes methods and systems to externally manage network-to-network interconnect configuration data in conjunction with a centralized database subsystem. An example of the methods includes receiving and storing, in the centralized database subsystem, data indicative of user intent to interconnect at least a first network and a second network. The example method further includes, based at least in part on the data indicative of user intent, determining and storing, in the centralized database subsystem, a network intent that corresponds to the user intent. The example method further includes providing data indicative of the network intent from the centralized database subsystem to a first data plane adaptor, associated with the first network, and a second data plane adaptor, associated with the second network.
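    The following Python sketch illustrates, at a very high level, the flow the abstract describes: storing user intent in a centralized database subsystem, deriving a corresponding network intent, and providing it to the data plane adaptors of both networks. All class, function, and field names (CentralizedDatabase, DataPlaneAdaptor, derive_network_intent, bandwidth_mbps) are hypothetical assumptions for illustration and do not come from the patent.

class CentralizedDatabase:
    """Centralized database subsystem holding user intents and network intents."""
    def __init__(self):
        self.user_intents = {}
        self.network_intents = {}

    def store_user_intent(self, intent_id, user_intent):
        self.user_intents[intent_id] = user_intent

    def store_network_intent(self, intent_id, network_intent):
        self.network_intents[intent_id] = network_intent


class DataPlaneAdaptor:
    """Adaptor associated with one network; receives the network intent to apply."""
    def __init__(self, network_name):
        self.network_name = network_name
        self.applied = []

    def apply(self, network_intent):
        # In a real system this would program the network's data plane.
        self.applied.append(network_intent)


def derive_network_intent(user_intent):
    """Translate the stored user intent into a concrete interconnect intent."""
    return {
        "peering": (user_intent["network_a"], user_intent["network_b"]),
        "bandwidth_mbps": user_intent.get("bandwidth_mbps", 1000),
    }


db = CentralizedDatabase()
adaptor_a = DataPlaneAdaptor("network-a")
adaptor_b = DataPlaneAdaptor("network-b")

# 1. Receive and store data indicative of user intent to interconnect two networks.
user_intent = {"network_a": "network-a", "network_b": "network-b", "bandwidth_mbps": 500}
db.store_user_intent("intent-1", user_intent)

# 2. Determine and store the network intent that corresponds to the user intent.
network_intent = derive_network_intent(user_intent)
db.store_network_intent("intent-1", network_intent)

# 3. Provide the network intent to the adaptors associated with each network.
adaptor_a.apply(network_intent)
adaptor_b.apply(network_intent)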

    HIGHLY-AVAILABLE DISTRIBUTED NETWORK ADDRESS TRANSLATION (NAT) ARCHITECTURE WITH FAILOVER SOLUTIONS

    Publication Number: US20210103507A1

    Publication Date: 2021-04-08

    Application Number: US16592613

    Application Date: 2019-10-03

    Abstract: This disclosure describes techniques for providing a distributed, scalable architecture for Network Address Translation (NAT) systems with high availability and mitigations for flow breakage during failover events. The NAT servers may include functionality to serve as fast-path servers and/or slow-path servers. A fast-path server may include a NAT worker that maintains a cache of NAT mappings to perform stateful network address translation and to forward packets with minimal latency. A slow-path server may include a mapping server that creates new NAT mappings, deprecates old ones, and answers NAT worker state requests. The NAT system may use virtual mapping servers (VMSs) running on primary physical servers, with state-duplicated VMSs on different physical failover servers. Additionally, the NAT servers may implement failover solutions for dynamically allocated routable address/port pairs assigned to new sessions by assigning new outbound address/port pairs when a session starts and broadcasting pairing information.
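    A minimal Python sketch of the fast-path/slow-path split described above follows: a NAT worker serves translations from a local cache and falls back to a mapping server on a miss. The names (NatWorker, MappingServer) and the round-robin port allocation are assumptions for illustration; the VMS replication and failover broadcasting of the actual system are omitted.

import itertools


class MappingServer:
    """Slow path: creates new NAT mappings and answers NAT worker state requests."""
    def __init__(self, public_ip, port_pool):
        self.public_ip = public_ip
        self.ports = itertools.cycle(port_pool)
        self.mappings = {}

    def get_or_create(self, private_flow):
        if private_flow not in self.mappings:
            # Assign a new outbound address/port pair when a session starts.
            self.mappings[private_flow] = (self.public_ip, next(self.ports))
        return self.mappings[private_flow]


class NatWorker:
    """Fast path: caches mappings and translates packets with minimal latency."""
    def __init__(self, mapping_server):
        self.cache = {}
        self.mapping_server = mapping_server

    def translate(self, private_flow):
        if private_flow not in self.cache:
            # Cache miss: request the mapping from the slow-path mapping server.
            self.cache[private_flow] = self.mapping_server.get_or_create(private_flow)
        return self.cache[private_flow]


mapping_server = MappingServer("203.0.113.10", range(40000, 40010))
worker = NatWorker(mapping_server)
print(worker.translate(("10.0.0.5", 51000)))  # ('203.0.113.10', 40000)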

    REACTIVE APPROACH TO RESOURCE ALLOCATION FOR MICRO-SERVICES BASED INFRASTRUCTURE

    Publication Number: US20200328977A1

    Publication Date: 2020-10-15

    Application Number: US16380401

    Application Date: 2019-04-10

    Abstract: Systems, methods, and computer-readable media are provided for predictive content pre-fetching and allocation of resources for providing network service access. In some examples, traffic in a network environment is monitored and a network service related to a requested network service is recognized. A UDP probe for the related network service is sent to at least one candidate server of a plurality of candidate servers within the network environment. A candidate server of the plurality of candidate servers is selected for provisioning of the related network service, and the selected candidate server gathers one or more pre-fetched resources for that purpose. Accordingly, traffic associated with the related network service can be steered by a load balancer to the candidate server, which provisions the service using the one or more pre-fetched resources.
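    The sketch below, in Python, shows one plausible reading of the probing step: a UDP probe for the related service is sent to candidate servers, and the first responder is selected so a load balancer can steer traffic to it. The addresses, port, and probe payload are illustrative assumptions, not details from the patent.

import socket

# Hypothetical candidate servers that listen for pre-fetch probes over UDP.
CANDIDATES = [("10.0.1.10", 9000), ("10.0.1.11", 9000)]


def probe_candidates(service_name, candidates, timeout=0.5):
    """Send a UDP probe for the related service and return the first responder."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for address in candidates:
            try:
                sock.sendto(f"PREFETCH {service_name}".encode(), address)
                _, responder = sock.recvfrom(1024)
                return responder  # candidate selected to pre-fetch and provision
            except OSError:
                continue  # no response (timeout or unreachable); try the next one
        return None
    finally:
        sock.close()


selected = probe_candidates("related-service", CANDIDATES)
# A load balancer would then steer traffic for the related service to `selected`,
# which has already gathered the pre-fetched resources needed to provision it.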

    EFFICIENT AND FLEXIBLE LOAD-BALANCING FOR CLUSTERS OF CACHES UNDER LATENCY CONSTRAINT

    Publication Number: US20200244758A1

    Publication Date: 2020-07-30

    Application Number: US16261462

    Application Date: 2019-01-29

    Abstract: The present technology provides a system, method, and computer-readable medium for steering a content request among a plurality of cache servers based on a multi-level assessment of content popularity. In some embodiments, three levels of popularity may be determined, comprising popular, semi-popular, and unpopular designations for the queried content. The processing of the query and delivery of the requested content depend on the aforementioned popularity designation and comprise acceptance of the query at the edge cache server to which the query was originally directed, rejection of the query and redirection to a second edge cache server, or redirection of the query to the origin server to deliver the requested content. The proposed technology results in a higher hit ratio for edge cache clusters by steering requests for semi-popular content to one or more additional cache servers while forwarding requests for unpopular content to the origin server.
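    The following Python sketch illustrates the three-level popularity steering described above, using a simple request counter as the popularity signal. The thresholds and names are illustrative assumptions; the patent's actual popularity assessment and latency constraints are not reproduced here.

from collections import Counter

POPULAR_THRESHOLD = 100      # hits at or above this count are treated as "popular"
SEMI_POPULAR_THRESHOLD = 10  # hits at or above this count are treated as "semi-popular"

request_counts = Counter()


def classify(content_id):
    """Assign one of three popularity levels based on observed request counts."""
    hits = request_counts[content_id]
    if hits >= POPULAR_THRESHOLD:
        return "popular"
    if hits >= SEMI_POPULAR_THRESHOLD:
        return "semi-popular"
    return "unpopular"


def steer(content_id, first_edge_cache, second_edge_cache, origin_server):
    """Decide where a content request is served based on its popularity level."""
    request_counts[content_id] += 1
    level = classify(content_id)
    if level == "popular":
        return first_edge_cache    # accept the query at the originally targeted edge cache
    if level == "semi-popular":
        return second_edge_cache   # redirect the query to an additional edge cache server
    return origin_server           # forward the unpopular request to the origin server


print(steer("video-42", "edge-1", "edge-2", "origin"))  # 'origin' on the first request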