Controller driven reconfiguration of a multi-layered application or service model

    Publication No.: US10516568B2

    Publication Date: 2019-12-24

    Application No.: US14841659

    Filing Date: 2015-08-31

    Applicant: Nicira, Inc.

    Abstract: Some embodiments provide novel inline switches that distribute data messages from source compute nodes (SCNs) to different groups of destination service compute nodes (DSCNs). In some embodiments, the inline switches are deployed in the source compute nodes' datapaths (e.g., egress datapaths). The inline switches in some embodiments are service switches that (1) receive data messages from the SCNs, (2) identify service nodes in a service-node cluster for processing the data messages based on service policies that the switches implement, and (3) use tunnels to send the received data messages to their identified service nodes. Alternatively, or conjunctively, the inline service switches of some embodiments (1) identify service-node clusters for processing the data messages based on service policies that the switches implement, and (2) use tunnels to send the received data messages to the identified service-node clusters. The service-node clusters can perform the same service or can perform different services in some embodiments. This tunnel-based approach for distributing data messages to service nodes/clusters is advantageous for seamlessly implementing in a datacenter a cloud-based XaaS model (where XaaS stands for X as a service, and X stands for anything), in which any number of services are provided by service providers in the cloud.
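
    The dispatch described in this abstract (match a data message against service policies in the egress datapath, pick a service node from the matching cluster, and tunnel the message to it) can be illustrated with a minimal sketch. This is only an illustration of the idea, not the patented implementation; the names (InlineServiceSwitch, ServicePolicy) and the hash-based node selection are assumptions introduced here.

        # Minimal sketch of the inline service-switch dispatch described above.
        # InlineServiceSwitch, ServicePolicy, and the hash-based selection are
        # illustrative assumptions, not the actual implementation.
        from dataclasses import dataclass
        from typing import Callable, List, Optional

        @dataclass
        class DataMessage:
            src_ip: str
            dst_ip: str
            dst_port: int

        @dataclass
        class ServicePolicy:
            matches: Callable[[DataMessage], bool]   # e.g., a 5-tuple match condition
            service_nodes: List[str]                 # tunnel endpoints of one service-node cluster

        class InlineServiceSwitch:
            """Sits in a source compute node's egress datapath."""

            def __init__(self, policies: List[ServicePolicy]):
                self.policies = policies

            def process(self, msg: DataMessage) -> None:
                policy = self._match_policy(msg)
                if policy is None:
                    self._forward(msg)            # no service policy applies: forward normally
                    return
                node = self._pick_service_node(policy, msg)
                self._tunnel_send(node, msg)      # encapsulate and send over a tunnel

            def _match_policy(self, msg: DataMessage) -> Optional[ServicePolicy]:
                return next((p for p in self.policies if p.matches(msg)), None)

            def _pick_service_node(self, policy: ServicePolicy, msg: DataMessage) -> str:
                # Simple hash-based spreading; the abstract does not prescribe a scheme.
                flow = (msg.src_ip, msg.dst_ip, msg.dst_port)
                return policy.service_nodes[hash(flow) % len(policy.service_nodes)]

            def _forward(self, msg: DataMessage) -> None:
                print(f"forward to {msg.dst_ip}:{msg.dst_port}")

            def _tunnel_send(self, node: str, msg: DataMessage) -> None:
                print(f"tunnel to service node {node} for {msg.dst_ip}:{msg.dst_port}")

        # Example: HTTP traffic is steered to a two-node service cluster.
        switch = InlineServiceSwitch([ServicePolicy(
            matches=lambda m: m.dst_port == 80,
            service_nodes=["10.0.1.1", "10.0.1.2"])])
        switch.process(DataMessage("10.0.0.5", "203.0.113.9", 80))
        switch.process(DataMessage("10.0.0.5", "203.0.113.9", 22))   # no policy: forwarded normally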

    INLINE LOAD BALANCING
    2.
    Invention Application

    Publication No.: US20190288947A1

    Publication Date: 2019-09-19

    Application No.: US16427294

    Filing Date: 2019-05-30

    Applicant: Nicira, Inc.

    Abstract: Some embodiments provide a novel method for load balancing data messages that are sent by a source compute node (SCN) to one or more different groups of destination compute nodes (DCNs). In some embodiments, the method deploys a load balancer in the source compute node's egress datapath. This load balancer receives each data message sent from the source compute node and determines whether the data message is addressed to one of the DCN groups for which the load balancer spreads data traffic across the group's DCNs to balance the load. When the received data message is not addressed to one of the load-balanced DCN groups, the load balancer forwards the received data message to its addressed destination. On the other hand, when the received data message is addressed to one of the load balancer's DCN groups, the load balancer identifies a DCN in the addressed DCN group that should receive the data message, and directs the data message to the identified DCN. To direct the data message to the identified DCN, the load balancer in some embodiments changes the destination address (e.g., the destination IP address, destination port, destination MAC address, etc.) in the data message from the address of the identified DCN group to the address (e.g., the destination IP address) of the identified DCN.
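
    The per-message decision described above, where traffic not addressed to a load-balanced group is forwarded unchanged while group traffic gets a DCN selected and its destination address rewritten, can be sketched as follows. The names (EgressLoadBalancer, GroupConfig) and the hash-based member selection are illustrative assumptions, not the actual implementation.

        # Sketch of the egress-datapath load balancer described above. GroupConfig,
        # EgressLoadBalancer, and the hash-based selection are illustrative
        # assumptions; the abstract does not mandate them.
        from dataclasses import dataclass, replace
        from typing import Dict, List

        @dataclass(frozen=True)
        class DataMessage:
            src_ip: str
            dst_ip: str     # a group's virtual address, or an ordinary destination
            dst_port: int

        @dataclass
        class GroupConfig:
            vip: str              # address of the DCN group
            dcn_ips: List[str]    # addresses of the group's members

        class EgressLoadBalancer:
            def __init__(self, groups: List[GroupConfig]):
                self.groups: Dict[str, GroupConfig] = {g.vip: g for g in groups}

            def process(self, msg: DataMessage) -> DataMessage:
                group = self.groups.get(msg.dst_ip)
                if group is None:
                    return msg                      # not a load-balanced group: forward unchanged
                dcn_ip = self._pick_dcn(group, msg)
                # Rewrite the destination address (only the IP in this sketch) from the
                # group's address to the selected DCN's address, as the abstract describes.
                return replace(msg, dst_ip=dcn_ip)

            def _pick_dcn(self, group: GroupConfig, msg: DataMessage) -> str:
                return group.dcn_ips[hash((msg.src_ip, msg.dst_port)) % len(group.dcn_ips)]

        # Example: traffic to the group address 10.1.0.100 is spread over two DCNs.
        lb = EgressLoadBalancer([GroupConfig("10.1.0.100", ["10.1.0.11", "10.1.0.12"])])
        print(lb.process(DataMessage("10.0.0.5", "10.1.0.100", 443)))   # rewritten to a member
        print(lb.process(DataMessage("10.0.0.5", "192.0.2.7", 443)))    # passed through unchanged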

    Distributed load balancing systems

    Publication No.: US10135737B2

    Publication Date: 2018-11-20

    Application No.: US14557290

    Filing Date: 2014-12-01

    Applicant: Nicira, Inc.

    Abstract: Some embodiments provide a novel method for load balancing data messages that are sent by a source compute node (SCN) to one or more different groups of destination compute nodes (DCNs). In some embodiments, the method deploys a load balancer in the source compute node's egress datapath. This load balancer receives each data message sent from the source compute node and determines whether the data message is addressed to one of the DCN groups for which the load balancer spreads data traffic across the group's DCNs to balance the load. When the received data message is not addressed to one of the load-balanced DCN groups, the load balancer forwards the received data message to its addressed destination. On the other hand, when the received data message is addressed to one of the load balancer's DCN groups, the load balancer identifies a DCN in the addressed DCN group that should receive the data message, and directs the data message to the identified DCN. To direct the data message to the identified DCN, the load balancer in some embodiments changes the destination address (e.g., the destination IP address, destination port, destination MAC address, etc.) in the data message from the address of the identified DCN group to the address (e.g., the destination IP address) of the identified DCN.

    Sticky Service Sessions in a Datacenter
    4.
    Invention Application
    Status: Pending (Published)

    Publication No.: US20160094661A1

    Publication Date: 2016-03-31

    Application No.: US14841654

    Filing Date: 2015-08-31

    Applicant: Nicira, Inc.

    IPC Classification: H04L29/08 H04L29/06

    Abstract: Some embodiments provide novel inline switches that distribute data messages from source compute nodes (SCNs) to different groups of destination service compute nodes (DSCNs). In some embodiments, the inline switches are deployed in the source compute nodes' datapaths (e.g., egress datapaths). The inline switches in some embodiments are service switches that (1) receive data messages from the SCNs, (2) identify service nodes in a service-node cluster for processing the data messages based on service policies that the switches implement, and (3) use tunnels to send the received data messages to their identified service nodes. Alternatively, or conjunctively, the inline service switches of some embodiments (1) identify service-node clusters for processing the data messages based on service policies that the switches implement, and (2) use tunnels to send the received data messages to the identified service-node clusters. The service-node clusters can perform the same service or can perform different services in some embodiments. This tunnel-based approach for distributing data messages to service nodes/clusters is advantageous for seamlessly implementing in a datacenter a cloud-based XaaS model (where XaaS stands for X as a service, and X stands for anything), in which any number of services are provided by service providers in the cloud.
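
    The abstract does not spell out how sessions are kept sticky, but the title suggests that every message of a service session should reach the same service node. One common way to do that is a per-flow connection table that pins a flow to the node chosen for its first message; the sketch below assumes that approach, and every name in it (StickyServiceSwitch, FlowKey) is hypothetical.

        # Hedged sketch of one common way to keep a service session "sticky": remember
        # the service node picked for a flow's first message and reuse it for the rest
        # of the flow. The flow-table approach and all names (StickyServiceSwitch,
        # FlowKey) are assumptions; the abstract does not spell out the mechanism.
        from dataclasses import dataclass
        from typing import Dict, List, Tuple

        FlowKey = Tuple[str, str, int, int]   # (src_ip, dst_ip, src_port, dst_port)

        @dataclass
        class DataMessage:
            src_ip: str
            dst_ip: str
            src_port: int
            dst_port: int

        class StickyServiceSwitch:
            def __init__(self, service_nodes: List[str]):
                self.service_nodes = service_nodes
                self.flow_table: Dict[FlowKey, str] = {}   # flow -> pinned service node

            def select_node(self, msg: DataMessage) -> str:
                key: FlowKey = (msg.src_ip, msg.dst_ip, msg.src_port, msg.dst_port)
                node = self.flow_table.get(key)
                if node is None:
                    # First message of the flow: pick a node, then pin the session to it
                    # so later messages of the same flow reach the same node.
                    node = self.service_nodes[hash(key) % len(self.service_nodes)]
                    self.flow_table[key] = node
                return node

        switch = StickyServiceSwitch(["10.0.1.1", "10.0.1.2"])
        msg = DataMessage("10.0.0.5", "203.0.113.9", 50012, 80)
        assert switch.select_node(msg) == switch.select_node(msg)   # same node for the same flow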

    Inline Service Switch
    5.
    Invention Application
    Status: Pending (Published)

    Publication No.: US20160094632A1

    Publication Date: 2016-03-31

    Application No.: US14841647

    Filing Date: 2015-08-31

    Applicant: Nicira, Inc.

    Abstract: Some embodiments provide novel inline switches that distribute data messages from source compute nodes (SCNs) to different groups of destination service compute nodes (DSCNs). In some embodiments, the inline switches are deployed in the source compute nodes' datapaths (e.g., egress datapaths). The inline switches in some embodiments are service switches that (1) receive data messages from the SCNs, (2) identify service nodes in a service-node cluster for processing the data messages based on service policies that the switches implement, and (3) use tunnels to send the received data messages to their identified service nodes. Alternatively, or conjunctively, the inline service switches of some embodiments (1) identify service-node clusters for processing the data messages based on service policies that the switches implement, and (2) use tunnels to send the received data messages to the identified service-node clusters. The service-node clusters can perform the same service or can perform different services in some embodiments. This tunnel-based approach for distributing data messages to service nodes/clusters is advantageous for seamlessly implementing in a datacenter a cloud-based XaaS model (where XaaS stands for X as a service, and X stands for anything), in which any number of services are provided by service providers in the cloud.

    Controller Driven Reconfiguration of a Multi-Layered Application or Service Model
    6.
    Invention Application
    Status: Pending (Published)

    Publication No.: US20160094384A1

    Publication Date: 2016-03-31

    Application No.: US14841659

    Filing Date: 2015-08-31

    Applicant: Nicira, Inc.

    IPC Classification: H04L12/24 H04L29/06

    Abstract: Some embodiments provide novel inline switches that distribute data messages from source compute nodes (SCNs) to different groups of destination service compute nodes (DSCNs). In some embodiments, the inline switches are deployed in the source compute nodes' datapaths (e.g., egress datapaths). The inline switches in some embodiments are service switches that (1) receive data messages from the SCNs, (2) identify service nodes in a service-node cluster for processing the data messages based on service policies that the switches implement, and (3) use tunnels to send the received data messages to their identified service nodes. Alternatively, or conjunctively, the inline service switches of some embodiments (1) identify service-node clusters for processing the data messages based on service policies that the switches implement, and (2) use tunnels to send the received data messages to the identified service-node clusters. The service-node clusters can perform the same service or can perform different services in some embodiments. This tunnel-based approach for distributing data messages to service nodes/clusters is advantageous for seamlessly implementing in a datacenter a cloud-based XaaS model (where XaaS stands for X as a service, and X stands for anything), in which any number of services are provided by service providers in the cloud.

    DYNAMICALLY ADJUSTING A DATA COMPUTE NODE GROUP
    9.
    Invention Application
    Status: Pending (Published)

    Publication No.: US20160094631A1

    Publication Date: 2016-03-31

    Application No.: US14815838

    Filing Date: 2015-07-31

    Applicant: Nicira, Inc.

    IPC Classification: H04L29/08

    Abstract: Some embodiments provide a novel method for load balancing data messages that are sent by a source compute node (SCN) to one or more different groups of destination compute nodes (DCNs). In some embodiments, the method deploys a load balancer in the source compute node's egress datapath. This load balancer receives each data message sent from the source compute node and determines whether the data message is addressed to one of the DCN groups for which the load balancer spreads data traffic across the group's DCNs to balance the load. When the received data message is not addressed to one of the load-balanced DCN groups, the load balancer forwards the received data message to its addressed destination. On the other hand, when the received data message is addressed to one of the load balancer's DCN groups, the load balancer identifies a DCN in the addressed DCN group that should receive the data message, and directs the data message to the identified DCN. To direct the data message to the identified DCN, the load balancer in some embodiments changes the destination address (e.g., the destination IP address, destination port, destination MAC address, etc.) in the data message from the address of the identified DCN group to the address (e.g., the destination IP address) of the identified DCN.
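
    The title of this application suggests that a load-balanced DCN group's membership can change at runtime. The sketch below assumes one simple realization, a group object whose member list a controller can grow or shrink while the balancer keeps selecting from the current list; it is not taken from the abstract, and all names are hypothetical.

        # Hedged sketch of adjusting a DCN group's membership at runtime. The update
        # mechanism (a controller calling add_member/remove_member on the group) is an
        # assumption based on the title, not taken from the abstract.
        from typing import List

        class DcnGroup:
            def __init__(self, vip: str, members: List[str]):
                self.vip = vip
                self.members = list(members)

            def add_member(self, dcn_ip: str) -> None:
                if dcn_ip not in self.members:
                    self.members.append(dcn_ip)        # e.g., after scaling the group out

            def remove_member(self, dcn_ip: str) -> None:
                self.members.remove(dcn_ip)            # e.g., after scaling in or a failure

            def pick(self, flow_hash: int) -> str:
                # Selection adapts to the current membership; flows that must stay on
                # one member would additionally need a stickiness mechanism.
                return self.members[flow_hash % len(self.members)]

        group = DcnGroup("10.1.0.100", ["10.1.0.11", "10.1.0.12"])
        group.add_member("10.1.0.13")
        group.remove_member("10.1.0.11")
        print(group.pick(hash(("10.0.0.5", 443))))   # picks from the adjusted membership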
