Systems and methods for providing client-side accelerated access to remote applications via TCP pooling
    61.
    Granted Patent (In Force)

    Publication No.: US08700695B2

    Publication Date: 2014-04-15

    Application No.: US11324138

    Filing Date: 2005-12-30

    CPC Classification: H04L69/16, H04L69/163

    Abstract: The present invention is directed towards systems and methods for dynamically deploying and executing acceleration functionality on a client to improve the performance and delivery of remotely accessed applications. In one embodiment, the client-side acceleration functionality is provided by an acceleration program that performs a transport layer connection pooling technique to improve the performance of communications and the delivery of a remotely accessed application. The acceleration program establishes a transport layer connection from the client to the server that can be used by multiple applications on the client, or that is otherwise shared among the client's applications. The acceleration program keeps the transport layer connection open to reduce the number of transport layer connection requests and the number of transport layer connections established with the server for an application or multiple applications running on the client.
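
    As an illustration of the transport layer pooling idea this abstract describes, the following is a minimal Python sketch of a client-side pool that keeps a small number of TCP connections open for reuse by multiple requesters. The class name, host, and pool size are illustrative assumptions, not the patented implementation.

        # Minimal sketch of client-side transport layer connection reuse.
        # All names and parameters here are illustrative, not from the patent.
        import socket
        import threading

        class SharedConnectionPool:
            """Keeps TCP connections to one server open so multiple client-side
            requesters reuse them instead of opening new ones."""

            def __init__(self, host, port, max_idle=2):
                self.host, self.port = host, port
                self.max_idle = max_idle
                self._idle = []                  # connections kept open between uses
                self._lock = threading.Lock()

            def acquire(self):
                with self._lock:
                    if self._idle:
                        return self._idle.pop()  # reuse an already-open connection
                # Open a new transport layer connection only when none is available.
                return socket.create_connection((self.host, self.port))

            def release(self, sock):
                with self._lock:
                    if len(self._idle) < self.max_idle:
                        self._idle.append(sock)  # keep the connection open for the next requester
                        return
                sock.close()                     # surplus connection: close it

        # Two client-side applications would share the same pool instance,
        # so repeated requests reuse one open connection to the server.
        pool = SharedConnectionPool("app-server.example.com", 443)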


    APPARATUS, METHOD AND COMPUTER PROGRAM PRODUCT FOR GUARANTEED CONTENT DELIVERY INCORPORATING PUTTING A CLIENT ON-HOLD BASED ON RESPONSE TIME
    62.
    Patent Application (In Force)

    Publication No.: US20110060840A1

    Publication Date: 2011-03-10

    Application No.: US12880645

    Filing Date: 2010-09-13

    IPC Classification: G06F15/16

    Abstract: An apparatus, method and computer program product for guaranteeing network client-server response time while providing a way of putting the client on hold when the response time temporarily prohibits access to the requested server. The apparatus is implemented within an interface unit connecting a plurality of servers and an on-hold server to the Internet, which is connected to a plurality of clients. According to one aspect of the invention, the method includes the steps of opening a connection between a client and the interface unit; determining which server the client desires a page from; determining the current response time of the requested server; if the response time is acceptable, opening a connection between the interface unit and the requested server if no free connection is open between them; allowing the client to access information on the requested server via the connections; and closing the connection between the client and the interface unit while keeping open the connection between the interface unit and the requested server. Alternatively, if the response time is not acceptable, the client is put on hold by redirecting it to an on-hold server until the response time of the requested server becomes acceptable. According to an “on-hold distribution” aspect of the invention, the interface unit determines the on-hold preference of the client and selects the server hosting that on-hold preference. According to another aspect of the invention, instead of utilizing the interface unit, each server has the intelligence to put the client on hold when applicable.
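
    The decision described above (forward when the measured response time is acceptable, otherwise redirect to an on-hold server) can be sketched as follows. The threshold, URL, and helper callables are assumptions for illustration only.

        # Hedged sketch of the on-hold decision; threshold, URL and helpers are assumed.
        ACCEPTABLE_RESPONSE_TIME = 2.0                      # seconds (illustrative threshold)
        ON_HOLD_URL = "http://onhold.example.com/please-wait"

        def handle_request(request, response_times, forward):
            """request: parsed client request with a 'host' field.
            response_times: mapping of server name -> last measured response time.
            forward: callable that proxies the request over a pooled server connection."""
            server = request["host"]                        # server the client wants a page from
            if response_times.get(server, 0.0) <= ACCEPTABLE_RESPONSE_TIME:
                return forward(server, request)             # acceptable: proxy via an open connection
            # Not acceptable: put the client on hold by redirecting it to the
            # on-hold server until the requested server's response time recovers.
            return {"status": 302, "headers": {"Location": ON_HOLD_URL}}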


    Systems and methods for automatic installation and execution of a client-side acceleration program
    63.
    Granted Patent (In Force)

    Publication No.: US07810089B2

    Publication Date: 2010-10-05

    Application No.: US11324203

    Filing Date: 2005-12-30

    IPC Classification: G06F9/445

    CPC Classification: G06F8/61, H04L67/02, H04L67/34

    Abstract: The present invention is directed towards systems and methods for dynamically deploying and executing an acceleration program on a client to improve the performance and delivery of remotely accessed applications. The acceleration program of the present invention is automatically installed and executed on a client in a manner that is transparent to, and seamless with, the operation of the client. An appliance may intercept a request of the client to establish a communication session or connection with a server, and transmit the acceleration program to the client. In some cases, the appliance determines whether the application being accessed by the client can be accelerated, and provides the acceleration program only if the application can be accelerated. Upon receipt of the acceleration program, the client automatically performs a silent installation of the acceleration program and executes it upon completion of the installation.
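
    A rough sketch of the two decisions in this abstract, appliance-side (send the program only for accelerable applications) and client-side (install silently, then run), might look like the following; the protocol list, installer flag, and executable name are hypothetical.

        # Hypothetical sketch: which applications get the acceleration program,
        # and how the client installs it silently. Names and flags are assumed.
        import subprocess

        ACCELERABLE_APPS = {"http", "cifs", "mapi"}          # assumed set of accelerable protocols

        def maybe_send_acceleration_program(intercepted_request, send_program):
            """Appliance side: transmit the program only if acceleration would help."""
            if intercepted_request["application"] in ACCELERABLE_APPS:
                send_program(intercepted_request["client"])  # deliver the acceleration program
                return True
            return False                                     # not accelerable: pass the request through

        def silent_install_and_run(installer_path):
            """Client side: install with no user interaction, then start the program."""
            subprocess.run([installer_path, "/quiet"], check=True)   # '/quiet' flag is illustrative
            subprocess.Popen(["client-accelerator"])                 # hypothetical executable name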


    Systems and methods for nTier cache redirection
    64.
    Granted Patent (In Force)

    Publication No.: US08996614B2

    Publication Date: 2015-03-31

    Application No.: US13369151

    Filing Date: 2012-02-08

    IPC Classification: G06F15/16, H04L29/06

    CPC Classification: H04L65/4076

    Abstract: The present disclosure describes systems and methods for load balancing multiple application delivery controllers (ADCs) arranged in multiple tiers. The upper tier comprises ADCs that load balance the plurality of ADCs of the lower tier. In order to appropriately share and maintain client IPs for transparent cache redirection scenarios, the transport layer (Transmission Control Protocol (TCP)) port range is split among the ADCs of the lower tier. Each lower-tier ADC then creates connections using only the source ports assigned to it. The response from the origin is sent to the upper-tier ADC, which looks at the destination port and forwards the packet to the correct lower-tier ADC. Hence, the ADCs at the two tiers work in conjunction to provide transparent cache redirection.
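
    The port-range split can be made concrete with a short sketch: each lower-tier ADC draws source ports from a disjoint slice of the range, so the upper-tier ADC can recover the owning ADC from a response's destination port. The ADC names and port boundaries below are assumptions, not the product's actual configuration.

        # Sketch of splitting the TCP source-port range across lower-tier ADCs.
        # ADC names and the port range boundaries are illustrative assumptions.
        LOWER_TIER_ADCS = ["adc-1", "adc-2", "adc-3", "adc-4"]
        PORT_RANGE = range(1024, 65536)
        SLICE = len(PORT_RANGE) // len(LOWER_TIER_ADCS)

        def assigned_ports(adc_index):
            """Source ports the given lower-tier ADC may use toward the origin server."""
            start = PORT_RANGE.start + adc_index * SLICE
            return range(start, start + SLICE)

        def route_response(destination_port):
            """Upper-tier ADC: map the response's destination port back to the
            lower-tier ADC that opened the connection to the origin."""
            index = (destination_port - PORT_RANGE.start) // SLICE
            return LOWER_TIER_ADCS[min(index, len(LOWER_TIER_ADCS) - 1)]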


    Apparatus, method and computer program product for efficiently pooling connections between clients and servers
    65.
    Granted Patent (In Force)

    Publication No.: US07801978B1

    Publication Date: 2010-09-21

    Application No.: US09690437

    Filing Date: 2000-10-18

    IPC Classification: G06F15/173

    Abstract: An apparatus, method and computer program product for efficiently pooling network client-server connections. The apparatus is implemented within an interface unit connecting a plurality of servers to the Internet, which is in turn connected to a plurality of clients. The method includes the steps of opening a connection between a first client and the interface unit; determining whether a connection between the interface unit and a server has finished being utilized by the first client; opening a connection between a second client and the interface unit; if no free connection is open between the interface unit and the server, allowing the second client to access information on the server via the same connection utilized by the first client, without waiting for the first client to initiate closing the connection; and delinking the connections between the first and second clients and the interface unit while keeping open the connection between the interface unit and the server.
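
    A minimal sketch of the pooling behaviour described here, under assumed names: the interface unit keeps server-side connections open and hands one to a second client as soon as the first client's exchange has finished, without waiting for the first client to close its own connection.

        # Minimal sketch of server-side connection reuse at the interface unit.
        # The class and callable names are assumptions for illustration.
        import threading

        class ServerConnectionPool:
            def __init__(self, open_server_connection):
                self._open = open_server_connection   # callable that opens a new connection to the server
                self._free = []                       # open server connections not currently in use
                self._lock = threading.Lock()

            def get(self):
                with self._lock:
                    if self._free:
                        return self._free.pop()       # second client reuses the first client's connection
                return self._open()                   # no free connection: open one

            def done(self, conn):
                """Called when a client's request/response exchange finishes; the
                client-side connection may close, but the server connection stays open."""
                with self._lock:
                    self._free.append(conn)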


    Apparatus, method and computer program product for efficiently pooling connections between clients and servers
    66.
    Granted Patent (In Force)

    Publication No.: US08631120B2

    Publication Date: 2014-01-14

    Application No.: US12855260

    Filing Date: 2010-08-12

    IPC Classification: G06F15/173

    Abstract: An apparatus, method and computer program product for efficiently pooling network client-server connections. The apparatus is implemented within an interface unit connecting a plurality of servers to the Internet, which is in turn connected to a plurality of clients. The method includes the steps of opening a connection between a first client and the interface unit; determining whether a connection between the interface unit and a server has finished being utilized by the first client; opening a connection between a second client and the interface unit; if no free connection is open between the interface unit and the server, allowing the second client to access information on the server via the same connection utilized by the first client, without waiting for the first client to initiate closing the connection; and delinking the connections between the first and second clients and the interface unit while keeping open the connection between the interface unit and the server.


    SYSTEMS AND METHODS FOR NTIER CACHE REDIRECTION
    67.
    Patent Application (In Force)

    Publication No.: US20120203825A1

    Publication Date: 2012-08-09

    Application No.: US13369151

    Filing Date: 2012-02-08

    IPC Classification: G06F15/16

    CPC Classification: H04L65/4076

    Abstract: The present disclosure describes systems and methods for load balancing multiple application delivery controllers (ADCs) arranged in multiple tiers. The upper tier comprises ADCs that load balance the plurality of ADCs of the lower tier. In order to appropriately share and maintain client IPs for transparent cache redirection scenarios, the transport layer (Transmission Control Protocol (TCP)) port range is split among the ADCs of the lower tier. Each lower-tier ADC then creates connections using only the source ports assigned to it. The response from the origin is sent to the upper-tier ADC, which looks at the destination port and forwards the packet to the correct lower-tier ADC. Hence, the ADCs at the two tiers work in conjunction to provide transparent cache redirection.


    Systems and methods for providing a multi-core architecture for an acceleration appliance
    68.
    Granted Patent (In Force)

    Publication No.: US08503459B2

    Publication Date: 2013-08-06

    Application No.: US12766324

    Filing Date: 2010-04-23

    IPC Classification: H04L12/56

    Abstract: The present solution relates to a method for distributing flows of network traffic across a plurality of packet processing engines, each executing on a corresponding core of a multi-core device. The method includes receiving, by a multi-core device intermediary to clients and servers, a packet of a first flow of network traffic between a client and a server. The method also includes assigning, by a flow distributor of the multi-core device, the first flow of network traffic to a first core executing a packet processing engine, and distributing the packet to that core. The flow distributor may distribute packets of a second flow of traffic between another client and server to a second core executing a second packet processing engine. When a packet of the flow of traffic assigned to the first core is received, such as a third packet, the flow distributor distributes this packet to the first core.
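
    A compact way to picture the flow distributor is a stable hash over the flow's addressing tuple, so every packet of the same client-server flow reaches the same core. The field choice and core count below are assumptions, not the appliance's actual algorithm.

        # Sketch of hash-based flow distribution: packets of one flow always map
        # to the same packet processing engine/core. Fields and core count assumed.
        import hashlib

        NUM_CORES = 4

        def core_for_flow(src_ip, src_port, dst_ip, dst_port):
            """Deterministically map a flow's 4-tuple to a core index."""
            key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
            digest = hashlib.sha1(key).digest()
            return int.from_bytes(digest[:4], "big") % NUM_CORES

        # The first and the third packet of the same flow land on the same core:
        assert core_for_flow("10.0.0.5", 40000, "10.0.1.9", 443) == \
               core_for_flow("10.0.0.5", 40000, "10.0.1.9", 443)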


    APPARATUS, METHOD AND COMPUTER PROGRAM PRODUCT FOR EFFICIENTLY POOLING CONNECTIONS BETWEEN CLIENTS AND SERVERS
    69.
    Patent Application (In Force)

    Publication No.: US20110066718A1

    Publication Date: 2011-03-17

    Application No.: US12855260

    Filing Date: 2010-08-12

    IPC Classification: G06F15/173, G06F15/16

    Abstract: An apparatus, method and computer program product for efficiently pooling network client-server connections. The apparatus is implemented within an interface unit connecting a plurality of servers to the Internet, which is in turn connected to a plurality of clients. The method includes the steps of opening a connection between a first client and the interface unit; determining whether a connection between the interface unit and a server has finished being utilized by the first client; opening a connection between a second client and the interface unit; if no free connection is open between the interface unit and the server, allowing the second client to access information on the server via the same connection utilized by the first client, without waiting for the first client to initiate closing the connection; and delinking the connections between the first and second clients and the interface unit while keeping open the connection between the interface unit and the server.


    SYSTEMS AND METHODS FOR PROVIDING A MULTI-CORE ARCHITECTURE FOR AN ACCELERATION APPLIANCE
    70.
    Patent Application (In Force)

    Publication No.: US20100284411A1

    Publication Date: 2010-11-11

    Application No.: US12766324

    Filing Date: 2010-04-23

    IPC Classification: H04L12/56

    Abstract: The present solution relates to a method for distributing flows of network traffic across a plurality of packet processing engines, each executing on a corresponding core of a multi-core device. The method includes receiving, by a multi-core device intermediary to clients and servers, a packet of a first flow of network traffic between a client and a server. The method also includes assigning, by a flow distributor of the multi-core device, the first flow of network traffic to a first core executing a packet processing engine, and distributing the packet to that core. The flow distributor may distribute packets of a second flow of traffic between another client and server to a second core executing a second packet processing engine. When a packet of the flow of traffic assigned to the first core is received, such as a third packet, the flow distributor distributes this packet to the first core.
