BYTE RANGE CACHING
    2.
    Invention Application (In Force)

    Publication No.: US20100318632A1

    Publication Date: 2010-12-16

    Application No.: US12485090

    Filing Date: 2009-06-16

    IPC Classes: G06F15/16 G06F12/08

    Abstract: A caching system segments content into multiple, individually cacheable chunks cached by a cache server that caches partial content and serves byte range requests with low latency and fewer duplicate requests to an origin server. The system receives a request from a client for a byte range of a content resource. The system determines the chunks overlapped by the specified byte range and sends a byte range request to the origin server for the overlapped chunks not already stored in a cache. The system stores the bytes of received responses as chunks in the cache and responds to the received request using the chunks stored in the cache. The system serves subsequent requests that overlap with previously requested ranges of bytes from the already retrieved chunks in the cache and makes requests to the origin server only for those chunks that a client has not previously requested.

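The chunk-aligned caching the abstract describes can be sketched in a few lines of Python. The chunk size, the in-memory cache, and `fetch_from_origin` are illustrative assumptions for this sketch, not details from the patent:

```python
CHUNK_SIZE = 4  # illustrative; a real cache would use much larger chunks

ORIGIN = bytes(range(32))     # stand-in for the content on the origin server
cache = {}                    # chunk index -> cached bytes
origin_requests = []          # record of byte ranges sent upstream

def fetch_from_origin(start, end):
    """Hypothetical byte-range request (inclusive bounds) to the origin."""
    origin_requests.append((start, end))
    return ORIGIN[start:end + 1]

def get_range(start, end):
    """Serve bytes [start, end], fetching only the chunks not yet cached."""
    first, last = start // CHUNK_SIZE, end // CHUNK_SIZE
    for idx in range(first, last + 1):          # chunks the range overlaps
        if idx not in cache:
            s = idx * CHUNK_SIZE
            cache[idx] = fetch_from_origin(s, s + CHUNK_SIZE - 1)
    data = b"".join(cache[i] for i in range(first, last + 1))
    offset = start - first * CHUNK_SIZE
    return data[offset:offset + (end - start + 1)]
```

With 4-byte chunks, a request for bytes 2-9 pulls chunks 0-2 from the origin; a later request for bytes 4-7 overlaps only already-cached chunks, so no new origin request is made.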

Proactive load balancing
    3.
    Invention Grant (In Force)

    Publication No.: US08073952B2

    Publication Date: 2011-12-06

    Application No.: US12427774

    Filing Date: 2009-04-22

    IPC Classes: G06F15/173

    CPC Classes: H04L67/1008 H04L67/1002

    Abstract: A load balancing system is described herein that proactively balances client requests among multiple destination servers using information about anticipated loads or events on each destination server to inform the load balancing decision. The system detects one or more upcoming events that will affect the performance and/or capacity for handling requests of a destination server. Upon detecting the event, the system informs the load balancer to drain connections around the time of the event. Next, the event occurs on the destination server, and the system detects when the event is complete. In response, the system informs the load balancer to restore connections to the destination server. In this way, the system is able to redirect clients to other available destination servers before the tasks occur. Thus, the load balancing system provides more efficient routing of client requests and improves responsiveness.

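The drain/restore cycle in the abstract can be sketched as a simple round-robin balancer whose pool shrinks ahead of a known event and grows back afterwards. The class and method names are illustrative assumptions, not from the patent:

```python
class ProactiveBalancer:
    """Drains a server ahead of a known event and restores it afterwards."""

    def __init__(self, servers):
        self.active = list(servers)   # servers currently accepting connections
        self.drained = []             # servers taken out ahead of an event
        self._rr = 0                  # round-robin cursor

    def route(self):
        # Round-robin only among servers currently accepting connections.
        server = self.active[self._rr % len(self.active)]
        self._rr += 1
        return server

    def on_upcoming_event(self, server):
        # Drain connections around the time of the event: stop routing
        # new requests to the server before the event occurs.
        if server in self.active:
            self.active.remove(server)
            self.drained.append(server)

    def on_event_complete(self, server):
        # Restore connections once the event on the server is done.
        if server in self.drained:
            self.drained.remove(server)
            self.active.append(server)
```

Clients are redirected to the remaining servers while the event runs, matching the abstract's redirect-before-the-task-occurs behaviour.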

Network caching for multiple contemporaneous requests
    4.
    Invention Grant (In Force)

    Publication No.: US08046432B2

    Publication Date: 2011-10-25

    Application No.: US12425395

    Filing Date: 2009-04-17

    IPC Classes: G06F15/16 G06F15/167

    Abstract: A live caching system is described herein that reduces the burden on origin servers for serving live content. In response to receiving a first request that results in a cache miss, the system forwards the first request to the next tier while “holding” other requests for the same content. If the system receives a second request while the first request is pending, the system will recognize that a similar request is outstanding and hold the second request by not forwarding the request to the origin server. After the response to the first request arrives from the next tier, the system shares the response with other held requests. Thus, the live caching system allows a content provider to prepare for very large events by adding more cache hardware and building out a cache server network rather than by increasing the capacity of the origin server.

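The request-holding behaviour the abstract describes (often called request coalescing) can be sketched with a per-key event: the first miss goes to the next tier, concurrent requests for the same key are held, and all of them share the single response. `fetch_from_next_tier` is an assumed placeholder, not an API from the patent:

```python
import threading

cache = {}       # key -> cached response
pending = {}     # key -> Event signalled when the upstream response arrives
lock = threading.Lock()

def get(key, fetch_from_next_tier):
    with lock:
        if key in cache:
            return cache[key]
        if key in pending:
            event = pending[key]       # a similar request is outstanding
            leader = False
        else:
            event = pending[key] = threading.Event()
            leader = True
    if leader:
        response = fetch_from_next_tier(key)   # the one upstream request
        with lock:
            cache[key] = response
            del pending[key]
        event.set()                            # release the held requests
        return response
    event.wait()                               # held until the leader returns
    return cache[key]
```

However many clients ask for the same content at once, the next tier sees exactly one request per key, which is what lets the cache network absorb large live events.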

PROACTIVE LOAD BALANCING
    5.
    Invention Application (In Force)

    Publication No.: US20100274885A1

    Publication Date: 2010-10-28

    Application No.: US12427774

    Filing Date: 2009-04-22

    IPC Classes: G06F15/16 G06F15/173

    CPC Classes: H04L67/1008 H04L67/1002

    Abstract: A load balancing system is described herein that proactively balances client requests among multiple destination servers using information about anticipated loads or events on each destination server to inform the load balancing decision. The system detects one or more upcoming events that will affect the performance and/or capacity for handling requests of a destination server. Upon detecting the event, the system informs the load balancer to drain connections around the time of the event. Next, the event occurs on the destination server, and the system detects when the event is complete. In response, the system informs the load balancer to restore connections to the destination server. In this way, the system is able to redirect clients to other available destination servers before the tasks occur. Thus, the load balancing system provides more efficient routing of client requests and improves responsiveness.


NETWORK CACHING FOR MULTIPLE CONTEMPORANEOUS REQUESTS
    6.
    Invention Application (In Force)

    Publication No.: US20100268789A1

    Publication Date: 2010-10-21

    Application No.: US12425395

    Filing Date: 2009-04-17

    IPC Classes: G06F15/167

    Abstract: A live caching system is described herein that reduces the burden on origin servers for serving live content. In response to receiving a first request that results in a cache miss, the system forwards the first request to the next tier while “holding” other requests for the same content. If the system receives a second request while the first request is pending, the system will recognize that a similar request is outstanding and hold the second request by not forwarding the request to the origin server. After the response to the first request arrives from the next tier, the system shares the response with other held requests. Thus, the live caching system allows a content provider to prepare for very large events by adding more cache hardware and building out a cache server network rather than by increasing the capacity of the origin server.


MODULAR EXTERNAL INFUSION DEVICE
    10.
    Invention Application (In Force)

    Publication No.: US20120101474A1

    Publication Date: 2012-04-26

    Application No.: US13339316

    Filing Date: 2011-12-28

    IPC Classes: A61M5/168

    Abstract: A modular external infusion device that controls the rate a fluid is infused into an individual's body, which includes a first module and a second module. More particularly, the first module may be a pumping module that delivers a fluid, such as a medication, to a patient while the second module may be a programming module that allows a user to select pump flow commands. The second module is removably attachable to the first module.
