Method and system for purging content from a content delivery network
    2.
    Invention grant (in force)

    Publication No.: US08266305B2

    Publication date: 2012-09-11

    Application No.: US11522557

    Filing date: 2006-09-18

    IPC class: G06F15/16

    Abstract: A content file purge mechanism for a content delivery network (CDN) is described. A Web-enabled portal is used by CDN customers to enter purge requests securely. A purge request identifies one or more content files to be purged. The purge request is pushed over a secure link from the portal to a purge server, which validates purge requests from multiple CDN customers and batches the requests into an aggregate purge request. The aggregate purge request is pushed from the purge server to a set of staging servers. Periodically, CDN content servers poll the staging servers to determine whether an aggregate purge request exists. If so, the CDN content servers obtain the aggregate purge request and process the request to remove the identified content files from their local storage.
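    The pipeline in the abstract (portal to purge server, batching into an aggregate request, push to staging servers, periodic polling by content servers) can be sketched roughly as below. All class and method names are illustrative assumptions, not taken from the patent.

    ```python
    # Illustrative sketch of the purge pipeline; validation and the
    # secure transport links are simplified away.

    class PurgeServer:
        """Validates per-customer purge requests and batches them."""
        def __init__(self):
            self.pending = []

        def submit(self, customer_id, urls):
            # Validation is reduced to a non-empty check in this sketch.
            if not urls:
                raise ValueError("empty purge request")
            self.pending.append((customer_id, list(urls)))

        def aggregate(self):
            # Batch all pending requests into one aggregate purge request.
            batch = [u for _, urls in self.pending for u in urls]
            self.pending.clear()
            return batch

    class StagingServer:
        def __init__(self):
            self.current_batch = None

        def push(self, batch):
            self.current_batch = batch

    class ContentServer:
        def __init__(self):
            self.local_store = {"/a.html": b"...", "/b.css": b"..."}

        def poll(self, staging):
            # Periodic poll: fetch the aggregate request, if any, and purge.
            batch = staging.current_batch
            if batch:
                for url in batch:
                    self.local_store.pop(url, None)

    # Usage: one customer purges /a.html; the edge server drops it on poll.
    ps, ss, cs = PurgeServer(), StagingServer(), ContentServer()
    ps.submit("cust-1", ["/a.html"])
    ss.push(ps.aggregate())
    cs.poll(ss)
    print(sorted(cs.local_store))  # ['/b.css']
    ```

    The pull model in the last step matters: content servers fetch the aggregate request on their own schedule, so the purge server never needs to track every edge machine.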

    Highly scalable, fault tolerant file transport using vector exchange
    3.
    Invention grant (in force)

    Publication No.: US07958249B2

    Publication date: 2011-06-07

    Application No.: US12848293

    Filing date: 2010-08-02

    IPC class: G06F15/16 G06F17/30

    CPC class: G06F17/30067

    Abstract: A file transport mechanism according to the invention is responsible for accepting, storing and distributing files, such as configuration or control files, to a large number of field machines. The mechanism comprises a set of servers that accept, store and maintain submitted files. The file transport mechanism implements a distributed agreement protocol based on “vector exchange.” A vector exchange is a knowledge-based algorithm that works by passing a commitment bit vector around to potential participants. A participant that observes a quorum of commit bits in a vector assumes agreement. Servers use vector exchange to achieve consensus on file submissions. Once a server learns of an agreement, it persistently marks the request as “agreed” in a local data store. Once the submission is agreed, the server can stage the new file for download.
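    The vector-exchange idea, as the abstract describes it, reduces to merging bit vectors and checking for a quorum. A toy sketch under that reading follows; the class names, quorum size, and gossip pattern are assumptions for illustration, not details from the patent.

    ```python
    # Toy sketch of vector exchange: each server sets its own commit bit
    # and gossips the vector; a server that observes a quorum of commit
    # bits assumes agreement.

    QUORUM = 3  # e.g. a majority of 5 servers (assumed value)

    class Server:
        def __init__(self, index, total):
            self.index = index
            self.vector = [0] * total   # commitment bit vector
            self.agreed = False

        def commit(self):
            # Commit to the submission by setting our own bit.
            self.vector[self.index] = 1

        def receive(self, other_vector):
            # Merge knowledge: bitwise OR of the two vectors.
            self.vector = [a | b for a, b in zip(self.vector, other_vector)]
            if sum(self.vector) >= QUORUM:
                # Quorum observed: mark the request "agreed"
                # (the persistent local data store is elided here).
                self.agreed = True

    servers = [Server(i, 5) for i in range(5)]
    for s in servers[:3]:
        s.commit()
    # Gossip the vectors around; server 0 eventually sees three bits.
    servers[0].receive(servers[1].vector)
    servers[0].receive(servers[2].vector)
    print(servers[0].agreed)  # True
    ```

    Because the merge is a monotone OR, knowledge only grows as vectors circulate, which is what lets a participant safely conclude agreement from a quorum of bits it has observed.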

    Forward request queuing in a distributed edge processing environment
    4.
    Invention grant (in force)

    Publication No.: US08423662B1

    Publication date: 2013-04-16

    Application No.: US10833449

    Filing date: 2004-04-28

    IPC class: G06F15/173

    Abstract: An edge server in a distributed processing environment includes at least one process that manages incoming client requests and selectively forwards given service requests to other servers in the distributed network. According to the invention, the edge server includes storage (e.g., disk and/or memory) in which at least one forwarding queue is established. The server includes code for aggregating service requests in the forwarding queue and then selectively releasing the service requests, or some of them, to another server. The forward request queuing mechanism preferably is managed by metadata, which, for example, controls how many service requests may be placed in the queue, how long a given service request may remain in the queue, what action to take in response to a client request if the forwarding queue's capacity is reached, and the like. In one embodiment, the server generates an estimate of the current load on an origin server (to which it is sending forwarding requests) and instantiates forward request queuing when that load is reached.
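    A metadata-controlled forwarding queue of the kind described above might look like the following minimal sketch. The metadata keys (`max_size`, `max_age`, `overflow_action`) and all other names are hypothetical; the patent does not define a concrete schema.

    ```python
    import time
    from collections import deque

    # Assumed metadata schema controlling queue capacity, entry age,
    # and the action taken when capacity is reached.
    METADATA = {"max_size": 2, "max_age": 30.0, "overflow_action": "reject"}

    class ForwardQueue:
        def __init__(self, metadata):
            self.meta = metadata
            self.queue = deque()  # (enqueue_time, request) pairs

        def enqueue(self, request):
            self._expire()
            if len(self.queue) >= self.meta["max_size"]:
                # Capacity reached: take the metadata-configured action.
                if self.meta["overflow_action"] == "reject":
                    return False
            self.queue.append((time.time(), request))
            return True

        def _expire(self):
            # Drop requests that have waited longer than max_age.
            cutoff = time.time() - self.meta["max_age"]
            while self.queue and self.queue[0][0] < cutoff:
                self.queue.popleft()

        def release(self, n):
            # Selectively release up to n queued requests toward the origin.
            self._expire()
            count = min(n, len(self.queue))
            return [self.queue.popleft()[1] for _ in range(count)]

    q = ForwardQueue(METADATA)
    print(q.enqueue("GET /1"), q.enqueue("GET /2"), q.enqueue("GET /3"))
    # True True False  (third request rejected at capacity)
    print(q.release(2))  # ['GET /1', 'GET /2']
    ```

    In a fuller version, `release` would be driven by the origin-load estimate the abstract mentions, draining the queue only while the origin is below its threshold.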

    Highly scalable, fault tolerant file transport using vector exchange
    5.
    Invention application (in force)

    Publication No.: US20100293229A1

    Publication date: 2010-11-18

    Application No.: US12848293

    Filing date: 2010-08-02

    IPC class: G06F15/16

    CPC class: G06F17/30067

    Abstract: A file transport mechanism according to the invention is responsible for accepting, storing and distributing files, such as configuration or control files, to a large number of field machines. The mechanism comprises a set of servers that accept, store and maintain submitted files. The file transport mechanism implements a distributed agreement protocol based on “vector exchange.” A vector exchange is a knowledge-based algorithm that works by passing a commitment bit vector around to potential participants. A participant that observes a quorum of commit bits in a vector assumes agreement. Servers use vector exchange to achieve consensus on file submissions. Once a server learns of an agreement, it persistently marks the request as “agreed” in a local data store. Once the submission is agreed, the server can stage the new file for download.

    Method of load balancing edge-enabled applications in a content delivery network (CDN)
    6.
    Invention grant (in force)

    Publication No.: US07660896B1

    Publication date: 2010-02-09

    Application No.: US10823871

    Filing date: 2004-04-14

    IPC class: G06F15/173

    CPC class: G06F9/505 G06F9/5083

    Abstract: A method and system of load balancing application server resources operating in a distributed set of servers is described. In a representative embodiment, the set of servers comprises a region of a content delivery network. Each server in the set typically includes a server manager process and an application server on which edge-enabled applications or application components are executed. As service requests are directed to servers in the region, the application servers manage the requests in a load-balanced manner, without any requirement that a particular application server be spawned on demand.
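    The key point in the abstract is that requests are balanced across application servers that are already running, instead of spawning one per request. A minimal sketch of that idea, using an assumed least-connections policy (the patent does not name a specific policy) and hypothetical class names:

    ```python
    # Region-level dispatch among already-running application servers.

    class AppServer:
        def __init__(self, name):
            self.name = name
            self.active = 0   # currently executing application requests

        def handle(self, request):
            self.active += 1  # decremented on completion (elided here)

    class ServerManager:
        """Per-edge-server manager process that dispatches to app servers."""
        def __init__(self, app_servers):
            self.app_servers = app_servers

        def dispatch(self, request):
            # Least-connections choice keeps the region load-balanced
            # without spawning a new application server on demand.
            target = min(self.app_servers, key=lambda s: s.active)
            target.handle(request)
            return target.name

    region = ServerManager([AppServer("app-1"), AppServer("app-2")])
    print([region.dispatch(r) for r in ("r1", "r2", "r3")])
    # ['app-1', 'app-2', 'app-1']
    ```

    Avoiding on-demand spawning matters at the edge: process startup cost would otherwise be paid on the request path.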

    High frequency sampling of processor performance counters
    10.
    Invention grant (expired)

    Publication No.: US5796939A

    Publication date: 1998-08-18

    Application No.: US812899

    Filing date: 1997-03-10

    Abstract: In a computer system, an apparatus is configured to collect performance data of a computer system that includes a plurality of processors for concurrently executing instructions of a program. A plurality of performance counters are coupled to each processor. The performance counters store performance data generated by each processor while executing the instructions. An interrupt handler executes on each processor and samples the performance data of the processor in response to interrupts. A first memory includes a hash table associated with each interrupt handler; the hash table stores the performance data sampled by the interrupt handler executing on the processor. A second memory includes an overflow buffer, which stores the performance data while portions of the hash tables are active or full. A third memory includes a user buffer, and means are provided for periodically flushing the performance data from the hash tables and the overflow buffer to the user buffer.
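    The three-tier buffering in the abstract (per-processor hash table, shared overflow buffer, user buffer) can be simulated as below. Interrupt delivery, the hash function, and all names are simplifications for illustration, not details of the patented apparatus.

    ```python
    # Sketch of interrupt-driven sampling with a bounded hash table,
    # an overflow buffer for spills, and a periodically flushed user buffer.

    class SamplingBuffers:
        def __init__(self, table_slots=4):
            self.table_slots = table_slots
            self.hash_table = {}       # first memory: per-CPU hash table
            self.overflow = []         # second memory: overflow buffer
            self.user_buffer = []      # third memory: user-space buffer

        def on_interrupt(self, pc, counter_value):
            # Interrupt handler: aggregate samples keyed by program counter.
            if pc in self.hash_table:
                self.hash_table[pc] += counter_value
            elif len(self.hash_table) < self.table_slots:
                self.hash_table[pc] = counter_value
            else:
                # Hash table full: spill the raw sample to the overflow buffer.
                self.overflow.append((pc, counter_value))

        def flush(self):
            # Periodic flush of both the hash table and the overflow buffer
            # into the user buffer.
            self.user_buffer.extend(self.hash_table.items())
            self.user_buffer.extend(self.overflow)
            self.hash_table.clear()
            self.overflow.clear()

    buf = SamplingBuffers(table_slots=1)
    buf.on_interrupt(0x400, 10)
    buf.on_interrupt(0x400, 5)    # aggregated in the hash table
    buf.on_interrupt(0x500, 7)    # table full -> overflow buffer
    buf.flush()
    print(buf.user_buffer)  # [(1024, 15), (1280, 7)]
    ```

    Aggregating repeated program counters in the hash table is what makes high-frequency sampling affordable: most samples update a counter in place instead of allocating a new record.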