Abstract:
A content file purge mechanism for a content delivery network (CDN) is described. A Web-enabled portal is used by CDN customers to enter purge requests securely. A purge request identifies one or more content files to be purged. The purge request is pushed over a secure link from the portal to a purge server, which validates purge requests from multiple CDN customers and batches the requests into an aggregate purge request. The aggregate purge request is pushed from the purge server to a set of staging servers. Periodically, CDN content servers poll the staging servers to determine whether an aggregate purge request exists. If so, the CDN content servers obtain the aggregate purge request and process the request to remove the identified content files from their local storage.
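The flow described above can be illustrated with a minimal Python sketch. The class and method names (PurgeServer, StagingServer, ContentServer) are assumptions for illustration; the patent does not specify an implementation.

```python
# Minimal sketch of the purge flow described above. All names are
# illustrative assumptions, not taken from the patent.

class PurgeServer:
    def __init__(self):
        self.pending = []          # validated purge requests awaiting batching

    def submit(self, customer_id, urls, valid_customers):
        # Validate the request before accepting it (assumed check).
        if customer_id not in valid_customers:
            raise PermissionError(f"unknown customer {customer_id}")
        self.pending.append({"customer": customer_id, "urls": urls})

    def build_aggregate(self):
        # Batch all pending requests into one aggregate purge request.
        aggregate = {"urls": [u for req in self.pending for u in req["urls"]]}
        self.pending.clear()
        return aggregate


class StagingServer:
    def __init__(self):
        self.aggregate = None      # latest aggregate purge request, if any

    def push(self, aggregate):
        self.aggregate = aggregate

    def poll(self):
        return self.aggregate


class ContentServer:
    def __init__(self):
        self.cache = {"/a.html": b"...", "/b.jpg": b"..."}

    def poll_and_purge(self, staging):
        # Called periodically; removes purged files from local storage.
        aggregate = staging.poll()
        if aggregate:
            for url in aggregate["urls"]:
                self.cache.pop(url, None)


# Usage: one purge request flows from portal to content server.
purge_srv, staging, edge = PurgeServer(), StagingServer(), ContentServer()
purge_srv.submit("cust-1", ["/a.html"], valid_customers={"cust-1"})
staging.push(purge_srv.build_aggregate())
edge.poll_and_purge(staging)
assert "/a.html" not in edge.cache
```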
Abstract:
A file transport mechanism according to the invention is responsible for accepting, storing and distributing files, such as configuration or control files, to a large number of field machines. The mechanism comprises a set of servers that accept, store and maintain submitted files. The file transport mechanism implements a distributed agreement protocol based on “vector exchange.” A vector exchange is a knowledge-based algorithm that works by passing a commitment bit vector around to potential participants. A participant that observes a quorum of commit bits in a vector assumes agreement. Servers use vector exchange to achieve consensus on file submissions. Once a server learns of an agreement, it persistently marks (in a local data store) the request as “agreed.” Once the submission is agreed, the server can stage the new file for download.
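A single round of the vector-exchange idea might look like the following sketch, assuming N participants and a simple majority quorum; the participant count, quorum rule, and circulation order are illustrative assumptions.

```python
# Minimal sketch of a "vector exchange" style agreement round.
# N, the quorum rule, and the pass order are illustrative assumptions.

N = 5
QUORUM = N // 2 + 1

def vector_exchange(willing):
    """willing[i] is True if participant i will commit to the submission.

    Each participant in turn receives the vector, sets its own commit
    bit if willing, and passes the vector on. A participant that
    observes >= QUORUM commit bits assumes agreement. In practice the
    vector would keep circulating until every participant has seen the
    quorum; one pass is shown here for brevity.
    """
    vector = [False] * N
    agreed = set()
    for i in range(N):
        if willing[i]:
            vector[i] = True           # set own commitment bit
        if sum(vector) >= QUORUM:      # quorum observed -> agreement
            agreed.add(i)              # would be persisted as "agreed"
    return vector, agreed

vector, agreed = vector_exchange([True, True, False, True, True])
print(vector, agreed)   # participants 3 and 4 observe the quorum form
```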
Abstract:
An edge server in a distributed processing environment includes at least one process that manages incoming client requests and selectively forwards given service requests to other servers in the distributed network. According to the invention, the edge server includes storage (e.g., disk and/or memory) in which at least one forwarding queue is established. The server includes code for aggregating service requests in the forwarding queue and then selectively releasing the service requests, or some of them, to another server. The forward request queuing mechanism preferably is managed by metadata, which, for example, controls how many service requests may be placed in the queue, how long a given service request may remain in the queue, what action to take in response to a client request if the forwarding queue's capacity is reached, and the like. In one embodiment, the server generates an estimate of a current load on an origin server (to which it is sending forwarding requests) and instantiates the forward request queuing when a given load is reached.
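A metadata-controlled forwarding queue along these lines might look like the sketch below, assuming three illustrative metadata knobs (maximum queue depth, maximum time in queue, and an action when capacity is reached); the names and defaults are assumptions.

```python
# Minimal sketch of a metadata-controlled forward request queue.
# The metadata keys and their values are illustrative assumptions.

import time
from collections import deque

METADATA = {
    "max_depth": 100,        # how many service requests may be queued
    "max_age_s": 30.0,       # how long a request may remain in the queue
    "on_full": "reject",     # action when capacity is reached
}

class ForwardQueue:
    def __init__(self, metadata):
        self.meta = metadata
        self.queue = deque()

    def enqueue(self, request):
        self._expire()
        if len(self.queue) >= self.meta["max_depth"]:
            if self.meta["on_full"] == "reject":
                return False          # e.g., answer the client with an error
            self.queue.popleft()      # alternative action: drop the oldest

        self.queue.append((time.monotonic(), request))
        return True

    def release(self, n):
        # Selectively release up to n queued requests to the origin.
        self._expire()
        return [self.queue.popleft()[1] for _ in range(min(n, len(self.queue)))]

    def _expire(self):
        # Drop requests that exceeded their allowed time in the queue.
        now = time.monotonic()
        while self.queue and now - self.queue[0][0] > self.meta["max_age_s"]:
            self.queue.popleft()

q = ForwardQueue(METADATA)
q.enqueue({"path": "/api/order", "method": "POST"})
print(q.release(10))    # forwards the queued request to the origin
```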
Abstract:
A method and system of load balancing application server resources operating in a distributed set of servers is described. In a representative embodiment, the set of servers comprises a region of a content delivery network. Each server in the set typically includes a server manager process and an application server on which edge-enabled applications or application components are executed. As service requests are directed to servers in the region, the application servers manage the requests in a load-balanced manner, and without any requirement that a particular application server be spawned on-demand.
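The dispatch behavior might be sketched as follows, assuming each application server exposes a simple load figure and is already running (nothing is spawned on-demand); names such as Region and AppServer are illustrative.

```python
# Minimal sketch of load-balanced dispatch to long-running application
# servers within a region. All names are illustrative assumptions.

class AppServer:
    def __init__(self, name):
        self.name = name
        self.active = 0            # requests currently being handled

    def handle(self, request):
        # The server is already running; nothing is spawned on demand.
        self.active += 1
        return f"{self.name} handled {request}"


class Region:
    """A region of the CDN: a set of servers sharing incoming requests."""
    def __init__(self, servers):
        self.servers = servers

    def dispatch(self, request):
        # Pick the least-loaded application server in the region.
        target = min(self.servers, key=lambda s: s.active)
        return target.handle(request)


region = Region([AppServer("edge-1"), AppServer("edge-2")])
print(region.dispatch("GET /app"))
print(region.dispatch("GET /app"))   # goes to the other, less loaded, server
```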
Abstract:
Business applications running on a content delivery network (CDN) having a distributed application framework can create, access and modify state for each client. Over time, a single client may desire to access a given application on different CDN edge servers within the same region and even across different regions. Each time, the application may need to access the latest “state” of the client even if the state was last modified by an application on a different server. A difficulty arises when a process or a machine that last modified the state dies or is temporarily or permanently unavailable. The present invention provides techniques for migrating session state data across CDN servers in a manner transparent to the user. A distributed application thus can access a latest “state” of a client even if the state was last modified by an application instance executing on a different CDN server, including a nearby (in-region) or a remote (out-of-region) server.
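A transparent state lookup along these lines might look like the following sketch, assuming each server keeps a versioned local session store and that local, in-region, and out-of-region servers can be queried in that order; the version-based conflict rule is an assumption.

```python
# Minimal sketch of transparent session-state lookup across servers.
# The versioned store and query order are illustrative assumptions.

class EdgeServer:
    def __init__(self, name):
        self.name = name
        self.sessions = {}    # session_id -> (version, state)

    def save(self, sid, version, state):
        self.sessions[sid] = (version, state)

    def lookup(self, sid):
        return self.sessions.get(sid)


def latest_state(sid, local, in_region, out_of_region):
    """Return the freshest copy of the session state visible anywhere,
    preferring local, then in-region, then out-of-region servers."""
    candidates = []
    for server in [local, *in_region, *out_of_region]:
        found = server.lookup(sid)
        if found:
            candidates.append(found)
    if not candidates:
        return None
    version, state = max(candidates, key=lambda vc: vc[0])  # highest version wins
    local.save(sid, version, state)    # migrate the state to this server
    return state


a, b, c = EdgeServer("a"), EdgeServer("b"), EdgeServer("c")
b.save("sess-42", 3, {"cart": ["book"]})          # last modified elsewhere
print(latest_state("sess-42", a, [b], [c]))       # a now holds version 3
```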
Abstract:
An application deployment model for enterprise applications to enable such applications to be deployed to and executed from a globally distributed computing platform, such as an Internet content delivery network (CDN). According to the invention, application developers separate their Web application into two layers: a highly distributed edge layer and a centralized origin layer. In a representative embodiment, the edge layer supports a servlet container that executes a Web tier, typically the presentation layer of a given Java-based application. Where necessary, the edge layer communicates with code running on an origin server to respond to a given request. In an alternative embodiment, the edge layer supports a more fully-provisioned application server that executes both Web tier (e.g., presentation) and Enterprise tier application (e.g., business logic) components. In either case, the inventive framework enables one or more different applications to be deployed to and executed from the edge server on behalf of one or more respective entities.
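The two-layer split can be illustrated with a minimal sketch in which an edge handler serves the presentation tier locally and calls back to the origin only for business logic; the routing rule and function names are illustrative assumptions (the embodiment described is Java/servlet-based, but the idea is language-independent).

```python
# Minimal sketch of the edge/origin split described above.
# Routes and function names are illustrative assumptions.

def origin_business_logic(request):
    # Centralized origin layer: business logic, data access, etc.
    return {"order_id": 1234, "status": "confirmed"}

def edge_handler(request):
    # Highly distributed edge layer: the presentation tier runs here.
    if request["path"] == "/checkout":
        data = origin_business_logic(request)     # go back to the origin
        return f"<h1>Order {data['order_id']}: {data['status']}</h1>"
    return "<h1>Welcome</h1>"                     # served entirely at the edge

print(edge_handler({"path": "/"}))
print(edge_handler({"path": "/checkout"}))
```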
Abstract:
A method estimates statistics of properties of transactions processed by a memory sub-system of a computer system. The method randomly selects memory transactions processed by the memory sub-system. States of the system are recorded as samples while the selected transactions are processed by the memory sub-system. The recorded states from a subset of the selected transactions are statistically analyzed to estimate statistics of the memory transactions.
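The sampling method might be sketched as follows, assuming a 1-in-K random selection rule and using transaction latency as a stand-in for the recorded system state; both are illustrative assumptions.

```python
# Minimal sketch of random transaction sampling and estimation.
# The transaction model and 1-in-K rate are illustrative assumptions.

import random
import statistics

K = 100   # select roughly one transaction in K at random

def process_transactions(transactions):
    samples = []
    for tx in transactions:
        if random.randrange(K) == 0:            # random selection
            # Record the system state while this transaction is
            # processed; here only its latency, as a stand-in.
            samples.append(tx["latency_ns"])
    return samples

# Synthetic workload: 100k transactions with varying latencies.
workload = [{"latency_ns": random.gauss(120, 25)} for _ in range(100_000)]
samples = process_transactions(workload)

# Statistically analyze the recorded samples to estimate statistics
# of the full transaction stream (here, mean and standard deviation).
print(f"estimated mean latency: {statistics.mean(samples):.1f} ns")
print(f"estimated stdev:        {statistics.stdev(samples):.1f} ns")
```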
Abstract:
An apparatus is configured to collect performance data of a computer system that includes a plurality of processors concurrently executing instructions of a program. A plurality of performance counters are coupled to each processor. The performance counters store performance data generated by each processor while executing the instructions. An interrupt handler executes on each processor and samples the performance data of that processor in response to interrupts. A first memory includes a hash table associated with each interrupt handler; the hash table stores the performance data sampled by the interrupt handler executing on the processor. A second memory includes an overflow buffer that stores the performance data while portions of the hash tables are active or full. A third memory includes a user buffer, and means are provided for periodically flushing the performance data from the hash tables and the overflow buffer to the user buffer.
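The three-level buffering scheme might be sketched as follows, with a deliberately tiny per-processor hash table so the overflow path is exercised; capacities, keys, and field names are illustrative assumptions.

```python
# Minimal sketch of the hash-table / overflow-buffer / user-buffer
# pipeline. Capacity and the (pc, event) key are illustrative assumptions.

HASH_CAPACITY = 4      # deliberately tiny so the overflow path is exercised

class PerCpuCollector:
    def __init__(self):
        self.hash_table = {}   # (pc, event) -> count, filled by the handler
        self.overflow = []     # holds samples while the hash table is full

    def on_interrupt(self, pc, event):
        # Sample the performance data for this processor.
        key = (pc, event)
        if key in self.hash_table or len(self.hash_table) < HASH_CAPACITY:
            self.hash_table[key] = self.hash_table.get(key, 0) + 1
        else:
            self.overflow.append(key)   # hash table full: spill to overflow

    def flush(self, user_buffer):
        # Periodically move everything to the user buffer and reset.
        user_buffer.extend(self.hash_table.items())
        user_buffer.extend((key, 1) for key in self.overflow)
        self.hash_table.clear()
        self.overflow.clear()


user_buffer = []
cpu0 = PerCpuCollector()
for pc in [0x400, 0x404, 0x408, 0x40c, 0x410, 0x400]:   # 5 distinct pcs
    cpu0.on_interrupt(pc, "cycles")
cpu0.flush(user_buffer)
print(user_buffer)     # the 0x410 sample arrived via the overflow buffer
```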