Abstract:
Content is dynamically assembled at the edge of the Internet, preferably on content delivery network (CDN) edge servers. A content provider uses an “edge side include” (ESI) markup language to define Web page fragments for dynamic assembly at the edge. Dynamic assembly improves site performance by caching the objects that make up dynamically generated pages at the edge of the Internet, close to the end user. Instead of the page being assembled by an application/web server in a centralized data center, the application/web server sends a page template and content fragments to a CDN edge server, where the page is assembled. Each content fragment can have its own cacheability profile to manage the “freshness” of the content. When a user requests a page, the edge server examines its cache for the included fragments and assembles the page on the fly.
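As a rough illustration of the assembly step described above (not taken from the patent), the following Python sketch expands <esi:include src="..."/> tags in a page template from a local fragment cache, refetching a fragment from the origin when its time-to-live has lapsed; the function names, the fixed TTL, and the simplified tag handling are all assumptions.

    import re, time

    fragment_cache = {}          # illustrative fragment cache: src URL -> (body, expiry timestamp)

    def fetch_from_origin(src):
        return f"<div>fragment from {src}</div>"      # stand-in for an HTTP fetch to the origin server

    def fetch_fragment(src, ttl):
        """Return a fragment body, refetching from the origin once its TTL has lapsed."""
        body, expires = fragment_cache.get(src, (None, 0.0))
        if body is None or time.time() >= expires:
            body = fetch_from_origin(src)
            fragment_cache[src] = (body, time.time() + ttl)
        return body

    def assemble(template, default_ttl=60):
        """Replace each <esi:include src="..."/> tag in the template with its cached fragment."""
        return re.sub(r'<esi:include\s+src="([^"]+)"\s*/>',
                      lambda m: fetch_fragment(m.group(1), default_ttl),
                      template)

    page = assemble('<html><body><esi:include src="/fragments/headline"/></body></html>')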
Abstract:
Business applications running on a content delivery network (CDN) having a distributed application framework can create, access and modify state for each client. Over time, a single client may desire to access a given application on different CDN edge servers within the same region and even across different regions. Each time, the application may need to access the latest “state” of the client even if the state was last modified by an application on a different server. A difficulty arises when a process or a machine that last modified the state dies or is temporarily or permanently unavailable. The present invention provides techniques for migrating session state data across CDN servers in a manner transparent to the user. A distributed application thus can access a latest “state” of a client even if the state was last modified by an application instance executing on a different CDN server, including a nearby (in-region) or a remote (out-of-region) server.
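A minimal sketch of the state lookup this abstract implies, written under assumptions not stated in the patent: the application instance checks its local store first, then in-region peers, then out-of-region peers, skips unavailable servers, and keeps whichever copy carries the highest version. All names, the peer lookup call, and the version-counter convention are hypothetical.

    def resolve_session_state(session_id, local_store, in_region_peers, out_of_region_peers):
        """Return the freshest copy of a client's session state, wherever it was last modified."""
        candidates = []

        local = local_store.get(session_id)            # state last written on this server, if any
        if local is not None:
            candidates.append(local)

        # Ask nearby (in-region) servers first, then remote regions; unreachable peers are skipped,
        # which covers the case where the server that last modified the state has died.
        for peer in list(in_region_peers) + list(out_of_region_peers):
            try:
                remote = peer.get(session_id)          # hypothetical peer lookup call
            except ConnectionError:
                continue
            if remote is not None:
                candidates.append(remote)

        if not candidates:
            return None
        latest = max(candidates, key=lambda state: state["version"])   # version counter or timestamp
        local_store[session_id] = latest               # migrate the latest state to the serving server
        return latest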
Abstract:
A method and system of load balancing application server resources operating in a distributed set of servers is described. In a representative embodiment, the set of servers comprises a region of a content delivery network. Each server in the set typically includes a server manager process and an application server on which edge-enabled applications or application components are executed. As service requests are directed to servers in the region, the application servers manage the requests in a load-balanced manner, and without any requirement that a particular application server be spawned on demand.
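One way to picture the per-region dispatch described above is the illustrative Python sketch below, which simply routes each request to the least-loaded application server already running in the region; the policy and all field names are assumptions, not the patent's specification.

    def pick_application_server(region_servers):
        """Route a request to the least-loaded application server already running in the region."""
        running = [s for s in region_servers if s["running"]]      # nothing is spawned on demand
        if not running:
            raise RuntimeError("no application server available in this region")
        return min(running, key=lambda s: s["active_requests"])

    # Illustrative region state, one entry per server manager's view of its local application server.
    region = [
        {"name": "edge-1", "running": True,  "active_requests": 12},
        {"name": "edge-2", "running": True,  "active_requests": 3},
        {"name": "edge-3", "running": False, "active_requests": 0},
    ]
    print(pick_application_server(region)["name"])     # -> edge-2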
Abstract:
A method is provided for scheduling execution of a plurality of threads executed in a multithreaded processor. Resource utilizations of each of the plurality of threads are measured while the plurality of threads are concurrently executing in the multithreaded processor. Each of the plurality of threads is scheduled according to the measured resource utilizations using a thread scheduler.
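The abstract does not name a particular policy, but a minimal sketch of scheduling "according to the measured resource utilizations" might look like the following, where the least-utilized runnable thread is chosen next; the data layout and the lowest-utilization-first rule are assumptions.

    def schedule_next(threads, utilization):
        """Pick the runnable thread with the lowest measured resource utilization (illustrative rule)."""
        runnable = [t for t in threads if t["runnable"]]
        return min(runnable, key=lambda t: utilization[t["id"]])

    # utilization[tid] could be a weighted mix of measured issue-slot, cache, and memory-port usage.
    threads = [{"id": 0, "runnable": True}, {"id": 1, "runnable": True}, {"id": 2, "runnable": False}]
    utilization = {0: 0.82, 1: 0.35, 2: 0.10}
    print(schedule_next(threads, utilization)["id"])   # -> 1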
Abstract:
A method is provided for guiding virtual-to-physical mapping policies in a computer system including a processor and a memory. State information is randomly sampled from selected memory references in a stream of memory references issued by the processor to the memory. Cache hit/miss status, translation-look-aside buffer hit/miss status, and effective virtual and physical memory addresses of the sampled memory references are recorded in a profile record. The recorded information is aggregated by virtual memory address, and a new virtual-to-physical mapping is chosen to reduce cache and translation-look-aside buffer miss rates.
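As an illustration only, one common mapping policy consistent with this abstract is page coloring: aggregate the sampled records by virtual page and assign the pages with the most cache misses to distinct cache colors. The sketch below assumes that shape of profile record; the constants and names are hypothetical.

    from collections import defaultdict

    PAGE_SIZE  = 4096
    NUM_COLORS = 64        # roughly cache size / (page size * associativity); illustrative value

    def choose_remappings(profile_records):
        """Aggregate sampled references by virtual page and suggest a cache color for each hot page."""
        misses_per_page = defaultdict(int)
        for rec in profile_records:                    # each rec: {"vaddr": ..., "cache_miss": ...}
            if rec["cache_miss"]:
                misses_per_page[rec["vaddr"] // PAGE_SIZE] += 1

        # Spread the pages that miss most often across distinct cache colors, round robin.
        hot_pages = sorted(misses_per_page, key=misses_per_page.get, reverse=True)
        return {page: color % NUM_COLORS for color, page in enumerate(hot_pages)}

    records = [{"vaddr": 0x1000, "cache_miss": True},
               {"vaddr": 0x1008, "cache_miss": True},
               {"vaddr": 0x9000, "cache_miss": False}]
    print(choose_remappings(records))                  # -> {1: 0}: virtual page 1 gets color 0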
Abstract:
An apparatus is provided for sampling instructions in a processor pipeline of a computer system. The pipeline has a plurality of processing stages. Instructions are fetched into a first stage of the pipeline. A subset of the fetched instructions is identified as selected instructions. Event, latency, and state information of the system is sampled while any of the selected instructions are in any stage of the pipeline. Software is informed whenever any of the selected instructions leaves the pipeline, so that it can read the event and latency information.
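The apparatus itself is hardware, but its behavior can be pictured with a small software model: randomly mark a fraction of fetched instructions as selected, accumulate per-stage event and latency information only for those, and call back into "software" when a selected instruction leaves the pipeline. The sampling rate, stage names, and record layout below are assumptions.

    import random

    SAMPLE_RATE = 1 / 1000      # fraction of fetched instructions marked as "selected"; illustrative

    def run_pipeline(instructions, on_sample):
        """Toy model: select a random subset at fetch, record per-stage info, notify software at exit."""
        for inst in instructions:
            if random.random() >= SAMPLE_RATE:
                continue                               # not a selected instruction: nothing is recorded
            record = {"pc": inst["pc"], "events": [], "latency": 0}
            for stage in ("fetch", "decode", "issue", "execute", "retire"):
                cycles = inst["stage_cycles"][stage]
                record["latency"] += cycles
                record["events"].append((stage, cycles))
            on_sample(record)                          # the "interrupt": software reads the sample

    insts = [{"pc": 4 * i, "stage_cycles": {"fetch": 1, "decode": 1, "issue": 1, "execute": 2, "retire": 1}}
             for i in range(10000)]
    run_pipeline(insts, on_sample=print)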
Abstract:
A processor includes an execution pipeline and a retire unit coupled to an end of the execution pipeline. The processor executes instructions of a program. An apparatus for collecting performance data while the instructions are executing includes a register coupled to the retire unit of the processor. Means are provided for incrementing the register whenever an instruction is retired from the execution pipeline. In addition, the apparatus includes means for generating an interrupt to an interrupt handler whenever the register is incremented to a predetermined value.
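A toy model of that counter-and-interrupt arrangement, with an arbitrary threshold and handler chosen only for illustration:

    class RetireCounter:
        """Toy model of a retire-count register that raises an interrupt at a predetermined value."""
        def __init__(self, threshold, interrupt_handler):
            self.value = 0
            self.threshold = threshold                 # the predetermined value, e.g. every 65536 retires
            self.interrupt_handler = interrupt_handler

        def on_retire(self, instruction):
            self.value += 1                            # incremented whenever an instruction retires
            if self.value == self.threshold:
                self.interrupt_handler(instruction)    # software samples the retired instruction here
                self.value = 0

    counter = RetireCounter(threshold=3, interrupt_handler=lambda inst: print("sampled", inst))
    for pc in range(10):
        counter.on_retire({"pc": pc})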
Abstract:
In a method for scheduling instructions executed in a computer system including a processor and a memory subsystem, pipeline latencies and resource utilization are measured by sampling hardware while the instructions are executing. The instructions are then scheduled according to the measured latencies and resource utilizations using an instruction scheduler.
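A minimal sketch of one scheduler that uses such measurements, assuming a simple greedy list-scheduling heuristic (issue the ready instruction with the longest measured latency first); the heuristic and all names are assumptions rather than the patent's method.

    def schedule(instructions, deps, measured_latency):
        """Greedy list scheduler: among ready instructions, issue the longest measured latency first."""
        scheduled, done = [], set()
        remaining = list(instructions)
        while remaining:
            ready = [i for i in remaining if deps.get(i, set()) <= done]
            nxt = max(ready, key=lambda i: measured_latency[i])   # hoist long-latency ops (e.g. misses)
            scheduled.append(nxt)
            done.add(nxt)
            remaining.remove(nxt)
        return scheduled

    # "load" was measured as slow (say, a frequent cache miss), so it is issued first.
    deps = {"add": {"load"}, "mul": set(), "load": set()}
    latency = {"load": 40, "mul": 5, "add": 1}
    print(schedule(["add", "mul", "load"], deps, latency))        # -> ['load', 'mul', 'add']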