Abstract:
A Registry, such as a UDDI registry, dynamically manages (e.g. filters and/or re-orders) the answers it returns to service queries from Service Consumers, based on one or more of: the individual state/status of each Service Provider, the collective state of the service environment, and policies in force in the environment. The Registry may be configured to infer a Service Provider's operational state/status, such as impending unavailability due to very low battery reserves, and to remove providers from the registry when they are determined to be unavailable. The Registry may also associate a shelf-life with a provider registration, based on characteristics of the Service Provider or on past experience with it. Such dynamic management allows the Registry to implement intelligent task distribution and load balancing among Service Providers, and to insulate Service Providers on fragile platforms (e.g. notebooks, handhelds, etc.) that might otherwise be overwhelmed if they offered themselves as traditional providers.
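The shelf-life and state-aware filtering described above can be sketched as follows. This is a minimal illustrative model, not the UDDI API; the class name, the TTL values, and the battery-based ordering policy are all assumptions chosen for the example.

```python
import time


class ServiceRegistry:
    """Sketch of a registry that filters and re-orders query answers
    based on Service Provider state (hypothetical model)."""

    def __init__(self):
        self._providers = {}  # provider name -> registration record

    def register(self, name, service, battery_level, on_battery):
        # Assumption: a fragile (battery-powered) platform gets a short
        # registration shelf-life; a mains-powered one gets a long one.
        ttl = 60 if on_battery else 3600
        self._providers[name] = {
            "service": service,
            "battery": battery_level,
            "expires": time.time() + ttl,
        }

    def query(self, service):
        now = time.time()
        # Filter: drop providers whose registration shelf-life has lapsed.
        live = [(n, r) for n, r in self._providers.items()
                if r["service"] == service and r["expires"] > now]
        # Re-order: providers with very low battery (likely to become
        # unavailable) sort last, spreading load toward healthy providers.
        live.sort(key=lambda nr: -nr[1]["battery"])
        return [n for n, _ in live]
```

A consumer querying for `"print"` would then receive the mains-powered server ahead of a nearly-drained handheld, even though both are registered.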
Abstract:
Currently, global registries, such as those offered by Microsoft Corporation (uddi.microsoft.com) or Hewlett Packard Corporation (uddi.hp.com), are used to register services offered by or desired by networked devices. Unfortunately, these registries are highly centralized and designed to be repositories for long-lived services, and thus are not well suited to mobile devices, such as laptop computers, personal digital assistants, and other devices whose network addresses may change frequently as they move in and out of various local network environments. Moreover, when such mobile devices form ad hoc networks, access to the centralized repositories may not be available. To address these issues, the devices of a local network may be configured to dynamically select a local master from among the devices attached to the network, based on characteristics of those devices; the selected device then operates a registry for the local network.
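A local-master election of the kind the abstract describes can be sketched as below. The specific characteristics compared (mains power, memory, device id) are assumptions for illustration; the key property is that every peer, given the same device list, deterministically elects the same master.

```python
def elect_local_master(devices):
    """Pick the device best suited to host the local registry.

    `devices` is a list of dicts describing each attached device.
    Assumed ranking: mains-powered beats battery-powered, then more
    memory wins, with the device id as a deterministic tie-breaker.
    """
    return max(devices,
               key=lambda d: (d["mains_powered"], d["memory_mb"], d["id"]))
```

Because the comparison is a pure function of the advertised characteristics, devices joining an ad hoc network can run the election independently and still agree on the master.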
Abstract:
According to an embodiment of the invention, a method for operating a data processing machine is described in which data about a state of the machine is written to a location in storage. The location is one that is accessible to software that may be written for the machine. The state data as written is encoded. This state data may be recovered from the storage according to a decoding process. Other embodiments are also described and claimed.
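The write-encoded/read-decoded state data described above can be modeled with a trivial reversible transform. The fixed XOR mask is purely an assumption for the sketch; the abstract does not specify any particular encoding, only that the stored form is encoded and recoverable by a matching decoding process.

```python
MASK = 0xA5  # hypothetical fixed mask standing in for a real encoding key


def encode_state(state: bytes) -> bytes:
    # Machine-state bytes are encoded before landing in storage that is
    # accessible to software written for the machine.
    return bytes(b ^ MASK for b in state)


def decode_state(blob: bytes) -> bytes:
    # Applying the same mask inverts the encoding, recovering the state.
    return bytes(b ^ MASK for b in blob)
```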
Abstract:
Methods and systems for performing microcode patching are presented. In one embodiment, a data processing system comprises a cache memory and a processor. The cache memory comprises a plurality of cache sections. The processor sequesters one or more cache sections of the cache memory and stores processor microcode therein. In one embodiment, the processor executes the microcode in the one or more cache sections.
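Sequestering cache sections for microcode can be illustrated with a toy software model of the cache. This is only a behavioral sketch; the class, section size, and method names are invented for the example and do not represent an actual processor interface.

```python
class Cache:
    """Toy model of a cache divided into sections, some of which can be
    sequestered (removed from normal allocation) to hold microcode."""

    def __init__(self, n_sections, section_bytes=64):
        self.sections = [bytearray(section_bytes) for _ in range(n_sections)]
        self.sequestered = set()

    def sequester(self, idx):
        # A sequestered section is invisible to normal fills/evictions.
        self.sequestered.add(idx)

    def store_microcode(self, idx, patch: bytes):
        # Microcode may only be stored in a sequestered section.
        assert idx in self.sequestered
        self.sections[idx][:len(patch)] = patch

    def allocatable(self):
        # Sections still available to ordinary cached data.
        return [i for i in range(len(self.sections))
                if i not in self.sequestered]
```

The model captures the key idea: once a section is sequestered and loaded with microcode, ordinary cache traffic can no longer evict it.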
Abstract:
In an embodiment, memory access requests for information stored within a system memory pass through an integrated circuit. The system memory may include a micro-architectural memory region to store instructions and/or data, where the micro-architectural memory region is exclusively accessible by a micro-architectural agent. The integrated circuit may include a memory access director to direct memory access requests to the micro-architectural memory region if the memory access director determines that the memory access request targets a location within the micro-architectural memory region and the micro-architectural agent is operating in a micro-architectural memory region access mode.
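The director's two-part check (address in the reserved region, and agent in the access mode) can be sketched as a small routing function. The address range and return labels are assumptions for illustration only.

```python
UARCH_REGION = range(0x1000, 0x2000)  # hypothetical reserved address range


def direct_access(addr, agent_in_uarch_mode):
    """Route a memory access request, mirroring the director's check:
    the micro-architectural region is reachable only when BOTH the
    address falls inside it AND the requesting agent is in the
    micro-architectural access mode."""
    if addr in UARCH_REGION:
        if agent_in_uarch_mode:
            return "uarch-region"
        return "fault"  # ordinary software may not touch the region
    return "system-memory"
```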
Abstract:
The invention provides a cache management system comprising, in various embodiments, pre-load and pre-own functionality to enhance cache efficiency in shared-memory distributed-cache multiprocessor computer systems. Some embodiments of the invention comprise an invalidation history table that records the line addresses of cache lines invalidated through dirty or clean invalidation; invalidated cache lines recorded in the table are reloaded into cache by monitoring the bus for their line addresses. In some further embodiments, a write-back bit associated with each L2 cache entry is set when either a hit to the same line in another processor is detected or the same line is invalidated in another processor's cache, and the system broadcasts write-backs from the selected local cache only when the line being written back has its write-back bit set.
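The invalidation-history mechanism can be sketched as below: record the address of each invalidated line, then, while snooping the bus, pre-load any line whose address is remembered in the table. This is a behavioral sketch under assumed names; a real implementation would be a fixed-size hardware table, not an unbounded set.

```python
class InvalidationHistoryTable:
    """Sketch: remember invalidated line addresses so that when one of
    those addresses later appears on the bus, the line is reloaded
    (pre-loaded) into the local cache before it is demanded again."""

    def __init__(self):
        self.invalidated = set()  # remembered invalidated line addresses
        self.cache = set()        # line addresses currently cached

    def invalidate(self, line_addr):
        # Dirty or clean invalidation: evict and record the address.
        self.cache.discard(line_addr)
        self.invalidated.add(line_addr)

    def snoop_bus(self, line_addr):
        # A remembered line observed on the bus is reloaded pre-emptively.
        if line_addr in self.invalidated:
            self.cache.add(line_addr)
            self.invalidated.discard(line_addr)
```

The pay-off is that a line bounced between processors returns to the local cache via bus monitoring, rather than via a later demand miss.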