Abstract:
A mechanism for priority control in resource allocation for low-request-rate, latency-sensitive units is provided. With this mechanism, when a unit makes a request to a token manager, the unit identifies the priority of its request, the resource it desires to access, and the unit's resource access group (RAG). This information is used to set a value in a storage device associated with the resource, priority, and RAG identified in the request. When the token manager generates and grants a token to the RAG, the token is in turn granted to a unit within the RAG based on the priorities of the pending requests identified in the storage devices associated with the resource and RAG. Priority pointers are used to provide a round-robin fairness scheme between high- and low-priority requests within the RAG for the resource.
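The arbitration within a RAG can be pictured with a short C sketch. This is a minimal illustration under assumed details (four units per RAG, two priority levels, one pending bit per unit, a round-robin pointer per level, and a priority pointer that alternates between the levels); the names rag_state_t, post_request, and grant_token are hypothetical and not taken from the patent.

```c
#include <stdint.h>

#define UNITS_PER_RAG 4

/* Hypothetical per-RAG, per-resource request state: one pending bit per
 * unit at each priority level, a round-robin pointer within each level,
 * and a priority pointer that alternates grants between the two levels. */
typedef struct {
    uint8_t  pending[2];   /* [0]=high, [1]=low; bit i => unit i has a request */
    unsigned next_unit[2]; /* round-robin pointer within each priority level   */
    unsigned next_prio;    /* priority pointer: which level is served first    */
} rag_state_t;

/* A unit registers a request at priority 0 (high) or 1 (low). */
static void post_request(rag_state_t *r, unsigned unit, unsigned prio)
{
    r->pending[prio] |= (uint8_t)(1u << unit);
}

/* Grant a token that the token manager just issued to this RAG.
 * Returns the unit number, or -1 if no request is pending. */
static int grant_token(rag_state_t *r)
{
    for (unsigned attempt = 0; attempt < 2; attempt++) {
        unsigned prio = (r->next_prio + attempt) % 2;
        for (unsigned i = 0; i < UNITS_PER_RAG; i++) {
            unsigned unit = (r->next_unit[prio] + i) % UNITS_PER_RAG;
            if (r->pending[prio] & (1u << unit)) {
                r->pending[prio] &= (uint8_t)~(1u << unit);  /* clear storage bit */
                r->next_unit[prio] = (unit + 1) % UNITS_PER_RAG;
                r->next_prio = (prio + 1) % 2;               /* alternate levels  */
                return (int)unit;
            }
        }
    }
    return -1;  /* no request pending in this RAG */
}
```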
Abstract:
A method and apparatus are provided for efficiently managing limited resources in a given computer system. The system utilizes a token manager that assigns tokens to groups of associated requesters. The tokens are then used by the requesters to occupy the given resource. The allocation of these tokens thus prevents problems such as denial of service due to a lack of available resources.
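As a rough illustration of the idea, the following C sketch models a per-group token allotment that caps how much of a resource one group of requesters can hold at once; the group_tokens_t structure and the acquire_token/release_token helpers are illustrative assumptions rather than the patented design.

```c
#include <stdbool.h>

/* Hypothetical token pool: each requester group is allotted a fixed share of
 * tokens for a resource; a group cannot consume more than its allotment, so
 * one group cannot monopolize the resource and starve the others. */
typedef struct {
    unsigned allotted;   /* tokens assigned to this group by the token manager */
    unsigned in_use;     /* tokens currently held by requesters in the group   */
} group_tokens_t;

/* Acquire a token before using the resource; returns false if the group
 * has exhausted its allotment. */
static bool acquire_token(group_tokens_t *g)
{
    if (g->in_use >= g->allotted)
        return false;
    g->in_use++;
    return true;
}

/* Release the token once the requester is done with the resource. */
static void release_token(group_tokens_t *g)
{
    if (g->in_use > 0)
        g->in_use--;
}
```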
Abstract:
A method and apparatus are provided for preventing unwanted access to secure areas of memory during the POR or boot sequence of a CPU. Via control within the CPU, commands sent to and received by the CPU prior to completion of the POR sequence can be denied I/O address translation, thus protecting memory during the POR sequence. Furthermore, an error response can be generated in the CPU and sent back to the I/O device that issued the command.
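A minimal C sketch of this gating behavior follows; the por_complete flag, the translate_io_command helper, and the response codes are hypothetical names used only to show the idea of denying translation and returning an error until POR completes.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical translation front end: incoming I/O commands are refused
 * address translation until the POR/boot sequence has finished, and an
 * error response is returned to the issuing I/O device instead. */

typedef enum { RESP_OK, RESP_ERROR } io_response_t;

static volatile bool por_complete = false;  /* set when the POR sequence finishes */

static io_response_t translate_io_command(uint64_t io_addr, uint64_t *out_real_addr)
{
    if (!por_complete) {
        /* POR still in progress: deny translation, signal an error to the device */
        return RESP_ERROR;
    }
    /* ...normal I/O address translation would happen here...                     */
    *out_real_addr = io_addr;  /* placeholder identity mapping for the sketch     */
    return RESP_OK;
}
```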
Abstract:
An I/O address translation apparatus and method for specifying relaxed ordering for I/O accesses are provided. With the apparatus and method, storage ordering (SO) bits are provided in an I/O address translation data structure, such as a page table or segment table. These SO bits define the order in which reads and/or writes initiated by an I/O device may be performed. These SO bits are combined with an ordering bit on the I/O interface, e.g., the Relaxed Ordering Attribute bit of PCI Express. The weaker of the orderings indicated by the I/O address translation data structure and by the I/O interface relaxed ordering bit is used to control the order in which I/O operations may be performed.
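The combination rule can be sketched in C as follows, assuming a hypothetical encoding in which two SO bits map to three ordering strengths and the weaker (more relaxed) of the translation-entry ordering and the interface relaxed-ordering attribute governs the access; the enum values and helper names are illustrative, not taken from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical encoding: a larger value means weaker (more relaxed) ordering. */
enum ordering { STRICT_ORDER = 0, RELAXED_WRITES = 1, RELAXED_READS_WRITES = 2 };

/* Assumed mapping of the two SO bits in the translation entry to an
 * ordering strength. */
static enum ordering so_bits_to_ordering(uint8_t so_bits)
{
    switch (so_bits & 0x3) {
    case 0x0: return STRICT_ORDER;
    case 0x1: return RELAXED_WRITES;
    default:  return RELAXED_READS_WRITES;
    }
}

/* The effective ordering for an access is the weaker of the ordering from
 * the translation entry and the ordering from the interface attribute bit. */
static enum ordering effective_ordering(uint8_t pte_so_bits, bool relaxed_attr)
{
    enum ordering from_pte = so_bits_to_ordering(pte_so_bits);
    enum ordering from_bus = relaxed_attr ? RELAXED_READS_WRITES : STRICT_ORDER;
    return from_pte > from_bus ? from_pte : from_bus;
}
```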
Abstract:
The present invention provides a mechanism for a processor to write data to a cache or other fast memory, without also writing it to main memory. Further, the data is “locked” into the cache or other fast memory until it is loaded for use. Data remains in the locking cache until it is specifically overwritten under software control. The locking cache or other fast memory can be used as additional system memory. In an embodiment of the invention, the locking cache is one or more sets of ways, but not all of the sets or ways, of a set-associative cache.
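One way to picture the way-partitioned locking cache is the victim-selection sketch below, written in C under assumed details (eight ways per set, a bit mask of locked ways); the set_policy_t structure and pick_victim_way function are hypothetical and simply show that locked ways are never chosen for eviction or written back.

```c
#include <stdint.h>

#define WAYS 8

/* Hypothetical way management for a set-associative cache in which a subset
 * of ways is dedicated to the locking cache.  Lines in locked ways are never
 * chosen as victims and are never written back to main memory; they persist
 * until software explicitly overwrites them. */
typedef struct {
    uint8_t locked_ways;   /* bit w set => way w belongs to the locking cache */
} set_policy_t;

/* Victim selection for a normal (non-locking) fill: skip locked ways. */
static int pick_victim_way(const set_policy_t *p, unsigned lru_hint)
{
    for (unsigned i = 0; i < WAYS; i++) {
        unsigned way = (lru_hint + i) % WAYS;
        if (!(p->locked_ways & (1u << way)))
            return (int)way;          /* evictable way found */
    }
    return -1;                        /* all ways locked: no victim available */
}

/* A processor store directed at the locking region fills a locked way
 * directly; no write-back to main memory is scheduled for that line. */
```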
Abstract:
The present invention provides a mechanism for storing data transferred from an I/O device, a network, or a disk into a portion of a cache or other fast memory, without also writing it to main memory. Further, the data is “locked” into the cache or other fast memory until it is loaded for use. Data remains in the locking cache until it is specifically overwritten under software control. In an embodiment of the invention, a processor can write data to the cache or other fast memory without also writing it to main memory. The portion of the cache or other fast memory can be used as additional system memory.
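As an illustration only, the following C sketch models the locked cache portion as a ring of cache-line-sized slots into which inbound I/O data is deposited and from which it is later loaded; the buffer layout, slot count, and function names are assumptions made for the example, not the patented structure.

```c
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 128
#define SLOTS      64

/* Hypothetical receive buffer carved out of the locking cache region.
 * Inbound I/O (device, network, or disk) data is written directly into
 * these cache-resident slots and is not propagated to main memory; a
 * consumer later loads it, after which software may overwrite the slot. */
typedef struct {
    uint8_t  slot[SLOTS][LINE_BYTES];  /* lines pinned in the locking cache */
    unsigned head;                     /* next slot the I/O path fills      */
    unsigned tail;                     /* next slot the consumer drains     */
} locked_rx_buffer_t;

/* I/O path deposits one cache line of inbound data into the locked region. */
static int rx_store(locked_rx_buffer_t *b, const uint8_t data[LINE_BYTES])
{
    unsigned next = (b->head + 1) % SLOTS;
    if (next == b->tail)
        return -1;                          /* buffer full */
    memcpy(b->slot[b->head], data, LINE_BYTES);
    b->head = next;
    return 0;
}

/* Consumer loads the data for use, freeing the slot for software reuse. */
static int rx_load(locked_rx_buffer_t *b, uint8_t out[LINE_BYTES])
{
    if (b->tail == b->head)
        return -1;                          /* buffer empty */
    memcpy(out, b->slot[b->tail], LINE_BYTES);
    b->tail = (b->tail + 1) % SLOTS;
    return 0;
}
```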
Abstract:
A system for sharing memory by heterogeneous processors, each of which is adapted to process its own instruction set, is presented. A common bus is used to couple the common memory to the various processors. In one embodiment, a cache for more than one of the processors is stored in the shared memory. In another embodiment, some of the processors include a local memory area that is mapped to the shared memory pool. In yet another embodiment, local memory included on one or more of the processors is partially shared so that some of the local memory is mapped to the shared memory area, while remaining memory in the local memory is private to the particular processor.
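A small C sketch can illustrate the partially shared local memory case: local-memory offsets above an assumed shared_base are aliased onto the common bus, while lower offsets stay private to the owning processor. The local_map_t structure and local_to_bus helper are hypothetical names introduced for this example.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical layout for a processor whose local memory is only partially
 * shared: offsets below shared_base are private to the processor, while the
 * remainder is aliased into the system-wide shared memory pool at
 * shared_alias on the common bus. */
typedef struct {
    uint64_t local_size;    /* total size of the processor's local memory      */
    uint64_t shared_base;   /* local offset where the shared window begins     */
    uint64_t shared_alias;  /* bus address at which the shared window appears  */
} local_map_t;

/* Translate a local-memory offset to a common-bus address, if it is shared.
 * Returns false for private offsets, which other processors cannot reach. */
static bool local_to_bus(const local_map_t *m, uint64_t local_off, uint64_t *bus_addr)
{
    if (local_off >= m->local_size || local_off < m->shared_base)
        return false;                                  /* out of range or private */
    *bus_addr = m->shared_alias + (local_off - m->shared_base);
    return true;
}
```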
Abstract:
The present invention provides a method of storing data transferred from an I/O device, a network, or a disk into a portion of a cache or other fast memory, without also writing it to main memory. Further, the data is “locked” into the cache or other fast memory until it is loaded for use. Data remains in the locking cache until it is specifically overwritten under software control. In an embodiment of the invention, a processor can write data to the cache or other fast memory without also writing it to main memory. The portion of the cache or other fast memory can be used as additional system memory.
Abstract:
A method, system, apparatus, and article of manufacture for performing cacheline polling utilizing a store and reserve instruction are disclosed. In accordance with one embodiment of the present invention, a first process initially requests an action to be performed by a second process. A reservation is set at a cacheable memory location via a store operation. The first process reads the cacheable memory location via a load operation to determine whether or not the requested action has been completed by the second process. The load operation of the first process is stalled until the reservation on the cacheable memory location is lost. After the requested action has been completed, the reservation on the cacheable memory location is reset by the second process.
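The polling sequence might look like the C sketch below. The store_and_reserve and load_when_reservation_lost intrinsics are hypothetical stand-ins for the store and reserve instruction and the stalled load described above; their names and exact semantics are assumptions made for illustration.

```c
#include <stdint.h>

/* Hypothetical intrinsics for the store-and-reserve scheme:
 *   store_and_reserve(p, v): stores v to *p and establishes a reservation
 *                            on the enclosing cache line.
 *   load_when_reservation_lost(p): a load of *p that the hardware stalls
 *                            until the reservation on the line is lost,
 *                            i.e., another agent has written the line.    */
extern void     store_and_reserve(volatile uint32_t *p, uint32_t v);
extern uint32_t load_when_reservation_lost(volatile uint32_t *p);

#define PENDING 0u
#define DONE    1u

/* First process: request work and poll without issuing repeated loads. */
static uint32_t request_and_wait(volatile uint32_t *flag)
{
    store_and_reserve(flag, PENDING);   /* set reservation, mark request pending */
    /* ...signal the second process to perform the requested action...           */

    /* This load stalls until the second process writes the flag, which
     * cancels the reservation; no busy-wait polling loop is needed.       */
    return load_when_reservation_lost(flag);
}

/* Second process: completing the action writes the flag, which causes the
 * first process's reservation to be lost and releases its stalled load.   */
static void complete_action(volatile uint32_t *flag)
{
    *flag = DONE;
}
```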
Abstract:
The present invention provides a system comprising a DMA queue configured to receive a DMA command comprising a tag, wherein the tag belongs to one of a plurality of tag groups. A counter is coupled to the DMA queue and is configured to increment a tag group count of the tag group to which the tag belongs upon receipt of the DMA command by the DMA queue, and to decrement the tag group count upon execution of the DMA command. A tag group count status register is coupled to the counter and is configured to store the tag group count for each of the plurality of tag groups. The tag group count status register is further configured to receive a request for a tag group status and to respond to that request.
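The bookkeeping described here can be modeled with a short C sketch: a per-tag-group counter incremented on enqueue and decremented on completion, with a query returning the outstanding count. The tag-to-group mapping (tag modulo the number of groups) and all identifiers are assumptions made for the example.

```c
#include <stdint.h>

#define TAG_GROUPS 32

/* Hypothetical tag-group bookkeeping around the DMA queue: a per-group
 * counter is incremented when a command carrying a tag in that group is
 * enqueued and decremented when the command completes, and a status
 * register reports the outstanding count for any group on request. */
typedef struct {
    uint32_t count[TAG_GROUPS];   /* outstanding DMA commands per tag group */
} tag_group_status_t;

/* A DMA command carrying this tag is received by the DMA queue. */
static void dma_enqueue(tag_group_status_t *s, unsigned tag)
{
    unsigned group = tag % TAG_GROUPS;   /* assumed mapping of tag to group */
    s->count[group]++;
}

/* The DMA command carrying this tag has been executed. */
static void dma_complete(tag_group_status_t *s, unsigned tag)
{
    unsigned group = tag % TAG_GROUPS;
    if (s->count[group] > 0)
        s->count[group]--;
}

/* Respond to a status query: returns the number of commands still
 * outstanding in the requested tag group (0 means the group is quiesced). */
static uint32_t tag_group_query(const tag_group_status_t *s, unsigned group)
{
    return s->count[group % TAG_GROUPS];
}
```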