Abstract:
Two or more processors each provide a specified thread to access a shared resource that can only be accessed by one thread at a given time. A locking mechanism enables one of the threads to access the shared resource while the other threads are retained in a waiting queue. Responsive to an additional thread that is not one of the specified threads being provided access to the shared resource during an identified time period, and responsive to a first criterion and a second criterion being met, the additional thread accesses the shared resource before the other threads in the waiting queue.
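As a point of reference, the following C sketch (using POSIX threads) shows one conventional way to realize the baseline arrangement described above: a lock whose waiters are held in a queue and served in arrival order. The names (queued_lock_t, queued_lock_acquire, and so on) are illustrative and do not come from the patent.

#include <pthread.h>

/* Illustrative queued lock: one thread holds the shared resource while the
 * others wait and are served in arrival (FIFO) order. */
typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    unsigned long   next_ticket;   /* position handed to each arriving thread */
    unsigned long   now_serving;   /* ticket currently allowed to hold the resource */
} queued_lock_t;

static void queued_lock_init(queued_lock_t *l)
{
    pthread_mutex_init(&l->m, NULL);
    pthread_cond_init(&l->cv, NULL);
    l->next_ticket = 0;
    l->now_serving = 0;
}

static void queued_lock_acquire(queued_lock_t *l)
{
    pthread_mutex_lock(&l->m);
    unsigned long my_ticket = l->next_ticket++;   /* join the waiting queue */
    while (my_ticket != l->now_serving)           /* wait until highest priority */
        pthread_cond_wait(&l->cv, &l->m);
    pthread_mutex_unlock(&l->m);                  /* caller now owns the resource */
}

static void queued_lock_release(queued_lock_t *l)
{
    pthread_mutex_lock(&l->m);
    l->now_serving++;                             /* hand off to the next waiter */
    pthread_cond_broadcast(&l->cv);
    pthread_mutex_unlock(&l->m);
}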
Abstract:
Embodiments of the invention provide a method, apparatus and computer program product for enabling a thread to acquire a lock associated with a shared resource, when a locking mechanism is used therewith, wherein each embodiment reduces waiting time and enhances efficiency in using the shared resource. One embodiment is associated with a plurality of processors, which includes two or more processors that each provide a specified thread to access a shared resource. The shared resource can only be accessed by one thread at a given time; a locking mechanism enables a first one of the specified threads to access the shared resource while each of the other specified threads is retained in a waiting queue, and a second one of the specified threads occupies a position of highest priority in the queue. The method includes the step of identifying a time period between a time when the first specified thread releases access to the shared resource and a later time when the second specified thread becomes enabled to access the shared resource. Responsive to an additional thread that is not one of the specified threads being provided by a processor to access the shared resource during the identified time period, it is determined whether a first prespecified criterion pertaining to the specified threads retained in the queue has been met. Responsive to the first criterion being met, the method determines whether a second prespecified criterion has been met, wherein the second criterion is that the number of specified threads in the queue has not decreased since a specified prior time. Responsive to the second criterion being met, the method then decides whether to enable the additional thread to access the shared resource before the second specified thread accesses the resource.
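The decision for the additional thread can be pictured as a small predicate in C. The abstract does not spell out the first prespecified criterion, so a queue-length threshold is used below purely as a hypothetical stand-in; the second criterion follows the abstract (the number of specified threads in the queue has not decreased since a prior sample time). All names are illustrative.

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical snapshot of the waiting queue used to evaluate both criteria. */
struct queue_state {
    size_t waiting_now;      /* specified threads currently in the queue */
    size_t waiting_before;   /* specified threads in the queue at the prior sample time */
};

/* Decide whether an additional (non-queued) thread arriving in the window
 * between lock release and hand-off may access the resource ahead of the
 * second specified thread. */
static bool allow_additional_thread(const struct queue_state *q,
                                    size_t first_criterion_threshold)
{
    /* First prespecified criterion: illustrative stand-in only. */
    if (q->waiting_now < first_criterion_threshold)
        return false;

    /* Second criterion (from the abstract): the number of specified threads
     * in the queue has not decreased since the specified prior time. */
    return q->waiting_now >= q->waiting_before;
}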
Abstract:
A method of dynamically reallocating memory affinity in a virtual machine after migrating the virtual machine from a source computer system to a destination computer system migrates processor states and resources used by the virtual machine from the source computer system to the destination computer system. The method maps memory of the virtual machine to processor nodes of the destination computer system. The method deletes memory mappings in processor hardware, such as translation lookaside buffers and effective-to-real address tables, for the virtual machine on the destination computer system. The method starts the virtual machine on the destination computer system in virtual real memory mode. A hypervisor running on the destination computer system receives a page fault and the virtual address of a page for said virtual machine from a processor of the destination computer system and determines whether the page is in local memory of the processor. If the hypervisor determines the page to be in the local memory of the processor, the hypervisor returns a physical address mapping for the page to the processor. If the hypervisor determines the page not to be in the local memory of the processor, the hypervisor moves the page to local memory of the processor and returns a physical address mapping for said page to the processor.
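A minimal C sketch of the fault-handling path described above, assuming a greatly simplified hypervisor page table; vrm_page, relocate_frame and handle_vrm_page_fault are hypothetical names, and the page "re-homing" is simulated rather than performing an actual copy.

#include <stdint.h>

#define PAGE_SIZE 4096ULL
#define NUM_PAGES 64

/* Hypothetical per-page record kept by the hypervisor. */
struct vrm_page {
    int      home_node;   /* NUMA node whose local memory currently holds the page */
    uint64_t frame;       /* physical frame backing the page */
};

static struct vrm_page vrm_pages[NUM_PAGES];

/* Stand-in for copying a page into a frame local to `node`; a real hypervisor
 * would allocate a local frame and copy the contents. */
static uint64_t relocate_frame(uint64_t old_frame, int node)
{
    return ((uint64_t)node << 32) | old_frame;   /* simulated new frame number */
}

/* After migration the virtual machine runs in virtual real memory mode, so the
 * first touch of each page faults into the hypervisor, which re-homes the page
 * on the faulting processor's node and returns a physical mapping. */
static uint64_t handle_vrm_page_fault(uint64_t fault_addr, int faulting_node)
{
    struct vrm_page *p = &vrm_pages[(fault_addr / PAGE_SIZE) % NUM_PAGES];

    if (p->home_node != faulting_node) {
        /* Page is not in the faulting processor's local memory: move it there. */
        p->frame = relocate_frame(p->frame, faulting_node);
        p->home_node = faulting_node;
    }

    /* Physical address mapping the processor uses to rebuild its TLB / ERAT entry. */
    return p->frame * PAGE_SIZE + (fault_addr % PAGE_SIZE);
}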
Abstract:
A method includes atomically reading a next field of a current element of a linked list to determine a first value that encodes a first pointer to a first element of the list and a first indication of an owner of the first element. The first indication of the owner is stored in a first of a plurality of multi-field reservation data structures. The method includes determining whether the next field of the current element still indicates the first value, and reading the first element of the linked list via the first pointer if the next field of the current element still indicates the first value. If the next field of the current element indicates a second value different from the first value, the first indication of the owner is removed from the first multi-field reservation data structure, and the storing and determining are repeated with the second value.
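A C11-atomics sketch of this read protocol, assuming a hazard-pointer-style reservation record and a next field whose low bits carry the owner indication; the encoding, the OWNER_MASK value, and all field names are assumptions made for illustration, not the patented format.

#include <stdatomic.h>
#include <stdint.h>

/* Assumed encoding: the low bits of the next field carry the owner indication
 * and the remaining bits carry the pointer to the next element. */
#define OWNER_MASK ((uintptr_t)0x7)

struct node {
    _Atomic uintptr_t next;   /* encodes pointer to the next element plus owner bits */
    int payload;
};

/* One multi-field reservation data structure per reader. */
struct reservation {
    _Atomic uintptr_t owner;   /* owner indication published by the reader */
    _Atomic uintptr_t ptr;     /* element the reader intends to dereference */
};

/* Snapshot the next field (first value), publish the owner in a reservation,
 * re-check that the next field still indicates that value, and only then
 * dereference; otherwise retract the reservation and retry. */
static struct node *read_next_safely(struct node *cur, struct reservation *res)
{
    for (;;) {
        uintptr_t first = atomic_load(&cur->next);        /* first value */
        atomic_store(&res->owner, first & OWNER_MASK);    /* store owner indication */
        atomic_store(&res->ptr, first & ~OWNER_MASK);

        if (atomic_load(&cur->next) == first)             /* still the first value? */
            return (struct node *)(first & ~OWNER_MASK);  /* safe to read the element */

        /* The next field now indicates a different (second) value: remove the
         * owner indication from the reservation and repeat with the new value. */
        atomic_store(&res->owner, (uintptr_t)0);
        atomic_store(&res->ptr, (uintptr_t)0);
    }
}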
Abstract:
Disclosed is a computer implemented method and computer program product to prioritize the paging-in of pages in a remote paging device. An arrival machine receives checkpoint data from a departure machine. The arrival machine restarts at least one process corresponding to the checkpoint data. The arrival machine determines whether a page associated with the process is pinned. The arrival machine associates the page with the remote paging device, responsive to a determination that the page is pinned. The arrival machine touches the page.
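A short C sketch of the arrival-machine step, assuming a hypothetical ckpt_page descriptor derived from the checkpoint data; the "touch" is modeled as a volatile read that would force an immediate page-in from the remote paging device.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical descriptor for a checkpointed page; names are illustrative. */
struct ckpt_page {
    volatile uint8_t *vaddr;   /* address of the page in the restarted process */
    bool pinned;               /* page was pinned on the departure machine */
    bool on_remote_pager;      /* page is to be fetched from the remote paging device */
};

/* Pinned pages are associated with the remote paging device and then touched,
 * so they are paged in ahead of ordinary demand faults. */
static void prioritize_pinned_pages(struct ckpt_page *pages, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (!pages[i].pinned)
            continue;                       /* unpinned pages fault in on demand */

        pages[i].on_remote_pager = true;    /* associate page with remote paging device */

        /* Touching the page forces an immediate page-in. */
        (void)*pages[i].vaddr;
    }
}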
Abstract:
Disclosed is a computer implemented method, apparatus and computer program product for communicating virtual memory page status to a virtual memory manager. An operating system may receive a request to free a virtual memory page from a first application. The operating system determines whether the virtual memory page is free due to an operating system page replacement. Responsive to a determination that the virtual memory page is free due to the operating system page replacement, the operating system inhibits marking the virtual memory page as unused. Finally, the operating system may insert the virtual memory page on an operating system free list.
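A C sketch of the decision described above; free_reason, vm_page and the field names are illustrative assumptions rather than the patent's interfaces.

#include <stdbool.h>

/* Hypothetical page bookkeeping; names are illustrative. */
enum free_reason { FREED_BY_APPLICATION, FREED_BY_PAGE_REPLACEMENT };

struct vm_page {
    bool marked_unused;    /* advertises that the backing frame can be reclaimed */
    bool on_free_list;
};

/* When a page is freed because of the operating system's own page replacement
 * (rather than by the application), marking it "unused" is inhibited, but the
 * page is still inserted on the operating system free list. */
static void free_virtual_page(struct vm_page *page, enum free_reason reason)
{
    if (reason != FREED_BY_PAGE_REPLACEMENT) {
        /* Application-initiated free: safe to advertise the page as unused. */
        page->marked_unused = true;
    }
    /* Page-replacement-initiated free: inhibit the "unused" marking, since the
     * page still backs a valid mapping that may be paged in again later. */

    page->on_free_list = true;   /* insert on the operating system free list */
}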
Abstract:
In one embodiment, a method for migrating a workload from one processing resource to a second processing resource of a computing platform is disclosed. The method can include receiving a command to migrate a workload that is processing; in response to the migration command, the process can be interrupted and some memory processes can be frozen. An index table can be created that identifies the memory locations that determine where the process was when it was interrupted. Table data, pinned page data, and non-private process data can be sent to the second processing resource, and this data can contain restart-type data. The second, or target, resource can utilize this data to restart the process without requiring bulk data transfers, providing an efficient migration process. Other embodiments are also disclosed.
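The data shipped to the target resource can be pictured with a couple of hypothetical C structures; the field names and layout below are illustrative assumptions only and are not taken from the patent.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical entry of the index table built when the process is interrupted. */
struct index_entry {
    uint64_t vaddr;    /* memory location the process was using when interrupted */
    uint64_t length;   /* extent of that region */
};

/* Hypothetical payload sent to the second (target) processing resource. */
struct migration_payload {
    struct index_entry *index_table;   /* identifies interrupted-state memory locations */
    size_t index_count;
    void  *pinned_pages;               /* pinned page data, shipped eagerly */
    size_t pinned_bytes;
    void  *nonprivate_data;            /* non-private process data */
    size_t nonprivate_bytes;
    int    restart_type;               /* tells the target resource how to resume */
};

/* The target resource restarts the process from this payload; private,
 * unpinned memory is faulted in on demand rather than bulk-transferred. */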
Abstract:
Controlled partition shut-down is provided within a shared memory partition data processing system including a shared memory partition, a paging service partition, a hypervisor and a shared memory pool within physical memory. The hypervisor manages access to logical pages within the pool and page-out of pages from the pool to external paging storage via the paging service partition. A respective paging service stream exists between the paging service partition and hypervisor for each shared memory partition, with each stream including a stream state. The control method includes: responsive to a shut-down initiating event, notifying the paging service partition to shut down, and determining whether a shared memory partition is currently active, and if so, signaling the hypervisor to complete paging activity for the active memory partition and waiting for its stream state to enter a suspended or a completed state before automatically shutting down the paging service partition.
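A self-contained C sketch of the shut-down sequence, with the hypervisor signaling reduced to a stub that immediately completes the stream; the stream-state enumeration, field names and polling loop are illustrative assumptions.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-partition paging service stream; the abstract names the
 * "suspended" and "completed" states that shut-down waits for. */
enum stream_state { STREAM_ACTIVE, STREAM_SUSPENDED, STREAM_COMPLETED };

struct paging_stream {
    bool partition_active;     /* the shared memory partition is still running */
    enum stream_state state;   /* state of this partition's paging stream */
};

/* Stand-in for signaling the hypervisor to complete paging activity; a real
 * implementation would drain outstanding page-outs asynchronously. */
static void signal_complete_paging(struct paging_stream *s)
{
    s->state = STREAM_COMPLETED;
}

/* For each active shared memory partition, signal the hypervisor and wait for
 * its stream to enter a suspended or completed state, then shut down the
 * paging service partition. */
static void controlled_shutdown(struct paging_stream *streams, int count)
{
    for (int i = 0; i < count; i++) {
        if (!streams[i].partition_active)
            continue;
        signal_complete_paging(&streams[i]);
        while (streams[i].state != STREAM_SUSPENDED &&
               streams[i].state != STREAM_COMPLETED)
            ;   /* wait for paging activity to quiesce */
    }
    puts("paging service partition shutting down");
}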
Abstract:
A method, system and computer program product for allocating real memory to virtual memory page sizes when all real memory is in use includes, in response to a page fault, selecting a page frame for a virtual page. In response to determining that said page does not represent a new page, the page is paged into said page frame and a repaging rate for the page size of the page is modified in a repaging rates data structure.
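A C sketch of the accounting on this fault path; frame selection (stealing a frame when all real memory is in use) is omitted, and the repaging-rates structure and page-size enumeration are hypothetical stand-ins.

#include <stdbool.h>

/* Hypothetical repaging-rates data structure, one slot per supported page size. */
enum page_size { PSZ_4K, PSZ_64K, PSZ_COUNT };

struct repaging_rates {
    unsigned long faults[PSZ_COUNT];    /* page faults seen per page size */
    unsigned long repages[PSZ_COUNT];   /* faults that required paging a page back in */
};

/* A page frame is assumed to have already been selected for the faulting
 * virtual page; if the page is not a new page it is paged in, and the repaging
 * rate for its page size is updated. */
static void account_page_fault(struct repaging_rates *rates,
                               enum page_size psz, bool is_new_page)
{
    rates->faults[psz]++;
    if (!is_new_page) {
        /* Existing page: it must be paged into the selected frame, so count a
         * repage; the ratio repages/faults can later bias frame selection. */
        rates->repages[psz]++;
    }
}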