Abstract:
A server includes an SVP 206, a control device capable of executing processing independently of the CPU, and an interrupt 314 is periodically raised to the CPU. Triggered by the interrupt 314, an SVP device driver 303 acquires the amount of server resources used, such as CPU operating time and memory capacity, together with information on the users using them, through an API provided by the OS, and delivers this information to the SVP 206. The SVP 206 transmits the delivered resource information to an accounting server via an NIC 211. The accounting server tallies the amount of server resources used on a per-user basis and bills the users according to that usage. With this arrangement, the usage status of server resources can be grasped accurately regardless of server load, and accounting of server resources and billing based on that accounting can be performed regardless of the type of OS or CPU.
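As a rough, hedged illustration of the flow just described (not the patented implementation), the C sketch below models the periodic interrupt 314 as a loop, stubs the OS-API query for per-user CPU time and memory with fixed sample data, and stands in for the SVP-to-accounting-server transfer with a print; the names usage_record, collect_usage, and svp_forward are hypothetical.

#include <stdio.h>

/* Hypothetical per-user usage sample the SVP driver would collect
 * via OS APIs on each periodic interrupt. */
struct usage_record {
    int  user_id;
    long cpu_ms;   /* CPU operating time consumed, milliseconds */
    long mem_kb;   /* memory capacity in use, kilobytes */
};

/* Stand-in for the OS-API query; a real driver would read scheduler
 * and memory-manager statistics here. */
static int collect_usage(struct usage_record *out, int max)
{
    static const struct usage_record sample[] = {
        { 100, 1200, 65536 },
        { 101,  300,  8192 },
    };
    int n = (int)(sizeof(sample) / sizeof(sample[0]));
    if (n > max) n = max;
    for (int i = 0; i < n; i++) out[i] = sample[i];
    return n;
}

/* Stand-in for handing the records to the SVP, which would forward
 * them to the accounting server through its own NIC. */
static void svp_forward(const struct usage_record *r, int n)
{
    for (int i = 0; i < n; i++)
        printf("user %d: cpu=%ld ms, mem=%ld KB\n",
               r[i].user_id, r[i].cpu_ms, r[i].mem_kb);
}

int main(void)
{
    struct usage_record buf[16];

    /* Each loop iteration models one periodic interrupt (314). */
    for (int tick = 0; tick < 3; tick++) {
        int n = collect_usage(buf, 16);
        svp_forward(buf, n);
    }
    return 0;
}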
Abstract:
Arbitration of IO accesses and bandwidth control based on the priority of virtual servers are enabled while curbing the performance overhead of IO sharing among the virtual servers. A virtual machine system includes a CPU, a memory, a hypervisor that generates a plurality of virtual servers, and an IO controller that controls an IO interface. The IO controller includes: a DMA receiving unit that receives DMA requests from the IO interface; a decoder that decodes received DMA requests and identifies the corresponding virtual servers; a DMA monitoring counter that monitors the DMA processing status of each virtual server; a threshold register set in advance for each virtual server; and a priority deciding unit that compares the DMA monitoring counter with the value of the threshold register and, based on the processing priority obtained from the comparison, decides the priority with which the received DMA requests are processed.
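A minimal sketch of the comparison step, assuming the DMA monitoring counter tracks how many DMA requests a virtual server has recently issued and that exceeding the preset threshold demotes further requests to low priority; the type and field names are illustrative, not taken from the patent.

#include <stdio.h>

enum dma_priority { DMA_PRIO_HIGH, DMA_PRIO_LOW };

/* Illustrative per-virtual-server state kept by the IO controller. */
struct vserver_dma_state {
    unsigned monitor_count;  /* DMA monitoring counter */
    unsigned threshold;      /* threshold register, set in advance */
};

/* Priority deciding unit: compare the monitoring counter with the
 * threshold register and pick the processing priority. */
static enum dma_priority decide_priority(const struct vserver_dma_state *vs)
{
    return (vs->monitor_count < vs->threshold) ? DMA_PRIO_HIGH : DMA_PRIO_LOW;
}

int main(void)
{
    struct vserver_dma_state vs[2] = {
        { .monitor_count = 3,  .threshold = 8 },   /* under budget */
        { .monitor_count = 12, .threshold = 8 },   /* over budget  */
    };

    for (int i = 0; i < 2; i++) {
        vs[i].monitor_count++;  /* a DMA request arrives for server i */
        printf("virtual server %d: priority %s\n", i,
               decide_priority(&vs[i]) == DMA_PRIO_HIGH ? "HIGH" : "LOW");
    }
    return 0;
}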
Abstract:
When the target of access of a transaction from an IO device is not a resource allocated to the logical partition to which the device that issued the transaction belongs, an error is reported to the CPU while the transaction is completed on the IO bus. To prevent a transaction between IO devices from gaining access to a resource in another logical partition, one access permission bit is provided for each combination of IO devices, and the access is permitted only when the bit has a predetermined value. A reset signal is provided per IO slot so that only an IO slot allocated to a specific logical partition can be reset without affecting any other logical partition. A transaction issued from an IO device in one logical partition is thus prevented from gaining access to a resource in another logical partition, while proper error handling can be performed.
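The device-to-device protection can be pictured as a permission-bit matrix consulted before a peer-to-peer transaction is forwarded; the sketch below illustrates that reading with hypothetical names and a small fixed device count.

#include <stdbool.h>
#include <stdio.h>

#define NUM_IO_DEVICES 4

/* One access permission bit per (initiator, target) combination of
 * IO devices; peer-to-peer access is allowed only when the bit is 1. */
static unsigned char perm[NUM_IO_DEVICES][NUM_IO_DEVICES];

static bool peer_access_allowed(int initiator, int target)
{
    return perm[initiator][target] == 1;
}

int main(void)
{
    /* Devices 0 and 1 share a logical partition; device 2 belongs to
     * another partition, so no bits are set toward or from it. */
    perm[0][1] = 1;
    perm[1][0] = 1;

    printf("0 -> 1: %s\n", peer_access_allowed(0, 1) ? "permitted" : "blocked");
    printf("0 -> 2: %s\n", peer_access_allowed(0, 2) ? "permitted" : "blocked");
    return 0;
}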
Abstract:
The present invention provides a virtual machine system that enables arbitration of IO accesses and bandwidth control based on the priority of virtual servers while curbing the performance overhead of IO sharing among the virtual servers. A virtual machine system including a CPU, a memory, and an IO interface comprises a hypervisor that generates a plurality of virtual servers and an IO controller that controls the IO interface. The IO controller includes: a DMA receiving unit that receives DMA requests from the IO interface; a decoder that decodes received DMA requests and identifies the corresponding virtual servers; a DMA monitoring counter that monitors the DMA processing status of each virtual server; a threshold register set in advance for each virtual server; and a priority deciding unit that compares the DMA monitoring counter with the value of the threshold register and, based on the processing priority obtained from the comparison, decides the priority with which the received DMA requests are processed.
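Complementing the priority sketch above, the following hedged C fragment illustrates the decoder step only: mapping the requester of a received DMA request to its owning virtual server through a small ownership table; the table layout and names are assumptions for illustration.

#include <stdio.h>

/* Illustrative DMA request as seen by the DMA receiving unit. */
struct dma_request {
    unsigned requester_id;   /* identifies the issuing IO function */
    unsigned long long addr; /* target guest-physical address */
};

/* Illustrative decoder table: which virtual server owns which
 * requester ID.  A real IO controller would hold this in hardware. */
static const int owner_of_requester[8] = { 0, 0, 1, 1, 2, 2, -1, -1 };

/* Decoder: locate the virtual server a received DMA request belongs to,
 * or -1 if the requester is not assigned to any virtual server. */
static int decode_vserver(const struct dma_request *req)
{
    if (req->requester_id >= 8) return -1;
    return owner_of_requester[req->requester_id];
}

int main(void)
{
    struct dma_request reqs[] = {
        { .requester_id = 2, .addr = 0x10000 },
        { .requester_id = 7, .addr = 0x20000 },
    };

    for (int i = 0; i < 2; i++) {
        int vs = decode_vserver(&reqs[i]);
        if (vs < 0)
            printf("request %d: no owning virtual server, rejected\n", i);
        else
            printf("request %d: routed to virtual server %d\n", i, vs);
    }
    return 0;
}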
Abstract:
Provided is a computer system in which an I/O card is shared among physical servers and logical servers. The servers are set in advance such that one I/O card is either used exclusively by one physical or logical server or shared among a plurality of servers. An I/O hub allocates a virtual MM I/O address unique to each physical or logical server to the physical MM I/O address associated with each I/O card. The I/O hub keeps allocation information indicating the relation between the allocated virtual MM I/O address, the physical MM I/O address, and a server identifier unique to each physical or logical server. When a request to access an I/O card is sent from a physical or logical server, the I/O hub refers to the allocation information and extracts the server identifier from the access request. The extracted server identifier is used to identify the physical or logical server that made the access request.
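A hedged sketch of the allocation-information lookup: the server identifier extracted from the access request plus the virtual MM I/O address select the physical MM I/O address of the I/O card; the table contents and function names are illustrative.

#include <stdio.h>

/* Illustrative allocation-information entry kept by the I/O hub. */
struct mmio_alloc {
    int server_id;                /* unique physical/logical server id */
    unsigned long long virt_mmio; /* virtual MM I/O base given to the server */
    unsigned long long phys_mmio; /* physical MM I/O base of the I/O card */
};

static const struct mmio_alloc table[] = {
    { 1, 0xA0000000ULL, 0xF0000000ULL },  /* server 1 -> card A */
    { 2, 0xA0000000ULL, 0xF0010000ULL },  /* server 2 -> card B via the shared hub */
};

/* Translate an access request: the server identifier extracted from the
 * request plus the virtual MM I/O address select the physical address. */
static int translate(int server_id, unsigned long long virt,
                     unsigned long long *phys)
{
    for (unsigned i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
        if (table[i].server_id == server_id && table[i].virt_mmio == virt) {
            *phys = table[i].phys_mmio;
            return 0;
        }
    }
    return -1;  /* no allocation: the access is not permitted */
}

int main(void)
{
    unsigned long long phys;
    if (translate(2, 0xA0000000ULL, &phys) == 0)
        printf("server 2 virtual 0xA0000000 -> physical 0x%llX\n", phys);
    return 0;
}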
Abstract:
The present invention coordinates the I/O access operations of operating systems running independently in logical partitions. In a data processing system comprising processors, a main memory, I/O slots, and a node controller, in which the processors, the main memory, and the I/O slots are interconnected via the node controller and divided into a plurality of partitions where individual operating systems run simultaneously, the node controller includes a logical partition arbitration unit that stores information on whether each logical partition is using an I/O slot and controls access from each logical partition to an I/O slot by referring to the stored information.
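A minimal sketch of the arbitration unit's bookkeeping, assuming it records which logical partition, if any, currently holds each I/O slot and grants access only to the holder or for a free slot; names and the grant policy details are illustrative.

#include <stdio.h>

#define NUM_IO_SLOTS 4
#define SLOT_FREE    (-1)

/* Illustrative state of the logical partition arbitration unit:
 * which logical partition, if any, is currently using each I/O slot. */
static int slot_user[NUM_IO_SLOTS] = { SLOT_FREE, SLOT_FREE, SLOT_FREE, SLOT_FREE };

/* Grant access to a slot only if it is free or already held by the
 * requesting partition; otherwise the request must wait. */
static int request_slot(int partition, int slot)
{
    if (slot_user[slot] == SLOT_FREE || slot_user[slot] == partition) {
        slot_user[slot] = partition;
        return 1;  /* access granted */
    }
    return 0;      /* another partition is using the slot */
}

static void release_slot(int partition, int slot)
{
    if (slot_user[slot] == partition)
        slot_user[slot] = SLOT_FREE;
}

int main(void)
{
    printf("partition 0 -> slot 1: %s\n", request_slot(0, 1) ? "granted" : "deferred");
    printf("partition 1 -> slot 1: %s\n", request_slot(1, 1) ? "granted" : "deferred");
    release_slot(0, 1);
    printf("partition 1 -> slot 1: %s\n", request_slot(1, 1) ? "granted" : "deferred");
    return 0;
}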
Abstract:
The invention suppresses the overhead of page exception handling both when a program that uses a large amount of memory runs on a virtual machine and when a first OS that can itself run another OS runs on a virtual machine. A VMM creates a shadow PT (Page Table) that uses the RSV bit to prohibit reading and writing of privileged memory whose reads and writes require emulation, and registers both the shadow PT and the second PT held by a second OS running on the first OS in an x86-compatible CPU equipped with a page exception detecting function that uses two PTs. When a page exception occurs, the VMM refers to the cause code of the page exception and, when the P field of the cause code is 0, immediately determines that emulation is unnecessary.
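The fast path can be sketched as below, using the standard x86 page-fault error-code bit positions (bit 0 = P, bit 3 = RSVD); how the VMM then services each case is simplified and the helper name is hypothetical.

#include <stdbool.h>
#include <stdio.h>

/* x86 page-fault error-code bits (cause code of the page exception). */
#define PF_P    (1u << 0)  /* 0: page not present, 1: protection/reserved */
#define PF_RSVD (1u << 3)  /* set when a reserved (RSV) bit was found set */

/* Fast-path check the VMM applies first: if the P field is 0 the fault
 * is an ordinary not-present fault in the shadow PT, so no read/write
 * emulation of privileged memory is needed. */
static bool needs_emulation(unsigned cause_code)
{
    if ((cause_code & PF_P) == 0)
        return false;                    /* decide immediately: no emulation */
    return (cause_code & PF_RSVD) != 0;  /* RSV-bit hit: emulate the access */
}

int main(void)
{
    printf("P=0         -> emulate? %d\n", needs_emulation(0x0));
    printf("P=1, RSVD=1 -> emulate? %d\n", needs_emulation(PF_P | PF_RSVD));
    printf("P=1, RSVD=0 -> emulate? %d\n", needs_emulation(PF_P));
    return 0;
}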
Abstract:
A method is provided that eliminates redundancy from the shadow PT operations performed by the virtual machine monitor (VMM) when the guest operating system running on a virtual machine updates a guest page table (PT) address. The VMM associates a plurality of shadow PTs with guest PTs and keeps their correspondence in memory. When it detects an update of a guest PT address, the VMM searches for the shadow PT corresponding to the updated guest PT. If the associated shadow PT exists, the VMM omits rewriting the shadow PT and registers the address of that shadow PT with the central processing unit (CPU). If the associated shadow PT does not exist, the VMM allocates memory, creates a shadow PT, registers the address of the created shadow PT with the CPU, and records the relationship between the updated guest PT and the newly created shadow PT.
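A hedged sketch of the caching idea: the VMM keeps a small table from guest PT address to shadow PT, reuses the shadow PT on a hit, and creates and records one on a miss; the cache structure and stub functions are assumptions for illustration.

#include <stdio.h>
#include <stdlib.h>

#define CACHE_SLOTS 8

/* Illustrative cache entry: which shadow PT corresponds to which guest PT. */
struct shadow_map {
    unsigned long long guest_pt;  /* guest page-table base address */
    void *shadow_pt;              /* corresponding shadow PT */
};

static struct shadow_map cache[CACHE_SLOTS];
static int cache_used;

/* Stand-ins for building a shadow PT and pointing the CPU at it. */
static void *build_shadow_pt(unsigned long long guest_pt)
{
    printf("building shadow PT for guest PT 0x%llX\n", guest_pt);
    return malloc(64);  /* placeholder allocation */
}

static void register_with_cpu(void *shadow_pt)
{
    printf("registering shadow PT %p with the CPU\n", shadow_pt);
}

/* Called when the guest updates its PT address: reuse an existing shadow PT
 * when one is already associated, otherwise create and record one. */
static void on_guest_pt_switch(unsigned long long guest_pt)
{
    for (int i = 0; i < cache_used; i++) {
        if (cache[i].guest_pt == guest_pt) {
            register_with_cpu(cache[i].shadow_pt);  /* hit: no rewrite needed */
            return;
        }
    }
    void *spt = build_shadow_pt(guest_pt);          /* miss: create */
    if (cache_used < CACHE_SLOTS)
        cache[cache_used++] = (struct shadow_map){ guest_pt, spt };
    register_with_cpu(spt);
}

int main(void)
{
    on_guest_pt_switch(0x1000);  /* first use: shadow PT is created   */
    on_guest_pt_switch(0x2000);  /* second guest PT: created as well  */
    on_guest_pt_switch(0x1000);  /* switch back: cached shadow reused */
    return 0;
}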
Abstract:
The invention suppresses the overhead of page exception handling both when a program that uses a large amount of memory runs on a virtual machine and when a first OS that can itself run another OS runs on a virtual machine. A VMM creates a shadow PT that uses the RSV bit to prohibit reading and writing of privileged memory whose reads and writes require emulation, and registers both the shadow PT and the second PT held by a second OS running on the first OS in an x86-compatible CPU equipped with a page exception detecting function that uses two PTs. When a page exception occurs, the VMM refers to the cause code of the page exception and, when the P field of the cause code is 0, immediately determines that emulation is unnecessary.
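Complementing the cause-code sketch above, the following hedged fragment illustrates the other half of the scheme: when building shadow PT entries, pages of privileged memory are marked with a reserved bit so that any access raises a page exception the VMM can recognize and emulate; which bits are reserved depends on the CPU, and the predicate and constants here are illustrative.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_PRESENT   (1ULL << 0)
#define PTE_WRITABLE  (1ULL << 1)
/* Stand-in for a reserved physical-address bit; which bits are reserved
 * depends on the CPU's supported physical-address width. */
#define PTE_RSV_MARK  (1ULL << 51)

/* Hypothetical predicate: does this guest-physical page hold privileged
 * memory whose reads/writes the VMM must emulate? */
static bool is_privileged_page(uint64_t gpa)
{
    return gpa >= 0xFEE00000ULL && gpa < 0xFEE01000ULL;  /* e.g. local APIC page */
}

/* Build one shadow PT entry.  Privileged pages get a reserved bit set, so
 * any access raises a page exception the VMM can identify and emulate. */
static uint64_t make_shadow_pte(uint64_t host_pa, uint64_t gpa)
{
    uint64_t pte = host_pa | PTE_PRESENT | PTE_WRITABLE;
    if (is_privileged_page(gpa))
        pte |= PTE_RSV_MARK;
    return pte;
}

int main(void)
{
    printf("normal page     -> PTE 0x%016llx\n",
           (unsigned long long)make_shadow_pte(0x00200000ULL, 0x00200000ULL));
    printf("privileged page -> PTE 0x%016llx\n",
           (unsigned long long)make_shadow_pte(0x00300000ULL, 0xFEE00000ULL));
    return 0;
}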