Abstract:
Systems, methods, and computer programs are disclosed for reducing memory power consumption. An exemplary method comprises configuring a power saving memory balloon associated with a volatile memory. Memory allocations are steered to the power saving memory balloon. In response to initiating a memory power saving mode, data is migrated from the power saving memory balloon. A power saving feature is executed on the power saving memory balloon while in the memory power saving mode.
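A minimal sketch of the flow this abstract describes, assuming a contiguous region of volatile memory is reserved as the balloon; all names (ps_balloon, balloon_alloc, and so on) are illustrative, not the patent's API, and the "power saving feature" is represented by a flag standing in for something like self-refresh or rank power-down.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>

#define BALLOON_BYTES (1 << 20)   /* 1 MiB balloon region for the example */

typedef struct {
    unsigned char *base;          /* balloon backing store                */
    size_t         used;          /* bytes currently allocated            */
    bool           power_saving;  /* true while the balloon is parked     */
} ps_balloon;

/* Configure the balloon over a region of volatile memory. */
static void balloon_init(ps_balloon *b)
{
    b->base = malloc(BALLOON_BYTES);
    if (!b->base)
        exit(EXIT_FAILURE);
    b->used = 0;
    b->power_saving = false;
}

/* Steer an allocation into the balloon while it is active. */
static void *balloon_alloc(ps_balloon *b, size_t n)
{
    if (b->power_saving || b->used + n > BALLOON_BYTES)
        return malloc(n);         /* fall back to normal memory           */
    void *p = b->base + b->used;
    b->used += n;
    return p;
}

/* Enter the power saving mode: migrate resident data out, then "power down". */
static void balloon_enter_power_save(ps_balloon *b,
                                     void (*migrate)(void *, size_t))
{
    migrate(b->base, b->used);    /* move live data off the balloon        */
    b->used = 0;
    b->power_saving = true;       /* stand-in for the real power feature   */
}

static void demo_migrate(void *src, size_t n)
{
    (void)src;
    printf("migrating %zu bytes out of the balloon\n", n);
}

int main(void)
{
    ps_balloon b;
    balloon_init(&b);

    char *msg = balloon_alloc(&b, 64);
    strcpy(msg, "steered into the balloon");
    printf("%s (used=%zu)\n", msg, b.used);

    balloon_enter_power_save(&b, demo_migrate);
    printf("power_saving=%d\n", b.power_saving);

    free(b.base);
    return 0;
}
```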
Abstract:
A first processor and a second processor are configured to communicate secure inter-processor communications (IPCs) with each other. The first processor effects secure IPCs and non-secure IPCs using a first memory management unit (MMU) to route the secure and non-secure IPCs via a memory system. The first MMU accesses a first page table stored in the memory system to route the secure IPCs and accesses a second page table stored in the memory system to route the non-secure IPCs. The second processor effects at least secure IPCs using a second MMU to route the secure IPCs via the memory system. The second MMU accesses the second page table to route the secure IPCs.
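The core of the routing decision is that an MMU selects a different page table depending on whether an IPC is secure. The toy single-level tables below are an assumption made only to show that selection; real translations are multi-level and live in the shared memory system.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define PAGES 4
#define PAGE_SHIFT 12

/* Toy single-level page tables: virtual page number -> physical page number. */
static const uint32_t secure_table[PAGES]     = { 0x100, 0x101, 0x102, 0x103 };
static const uint32_t non_secure_table[PAGES] = { 0x200, 0x201, 0x202, 0x203 };

/* The MMU picks the page table according to the IPC's security attribute. */
static uint64_t mmu_translate(uint64_t va, bool secure)
{
    const uint32_t *table = secure ? secure_table : non_secure_table;
    uint64_t vpn    = (va >> PAGE_SHIFT) % PAGES;
    uint64_t offset = va & ((1u << PAGE_SHIFT) - 1);
    return ((uint64_t)table[vpn] << PAGE_SHIFT) | offset;
}

int main(void)
{
    uint64_t va = 0x2040;
    printf("secure IPC:     VA 0x%llx -> PA 0x%llx\n",
           (unsigned long long)va, (unsigned long long)mmu_translate(va, true));
    printf("non-secure IPC: VA 0x%llx -> PA 0x%llx\n",
           (unsigned long long)va, (unsigned long long)mmu_translate(va, false));
    return 0;
}
```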
Abstract:
Systems and methods are provided for managing performance of a computing device having dissimilar memory types. An exemplary embodiment comprises a method for interleaving dissimilar memory devices. The method involves determining an interleave bandwidth ratio comprising a ratio of bandwidths for two or more dissimilar memory devices. The dissimilar memory devices are interleaved according to the interleave bandwidth ratio. Memory address requests are distributed from one or more processing units to the dissimilar memory devices according to the interleave bandwidth ratio.
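The interleave bandwidth ratio is a simple proportion: a device with twice the bandwidth receives twice as many address blocks. The sketch below assumes example bandwidths of 12.8 GB/s and 6.4 GB/s (a 2:1 ratio); these figures are illustrative, not taken from the patent.

```c
#include <stdio.h>

int main(void)
{
    const double bw_fast = 12.8;                 /* GB/s, assumed example */
    const double bw_slow = 6.4;                  /* GB/s, assumed example */
    const int ratio = (int)(bw_fast / bw_slow);  /* interleave ratio 2:1  */

    /* Distribute consecutive block addresses in proportion to bandwidth:
     * out of every (ratio + 1) blocks, `ratio` go to the faster device. */
    for (int block = 0; block < 9; block++) {
        int to_fast = (block % (ratio + 1)) < ratio;
        printf("block %d -> %s memory\n", block, to_fast ? "fast" : "slow");
    }
    return 0;
}
```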
Abstract:
Hardware-based translation lookaside buffer (TLB) invalidation techniques are disclosed. A host system is configured to exchange data with a peripheral component interconnect express (PCIE) endpoint (EP). A memory management unit (MMU), which is a hardware element, is included in the host system to provide address translation according to at least one TLB. In one aspect, the MMU is configured to invalidate the at least one TLB in response to receiving at least one TLB invalidation command from the PCIE EP. In another aspect, the PCIE EP is configured to determine that the at least one TLB needs to be invalidated and provide the TLB invalidation command to invalidate the at least one TLB. By implementing hardware-based TLB invalidation in the host system, it is possible to reduce TLB invalidation delay, thus leading to increased data throughput, reduced power consumption, and improved user experience.
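A minimal sketch of the hardware-side handling: the MMU receives an invalidation command originating from the PCIE EP and drops the matching TLB entries without software involvement. The command layout and all names here are assumptions for illustration.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 4

typedef struct { uint64_t va; uint64_t pa; bool valid; } tlb_entry;
typedef struct { tlb_entry tlb[TLB_ENTRIES]; } mmu;

/* A hypothetical invalidation command as the EP might post it. */
typedef struct { uint64_t va; bool invalidate_all; } tlb_inval_cmd;

/* Hardware-style handler: invalidate matching entries directly in the MMU. */
static void mmu_handle_inval(mmu *m, const tlb_inval_cmd *cmd)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (cmd->invalidate_all || m->tlb[i].va == cmd->va)
            m->tlb[i].valid = false;
    }
}

int main(void)
{
    mmu m = { .tlb = {
        { 0x1000, 0xA000, true },
        { 0x2000, 0xB000, true },
    } };

    tlb_inval_cmd cmd = { .va = 0x1000, .invalidate_all = false };
    mmu_handle_inval(&m, &cmd);          /* EP asked to drop VA 0x1000 */

    for (int i = 0; i < 2; i++)
        printf("entry %d: va=0x%llx valid=%d\n",
               i, (unsigned long long)m.tlb[i].va, m.tlb[i].valid);
    return 0;
}
```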
Abstract:
In the various aspects, virtualization techniques may be used to improve performance and reduce the amount of power consumed by selectively enabling a hypervisor operating on a computing device during sandbox sessions. In the various aspects, a high-level operating system may allocate memory such that its intermediate physical addresses are equal to the physical addresses. When the hypervisor is disabled, the hypervisor may suspend second stage translations from intermediate physical addresses to physical addresses. During a sandbox session, the hypervisor may be enabled and resume performing second stage translations.
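A sketch of suspending and resuming second-stage translation, assuming the high-level operating system has allocated memory so that intermediate physical addresses equal physical addresses while the hypervisor is disabled. The flat stage-2 table used during the sandbox session is purely illustrative.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define S2_PAGES 4
#define PAGE_SHIFT 12

static bool hypervisor_enabled = false;

/* Stage-2 table used only during sandbox sessions (IPA page -> PA page). */
static const uint32_t s2_table[S2_PAGES] = { 0x300, 0x301, 0x302, 0x303 };

static uint64_t stage2_translate(uint64_t ipa)
{
    if (!hypervisor_enabled)
        return ipa;                        /* stage 2 suspended: IPA == PA */
    uint64_t ipn    = (ipa >> PAGE_SHIFT) % S2_PAGES;
    uint64_t offset = ipa & ((1u << PAGE_SHIFT) - 1);
    return ((uint64_t)s2_table[ipn] << PAGE_SHIFT) | offset;
}

int main(void)
{
    uint64_t ipa = 0x1010;
    printf("hypervisor off: IPA 0x%llx -> PA 0x%llx\n",
           (unsigned long long)ipa, (unsigned long long)stage2_translate(ipa));

    hypervisor_enabled = true;             /* sandbox session begins */
    printf("hypervisor on:  IPA 0x%llx -> PA 0x%llx\n",
           (unsigned long long)ipa, (unsigned long long)stage2_translate(ipa));
    return 0;
}
```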
Abstract:
Aspects include apparatuses and methods for secure, fast, and normal virtual interrupt direct assignment, managing secure and non-secure, virtual and physical interrupts for a processor having a plurality of execution environments, including a trusted (secure) execution environment and a non-secure execution environment. An interrupt controller may identify a security group value for an interrupt and direct secure interrupts to the trusted execution environment. The interrupt controller may identify a direct assignment value for the non-secure interrupts indicating whether the non-secure interrupt is owned by a high level operating system (HLOS) Guest or a virtual machine manager (VMM), and whether it is a fast or a normal virtual interrupt. The interrupt controller may direct an HLOS Guest owned interrupt to the HLOS Guest while bypassing the VMM. When the HLOS Guest is unavailable, the interrupt may be directed to the VMM, which attempts to pass the interrupt to the HLOS Guest until successful.
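A sketch of the routing decision described above: secure interrupts go to the trusted execution environment, and non-secure interrupts marked as HLOS-Guest-owned bypass the VMM whenever the guest can take them. The field names and enum values are assumptions for illustration.

```c
#include <stdio.h>
#include <stdbool.h>

typedef enum { DEST_TRUSTED_EE, DEST_HLOS_GUEST, DEST_VMM } irq_dest;

typedef struct {
    bool secure;          /* security group value           */
    bool guest_owned;     /* direct assignment: HLOS Guest? */
    bool fast;            /* fast vs. normal virtual IRQ    */
} irq_desc;

static irq_dest route_irq(const irq_desc *irq, bool guest_available)
{
    if (irq->secure)
        return DEST_TRUSTED_EE;              /* secure -> trusted EE        */
    if (irq->guest_owned && guest_available)
        return DEST_HLOS_GUEST;              /* direct to guest, bypass VMM */
    return DEST_VMM;                         /* VMM retries delivery later  */
}

int main(void)
{
    irq_desc net_irq = { .secure = false, .guest_owned = true, .fast = true };
    const char *names[] = { "trusted EE", "HLOS Guest", "VMM" };

    printf("guest available:   -> %s\n", names[route_irq(&net_irq, true)]);
    printf("guest unavailable: -> %s\n", names[route_irq(&net_irq, false)]);
    return 0;
}
```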
Abstract:
A computer system and a method are provided that reduce the amount of time and computing resources required to perform a hardware table walk (HWTW) in the event that a translation lookaside buffer (TLB) miss occurs. If a TLB miss occurs while performing a stage 2 (S2) HWTW to find the physical address (PA) at which a stage 1 (S1) page table is stored, the memory management unit (MMU) uses the intermediate physical address (IPA) to predict the corresponding PA, thereby avoiding the need to perform any of the S2 table lookups. This greatly reduces the number of lookups needed for these types of HWTW read transactions, which in turn reduces the processing overhead and performance penalties associated with them.
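A sketch of that prediction step: when the stage-2 TLB misses while fetching a stage-1 page-table entry, the physical address is predicted directly from the intermediate physical address instead of walking the stage-2 tables. The identity prediction (PA == IPA) used below is one simple assumption; the patent may use a different mapping.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Pretend stage-2 TLB lookup: returns true and fills *pa on a hit. */
static bool s2_tlb_lookup(uint64_t ipa, uint64_t *pa)
{
    (void)ipa; (void)pa;
    return false;                          /* force a miss for the demo    */
}

/* Resolve the PA of a stage-1 page table located at `ipa`. */
static uint64_t resolve_s1_table_pa(uint64_t ipa)
{
    uint64_t pa;
    if (s2_tlb_lookup(ipa, &pa))
        return pa;                         /* normal hit path              */
    return ipa;                            /* miss: predict PA == IPA and  */
                                           /* skip the stage-2 table walk  */
}

int main(void)
{
    uint64_t s1_table_ipa = 0x40001000;
    printf("predicted S1 table PA: 0x%llx\n",
           (unsigned long long)resolve_s1_table_pa(s1_table_ipa));
    return 0;
}
```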
Abstract:
Systems and methods for monolithic workload scheduling in a portable computing device (“PCD”) having a hypervisor are disclosed. An exemplary method comprises instantiating a primary virtual machine at a first exception level, wherein the primary virtual machine comprises a monolithic scheduler configured to allocate workloads within and between one or more guest virtual machines in response to one or more interrupts. A secure virtual machine and the one or more guest virtual machines are also instantiated at the first exception level. When an interrupt is received at a hypervisor associated with a second exception level, the interrupt is forwarded to the monolithic scheduler along with hardware usage state data and guest virtual machine usage state data. The monolithic scheduler may, in turn, generate one or more context switches, which may comprise at least one intra-VM context switch and at least one inter-VM context switch.
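A sketch of that forwarding path, assuming a hypervisor that passes an interrupt plus hardware and guest usage state to a monolithic scheduler in the primary virtual machine, which then chooses between an intra-VM and an inter-VM context switch. The structures, the toy scheduling policy, and all names are illustrative.

```c
#include <stdio.h>

typedef struct { int cpu_load_pct; } hw_state;
typedef struct { int vm_id; int runnable_tasks; } vm_state;
typedef enum { SWITCH_INTRA_VM, SWITCH_INTER_VM } switch_kind;

/* Monolithic scheduler in the primary VM: decide on a context switch. */
static switch_kind monolithic_schedule(int irq, const hw_state *hw,
                                       const vm_state *vms, int n_vms)
{
    (void)irq;
    /* Toy policy: if another guest has runnable work and the CPU is busy,
     * switch between VMs; otherwise reschedule inside the current VM. */
    for (int i = 1; i < n_vms; i++)
        if (vms[i].runnable_tasks > 0 && hw->cpu_load_pct > 50)
            return SWITCH_INTER_VM;
    return SWITCH_INTRA_VM;
}

/* Hypervisor side: forward the interrupt plus usage state to the scheduler. */
static void hypervisor_on_irq(int irq)
{
    hw_state hw = { .cpu_load_pct = 80 };
    vm_state vms[] = { { .vm_id = 0, .runnable_tasks = 2 },
                       { .vm_id = 1, .runnable_tasks = 1 } };

    switch_kind k = monolithic_schedule(irq, &hw, vms, 2);
    printf("irq %d -> %s context switch\n", irq,
           k == SWITCH_INTER_VM ? "inter-VM" : "intra-VM");
}

int main(void)
{
    hypervisor_on_irq(42);
    return 0;
}
```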