Abstract:
Aspects include computing devices, systems, and methods for implementing a cache maintenance or status operation for a component cache of a system cache. A computing device may, by a centralized control entity, generate a component cache configuration table, assign at least one component cache indicator of a component cache to a master of the component cache, and map at least one control register to the component cache indicator. The computing device may store the component cache indicator such that it is accessible by the master of the component cache for discovering a virtualized view of the system cache and for issuing a cache maintenance or status command for the component cache while bypassing the centralized control entity. The computing device may receive the cache maintenance or status command at a control register associated with the command and the component cache, bypassing the centralized control entity.
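As an illustration only, the following minimal C sketch models the flow described above; the names (cc_config_entry, cc_map_control_register, cc_issue_command) and the register encoding are assumptions, not taken from the abstract. The centralized control entity records the component cache indicator, the assigned master, and the mapped control register, after which the master can write that register directly to issue a maintenance or status command.

    #include <stdint.h>

    /* Hypothetical entry in the component cache configuration table. */
    struct cc_config_entry {
        uint32_t component_cache_id;   /* component cache indicator              */
        uint32_t master_id;            /* master assigned to this cache          */
        volatile uint32_t *ctrl_reg;   /* control register mapped for the master */
    };

    /* Hypothetical maintenance/status command encodings. */
    enum cc_command {
        CC_CMD_CLEAN      = 0x1,
        CC_CMD_INVALIDATE = 0x2,
        CC_CMD_STATUS     = 0x4
    };

    /* Centralized control entity: record the indicator-to-master assignment and
     * map a control register to the component cache indicator. */
    void cc_map_control_register(struct cc_config_entry *entry, uint32_t cache_id,
                                 uint32_t master_id, volatile uint32_t *ctrl_reg)
    {
        entry->component_cache_id = cache_id;
        entry->master_id          = master_id;
        entry->ctrl_reg           = ctrl_reg;
    }

    /* Master: issue a maintenance or status command for its component cache by
     * writing the mapped control register, bypassing the centralized entity. */
    void cc_issue_command(const struct cc_config_entry *entry, enum cc_command cmd)
    {
        *entry->ctrl_reg = (entry->component_cache_id << 8) | (uint32_t)cmd;
    }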
Abstract:
In the various aspects, virtualization techniques may be used to improve performance and reduce the amount of power consumed by selectively enabling a hypervisor operating on a computing device during sandbox sessions. In the various aspects, a high-level operating system may allocate memory such that its intermediate physical addresses are equal to the physical addresses. When the hypervisor is disabled, the hypervisor may suspend second stage translations from intermediate physical addresses to physical addresses. During a sandbox session, the hypervisor may be enabled and resume performing second stage translations.
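A minimal C sketch of the control flow, assuming hypothetical names (sandbox_session_begin, stage2_translation_enable) that do not come from the abstract: because the HLOS allocates memory so that intermediate physical addresses equal physical addresses, second stage translation can be suspended outside sandbox sessions and resumed when one begins.

    #include <stdbool.h>

    /* Stand-ins for the hypervisor's stage 2 MMU control; a real implementation
     * would program the stage 2 translation control registers instead. */
    static bool stage2_enabled = false;

    static void stage2_translation_enable(void)  { stage2_enabled = true; }
    static void stage2_translation_disable(void) { stage2_enabled = false; }

    bool stage2_translation_active(void) { return stage2_enabled; }

    /* Begin a sandbox session: re-enable the hypervisor so second stage
     * (IPA -> PA) translations are performed again. */
    void sandbox_session_begin(void)
    {
        stage2_translation_enable();
    }

    /* End the sandbox session: because the HLOS allocated memory with IPA equal
     * to PA, second stage translation adds nothing outside the sandbox and can
     * be suspended to save power until the next session. */
    void sandbox_session_end(void)
    {
        stage2_translation_disable();
    }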
Abstract:
Aspects include apparatuses and methods for direct assignment of secure, fast, and normal virtual interrupts, managing secure and non-secure, virtual and physical interrupts for a processor having a plurality of execution environments, including a trusted (secure) execution environment and a non-secure execution environment. An interrupt controller may identify a security group value for an interrupt and direct secure interrupts to the trusted execution environment. The interrupt controller may identify a direct assignment value for non-secure interrupts indicating whether the non-secure interrupt is owned by a high-level operating system (HLOS) Guest or a virtual machine manager (VMM), and whether it is a fast or a normal virtual interrupt. The interrupt controller may direct an HLOS Guest-owned interrupt to the HLOS Guest while bypassing the VMM. When the HLOS Guest is unavailable, the interrupt may be directed to the VMM, which attempts to pass the interrupt to the HLOS Guest until successful.
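A minimal C sketch of the routing decision, with hypothetical field and function names (irq_desc, route_interrupt, the deliver_to_* stubs) used only for illustration: secure interrupts go to the trusted execution environment, HLOS Guest-owned non-secure interrupts are delivered directly to the guest, and delivery falls back to the VMM when the guest is unavailable.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical per-interrupt attributes kept by the interrupt controller. */
    struct irq_desc {
        uint32_t irq;
        bool secure;         /* security group value: secure vs. non-secure */
        bool owned_by_hlos;  /* direct assignment: HLOS Guest vs. VMM owned */
        bool fast;           /* fast vs. normal virtual interrupt           */
    };

    /* Stub delivery targets; a real controller would signal the CPU interface. */
    static bool deliver_to_trusted_ee(const struct irq_desc *d)
    {
        printf("irq %u -> trusted EE\n", (unsigned)d->irq);
        return true;
    }

    static bool deliver_to_hlos_guest(const struct irq_desc *d)
    {
        printf("irq %u -> HLOS Guest (%s)\n", (unsigned)d->irq,
               d->fast ? "fast" : "normal");
        return true;  /* false would mean the guest is unavailable */
    }

    static bool deliver_to_vmm(const struct irq_desc *d)
    {
        printf("irq %u -> VMM\n", (unsigned)d->irq);
        return true;
    }

    /* Route one interrupt by its security group and direct assignment values. */
    void route_interrupt(const struct irq_desc *d)
    {
        if (d->secure) {                       /* secure -> trusted EE */
            deliver_to_trusted_ee(d);
        } else if (d->owned_by_hlos) {
            /* HLOS Guest owned: bypass the VMM when the guest can take it;
             * otherwise the VMM keeps retrying delivery to the guest. */
            if (!deliver_to_hlos_guest(d))
                deliver_to_vmm(d);
        } else {
            deliver_to_vmm(d);                 /* VMM-owned interrupt */
        }
    }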
Abstract:
Efficient techniques using a multi-port shared non-volatile memory are described that reduce latency in memory accesses from dedicated, function-specific processors, such as a modem control processor. The modem processor preempts a host processor that is accessing data from a multi-port shared non-volatile memory flash device, allowing the modem processor to quickly access data in the flash device. The preemption process uses a doorbell interrupt initiated by the processor that seeks access, which interrupts the processor being preempted. After preemption, the host processor may resume or restart the data access. Access control by the processors utilizes a hardware semaphore atomic control mechanism. Power control of the shared non-volatile memory modules includes at least one inactivity timer to indicate when a supply voltage to the shared non-volatile memory modules can be safely reduced or turned off. Power may be restored by any of the processors sharing the memory, allowing fast access to the data.
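A minimal C sketch of the preemption flow, assuming hypothetical register names (doorbell_reg, hw_semaphore) and owner encodings not given in the abstract: the modem processor rings the doorbell to interrupt the host, acquires the hardware semaphore once the host releases it, accesses the flash device, and then releases the semaphore so the host can resume or restart its access.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical MMIO registers; in a real system these would be mapped to the
     * shared flash controller's doorbell and hardware semaphore blocks. */
    volatile uint32_t *doorbell_reg;
    volatile uint32_t *hw_semaphore;   /* atomic owner field, 0 == free */

    #define OWNER_HOST  1u
    #define OWNER_MODEM 2u

    /* Try to take the hardware semaphore (sketch only; a real part provides
     * atomic test-and-set semantics in the semaphore block itself). */
    static bool hw_sem_try_acquire(uint32_t owner)
    {
        if (*hw_semaphore == 0u) {
            *hw_semaphore = owner;
            return *hw_semaphore == owner;
        }
        return false;
    }

    /* Modem processor: preempt the host's in-flight access so latency-critical
     * modem data can be read quickly from the shared flash device. */
    void modem_preempt_and_access(void)
    {
        *doorbell_reg = OWNER_MODEM;            /* doorbell interrupt to host  */
        while (!hw_sem_try_acquire(OWNER_MODEM))
            ;                                   /* host releases on preemption */

        /* ... access flash data here ... */

        *hw_semaphore = 0u;   /* release; host may resume or restart its access */
    }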
Abstract:
A first processor and a second processor are configured to communicate secure inter-processor communications (IPCs) with each other. The first processor effects secure IPCs and non-secure IPCs using a first memory management unit (MMU) to route the secure and non-secure IPCs via a memory system. The first MMU accesses a first page table stored in the memory system to route the secure IPCs and accesses a second page table stored in the memory system to route the non-secure IPCs. The second processor effects at least secure IPCs using a second MMU to route the secure IPCs via the memory system. The second MMU accesses the second page table to route the secure IPCs.
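A minimal C sketch of the page-table selection, using a simplified and hypothetical model (one page table base per security state per MMU, with names like ipc_mmu and ipc_table_for) rather than the exact table assignment described above: each MMU routes an IPC through the page table that matches the IPC's security state.

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified, hypothetical model: each MMU holds a page table base per
     * security state, configured according to the system's IPC topology. */
    struct ipc_mmu {
        uint64_t secure_table_base;     /* page table used to route secure IPCs     */
        uint64_t nonsecure_table_base;  /* page table used to route non-secure IPCs */
    };

    /* Pick the page table an MMU walks when routing an IPC buffer through the
     * memory system, based on whether the IPC is secure. */
    uint64_t ipc_table_for(const struct ipc_mmu *m, bool secure_ipc)
    {
        return secure_ipc ? m->secure_table_base : m->nonsecure_table_base;
    }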
Abstract:
Systems and methods are disclosed for managing memory access for low-power use cases of a system on chip. One such method comprises booting a system on chip (SoC) comprising a plurality of SoC processing devices. A trusted channel is created to a secure non-volatile random access memory (NVRAM). The method determines a power-saving software program to be executed on the SoC by one of the plurality of SoC processing devices. A software image associated with the power-saving software program is loaded to the secure NVRAM. In response to loading the software image to the secure NVRAM, each of the plurality of SoC processing devices except the one executing the software image from the secure NVRAM is powered down.
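A minimal C sketch of the power-down step, with hypothetical names (soc_device, enter_low_power_use_case) that are not from the abstract: once the power-saving software image is in the secure NVRAM, every SoC processing device other than the one executing that image is powered down.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical handle for an SoC processing device and its power hook. */
    struct soc_device {
        int  id;
        bool powered;
    };

    static void power_down(struct soc_device *d)
    {
        d->powered = false;   /* a real implementation would gate clocks/power */
    }

    /* After the power-saving software image has been loaded to the secure NVRAM,
     * power down every SoC processing device except the one that executes the
     * image from the secure NVRAM. */
    void enter_low_power_use_case(struct soc_device *devs, size_t n, int executor_id)
    {
        for (size_t i = 0; i < n; i++) {
            if (devs[i].id != executor_id)
                power_down(&devs[i]);
        }
    }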
Abstract:
A computer system and a method are provided that reduce the amount of time and computing resources required to perform a hardware table walk (HWTW) in the event that a translation lookaside buffer (TLB) miss occurs. If a TLB miss occurs when performing a stage 2 (S2) HWTW to find the physical address (PA) at which a stage 1 (S1) page table is stored, the memory management unit (MMU) uses the intermediate physical address (IPA) to predict the corresponding PA, thereby avoiding the need to perform any of the S2 table lookups. This greatly reduces the number of lookups that need to be performed for these types of HWTW read transactions, which greatly reduces the processing overhead and performance penalties associated with performing them.
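A minimal C sketch of the prediction path, with hypothetical names (tlb_lookup_s2, s2_walk_or_predict); the identity (PA = IPA) prediction shown is only one illustrative policy and is not asserted to be the exact method of the abstract: on an S2 TLB miss during the walk for a stage 1 page table's PA, the MMU predicts the PA from the IPA instead of performing the S2 table lookups.

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t ipa_t;  /* intermediate physical address */
    typedef uint64_t pa_t;   /* physical address              */

    /* Stub S2 TLB lookup; always misses here so the prediction path is shown. */
    static bool tlb_lookup_s2(ipa_t ipa, pa_t *pa)
    {
        (void)ipa;
        (void)pa;
        return false;
    }

    /* When walking for the PA of a stage 1 page table, use the TLB if it hits;
     * on a miss, predict the PA from the IPA (identity mapping shown here as
     * one illustrative policy) instead of performing the S2 table lookups. */
    pa_t s2_walk_or_predict(ipa_t s1_table_ipa)
    {
        pa_t pa;
        if (tlb_lookup_s2(s1_table_ipa, &pa))
            return pa;                  /* TLB hit: use the translated PA */
        return (pa_t)s1_table_ipa;      /* TLB miss: predicted PA = IPA   */
    }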