Abstract:
Embodiments include methods, systems, and computer storage devices directed to identifying that a trusted boot mode (TBM) control bit is set in an input/output memory management unit (IOMMU) and, in response to the identifying, configuring the IOMMU to block a direct memory access (DMA) request received by the IOMMU from a peripheral.
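The control flow implied by this abstract can be sketched in C as follows. The register offsets, bit positions, and helper names below are hypothetical assumptions used only to illustrate gating peripheral DMA on a trusted-boot-mode bit; they are not taken from the source.

```c
#include <stdint.h>
#include <stdbool.h>

#define IOMMU_CTRL_REG        0x0018u    /* hypothetical control register offset */
#define IOMMU_CTRL_TBM_BIT    (1u << 5)  /* hypothetical trusted boot mode bit */
#define IOMMU_CTRL_BLOCK_DMA  (1u << 6)  /* hypothetical "block peripheral DMA" bit */

static inline uint32_t iommu_read32(volatile uint32_t *base, uint32_t off)
{
    return base[off / sizeof(uint32_t)];
}

static inline void iommu_write32(volatile uint32_t *base, uint32_t off, uint32_t val)
{
    base[off / sizeof(uint32_t)] = val;
}

/* If the TBM control bit is set in the IOMMU, configure the IOMMU so that
 * DMA requests arriving from peripherals are blocked. */
bool iommu_apply_tbm_policy(volatile uint32_t *mmio_base)
{
    uint32_t ctrl = iommu_read32(mmio_base, IOMMU_CTRL_REG);

    if (ctrl & IOMMU_CTRL_TBM_BIT) {
        iommu_write32(mmio_base, IOMMU_CTRL_REG, ctrl | IOMMU_CTRL_BLOCK_DMA);
        return true;   /* peripheral DMA is now blocked */
    }
    return false;      /* TBM not set; DMA handling left unchanged */
}
```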
Abstract:
A method of managing peripherals is performed in a device coupled to a processor in a computer system. In the method, information associated with I/O activity for one or more peripherals is recorded in a first segment of a log. A second segment of the log is identified based on a next-segment pointer associated with the first segment of the log. In response to detecting a lack of available capacity in the first segment of the log, information associated with further I/O activity for the one or more peripherals is recorded in the second segment of the log.
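A minimal sketch of the segmented log described above, assuming fixed-size segments chained by a next-segment pointer; the struct layout, capacity, and function names are illustrative assumptions rather than details from the source.

```c
#include <stddef.h>

#define SEGMENT_CAPACITY 256   /* entries per segment (illustrative) */

struct io_event {
    unsigned peripheral_id;
    unsigned long long address;
    size_t length;
};

struct log_segment {
    struct io_event entries[SEGMENT_CAPACITY];
    size_t used;               /* entries recorded so far */
    struct log_segment *next;  /* next-segment pointer */
};

/* Record an I/O event in the current segment; when the current segment
 * lacks available capacity, follow its next-segment pointer and record
 * the event in the next segment instead. */
struct log_segment *log_record(struct log_segment *seg, const struct io_event *ev)
{
    if (seg->used == SEGMENT_CAPACITY) {
        if (seg->next == NULL)
            return NULL;       /* no further segment available */
        seg = seg->next;       /* switch to the second segment */
    }
    seg->entries[seg->used++] = *ev;
    return seg;                /* segment that now holds the event */
}
```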
Abstract:
A computer system is provided for preventing peripheral devices and/or processor cores from accessing restricted portions of system memory. For example, the computer system can include a host bridge, system memory coupled to the host bridge via a first access bus, a security processor coupled to the host bridge via a memory access bus that allows the security processor to access the system memory and a peripheral device, and a security processor memory management unit (SPMMU) coupled between the peripheral device and the host bridge. The security processor is configured to program the SPMMU via the memory access bus to specify a first restricted range of physical addresses in the system memory that the peripheral device is not permitted to access. The SPMMU can then process access requests from the peripheral device and deny access requests that are determined to be within the first restricted range.
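The range check performed by the SPMMU can be sketched as follows. The single-range limit, global state, and function names are illustrative assumptions; a real SPMMU would hold its ranges in hardware registers programmed over the memory access bus.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

struct spmmu_restricted_range {
    uint64_t base;   /* first restricted physical address */
    uint64_t limit;  /* last restricted physical address (inclusive) */
};

/* Programmed by the security processor over the memory access bus. */
static struct spmmu_restricted_range restricted = { 0, 0 };

void spmmu_program_restricted_range(uint64_t base, uint64_t limit)
{
    restricted.base = base;
    restricted.limit = limit;
}

/* Returns true if the peripheral's access request may proceed to system
 * memory; any request overlapping the restricted range is denied. */
bool spmmu_allow_access(uint64_t paddr, size_t len)
{
    uint64_t last = paddr + len - 1;
    bool overlaps = (paddr <= restricted.limit) && (last >= restricted.base);
    return !overlaps;
}
```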
Abstract:
Methods and apparatuses are provided for avoiding cold translation lookaside buffer (TLB) misses in a computer system. A typical system is configured as a heterogeneous computing system having at least one central processing unit (CPU) and one or more graphics processing units (GPUs) that share a common memory address space. Each processing unit (CPU and GPU) has an independent TLB. When offloading a task from a particular CPU to a particular GPU, translation information is sent along with the task assignment. The translation information allows the GPU to load the address translation data into its TLB prior to executing the task. Preloading the GPU's TLB reduces or avoids the cold TLB misses that would otherwise occur.
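The offload path can be sketched in C as below: the CPU bundles the relevant translations with the task, and the target GPU's TLB is warmed before the task runs. All types and functions here (gpu_tlb_insert, gpu_run, offload_task) are hypothetical placeholders for a driver or hardware interface, not an interface named in the source.

```c
#include <stdint.h>
#include <stddef.h>

struct translation_entry {
    uint64_t virt_page;    /* virtual page number in the shared address space */
    uint64_t phys_frame;   /* physical frame it maps to */
    uint32_t flags;        /* permissions, cacheability, etc. */
};

struct gpu_task {
    void (*kernel)(void *);
    void *args;
    const struct translation_entry *translations;  /* sent with the task */
    size_t n_translations;
};

/* Placeholder for the interface that installs an entry in the target
 * GPU's TLB. */
static void gpu_tlb_insert(unsigned gpu_id, const struct translation_entry *e)
{
    (void)gpu_id; (void)e;
}

/* Placeholder for task dispatch; here the kernel just runs locally. */
static void gpu_run(unsigned gpu_id, const struct gpu_task *task)
{
    (void)gpu_id;
    task->kernel(task->args);
}

/* Preload the GPU's TLB from the translations bundled with the task,
 * then launch the task so it starts without cold TLB misses. */
void offload_task(unsigned gpu_id, const struct gpu_task *task)
{
    for (size_t i = 0; i < task->n_translations; i++)
        gpu_tlb_insert(gpu_id, &task->translations[i]);

    gpu_run(gpu_id, task);
}
```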
Abstract:
A system is provided that includes an interposer having memory controller circuitry embedded therein. The interposer includes conductive vias that are embedded within and that extend through the interposer. The memory controller circuitry can be coupled to some of the conductive vias. In some implementations, other ones of the conductive vias are configured to be coupled to a processor and a memory module that can be mounted along a surface of the interposer. Conductive links are disposed on a surface of the interposer to couple the processor and the memory module to the memory controller circuitry.