ENERGY-EFFICIENT CORE VOLTAGE SELECTION APPARATUS AND METHOD

    Publication number: US20220058029A1

    Publication date: 2022-02-24

    Application number: US17131547

    Application date: 2020-12-22

    Abstract: A processor-core energy-efficiency ranking scheme, akin to a favored-core scheme in a multi-core processor system. The favored core is the energy-efficient core that allows an SoC to use the core with the lowest Vmin for energy efficiency. Such Vmin values may be fused in appropriate registers or stored in NVM during HVM. An OS scheduler achieves optimal energy performance by using the core ranking information to schedule certain applications on the core with the lowest Vmin. A bootstrap flow identifies the bootstrap processor (BSP) core as the most energy-efficient core of the SoC and assigns that core the lowest APIC ID value according to its lowest Vmin. Upon reading the fuses or NVM, the microcode/BIOS calculates and ranks the cores; that is, the microcode/BIOS ranks core APIC IDs based on efficiency around LFM frequencies. Based on the calculated ranking, the microcode or BIOS transfers BSP ownership to the most energy-efficient core.
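
    The ranking flow described above can be pictured with a small, self-contained sketch (not the actual microcode/BIOS implementation): per-core Vmin values, as they might be read from fuses or NVM, are sorted in ascending order, the lowest-Vmin core receives APIC ID 0, and BSP ownership goes to that core. The core count, the Vmin numbers, and the core_info layout below are illustrative assumptions.

```c
/*
 * Minimal sketch, assuming fused per-core Vmin values are already readable.
 * Rank cores by Vmin at LFM, give the lowest-Vmin core APIC ID 0, and treat
 * it as the bootstrap processor (BSP). Not the patented microcode/BIOS flow.
 */
#include <stdio.h>
#include <stdlib.h>

struct core_info {
    int physical_id;   /* fixed hardware core index              */
    int vmin_mv;       /* illustrative fused Vmin at LFM, in mV  */
    int apic_id;       /* assigned after ranking                 */
};

static int by_vmin(const void *a, const void *b)
{
    const struct core_info *ca = a, *cb = b;
    return ca->vmin_mv - cb->vmin_mv;       /* ascending: lowest Vmin first */
}

int main(void)
{
    /* Example Vmin readings for a 4-core SoC (made-up numbers). */
    struct core_info cores[] = {
        { 0, 620, -1 }, { 1, 605, -1 }, { 2, 640, -1 }, { 3, 610, -1 },
    };
    size_t n = sizeof cores / sizeof cores[0];

    qsort(cores, n, sizeof cores[0], by_vmin);

    /* Lowest Vmin ranks first: it gets APIC ID 0 and BSP ownership. */
    for (size_t i = 0; i < n; i++) {
        cores[i].apic_id = (int)i;
        printf("core %d: Vmin %d mV -> APIC ID %d%s\n",
               cores[i].physical_id, cores[i].vmin_mv, cores[i].apic_id,
               i == 0 ? " (BSP, most energy-efficient)" : "");
    }
    return 0;
}
```

    Running the example lists the cores in ascending Vmin order, with core 1 (the lowest Vmin in this made-up data) marked as the BSP.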

    Apparatus and method to dynamically expand associativity of a cache memory
    4.
    Invention grant (Active)

    Publication number: US09514047B2

    Publication date: 2016-12-06

    Application number: US14573811

    Application date: 2014-12-17

    Abstract: In an embodiment, a processor includes at least one core, a cache memory, and a cache controller. Responsive to a request to store an address of a data entry into the cache memory, the cache controller is to determine whether an initial cache set of the cache memory corresponding to the address has available capacity to store the address. Responsive to unavailability of capacity in the initial cache set, the cache controller is to generate a first alternate address associated with the data entry, to determine whether a first cache set corresponding to the first alternate address has available capacity to store the alternate address, and, if so, to store the first alternate address in the first cache set. Other embodiments are described and claimed.
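
    For illustration only, the insertion fallback described in this abstract can be sketched as follows: if the primary set has no free way, a first alternate set index is derived and tried instead. The set and way counts and the XOR-based alternate-index function are assumptions made for this example, not details taken from the patent.

```c
/*
 * Illustrative sketch of the insertion path: try the primary set first, and
 * on a full set derive one alternate set index and try that. The geometry
 * (16 sets x 2 ways) and the XOR index function are assumptions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SETS 16u
#define NUM_WAYS 2u

struct cache_set {
    uint32_t tags[NUM_WAYS];
    bool     valid[NUM_WAYS];
};

static struct cache_set cache[NUM_SETS];

/* Store a tag in the given set if any way is free. */
static bool try_store(uint32_t set_idx, uint32_t tag)
{
    for (unsigned w = 0; w < NUM_WAYS; w++) {
        if (!cache[set_idx].valid[w]) {
            cache[set_idx].valid[w] = true;
            cache[set_idx].tags[w]  = tag;
            return true;
        }
    }
    return false;                       /* set has no available capacity */
}

/* Insert an address; on a full primary set, fall back to an alternate set. */
static bool cache_insert(uint32_t addr)
{
    uint32_t set = addr % NUM_SETS;
    uint32_t tag = addr / NUM_SETS;

    if (try_store(set, tag))
        return true;

    /* First alternate index: fold the tag back into the set index. */
    uint32_t alt_set = (set ^ tag) % NUM_SETS;
    return try_store(alt_set, tag);     /* give up if this set is full too */
}

int main(void)
{
    /* 0x10, 0x110, 0x210 all map to primary set 0; the third insert must
     * overflow into an alternate set. */
    for (uint32_t a = 0x10; a <= 0x210; a += 0x100)
        printf("insert 0x%x: %s\n", (unsigned)a,
               cache_insert(a) ? "stored" : "no room");
    return 0;
}
```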

    Isochronous agent data pinning in a multi-level memory system
    5.
    Invention grant (Active)

    Publication number: US09542336B2

    Publication date: 2017-01-10

    Application number: US14133097

    Application date: 2013-12-18

    CPC classification number: G06F12/126

    Abstract: A processing device comprises an instruction execution unit, a memory agent and pinning logic to pin memory pages in a multi-level memory system upon request by the memory agent. The pinning logic includes an agent interface module to receive, from the memory agent, a pin request indicating a first memory page in the multi-level memory system, the multi-level memory system comprising a near memory and a far memory. The pinning logic further includes a memory interface module to retrieve the first memory page from the far memory and write the first memory page to the near memory. In addition, the pinning logic also includes a descriptor table management module to mark the first memory page as pinned in the near memory, wherein marking the first memory page as pinned comprises setting a pinning bit corresponding to the first memory page in a cache descriptor table and to prevent the first memory page from being evicted from the near memory when the first memory page is marked as pinned.
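
    A rough sketch of the pinning flow, under stated assumptions: a pin request copies a page from far to near memory and sets a pin bit in a descriptor table, and the eviction path skips any slot whose pin bit is set. The page counts, the table layout, and the use of memcpy as a stand-in for the memory interface module are illustrative, not taken from the claims.

```c
/*
 * Sketch only: pin a far-memory page into near memory and record the pin bit
 * in a per-slot descriptor table; pinned slots are never chosen for eviction.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE  4096u
#define NEAR_PAGES 4u
#define FAR_PAGES  16u

static uint8_t near_mem[NEAR_PAGES][PAGE_SIZE];
static uint8_t far_mem[FAR_PAGES][PAGE_SIZE];

/* One descriptor per near-memory slot: which far page it holds, plus pin bit. */
struct descriptor {
    int  far_page;      /* -1 if the slot is empty */
    bool pinned;
};
static struct descriptor desc_table[NEAR_PAGES] = {
    { -1, false }, { -1, false }, { -1, false }, { -1, false }
};

/* Agent-facing entry point: pin one far-memory page into near memory. */
static bool pin_page(int far_page)
{
    for (unsigned slot = 0; slot < NEAR_PAGES; slot++) {
        if (desc_table[slot].far_page == -1) {
            memcpy(near_mem[slot], far_mem[far_page], PAGE_SIZE);
            desc_table[slot].far_page = far_page;
            desc_table[slot].pinned   = true;    /* set the pinning bit */
            return true;
        }
    }
    return false;       /* no free near-memory slot */
}

/* Eviction path: a slot marked pinned is never selected as a victim. */
static int pick_victim(void)
{
    for (unsigned slot = 0; slot < NEAR_PAGES; slot++)
        if (desc_table[slot].far_page != -1 && !desc_table[slot].pinned)
            return (int)slot;
    return -1;          /* everything resident is pinned */
}

int main(void)
{
    pin_page(3);
    pin_page(7);
    printf("eviction candidate: %d (expect -1: all resident pages are pinned)\n",
           pick_victim());
    return 0;
}
```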

    GUARANTEED QUALITY OF SERVICE IN SYSTEM-ON-A-CHIP UNCORE FABRIC
    6.
    Invention application (Pending, published)

    Publication number: US20160188529A1

    Publication date: 2016-06-30

    Application number: US14583142

    Application date: 2014-12-25

    Abstract: In an example, a control system may include a system-on-a-chip (SoC), including one processor for real-time operation to manage devices in the control system, and another processor configured to execute auxiliary functions such as a user interface for the control system. The first core and second core may share memory such as dynamic random access memory (DRAM), and may also share an uncore fabric configured to communicatively couple the processors to one or more peripheral devices. The first core may require a guaranteed quality of service (QoS) to memory and/or peripherals. The uncore fabric may be divided into a first “real-time” virtual channel designated for traffic from the first processor, and a second “auxiliary” virtual channel designated for traffic from the second processor. The uncore fabric may apply a suitable selection or weighting algorithm to the virtual channels to guarantee the QoS.
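
    One plausible way to picture the "suitable selection or weighting algorithm" is a credit-based weighted arbiter between the two virtual channels, sketched below. The 3:1 weighting, the credit scheme, and the always-busy traffic model are assumptions for illustration, not details of the claimed fabric.

```c
/*
 * Sketch of a credit-based weighted arbiter: the real-time virtual channel is
 * granted a fixed share of slots per round so its service rate is bounded.
 */
#include <stdio.h>

enum vc { VC_REALTIME = 0, VC_AUX = 1 };

/* Grants per arbitration round: 3 slots for real-time, 1 for auxiliary. */
static const int weight[2] = { 3, 1 };

struct arbiter {
    int credits[2];     /* remaining grants in the current round */
};

static void refill(struct arbiter *a)
{
    a->credits[VC_REALTIME] = weight[VC_REALTIME];
    a->credits[VC_AUX]      = weight[VC_AUX];
}

/* Pick a channel for this slot; pending[] says which channels have traffic. */
static int arbitrate(struct arbiter *a, const int pending[2])
{
    if (a->credits[VC_REALTIME] == 0 && a->credits[VC_AUX] == 0)
        refill(a);

    /* Real-time traffic wins whenever it still has credits. */
    if (pending[VC_REALTIME] && a->credits[VC_REALTIME] > 0) {
        a->credits[VC_REALTIME]--;
        return VC_REALTIME;
    }
    if (pending[VC_AUX] && a->credits[VC_AUX] > 0) {
        a->credits[VC_AUX]--;
        return VC_AUX;
    }

    /* Don't leave the fabric idle: serve whichever channel is pending. */
    return pending[VC_REALTIME] ? VC_REALTIME : VC_AUX;
}

int main(void)
{
    struct arbiter a;
    refill(&a);

    int pending[2] = { 1, 1 };      /* both channels always have traffic */
    for (int slot = 0; slot < 8; slot++)
        printf("slot %d -> %s\n", slot,
               arbitrate(&a, pending) == VC_REALTIME ? "real-time VC" : "aux VC");
    return 0;
}
```

    With both channels always busy, the grant pattern repeats real-time, real-time, real-time, auxiliary, so the auxiliary channel's share is capped and the real-time channel's worst-case wait is a single slot.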

    Apparatus and Method to Dynamically Expand Associativity of A Cache Memory
    7.
    Invention application (Active)

    Publication number: US20160179666A1

    Publication date: 2016-06-23

    Application number: US14573811

    Application date: 2014-12-17

    Abstract: In an embodiment, a processor includes at least one core, a cache memory, and a cache controller. Responsive to a request to store an address of a data entry into the cache memory, the cache controller is to determine whether an initial cache set of the cache memory corresponding to the address has available capacity to store the address. Responsive to unavailability of capacity in the initial cache set, the cache controller is to generate a first alternate address associated with the data entry, to determine whether a first cache set corresponding to the first alternate address has available capacity to store the alternate address, and, if so, to store the first alternate address in the first cache set. Other embodiments are described and claimed.
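
    Since this application shares its abstract with the granted patent above, the sketch here covers the complementary lookup side under the same illustrative assumptions (16 sets, 2 ways, XOR-derived alternate index): a line that was stored under its alternate address must also be found by probing the alternate set.

```c
/*
 * Companion to the insertion sketch above, same assumed geometry and index
 * function: a lookup probes the primary set and, on a miss, the alternate set.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SETS 16u
#define NUM_WAYS 2u

struct cache_set { uint32_t tags[NUM_WAYS]; bool valid[NUM_WAYS]; };
static struct cache_set cache[NUM_SETS];

/* Check whether the tag is present in one set. */
static bool probe(uint32_t set_idx, uint32_t tag)
{
    for (unsigned w = 0; w < NUM_WAYS; w++)
        if (cache[set_idx].valid[w] && cache[set_idx].tags[w] == tag)
            return true;
    return false;
}

/* Hit if the tag is in either the primary set or the alternate set. */
static bool cache_lookup(uint32_t addr)
{
    uint32_t set = addr % NUM_SETS;
    uint32_t tag = addr / NUM_SETS;
    return probe(set, tag) || probe((set ^ tag) % NUM_SETS, tag);
}

int main(void)
{
    /* Place one line directly in its alternate set, as the insert path would. */
    uint32_t addr = 0x210;
    uint32_t set  = addr % NUM_SETS, tag = addr / NUM_SETS;
    uint32_t alt  = (set ^ tag) % NUM_SETS;
    cache[alt].valid[0] = true;
    cache[alt].tags[0]  = tag;

    printf("lookup 0x%x: %s\n", (unsigned)addr,
           cache_lookup(addr) ? "hit" : "miss");
    return 0;
}
```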
