Static power reduction in caches using deterministic naps

    Publication No.: US10191534B2

    Publication Date: 2019-01-29

    Application No.: US15804785

    Filing Date: 2017-11-06

    Abstract: Disclosed embodiments relate to a dNap architecture that accurately transitions cache lines to full power state before an access to them. This ensures that there are no additional delays due to waking up drowsy lines. Only cache lines that are determined by the DMC to be accessed in the immediate future are fully powered while others are put in drowsy mode. As a result, we are able to significantly reduce leakage power with no cache performance degradation and minimal hardware overhead, especially at higher associativities. Up to 92% static/leakage power savings are accomplished with minimal hardware overhead and no performance tradeoff.
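    The mechanism the abstract describes, waking a line a fixed number of cycles before its known access time so the access itself never stalls on a wake-up, can be sketched as a toy simulation. Everything below (the 2-cycle wake latency, the trace, the function names) is illustrative and not taken from the patent:

    ```python
    WAKE_LATENCY = 2  # cycles to restore a drowsy line to full power (assumed value)

    def nap_awake_cycles(accesses):
        """Line-cycles spent fully powered under a deterministic-nap policy.

        `accesses` is a list of (cycle, line) pairs known ahead of time, so
        each line is woken exactly WAKE_LATENCY cycles before its access and
        returned to drowsy mode right afterwards: zero wake-up stalls by design.
        """
        powered = {}  # line -> set of cycles it must be fully powered
        for cycle, line in accesses:
            powered.setdefault(line, set()).update(
                range(cycle - WAKE_LATENCY, cycle + 1))
        return sum(len(cycles) for cycles in powered.values())

    def leakage_savings(accesses, num_lines, total_cycles):
        """Fraction of line-cycles saved versus keeping every line awake."""
        baseline = num_lines * total_cycles
        return 1.0 - nap_awake_cycles(accesses) / baseline

    # A sparse trace over a 64-line cache: most lines nap almost all the time.
    trace = [(10, 3), (25, 3), (40, 17), (90, 5)]
    print(f"leakage savings: {leakage_savings(trace, 64, 100):.1%}")  # 99.8%
    ```

    The savings figure here is a property of the artificial trace, not the patent's reported 92%; the point is only that leakage cost becomes proportional to the short pre-access wake windows rather than to total runtime.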

    STATIC POWER REDUCTION IN CACHES USING DETERMINISTIC NAPS

    Publication No.: US20250013284A1

    Publication Date: 2025-01-09

    Application No.: US18894180

    Filing Date: 2024-09-24

    Abstract: Disclosed embodiments relate to a dNap architecture that accurately transitions cache lines to full power state before an access to them. This ensures that there are no additional delays due to waking up drowsy lines. Only cache lines that are determined by the DMC to be accessed in the immediate future are fully powered while others are put in drowsy mode. As a result, we are able to significantly reduce leakage power with no cache performance degradation and minimal hardware overhead, especially at higher associativities. Up to 92% static/leakage power savings are accomplished with minimal hardware overhead and no performance tradeoff.

    STATIC POWER REDUCTION IN CACHES USING DETERMINISTIC NAPS

    Publication No.: US20220091659A1

    Publication Date: 2022-03-24

    Application No.: US17541776

    Filing Date: 2021-12-03

    Abstract: Disclosed embodiments relate to a dNap architecture that accurately transitions cache lines to full power state before an access to them. This ensures that there are no additional delays due to waking up drowsy lines. Only cache lines that are determined by the DMC to be accessed in the immediate future are fully powered while others are put in drowsy mode. As a result, we are able to significantly reduce leakage power with no cache performance degradation and minimal hardware overhead, especially at higher associativities. Up to 92% static/leakage power savings are accomplished with minimal hardware overhead and no performance tradeoff.

    Dynamic power reduction and performance improvement in caches using fast access

    Publication No.: US09652397B2

    Publication Date: 2017-05-16

    Application No.: US14694415

    Filing Date: 2015-04-23

    Abstract: With the increasing demand for improved processor performance, memory systems have been growing increasingly larger to keep up with this performance demand. Caches, which dictate the performance of memory systems, are often the focus of improved performance in memory systems, and the most common techniques used to increase cache performance are increased size and associativity. Unfortunately, these methods yield increased static and dynamic power consumption. In this invention, a technique is shown that reduces the power consumption in associative caches with some improvement in cache performance. The architecture shown achieves these power savings by reducing the number of ways queried on each cache access, using a simple hash function and no additional storage, while skipping some pipe stages for improved performance. Up to 90% reduction in power consumption with a 4.6% performance improvement was observed.
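    The core idea of this abstract, a storage-free hash over address bits that selects a small group of ways so each lookup probes only a subset of the tag/data arrays, can be sketched as follows. The hash function, the 8-way/2-probed split, and all names are assumptions for illustration, not the patent's actual design:

    ```python
    NUM_WAYS = 8      # cache associativity (assumed)
    WAYS_QUERIED = 2  # ways actually probed per access under the hash filter

    def way_group(tag):
        """Map a tag to the small group of ways it may occupy.

        Recomputed from address bits on every access, so no extra storage is
        needed: a block is always installed into, and looked up from, the
        same WAYS_QUERIED-way group.
        """
        group = (tag ^ (tag >> 3)) % (NUM_WAYS // WAYS_QUERIED)
        return range(group * WAYS_QUERIED, (group + 1) * WAYS_QUERIED)

    def lookup(set_tags, tag):
        """Probe only the hashed way group of one set; return hit way or None."""
        for way in way_group(tag):
            if set_tags[way] == tag:
                return way
        return None

    # Tag/data-array energy per lookup relative to probing every way:
    energy_fraction = WAYS_QUERIED / NUM_WAYS  # 2/8 = 0.25
    ```

    The trade-off sketched here is the usual one for way filtering: dynamic lookup energy scales with WAYS_QUERIED instead of NUM_WAYS, at the cost of restricting where a block may live within its set.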

    STATIC POWER REDUCTION IN CACHES USING DETERMINISTIC NAPS

    Publication No.: US20200348747A1

    Publication Date: 2020-11-05

    Application No.: US16933407

    Filing Date: 2020-07-20

    Abstract: Disclosed embodiments relate to a dNap architecture that accurately transitions cache lines to full power state before an access to them. This ensures that there are no additional delays due to waking up drowsy lines. Only cache lines that are determined by the DMC to be accessed in the immediate future are fully powered while others are put in drowsy mode. As a result, we are able to significantly reduce leakage power with no cache performance degradation and minimal hardware overhead, especially at higher associativities. Up to 92% static/leakage power savings are accomplished with minimal hardware overhead and no performance tradeoff.

    Static Power Reduction in Caches Using Deterministic Naps (invention application; pending, published)

    Publication No.: US20150310902A1

    Publication Date: 2015-10-29

    Application No.: US14694285

    Filing Date: 2015-04-23

    Abstract: The dNap architecture is able to accurately transition cache lines to full power state before an access to them. This ensures that there are no additional delays due to waking up drowsy lines. Only cache lines that are determined by the DMC to be accessed in the immediate future are fully powered while others are put in drowsy mode. As a result, we are able to significantly reduce leakage power with no cache performance degradation and minimal hardware overhead, especially at higher associativities. Up to 92% static/leakage power savings are accomplished with minimal hardware overhead and no performance tradeoff.

    Dynamic Power Reduction and Performance Improvement in Caches Using Fast Access (invention application; granted)

    Publication No.: US20150309930A1

    Publication Date: 2015-10-29

    Application No.: US14694415

    Filing Date: 2015-04-23

    Abstract: With the increasing demand for improved processor performance, memory systems have been growing increasingly larger to keep up with this performance demand. Caches, which dictate the performance of memory systems, are often the focus of improved performance in memory systems, and the most common techniques used to increase cache performance are increased size and associativity. Unfortunately, these methods yield increased static and dynamic power consumption. In this invention, a technique is shown that reduces the power consumption in associative caches with some improvement in cache performance. The architecture shown achieves these power savings by reducing the number of ways queried on each cache access, using a simple hash function and no additional storage, while skipping some pipe stages for improved performance. Up to 90% reduction in power consumption with a 4.6% performance improvement was observed.
