-
Publication No.: US11500777B2
Publication Date: 2022-11-15
Application No.: US16775479
Application Date: 2020-01-29
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Oluleye Olorode , Ramakrishnan Venkatasubramanian
IPC: G06F12/0862 , G06F12/02 , G06F12/1027 , G06F12/0871 , G06F9/38
Abstract: Disclosed embodiments provide a technique in which a memory controller determines whether a fetch address is a miss in an L1 cache and, when a miss occurs, allocates a way of the L1 cache, determines whether the allocated way matches a scoreboard entry of pending service requests, and, when such a match is found, determines whether the request address of the matching scoreboard entry matches the fetch address. When the matching scoreboard entry also has a request address matching the fetch address, the scoreboard entry is modified into a demand request.
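The miss-handling flow in the abstract can be illustrated with a minimal Python sketch. All class and field names (`L1Controller`, `ScoreboardEntry`, the round-robin way allocator) are illustrative stand-ins, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class ScoreboardEntry:
    way: int                 # L1 way the pending fill targets
    request_address: int     # address of the in-flight service request
    is_demand: bool = False  # False => still a speculative prefetch

class L1Controller:
    def __init__(self, num_ways=4):
        self.num_ways = num_ways
        self.cache = {}        # toy tag store: address -> way
        self.scoreboard = []   # pending service requests to the next level
        self.next_way = 0

    def allocate_way(self):
        # Trivial round-robin allocation stands in for the real policy.
        way = self.next_way
        self.next_way = (self.next_way + 1) % self.num_ways
        return way

    def fetch(self, address):
        if address in self.cache:
            return "hit"
        way = self.allocate_way()
        for entry in self.scoreboard:
            # First compare the allocated way, then the request address;
            # a double match promotes the pending prefetch to a demand.
            if entry.way == way and entry.request_address == address:
                entry.is_demand = True
                return "promoted"
        self.scoreboard.append(ScoreboardEntry(way, address, is_demand=True))
        return "miss"
```

The point of the double comparison is that a demand fetch landing on a line already being prefetched need not issue a second request; the in-flight request simply changes priority.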
-
Publication No.: US11221665B2
Publication Date: 2022-01-11
Application No.: US16933407
Application Date: 2020-07-20
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Oluleye Olorode , Mehrdad Nourani
IPC: G06F1/3234 , G06F12/0811 , G06F12/0895 , G06F12/0846
Abstract: Disclosed embodiments relate to a dNap architecture that accurately transitions cache lines to the full power state before they are accessed. This ensures that there are no additional delays due to waking up drowsy lines. Only cache lines that the DMC determines will be accessed in the immediate future are fully powered, while the others are placed in drowsy mode. As a result, leakage power is significantly reduced with no cache performance degradation and minimal hardware overhead, especially at higher associativities. Up to 92% static/leakage power savings are achieved with minimal hardware overhead and no performance tradeoff.
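The drowsy-line policy can be sketched as a toy Python model: only the line being accessed and the line the controller predicts will be accessed next are held at full power, and everything else sits in a low-leakage drowsy state. The sequential next-line predictor and all names here are illustrative assumptions, not the DMC's actual prediction logic.

```python
class DrowsyCache:
    """Toy model: per-line power states, with a pre-wake of the predicted line."""

    def __init__(self, num_lines):
        self.power = ["drowsy"] * num_lines  # all lines start drowsy

    def predict_next(self, last_line):
        # Stand-in predictor: assume sequential access to the next line.
        return (last_line + 1) % len(self.power)

    def access(self, line):
        # A drowsy line at access time would incur a wake-up delay.
        woke_late = self.power[line] == "drowsy"
        # Drop everything to drowsy, then fully power only the accessed
        # line and the line predicted to be accessed next.
        self.power = ["drowsy"] * len(self.power)
        self.power[line] = "full"
        self.power[self.predict_next(line)] = "full"
        return woke_late
```

When the predictor is right, the next access finds its line already at full power, so the leakage savings come with no added access latency; that is the property the abstract's "no cache performance degradation" claim rests on.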
-
Publication No.: US10725527B2
Publication Date: 2020-07-28
Application No.: US16253363
Application Date: 2019-01-22
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Oluleye Olorode , Mehrdad Nourani
IPC: G06F1/3234 , G06F12/0811 , G06F12/0895 , G06F12/0846
Abstract: Disclosed embodiments relate to a dNap architecture that accurately transitions cache lines to the full power state before they are accessed. This ensures that there are no additional delays due to waking up drowsy lines. Only cache lines that the DMC determines will be accessed in the immediate future are fully powered, while the others are placed in drowsy mode. As a result, leakage power is significantly reduced with no cache performance degradation and minimal hardware overhead, especially at higher associativities. Up to 92% static/leakage power savings are achieved with minimal hardware overhead and no performance tradeoff.
-
Publication No.: US20190179759A1
Publication Date: 2019-06-13
Application No.: US16279721
Application Date: 2019-02-19
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Oluleye Olorode , Ramakrishnan Venkatasubramanian
IPC: G06F12/0862 , G06F12/1027 , G06F12/02 , G06F9/38
Abstract: Disclosed embodiments provide a technique in which a memory controller determines whether a fetch address is a miss in an L1 cache and, when a miss occurs, allocates a way of the L1 cache, determines whether the allocated way matches a scoreboard entry of pending service requests, and, when such a match is found, determines whether the request address of the matching scoreboard entry matches the fetch address. When the matching scoreboard entry also has a request address matching the fetch address, the scoreboard entry is modified into a demand request.
-
Publication No.: US09811148B2
Publication Date: 2017-11-07
Application No.: US15431922
Application Date: 2017-02-14
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Oluleye Olorode , Mehrdad Nourani
IPC: G06F1/32 , G06F12/08 , G06F12/0846 , G06F12/0811
CPC classification number: G06F1/3275 , G06F12/0811 , G06F12/0848 , G06F12/0895 , G06F2212/1028 , G06F2212/282 , G06F2212/283 , Y02D10/13
Abstract: The dNap architecture is able to accurately transition cache lines to the full power state before they are accessed. This ensures that there are no additional delays due to waking up drowsy lines. Only cache lines that the DMC determines will be accessed in the immediate future are fully powered, while the others are placed in drowsy mode. As a result, leakage power is significantly reduced with no cache performance degradation and minimal hardware overhead, especially at higher associativities. Up to 92% static/leakage power savings are achieved with minimal hardware overhead and no performance tradeoff.
-
Publication No.: US20170153691A1
Publication Date: 2017-06-01
Application No.: US15431922
Application Date: 2017-02-14
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Oluleye Olorode , Mehrdad Nourani
IPC: G06F1/32 , G06F12/0811 , G06F12/0846
CPC classification number: G06F1/3275 , G06F12/0811 , G06F12/0848 , G06F12/0895 , G06F2212/1028 , G06F2212/282 , G06F2212/283 , Y02D10/13
Abstract: The dNap architecture is able to accurately transition cache lines to the full power state before they are accessed. This ensures that there are no additional delays due to waking up drowsy lines. Only cache lines that the DMC determines will be accessed in the immediate future are fully powered, while the others are placed in drowsy mode. As a result, leakage power is significantly reduced with no cache performance degradation and minimal hardware overhead, especially at higher associativities. Up to 92% static/leakage power savings are achieved with minimal hardware overhead and no performance tradeoff.
-
Publication No.: US09652392B2
Publication Date: 2017-05-16
Application No.: US15270018
Application Date: 2016-09-20
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Ramakrishnan Venkatasubramanian , Oluleye Olorode , Hung Ong
IPC: G06F12/08 , G06F9/38 , G06F12/0817 , G06F12/0811 , G06F12/0804 , G06F12/0875 , G06F12/0897
CPC classification number: G06F12/0828 , G06F9/30036 , G06F9/30072 , G06F9/30094 , G06F9/30112 , G06F9/3013 , G06F9/38 , G06F9/3836 , G06F9/3838 , G06F9/3853 , G06F9/3855 , G06F9/3873 , G06F12/0804 , G06F12/0811 , G06F12/0875 , G06F12/0897 , G06F2212/452 , G06F2212/60 , G06F2212/621
Abstract: A method is shown that eliminates the need for a dedicated reorder buffer register bank or memory space in a multi-level cache system. Because data requests to the L2 cache may be returned out of order, the L1 cache uses its own cache memory to buffer the out-of-order data and delivers the data to the requesting processor in the correct order.
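The in-order delivery scheme can be sketched in a few lines of Python: returns from L2 are parked in the L1 storage (here a plain dict standing in for the L1 data array) and released to the CPU only when they are at the head of the request order. The class and method names are illustrative, not from the patent.

```python
class InOrderReturn:
    """Toy model: L1 storage doubles as the reorder buffer for L2 returns."""

    def __init__(self):
        self.pending = []   # request ids in the order the CPU issued them
        self.buffer = {}    # req_id -> data parked in L1 until deliverable

    def request(self, req_id):
        self.pending.append(req_id)

    def l2_return(self, req_id, data):
        # L2 responses may arrive in any order; park the data first.
        self.buffer[req_id] = data
        delivered = []
        # Release everything that is now contiguous from the head of the
        # request order, preserving the order the CPU expects.
        while self.pending and self.pending[0] in self.buffer:
            head = self.pending.pop(0)
            delivered.append(self.buffer.pop(head))
        return delivered
```

An early out-of-order return yields nothing; once the oldest outstanding request arrives, it and any already-buffered successors are drained together, which is the behavior a dedicated reorder buffer would otherwise provide.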
-
Publication No.: US12124374B2
Publication Date: 2024-10-22
Application No.: US17987482
Application Date: 2022-11-15
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Oluleye Olorode , Ramakrishnan Venkatasubramanian
IPC: G06F12/0862 , G06F9/38 , G06F12/02 , G06F12/0871 , G06F12/1027
CPC classification number: G06F12/0862 , G06F9/3806 , G06F9/3838 , G06F12/0215 , G06F12/0871 , G06F12/1027 , G06F2212/6022
Abstract: Disclosed embodiments provide a technique in which a memory controller determines whether a fetch address is a miss in an L1 cache and, when a miss occurs, allocates a way of the L1 cache, determines whether the allocated way matches a scoreboard entry of pending service requests, and, when such a match is found, determines whether the request address of the matching scoreboard entry matches the fetch address. When the matching scoreboard entry also has a request address matching the fetch address, the scoreboard entry is modified into a demand request.
-
Publication No.: US20230078414A1
Publication Date: 2023-03-16
Application No.: US17987482
Application Date: 2022-11-15
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Oluleye Olorode , Ramakrishnan Venkatasubramanian
IPC: G06F12/0862 , G06F12/02 , G06F9/38 , G06F12/1027 , G06F12/0871
Abstract: Disclosed embodiments provide a technique in which a memory controller determines whether a fetch address is a miss in an L1 cache and, when a miss occurs, allocates a way of the L1 cache, determines whether the allocated way matches a scoreboard entry of pending service requests, and, when such a match is found, determines whether the request address of the matching scoreboard entry matches the fetch address. When the matching scoreboard entry also has a request address matching the fetch address, the scoreboard entry is modified into a demand request.
-
Publication No.: US20230004498A1
Publication Date: 2023-01-05
Application No.: US17940070
Application Date: 2022-09-08
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventor: Oluleye Olorode , Ramakrishnan Venkatasubramanian , Hung Ong
IPC: G06F12/0862 , G06F12/0875 , G06F9/30 , G06F9/38 , G06F12/0811 , G06F12/0815
Abstract: This invention involves a cache system in a digital data processing apparatus including a central processing unit core, a level one instruction cache, and a level two cache. Cache lines in the level two cache are twice the size of cache lines in the level one instruction cache. The central processing unit core requests additional program instructions when needed via a request address. Upon a miss in the level one instruction cache that hits in the upper half of a level two cache line, the level two cache supplies the upper half of the line to the level one instruction cache. On the following level two cache memory cycle, the level two cache supplies the lower half of the line to the level one instruction cache. This cache technique thus prefetches the lower half of the level two cache line while employing fewer resources than an ordinary prefetch.
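The split-line fill order can be sketched as a small address calculation: each L2 line covers two L1 lines, the demanded half is sent first, and the other half follows on the next L2 cycle as an implicit prefetch. The line sizes and the lower-half case here are illustrative assumptions; the abstract explicitly describes only the upper-half hit.

```python
L1_LINE = 32            # bytes; sizes are illustrative, not from the patent
L2_LINE = 2 * L1_LINE   # an L2 line spans exactly two L1 lines

def l2_fill_sequence(request_address):
    """Return the L1-line-sized fill addresses in the order L2 sends them."""
    l2_base = request_address - (request_address % L2_LINE)
    lower_half = l2_base             # first L1 line of the L2 line
    upper_half = l2_base + L1_LINE   # second L1 line of the L2 line
    if request_address >= upper_half:
        # Demand hit in the upper half: send it first, then prefetch the
        # lower half on the following L2 cycle.
        return [upper_half, lower_half]
    # Assumed symmetric case for a lower-half demand.
    return [lower_half, upper_half]
```

Because the second transfer reuses the L2 access already in flight, the companion half arrives without a separate prefetch request, which is the resource saving the abstract claims.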