-
1.
Publication No.: US20200050920A1
Publication Date: 2020-02-13
Application No.: US16514078
Application Date: 2019-07-17
Applicant: NVIDIA Corporation
Inventor: Sachin IDGUNJI , Michael SIU , Alex GU , James REILLEY , Manan PATEL , Raj SELVANESAN , Ewa KUBALSKA
IPC: G06N3/04 , G06N3/08 , G06F1/3206 , G06F9/30 , G06F9/38
Abstract: An integrated circuit such as, for example, a graphics processing unit (GPU), includes a dynamic power controller for adjusting operating voltage and/or frequency. The controller may receive the current power used by the integrated circuit and a predicted power determined from instructions pending in a plurality of processors. The controller determines the adjustments to the operating voltage and/or frequency needed to minimize the difference between the current power and the predicted power. An in-system reinforcement learning mechanism is included to self-tune the parameters of the controller.
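As a rough illustration of the control loop this abstract describes, the sketch below implements a plain proportional voltage/frequency controller with a crudely self-tuned gain standing in for the in-system reinforcement learning mechanism. All names (PowerController, step, gain_) and numeric limits are assumptions for illustration only and do not come from the patent.

```cpp
#include <algorithm>
#include <iostream>

// Minimal sketch, not the patented design: a proportional controller that
// nudges the clock toward the predicted power budget, with a crude
// self-tuned gain standing in for the reinforcement-learning mechanism.
class PowerController {
public:
    // currentPowerW: measured power draw; predictedPowerW: power predicted
    // from the instructions pending in the processors.
    double step(double currentPowerW, double predictedPowerW) {
        double error = predictedPowerW - currentPowerW;

        // Proportional adjustment of the clock, clamped to safe limits.
        frequencyMHz_ = std::clamp(frequencyMHz_ + gain_ * error,
                                   minFreqMHz_, maxFreqMHz_);

        // Crude self-tuning: shrink the gain when the error oscillates,
        // grow it slowly while the sign stays stable.
        if (error * lastError_ < 0.0) gain_ *= 0.9;
        else                          gain_ *= 1.01;
        lastError_ = error;
        return frequencyMHz_;
    }

private:
    double frequencyMHz_ = 1200.0;   // illustrative starting clock
    double minFreqMHz_   = 300.0;
    double maxFreqMHz_   = 2000.0;
    double gain_         = 1.0;      // MHz per watt of error
    double lastError_    = 0.0;
};

int main() {
    PowerController ctrl;
    // Predicted power exceeds measured power, so the clock is raised.
    std::cout << ctrl.step(/*currentPowerW=*/150.0, /*predictedPowerW=*/180.0)
              << " MHz\n";
}
```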
-
2.
Publication No.: US20180322078A1
Publication Date: 2018-11-08
Application No.: US15716461
Application Date: 2017-09-26
Applicant: NVIDIA Corporation
Inventor: Xiaogang QIU , Ronny KRASHINSKY , Steven HEINRICH , Shirish GADRE , John EDMONDSON , Jack CHOQUETTE , Mark GEBHART , Ramesh JANDHYALA , Poornachandra RAO , Omkar PARANJAPE , Michael SIU
IPC: G06F13/28 , G06F12/0811 , G06F12/0891 , G06F12/084
Abstract: A unified cache subsystem includes a data memory configured as both a shared memory and a local cache memory. The unified cache subsystem processes different types of memory transactions using different data pathways. To process memory transactions that target shared memory, the unified cache subsystem includes a direct pathway to the data memory. To process memory transactions that do not target shared memory, the unified cache subsystem includes a tag processing pipeline configured to identify cache hits and cache misses. When the tag processing pipeline identifies a cache hit for a given memory transaction, the transaction is rerouted to the direct pathway to the data memory. When the tag processing pipeline identifies a cache miss for a given memory transaction, the transaction is pushed into a first-in, first-out (FIFO) queue until the miss data is returned from external memory. The tag processing pipeline is also configured to process texture-oriented memory transactions.
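To make the two data pathways concrete, here is a minimal behavioral model under stated assumptions: transactions that target shared memory go straight to the data memory, everything else goes through a tag lookup, hits are rerouted to the direct path, and misses wait in a FIFO until the fill returns. The class and member names (UnifiedCache, MemoryTransaction, onFillReturned) are hypothetical, and a single hash map stands in for the real tag processing pipeline.

```cpp
#include <cstdint>
#include <optional>
#include <queue>
#include <unordered_map>
#include <vector>

// Hypothetical behavioral sketch of the dual-pathway idea, not the hardware design.
struct MemoryTransaction {
    uint64_t address;
    bool     targetsSharedMemory;
};

class UnifiedCache {
public:
    // Returns data if the request can be served now; a miss is queued in the
    // FIFO until the fill returns from external memory.
    std::optional<uint32_t> process(const MemoryTransaction& tx) {
        if (tx.targetsSharedMemory)
            return readDataMemory(tx.address);        // direct pathway
        auto tag = tags_.find(tx.address);            // tag lookup
        if (tag != tags_.end())
            return readDataMemory(tag->second);       // hit: reroute to direct path
        missFifo_.push(tx);                           // miss: wait for fill
        return std::nullopt;
    }

    // Called when miss data comes back from external memory.
    void onFillReturned(uint64_t address, uint32_t line) {
        tags_[address] = dataMemory_.size();
        dataMemory_.push_back(line);
        if (!missFifo_.empty()) missFifo_.pop();      // retire the oldest miss
    }

private:
    uint32_t readDataMemory(uint64_t slot) const {
        return slot < dataMemory_.size() ? dataMemory_[slot] : 0u;
    }

    std::vector<uint32_t>                  dataMemory_;  // shared memory + cache lines
    std::unordered_map<uint64_t, uint64_t> tags_;        // address -> data-memory slot
    std::queue<MemoryTransaction>          missFifo_;    // pending misses
};

int main() {
    UnifiedCache cache;
    MemoryTransaction load{0xABCD, /*targetsSharedMemory=*/false};
    if (!cache.process(load))                 // first access: tag miss, queued
        cache.onFillReturned(load.address, 7);
    auto hit = cache.process(load);           // second access: tag hit
    return hit ? 0 : 1;
}
```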
-
3.
Publication No.: US20180322077A1
Publication Date: 2018-11-08
Application No.: US15587213
Application Date: 2017-05-04
Applicant: NVIDIA Corporation
Inventor: Xiaogang QIU , Ronny KRASHINSKY , Steven HEINRICH , Shirish GADRE , John EDMONDSON , Jack CHOQUETTE , Mark GEBHART , Ramesh JANDHYALA , Poornachandra RAO , Omkar PARANJAPE , Michael SIU
IPC: G06F13/28 , G06F12/0891 , G06F12/0811 , G06F12/084
Abstract: A unified cache subsystem includes a data memory configured as both a shared memory and a local cache memory. The unified cache subsystem processes different types of memory transactions using different data pathways. To process memory transactions that target shared memory, the unified cache subsystem includes a direct pathway to the data memory. To process memory transactions that do not target shared memory, the unified cache subsystem includes a tag processing pipeline configured to identify cache hits and cache misses. When the tag processing pipeline identifies a cache hit for a given memory transaction, the transaction is rerouted to the direct pathway to the data memory. When the tag processing pipeline identifies a cache miss for a given memory transaction, the transaction is pushed into a first-in, first-out (FIFO) queue until the miss data is returned from external memory. The tag processing pipeline is also configured to process texture-oriented memory transactions.
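Since this application shares its abstract with the one above, the sketch here narrows in on the miss FIFO alone: requests that miss in the tag pipeline wait in order until external memory returns their data. Every name (MissFifo, PendingMiss, onFillReturned) is illustrative, and real hardware may match returning fills to requests differently.

```cpp
#include <cstdint>
#include <iostream>
#include <queue>

// Illustrative sketch of the miss FIFO: transactions that miss in the tag
// pipeline wait, in order, until their fill data returns from external memory.
struct PendingMiss {
    uint64_t address;
    int      requesterId;   // e.g. which warp or unit issued the load
};

class MissFifo {
public:
    void enqueue(const PendingMiss& miss) { fifo_.push(miss); }

    // Called when external memory returns a line; the oldest pending miss
    // completes first (first-in, first-out).
    void onFillReturned(uint32_t line) {
        if (fifo_.empty()) return;
        PendingMiss done = fifo_.front();
        fifo_.pop();
        std::cout << "requester " << done.requesterId
                  << " gets line for 0x" << std::hex << done.address
                  << std::dec << " = " << line << "\n";
    }

private:
    std::queue<PendingMiss> fifo_;
};

int main() {
    MissFifo fifo;
    fifo.enqueue({0x1000, 0});
    fifo.enqueue({0x2000, 1});
    fifo.onFillReturned(42);   // completes the miss at 0x1000 first
    fifo.onFillReturned(43);   // then the miss at 0x2000
}
```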