Extend GPU/CPU coherency to multi-GPU cores
Abstract:
In an example, an apparatus comprises a plurality of processing unit cores, a plurality of cache memory modules associated with the plurality of processing unit cores, and a machine learning model communicatively coupled to the plurality of processing unit cores, wherein the plurality of cache memory modules share cache coherency data with the machine learning model. Other embodiments are also disclosed and claimed.
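The arrangement in the abstract can be illustrated with a minimal software sketch. All names here (`CacheModule`, `CoherencyModel`, the MESI-style states, the frequency counter standing in for the machine learning model) are hypothetical illustrations: the patent does not specify an implementation, only that the cache modules share coherency data with a model coupled to the cores.

```python
from collections import defaultdict, Counter

class CoherencyModel:
    """Toy stand-in for the machine learning model in the abstract:
    it counts, per cache line, which core produced coherency events,
    and predicts the most frequent one. (Hypothetical; the patent
    does not describe the model's internals.)"""
    def __init__(self):
        self.history = defaultdict(Counter)

    def observe(self, line_addr, core_id, new_state):
        # Coherency data shared by a cache module: which core
        # transitioned which line into which state.
        self.history[line_addr][core_id] += 1

    def predict_next_core(self, line_addr):
        counts = self.history[line_addr]
        return counts.most_common(1)[0][0] if counts else None

class CacheModule:
    """One cache module associated with one processing unit core.
    On each access it records a MESI-style state locally and shares
    the transition with the model -- the coupling the abstract
    describes."""
    def __init__(self, core_id, model):
        self.core_id = core_id
        self.model = model
        self.state = {}  # line address -> MESI-style state

    def access(self, line_addr, write=False):
        new_state = "M" if write else "S"
        self.state[line_addr] = new_state
        self.model.observe(line_addr, self.core_id, new_state)

# A plurality of cache modules, all coupled to one shared model.
model = CoherencyModel()
caches = [CacheModule(core_id, model) for core_id in range(4)]

caches[2].access(0x1000, write=True)
caches[2].access(0x1000)
caches[0].access(0x1000)

print(model.predict_next_core(0x1000))  # core 2 touched the line most often
```

The point of the sketch is only the data flow: every cache module reports its coherency transitions to a single shared model, which could then inform placement or prefetch decisions across the multi-GPU cores.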