-
Publication No.: US20220138895A1
Publication Date: 2022-05-05
Application No.: US17430041
Filing Date: 2020-03-14
Applicant: Intel Corporation
Inventor: Vasanth Ranganathan , Abhishek R. Appu , Ben Ashbaugh , Peter Doyle , Brandon Fliflet , Arthur Hunter , Brent Insko , Scott Janus , Altug Koker , Aditya Navale , Joydeep Ray , Kamal Sinha , Lakshminarayanan Striramassarma , Prasoonkumar Surti , James Valerio
Abstract: Embodiments are generally directed to compute optimization in graphics processing. An embodiment of an apparatus includes one or more processors including a multi-tile graphics processing unit (GPU) to process data, the multi-tile GPU including multiple processor tiles; and a memory for storage of data for processing, wherein the apparatus is to receive compute work for processing by the GPU, partition the compute work into multiple work units, assign each of multiple work units to one of the processor tiles, and process the compute work using the processor tiles assigned to the work units.
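The abstract describes partitioning compute work into units and assigning each unit to a processor tile. The following is a minimal illustrative sketch of that idea, not the patented implementation; the structure names and the even-split policy are assumptions for illustration only.

```cpp
// Illustrative sketch: splitting a compute workload into work units and
// assigning each unit to a tile of a hypothetical multi-tile GPU.
#include <cstdint>
#include <iostream>
#include <vector>

struct WorkUnit {
    uint32_t first_group;  // index of the first work group in this unit
    uint32_t group_count;  // number of work groups in this unit
    uint32_t tile_id;      // processor tile chosen to execute this unit
};

// Split `total_groups` work groups into one work unit per tile, spreading the
// remainder across the leading tiles.
std::vector<WorkUnit> PartitionCompute(uint32_t total_groups, uint32_t num_tiles) {
    std::vector<WorkUnit> units;
    uint32_t base = total_groups / num_tiles;
    uint32_t extra = total_groups % num_tiles;
    uint32_t next = 0;
    for (uint32_t tile = 0; tile < num_tiles; ++tile) {
        uint32_t count = base + (tile < extra ? 1 : 0);
        if (count == 0) break;
        units.push_back({next, count, tile});
        next += count;
    }
    return units;
}

int main() {
    for (const WorkUnit& u : PartitionCompute(/*total_groups=*/1000, /*num_tiles=*/4)) {
        std::cout << "tile " << u.tile_id << ": groups [" << u.first_group
                  << ", " << u.first_group + u.group_count << ")\n";
    }
}
```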
-
Publication No.: US20210255957A1
Publication Date: 2021-08-19
Application No.: US17161465
Filing Date: 2021-01-28
Applicant: Intel Corporation
Inventor: Vikranth Vemulapalli , Lakshminarayanan Striramassarma , Mike MacPherson , Aravindh Anantaraman , Ben Ashbaugh , Murali Ramadoss , William B. Sadler , Jonathan Pearce , Scott Janus , Brent Insko , Vasanth Ranganathan , Kamal Sinha , Arthur Hunter, JR. , Prasoonkumar Surti , Nicolas Galoppo von Borries , Joydeep Ray , Abhishek R. Appu , ElMoustapha Ould-Ahmed-Vall , Altug Koker , Sungye Kim , Subramaniam Maiyuran , Valentin Andrei
IPC: G06F12/0862 , G06T1/20 , G06T1/60
Abstract: Embodiments are generally directed to data prefetching for graphics data processing. An embodiment of an apparatus includes one or more processors including one or more graphics processing units (GPUs); and a plurality of caches to provide storage for the one or more GPUs, the plurality of caches including at least an L1 cache and an L3 cache, wherein the apparatus is to provide intelligent prefetching of data by a prefetcher of a first GPU of the one or more GPUs, including measuring a hit rate for the L1 cache; upon determining that the hit rate for the L1 cache is equal to or greater than a threshold value, limiting a prefetch of data to storage in the L3 cache; and upon determining that the hit rate for the L1 cache is less than the threshold value, allowing the prefetch of data to the L1 cache.
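The decision the abstract describes is a threshold test on the measured L1 hit rate. A minimal sketch of that decision logic follows; the threshold value and the function names are assumptions, not values from the patent.

```cpp
// Illustrative sketch: choosing the target cache level for a prefetch based on
// a measured L1 hit rate and a threshold.
#include <iostream>

enum class PrefetchTarget { kL1, kL3 };

// If L1 is already serving most requests, keep prefetched lines out of it and
// stage them in L3 instead; otherwise allow the prefetch to fill L1.
PrefetchTarget ChoosePrefetchTarget(double l1_hit_rate, double threshold) {
    return (l1_hit_rate >= threshold) ? PrefetchTarget::kL3 : PrefetchTarget::kL1;
}

int main() {
    const double threshold = 0.90;  // assumed tuning value, not from the patent
    for (double rate : {0.95, 0.80}) {
        std::cout << "L1 hit rate " << rate << " -> prefetch into "
                  << (ChoosePrefetchTarget(rate, threshold) == PrefetchTarget::kL3
                          ? "L3 only" : "L1") << "\n";
    }
}
```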
-
Publication No.: US10997086B1
Publication Date: 2021-05-04
Application No.: US16807430
Filing Date: 2020-03-03
Applicant: Intel Corporation
Inventor: Joydeep Ray , Altug Koker , Aditya Navale , Ankur Shah , Murali Ramadoss , Ben Ashbaugh , Ronald Silvas
IPC: G06F12/10 , G06F12/1072
Abstract: Systems and methods for providing shared virtual memory addressing support for a host system are disclosed. In one embodiment, a graphics processor includes processing resources to perform graphics operations. A memory management unit (MMU) is coupled to the processing resources. The MMU is to support a first virtual address size for managing allocation of non-shared virtual memory and to support a second virtual address size for managing allocation of shared virtual memory that is shared between the graphics processor and a host.
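A short sketch of the dual-address-size idea follows. The specific bit widths (48 and 57) and the structure names are hypothetical choices for illustration; the patent does not state them here.

```cpp
// Illustrative sketch: an MMU configuration that applies a wider virtual
// address size to shared (host-visible) allocations than to GPU-local ones.
#include <iostream>

struct MmuConfig {
    unsigned local_va_bits;   // e.g. 48-bit VA for non-shared GPU memory (assumed)
    unsigned shared_va_bits;  // e.g. 57-bit VA matching the host for SVM (assumed)
};

// Return the number of address bits the MMU must translate for an allocation.
unsigned VaBitsFor(const MmuConfig& cfg, bool shared_with_host) {
    return shared_with_host ? cfg.shared_va_bits : cfg.local_va_bits;
}

int main() {
    MmuConfig cfg{48, 57};  // hypothetical sizes
    std::cout << "non-shared allocation: " << VaBitsFor(cfg, false) << "-bit VA\n";
    std::cout << "shared (SVM) allocation: " << VaBitsFor(cfg, true) << "-bit VA\n";
}
```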
-
Publication No.: US10929749B2
Publication Date: 2021-02-23
Application No.: US15494948
Filing Date: 2017-04-24
Applicant: Intel Corporation
Inventor: Narayan Srinivasa , Joydeep Ray , Nicolas C. Galoppo Von Borries , Ben Ashbaugh , Prasoonkumar Surti , Feng Chen , Barath Lakshmanan , Elmoustapha Ould-Ahmed-Vall , Liwei Ma , Linda L. Hurd , Abhishek R. Appu , John C. Weast , Sara S. Baghsorkhi , Justin E. Gottschlich , Chandrasekaran Sakthivel , Farshad Akhbari , Dukhwan Kim , Altug Koker , Nadathur Rajagopalan Satish
Abstract: An apparatus to facilitate optimization of a neural network (NN) is disclosed. The apparatus includes optimization logic to define a NN topology having one or more macro layers, adjust the one or more macro layers to adapt to input and output components of the NN and train the NN based on the one or more macro layers.
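The abstract describes defining a topology of macro layers and adjusting them to the network's input and output components before training. The sketch below models only that adjustment step; the types and the boundary-layer policy are assumptions, not the patent's optimization logic.

```cpp
// Illustrative sketch: a topology of "macro layers" whose first and last
// layers are adapted to the network's input and output sizes before training.
#include <iostream>
#include <vector>

struct MacroLayer {
    int in_features;
    int out_features;
};

// Adjust the boundary layers so the chain matches the given input/output widths.
void AdaptTopology(std::vector<MacroLayer>& layers, int input_size, int output_size) {
    if (layers.empty()) return;
    layers.front().in_features = input_size;
    layers.back().out_features = output_size;
}

int main() {
    std::vector<MacroLayer> topology{{0, 256}, {256, 128}, {128, 0}};
    AdaptTopology(topology, /*input_size=*/784, /*output_size=*/10);
    for (const MacroLayer& l : topology)
        std::cout << l.in_features << " -> " << l.out_features << "\n";
    // Training on the adapted macro layers would follow here.
}
```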
-
Publication No.: US12210477B2
Publication Date: 2025-01-28
Application No.: US17428530
Filing Date: 2020-03-14
Applicant: Intel Corporation
Inventor: Altug Koker , Joydeep Ray , Ben Ashbaugh , Jonathan Pearce , Abhishek Appu , Vasanth Ranganathan , Lakshminarayanan Striramassarma , Elmoustapha Ould-Ahmed-Vall , Aravindh Anantaraman , Valentin Andrei , Nicolas Galoppo Von Borries , Varghese George , Yoav Harel , Arthur Hunter, Jr. , Brent Insko , Scott Janus , Pattabhiraman K , Mike Macpherson , Subramaniam Maiyuran , Marian Alin Petre , Murali Ramadoss , Shailesh Shah , Kamal Sinha , Prasoonkumar Surti , Vikranth Vemulapalli
IPC: G06F15/78 , G06F7/544 , G06F7/575 , G06F7/58 , G06F9/30 , G06F9/38 , G06F9/50 , G06F12/02 , G06F12/06 , G06F12/0802 , G06F12/0804 , G06F12/0811 , G06F12/0862 , G06F12/0866 , G06F12/0871 , G06F12/0875 , G06F12/0882 , G06F12/0888 , G06F12/0891 , G06F12/0893 , G06F12/0895 , G06F12/0897 , G06F12/1009 , G06F12/128 , G06F15/80 , G06F17/16 , G06F17/18 , G06T1/20 , G06T1/60 , H03M7/46 , G06N3/08 , G06T15/06
Abstract: Systems and methods for improving cache efficiency and utilization are disclosed. In one embodiment, a graphics processor includes processing resources to perform graphics operations and a cache controller of a cache coupled to the processing resources. The cache controller is configured to control cache priority by determining whether default settings or an instruction will control cache operations for the cache.
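The core behavior the abstract names is a choice between default cache-control settings and a per-instruction override. A minimal sketch of that selection follows; the enum values and field names are assumed for illustration.

```cpp
// Illustrative sketch: a cache controller choosing between default settings
// and a per-instruction override when resolving cache priority for an access.
#include <iostream>
#include <optional>

enum class CachePriority { kLow, kNormal, kHigh };

struct CacheController {
    CachePriority default_priority = CachePriority::kNormal;

    // If the instruction encodes a cache-control override, honor it;
    // otherwise fall back to the controller's default settings.
    CachePriority ResolvePriority(std::optional<CachePriority> instruction_hint) const {
        return instruction_hint.value_or(default_priority);
    }
};

int main() {
    CacheController ctrl;
    std::cout << "no hint   -> priority "
              << static_cast<int>(ctrl.ResolvePriority(std::nullopt)) << "\n";
    std::cout << "hint=high -> priority "
              << static_cast<int>(ctrl.ResolvePriority(CachePriority::kHigh)) << "\n";
}
```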
-
Publication No.: US12013808B2
Publication Date: 2024-06-18
Application No.: US17429873
Filing Date: 2020-03-14
Applicant: Intel Corporation
Inventor: Altug Koker , Ben Ashbaugh , Scott Janus , Aravindh Anantaraman , Abhishek R. Appu , Niranjan Cooray , Varghese George , Arthur Hunter , Brent E. Insko , Elmoustapha Ould-Ahmed-Vall , Selvakumar Panneer , Vasanth Ranganathan , Joydeep Ray , Kamal Sinha , Lakshminarayanan Striramassarma , Prasoonkumar Surti , Saurabh Tangri
IPC: G06F13/38 , G06F7/544 , G06F7/575 , G06F7/58 , G06F9/30 , G06F9/38 , G06F9/50 , G06F12/02 , G06F12/06 , G06F12/0802 , G06F12/0804 , G06F12/0811 , G06F12/0862 , G06F12/0866 , G06F12/0871 , G06F12/0875 , G06F12/0882 , G06F12/0888 , G06F12/0891 , G06F12/0893 , G06F12/0895 , G06F12/0897 , G06F12/1009 , G06F12/128 , G06F15/78 , G06F15/80 , G06F17/16 , G06F17/18 , G06T1/20 , G06T1/60 , H03M7/46 , G06N3/08 , G06T15/06
CPC classification number: G06F15/7839 , G06F7/5443 , G06F7/575 , G06F7/588 , G06F9/3001 , G06F9/30014 , G06F9/30036 , G06F9/3004 , G06F9/30043 , G06F9/30047 , G06F9/30065 , G06F9/30079 , G06F9/3887 , G06F9/5011 , G06F9/5077 , G06F12/0215 , G06F12/0238 , G06F12/0246 , G06F12/0607 , G06F12/0802 , G06F12/0804 , G06F12/0811 , G06F12/0862 , G06F12/0866 , G06F12/0871 , G06F12/0875 , G06F12/0882 , G06F12/0888 , G06F12/0891 , G06F12/0893 , G06F12/0895 , G06F12/0897 , G06F12/1009 , G06F12/128 , G06F15/8046 , G06F17/16 , G06F17/18 , G06T1/20 , G06T1/60 , H03M7/46 , G06F9/3802 , G06F9/3818 , G06F9/3867 , G06F2212/1008 , G06F2212/1021 , G06F2212/1044 , G06F2212/302 , G06F2212/401 , G06F2212/455 , G06F2212/60 , G06N3/08 , G06T15/06
Abstract: Embodiments are generally directed to a multi-tile architecture for graphics operations. An embodiment of an apparatus includes a multi-tile architecture for graphics operations including a multi-tile graphics processor, the multi-tile graphics processor including one or more dies; multiple processor tiles installed on the one or more dies; and a structure to interconnect the processor tiles on the one or more dies, wherein the structure is to enable communications between the processor tiles.
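The sketch below models only the routing role of such an interconnect structure, distinguishing on-die from cross-die hops. It is a simplified assumption-based illustration, not the patent's fabric design.

```cpp
// Illustrative sketch: an interconnect that carries a message from one
// processor tile to another, on the same die or across dies.
#include <iostream>
#include <vector>

struct Tile {
    int die_id;
    int tile_id;
};

struct Interconnect {
    std::vector<Tile> tiles;

    // Deliver a payload from src to dst; cross-die hops would traverse a
    // die-to-die link, same-die hops stay on the local fabric.
    void Send(int src, int dst, int payload) const {
        bool cross_die = tiles[src].die_id != tiles[dst].die_id;
        std::cout << "tile " << src << " -> tile " << dst
                  << (cross_die ? " (cross-die link)" : " (on-die fabric)")
                  << ": payload " << payload << "\n";
    }
};

int main() {
    Interconnect fabric{{{0, 0}, {0, 1}, {1, 2}, {1, 3}}};
    fabric.Send(0, 1, 42);  // same die
    fabric.Send(1, 2, 7);   // different dies
}
```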
-
Publication No.: US20240111590A1
Publication Date: 2024-04-04
Application No.: US17937270
Filing Date: 2022-09-30
Applicant: Intel Corporation
Inventor: Biju George , Vasanth Ranganathan , Fangwen Fu , Ben Ashbaugh , Roland Schulz
CPC classification number: G06F9/5038 , G06T1/20
Abstract: An apparatus to facilitate ordered thread dispatch for thread teams is disclosed. The apparatus includes one or more processors including a graphic processor, the graphics processor including a plurality of processing resources, and wherein the graphics processor is to: allocate a thread team local identifier (ID) for respective threads of a thread team comprising a plurality of hardware threads that are to be executed solely by a processing resource of the plurality of processing resources; and dispatch the respective threads together into the processing resource, the respective threads having the thread team local ID allocated.
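A minimal sketch of the dispatch described above: each hardware thread of a team gets a team-local ID, and the whole team is bound to a single processing resource. The data layout and ID policy are assumptions for illustration.

```cpp
// Illustrative sketch: allocating a team-local ID to each hardware thread of a
// thread team and dispatching the whole team to one processing resource, so
// the IDs stay ordered within that resource.
#include <cstdint>
#include <iostream>
#include <vector>

struct HwThread {
    uint32_t team_local_id;  // position of the thread within its team
    uint32_t resource_id;    // processing resource the team is bound to
};

std::vector<HwThread> DispatchTeam(uint32_t team_size, uint32_t resource_id) {
    std::vector<HwThread> team;
    for (uint32_t i = 0; i < team_size; ++i)
        team.push_back({i, resource_id});  // IDs allocated in dispatch order
    return team;
}

int main() {
    for (const HwThread& t : DispatchTeam(/*team_size=*/8, /*resource_id=*/3))
        std::cout << "resource " << t.resource_id
                  << " thread local id " << t.team_local_id << "\n";
}
```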
-
Publication No.: US20240070926A1
Publication Date: 2024-02-29
Application No.: US18466141
Filing Date: 2023-09-13
Applicant: Intel Corporation
Inventor: Joydeep Ray , Ben Ashbaugh , Prasoonkumar Surti , Pradeep Ramani , Rama Harihara , Jerin C. Justin , Jing Huang , Xiaoming Cui , Timothy B. Costa , Ting Gong , Elmoustapha Ould-ahmed-vall , Kumar Balasubramanian , Anil Thomas , Oguz H. Elibol , Jayaram Bobba , Guozhong Zhuang , Bhavani Subramanian , Gokce Keskin , Chandrasekaran Sakthivel , Rajesh Poornachandran
CPC classification number: G06T9/002 , G06F12/023 , G06T15/005 , G06F2212/302 , G06F2212/401
Abstract: Embodiments are generally directed to compression in machine learning and deep learning processing. An embodiment of an apparatus for compression of untyped data includes a graphical processing unit (GPU) including a data compression pipeline, the data compression pipeline including a data port coupled with one or more shader cores, wherein the data port is to allow transfer of untyped data without format conversion, and a 3D compression/decompression unit to provide for compression of untyped data to be stored to a memory subsystem and decompression of untyped data from the memory subsystem.
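The sketch below illustrates the data path only: untyped bytes are forwarded with no format conversion and compressed before reaching memory, then decompressed on the way back. The run-length codec is a placeholder stand-in, not the patent's 3D compression unit.

```cpp
// Illustrative sketch: untyped bytes pass through unchanged and are compressed
// before store / decompressed after load. The codec here is only a stand-in.
#include <cstdint>
#include <iostream>
#include <vector>

using Bytes = std::vector<uint8_t>;

// Compress as (count, value) pairs - placeholder for the real codec.
Bytes Compress(const Bytes& in) {
    Bytes out;
    for (size_t i = 0; i < in.size();) {
        uint8_t value = in[i];
        uint8_t run = 0;
        while (i < in.size() && in[i] == value && run < 255) { ++i; ++run; }
        out.push_back(run);
        out.push_back(value);
    }
    return out;
}

Bytes Decompress(const Bytes& in) {
    Bytes out;
    for (size_t i = 0; i + 1 < in.size(); i += 2)
        out.insert(out.end(), in[i], in[i + 1]);
    return out;
}

int main() {
    Bytes untyped(64, 0xAB);            // raw, untyped payload from a shader core
    Bytes stored = Compress(untyped);   // compressed on the way to memory
    Bytes loaded = Decompress(stored);  // decompressed on the way back
    std::cout << "raw " << untyped.size() << " B -> stored " << stored.size()
              << " B, round-trip ok: " << (loaded == untyped) << "\n";
}
```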
-
Publication No.: US11709714B2
Publication Date: 2023-07-25
Application No.: US17686089
Filing Date: 2022-03-03
Applicant: Intel Corporation
Inventor: Ben Ashbaugh , Jonathan Pearce , Murali Ramadoss , Vikranth Vemulapalli , William B. Sadler , Sungye Kim , Marian Alin Petre
IPC: G06F9/50 , G06F9/38 , G06F9/54 , G06F12/0837 , G06F9/48 , G06F9/345 , G06T1/60 , G06F9/30 , G06T15/00 , G06F16/245 , G06T1/20
CPC classification number: G06F9/5027 , G06F9/3455 , G06F9/3851 , G06F9/3877 , G06F9/3885 , G06F9/4881 , G06F9/5033 , G06F9/5066 , G06F9/545 , G06F12/0837 , G06F9/30178 , G06F9/3887 , G06F16/24569 , G06T1/20 , G06T1/60 , G06T15/005
Abstract: Embodiments are generally directed to thread group scheduling for graphics processing. An embodiment of an apparatus includes a plurality of processors including a plurality of graphics processors to process data; a memory; and one or more caches for storage of data for the plurality of graphics processors, wherein the one or more processors are to schedule a plurality of groups of threads for processing by the plurality of graphics processors, the scheduling of the plurality of groups of threads including the plurality of processors to apply a bias for scheduling the plurality of groups of threads according to a cache locality for the one or more caches.
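A minimal sketch of the cache-locality bias described above: prefer the graphics processor whose cache is most likely to already hold the thread group's data. The scoring heuristic and data structures are assumed for illustration.

```cpp
// Illustrative sketch: scheduling a thread group with a bias toward the
// processor whose cache most recently held the data the group will touch.
#include <iostream>
#include <vector>

struct ThreadGroup {
    int id;
    int data_region;  // identifies the data the group will touch
};

// last_region[p] = data region most recently cached by processor p.
int PickProcessor(const ThreadGroup& g, const std::vector<int>& last_region) {
    for (int p = 0; p < static_cast<int>(last_region.size()); ++p)
        if (last_region[p] == g.data_region) return p;  // cache-locality bias
    return 0;  // no cache holds the data; fall back to the first processor
}

int main() {
    std::vector<int> last_region{7, 3, 9};  // per-processor cached region
    for (const ThreadGroup& g : std::vector<ThreadGroup>{{0, 3}, {1, 5}})
        std::cout << "group " << g.id << " -> processor "
                  << PickProcessor(g, last_region) << "\n";
}
```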
-
Publication No.: US20230230289A1
Publication Date: 2023-07-20
Application No.: US18152643
Filing Date: 2023-01-10
Applicant: Intel Corporation
Inventor: Joydeep Ray , Ben Ashbaugh , Prasoonkumar Surti , Pradeep Ramani , Rama Harihara , Jerin C. Justin , Jing Huang , Xiaoming Cui , Timothy B. Costa , Ting Gong , Elmoustapha Ould-ahmed-vall , Kumar Balasubramanian , Anil Thomas , Oguz H. Elibol , Jayaram Bobba , Guozhong Zhuang , Bhavani Subramanian , Gokce Keskin , Chandrasekaran Sakthivel , Rajesh Poornachandran
CPC classification number: G06T9/002 , G06F12/023 , G06T15/005 , G06F2212/401 , G06F2212/302
Abstract: Embodiments are generally directed to compression in machine learning and deep learning processing. An embodiment of an apparatus for compression of untyped data includes a graphical processing unit (GPU) including a data compression pipeline, the data compression pipeline including a data port coupled with one or more shader cores, wherein the data port is to allow transfer of untyped data without format conversion, and a 3D compression/decompression unit to provide for compression of untyped data to be stored to a memory subsystem and decompression of untyped data from the memory subsystem.