-
Publication Number: US20180293183A1
Publication Date: 2018-10-11
Application Number: US15482690
Filing Date: 2017-04-07
Applicant: Intel Corporation
Inventor: NIRANJAN L. COORAY , ABHISHEK R. APPU , ALTUG KOKER , JOYDEEP RAY , BALAJI VEMBU , PATTABHIRAMAN K , DAVID PUFFER , DAVID J. COWPERTHWAITE , RAJESH M. SANKARAN , SATYESHWAR SINGH , SAMEER KP , ANKUR N. SHAH , KUN TIAN
IPC: G06F13/16 , G06F13/40 , G06F12/1027 , G06F12/0802
CPC classification number: G06F13/16 , G06F12/0802 , G06F12/1009 , G06F12/1027 , G06F12/1036 , G06F13/4068 , G06F2212/1024 , G06F2212/302 , G06F2212/60 , G06F2212/68
Abstract: An apparatus and method are described for implementing memory management in a graphics processing system. For example, one embodiment of an apparatus comprises: a first plurality of graphics processing resources to execute graphics commands and process graphics data; a first memory management unit (MMU) to communicatively couple the first plurality of graphics processing resources to a system-level MMU to access a system memory; a second plurality of graphics processing resources to execute graphics commands and process graphics data; a second MMU to communicatively couple the second plurality of graphics processing resources to the first MMU; wherein the first MMU is configured as a master MMU having a direct connection to the system-level MMU and the second MMU comprises a slave MMU configured to send memory transactions to the first MMU, the first MMU either servicing a memory transaction or sending the memory transaction to the system-level MMU on behalf of the second MMU.
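The routing this abstract describes can be sketched briefly: a slave MMU forwards translations it cannot service to the master MMU, which either services them locally or sends them on to the system-level MMU. The Python below is a minimal illustrative model under those assumptions only; the class names, dictionary-backed TLB, and page table are invented for the example and do not reflect any actual hardware interface.

```python
# Illustrative model of the master/slave graphics MMU routing described in
# the abstract. All names and structures here are hypothetical.

class SystemMMU:
    """System-level MMU backed by a simple page-table dict (stub)."""
    def __init__(self, page_table):
        self.page_table = page_table

    def translate(self, virtual_addr):
        return self.page_table[virtual_addr]  # KeyError stands in for a page fault


class GraphicsMMU:
    """Graphics MMU with a local TLB. A master has a direct path to the
    system-level MMU; a slave forwards its misses to the master instead."""
    def __init__(self, system_mmu=None, master=None):
        self.system_mmu = system_mmu   # set only on the master MMU
        self.master = master           # set only on slave MMUs
        self.tlb = {}                  # virtual -> physical translation cache

    def translate(self, virtual_addr):
        if virtual_addr in self.tlb:           # transaction serviced locally
            return self.tlb[virtual_addr]
        if self.master is not None:            # slave: send the miss upstream
            phys = self.master.translate(virtual_addr)
        else:                                  # master: go to the system-level MMU
            phys = self.system_mmu.translate(virtual_addr)
        self.tlb[virtual_addr] = phys
        return phys


# Usage: two groups of graphics resources share one path to system memory.
system_mmu = SystemMMU(page_table={0x1000: 0x8000, 0x2000: 0x9000})
master_mmu = GraphicsMMU(system_mmu=system_mmu)
slave_mmu = GraphicsMMU(master=master_mmu)
assert slave_mmu.translate(0x2000) == 0x9000
```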
-
Publication Number: US20200167221A1
Publication Date: 2020-05-28
Application Number: US16203578
Filing Date: 2018-11-28
Applicant: Intel Corporation
Inventor: BALAJI VEMBU , BRYAN WHITE , ANKUR SHAH , MURALI RAMADOSS , DAVE PUFFER , ALTUG KOKER , ADITYA NAVALE , MAHESH NATU
Abstract: Apparatus and method for scalable error reporting. For example, one embodiment of an apparatus comprises error detection circuitry to detect an error in a component of a first tile within a tile-based hierarchy of a processing device; error classification circuitry to classify the error and record first error data based on the classification; a first tile interface to combine the first error data with second error data received from one or more other components associated with the first tile to generate first accumulated error data; and a master tile interface to combine the first accumulated error data with second accumulated error data received from at least one other tile interface to generate second accumulated error data and to provide the second accumulated error data to a host executing an application to process the second accumulated error data.
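The accumulation flow in this abstract lends itself to a short sketch: each tile interface merges error records classified from its own components, and a master tile interface merges the per-tile aggregates before handing them to the host. The code below is only a schematic model under those assumptions; the record fields and class names are invented and not taken from the patent.

```python
# Hypothetical sketch of tile-based error accumulation.
from dataclasses import dataclass

@dataclass
class ErrorRecord:
    tile_id: int
    component: str
    severity: str        # classification result, e.g. "correctable" or "fatal"
    detail: str

class TileInterface:
    def __init__(self, tile_id):
        self.tile_id = tile_id
        self.records = []

    def report(self, component, severity, detail):
        # detection and classification collapsed into a single call here
        self.records.append(ErrorRecord(self.tile_id, component, severity, detail))

    def accumulated(self):
        return list(self.records)

class MasterTileInterface:
    def __init__(self, tiles):
        self.tiles = tiles

    def accumulated(self):
        # combine every tile's accumulated error data into one host report
        combined = []
        for tile in self.tiles:
            combined.extend(tile.accumulated())
        return combined

# Usage: the host application receives one combined report for all tiles.
tile0, tile1 = TileInterface(0), TileInterface(1)
tile0.report("sampler", "correctable", "parity error")
tile1.report("l3_bank", "fatal", "uncorrectable ECC")
print(MasterTileInterface([tile0, tile1]).accumulated())
```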
-
Publication Number: US20200005516A1
Publication Date: 2020-01-02
Application Number: US16024821
Filing Date: 2018-06-30
Applicant: Intel Corporation
Inventor: MICHAEL APODACA , ANKUR SHAH , BEN ASHBAUGH , BRANDON FLIFLET , HEMA NALLURI , PATTABHIRAMAN K , PETER DOYLE , JOSEPH KOSTON , JAMES VALERIO , MURALI RAMADOSS , ALTUG KOKER , ADITYA NAVALE , PRASOONKUMAR SURTI , BALAJI VEMBU
IPC: G06T15/00
Abstract: Apparatus and method for simultaneous command streamers. For example, one embodiment of an apparatus comprises: a plurality of work element queues to store work elements for a plurality of thread contexts, each work element associated with a context descriptor identifying a context storage region in memory; a plurality of command streamers, each command streamer associated with one of the plurality of work element queues, the command streamers to independently submit instructions for execution as specified by the work elements; a thread dispatcher to evaluate the thread contexts including priority values, to tag each instruction with an execution identifier (ID), and to responsively dispatch each instruction including the execution ID in accordance with the thread context; and a plurality of graphics functional units to independently execute each instruction dispatched by the thread dispatcher and to associate each instruction with a thread context based on its execution ID.
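As a rough illustration of the dispatch mechanism described here, the sketch below models several command streamers, each draining its own work element queue, and a dispatcher that tags every instruction with its context's execution ID and picks the highest-priority context first. The priority scheme, queue shapes, and names are all assumptions for the example, not the patent's implementation.

```python
# Illustrative sketch of independent command streamers feeding a shared dispatcher.
import heapq
from collections import deque

class CommandStreamer:
    def __init__(self, exec_id, priority, work_queue):
        self.exec_id = exec_id            # execution ID tagged onto each instruction
        self.priority = priority
        self.queue = deque(work_queue)    # work element queue for this context

    def next_instruction(self):
        return self.queue.popleft() if self.queue else None

class ThreadDispatcher:
    def __init__(self, streamers):
        self.streamers = streamers

    def dispatch_all(self):
        """Drain all streamers, higher-priority context first, tagging each
        instruction with its context's execution ID (exec_id breaks ties)."""
        dispatched = []
        ready = [(-s.priority, s.exec_id, s) for s in self.streamers]
        heapq.heapify(ready)
        while ready:
            neg_prio, exec_id, streamer = heapq.heappop(ready)
            instr = streamer.next_instruction()
            if instr is not None:
                dispatched.append((exec_id, instr))  # functional units key on exec_id
                heapq.heappush(ready, (neg_prio, exec_id, streamer))
        return dispatched

# Usage: two contexts submit independently; the higher-priority one drains first.
d = ThreadDispatcher([
    CommandStreamer(exec_id=1, priority=2, work_queue=["DRAW_A", "DRAW_B"]),
    CommandStreamer(exec_id=2, priority=1, work_queue=["COMPUTE_X"]),
])
print(d.dispatch_all())
```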
-
Publication Number: US20190391937A1
Publication Date: 2019-12-26
Application Number: US16453995
Filing Date: 2019-06-26
Applicant: Intel Corporation
Inventor: NIRANJAN L. COORAY , ABHISHEK R. APPU , ALTUG KOKER , JOYDEEP RAY , BALAJI VEMBU , PATTABHIRAMAN K , DAVID PUFFER , DAVID J. COWPERTHWAITE , RAJESH M. SANKARAN , SATYESHWAR SINGH , SAMEER KP , ANKUR N. SHAH , KUN TIAN
IPC: G06F13/16 , G06F13/40 , G06F12/0802 , G06F12/1036 , G06F12/1027 , G06F12/1009
Abstract: An apparatus and method are described for implementing memory management in a graphics processing system. For example, one embodiment of an apparatus comprises: a first plurality of graphics processing resources to execute graphics commands and process graphics data; a first memory management unit (MMU) to communicatively couple the first plurality of graphics processing resources to a system-level MMU to access a system memory; a second plurality of graphics processing resources to execute graphics commands and process graphics data; a second MMU to communicatively couple the second plurality of graphics processing resources to the first MMU; wherein the first MMU is configured as a master MMU having a direct connection to the system-level MMU and the second MMU comprises a slave MMU configured to send memory transactions to the first MMU, the first MMU either servicing a memory transaction or sending the memory transaction to the system-level MMU on behalf of the second MMU.
-
Publication Number: US20180300964A1
Publication Date: 2018-10-18
Application Number: US15488914
Filing Date: 2017-04-17
Applicant: Intel Corporation
Inventor: BARATH LAKSHAMANAN , LINDA L. HURD , BEN J. ASHBAUGH , ELMOUSTAPHA OULD-AHMED-VALL , LIWEI MA , JINGYI JIN , JUSTIN E. GOTTSCHLICH , CHANDRASEKARAN SAKTHIVEL , MICHAEL S. STRICKLAND , BRIAN T. LEWIS , LINDSEY KUPER , ALTUG KOKER , ABHISHEK R. APPU , PRASOONKUMAR SURTI , JOYDEEP RAY , BALAJI VEMBU , JAVIER S. TUREK , NAILA FAROOQUI
CPC classification number: G07C5/008 , B60W30/00 , G01C21/34 , G01S19/13 , G05D1/0088 , G05D2201/0213 , G06F9/5027 , G06F2209/509 , G06N20/00 , G08G1/0112 , G08G1/012 , G08G1/052 , H04L43/0852 , H04L67/12 , H04W28/08
Abstract: One embodiment provides for a computing device within an autonomous vehicle, the computing device comprising a wireless network device to enable a wireless data connection with an autonomous vehicle network, a set of multiple processors including a general-purpose processor and a general-purpose graphics processor, the set of multiple processors to execute a compute manager to manage execution of compute workloads associated with the autonomous vehicle, the compute workloads associated with autonomous operations of the autonomous vehicle, and offload logic configured to execute on the set of multiple processors, the offload logic to determine to offload one or more of the compute workloads to one or more autonomous vehicles within range of the wireless network device.
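The offload decision described in the abstract can be pictured as a small policy function: if the local processors are saturated, hand a non-critical workload to a peer vehicle in wireless range with enough spare capacity. The thresholds, fields, and names below are purely hypothetical illustration, not the patent's logic.

```python
# A minimal, hypothetical sketch of offloading compute work to a nearby vehicle.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    cost: float            # estimated compute cost (arbitrary units)
    critical: bool         # safety-critical work is never offloaded in this sketch

@dataclass
class PeerVehicle:
    vehicle_id: str
    spare_capacity: float
    link_latency_ms: float

def choose_offload_target(workload, local_utilization, peers,
                          util_threshold=0.85, max_latency_ms=20.0):
    """Return a peer to offload `workload` to, or None to run it locally."""
    if workload.critical or local_utilization < util_threshold:
        return None
    candidates = [p for p in peers
                  if p.spare_capacity >= workload.cost
                  and p.link_latency_ms <= max_latency_ms]
    # prefer the peer with the most headroom
    return max(candidates, key=lambda p: p.spare_capacity, default=None)

# Usage
peers = [PeerVehicle("AV-17", spare_capacity=5.0, link_latency_ms=8.0),
         PeerVehicle("AV-42", spare_capacity=1.0, link_latency_ms=4.0)]
task = Workload("map_update_inference", cost=3.0, critical=False)
print(choose_offload_target(task, local_utilization=0.92, peers=peers))
```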
-
Publication Number: US20180293690A1
Publication Date: 2018-10-11
Application Number: US15482685
Filing Date: 2017-04-07
Applicant: Intel Corporation
Inventor: JOYDEEP RAY , ABHISHEK R. APPU , ALTUG KOKER , BALAJI VEMBU
IPC: G06T1/20 , G06T1/60 , G06F12/0875
CPC classification number: G06T1/20 , G06F12/0811 , G06F12/0815 , G06F12/0831 , G06F12/0875 , G06F12/0888 , G06F2212/1024 , G06F2212/302 , G06F2212/455 , G06F2212/621 , G06T1/60
Abstract: An apparatus and method are described for managing data which is biased towards a processor or a GPU. For example, one embodiment of an apparatus comprises: a processor comprising one or more cores to execute instructions and process data, one or more cache levels, and cache coherence controllers to maintain coherent data in the one or more cache levels; a graphics processing unit (GPU) to execute graphics instructions and process graphics data, wherein the GPU and processor cores are to share a virtual address space for accessing a system memory; a GPU memory coupled to the GPU, the GPU memory addressable through the virtual address space shared by the processor cores and GPU; and bias management circuitry to store an indication, for each of a plurality of blocks of data, whether the data has a processor bias or a GPU bias, wherein if the data has a GPU bias, then the data is to be accessed by the GPU from the GPU memory without necessarily accessing the processor's cache coherence controllers and wherein requests for the data from the processor cores are processed as uncached requests, preventing the data from being cached in the one or more cache levels of the processor.
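A per-block bias table is the core of the mechanism this abstract describes: GPU-biased blocks are read straight from GPU-attached memory without snooping the processor's coherence controllers, while processor requests for those blocks are handled as uncached. The sketch below is an illustrative model under those assumptions; block size, the table layout, and the return strings are invented for the example.

```python
# Hypothetical model of a per-block processor/GPU bias table.
from enum import Enum

class Bias(Enum):
    PROCESSOR = "processor"
    GPU = "gpu"

class BiasManager:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.table = {}                        # block index -> Bias

    def _block(self, addr):
        return addr // self.block_size

    def set_bias(self, addr, bias):
        self.table[self._block(addr)] = bias

    def gpu_access(self, addr):
        if self.table.get(self._block(addr), Bias.PROCESSOR) is Bias.GPU:
            return "read from GPU memory, no coherence snoop"
        return "route through processor cache coherence controllers"

    def cpu_access(self, addr):
        if self.table.get(self._block(addr), Bias.PROCESSOR) is Bias.GPU:
            return "uncached request (not allocated into CPU caches)"
        return "normal cached access"

# Usage
bm = BiasManager()
bm.set_bias(0x10000, Bias.GPU)
print(bm.gpu_access(0x10000))   # fast path, no snoop
print(bm.cpu_access(0x10000))   # treated as uncached
```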
-
Publication Number: US20210201556A1
Publication Date: 2021-07-01
Application Number: US17141431
Filing Date: 2021-01-05
Applicant: Intel Corporation
Inventor: JOYDEEP RAY , ABHISHEK R. APPU , PATTABHIRAMAN K , BALAJI VEMBU , ALTUG KOKER , NIRANJAN L. COORAY , JOSH B. MASTRONARDE
Abstract: An apparatus and method are described for allocating local memories to virtual machines. For example, one embodiment of an apparatus comprises: a command streamer to queue commands from a plurality of virtual machines (VMs) or applications, the commands to be distributed from the command streamer and executed by graphics processing resources of a graphics processing unit (GPU); a tile cache to store graphics data associated with the plurality of VMs or applications as the commands are executed by the graphics processing resources; and tile cache allocation hardware logic to allocate a first portion of the tile cache to a first VM or application and a second portion of the tile cache to a second VM or application; the tile cache allocation hardware logic to further allocate a first region in system memory to store spill-over data when the first portion of the tile cache and/or the second portion of the tile cache becomes full.
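The allocation and spill-over behavior can be sketched as a simple partitioning scheme: each VM gets a slice of the tile cache, and writes that overflow the slice land in a system-memory spill region reserved for that VM. Everything below (line counts, class names, the write path) is assumed for illustration and is not the patent's hardware design.

```python
# Illustrative sketch of per-VM tile cache partitioning with spill-over.
class TileCachePartition:
    def __init__(self, vm_id, cache_lines, spill_region_lines):
        self.vm_id = vm_id
        self.cache_capacity = cache_lines
        self.spill_capacity = spill_region_lines
        self.cache = []          # lines resident in the tile cache slice
        self.spill = []          # lines spilled to the system-memory region

    def write(self, line):
        if len(self.cache) < self.cache_capacity:
            self.cache.append(line)
            return "tile cache"
        if len(self.spill) < self.spill_capacity:
            self.spill.append(line)              # spill-over path
            return "system memory spill region"
        raise MemoryError(f"VM {self.vm_id}: spill region exhausted")

class TileCacheAllocator:
    def __init__(self, total_lines):
        self.remaining = total_lines
        self.partitions = {}

    def allocate(self, vm_id, cache_lines, spill_region_lines):
        if cache_lines > self.remaining:
            raise ValueError("tile cache over-committed")
        self.remaining -= cache_lines
        part = TileCachePartition(vm_id, cache_lines, spill_region_lines)
        self.partitions[vm_id] = part
        return part

# Usage: two VMs share a 1024-line tile cache with separate spill regions.
alloc = TileCacheAllocator(total_lines=1024)
vm0 = alloc.allocate("vm0", cache_lines=768, spill_region_lines=4096)
vm1 = alloc.allocate("vm1", cache_lines=256, spill_region_lines=4096)
print(vm1.write("tile_0_0"))
```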
-
Publication Number: US20200082059A1
Publication Date: 2020-03-12
Application Number: US16126060
Filing Date: 2018-09-10
Applicant: Intel Corporation
Inventor: BALAJI VEMBU , VIDHYA KRISHNAN , SANDEEP SODHI , SREEKANTH MAVILA , ALTUG KOKER , ADITYA NAVALE , SCOTT JANUS , Changliang WANG
Abstract: Apparatus and method for scalable content protection. For example, one embodiment of an apparatus comprises: cryptographic management circuitry to securely store one or more keys associated with one or more media apps/applications; a plurality of processing engines, each processing engine comprising circuitry to process media content of the one or more media apps/applications; and a scheduler to schedule processing of the media content by the processing engines; wherein the cryptographic management circuitry is to restore a first cryptographic state including a first key associated with a first media app/application and/or first media content responsive to a request to process the first media content on a first processing engine.
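A small sketch may help picture the restore step described here: the cryptographic management logic keeps per-application keys and, whenever the scheduler places media work on an engine, restores that application's cryptographic state onto the engine before processing begins. The dictionary key store, round-robin scheduler, and all names are stand-ins for illustration only; real key storage would of course be hardware-protected.

```python
# Hypothetical sketch of restoring per-application cryptographic state onto
# whichever processing engine the scheduler selects.
class CryptoManager:
    def __init__(self):
        self._keys = {}                 # app_id -> key material (a dict stands in for secure storage)

    def store_key(self, app_id, key):
        self._keys[app_id] = key

    def restore_state(self, engine, app_id):
        # load the app's key and crypto context onto the target engine
        engine.crypto_state = {"app_id": app_id, "key": self._keys[app_id]}

class ProcessingEngine:
    def __init__(self, engine_id):
        self.engine_id = engine_id
        self.crypto_state = None

    def process(self, content):
        assert self.crypto_state is not None, "crypto state must be restored first"
        return f"engine {self.engine_id} processed {content} for {self.crypto_state['app_id']}"

class Scheduler:
    def __init__(self, engines, crypto_mgr):
        self.engines = engines
        self.crypto_mgr = crypto_mgr
        self._next = 0

    def schedule(self, app_id, content):
        engine = self.engines[self._next % len(self.engines)]   # round-robin pick
        self._next += 1
        self.crypto_mgr.restore_state(engine, app_id)           # restore before processing
        return engine.process(content)

# Usage
mgr = CryptoManager()
mgr.store_key("player_app", key=b"\x00" * 16)
sched = Scheduler([ProcessingEngine(0), ProcessingEngine(1)], mgr)
print(sched.schedule("player_app", "frame_0"))
```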
-
Publication Number: US20190318550A1
Publication Date: 2019-10-17
Application Number: US16383849
Filing Date: 2019-04-15
Applicant: Intel Corporation
Inventor: BARATH LAKSHAMANAN , LINDA L. HURD , BEN J. ASHBAUGH , ELMOUSTAPHA OULD-AHMED-VALL , LIWEI MA , JINGYI JIN , JUSTIN E. GOTTSCHLICH , CHANDRASEKARAN SAKTHIVEL , MICHAEL S. STRICKLAND , BRIAN T. LEWIS , LINDSEY KUPER , ALTUG KOKER , ABHISHEK R. APPU , PRASOONKUMAR SURTI , JOYDEEP RAY , BALAJI VEMBU , JAVIER S. TUREK , NAILA FAROOQUI
IPC: G07C5/00 , G06F9/50 , H04W28/08 , B60W30/00 , H04L29/08 , G01C21/34 , G05D1/00 , G08G1/01 , G06N20/00
Abstract: One embodiment provides for a computing device within an autonomous vehicle, the computing device comprising a wireless network device to enable a wireless data connection with an autonomous vehicle network, a set of multiple processors including a general-purpose processor and a general-purpose graphics processor, the set of multiple processors to execute a compute manager to manage execution of compute workloads associated with the autonomous vehicle, the compute workloads associated with autonomous operations of the autonomous vehicle, and offload logic configured to execute on the set of multiple processors, the offload logic to determine to offload one or more of the compute workloads to one or more autonomous vehicles within range of the wireless network device.
-
Publication Number: US20190318446A1
Publication Date: 2019-10-17
Application Number: US16365056
Filing Date: 2019-03-26
Applicant: Intel Corporation
Inventor: JOYDEEP RAY , ABHISHEK R. APPU , ALTUG KOKER , BALAJI VEMBU
IPC: G06T1/20 , G06F12/0815 , G06F12/0831 , G06F12/0888 , G06T1/60 , G06F12/0811 , G06F12/0875
Abstract: An apparatus and method are described for managing data which is biased towards a processor or a GPU. For example, an apparatus comprises a processor comprising one or more cores, one or more cache levels, and cache coherence controllers to maintain coherent data in the one or more cache levels; a graphics processing unit (GPU) to execute graphics instructions and process graphics data, wherein the GPU and processor cores are to share a virtual address space for accessing a system memory; a GPU memory addressable through the virtual address space shared by the processor cores and GPU; and bias management circuitry to store an indication for whether the data has a processor bias or a GPU bias, wherein if the data has a GPU bias, the data is to be accessed by the GPU without necessarily accessing the processor's cache coherence controllers.