-
1.
Publication No.: US20240020253A1
Publication Date: 2024-01-18
Application No.: US18477787
Filing Date: 2023-09-29
Applicant: Intel Corporation
Inventor: Shruti Sharma , Robert Pawlowski , Fabio Checconi , Jesmin Jahan Tithi
IPC: G06F13/28
CPC classification number: G06F13/28 , G06F2213/28
Abstract: Systems, apparatuses and methods may provide for technology that detects, by an operation engine, a plurality of sub-instruction requests from a first memory engine in a plurality of memory engines, wherein the plurality of sub-instruction requests are associated with a direct memory access (DMA) data type conversion request from a first pipeline, wherein each sub-instruction request corresponds to a data element in the DMA data type conversion request, and wherein the first memory engine is to correspond to the first pipeline, decodes the plurality of sub-instruction requests to identify one or more arguments, loads a source array from a dynamic random access memory (DRAM) in a plurality of DRAMs, wherein the operation engine is to correspond to the DRAM, and conducts a conversion of the source array from a first data type to a second data type in accordance with the one or more arguments.
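As a software analogy only (the patent claims hardware engines, not code), the flow in this abstract can be sketched as a memory engine emitting one sub-instruction per data element and a near-DRAM operation engine decoding the arguments and converting the array in place. All function and field names below are illustrative, not from the patent:

```python
# Conceptual sketch: a "memory engine" splits a DMA type-conversion request
# into per-element sub-instructions; an "operation engine" adjacent to DRAM
# decodes each sub-instruction's arguments and converts the data element.

def make_sub_instructions(src, dst_type):
    """Memory engine side: one sub-instruction per data element."""
    return [{"index": i, "dst_type": dst_type} for i in range(len(src))]

def operation_engine(dram, sub_instructions):
    """Operation engine side: decode arguments, load from DRAM, convert, store."""
    for sub in sub_instructions:
        i, cast = sub["index"], sub["dst_type"]
        dram[i] = cast(dram[i])            # first data type -> second data type
    return dram

dram = [1, 2, 3, 4]                        # source array of a first data type (int)
subs = make_sub_instructions(dram, float)  # conversion request: int -> float
converted = operation_engine(dram, subs)
print(converted)                           # [1.0, 2.0, 3.0, 4.0]
```

The point of the split is that the pipeline issues one request while the per-element work happens next to the memory holding the data.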
-
2.
Publication No.: US20230333998A1
Publication Date: 2023-10-19
Application No.: US18312752
Filing Date: 2023-05-05
Applicant: Intel Corporation
Inventor: Shruti Sharma , Robert Pawlowski , Fabio Checconi , Jesmin Jahan Tithi
IPC: G06F13/28
CPC classification number: G06F13/28
Abstract: Systems, apparatuses and methods may provide for technology that includes a plurality of memory engines corresponding to a plurality of pipelines, wherein each memory engine in the plurality of memory engines is adjacent to a pipeline in the plurality of pipelines, and wherein a first memory engine is to request one or more direct memory access (DMA) operations associated with a first pipeline, and a plurality of operation engines corresponding to a plurality of dynamic random access memories (DRAMs), wherein each operation engine in the plurality of operation engines is adjacent to a DRAM in the plurality of DRAMs, and wherein one or more of the plurality of operation engines is to conduct the one or more DMA operations based on one or more bitmaps.
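One common reading of a bitmap-driven DMA operation is a gather in which only elements whose bitmap bit is set are transferred. This is a minimal sketch of that idea, not the patented circuitry; the function name and bit ordering are assumptions:

```python
# Conceptual sketch: a DMA operation guided by a bitmap, where bit i of the
# bitmap decides whether source element i participates in the transfer.

def dma_gather_with_bitmap(src, bitmap):
    """Copy only the elements whose corresponding bitmap bit is set."""
    return [x for i, x in enumerate(src) if (bitmap >> i) & 1]

result = dma_gather_with_bitmap([10, 20, 30, 40], 0b1010)
print(result)  # [20, 40] -- bits 1 and 3 are set
```

In the abstract's arrangement, the operation engine sitting next to each DRAM would apply such a bitmap locally instead of shipping every element to the requesting pipeline.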
-
3.
Publication No.: US20230325185A1
Publication Date: 2023-10-12
Application No.: US18194252
Filing Date: 2023-03-31
Applicant: Intel Corporation
Inventor: Jesmin Jahan Tithi , Fabio Checconi , Ahmed Helal , Fabrizio Petrini
CPC classification number: G06F9/3001 , G06F12/08 , G06F2213/28
Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed for performance of sparse matrix time dense matrix operations. Example instructions cause programmable circuitry to control execution of the sparse matrix times dense matrix operation using a sparse matrix and a dense matrix stored in memory, and transmit a plurality of instructions to execute the sparse matrix times dense matrix operation to DMA engine circuitry, the plurality of instructions to cause DMA engine circuitry to create an output matrix in the memory, the creation of the output matrix in the memory performed without the programmable circuitry computing the output matrix.
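The operation being offloaded here is ordinary sparse-times-dense matrix multiplication (SpMM); the novelty claimed is that DMA engine circuitry, not the programmable circuitry, produces the output. A plain software version of the computation itself, assuming the common CSR layout for the sparse operand (the format is an assumption, not stated in the abstract):

```python
# Conceptual sketch of the offloaded computation: sparse (CSR) x dense = output.
# values/col_idx/row_ptr are the standard CSR arrays of the sparse matrix.

def spmm_csr(values, col_idx, row_ptr, dense, n_cols_out):
    n_rows = len(row_ptr) - 1
    out = [[0.0] * n_cols_out for _ in range(n_rows)]
    for r in range(n_rows):
        for k in range(row_ptr[r], row_ptr[r + 1]):  # nonzeros of row r
            c, v = col_idx[k], values[k]
            for j in range(n_cols_out):
                out[r][j] += v * dense[c][j]
    return out

# Sparse [[1, 0], [0, 2]] in CSR form, times dense [[1, 2], [3, 4]]:
out = spmm_csr([1, 2], [0, 1], [0, 1, 2], [[1, 2], [3, 4]], 2)
print(out)  # [[1.0, 2.0], [6.0, 8.0]]
```

In the patent's scheme, the host would only stage the operands and issue the instruction stream; the per-nonzero multiply-accumulate traffic above is what the DMA engine circuitry performs in memory.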
-
4.
Publication No.: US20240241645A1
Publication Date: 2024-07-18
Application No.: US18621437
Filing Date: 2024-03-29
Applicant: Intel Corporation
Inventor: Robert Pawlowski , Shruti Sharma , Fabio Checconi , Sriram Aananthakrishnan , Jesmin Jahan Tithi , Jordi Wolfson-Pou , Joshua B. Fryman
IPC: G06F3/06
CPC classification number: G06F3/0613 , G06F3/0656 , G06F3/0673
Abstract: Systems, apparatuses and methods may provide for technology that includes a plurality of hash management buffers corresponding to a plurality of pipelines, wherein each hash management buffer in the plurality of hash management buffers is adjacent to a pipeline in the plurality of pipelines, and wherein a first hash management buffer is to issue one or more hash packets associated with one or more hash operations on a hash table. The technology may also include a plurality of hash engines corresponding to a plurality of dynamic random access memories (DRAMs), wherein each hash engine in the plurality of hash engines is adjacent to a DRAM in the plurality of DRAMs, and wherein one or more of the hash engines is to initialize a target memory destination associated with the hash table and conduct the one or more hash operations in response to the one or more hash packets.
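The structure described is a distributed hash table whose shards live next to individual DRAMs, with pipeline-side buffers routing hash packets to the engine that owns the key. A toy software model of that routing, with all class and field names invented for illustration:

```python
# Conceptual sketch: each "hash engine" owns the shard of the hash table stored
# in its adjacent DRAM; pipelines issue hash packets that are routed by key.

class HashEngine:
    def __init__(self):
        self.shard = {}                    # target memory destination, initialized here

    def handle(self, packet):
        op, key = packet["op"], packet["key"]
        if op == "insert":
            self.shard[key] = packet["value"]
        elif op == "lookup":
            return self.shard.get(key)

engines = [HashEngine() for _ in range(4)]  # one engine per DRAM

def route(key):
    """Pipeline-side hash management buffer: pick the engine owning this key."""
    return engines[hash(key) % len(engines)]

route("a").handle({"op": "insert", "key": "a", "value": 1})
value = route("a").handle({"op": "lookup", "key": "a"})
print(value)  # 1
```

Routing by key hash means every operation on a given key lands on the same engine, so inserts and lookups stay consistent without global coordination.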
-
5.
Publication No.: US20240069921A1
Publication Date: 2024-02-29
Application No.: US18477884
Filing Date: 2023-09-29
Applicant: Intel Corporation
Inventor: Scott Cline , Robert Pawlowski , Joshua Fryman , Ivan Ganev , Vincent Cave , Sebastian Szkoda , Fabio Checconi
CPC classification number: G06F9/3885 , G06F9/30036
Abstract: Technology described herein provides a dynamically reconfigurable processing core. The technology includes a plurality of pipelines comprising a core, where the core is reconfigurable into one of a plurality of core modes, a core network to provide inter-pipeline connections for the pipelines, and logic to receive a morph instruction including a target core mode from an application running on the core, determine a present core state for the core, and morph, based on the present core state, the core to the target core mode. In embodiments, to morph the core, the logic is to select, based on the target core mode, which inter-pipeline connections are active, where each pipeline includes at least one multiplexor via which the inter-pipeline connections are selected to be active. In embodiments, to morph the core, the logic is further to select, based on the target core mode, which memory access paths are active.
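The morph step amounts to looking up, for a target core mode, which inter-pipeline links the per-pipeline multiplexors should activate, gated on the core being in a safe present state. A minimal sketch under those assumptions; the mode names and the "quiesced" precondition are illustrative, not taken from the patent:

```python
# Conceptual sketch: a morph instruction selects which inter-pipeline
# connections are active, based on the target core mode.

CORE_MODES = {
    "independent": [],                        # pipelines run standalone
    "paired":      [(0, 1), (2, 3)],          # neighboring pipelines linked
    "fused":       [(0, 1), (1, 2), (2, 3)],  # all four pipelines chained
}

def morph(present_core_state, target_mode):
    """Return the set of inter-pipeline connections to activate."""
    if present_core_state != "quiesced":      # morph only from a safe core state
        raise RuntimeError("core must quiesce before morphing")
    return set(CORE_MODES[target_mode])

active_links = morph("quiesced", "paired")
print(active_links)  # {(0, 1), (2, 3)}
```

The same table-driven selection would extend to the memory access paths the abstract mentions: each mode names both the link set and the path set to enable.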
-
6.
Publication No.: US20230315451A1
Publication Date: 2023-10-05
Application No.: US18326623
Filing Date: 2023-05-31
Applicant: Intel Corporation
Inventor: Shruti Sharma , Robert Pawlowski , Fabio Checconi , Jesmin Jahan Tithi
CPC classification number: G06F9/30043 , G06F9/30079 , G06F13/28
Abstract: Systems, apparatuses and methods may provide for technology that detects, by an operation engine, a plurality of sub-instruction requests from a first memory engine in a plurality of memory engines, wherein the plurality of sub-instruction requests are associated with a direct memory access (DMA) bitmap manipulation request from a first pipeline, wherein each sub-instruction request corresponds to a data element in the DMA bitmap manipulation request, and wherein the first memory engine is to correspond to the first pipeline. The technology also detects, by the operation engine, one or more arguments in the plurality of sub-instruction requests, sends, by the operation engine, one or more load requests to a DRAM in a plurality of DRAMs in accordance with the one or more arguments, and sends, by the operation engine, one or more store requests to the DRAM in accordance with the one or more arguments, wherein the operation engine is to correspond to the DRAM.
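The load-then-store pattern in this abstract is a read-modify-write on a bitmap held in DRAM. A minimal software analogy of one such manipulation (setting bits via a mask); the function name and OR operation are assumptions chosen for illustration:

```python
# Conceptual sketch: the operation engine loads a bitmap word from its
# adjacent DRAM, manipulates it per the request's arguments, and stores it back.

def bitmap_set(dram, word_idx, mask):
    """Load a bitmap word, OR in the mask, store the result back."""
    word = dram[word_idx]            # load request to the adjacent DRAM
    dram[word_idx] = word | mask     # store request with the manipulated word
    return dram[word_idx]

dram = [0b0000, 0b1100]              # two bitmap words resident in one DRAM
bitmap_set(dram, 0, 0b0101)
print(bin(dram[0]))  # 0b101
```

Keeping the round trip inside the operation engine means the pipeline never has to pull the bitmap across the network just to flip a few bits.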