Array broadcast and reduction systems and methods

    Publication No.: US10983793B2

    Publication Date: 2021-04-20

    Application No.: US16369846

    Filing Date: 2019-03-29

    Applicant: INTEL CORPORATION

    Abstract: The present disclosure is directed to systems and methods of performing one or more broadcast or reduction operations using direct memory access (DMA) control circuitry. The DMA control circuitry executes a modified instruction set architecture (ISA) that facilitates the broadcast distribution of data to a plurality of destination addresses in system memory circuitry. The broadcast instruction may include broadcast of a single data value to each destination address, or broadcast of a data array to each destination address. The DMA control circuitry may also execute a reduction instruction that facilitates the retrieval of data from a plurality of source addresses in system memory and the performance of one or more operations using the retrieved data. Since the DMA control circuitry, rather than the processor circuitry, performs the broadcast and reduction operations, system speed and efficiency are beneficially enhanced.
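As a rough software model of the semantics described in the abstract, the sketch below treats system memory as a Python dict keyed by address. The function names (`dma_broadcast`, `dma_broadcast_array`, `dma_reduce`) and the dict-based interface are illustrative assumptions, not the patented hardware ISA.

```python
# Software model of the broadcast and reduction DMA instructions described
# above. Memory is modeled as a dict mapping address -> value; all names
# and signatures here are illustrative, not the actual hardware interface.

def dma_broadcast(memory, value, dest_addrs):
    """Broadcast a single data value to each destination address."""
    for addr in dest_addrs:
        memory[addr] = value

def dma_broadcast_array(memory, src_array, dest_addrs):
    """Broadcast a data array to each destination base address."""
    for base in dest_addrs:
        for offset, value in enumerate(src_array):
            memory[base + offset] = value

def dma_reduce(memory, src_addrs, op):
    """Retrieve data from each source address and combine it with a
    reduction operation (e.g. sum, min, max)."""
    result = memory[src_addrs[0]]
    for addr in src_addrs[1:]:
        result = op(result, memory[addr])
    return result
```

The point of offloading these loops to DMA control circuitry, per the abstract, is that the processor is free to do other work while the copies and the reduction run.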

    INSTRUCTION SET ARCHITECTURE SUPPORT FOR DATA TYPE CONVERSION IN NEAR-MEMORY DMA OPERATIONS

    Publication No.: US20240020253A1

    Publication Date: 2024-01-18

    Application No.: US18477787

    Filing Date: 2023-09-29

    Applicant: Intel Corporation

    IPC Classification: G06F13/28

    CPC Classification: G06F13/28 G06F2213/28

    Abstract: Systems, apparatuses and methods may provide for technology that detects a plurality of sub-instruction requests from a first memory engine in a plurality of memory engines, wherein the plurality of sub-instruction requests are associated with a direct memory access (DMA) data type conversion request from a first pipeline, wherein each sub-instruction request corresponds to a data element in the DMA data type conversion request, and wherein the first memory engine is to correspond to the first pipeline. The technology decodes the plurality of sub-instruction requests to identify one or more arguments, loads a source array, via an operation engine, from a dynamic random access memory (DRAM) in a plurality of DRAMs, wherein the operation engine is to correspond to the DRAM, and conducts a conversion of the source array from a first data type to a second data type in accordance with the one or more arguments.
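A minimal sketch of the conversion step described above, assuming a small example set of data types: each sub-instruction converts one data element of the source array according to the decoded arguments. The type names and function names are assumptions for illustration, not the patent's actual encoding.

```python
# Illustrative model of near-memory DMA data type conversion: one
# sub-instruction per data element, converting from a first data type to a
# second data type per the decoded arguments. Type names are assumptions.
import struct

def convert_element(value, src_type, dst_type):
    """Convert one element between a small set of example data types."""
    if (src_type, dst_type) == ("int32", "float32"):
        return float(value)
    if (src_type, dst_type) == ("float32", "int32"):
        return int(value)  # truncate toward zero
    if (src_type, dst_type) == ("float64", "float32"):
        # Round-trip through 32-bit storage to model the precision loss.
        return struct.unpack("<f", struct.pack("<f", value))[0]
    raise ValueError(f"unsupported conversion {src_type} -> {dst_type}")

def dma_convert(src_array, src_type, dst_type):
    """Apply one conversion sub-instruction per element of the array."""
    return [convert_element(v, src_type, dst_type) for v in src_array]
```

Doing this conversion next to the DRAM avoids moving the full-width source array across the memory bus to the pipeline before narrowing it.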

    INSTRUCTION SET ARCHITECTURE SUPPORT FOR CONDITIONAL DIRECT MEMORY ACCESS DATA MOVEMENT OPERATIONS

    Publication No.: US20230333998A1

    Publication Date: 2023-10-19

    Application No.: US18312752

    Filing Date: 2023-05-05

    Applicant: Intel Corporation

    IPC Classification: G06F13/28

    CPC Classification: G06F13/28

    Abstract: Systems, apparatuses and methods may provide for technology that includes a plurality of memory engines corresponding to a plurality of pipelines, wherein each memory engine in the plurality of memory engines is adjacent to a pipeline in the plurality of pipelines, and wherein a first memory engine is to request one or more direct memory access (DMA) operations associated with a first pipeline. The technology may also include a plurality of operation engines corresponding to a plurality of dynamic random access memories (DRAMs), wherein each operation engine in the plurality of operation engines is adjacent to a DRAM in the plurality of DRAMs, and wherein one or more of the plurality of operation engines is to conduct the one or more DMA operations based on one or more bitmaps.
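One plausible reading of a bitmap-conditioned DMA data movement, sketched below: an element is moved only where the corresponding bit of the bitmap is set. The function name and bit ordering (bit i gates element i) are assumptions for illustration.

```python
# Sketch of a conditional DMA copy gated by a bitmap: element i of the
# source is moved to the destination only if bit i of the bitmap is set.
# The name and the bit-to-element mapping are illustrative assumptions.

def conditional_dma_copy(src, dst, bitmap):
    """Copy src[i] into dst[i] wherever bit i of the bitmap is 1;
    untouched destination elements keep their previous values."""
    for i in range(len(src)):
        if (bitmap >> i) & 1:
            dst[i] = src[i]
    return dst
```

Evaluating the condition in an operation engine adjacent to the DRAM means only the selected elements ever cross the interconnect.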

    Structures and operations of integrated circuits having network of configurable switches

    Publication No.: US10476492B2

    Publication Date: 2019-11-12

    Application No.: US16201915

    Filing Date: 2018-11-27

    Applicant: Intel Corporation

    IPC Classification: H03K17/00 G11C7/10 H03K19/173

    摘要: Embodiments herein may present an integrated circuit including a switch, where the switch together with other switches forms a network of switches to perform a sequence of operations according to a structure of a collective tree. The switch includes a first number of input ports, a second number of output ports, a configurable crossbar to selectively couple the first number of input ports to the second number of output ports, and a computation engine coupled to the first number of input ports, the second number of output ports, and the crossbar. The computation engine of the switch performs an operation corresponding to an operation represented by a node of the collective tree. The switch further includes one or more registers to selectively configure the first number of input ports and the configurable crossbar. Other embodiments may be described and/or claimed.
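The collective-tree structure above can be sketched in software: each tree node stands in for one switch whose computation engine combines the values on its input ports and forwards the result toward the root. The pairwise, level-by-level scheme below is one possible tree shape, assumed for illustration.

```python
# Software sketch of a reduction over a collective tree of switches:
# each inner loop iteration models one switch node combining two input
# ports with its computation engine. The balanced-binary shape of the
# tree is an illustrative assumption.

def tree_reduce(values, op):
    """Reduce leaf values pairwise, level by level, up to the root."""
    level = list(values)
    while len(level) > 1:
        next_level = []
        for i in range(0, len(level) - 1, 2):
            next_level.append(op(level[i], level[i + 1]))  # one switch node
        if len(level) % 2:
            next_level.append(level[-1])  # odd leftover passes through
        level = next_level
    return level[0]
```

With n leaves, the result is ready after roughly log2(n) switch levels rather than n-1 sequential steps.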

    INSTRUCTION SET ARCHITECTURE AND HARDWARE SUPPORT FOR HASH OPERATIONS

    Publication No.: US20240241645A1

    Publication Date: 2024-07-18

    Application No.: US18621437

    Filing Date: 2024-03-29

    Applicant: Intel Corporation

    IPC Classification: G06F3/06

    摘要: Systems, apparatuses and methods may provide for technology that includes a plurality of hash management buffers corresponding to a plurality of pipelines, wherein each hash management buffer in the plurality of hash management buffers is adjacent to a pipeline in the plurality of pipelines, and wherein a first hash management buffer is to issue one or more hash packets associated with one or more hash operations on a hash table. The technology may also include a plurality of hash engines corresponding to a plurality of dynamic random access memories (DRAMs), wherein each hash engine in the plurality of hash engines is adjacent to a DRAM in the plurality of DRAMs, and wherein one or more of the hash engines is to initialize a target memory destination associated with the hash table and conduct the one or more hash operations in response to the one or more hash packets.
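A toy model of the flow described above: a near-memory hash engine initializes the target table, then services hash packets carrying insert and lookup operations. The packet fields, class name, and chained-bucket layout are all illustrative assumptions, not the patented design.

```python
# Toy model of hash packets serviced by a near-memory hash engine.
# Packet format ({"op", "key", "value"}) and the chained-bucket table
# are illustrative assumptions.

class HashEngine:
    def __init__(self, num_buckets):
        # Initialize the target memory destination for the hash table.
        self.table = [[] for _ in range(num_buckets)]

    def process(self, packet):
        """Conduct one hash operation in response to a hash packet."""
        op, key = packet["op"], packet["key"]
        bucket = self.table[hash(key) % len(self.table)]
        if op == "insert":
            bucket.append((key, packet["value"]))
            return None
        if op == "lookup":
            for k, v in bucket:
                if k == key:
                    return v
            return None
        raise ValueError(f"unknown hash operation {op!r}")
```

Placing one such engine adjacent to each DRAM lets independent hash operations proceed in parallel across DRAMs instead of serializing through one pipeline.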