Array broadcast and reduction systems and methods

    Publication No.: US10983793B2

    Publication Date: 2021-04-20

    Application No.: US16369846

    Filing Date: 2019-03-29

    Abstract: The present disclosure is directed to systems and methods of performing one or more broadcast or reduction operations using direct memory access (DMA) control circuitry. The DMA control circuitry executes a modified instruction set architecture (ISA) that facilitates the broadcast distribution of data to a plurality of destination addresses in system memory circuitry. The broadcast instruction may include broadcast of a single data value to each destination address. The broadcast instruction may include broadcast of a data array to each destination address. The DMA control circuitry may also execute a reduction instruction that facilitates the retrieval of data from a plurality of source addresses in system memory and the performance of one or more operations using the retrieved data. Since the DMA control circuitry, rather than the processor circuitry, performs the broadcast and reduction operations, system speed and efficiency are beneficially enhanced.
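    As a rough illustration of the semantics the abstract describes (not the patented ISA; the function names, the dict-as-memory model, and all parameters here are invented for the sketch), a broadcast instruction copies one value or one array to several destination addresses, while a reduction gathers values from several source addresses and combines them with a single operation:

    ```python
    # Toy model of DMA broadcast/reduction semantics. "memory" is a plain
    # dict mapping addresses to values; in the disclosure the work is done
    # by DMA control circuitry, not by processor code like this.

    def dma_broadcast(memory, value, dest_addrs):
        """Write a single value, or each element of an array, to every destination."""
        for addr in dest_addrs:
            if isinstance(value, list):          # array broadcast
                for offset, elem in enumerate(value):
                    memory[addr + offset] = elem
            else:                                # single-value broadcast
                memory[addr] = value

    def dma_reduce(memory, src_addrs, op):
        """Fetch data from every source address and combine it with one operation."""
        values = [memory[a] for a in src_addrs]
        result = values[0]
        for v in values[1:]:
            result = op(result, v)
        return result
    ```

    For example, `dma_broadcast(memory, 7, [0x100, 0x200])` writes 7 to both addresses, and `dma_reduce(memory, [0x10, 0x20], lambda a, b: a + b)` sums the values at the two source addresses.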

    CACHE SUPPORT FOR INDIRECT LOADS AND INDIRECT STORES IN GRAPH APPLICATIONS

    Publication No.: US20220413855A1

    Publication Date: 2022-12-29

    Application No.: US17359305

    Filing Date: 2021-06-25

    Abstract: Techniques for operating on an indirect memory access instruction, where the instruction accesses a memory location via at least one indirect address. A pipeline processes the instruction, and a memory operation engine generates a first access to the at least one indirect address and a second access to a target address determined by the at least one indirect address. A cache memory used with the pipeline and the memory operation engine caches pointers. In response to a cache hit when executing the indirect memory access instruction, operations dereference a pointer to obtain the at least one indirect address, do not set a cache bit, and return data for the instruction without storing the data in the cache memory; in response to a cache miss, operations set the cache bit, obtain and store a cache line for the missed pointer, and return data without storing the data in the cache memory.
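    A minimal sketch of the pointer-caching behavior described above, with all names and the single-level structure invented for illustration: the cache holds only pointers (the indirect addresses), a cache bit records hit versus miss, and the target data itself always bypasses the cache:

    ```python
    # Toy model: the cache stores pointers for indirect loads, never the
    # target data. On a hit the cached pointer is dereferenced and the
    # cache bit is cleared; on a miss the pointer's line is fetched and
    # cached and the cache bit is set. Data is returned uncached either way.

    class PointerCache:
        def __init__(self, memory):
            self.memory = memory        # address -> value
            self.lines = {}             # cached pointers: pointer_addr -> indirect_addr
            self.cache_bit = False      # set on a miss, cleared on a hit

        def indirect_load(self, pointer_addr):
            if pointer_addr in self.lines:                 # cache hit
                self.cache_bit = False
                indirect_addr = self.lines[pointer_addr]
            else:                                          # cache miss
                self.cache_bit = True
                indirect_addr = self.memory[pointer_addr]
                self.lines[pointer_addr] = indirect_addr   # cache the pointer only
            return self.memory[indirect_addr]              # target data is never cached
    ```

    Caching the pointer rather than the data suits graph workloads, where the same pointer is re-dereferenced often but the targets it reaches have little reuse.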

    ARRAY BROADCAST AND REDUCTION SYSTEMS AND METHODS

    Publication No.: US20200310795A1

    Publication Date: 2020-10-01

    Application No.: US16369846

    Filing Date: 2019-03-29

    Abstract: The present disclosure is directed to systems and methods of performing one or more broadcast or reduction operations using direct memory access (DMA) control circuitry. The DMA control circuitry executes a modified instruction set architecture (ISA) that facilitates the broadcast distribution of data to a plurality of destination addresses in system memory circuitry. The broadcast instruction may include broadcast of a single data value to each destination address. The broadcast instruction may include broadcast of a data array to each destination address. The DMA control circuitry may also execute a reduction instruction that facilitates the retrieval of data from a plurality of source addresses in system memory and the performance of one or more operations using the retrieved data. Since the DMA control circuitry, rather than the processor circuitry, performs the broadcast and reduction operations, system speed and efficiency are beneficially enhanced.

    CROSS-DOMAIN SOLUTION FOR A RADIO ACCESS NETWORK

    Publication No.: US20250133129A1

    Publication Date: 2025-04-24

    Application No.: US19002995

    Filing Date: 2024-12-27

    Abstract: A cross-domain device includes a first interface to couple to a first device and a second interface to couple to a second device, where the first device is to implement a first component in a radio access network (RAN) system in a first computing domain, and the second device is to implement a second component in the RAN system in a second computing domain. The first component is to interface with the second component in a RAN processing pipeline. The cross-domain device further comprises hardware to implement a communication channel between the first device and the second device to pass data from the first component to the second component, where the communication channel enforces isolation of the first computing domain from the second computing domain.

    Cache support for indirect loads and indirect stores in graph applications

    Publication No.: US12204901B2

    Publication Date: 2025-01-21

    Application No.: US17359305

    Filing Date: 2021-06-25

    Abstract: Techniques for operating on an indirect memory access instruction, where the instruction accesses a memory location via at least one indirect address. A pipeline processes the instruction, and a memory operation engine generates a first access to the at least one indirect address and a second access to a target address determined by the at least one indirect address. A cache memory used with the pipeline and the memory operation engine caches pointers. In response to a cache hit when executing the indirect memory access instruction, operations dereference a pointer to obtain the at least one indirect address, do not set a cache bit, and return data for the instruction without storing the data in the cache memory; in response to a cache miss, operations set the cache bit, obtain and store a cache line for the missed pointer, and return data without storing the data in the cache memory.

    METHODS AND APPARATUS FOR SYSTEM FIREWALLS
    Invention Publication

    Publication No.: US20240020428A1

    Publication Date: 2024-01-18

    Application No.: US18476026

    Filing Date: 2023-09-27

    CPC classification number: G06F21/85 G06F21/71 G06F21/577 G06F2221/034

    Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed to generate and manage a firewall policy. An example apparatus includes interface circuitry, machine-readable instructions, and programmable circuitry to at least one of instantiate or execute the machine-readable instructions to determine whether an operation is allowed to pass between a first component on a system-on-chip (SoC) and a second component on the SoC, detect an interconnect between the first and second components, cause the interconnect to filter the operation based on that determination, and transmit a request to filter the operation based on that determination.
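    The policy decision at the heart of this abstract can be sketched as a lookup table keyed by component pair; everything here, including the component names and the `filter_operation` helper, is invented for illustration and is not the patented mechanism:

    ```python
    # Toy model of an SoC firewall policy: a table maps a (source, destination)
    # component pair to the set of operations the interconnect should let pass.

    def filter_operation(policy, src, dst, operation):
        """Return True if the interconnect should allow the operation to pass."""
        allowed_ops = policy.get((src, dst), set())   # default-deny: no entry, no access
        return operation in allowed_ops

    # Hypothetical policy for two components on the SoC.
    policy = {
        ("cpu", "crypto_engine"): {"read", "write"},
        ("dma", "crypto_engine"): {"read"},           # DMA may read but not write
    }
    ```

    The default-deny convention (an unlisted component pair allows nothing) is a common design choice for such filters, though the abstract does not specify one.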

    LOW OVERHEAD ERROR CORRECTION CODE
    Invention Application

    Publication No.: US20220229723A1

    Publication Date: 2022-07-21

    Application No.: US17711646

    Filing Date: 2022-04-01

    Abstract: Memory requests are protected by encoding them with error correction codes. A subset of bits in a memory request is compared to a pre-defined pattern to determine whether the subset matches the pattern, where a match indicates that a compression can be applied to the memory request. The error correction code is generated for the memory request, and the request is encoded to remove the subset of bits, add the error correction code, and add at least one metadata bit that identifies whether the compression was applied, generating a protected version of the memory request.
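    A minimal sketch of this encode/decode flow, with the bit-string representation, the pattern, and the single parity bit all invented for illustration (a real implementation would use a proper ECC such as SECDED, not parity):

    ```python
    # Toy model: if a fixed subset of a request's bits matches a known
    # pattern, those bits are dropped to make room, an error correction
    # code is computed over the original bits, and a metadata flag records
    # whether the compression happened.

    PATTERN = "0000"  # hypothetical pre-defined pattern for the leading bit subset

    def encode_request(bits):
        """bits: string of '0'/'1'. Returns (payload, ecc, compressed_flag)."""
        compressed = bits[:len(PATTERN)] == PATTERN
        payload = bits[len(PATTERN):] if compressed else bits
        ecc = str(bits.count("1") % 2)        # single parity bit as a stand-in ECC
        return payload, ecc, compressed

    def decode_request(payload, ecc, compressed):
        """Reinsert the pattern if it was compressed out, then check the ECC."""
        bits = (PATTERN + payload) if compressed else payload
        assert str(bits.count("1") % 2) == ecc, "ECC check failed"
        return bits
    ```

    The point of the scheme is that dropping a predictable bit pattern frees exactly the space needed to carry the ECC in-band, at the cost of one metadata bit.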
