NEURAL NETWORK NEAR MEMORY PROCESSING

    Publication No.: US20240104360A1

    Publication Date: 2024-03-28

    Application No.: US18265219

    Application Date: 2020-12-02

    IPC Classification: G06N3/063 G06N3/04

    CPC Classification: G06N3/063 G06N3/04

    Abstract: Near-memory processing systems for graph neural network processing can include a central core coupled to one or more memory units. The memory units can include one or more controllers and a plurality of memory devices. The system can be configured to offload aggregation, combination, and similar operations from the central core to the controllers of the one or more memory units. The central core can sample the graph neural network and schedule memory accesses for execution by the one or more memory units. The central core can also schedule aggregation, combination, or similar operations associated with one or more memory accesses for execution by the controller. The controller can access data in accordance with the data access requests from the central core. One or more computation units of the controller can also execute the aggregation, combination, or similar operations associated with the one or more memory accesses. The central core can then execute further aggregation, combination, or similar operations, or computations of end-use applications, on the data returned by the controller.
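
    The following is a minimal sketch of the offload flow described in the abstract. The class and method names (MemoryUnitController, CentralCore, read_and_aggregate) are illustrative assumptions, not taken from the patent; the sketch only shows a central core issuing a data access request tagged with an aggregation operation that a memory-side controller executes before returning a reduced result.

# Hypothetical sketch of near-memory aggregation offload for GNN processing.
# All names and interfaces are illustrative assumptions, not the patented design.
import numpy as np


class MemoryUnitController:
    """Memory-side controller with a small computation unit."""

    def __init__(self, feature_store: dict[int, np.ndarray]):
        self.feature_store = feature_store  # node id -> feature vector

    def read_and_aggregate(self, node_ids: list[int], op: str = "sum") -> np.ndarray:
        """Access the requested rows and apply the offloaded aggregation."""
        rows = np.stack([self.feature_store[n] for n in node_ids])
        if op == "sum":
            return rows.sum(axis=0)
        if op == "mean":
            return rows.mean(axis=0)
        raise ValueError(f"unsupported offloaded op: {op}")


class CentralCore:
    """Samples the graph and schedules accesses plus offloaded operations."""

    def __init__(self, controller: MemoryUnitController, adjacency: dict[int, list[int]]):
        self.controller = controller
        self.adjacency = adjacency

    def aggregate_neighborhood(self, node: int) -> np.ndarray:
        neighbors = self.adjacency[node]          # graph sampling step (simplified)
        partial = self.controller.read_and_aggregate(neighbors, op="sum")
        # Further combination (e.g., applying model weights) stays on the central core.
        return partial / max(len(neighbors), 1)


# Usage: two memory-resident node features aggregated near memory.
features = {0: np.array([1.0, 2.0]), 1: np.array([3.0, 4.0]), 2: np.array([5.0, 6.0])}
core = CentralCore(MemoryUnitController(features), adjacency={0: [1, 2]})
print(core.aggregate_neighborhood(0))  # -> [4. 5.]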

    A CONFIGURABLE PROCESSING ARCHITECTURE

    Publication No.: US20240028554A1

    Publication Date: 2024-01-25

    Application No.: US18027078

    Application Date: 2020-09-18

    IPC Classification: G06F15/80 G06F15/82

    CPC Classification: G06F15/80 G06F15/825

    Abstract: A configurable processing unit can include a core processing element and a plurality of assist processing elements coupled together by one or more networks. The core processing element can include large processing logic, a large non-volatile memory, input/output interfaces, and multiple memory channels. The plurality of assist processing elements can each include smaller processing logic, a smaller non-volatile memory, and multiple memory channels. One or more bitstreams can be utilized to configure and reconfigure the computation resources of the core processing element and the memory management of the plurality of assist processing elements.
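
    A toy sketch of bitstream-driven (re)configuration follows. The bit-field layout, the specific configurable parameters (compute units, prefetch depth), and all names are assumptions chosen to illustrate the idea of one bitstream configuring the core element's computation resources and the assist elements' memory management; they are not the patented format.

# Hypothetical sketch of bitstream-driven (re)configuration of a core processing
# element and several assist processing elements. Field layout is an assumption.
from dataclasses import dataclass


@dataclass
class CoreProcessingElement:
    compute_units_enabled: int = 0      # configured computation resources
    memory_channels: int = 4


@dataclass
class AssistProcessingElement:
    prefetch_depth: int = 0             # configured memory-management policy
    memory_channels: int = 2


def apply_bitstream(core: CoreProcessingElement,
                    assists: list[AssistProcessingElement],
                    bitstream: int) -> None:
    """Decode a toy 16-bit bitstream: low byte -> core compute units,
    high byte -> assist prefetch depth."""
    core.compute_units_enabled = bitstream & 0xFF
    depth = (bitstream >> 8) & 0xFF
    for a in assists:
        a.prefetch_depth = depth


# Usage: configure, then reconfigure with a second bitstream.
core = CoreProcessingElement()
assists = [AssistProcessingElement() for _ in range(3)]
apply_bitstream(core, assists, 0x0410)   # 16 compute units, prefetch depth 4
apply_bitstream(core, assists, 0x0808)   # reconfigured: 8 units, depth 8
print(core.compute_units_enabled, assists[0].prefetch_depth)  # 8 8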

    PROCESSING ACCELERATOR ARCHITECTURES

    Publication No.: US20230047378A1

    Publication Date: 2023-02-16

    Application No.: US17789055

    Application Date: 2021-01-08

    IPC Classification: G10L15/00 G10L13/00

    Abstract: In various embodiments, this application provides an audio information processing method, an audio information processing apparatus, an electronic device, and a storage medium. An audio information processing method in an embodiment includes: obtaining a first audio feature corresponding to audio information; encoding the audio feature at a specified moment, based on that feature and the audio features adjacent to it, to obtain a second audio feature corresponding to the audio information; obtaining decoded text information corresponding to the audio information; and obtaining, based on the second audio feature and the decoded text information, text information corresponding to the audio information. According to this method, fewer parameters are used in obtaining the second audio feature and in obtaining the text information from the second audio feature and the decoded text information, thereby reducing the computational complexity of audio information processing and improving its efficiency.
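
    A minimal sketch of the context-window idea in the abstract is shown below. It is an assumption-laden stand-in, not the patented model: the "second audio feature" is approximated here by a simple mean over adjacent frames, and the decode step is a single linear projection over the concatenated audio feature and decoded-text state. All function names and shapes are illustrative.

# Hypothetical sketch: encode the audio feature at a specified moment using its
# adjacent frames, then combine the encoded feature with decoded text state.
import numpy as np


def encode_frame(first_features: np.ndarray, t: int, context: int = 2) -> np.ndarray:
    """Second audio feature at time t: mean over a small window of adjacent frames.
    first_features has shape (num_frames, feature_dim)."""
    lo, hi = max(0, t - context), min(len(first_features), t + context + 1)
    return first_features[lo:hi].mean(axis=0)


def decode_step(second_feature: np.ndarray, text_state: np.ndarray,
                vocab_proj: np.ndarray) -> int:
    """Toy decoder: project [audio feature; text state] to vocabulary scores."""
    joint = np.concatenate([second_feature, text_state])
    return int(np.argmax(vocab_proj @ joint))


# Usage with random stand-in data.
rng = np.random.default_rng(0)
first = rng.normal(size=(10, 8))          # 10 frames, 8-dim first audio features
second = encode_frame(first, t=5)
text_state = rng.normal(size=4)           # summary of decoded text so far
vocab_proj = rng.normal(size=(100, 12))   # 100-token vocabulary, 8 + 4 input dims
print(decode_step(second, text_state, vocab_proj))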

    SYSTEM AND METHOD FOR MEMORY MANAGEMENT

    Publication No.: US20210248073A1

    Publication Date: 2021-08-12

    Application No.: US16789271

    Application Date: 2020-02-12

    IPC Classification: G06F12/06

    Abstract: Embodiments of the disclosure provide methods and systems for memory management. The method can include: receiving a request to allocate target node data to a memory space, wherein the memory space includes a buffer and an external memory, and the target node data comprises property data and structural data and represents a target node of a graph having a plurality of nodes and edges; determining a node degree associated with the target node data; and allocating the target node data to the memory space based on the determined node degree.
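
    A short sketch of degree-based placement follows. The threshold policy (high-degree nodes to the buffer, low-degree nodes to external memory), the capacity check, and all names are assumptions used only to make the allocation decision in the abstract concrete.

# Hypothetical sketch of degree-aware placement: high-degree (frequently reused)
# nodes go to the on-chip buffer, low-degree nodes to external memory.
class MemorySpace:
    def __init__(self, buffer_capacity: int, degree_threshold: int):
        self.buffer = {}            # node id -> (property_data, structural_data)
        self.external = {}
        self.buffer_capacity = buffer_capacity
        self.degree_threshold = degree_threshold

    def allocate(self, node_id: int, property_data, structural_data, degree: int) -> str:
        """Allocate target node data based on its node degree."""
        hot = degree >= self.degree_threshold and len(self.buffer) < self.buffer_capacity
        target = self.buffer if hot else self.external
        target[node_id] = (property_data, structural_data)
        return "buffer" if hot else "external"


# Usage: a 3-neighbor node lands in the buffer, a 1-neighbor node in external memory.
space = MemorySpace(buffer_capacity=2, degree_threshold=3)
print(space.allocate(7, property_data=[0.1, 0.2], structural_data=[3, 5, 9], degree=3))
print(space.allocate(8, property_data=[0.3], structural_data=[2], degree=1))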

    MEMORY PRIMING AND INITIALIZATION SYSTEMS AND METHODS

    Publication No.: US20230245711A1

    Publication Date: 2023-08-03

    Application No.: US17788696

    Application Date: 2021-01-19

    IPC Classification: G11C29/52 G11C7/20 G11C7/10

    CPC Classification: G11C29/52 G11C7/20 G11C7/1096

    Abstract: The present invention provides systems and methods for efficiently and effectively priming and initializing a memory. In one embodiment, a memory controller includes a normal data path and a priming path. The normal data path directs storage operations during normal memory read/write operation after power startup of a memory chip. The priming path includes a priming module, wherein the priming module directs memory priming operations during power startup of the memory chip, including forwarding a priming pattern for storage in a write pattern mode register of the memory chip and selecting a memory address in the memory chip for initialization with the priming pattern. The priming pattern includes information corresponding to proper initial data values. The priming pattern can also include the proper corresponding error correction code (ECC) values. The priming module can include a priming pattern register that stores the priming pattern.
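
    Below is a behavioral sketch of the two paths described in the abstract. It is an assumption-based model, not the patented circuit: the ECC is a one-bit parity stand-in, the priming loop initializes every address rather than a selected one, and all register and method names are illustrative.

# Hypothetical sketch of a controller with a normal data path and a priming path.
class MemoryChip:
    def __init__(self, num_words: int):
        self.write_pattern_mode_register = None
        self.cells = [None] * num_words      # (data, ecc) pairs once primed


def ecc_of(data: int) -> int:
    """Toy error-correction stand-in: parity of the data byte."""
    return bin(data & 0xFF).count("1") & 1


class PrimingModule:
    def __init__(self, priming_pattern: int):
        self.priming_pattern_register = priming_pattern

    def prime(self, chip: MemoryChip) -> None:
        """Priming path: runs at power startup, before normal reads/writes."""
        pattern = self.priming_pattern_register
        chip.write_pattern_mode_register = pattern
        for addr in range(len(chip.cells)):
            chip.cells[addr] = (pattern, ecc_of(pattern))   # proper initial data + ECC


class MemoryController:
    def __init__(self, chip: MemoryChip, priming_pattern: int = 0x00):
        self.chip = chip
        self.priming_module = PrimingModule(priming_pattern)
        self.priming_module.prime(chip)      # power-startup initialization

    def write(self, addr: int, data: int) -> None:           # normal data path
        self.chip.cells[addr] = (data, ecc_of(data))

    def read(self, addr: int) -> int:                        # normal data path
        data, ecc = self.chip.cells[addr]
        assert ecc == ecc_of(data), "ECC mismatch on read"
        return data


# Usage: every address reads back the priming pattern with valid ECC after startup.
ctrl = MemoryController(MemoryChip(num_words=4), priming_pattern=0xA5)
print(hex(ctrl.read(2)))  # 0xa5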

    DEVICE AND METHOD FOR LOW LATENCY MEMORY ACCESS

    Publication No.: US20210248093A1

    Publication Date: 2021-08-12

    Application No.: US16789382

    Application Date: 2020-02-12

    Abstract: Embodiments of the disclosure provide memory devices and methods related to memory accessing. The memory device can include: a plurality of memory blocks, each comprising a plurality of memory cells; a word line communicatively coupled with the plurality of memory blocks and configured to activate memory cells associated with the word line in the plurality of memory blocks; a column selection line communicatively coupled with the plurality of memory blocks and configured to select a column of memory blocks among the plurality of memory blocks; a global data line communicatively coupled with the plurality of memory blocks and configured to transceive data with the selected column of memory blocks; a first switch disposed at a first position on the column selection line; and a second switch disposed at a second position on the global data line, wherein the first switch and the second switch are configured to segment at least one memory block of the plurality of memory blocks from the other memory blocks of the plurality of memory blocks.
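
    A behavioral sketch of the segmentation idea follows. It is an assumption, not the claimed circuit: the two switches are modeled as a single cut point, and the returned "cost" is only a proxy for the shorter line driven when far blocks are segmented off. All names and positions are illustrative.

# Hypothetical sketch of line segmentation: switches on the column selection line
# and global data line can cut off far memory blocks so that accesses to near
# blocks drive a shorter (lower-latency) line.
class SegmentedMemoryDevice:
    def __init__(self, num_blocks: int, switch_position: int):
        # Blocks 0 .. switch_position-1 are "near"; the rest sit beyond both switches.
        self.num_blocks = num_blocks
        self.switch_position = switch_position
        self.csl_switch_closed = True    # column selection line switch
        self.gdl_switch_closed = True    # global data line switch

    def access(self, block: int) -> int:
        """Select a column of blocks and return a toy line-length cost."""
        need_far = block >= self.switch_position
        # Open both switches to segment off the far blocks when they are not needed.
        self.csl_switch_closed = need_far
        self.gdl_switch_closed = need_far
        active_blocks = self.num_blocks if need_far else self.switch_position
        return active_blocks             # proxy for wire load / access latency


# Usage: a near-block access drives a shorter segment than a far-block access.
dev = SegmentedMemoryDevice(num_blocks=8, switch_position=4)
print(dev.access(1))  # 4 -> only the near segment is driven
print(dev.access(6))  # 8 -> switches closed, full line driven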

    DYNAMIC MEMORY COHERENCY BIASING TECHNIQUES

    Publication No.: US20220244870A1

    Publication Date: 2022-08-04

    Application No.: US17166975

    Application Date: 2021-02-03

    IPC Classification: G06F3/06

    Abstract: A dynamic bias coherency configuration engine can include control logic, a host threshold register, a device threshold register, and a plurality of memory region monitoring units. The memory region monitoring units can include a starting page number register, an ending page number register, a host access register, and a device access register. The memory region monitoring units can be utilized by the dynamic bias coherency configuration engine to configure corresponding portions of a memory space in a device bias mode or a host bias mode.
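
    A short sketch of how such an engine might flip a region's bias is shown below. The flipping policy (switch bias and reset the counter when an access count reaches the corresponding threshold register) and all names are assumptions used only to illustrate the registers listed in the abstract.

# Hypothetical sketch of dynamic bias selection: per-region monitors count host and
# device accesses, and the engine switches a region between host-bias and
# device-bias when a count crosses the corresponding threshold register.
from dataclasses import dataclass


@dataclass
class RegionMonitor:
    starting_page: int
    ending_page: int
    host_accesses: int = 0
    device_accesses: int = 0
    bias: str = "host"                   # current coherency bias mode


class BiasConfigurationEngine:
    def __init__(self, host_threshold: int, device_threshold: int):
        self.host_threshold_register = host_threshold
        self.device_threshold_register = device_threshold
        self.monitors: list[RegionMonitor] = []

    def add_region(self, start: int, end: int) -> RegionMonitor:
        mon = RegionMonitor(start, end)
        self.monitors.append(mon)
        return mon

    def record_access(self, page: int, from_device: bool) -> None:
        for mon in self.monitors:
            if mon.starting_page <= page <= mon.ending_page:
                if from_device:
                    mon.device_accesses += 1
                    if mon.device_accesses >= self.device_threshold_register:
                        mon.bias, mon.device_accesses = "device", 0
                else:
                    mon.host_accesses += 1
                    if mon.host_accesses >= self.host_threshold_register:
                        mon.bias, mon.host_accesses = "host", 0


# Usage: repeated device accesses flip the region into device-bias mode.
engine = BiasConfigurationEngine(host_threshold=4, device_threshold=4)
region = engine.add_region(start=0, end=255)
for _ in range(4):
    engine.record_access(page=10, from_device=True)
print(region.bias)  # "device"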