Distributed machine learning with privacy protection

    Publication (Announcement) No.: US11755884B2

    Publication (Announcement) Date: 2023-09-12

    Application No.: US16545813

    Application Date: 2019-08-20

    CPC classification number: G06N3/045 G06N3/048 G06N20/20

    Abstract: A system having multiple devices that can host different versions of an artificial neural network (ANN). In the system, changes to local versions of the ANN can be combined with a master version of the ANN. In the system, a first device can include memory that can store the master version, a second device can include memory that can store a local version of the ANN, and there can be many devices that store local versions of the ANN. The second device (or any other device of the system hosting a local version) can include a processor that can train the local version, and a transceiver that can transmit changes to the local version generated from the training. The first device can include a transceiver that can receive the changes to a local version, and a processing device that can combine the received changes with the master version.
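
    The combining step described in this abstract resembles federated averaging: devices train local copies and send parameter changes (deltas) to the device holding the master version, which folds them in. The following is a minimal, hypothetical Python sketch of that merge; the function names and the simple averaging rule are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch of merging local ANN changes into a master version.
# The averaging rule and the data layout are assumptions for illustration only.
from typing import Dict, List

Weights = Dict[str, List[float]]  # layer name -> flat list of parameters


def train_local(local: Weights) -> Weights:
    """Placeholder for on-device training; returns per-parameter changes (deltas)."""
    # A real device would compute these deltas from its own (private) data.
    return {name: [0.0 for _ in params] for name, params in local.items()}


def combine(master: Weights, deltas_from_devices: List[Weights]) -> Weights:
    """Fold averaged deltas received from many devices into the master version."""
    n = len(deltas_from_devices)
    combined: Weights = {}
    for name, params in master.items():
        avg_delta = [
            sum(d[name][i] for d in deltas_from_devices) / n
            for i in range(len(params))
        ]
        combined[name] = [p + dp for p, dp in zip(params, avg_delta)]
    return combined
```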

    VOLATILE MEMORY TO NON-VOLATILE MEMORY INTERFACE FOR POWER MANAGEMENT

    Publication (Announcement) No.: US20230046808A1

    Publication (Announcement) Date: 2023-02-16

    Application No.: US17975364

    Application Date: 2022-10-27

    Abstract: Systems, methods, and apparatus related to a memory system that manages an interface for a volatile memory device and a non-volatile memory device to control memory system power. In one approach, a controller evaluates a demand on memory performance. If the demand of the computation task currently needed by the host is high, a DRAM device is powered up to meet the demand. Otherwise, if the non-volatile memory device is adequate to meet the demand, the DRAM memory is partially or fully powered down to save power. In another approach, a task performed for a host device uses one or more resources of a first memory device (e.g., DRAM). A performance capability of a second memory device (e.g., NVRAM) is determined. A controller of the memory system determines whether the performance capability of the second memory device is adequate to service the task. In response to determining that the performance capability is adequate, the controller changes a mode of operation of the memory system so that one or more resources of the second memory device are used to service the task.
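
    The first approach can be read as a simple controller policy: estimate the demand of the pending task, then decide whether DRAM must be powered up or can be powered down in favor of the non-volatile device. The sketch below is hypothetical Python pseudologic; the thresholds, class names, and power-state handling are assumptions for illustration, not the patented implementation.

```python
# Hypothetical controller policy for choosing between DRAM and NVRAM.
# Device attributes and the adequacy test are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Task:
    required_bandwidth_gbs: float   # demand the host task places on memory
    latency_sensitive: bool


@dataclass
class MemoryDevice:
    name: str
    bandwidth_gbs: float
    powered: bool = True


def service_task(task: Task, dram: MemoryDevice, nvram: MemoryDevice) -> MemoryDevice:
    """Pick the device that services the task; power DRAM up or down accordingly."""
    nvram_is_adequate = (
        nvram.bandwidth_gbs >= task.required_bandwidth_gbs
        and not task.latency_sensitive
    )
    if nvram_is_adequate:
        dram.powered = False   # partially or fully power down DRAM to save power
        return nvram
    dram.powered = True        # high demand: power DRAM up to meet it
    return dram
```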

    Distributed computing based on memory as a service

    Publication (Announcement) No.: US11481334B2

    Publication (Announcement) Date: 2022-10-25

    Application No.: US17319002

    Application Date: 2021-05-12

    Abstract: Systems, methods and apparatuses of distributed computing based on Memory as a Service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to the local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the group for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time.
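
    The abstract describes a virtual address region that is backed first by local memory and later by a remote device's memory, with processing requests directed to whichever device currently backs the region. A hypothetical Python sketch of that bookkeeping follows; the class and method names are invented for illustration and are not from the patent.

```python
# Hypothetical bookkeeping for a virtual address region whose backing memory
# can move between the local device and a remote device (Memory as a Service).
from dataclasses import dataclass


@dataclass
class VirtualRegion:
    start: int
    length: int
    backing_device: str  # "local" or an identifier of a remote device


class MaaSMapper:
    def __init__(self, region: VirtualRegion):
        self.region = region

    def map_local(self) -> None:
        """First period: back the region with local memory while the app runs here."""
        self.region.backing_device = "local"

    def map_remote(self, device_id: str) -> None:
        """Second period: back the region with a remote device's memory."""
        self.region.backing_device = device_id

    def process(self, offset: int, size: int) -> str:
        """Ask whichever device currently backs the region to process data in place."""
        target = self.region.backing_device
        return f"request sent to {target}: process [{offset}, {offset + size})"
```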

    ACCELERATOR CHIP CONNECTING A SYSTEM ON A CHIP AND A MEMORY CHIP

    Publication (Announcement) No.: US20210081353A1

    Publication (Announcement) Date: 2021-03-18

    Application No.: US16573795

    Application Date: 2019-09-17

    Abstract: An accelerator chip, e.g., an artificial intelligence (AI) accelerator chip, that can connect a system on a chip (SoC) and a memory chip. The accelerator chip can have a first set of pins configured to connect to the memory chip via wiring, as well as a second set of pins configured to connect to the SoC via wiring. The accelerator chip can be configured to perform and accelerate application-specific computations (e.g., AI computations) for the SoC, as well as use the memory chip as memory for the application-specific computations. For example, the accelerator chip can be an AI accelerator chip configured to perform and accelerate AI computations for the SoC and to use the memory chip as memory for those computations.

    MEMORY CHIP CONNECTING A SYSTEM ON A CHIP AND AN ACCELERATOR CHIP

    Publication (Announcement) No.: US20210081337A1

    Publication (Announcement) Date: 2021-03-18

    Application No.: US16573805

    Application Date: 2019-09-17

    Abstract: A memory chip (e.g., DRAM) connecting a SoC and an accelerator chip (e.g., an AI accelerator chip). A system including the memory chip and the accelerator chip. The system can include the SoC. The memory chip can include first memory cells to store and provide computation input data (e.g., AI computation input data) received from the SoC to be used by the accelerator chip as computation input (e.g., AI computation input). The memory chip can include second memory cells to store and provide first computation output data (e.g., AI computation output data) received from the accelerator chip to be retrieved by the SoC or reused by the accelerator chip as computation input. The memory chip can also include third memory cells to store second computation output data (e.g., non-AI computation output data) related to non-AI tasks received from the SoC to be retrieved by the SoC for non-AI tasks.
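
    The partitioning this abstract describes (input data written by the SoC for the accelerator, accelerator output that the SoC retrieves or the accelerator reuses, and non-AI data kept for the SoC alone) can be pictured as a fixed address map. The Python model below is purely illustrative; the region names and boundaries are assumptions, not values from the patent.

```python
# Hypothetical address map for the three memory-cell regions described above.
# Region boundaries are arbitrary illustrative values.
REGION_MAP = {
    "ai_input":  (0x0000_0000, 0x0100_0000),  # written by the SoC, read by the accelerator
    "ai_output": (0x0100_0000, 0x0200_0000),  # written by the accelerator, retrieved by the
                                              # SoC or reused by the accelerator as input
    "non_ai":    (0x0200_0000, 0x0300_0000),  # non-AI data written and read back by the SoC
}


def region_for(address: int) -> str:
    """Return which region an address falls into."""
    for name, (start, end) in REGION_MAP.items():
        if start <= address < end:
            return name
    raise ValueError(f"address {address:#x} is outside the modeled map")
```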
