IMAGE PROCESSING ACCELERATOR
    Invention Application

    Publication Number: US20200210351A1

    Publication Date: 2020-07-02

    Application Number: US16234508

    Application Date: 2018-12-27

    Abstract: A processing accelerator includes a shared memory, and a stream accelerator, a memory-to-memory accelerator, and a common DMA controller coupled to the shared memory. The stream accelerator is configured to process a real-time data stream, and to store stream accelerator output data generated by processing the real-time data stream in the shared memory. The memory-to-memory accelerator is configured to retrieve input data from the shared memory, to process the input data, and to store, in the shared memory, memory-to-memory accelerator output data generated by processing the input data. The common DMA controller is configured to retrieve stream accelerator output data from the shared memory and transfer the stream accelerator output data to memory external to the processing accelerator; and to retrieve the memory-to-memory accelerator output data from the shared memory and transfer the memory-to-memory accelerator output data to memory external to the processing accelerator.
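
    The abstract above describes hardware, not code, but a small behavioral model can make the shared-memory data flow concrete. The Python sketch below is a rough simulation under assumed names (SharedMemory, StreamAccelerator, MemToMemAccelerator, CommonDmaController) and placeholder processing steps; none of these identifiers or operations come from the application itself.

# Behavioral sketch (not the patented hardware): models the data flow among a
# stream accelerator, a memory-to-memory accelerator, and a common DMA
# controller that all share one on-chip memory. Names are illustrative.

class SharedMemory:
    """On-chip shared memory modeled as named buffers."""
    def __init__(self):
        self.buffers = {}

    def write(self, name, data):
        self.buffers[name] = data

    def read(self, name):
        return self.buffers[name]


class StreamAccelerator:
    """Processes a real-time stream and stores its output in shared memory."""
    def __init__(self, shared_mem):
        self.shared_mem = shared_mem

    def process_stream(self, samples):
        # Placeholder processing step: scale each incoming sample.
        self.shared_mem.write("stream_out", [s * 2 for s in samples])


class MemToMemAccelerator:
    """Reads input from shared memory, processes it, writes output back."""
    def __init__(self, shared_mem):
        self.shared_mem = shared_mem

    def run(self, in_name, out_name):
        data = self.shared_mem.read(in_name)
        # Placeholder processing step: running sum over the input buffer.
        result = [sum(data[: i + 1]) for i in range(len(data))]
        self.shared_mem.write(out_name, result)


class CommonDmaController:
    """Moves both accelerators' outputs from shared memory to external memory."""
    def __init__(self, shared_mem, external_mem):
        self.shared_mem = shared_mem
        self.external_mem = external_mem

    def transfer_out(self, name):
        self.external_mem[name] = self.shared_mem.read(name)


if __name__ == "__main__":
    shared = SharedMemory()
    external = {}
    stream_accel = StreamAccelerator(shared)
    m2m_accel = MemToMemAccelerator(shared)
    dma = CommonDmaController(shared, external)

    stream_accel.process_stream([1, 2, 3, 4])   # real-time path
    m2m_accel.run("stream_out", "m2m_out")      # memory-to-memory path
    dma.transfer_out("stream_out")              # one DMA controller serves both
    dma.transfer_out("m2m_out")
    print(external)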

    MULTICHANNEL MEMORY ARBITRATION AND INTERLEAVING SCHEME

    Publication Number: US20240211414A1

    Publication Date: 2024-06-27

    Application Number: US18599649

    Application Date: 2024-03-08

    CPC classification number: G06F13/1647

    Abstract: Arbitration and interleaving are performed with respect to memory requests in a memory controller that includes a set of interfaces, each configured to be coupled to a respective one of multiple external requestors, in which each interface receives memory requests from its associated external requestor. The memory controller further includes multiple sets of memory channel queues, one set for each interface, and multiple requestor arbitration modules, each associated with and coupled to a respective one of the multiple sets of memory channels. The memory controller further includes an interconnect coupled to the multiple requestor arbitration modules. The interconnect includes multiple external memory arbitration modules. Each of the requestor arbitration modules applies an arbitration algorithm to arbitrate among the memory requests in the associated set of memory channel queues. Each of the external memory arbitration modules also applies an arbitration algorithm to arbitrate among memory requests presented by the requestor arbitration modules.
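
    The two-level arbitration summarized above can be illustrated with a short behavioral model. The Python sketch below assumes round-robin arbitration at both levels, since the abstract does not name a specific algorithm; the class and queue names are illustrative only.

# Minimal two-level arbitration sketch: per-requestor arbiters pick among that
# requestor's memory channel queues, and an external-memory arbiter in the
# interconnect picks among the winners they present. Round-robin is assumed.

from collections import deque


class RoundRobinArbiter:
    """Grants one pending request per call, rotating priority among queues."""
    def __init__(self, queues):
        self.queues = queues          # list of deques of pending requests
        self.next_index = 0

    def grant(self):
        n = len(self.queues)
        for offset in range(n):
            idx = (self.next_index + offset) % n
            if self.queues[idx]:
                self.next_index = (idx + 1) % n
                return self.queues[idx].popleft()
        return None                   # nothing pending this cycle


# Level 1: one arbiter per requestor interface, arbitrating among that
# requestor's own set of memory channel queues.
requestor_queues = [
    [deque(["r0-ch0-a", "r0-ch0-b"]), deque(["r0-ch1-a"])],   # requestor 0
    [deque(["r1-ch0-a"]), deque()],                            # requestor 1
]
requestor_arbiters = [RoundRobinArbiter(qs) for qs in requestor_queues]

# Level 2: the interconnect's external-memory arbiter, arbitrating among the
# requests presented by the requestor arbitration modules this cycle.
presented = [deque() for _ in requestor_arbiters]
for i, arbiter in enumerate(requestor_arbiters):
    winner = arbiter.grant()
    if winner is not None:
        presented[i].append(winner)

external_arbiter = RoundRobinArbiter(presented)
while (req := external_arbiter.grant()) is not None:
    print("issued to external memory:", req)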

    IMAGE PROCESSING ACCELERATOR
    Invention Application

    Publication Number: US20220114120A1

    Publication Date: 2022-04-14

    Application Number: US17558252

    Application Date: 2021-12-21

    Abstract: A processing accelerator includes a shared memory, and a stream accelerator, a memory-to-memory accelerator, and a common DMA controller coupled to the shared memory. The stream accelerator is configured to process a real-time data stream, and to store stream accelerator output data generated by processing the real-time data stream in the shared memory. The memory-to-memory accelerator is configured to retrieve input data from the shared memory, to process the input data, and to store, in the shared memory, memory-to-memory accelerator output data generated by processing the input data. The common DMA controller is configured to retrieve stream accelerator output data from the shared memory and transfer the stream accelerator output data to memory external to the processing accelerator; and to retrieve the memory-to-memory accelerator output data from the shared memory and transfer the memory-to-memory accelerator output data to memory external to the processing accelerator.

    PCIE PERIPHERAL SHARING
    Invention Application

    Publication Number: US20210209036A1

    Publication Date: 2021-07-08

    Application Number: US17073925

    Application Date: 2020-10-19

    Abstract: A peripheral proxy subsystem is placed between multiple hosts, each having a root controller, and single root input/output virtualization (SR-IOV) peripheral devices that are to be shared. The peripheral proxy subsystem provides a root controller for coupling to the endpoint of the SR-IOV peripheral device or devices and multiple endpoints for coupling to the root controllers of the hosts. The peripheral proxy subsystem maps the virtual functions of an SR-IOV peripheral device to the multiple endpoints as desired to allow the virtual functions to be allocated to the hosts. The physical function of the SR-IOV peripheral device is managed by the peripheral proxy subsystem to provide the desired number of virtual functions. The virtual functions of the SR-IOV peripheral device are then presented to the appropriate host as a physical function or a virtual function.
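
    The VF-to-host mapping described above can be sketched as a simple allocation table. The Python example below is an illustration under assumed names (build_vf_map, proxy-endpoint-*) and an assumed presentation rule; it is not driver code and does not reflect the application's actual implementation.

# Illustrative mapping sketch: models how a peripheral proxy might allocate an
# SR-IOV device's virtual functions (VFs) to the endpoints it exposes toward
# each host. Field names and the "pf" vs "vf" presentation rule are assumptions.

def build_vf_map(num_vfs, host_allocations):
    """host_allocations: {host_name: number_of_vfs_requested}."""
    if sum(host_allocations.values()) > num_vfs:
        raise ValueError("requested more VFs than the device provides")

    vf_map = {}
    next_vf = 0
    for host, count in host_allocations.items():
        assigned = list(range(next_vf, next_vf + count))
        next_vf += count
        vf_map[host] = {
            "endpoint": f"proxy-endpoint-{host}",
            # Assumed rule for illustration: a host allocated a single VF sees
            # it presented as a physical function; otherwise as virtual functions.
            "presented_as": "pf" if count == 1 else "vf",
            "vf_indices": assigned,
        }
    return vf_map


# Example: a 4-VF SR-IOV peripheral shared between two hosts.
print(build_vf_map(4, {"hostA": 1, "hostB": 2}))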

    MACHINE LEARNING MODEL WITH WATERMARKED WEIGHTS

    Publication Number: US20220012312A1

    Publication Date: 2022-01-13

    Application Number: US17487517

    Application Date: 2021-09-28

    Abstract: In some examples, a system includes storage storing a machine learning model, wherein the machine learning model comprises a plurality of layers comprising multiple weights. The system also includes a processing unit coupled to the storage and operable to group the weights in each layer into a plurality of partitions; determine a number of least significant bits to be used for watermarking in each of the plurality of partitions; insert one or more watermark bits into the determined least significant bits for each of the plurality of partitions; and scramble one or more of the weight bits to produce watermarked and scrambled weights. The system also includes an output device to provide the watermarked and scrambled weights to another device.
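
    The watermarking steps listed in the abstract (group weights into partitions, choose least-significant bits, insert watermark bits, scramble) can be illustrated on quantized 8-bit weights. In the Python sketch below, the partition size, the per-partition LSB count, and the XOR-based scrambling are assumptions made for illustration; the application does not commit to those specifics here.

# Minimal sketch of LSB watermarking plus scrambling on 8-bit weights.
# Partition size, LSB count, and the keyed XOR scramble are assumed values.

import random

def watermark_layer(weights, watermark_bits, partition_size=4, lsb_count=1, seed=7):
    """weights: list of unsigned 8-bit ints; watermark_bits: iterable of 0/1."""
    bits = iter(watermark_bits)
    rng = random.Random(seed)                 # scrambling key (assumed)
    out = []
    for start in range(0, len(weights), partition_size):
        partition = weights[start:start + partition_size]
        for w in partition:
            # Insert watermark bits into the reserved least-significant bits.
            for b in range(lsb_count):
                bit = next(bits, None)
                if bit is not None:
                    w = (w & ~(1 << b)) | (bit << b)
            # Scramble the watermarked weight (here: XOR with a keyed byte).
            out.append(w ^ rng.randrange(256))
    return out


if __name__ == "__main__":
    layer = [200, 17, 43, 250, 96, 5, 128, 64]
    marked = watermark_layer(layer, watermark_bits=[1, 0, 1, 1, 0, 0, 1, 0])
    print(marked)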

    MACHINE LEARNING MODEL WITH WATERMARKED WEIGHTS

    Publication Number: US20190205508A1

    Publication Date: 2019-07-04

    Application Number: US16188560

    Application Date: 2018-11-13

    CPC classification number: G06F21/16 G06N3/0472 G06N20/00

    Abstract: In some examples, a system includes storage storing a machine learning model, wherein the machine learning model comprises a plurality of layers comprising multiple weights. The system also includes a processing unit coupled to the storage and operable to group the weights in each layer into a plurality of partitions; determine a number of least significant bits to be used for watermarking in each of the plurality of partitions; insert one or more watermark bits into the determined least significant bits for each of the plurality of partitions; and scramble one or more of the weight bits to produce watermarked and scrambled weights. The system also includes an output device to provide the watermarked and scrambled weights to another device.
