1. IMAGE PROCESSING ACCELERATOR
    Invention application

    Publication No.: US20200210351A1

    Publication Date: 2020-07-02

    Application No.: US16234508

    Filing Date: 2018-12-27

    Abstract: A processing accelerator includes a shared memory, and a stream accelerator, a memory-to-memory accelerator, and a common DMA controller coupled to the shared memory. The stream accelerator is configured to process a real-time data stream, and to store stream accelerator output data generated by processing the real-time data stream in the shared memory. The memory-to-memory accelerator is configured to retrieve input data from the shared memory, to process the input data, and to store, in the shared memory, memory-to-memory accelerator output data generated by processing the input data. The common DMA controller is configured to retrieve stream accelerator output data from the shared memory and transfer the stream accelerator output data to memory external to the processing accelerator; and to retrieve the memory-to-memory accelerator output data from the shared memory and transfer the memory-to-memory accelerator output data to memory external to the processing accelerator.
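
    The abstract describes a dataflow rather than an API, but a short C sketch can make the roles concrete. All names and the placeholder processing below are hypothetical; the sketch only models the path in which both accelerators deposit output in the shared memory and one common DMA controller drains it to external memory.

    /* Minimal sketch (hypothetical names) of the dataflow in the abstract:
     * both accelerators write their output into a shared memory, and a single
     * common DMA controller transfers that output to external memory. */
    #include <stddef.h>
    #include <string.h>

    #define SHARED_MEM_SIZE 4096

    static unsigned char shared_mem[SHARED_MEM_SIZE];   /* shared memory inside the accelerator */
    static unsigned char external_mem[SHARED_MEM_SIZE]; /* memory external to the accelerator */

    /* Stream accelerator: processes a real-time sample and stores the result
     * in the shared memory. */
    static void stream_accel_process(unsigned char sample, size_t out_offset)
    {
        shared_mem[out_offset] = (unsigned char)(sample >> 1); /* placeholder processing */
    }

    /* Memory-to-memory accelerator: retrieves input from the shared memory,
     * processes it, and stores the result back in the shared memory. */
    static void mem2mem_accel_process(size_t in_offset, size_t out_offset)
    {
        shared_mem[out_offset] = (unsigned char)(shared_mem[in_offset] + 1); /* placeholder processing */
    }

    /* Common DMA controller: retrieves accelerator output from the shared
     * memory and transfers it to external memory. */
    static void common_dma_transfer(size_t src_offset, size_t dst_offset, size_t len)
    {
        memcpy(&external_mem[dst_offset], &shared_mem[src_offset], len);
    }

    int main(void)
    {
        stream_accel_process(0x80, 0); /* stream path output lands in shared memory */
        mem2mem_accel_process(0, 1);   /* memory-to-memory path reads and writes shared memory */
        common_dma_transfer(0, 0, 2);  /* one DMA engine drains both outputs */
        return 0;
    }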

2. ROBUST FRAME SIZE ERROR DETECTION AND RECOVERY MECHANISM TO MINIMIZE FRAME LOSS FOR CAMERA INPUT SUB-SYSTEMS

    Publication No.: US20210209390A1

    Publication Date: 2021-07-08

    Application No.: US16745589

    Filing Date: 2020-01-17

    IPC Classification: G06K9/03 G06K9/42 G06K9/00

    Abstract: An image data frame is received from an external source. An error concealment operation is performed on the received image data frame in response to determining that a first frame size of the received image data frame is erroneous. The first frame size of the image data frame is determined to be erroneous based on at least one frame synchronization signal associated with the image data frame. An image processing operation is performed on the received image data frame on which the error concealment operation has been performed, thereby enabling an image processing module to perform the image processing operation without entering into a deadlock state and thereby preventing a host processor from having to execute hardware resets of deadlocked modules.
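
    As a rough illustration of the flow in this abstract, the C sketch below checks the frame size implied by the synchronization signals against the configured size and pads short frames before they reach the image processing stage. The structure, sizes, and zero-padding concealment are assumptions, not the patented mechanism itself.

    /* Minimal sketch (hypothetical names and sizes): detect an erroneous frame
     * size and conceal it so the downstream image processing stage always
     * receives a full-size frame and never stalls. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define EXPECTED_LINES 480
    #define LINE_BYTES     640

    struct frame {
        size_t lines_received;                          /* derived from the frame synchronization signals */
        unsigned char data[EXPECTED_LINES][LINE_BYTES];
    };

    /* The frame size is erroneous if the synchronization signals indicate a
     * line count other than the configured one. */
    static bool frame_size_erroneous(const struct frame *f)
    {
        return f->lines_received != EXPECTED_LINES;
    }

    /* Error concealment: pad the missing lines (here with zeros) so the frame
     * presented downstream has the expected size. */
    static void conceal_frame_errors(struct frame *f)
    {
        for (size_t line = f->lines_received; line < EXPECTED_LINES; line++)
            memset(f->data[line], 0, LINE_BYTES);
        f->lines_received = EXPECTED_LINES;
    }

    /* Receive path: conceal before processing, so the image processing module
     * does not enter a deadlock state and no hardware reset is required. */
    void receive_frame(struct frame *f)
    {
        if (frame_size_erroneous(f))
            conceal_frame_errors(f);
        /* image_processing(f);  -- downstream stage always sees a full frame */
    }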

3. IMAGE PROCESSING ACCELERATOR
    Invention application

    Publication No.: US20220114120A1

    Publication Date: 2022-04-14

    Application No.: US17558252

    Filing Date: 2021-12-21

    Abstract: A processing accelerator includes a shared memory, and a stream accelerator, a memory-to-memory accelerator, and a common DMA controller coupled to the shared memory. The stream accelerator is configured to process a real-time data stream, and to store stream accelerator output data generated by processing the real-time data stream in the shared memory. The memory-to-memory accelerator is configured to retrieve input data from the shared memory, to process the input data, and to store, in the shared memory, memory-to-memory accelerator output data generated by processing the input data. The common DMA controller is configured to retrieve stream accelerator output data from the shared memory and transfer the stream accelerator output data to memory external to the processing accelerator; and to retrieve the memory-to-memory accelerator output data from the shared memory and transfer the memory-to-memory accelerator output data to memory external to the processing accelerator.

4. IMAGE PROCESSING ACCELERATOR
    Invention application

    Publication No.: US20200379928A1

    Publication Date: 2020-12-03

    Application No.: US16995364

    Filing Date: 2020-08-17

    Abstract: A processing accelerator includes a shared memory, and a stream accelerator, a memory-to-memory accelerator, and a common DMA controller coupled to the shared memory. The stream accelerator is configured to process a real-time data stream, and to store stream accelerator output data generated by processing the real-time data stream in the shared memory. The memory-to-memory accelerator is configured to retrieve input data from the shared memory, to process the input data, and to store, in the shared memory, memory-to-memory accelerator output data generated by processing the input data. The common DMA controller is configured to retrieve stream accelerator output data from the shared memory and transfer the stream accelerator output data to memory external to the processing accelerator; and to retrieve the memory-to-memory accelerator output data from the shared memory and transfer the memory-to-memory accelerator output data to memory external to the processing accelerator.

5. MACHINE LEARNING MODEL WITH WATERMARKED WEIGHTS

    Publication No.: US20220012312A1

    Publication Date: 2022-01-13

    Application No.: US17487517

    Filing Date: 2021-09-28

    Abstract: In some examples, a system includes storage storing a machine learning model, wherein the machine learning model comprises a plurality of layers comprising multiple weights. The system also includes a processing unit coupled to the storage and operable to group the weights in each layer into a plurality of partitions; determine a number of least significant bits to be used for watermarking in each of the plurality of partitions; insert one or more watermark bits into the determined least significant bits for each of the plurality of partitions; and scramble one or more of the weight bits to produce watermarked and scrambled weights. The system also includes an output device to provide the watermarked and scrambled weights to another device.
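
    A compact C sketch of these steps, applied to 8-bit quantized weights, follows. The partition size, the number of least significant bits used per weight, and the XOR-based scrambling are illustrative assumptions; the abstract does not specify them.

    /* Minimal sketch (hypothetical parameters): group a layer's 8-bit weights
     * into partitions, insert watermark bits into the chosen least significant
     * bits, then scramble the result. */
    #include <stdint.h>
    #include <stddef.h>

    #define PARTITION_SIZE 4      /* weights per partition (assumption) */
    #define WATERMARK_LSBS 2      /* least significant bits carrying the watermark (assumption) */
    #define SCRAMBLE_KEY   0x5AU  /* toy scrambling key (assumption) */

    /* Insert watermark bits into the selected least significant bits of one weight. */
    static uint8_t insert_watermark(uint8_t weight, uint8_t mark_bits)
    {
        uint8_t mask = (uint8_t)((1U << WATERMARK_LSBS) - 1U);
        return (uint8_t)((weight & (uint8_t)~mask) | (mark_bits & mask));
    }

    /* Scramble one weight; a key XOR stands in for the unspecified scrambling. */
    static uint8_t scramble(uint8_t weight)
    {
        return (uint8_t)(weight ^ SCRAMBLE_KEY);
    }

    /* Watermark and scramble every weight of one layer, partition by partition;
     * watermark[] holds one watermark value per partition. */
    void watermark_layer(uint8_t *weights, size_t count, const uint8_t *watermark)
    {
        for (size_t idx = 0; idx < count; idx++) {
            size_t partition = idx / PARTITION_SIZE;
            weights[idx] = scramble(insert_watermark(weights[idx], watermark[partition]));
        }
    }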

6. SCALABLE HARDWARE THREAD SCHEDULER

    Publication No.: US20210326174A1

    Publication Date: 2021-10-21

    Application No.: US17138649

    Filing Date: 2020-12-30

    Abstract: A device includes a hardware data processing node configured to execute a respective task, and a hardware thread scheduler including a hardware task scheduler. The hardware task scheduler is coupled to the hardware data processing node and has a producer socket, a consumer socket, and a spare socket. The spare socket is configured to provide data control signals also provided by a first socket of the producer and consumer sockets responsive to a memory-mapped register being a first value. The spare socket is configured to provide data control signals also provided by a second socket of the producer and consumer sockets responsive to the memory-mapped register being a second value.
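
    The spare-socket behavior reduces to a register-controlled signal mux, which the short C sketch below models. Register values, field names, and the signal structure are assumptions for illustration only.

    /* Minimal sketch (hypothetical names): a memory-mapped register selects
     * whether the spare socket mirrors the data control signals of the
     * producer socket or of the consumer socket. */
    #include <stdint.h>

    #define SPARE_MIRRORS_PRODUCER 0x1U  /* first register value (assumption) */
    #define SPARE_MIRRORS_CONSUMER 0x2U  /* second register value (assumption) */

    struct socket_signals {
        uint8_t pend;   /* illustrative data control signals */
        uint8_t dec;
    };

    struct task_scheduler {
        volatile uint32_t spare_select_reg;  /* memory-mapped register */
        struct socket_signals producer;
        struct socket_signals consumer;
        struct socket_signals spare;
    };

    /* Drive the spare socket with the same control signals as the socket the
     * memory-mapped register selects. */
    void update_spare_socket(struct task_scheduler *ts)
    {
        if (ts->spare_select_reg == SPARE_MIRRORS_PRODUCER)
            ts->spare = ts->producer;
        else if (ts->spare_select_reg == SPARE_MIRRORS_CONSUMER)
            ts->spare = ts->consumer;
        /* any other value leaves the spare socket unchanged */
    }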

7. MACHINE LEARNING MODEL WITH WATERMARKED WEIGHTS

    Publication No.: US20190205508A1

    Publication Date: 2019-07-04

    Application No.: US16188560

    Filing Date: 2018-11-13

    IPC Classification: G06F21/16 G06F15/18 G06N3/04

    Abstract: In some examples, a system includes storage storing a machine learning model, wherein the machine learning model comprises a plurality of layers comprising multiple weights. The system also includes a processing unit coupled to the storage and operable to group the weights in each layer into a plurality of partitions; determine a number of least significant bits to be used for watermarking in each of the plurality of partitions; insert one or more watermark bits into the determined least significant bits for each of the plurality of partitions; and scramble one or more of the weight bits to produce watermarked and scrambled weights. The system also includes an output device to provide the watermarked and scrambled weights to another device.