FEDERATED TEACHER-STUDENT MACHINE LEARNING

    Publication No.: US20220012637A1

    Publication Date: 2022-01-13

    Application No.: US17370462

    Filing Date: 2021-07-08

    Abstract: A node for a federated machine learning system that comprises the node and one or more other nodes configured for the same machine learning task, the node comprising: a federated student machine learning network configured to update a machine learning model in dependence upon updated machine learning models of the one or more other nodes; a teacher machine learning network; means for receiving unlabeled data; means for teaching, using supervised learning, at least the federated student machine learning network using the teacher machine learning network, wherein the teacher machine learning network is configured to receive the data and produce pseudo-labels for supervised learning using the data, and wherein the federated student machine learning network is configured to perform supervised learning in dependence upon the same received data and the pseudo-labels.
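The abstract describes a node where a frozen teacher network labels incoming unlabeled data, and the local student network then trains on those pseudo-labels before its model is shared with other nodes. A minimal sketch of that teacher-student step, using hypothetical linear classifiers in numpy (the model shapes, learning rate, and FedAvg comment are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: linear teacher and student classifiers, 2 classes.
D, C, N = 4, 2, 32                   # feature dim, classes, batch size
W_teacher = rng.normal(size=(D, C))  # frozen teacher weights (assumed pretrained)
W_student = np.zeros((D, C))         # local student weights to be trained

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# 1. The node receives unlabeled data.
x = rng.normal(size=(N, D))

# 2. The teacher network produces pseudo-labels for that data.
pseudo = softmax(x @ W_teacher).argmax(axis=1)

# 3. The student performs supervised learning on (data, pseudo-labels):
#    plain gradient descent on the cross-entropy loss.
lr = 0.5
for _ in range(200):
    p = softmax(x @ W_student)
    grad = x.T @ (p - np.eye(C)[pseudo]) / N
    W_student -= lr * grad

# 4. Federated step (sketched only): the updated W_student would now be
#    exchanged and averaged with the other nodes' student models (e.g. FedAvg).
agreement = (softmax(x @ W_student).argmax(axis=1) == pseudo).mean()
```

After training, the student's predictions on the received data should largely agree with the teacher's pseudo-labels, which is the supervised-distillation effect the claim relies on.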

    METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PROVIDING AN ATTENTION BLOCK FOR NEURAL NETWORK-BASED IMAGE AND VIDEO COMPRESSION

    Publication No.: US20240289590A1

    Publication Date: 2024-08-29

    Application No.: US18572100

    Filing Date: 2022-06-16

    CPC classification number: G06N3/045

    Abstract: Various embodiments provide a method, an apparatus, and a computer program product. The method comprises: defining an attention block comprising: a set of initial neural network layers, wherein each layer is caused to process an output of a previous layer, and wherein a first layer processes an input of a dense split attention block; core attention blocks caused to process one or more outputs of the set of initial neural network layers; a concatenation block for concatenating one or more outputs of the core attention blocks and at least one intermediate output of the set of initial neural network layers; one or more final neural network layers caused to process at least the output of the concatenation block; and a summation block caused to sum an output of the final neural network layers and an input to the attention block; and providing an output of the summation block as a final output of the attention block.
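The abstract lays out a specific dataflow: initial layers in sequence, core attention on their output, concatenation with an intermediate activation, final layers, and a residual summation with the block's input. A sketch of that wiring in numpy, where every layer is a hypothetical stand-in (random linear maps and a plain scaled dot-product self-attention; none of these choices come from the patent):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8

# Hypothetical stand-in parameters for the sketch.
W1 = rng.normal(size=(dim, dim))
W2 = rng.normal(size=(dim, dim))
W_final = rng.normal(size=(2 * dim, dim))

def core_attention(h):
    # Plain scaled dot-product self-attention (queries = keys = values = h),
    # used here as a stand-in for the patent's "core attention blocks".
    scores = h @ h.T / np.sqrt(h.shape[1])
    scores = scores - scores.max(axis=1, keepdims=True)
    a = np.exp(scores)
    a = a / a.sum(axis=1, keepdims=True)
    return a @ h

def attention_block(x):
    # Initial layers: each processes the output of the previous layer.
    h1 = np.tanh(x @ W1)     # intermediate output, kept for the concatenation
    h2 = np.tanh(h1 @ W2)
    # Core attention processes an output of the initial layers.
    att = core_attention(h2)
    # Concatenation block: attention output + an intermediate output.
    cat = np.concatenate([att, h1], axis=1)
    # Final layer processes the output of the concatenation block.
    out = cat @ W_final
    # Summation block: residual sum with the input to the attention block.
    return out + x

x = rng.normal(size=(5, dim))    # e.g. 5 feature vectors of dimension 8
y = attention_block(x)
```

Note that the residual summation forces the final layers to map the concatenated features back to the input dimension, which is why `W_final` has shape `(2 * dim, dim)` in this sketch.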
