-
Publication number: US20230196067A1
Publication date: 2023-06-22
Application number: US17554656
Application date: 2021-12-17
Applicant: Lemon Inc.
Inventor: Peng Wang , Dawei Sun , Xiaochen Lian
IPC: G06N3/04 , G06V10/776
CPC classification number: G06N3/0427 , G06N3/0454 , G06V10/776
Abstract: The present disclosure describes techniques for identifying an optimal scheme of knowledge distillation (KD) for vision tasks. The techniques comprise: configuring a search space by establishing a plurality of pathways between a teacher network and a student network and assigning an importance factor to each of the plurality of pathways; searching for the optimal KD scheme by updating the importance factors and the parameters of the student network during a process of training the student network; and performing KD from the teacher network to the student network by retraining the student network based at least in part on the optimized importance factors.
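A minimal pure-Python sketch of the pathway-weighting idea the abstract describes: each candidate teacher-to-student pathway carries a learnable importance factor, and the per-pathway distillation losses are combined under a softmax over those factors during the search phase. All names here (`Pathway`, `total_kd_loss`, the layer counts) are illustrative assumptions, not taken from the patent.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class Pathway:
    """One candidate distillation pathway between a teacher layer
    and a student layer, with a learnable importance factor."""
    def __init__(self, teacher_layer, student_layer):
        self.teacher_layer = teacher_layer
        self.student_layer = student_layer
        self.importance = 0.0  # updated jointly with student parameters

# Configure the search space: every teacher layer may distill
# to every student layer.
teacher_layers = [0, 1, 2, 3]
student_layers = [0, 1]
pathways = [Pathway(t, s) for t in teacher_layers for s in student_layers]

def total_kd_loss(per_pathway_losses, pathways):
    """Weight each pathway's distillation loss by its normalized
    importance factor; the search updates the factors so that
    useful pathways dominate the total loss."""
    weights = softmax([p.importance for p in pathways])
    return sum(w * l for w, l in zip(weights, per_pathway_losses))
```

After the search converges, the abstract's final step corresponds to keeping the optimized factors fixed and retraining the student under the resulting weighted loss.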
-
Publication number: US20220391636A1
Publication date: 2022-12-08
Application number: US17342486
Application date: 2021-06-08
Applicant: Lemon Inc.
Inventor: Xiaochen Lian , Linjie Yang , Peng Wang , Xiaojie Jin , Mingyu Ding
Abstract: Systems and methods for searching a search space are disclosed. Some examples may include using a first parallel module including a first plurality of stacked searching blocks and a second plurality of stacked searching blocks to output first feature maps of a first resolution and to output second feature maps of a second resolution. In some examples, a fusion module may include a plurality of searching blocks, where the fusion module is configured to generate multiscale feature maps by fusing one or more feature maps of the first resolution received from the first parallel module with one or more feature maps of the second resolution received from the first parallel module, and wherein the fusion module is configured to output the multiscale feature maps and output third feature maps of a third resolution.
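The fusion step the abstract describes can be sketched in plain Python: feature maps of two resolutions are merged (here by nearest-neighbour upsampling and elementwise addition, a common choice that the patent does not necessarily prescribe), and a third, coarser resolution is derived from the fused result. The helper names and the tiny map sizes are assumptions for illustration only.

```python
def nearest_resize(fmap, new_h, new_w):
    """Resize a 2-D feature map (list of lists) to new_h x new_w
    with nearest-neighbour sampling."""
    h, w = len(fmap), len(fmap[0])
    return [[fmap[i * h // new_h][j * w // new_w] for j in range(new_w)]
            for i in range(new_h)]

def fuse(fmap_hi, fmap_lo):
    """Upsample the lower-resolution map to the higher resolution
    and add the two elementwise."""
    h, w = len(fmap_hi), len(fmap_hi[0])
    up = nearest_resize(fmap_lo, h, w)
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(fmap_hi, up)]

first = [[1, 2, 3, 4] for _ in range(4)]   # 4x4 map, "first resolution"
second = [[10, 20], [30, 40]]              # 2x2 map, "second resolution"

multiscale = fuse(first, second)              # fused multiscale 4x4 map
third = nearest_resize(multiscale, 1, 1)      # coarser "third resolution"
```

In the patent's setting these maps would be produced by the stacked searching blocks of the parallel module, with the fusion module emitting both the multiscale maps and the third-resolution output.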
-
Publication number: US20220391635A1
Publication date: 2022-12-08
Application number: US17342483
Application date: 2021-06-08
Applicant: Lemon Inc.
Inventor: Xiaochen Lian , Mingyu Ding , Linjie Yang , Peng Wang , Xiaojie Jin
Abstract: Systems and methods for obtaining attention features are described. Some examples may include: receiving, at a projector of a transformer, a plurality of tokens associated with image features of a first dimensional space; generating, at the projector of the transformer, projected features by concatenating the plurality of tokens with a positional map, the projected features having a second dimensional space that is less than the first dimensional space; receiving, at an encoder of the transformer, the projected features and generating encoded representations of the projected features using self-attention; decoding, at a decoder of the transformer, the encoded representations and obtaining a decoded output; and projecting the decoded output to the first dimensional space and adding the image features of the first dimensional space to obtain attention features associated with the image features.
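The projector's data flow can be sketched as follows: each token from the high-dimensional first space is concatenated with its positional-map entry, projected down to the smaller second space, passed through the (elided) encoder/decoder, then projected back up and added residually to the original image features. The fixed-weight `linear` stand-in and all dimension choices are assumptions for illustration; the patent's projections would be learned.

```python
D_HIGH, D_LOW, N_TOKENS = 8, 4, 3  # first space > second space

def linear(vec, out_dim):
    """Stand-in for a learned linear projection: deterministic
    fixed weights, just to make the shapes concrete."""
    return [sum(x * 0.01 * (k + 1) for x in vec) for k in range(out_dim)]

tokens = [[float(i + j) for j in range(D_HIGH)] for i in range(N_TOKENS)]
pos_map = [[0.1 * i, 0.2 * i] for i in range(N_TOKENS)]  # 2-D positions

# Projector: concatenate each token with its positional entry,
# then project into the smaller second dimensional space.
projected = [linear(tok + pos, D_LOW) for tok, pos in zip(tokens, pos_map)]

# Encoder self-attention and decoding are elided in this sketch.
decoded = projected

# Project the decoded output back to the first dimensional space and
# add the original image features to obtain the attention features.
attention_features = [
    [u + x for u, x in zip(linear(dec, D_HIGH), tok)]
    for dec, tok in zip(decoded, tokens)
]
```

The point of the down-projection is that self-attention then runs in the cheaper second space, while the residual addition keeps the output aligned with the original feature dimensionality.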
-