SYSTEM AND METHOD FOR DEEP LEARNING IMAGE SUPER RESOLUTION

    Publication Number: US20200090305A1

    Publication Date: 2020-03-19

    Application Number: US16693146

    Application Date: 2019-11-22

    Abstract: In a method for super resolution imaging, the method includes: receiving, by a processor, a low resolution image; generating, by the processor, an intermediate high resolution image having an improved resolution compared to the low resolution image; generating, by the processor, a final high resolution image based on the intermediate high resolution image and the low resolution image; and transmitting, by the processor, the final high resolution image to a display device for display thereby.
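The abstract describes a two-stage pipeline: an intermediate high resolution image is produced first, and the final image is then derived from both the intermediate image and the original low resolution input. A minimal sketch of that data flow follows; the nearest-neighbor upscaler and the residual-style refinement are hypothetical stand-ins for the learned networks, not the patented method itself.

```python
import numpy as np

def nearest_upscale(img, scale):
    """Nearest-neighbor upscaling; a stand-in for a learned upscaling network."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def super_resolve(low_res, scale=2):
    # Stage 1: generate an intermediate high resolution image with
    # improved resolution compared to the low resolution input.
    intermediate = nearest_upscale(low_res, scale)
    # Stage 2: generate the final high resolution image from BOTH the
    # intermediate image and the low resolution input. Here a small
    # correction re-derived from the low resolution image stands in
    # for the second learned stage (assumption, not the actual method).
    residual = nearest_upscale(low_res, scale) - intermediate.mean()
    final = intermediate + 0.1 * residual
    return final
```

The key structural point the sketch preserves is that stage 2 consumes the low resolution input again rather than operating on the intermediate image alone.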

    SYSTEM AND METHOD FOR DEEP LEARNING IMAGE SUPER RESOLUTION

    Publication Number: US20180293707A1

    Publication Date: 2018-10-11

    Application Number: US15671036

    Application Date: 2017-08-07

    Abstract: In a method for super resolution imaging, the method includes: receiving, by a processor, a low resolution image; generating, by the processor, an intermediate high resolution image having an improved resolution compared to the low resolution image; generating, by the processor, a final high resolution image based on the intermediate high resolution image and the low resolution image; and transmitting, by the processor, the final high resolution image to a display device for display thereby.

    VIDEO DEPTH ESTIMATION BASED ON TEMPORAL ATTENTION

    Publication Number: US20240346673A1

    Publication Date: 2024-10-17

    Application Number: US18676414

    Application Date: 2024-05-28

    Abstract: A method of depth detection based on a plurality of video frames includes receiving a plurality of input frames including a first input frame, a second input frame, and a third input frame respectively corresponding to different capture times, convolving the first to third input frames to generate a first feature map, a second feature map, and a third feature map corresponding to the different capture times, calculating a temporal attention map based on the first to third feature maps, the temporal attention map including a plurality of weights corresponding to different pairs of feature maps from among the first to third feature maps, each weight of the plurality of weights indicating a similarity level of a corresponding pair of feature maps, and applying the temporal attention map to the first to third feature maps to generate a feature map with temporal attention.
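The steps in the abstract — convolve three frames into feature maps, compute one similarity weight per pair of feature maps, then apply those weights back to the maps — can be sketched as below. The toy convolution kernel and the use of cosine similarity as the pairwise weight are assumptions for illustration; the patent's learned layers would replace both.

```python
import numpy as np

def conv_features(frame, kernel):
    # Valid-mode 2D cross-correlation; stands in for the learned
    # convolutional feature extractor applied to each input frame.
    h, w = frame.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

def temporal_attention(feature_maps):
    # One weight per pair of feature maps; each weight indicates the
    # similarity of that pair (cosine similarity as a stand-in).
    n = len(feature_maps)
    flat = [f.ravel() for f in feature_maps]
    weights = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            weights[a, b] = flat[a] @ flat[b] / (
                np.linalg.norm(flat[a]) * np.linalg.norm(flat[b]) + 1e-8)
    return weights

def apply_attention(feature_maps, weights):
    # Weighted combination of the per-frame feature maps yields a
    # feature map with temporal attention for each reference frame.
    stacked = np.stack(feature_maps)            # (n, h, w)
    return np.einsum('ab,bhw->ahw', weights, stacked)
```

A usage example: extract feature maps from three consecutive frames, build the attention map, and apply it.

```python
frames = [np.arange(25.0).reshape(5, 5) + t for t in range(3)]
kernel = np.ones((2, 2))
fmaps = [conv_features(f, kernel) for f in frames]
weights = temporal_attention(fmaps)             # (3, 3) pairwise similarities
attended = apply_attention(fmaps, weights)      # (3, 4, 4) attended features
```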

    Video depth estimation based on temporal attention

    Publication Number: US11995856B2

    Publication Date: 2024-05-28

    Application Number: US18080599

    Application Date: 2022-12-13

    Abstract: A method of depth detection based on a plurality of video frames includes receiving a plurality of input frames including a first input frame, a second input frame, and a third input frame respectively corresponding to different capture times, convolving the first to third input frames to generate a first feature map, a second feature map, and a third feature map corresponding to the different capture times, calculating a temporal attention map based on the first to third feature maps, the temporal attention map including a plurality of weights corresponding to different pairs of feature maps from among the first to third feature maps, each weight of the plurality of weights indicating a similarity level of a corresponding pair of feature maps, and applying the temporal attention map to the first to third feature maps to generate a feature map with temporal attention.
