-
Publication number: US20230230265A1
Publication date: 2023-07-20
Application number: US18098940
Application date: 2023-01-19
Inventor: Myung Sik YOO , Minh Tri NGUYEN
IPC: G06T7/55 , G06V20/56 , G06N3/0475
CPC classification number: G06T7/55 , G06V20/56 , G06N3/0475 , G06T2207/20084
Abstract: Provided are a patch-GAN-based depth completion method and apparatus for an autonomous vehicle. The patch-GAN-based depth completion apparatus according to the present invention comprises a processor; and a memory connected to the processor, wherein the memory stores program instructions executable by the processor for performing operations in a generating unit of a generative adversarial network (GAN) comprising a first branch and a second branch based on an encoder-decoder, the operations comprising: receiving an RGB image and a sparse depth image through a camera and a LiDAR; generating a dense first depth map by processing color information of the RGB image through the first branch; generating a dense second depth map by up-sampling the sparse depth image through the second branch; generating a dense final depth map by fusing the first depth map and the second depth map; and determining, by a discriminating unit of the generative adversarial network, whether the final depth map is fake or real by dividing the final depth map and depth measurement data into a plurality of patches.
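The discriminating unit above judges real vs. fake per patch rather than per image. A minimal sketch of that patch-level step, assuming non-overlapping square patches and a placeholder scalar scorer (the patent's discriminator would be a learned CNN; `split_into_patches` and `patch_scores` are hypothetical names, not from the patent):

```python
import numpy as np

def split_into_patches(depth_map, patch_size):
    """Divide a dense depth map into non-overlapping square patches.

    Returns an array of shape (num_patches, patch_size, patch_size).
    """
    h, w = depth_map.shape
    # Crop to a multiple of the patch size so the grid tiles evenly.
    h_crop, w_crop = h - h % patch_size, w - w % patch_size
    cropped = depth_map[:h_crop, :w_crop]
    return (
        cropped
        .reshape(h_crop // patch_size, patch_size, w_crop // patch_size, patch_size)
        .transpose(0, 2, 1, 3)            # group rows/cols of patches together
        .reshape(-1, patch_size, patch_size)
    )

def patch_scores(patches):
    # Placeholder discriminator: a real patch GAN runs a small CNN over
    # each patch; here each patch maps to a scalar "realness" in (0, 1).
    return 1.0 / (1.0 + np.exp(-patches.mean(axis=(1, 2))))
```

Each patch score can then be compared against the corresponding patch of the depth measurement data, giving the generator a dense, local training signal.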
-
Publication number: US20230230269A1
Publication date: 2023-07-20
Application number: US18098904
Application date: 2023-01-19
Inventor: Myung Sik YOO , Minh Tri NGUYEN
CPC classification number: G06T7/579 , G06T7/248 , G06T3/4046 , G06T2207/10028 , G06T2207/10024 , G06T2207/30241 , G06T2207/20221 , G06T2207/20084
Abstract: Provided are a depth completion method and apparatus using spatial-temporal information. The depth completion apparatus according to the present invention comprises a processor; and a memory connected to the processor, wherein the memory stores program instructions executable by the processor for performing operations comprising: receiving an RGB image and a sparse depth image through a camera and a LiDAR; generating a dense first depth map by processing color information of the RGB image through a first branch based on an encoder-decoder; generating a dense second depth map by up-sampling the sparse depth image through a second branch based on an encoder-decoder; generating a third depth map by fusing the first depth map and the second depth map; and generating a final depth map, including the trajectory of a moving object captured in RGB images taken continuously during movement, by inputting the third depth map to a convolutional long short-term memory (ConvLSTM) network.
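The fusion step above merges the color-branch and LiDAR-branch predictions into one dense map. The abstract does not specify the fusion rule; the sketch below assumes a per-pixel convex combination weighted by network-predicted confidence logits, a common choice in two-branch depth completion (`fuse_depth_maps` is an illustrative name, not from the patent):

```python
import numpy as np

def fuse_depth_maps(d_color, d_lidar, logits):
    """Fuse two dense depth maps by a per-pixel convex combination.

    `logits` would be predicted by the network alongside the two depth
    maps; a sigmoid turns them into the weight given to the color-branch
    estimate at each pixel, with the remainder going to the LiDAR branch.
    """
    w = 1.0 / (1.0 + np.exp(-logits))   # per-pixel weight in (0, 1)
    return w * d_color + (1.0 - w) * d_lidar
```

With zero logits the fusion reduces to a plain average; large positive logits let the network fall back on the color branch where LiDAR returns are sparse, and vice versa.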
-