-
Publication No.: US10373317B1
Publication Date: 2019-08-06
Application No.: US16254545
Filing Date: 2019-01-22
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A method for attention-driven image segmentation using at least one adaptive loss weight map is provided, to be used for updating the HD maps required to satisfy level 4 of autonomous driving. By this method, vague objects such as lanes and road markers at a distance can be detected more accurately. The method can also be useful in military applications, where identification of friend or foe is important, by distinguishing aircraft markings or military uniforms at a distance. The method includes steps of: a learning device instructing a softmax layer to generate softmax scores; instructing a loss weight layer to generate loss weight values by applying loss weight operations to predicted error values; and instructing a softmax loss layer to generate adjusted softmax loss values by referring to the loss weight values and to initial softmax loss values, which are generated by referring to the softmax scores and their corresponding GTs.
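A minimal sketch of the adjusted loss computation described in the abstract, assuming the predicted error values form a per-pixel map in [0, 1] and that the loss weight operation is a simple linear scale-and-bias (both are assumptions for illustration, not the patent's exact operation):

```python
import torch
import torch.nn.functional as F

def adjusted_softmax_loss(logits, gt, predicted_error, scale=1.0, bias=1.0):
    # logits: (N, C, H, W) segmentation outputs, gt: (N, H, W) class indices,
    # predicted_error: (N, H, W) per-pixel predicted error in [0, 1].
    softmax_scores = F.softmax(logits, dim=1)                     # softmax layer
    loss_weight = scale * predicted_error + bias                  # loss weight layer (assumed linear op)
    initial_loss = F.cross_entropy(logits, gt, reduction="none")  # initial softmax loss per pixel
    adjusted_loss = initial_loss * loss_weight                    # softmax loss layer
    return adjusted_loss.mean(), softmax_scores
```

Weighting the per-pixel loss this way increases the gradient contribution of pixels the network is expected to get wrong, such as distant lanes and road markers.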
-
Publication No.: US10373027B1
Publication Date: 2019-08-06
Application No.: US16262142
Filing Date: 2019-01-30
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A method for acquiring a sample image for label-inspecting among auto-labeled images used for learning a deep learning network, optimizing the sampling process for manual labeling, and reducing annotation costs is provided. The method includes steps of: a sample image acquiring device generating a first and a second image, instructing convolutional layers to generate a first and a second feature map, instructing pooling layers to generate a first and a second pooled feature map, and generating concatenated feature maps; instructing a deep learning classifier to acquire the concatenated feature maps, to thereby generate class information; and calculating probabilities of abnormal class elements in an abnormal class group, determining whether the auto-labeled image is a difficult image, and selecting the auto-labeled image as the sample image for label-inspecting. Further, the method can be performed by using a robust algorithm with multiple transform pairs. By this method, hazardous situations are detected more accurately.
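A hedged sketch of the final selection step, assuming the classifier outputs one logit per class and that the abnormal class group is given as a list of class indices; the probability threshold is an assumed value, not taken from the patent.

```python
import torch
import torch.nn.functional as F

def is_difficult_image(class_logits, abnormal_class_ids, threshold=0.5):
    # class_logits: (num_classes,) output of the deep learning classifier.
    probs = F.softmax(class_logits, dim=-1)
    abnormal_prob = probs[abnormal_class_ids].sum()   # total mass of the abnormal class group
    return abnormal_prob.item() > threshold

# Hypothetical 4-class classifier whose classes 2 and 3 form the abnormal group:
logits = torch.tensor([0.2, 0.1, 1.5, 0.9])
print(is_difficult_image(logits, abnormal_class_ids=[2, 3]))   # True -> send for label-inspecting
```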
-
Publication No.: US10373026B1
Publication Date: 2019-08-06
Application No.: US16259355
Filing Date: 2019-01-28
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A method of learning to derive virtual feature maps from virtual images, whose characteristics are the same as or similar to those of real feature maps derived from real images, by using a GAN including a generating network and a discriminating network capable of being applied to domain adaptation, is provided for use in virtual driving environments. The method includes steps of: (a) a learning device instructing the generating network to apply convolutional operations to an input image, to thereby generate an output feature map whose characteristics are the same as or similar to those of the real feature maps; and (b) instructing a loss unit to generate losses by referring to an evaluation score, corresponding to the output feature map, generated by the discriminating network. By this method, which uses a runtime input transformation, the gap between virtuality and reality can be reduced, and annotation costs can be reduced.
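A minimal sketch of step (b): the generating network's loss is driven by the discriminating network's evaluation score on the output feature map. The small convolutional generator and the binary cross-entropy adversarial loss below are illustrative assumptions; the patent does not specify layer sizes or the exact loss form.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneratingNetwork(nn.Module):
    # Illustrative generator: a few convolutions producing an output feature map.
    def __init__(self, in_ch=3, out_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def generating_network_loss(output_feature_map, discriminating_network):
    # The discriminating network returns an evaluation score in (0, 1);
    # the generating network is trained to push that score toward "real" (1).
    score = discriminating_network(output_feature_map)
    return F.binary_cross_entropy(score, torch.ones_like(score))
```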
-
Publication No.: US10373023B1
Publication Date: 2019-08-06
Application No.: US16258877
Filing Date: 2019-01-28
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A method for learning a runtime input transformation of real images into virtual images by using a cycle GAN capable of being applied to domain adaptation is provided. The method can also be performed in virtual driving environments. The method includes steps of: (a) (i) instructing a first transformer to transform a first image into a second image, (ii-1) instructing a first discriminator to generate a 1_1-st result, and (ii-2) instructing a second transformer to transform the second image into a third image, whose characteristics are the same as or similar to those of the real images; (b) (i) instructing the second transformer to transform a fourth image into a fifth image, (ii-1) instructing a second discriminator to generate a 2_1-st result, and (ii-2) instructing the first transformer to transform the fifth image into a sixth image; and (c) calculating losses. By this method, the gap between virtuality and reality can be reduced, and annotation costs can be reduced.
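A hedged sketch of the two cycles and the loss in step (c), assuming standard adversarial and L1 cycle-consistency terms; the cycle weight `lam` and the exact loss forms are assumptions, not the patent's formulation.

```python
import torch
import torch.nn.functional as F

def cycle_gan_losses(first_image, fourth_image, G, H, D_virtual, D_real, lam=10.0):
    # G: real -> virtual (first transformer), H: virtual -> real (second transformer).
    # Forward cycle: first -> second -> third image.
    second_image = G(first_image)
    third_image = H(second_image)            # should resemble the first image
    # Backward cycle: fourth -> fifth -> sixth image.
    fifth_image = H(fourth_image)
    sixth_image = G(fifth_image)             # should resemble the fourth image

    score_virtual = D_virtual(second_image)  # 1_1-st result
    score_real = D_real(fifth_image)         # 2_1-st result
    adversarial = (
        F.binary_cross_entropy(score_virtual, torch.ones_like(score_virtual))
        + F.binary_cross_entropy(score_real, torch.ones_like(score_real))
    )
    cycle = F.l1_loss(third_image, first_image) + F.l1_loss(sixth_image, fourth_image)
    return adversarial + lam * cycle
```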
-
Publication No.: US10373004B1
Publication Date: 2019-08-06
Application No.: US16263123
Filing Date: 2019-01-31
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A method for detecting lane elements, which are unit regions including pixels of lanes in an input image, to plan the driving path of an autonomous vehicle by using a horizontal filter mask is provided. The method includes steps of: a computing device acquiring a segmentation score map from a CNN using the input image; instructing a post-processing module, capable of performing data processing at an output end of the CNN, to generate a magnitude map by using the segmentation score map and the horizontal filter mask; instructing the post-processing module to determine lane element candidates for each row of the segmentation score map by referring to values of the magnitude map; and instructing the post-processing module to apply estimation operations to the lane element candidates of each row, to thereby detect the lane elements.
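A minimal sketch of the magnitude-map and per-row candidate steps, assuming a simple [-1, 0, +1] horizontal filter mask and a fixed number of per-row peaks; both are illustrative assumptions rather than the patent's exact choices.

```python
import torch
import torch.nn.functional as F

def lane_element_candidates(seg_score_map, top_k=5):
    # seg_score_map: (H, W) lane scores produced by the CNN.
    h_mask = torch.tensor([[[[-1.0, 0.0, 1.0]]]])                    # 1x1x1x3 horizontal filter mask
    x = seg_score_map[None, None]                                     # (1, 1, H, W)
    magnitude_map = F.conv2d(x, h_mask, padding=(0, 1)).abs()[0, 0]   # (H, W)
    # Per row, keep the columns with the strongest horizontal response as candidates.
    candidates = magnitude_map.topk(top_k, dim=1).indices             # (H, top_k) column indices
    return magnitude_map, candidates
```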
-
Publication No.: US10372573B1
Publication Date: 2019-08-06
Application No.: US16258841
Filing Date: 2019-01-28
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
IPC Classification: G06F11/263, G06F11/00, G06F11/26
Abstract: A method for generating one or more test patterns and selecting optimized test patterns among them to verify the integrity of convolution operations is provided, for fault tolerance, robustness against fluctuation in extreme situations, functional safety of the convolution operations, and annotation cost reduction. The method includes: a computing device (a) instructing a pattern generating unit to generate the test patterns by using a certain function such that saturation does not occur while at least one original CNN applies the convolution operations to the test patterns; (b) instructing a pattern evaluation unit to generate an evaluation score for each of the test patterns by referring to the test patterns and one or more parameters of the original CNN; and (c) instructing a pattern selection unit to select the optimized test patterns among the test patterns by referring to the evaluation scores.
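A hedged sketch of the generate-evaluate-select pipeline. The bounded random generator, the evaluation score (mean absolute CNN response), and the number of selected patterns are all stand-in assumptions; the patent's actual generating function and evaluation criterion are not reproduced here.

```python
import torch

def generate_test_patterns(num_patterns, shape, max_val=0.5):
    # Bounded random patterns so that convolution outputs stay away from saturation.
    return [max_val * torch.rand(shape) for _ in range(num_patterns)]

def evaluate_pattern(pattern, cnn):
    # Stand-in evaluation score derived from the CNN's response to the pattern.
    with torch.no_grad():
        out = cnn(pattern[None])
    return float(out.abs().mean())

def select_optimized_patterns(patterns, cnn, k=3):
    scores = [evaluate_pattern(p, cnn) for p in patterns]
    order = sorted(range(len(patterns)), key=lambda i: scores[i])
    return [patterns[i] for i in order[:k]]
```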
-
Publication No.: US10325179B1
Publication Date: 2019-06-18
Application No.: US16254982
Filing Date: 2019-01-23
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A method for pooling at least one ROI by using one or more masking parameters is provided. The method is applicable to mobile devices, compact networks, and the like via hardware optimization. The method includes steps of: (a) a computing device, if an input image is acquired, instructing a convolutional layer of a CNN to generate a feature map corresponding to the input image; (b) the computing device instructing an RPN of the CNN to determine the ROI corresponding to at least one object included in the input image by using the feature map; and (c) the computing device instructing an ROI pooling layer of the CNN to apply each pooling operation to its corresponding sub-region of the ROI by referring to the masking parameter corresponding to that pooling operation, to thereby generate a masked pooled feature map.
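A minimal sketch of step (c), assuming a 2x2 output grid, binary masking parameters shaped like each sub-region, and max pooling as the pooling operation; these specifics are assumptions for illustration.

```python
import torch

def masked_roi_pooling(feature_map, roi, masks, out_size=2):
    # feature_map: (C, H, W); roi: (x1, y1, x2, y2) in pixel coordinates;
    # masks: list of out_size*out_size binary tensors, each (C, sub_h, sub_w).
    x1, y1, x2, y2 = roi
    crop = feature_map[:, y1:y2, x1:x2]
    C, H, W = crop.shape
    sub_h, sub_w = H // out_size, W // out_size
    pooled = torch.zeros(C, out_size, out_size)
    for i in range(out_size):
        for j in range(out_size):
            sub = crop[:, i * sub_h:(i + 1) * sub_h, j * sub_w:(j + 1) * sub_w]
            mask = masks[i * out_size + j]              # masking parameter for this sub-region
            pooled[:, i, j] = (sub * mask).flatten(1).max(dim=1).values
    return pooled                                        # masked pooled feature map
```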
-
Publication No.: US10311338B1
Publication Date: 2019-06-04
Application No.: US16132368
Filing Date: 2018-09-15
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A learning method of a CNN capable of detecting one or more lanes is provided. The learning method includes steps of: a learning device (a) applying convolution operations to an image to generate a feature map, and generating lane candidate information; (b) generating a first pixel data map including information on pixels in the image and their corresponding pieces of first data, wherein main subsets of the first data include distance values from the pixels to their nearest first lane candidates obtained by direct regression, and generating a second pixel data map including information on the pixels and their corresponding pieces of second data, wherein main subsets of the second data include distance values from the pixels to their nearest second lane candidates obtained by the direct regression; and (c) detecting the lanes by inference from the first pixel data map and the second pixel data map.
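A hedged, one-row sketch of the distance values that make up one pixel data map: for every pixel of a row, the signed horizontal distance to its nearest lane candidate in that row. Reducing the map to a single row is a simplification for illustration.

```python
import torch

def row_distance_data(lane_candidate_cols, width):
    # lane_candidate_cols: column indices of lane candidates in this row.
    cols = torch.arange(width, dtype=torch.float32)               # pixel positions
    cand = torch.tensor(lane_candidate_cols, dtype=torch.float32)
    dists = cols[:, None] - cand[None, :]                          # (W, K) signed distances
    nearest_idx = dists.abs().argmin(dim=1, keepdim=True)          # nearest candidate per pixel
    return dists.gather(1, nearest_idx).squeeze(1)                 # (W,) distance values

# Example: a 64-pixel-wide row with lane candidates at columns 10 and 40.
print(row_distance_data([10, 40], width=64)[:5])   # tensor([-10.,  -9.,  -8.,  -7.,  -6.])
```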
-
Publication No.: US10311321B1
Publication Date: 2019-06-04
Application No.: US16171601
Filing Date: 2018-10-26
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A method for learning parameters of a CNN based on regression losses is provided. The method includes steps of: a learning device instructing first to n-th convolutional layers to generate first to n-th encoded feature maps; instructing n-th to first deconvolutional layers to generate n-th to first decoded feature maps from the n-th encoded feature map; generating an obstacle segmentation result by referring to a feature of the decoded feature maps; generating the regression losses by referring to, for each column of a specific decoded feature map, the distance between the row where the bottom line of the nearest obstacle is estimated to be located and the row where it is truly located on a GT; and backpropagating the regression losses, to thereby learn the parameters.
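A minimal sketch of the regression loss: per column, the difference between the estimated row of the nearest obstacle's bottom line and the true row on the GT. The soft-argmax used to obtain estimated rows and the L1 form of the loss are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def soft_argmax_rows(decoded_map):
    # decoded_map: (N, H, W) per-row scores; returns (N, W) expected row index per column.
    probs = F.softmax(decoded_map, dim=1)
    rows = torch.arange(decoded_map.size(1), dtype=torch.float32)
    return (probs * rows[None, :, None]).sum(dim=1)

def bottom_line_regression_loss(decoded_map, gt_rows):
    # gt_rows: (N, W) true row of the nearest obstacle's bottom line per column.
    estimated_rows = soft_argmax_rows(decoded_map)
    return F.l1_loss(estimated_rows, gt_rows.float())
```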
-
Publication No.: US10303980B1
Publication Date: 2019-05-28
Application No.: US16121681
Filing Date: 2018-09-05
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A method for learning parameters of a CNN capable of detecting obstacles in a training image is provided. The method includes steps of: a learning device (a) receiving the training image and instructing convolutional layers to generate encoded feature maps from the training image; (b) instructing deconvolutional layers to generate decoded feature maps; (c) supposing that each cell of a grid with rows and columns is generated by dividing a decoded feature map along the directions of the rows and the columns, concatenating the features of the rows, per column, in the channel direction, to generate a reshaped feature map; (d) calculating losses by referring to the reshaped feature map and its GT image, in which each column is annotated with the GT position of the row where the nearest obstacle is located, checked from the lowest cell of that column upward; and (e) backpropagating the losses.
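A hedged sketch of steps (c) and (d): the rows of each column are concatenated along the channel direction, and each column is then scored over rows to locate the nearest obstacle. Treating the per-column target as a classification with cross-entropy, and the 1x1-convolution head mapping channels to row scores, are assumptions for illustration; the patent does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def reshape_rows_to_channels(decoded_map):
    # decoded_map: (N, C, H, W) -> (N, C*H, 1, W): the H rows of each column
    # are concatenated along the channel direction, giving one cell per column.
    n, c, h, w = decoded_map.shape
    return decoded_map.reshape(n, c * h, 1, w)

class ColumnRowScorer(nn.Module):
    # Assumed head: a 1x1 convolution mapping the C*H concatenated channels
    # of each column to H row scores.
    def __init__(self, c, h):
        super().__init__()
        self.head = nn.Conv2d(c * h, h, kernel_size=1)

    def forward(self, reshaped_map):
        return self.head(reshaped_map)            # (N, H, 1, W)

def nearest_obstacle_loss(row_scores, gt_rows):
    # row_scores: (N, H, 1, W); gt_rows: (N, W) row index of the nearest
    # obstacle per column, checked from the lowest cell of the column upward.
    logits = row_scores.squeeze(2)                # (N, H, W)
    return F.cross_entropy(logits, gt_rows)
```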