-
Publication No.: EP3690716A1
Publication Date: 2020-08-05
Application No.: EP20152471.7
Filing Date: 2020-01-17
Applicant: StradVision, Inc.
Inventors: Kim, Kye-Hyeon, Kim, Yongjoong, Kim, Insu, Kim, Hak-Kyoung, Nam, Woonhyun, Boo, SukHoon, Sung, Myungchul, Yeo, Donghun, Ryu, Wooju, Jang, Taewoong, Jeong, Kyungjoong, Je, Hongmo, Cho, Hojin
Abstract: A method for merging object detection information detected by object detectors, each of which corresponds to one of the cameras located nearby, by using V2X-based auto labeling and evaluation is provided, wherein the object detectors detect objects in each of the images generated from the cameras by image analysis based on deep learning. The method includes steps of: if first to n-th object detection information are respectively acquired from a first to an n-th object detector in a descending order of detection reliabilities, a merging device generating (k-1)-th object merging information by merging (k-2)-th objects and k-th objects through matching operations, and re-projecting the (k-1)-th object merging information onto an image, while increasing k from 3 to n. The method can be used for collaborative driving or HD map updates through V2X-enabled applications, sensor fusion across multiple vehicles, and the like.
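The iterative merging loop can be pictured with a short sketch. The matching operation below (nearest-center matching within a distance threshold) and the object layout (dicts with "center" and "class" keys) are illustrative assumptions rather than the claimed matching operation, and the re-projection onto an image is omitted.
```python
import math

def match_and_merge(merged, new_objects, dist_thresh=2.0):
    """Merge a new detector's objects into the running merged list."""
    result = [dict(m) for m in merged]
    for obj in new_objects:
        best, best_d = None, dist_thresh
        for m in result:
            d = math.dist(obj["center"], m["center"])
            if d < best_d:
                best, best_d = m, d
        if best is None:
            result.append(dict(obj))           # unmatched -> keep as a new merged object
        else:                                  # matched -> average the two centers
            best["center"] = tuple((a + b) / 2 for a, b in zip(best["center"], obj["center"]))
    return result

def merge_all(detections_by_reliability):
    """detections_by_reliability: [objects_1, ..., objects_n], most reliable detector first."""
    merged = match_and_merge(detections_by_reliability[0], detections_by_reliability[1])
    for k in range(2, len(detections_by_reliability)):   # corresponds to k = 3..n in the abstract
        merged = match_and_merge(merged, detections_by_reliability[k])
    return merged

d1 = [{"center": (0.0, 5.0), "class": "car"}]
d2 = [{"center": (0.4, 5.3), "class": "car"}, {"center": (10.0, 2.0), "class": "pedestrian"}]
d3 = [{"center": (10.2, 2.1), "class": "pedestrian"}]
print(merge_all([d1, d2, d3]))                 # two merged objects expected
```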
-
Publication No.: EP3690714A1
Publication Date: 2020-08-05
Application No.: EP20151988.1
Filing Date: 2020-01-15
Applicant: StradVision, Inc.
Inventors: Kim, Kye-Hyeon, Kim, Yongjoong, Kim, Insu, Kim, Hak-Kyoung, Nam, Woonhyun, Boo, SukHoon, Sung, Myungchul, Yeo, Donghun, Ryu, Wooju, Jang, Taewoong, Jeong, Kyungjoong, Je, Hongmo, Cho, Hojin
Abstract: A method for acquiring a sample image for label-inspecting among auto-labeled images used for learning a deep learning network, optimizing sampling processes for manual labeling, and reducing annotation costs is provided. The method includes steps of: a sample image acquiring device generating a first and a second image, instructing convolutional layers to generate a first and a second feature map, instructing pooling layers to generate a first and a second pooled feature map, and generating concatenated feature maps; instructing a deep learning classifier to acquire the concatenated feature maps, to thereby generate class information; and calculating probabilities of abnormal class elements in an abnormal class group, determining whether the auto-labeled image is a difficult image, and selecting the auto-labeled image as the sample image for label-inspecting. Further, the method can be performed by using a robust algorithm with multiple transform pairs. By the method, hazardous situations are detected more accurately.
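A minimal sketch of the final selection step, assuming a classifier that outputs per-class probabilities: the probabilities of the elements of an abnormal class group are summed and the auto-labeled image is flagged as a difficult sample when that mass exceeds a threshold. The class indices and the 0.5 threshold are illustrative, not the patented criterion.
```python
import numpy as np

ABNORMAL_CLASSES = [2, 3]      # hypothetical indices of the abnormal class group

def is_difficult(class_probs, threshold=0.5):
    """Flag an auto-labeled image as difficult if the abnormal-class mass is high."""
    probs = np.asarray(class_probs, dtype=float)
    return probs[ABNORMAL_CLASSES].sum() >= threshold

def select_samples(classifier_outputs, threshold=0.5):
    """Return indices of auto-labeled images to send for manual label-inspecting."""
    return [i for i, p in enumerate(classifier_outputs) if is_difficult(p, threshold)]

outputs = [[0.8, 0.1, 0.05, 0.05],   # confidently normal -> skip
           [0.2, 0.2, 0.35, 0.25]]   # abnormal mass 0.6  -> inspect
print(select_samples(outputs))        # -> [1]
```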
-
Publication No.: EP3686806A1
Publication Date: 2020-07-29
Application No.: EP19207673.5
Filing Date: 2019-11-07
Applicant: Stradvision, Inc.
Inventors: Kim, Kye-Hyeon, Kim, Yongjoong, Kim, Insu, Kim, Hak-Kyoung, Nam, Woonhyun, Boo, SukHoon, Sung, Myungchul, Yeo, Donghun, Ryu, Wooju, Jang, Taewoong, Jeong, Kyungjoong, Je, Hongmo, Cho, Hojin
Abstract: A CNN-based method for meta learning, i.e., learning to learn, by using a learning device including convolutional layers capable of applying convolution operations to an image or its corresponding input feature maps to generate output feature maps, and residual networks capable of feed-forwarding the image or its corresponding input feature maps to the next convolutional layer by bypassing the convolutional layers or their sub-convolutional layers, is provided. The CNN-based method includes steps of: the learning device (a) selecting a specific residual network to be dropped out among the residual networks; (b) feeding the image into a transformed CNN in which the specific residual network is dropped out, and outputting a CNN output; and (c) calculating losses by using the CNN output and its corresponding GT, and adjusting parameters of the transformed CNN. Further, the CNN-based method can also be applied to layer-wise dropout, stochastic ensemble, virtual driving, and the like.
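A small PyTorch sketch of the residual-dropout training step, assuming a toy network: one residual block is selected, its convolutional branch is bypassed for that forward pass, and parameters are adjusted from the loss. The architecture, block count, and cross-entropy loss are illustrative assumptions, not the patented design.
```python
import random
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x, dropped=False):
        return x if dropped else x + self.conv(x)    # dropped -> pure bypass path

class TinyCNN(nn.Module):
    def __init__(self, ch=8, n_blocks=3, n_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, ch, 3, padding=1)
        self.blocks = nn.ModuleList([ResBlock(ch) for _ in range(n_blocks)])
        self.head = nn.Linear(ch, n_classes)
    def forward(self, x, drop_idx=None):
        x = self.stem(x)
        for i, blk in enumerate(self.blocks):
            x = blk(x, dropped=(i == drop_idx))
        return self.head(x.mean(dim=(2, 3)))         # global average pooling

model = TinyCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
images, labels = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))

drop_idx = random.randrange(len(model.blocks))        # (a) select a residual block to drop
logits = model(images, drop_idx=drop_idx)             # (b) forward the transformed CNN
loss = nn.functional.cross_entropy(logits, labels)    # (c) loss w.r.t. the GT labels
opt.zero_grad()
loss.backward()
opt.step()                                            # adjust parameters of the transformed CNN
```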
-
Publication No.: EP3686802A1
Publication Date: 2020-07-29
Application No.: EP20151740.6
Filing Date: 2020-01-14
Applicant: StradVision, Inc.
Inventors: KIM, Kye-Hyeon, KIM, Yongjoong, KIM, Insu, KIM, Hak-Kyoung, NAM, Woonhyun, BOO, SukHoon, SUNG, Myungchul, YEO, Donghun, RYU, Wooju, JANG, Taewoong, JEONG, Kyungjoong, JE, Hongmo, CHO, Hojin
Abstract: A method for generating one or more test patterns and selecting optimized test patterns among the test patterns to verify the integrity of convolution operations is provided for fault tolerance, fluctuation robustness in extreme situations, functional safety of the convolution operations, and annotation cost reduction. The method includes steps of: a computing device (a) instructing a pattern generating unit to generate the test patterns by using a certain function such that saturation does not occur while at least one original CNN applies the convolution operations to the test patterns; (b) instructing a pattern evaluation unit to generate each of evaluation scores of each of the test patterns by referring to said each of the test patterns and one or more parameters of the original CNN; and (c) instructing a pattern selection unit to select the optimized test patterns among the test patterns by referring to the evaluation scores.
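A numpy sketch of the three steps under stated assumptions: the bounded sine-based generator, the output-variance score, and the saturation limit are all illustrative stand-ins for the patented generating function, evaluation score, and hardware bound.
```python
import numpy as np

SAT_LIMIT = 127.0        # hypothetical saturation bound of the target hardware

def generate_patterns(num, size, weights):
    """(a) Bounded patterns: scaled so the worst-case conv output stays below SAT_LIMIT."""
    worst_case_gain = np.abs(weights).sum()
    amplitude = SAT_LIMIT / (worst_case_gain + 1e-9)
    xs = np.linspace(0, np.pi, size * size)
    return [amplitude * np.sin((i + 1) * xs).reshape(size, size) for i in range(num)]

def evaluate(pattern, weights):
    """(b) Score: output variance of a valid 2D correlation with the conv kernel."""
    k = weights.shape[0]
    h, w = pattern.shape
    outs = [np.sum(pattern[r:r + k, c:c + k] * weights)
            for r in range(h - k + 1) for c in range(w - k + 1)]
    return float(np.var(outs))

def select_optimized(patterns, weights, top_k=2):
    """(c) Keep the top-scoring patterns."""
    scores = [evaluate(p, weights) for p in patterns]
    order = np.argsort(scores)[::-1][:top_k]
    return [patterns[i] for i in order]

weights = np.random.randn(3, 3)
patterns = generate_patterns(num=8, size=8, weights=weights)
print(len(select_optimized(patterns, weights)), "patterns selected")
```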
-
Publication No.: EP3686794A1
Publication Date: 2020-07-29
Application No.: EP20151251.4
Filing Date: 2020-01-10
Applicant: StradVision, Inc.
Inventors: KIM, Kye-Hyeon, KIM, Yongjoong, KIM, Insu, KIM, Hak-Kyoung, NAM, Woonhyun, BOO, SukHoon, SUNG, Myungchul, YEO, Donghun, RYU, Wooju, JANG, Taewoong, JEONG, Kyungjoong, JE, Hongmo, CHO, Hojin
Abstract: A method for learning parameters of a CNN using a 1×K convolution operation or a K×1 convolution operation is provided to be used for hardware optimization which satisfies KPI. The method includes steps of: a learning device (a) instructing a reshaping layer to two-dimensionally concatenate features in each group comprised of corresponding K channels of a training image or its processed feature map, to thereby generate a reshaped feature map, and instructing a subsequent convolutional layer to apply the 1×K or the K×1 convolution operation to the reshaped feature map, to thereby generate an adjusted feature map; and (b) instructing an output layer to refer to features on the adjusted feature map or its processed feature map, and instructing a loss layer to calculate losses by referring to an output from the output layer and its corresponding GT.
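A PyTorch sketch of the reshape-then-convolve idea, assuming one plausible layout for the reshaping layer: each group of K channels is laid out along the width axis, and a 1×K convolution with stride K then mixes the K values at every spatial position. The tensor layout is an illustrative assumption.
```python
import torch
import torch.nn as nn

K = 4
N, C, H, W = 2, 16, 8, 8            # C must be a multiple of K
G = C // K                           # number of channel groups

x = torch.randn(N, C, H, W)

# Reshaping layer: (N, G, K, H, W) -> interleave the K channels along the width axis.
reshaped = (x.view(N, G, K, H, W)
             .permute(0, 1, 3, 4, 2)        # (N, G, H, W, K)
             .reshape(N, G, H, W * K))      # (N, G, H, W*K) reshaped feature map

# Subsequent 1xK convolution, stride K along width, restores the original W dimension.
conv_1xK = nn.Conv2d(G, G, kernel_size=(1, K), stride=(1, K))
adjusted = conv_1xK(reshaped)               # adjusted feature map, (N, G, H, W)
print(adjusted.shape)                       # torch.Size([2, 4, 8, 8])
```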
-
Publication No.: EP3686791A1
Publication Date: 2020-07-29
Application No.: EP19219877.8
Filing Date: 2019-12-27
Applicant: StradVision, Inc.
Inventors: KIM, Kye-Hyeon, KIM, Yongjoong, KIM, Insu, KIM, Hak-Kyoung, NAM, Woonhyun, BOO, SukHoon, SUNG, Myungchul, YEO, Donghun, RYU, Wooju, JANG, Taewoong, JEONG, Kyungjoong, JE, Hongmo, CHO, Hojin
Abstract: A method for learning parameters of an object detector based on a CNN adaptable to customers' requirements such as KPI by using image concatenation and a target object merging network is provided. The CNN can be redesigned when scales of objects change as a focal length or a resolution changes depending on the KPI. The method includes steps of: a learning device instructing an image-manipulating network to generate n manipulated images; instructing an RPN to generate first to n-th object proposals respectively in the manipulated images, and instructing an FC layer to generate first to n-th object detection information; and instructing the target object merging network to merge the object proposals and merge the object detection information. In this method, the object proposals can be generated by using lidar. The method can be useful for multi-camera setups, SVM (surround view monitor), and the like, as the accuracy of 2D bounding boxes improves.
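The target object merging step can be sketched as mapping each manipulated image's detections back to the original image and suppressing duplicates. The box format (x1, y1, x2, y2, score), the crop offset/scale description of each manipulated image, and the 0.5 IoU threshold are illustrative assumptions, not the claimed merging network.
```python
def to_original(box, offset, scale):
    """Map a box from a manipulated image back to original-image coordinates."""
    x1, y1, x2, y2, s = box
    ox, oy = offset
    return (x1 / scale + ox, y1 / scale + oy, x2 / scale + ox, y2 / scale + oy, s)

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def merge_detections(per_image_boxes, regions, iou_thresh=0.5):
    """per_image_boxes[i]: boxes on manipulated image i; regions[i]: (offset, scale)."""
    boxes = [to_original(b, *regions[i]) for i, bs in enumerate(per_image_boxes) for b in bs]
    boxes.sort(key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:                          # greedy IoU-based suppression of duplicates
        if all(iou(b, k) < iou_thresh for k in kept):
            kept.append(b)
    return kept

dets = [[(10, 10, 50, 50, 0.9)], [(22, 22, 102, 102, 0.8)]]
regions = [((0, 0), 1.0), ((0, 0), 2.0)]     # second image is a 2x-zoomed crop at (0, 0)
print(merge_detections(dets, regions))       # the duplicate car box is merged away
```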
-
Publication No.: EP3686790A1
Publication Date: 2020-07-29
Application No.: EP19219455.3
Filing Date: 2019-12-23
Applicant: StradVision, Inc.
Inventors: KIM, Kye-Hyeon, KIM, Yongjoong, KIM, Insu, KIM, Hak-Kyoung, NAM, Woonhyun, BOO, SukHoon, SUNG, Myungchul, YEO, Donghun, RYU, Wooju, JANG, Taewoong, JEONG, Kyungjoong, JE, Hongmo, CHO, Hojin
Abstract: A method for learning parameters of a CNN for image recognition is provided to be used for hardware optimization which satisfies KPI. The method includes steps of: a learning device (1) instructing a first transposing layer or a pooling layer to generate an integrated feature map by concatenating each of pixels, per each of ROIs, in corresponding locations on pooled ROI feature maps; and (2) (i) instructing a second transposing layer or a classifying layer to divide an adjusted feature map, whose volume is adjusted from the integrated feature map, by each of the pixels, and instructing the classifying layer to generate object information on the ROIs, and (ii) backpropagating object losses. The size of a chip can be decreased as convolution operations and fully connected layer operations are performed by the same processor. Accordingly, there is no need to build additional lines in a semiconductor manufacturing process.
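A PyTorch sketch of why the fully connected classification can run on the convolution processor, under illustrative sizes: pooled ROI features are transposed into an integrated feature map whose ROIs lie along one spatial axis, so a 1×1 convolution acts as the per-ROI fully connected classifier.
```python
import torch
import torch.nn as nn

R, C, P = 5, 16, 7                     # ROIs, channels, pooled size (illustrative)
pooled = torch.randn(R, C, P, P)        # pooled ROI feature maps

# First transposing step: stack each ROI's features into one column of an
# integrated feature map of shape (1, C*P*P, R, 1).
integrated = pooled.reshape(R, C * P * P).t().reshape(1, C * P * P, R, 1)

# Classifying layer as a 1x1 convolution, i.e. an FC layer applied per ROI.
num_classes = 3
classifier = nn.Conv2d(C * P * P, num_classes, kernel_size=1)
scores = classifier(integrated)         # (1, num_classes, R, 1)

# Second transposing step: split back into per-ROI class scores.
per_roi_scores = scores.reshape(num_classes, R).t()
print(per_roi_scores.shape)             # torch.Size([5, 3])
```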
-
Publication No.: EP3686784A1
Publication Date: 2020-07-29
Application No.: EP19220210.9
Filing Date: 2019-12-31
Applicant: StradVision, Inc.
Inventors: Kim, Kye-Hyeon, Kim, Yongjoong, Kim, Insu, Kim, Hak-Kyoung, Nam, Woonhyun, Boo, SukHoon, Sung, Myungchul, Yeo, Donghun, Ryu, Wooju, Jang, Taewoong, Jeong, Kyungjoong, Je, Hongmo, Cho, Hojin
Abstract: A method of neural network operations by using a grid generator is provided for converting modes according to classes of areas, to satisfy level 4 of autonomous driving. The method includes steps of: (a) a computing device, if a test image is acquired, instructing a non-object detector to acquire non-object location information for testing and class information of the non-objects for testing by detecting the non-objects for testing on the test image; (b) the computing device instructing the grid generator to generate section information by referring to the non-object location information for testing; (c) the computing device instructing a neural network to determine parameters for testing; and (d) the computing device instructing the neural network to apply the neural network operations to the test image by using each of the parameters for testing, to thereby generate one or more neural network outputs.
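A small sketch of steps (b) and (c) under simplifying assumptions: a fixed 2×2 grid stands in for the grid generator's section information, and a class-to-parameter lookup table stands in for the learned per-class parameters.
```python
PARAMS_BY_CLASS = {"road": "params_road", "sky": "params_sky", "default": "params_default"}

def generate_sections(image_w, image_h, rows=2, cols=2):
    """Grid generator stand-in: (x0, y0, x1, y1) sections covering the test image."""
    sw, sh = image_w // cols, image_h // rows
    return [(c * sw, r * sh, (c + 1) * sw, (r + 1) * sh)
            for r in range(rows) for c in range(cols)]

def parameters_for_sections(sections, non_objects):
    """non_objects: list of (x, y, class_name) detections of non-objects for testing."""
    chosen = []
    for x0, y0, x1, y1 in sections:
        classes = [c for (x, y, c) in non_objects if x0 <= x < x1 and y0 <= y < y1]
        key = classes[0] if classes else "default"
        chosen.append(PARAMS_BY_CLASS.get(key, PARAMS_BY_CLASS["default"]))
    return chosen

sections = generate_sections(640, 480)
non_objects = [(100, 400, "road"), (500, 50, "sky")]
print(list(zip(sections, parameters_for_sections(sections, non_objects))))
```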
-
Publication No.: EP3686783A1
Publication Date: 2020-07-29
Application No.: EP19220179.6
Filing Date: 2019-12-31
Applicant: StradVision, Inc.
Inventors: Kim, Kye-Hyeon, Kim, Yongjoong, Kim, Insu, Kim, Hak-Kyoung, Nam, Woonhyun, Boo, SukHoon, Sung, Myungchul, Yeo, Donghun, Ryu, Wooju, Jang, Taewoong, Jeong, Kyungjoong, Je, Hongmo, Cho, Hojin
Abstract: A method of neural network operations by using a grid generator is provided for converting modes according to classes of areas, to satisfy level 4 of autonomous driving. The method includes steps of: (a) a computing device instructing a pair detector to acquire information on locations and classes of pairs for testing by detecting the pairs for testing; (b) the computing device instructing the grid generator to generate section information by referring to the information on the locations of the pairs for testing; (c) the computing device instructing a neural network to determine parameters for testing by referring to parameters for training which have been learned by using information on pairs for training; and (d) the computing device instructing the neural network to apply the neural network operations to a test image by using each of the parameters for testing, to thereby generate one or more neural network outputs.
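For this pair-based variant, step (c) can be sketched as a lookup from detected pair classes to parameters learned on training pairs; the pair representation (a tuple of two class names) and the parameter table are hypothetical.
```python
TRAINED_PARAMS = {                      # hypothetical parameters learned per pair class
    ("car", "road"): "params_car_on_road",
    ("pedestrian", "sidewalk"): "params_ped_on_sidewalk",
}
DEFAULT_PARAMS = "params_default"

def parameters_for_testing(pairs_per_section):
    """pairs_per_section: for each grid section, the list of detected class pairs."""
    chosen = []
    for pairs in pairs_per_section:
        match = next((TRAINED_PARAMS[p] for p in pairs if p in TRAINED_PARAMS), DEFAULT_PARAMS)
        chosen.append(match)
    return chosen

print(parameters_for_testing([[("car", "road")], [], [("pedestrian", "sidewalk")]]))
# -> ['params_car_on_road', 'params_default', 'params_ped_on_sidewalk']
```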
-
Publication No.: EP3686781A1
Publication Date: 2020-07-29
Application No.: EP19219842.2
Filing Date: 2019-12-27
Applicant: StradVision, Inc.
Inventors: KIM, Kye-Hyeon, KIM, Yongjoong, KIM, Insu, KIM, Hak-Kyoung, NAM, Woonhyun, BOO, SukHoon, SUNG, Myungchul, YEO, Donghun, RYU, Wooju, JANG, Taewoong, JEONG, Kyungjoong, JE, Hongmo, CHO, Hojin
Abstract: A method for learning parameters of an object detector with hardware optimization, based on a CNN for detection at a distance or for military purposes, using image concatenation is provided. The CNN can be redesigned when scales of objects change as a focal length or a resolution changes depending on the KPI. The method includes steps of: (a) concatenating n manipulated images which correspond to n target regions; (b) instructing an RPN to generate first to n-th object proposals in the n manipulated images by using an integrated feature map, and instructing a pooling layer to apply pooling operations to regions, corresponding to the first to the n-th object proposals, on the integrated feature map; and (c) instructing an FC loss layer to generate first to n-th FC losses by referring to the object detection information output from an FC layer.
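A PyTorch sketch of steps (a) and (b) under illustrative assumptions: n target-region crops are concatenated into one input, a tiny convolution stands in for the backbone producing the integrated feature map, and fixed boxes stand in for the RPN's proposals before pooling.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n, C, H, W = 3, 3, 64, 64
manipulated = [torch.randn(1, C, H, W) for _ in range(n)]     # n target-region crops
concatenated = torch.cat(manipulated, dim=3)                  # (a) concatenate along width

backbone = nn.Conv2d(C, 8, kernel_size=3, stride=2, padding=1)
feature_map = backbone(concatenated)                          # integrated feature map

# Proposals in feature-map coordinates: (x1, y1, x2, y2); a real pipeline takes these from the RPN.
proposals = [(2, 2, 20, 20), (40, 5, 60, 25), (70, 10, 90, 30)]
pooled = [F.adaptive_max_pool2d(feature_map[:, :, y1:y2, x1:x2], output_size=7)
          for (x1, y1, x2, y2) in proposals]                  # (b) pool each proposal region
print(torch.cat(pooled).shape)                                # torch.Size([3, 8, 7, 7])
```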