-
51.
Publication No.: EP3637309A1
Publication Date: 2020-04-15
Application No.: EP19195511.1
Filing Date: 2019-09-05
Applicant: Stradvision, Inc.
Inventors: KIM, Kye-Hyeon , KIM, Yongjoong , KIM, Insu , KIM, Hak-Kyoung , NAM, Woonhyun , BOO, SukHoon , SUNG, Myungchul , YEO, Donghun , RYU, Wooju , JANG, Taewoong , JEONG, Kyungjoong , JE, Hongmo , CHO, Hojin
Abstract: A learning method of a CNN (Convolutional Neural Network) for monitoring one or more blind spots of a monitoring vehicle is provided. The learning method includes steps of: a learning device, if training data corresponding to output from a detector on the monitoring vehicle is inputted, instructing a cue information extracting layer to use class information and location information on a monitored vehicle included in the training data, thereby outputting cue information on the monitored vehicle; instructing an FC layer for monitoring the blind spots to perform neural network operations by using the cue information, thereby outputting a result of determining whether the monitored vehicle is located in one of the blind spots; and instructing a loss layer to generate loss values by referring to the result and its corresponding GT, thereby learning parameters of the FC layer for monitoring the blind spots by backpropagating the loss values.
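The abstract's pipeline — cue features in, an FC layer scoring "in blind spot or not", a loss layer driving backpropagation — can be sketched as a tiny logistic model trained with gradient descent. This is an illustrative sketch only: the cue-vector layout, learning rate, and all function names are assumptions, not the patent's actual design.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_fc(cues, gts, lr=0.1, epochs=200):
    """Train a one-unit FC layer on cue vectors.

    cues: list of cue feature vectors (hypothetical layout);
    gts:  1 if the monitored vehicle is in a blind spot, else 0.
    """
    w = [0.0] * len(cues[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(cues, gts):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of binary cross-entropy w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Toy cue vectors (assumed): [lateral offset, distance, class score, width]
cues = [[0.9, 0.2, 1.0, 0.5], [0.1, 0.9, 1.0, 0.4]]
gts = [1, 0]
w, b = train_fc(cues, gts)
pred = sigmoid(sum(wi * xi for wi, xi in zip(w, cues[0])) + b)
```

The loss-layer/backpropagation step of the abstract corresponds to the `g = p - y` update; a real implementation would backpropagate through a multi-layer FC network rather than a single unit.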
-
52.
Publication No.: EP3467711A8
Publication Date: 2019-05-29
Application No.: EP18192815.1
Filing Date: 2018-09-05
Applicant: StradVision, Inc.
Inventors: KIM, Yongjoong , NAM, Woonhyun , BOO, Sukhoon , SUNG, Myungchul , YEO, Donghun , RYU, Wooju , JANG, Taewoong , JEONG, Kyungjoong , JE, Hongmo , CHO, Hojin
Abstract: A learning method for improving image segmentation is provided, including steps of: (a) acquiring (1-1)-th to (1-K)-th feature maps through an encoding layer if a training image is obtained; (b) acquiring (3-1)-th to (3-H)-th feature maps by respectively inputting each output of the H encoding filters to (3-1)-th to (3-H)-th filters; (c) performing a process of sequentially acquiring (2-K)-th to (2-1)-th feature maps either by (i) allowing the respective H decoding filters to respectively use both the (3-1)-th to the (3-H)-th feature maps and feature maps obtained from respective previous decoding filters of the respective H decoding filters, or by (ii) allowing the respective K-H decoding filters that are not associated with the (3-1)-th to the (3-H)-th filters to use feature maps gained from respective previous decoding filters of the respective K-H decoding filters; and (d) adjusting parameters of the CNN.
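The routing in step (c) — H decoding filters take both a lateral (3-i)-th map and the previous decoder output, the remaining K-H take only the previous output — can be made concrete with a small plan builder. All names are assumptions, and the sketch assumes the lateral filters attach to the shallowest H decoding stages, which the abstract does not specify.

```python
def decode_plan(K, H):
    """Return, per decoding step, which inputs each decoding filter uses.

    Decoding runs from the (2-K)-th map back to the (2-1)-th, seeded by
    the deepest encoder output (a hypothetical reading of the abstract).
    """
    plan = []
    prev = f"enc(1-{K})"  # decoding starts from the deepest encoder output
    for i in range(K, 0, -1):  # produce (2-K) ... (2-1)
        if i <= H:
            # filter associated with a lateral (3-i)-th filter: two inputs
            inputs = [f"lat(3-{i})", prev]
        else:
            # unassociated filter: previous decoder output only
            inputs = [prev]
        out = f"dec(2-{i})"
        plan.append((out, inputs))
        prev = out
    return plan

plan = decode_plan(K=5, H=2)
```

For K=5, H=2 this yields three decoder steps fed only by their predecessor and two steps that also consume a lateral map, mirroring cases (ii) and (i) of the abstract.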
-
53.
Publication No.: EP3467721A1
Publication Date: 2019-04-10
Application No.: EP18192819.3
Filing Date: 2018-09-05
Applicant: StradVision, Inc.
Inventors: KIM, Yongjoong , NAM, Woonhyun , BOO, Sukhoon , SUNG, Myungchul , YEO, Donghun , RYU, Wooju , JANG, Taewoong , JEONG, Kyungjoong , JE, Hongmo , CHO, Hojin
Abstract: A method for generating feature maps by using a device adopting a CNN including feature up-sampling networks (FPN) is disclosed. The method comprises steps of: (a) allowing, if an input image is obtained, a down-sampling block to acquire a down-sampling image by applying a predetermined operation to the input image; (b) allowing, if the down-sampling image is obtained, each of (1-1)-th to (1-k)-th filter blocks to acquire each of (1-1)-th to (1-k)-th feature maps by applying one or more convolution operations to the down-sampling image; and (c) allowing each of the up-sampling blocks to receive a feature map from its corresponding filter block, to receive a feature map from its previous up-sampling block, to rescale one feature map to be identical in size with the other, and to apply a certain operation to both feature maps, thereby generating (2-k)-th to (2-1)-th feature maps.
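Step (c)'s rescale-and-combine can be sketched with plain nested lists: the coarser map is up-sampled (nearest neighbor here) until it matches the finer map's size, then the two are combined element-wise. Names, the 2x factor, and the use of addition as the "certain operation" are illustrative assumptions; real implementations use tensors.

```python
def upsample2x(m):
    """Nearest-neighbor 2x up-sampling of a 2-D list."""
    out = []
    for row in m:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))  # duplicate each row
    return out

def merge(fine, coarse):
    """Rescale `coarse` to `fine`'s size, then add element-wise."""
    up = coarse
    while len(up) < len(fine):
        up = upsample2x(up)
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(fine, up)]

# Toy maps: a 4x4 "filter block" map and a 2x2 map from the previous
# up-sampling block (values chosen only to make the addition visible).
fine = [[1] * 4 for _ in range(4)]
coarse = [[2, 2], [2, 2]]
merged = merge(fine, coarse)
```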
-
54.
Publication No.: EP3467698A1
Publication Date: 2019-04-10
Application No.: EP18192482.0
Filing Date: 2018-09-04
Applicant: StradVision, Inc.
Inventors: KIM, Yongjoong , NAM, Woonhyun , BOO, Sukhoon , SUNG, Myungchul , YEO, Donghun , RYU, Wooju , JANG, Taewoong , JEONG, Kyungjoong , JE, Hongmo , CHO, Hojin
IPC Classification: G06K20060101
Abstract: A method of monitoring a blind spot of a monitoring vehicle by using a blind spot monitor is provided. The method includes steps of: the blind spot monitor (a) acquiring a feature map from rear video images, on condition that video images with reference vehicles in the blind spot are acquired, reference boxes for the reference vehicles are created, and the reference boxes are set as proposal boxes; (b) acquiring feature vectors for the proposal boxes on the feature map by pooling, inputting the feature vectors into a fully connected layer, and acquiring classification and regression information; and (c) selecting proposal boxes by referring to the classification information, acquiring bounding boxes for the proposal boxes by using the regression information, determining the pose of the monitored vehicle corresponding to each of the bounding boxes, and determining whether a haphazard vehicle is located in the blind spot of the monitoring vehicle.
-
55.
Publication No.: EP3690797A3
Publication Date: 2020-08-12
Application No.: EP20153297.5
Filing Date: 2020-01-23
Applicant: Stradvision, Inc.
Inventors: KIM, Kye-Hyeon , KIM, Yongjoong , KIM, Hak-Kyoung , NAM, Woonhyun , BOO, SukHoon , SUNG, Myungchul , SHIN, Dongsoo , YEO, Donghun , RYU, Wooju , LEE, Myeong-Chun , LEE, Hyungsoo , JANG, Taewoong , JEONG, Kyungjoong , JE, Hongmo , CHO, Hojin
Abstract: A method for learning an automatic labeling device for auto-labeling a base image of a base vehicle using sub-images of nearby vehicles is provided. The method includes steps of: a learning device inputting the base image and the sub-images into previously trained dense correspondence networks to generate dense correspondences, and into encoders to output convolution feature maps; inputting the convolution feature maps into decoders to output deconvolution feature maps; with an integer k from 1 to n, generating a k-th adjusted deconvolution feature map by translating coordinates of a (k+1)-th deconvolution feature map using a k-th dense correspondence; generating a concatenated feature map by concatenating the 1-st deconvolution feature map and the adjusted deconvolution feature maps; and inputting the concatenated feature map into a masking layer to output a semantic segmentation image, instructing a 1-st loss layer to calculate 1-st losses, and updating decoder weights and encoder weights.
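The coordinate-translation step — moving a sub-image's deconvolution features into the base image's frame using a dense correspondence — can be sketched as applying a per-location offset before concatenation. The sparse-dict representation and all names are assumptions for illustration; the patent operates on dense feature maps.

```python
def translate(feature, offsets, h, w):
    """Move feature values to their corresponded coordinates.

    feature: {(y, x): value} entries of a (k+1)-th deconvolution map;
    offsets: {(y, x): (dy, dx)} from a k-th dense correspondence.
    """
    adjusted = {}
    for (y, x), v in feature.items():
        dy, dx = offsets.get((y, x), (0, 0))
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:  # drop points leaving the map
            adjusted[(ny, nx)] = v
    return adjusted

# Toy 2x2 map: two feature values and their correspondence offsets.
feat = {(0, 0): 1.0, (1, 1): 2.0}
offs = {(0, 0): (1, 0), (1, 1): (-1, 0)}
adj = translate(feat, offs, h=2, w=2)
```

After translation, `adj` is in the base image's coordinate frame and could be concatenated with the 1-st deconvolution feature map as the abstract describes.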
-
56.
Publication No.: EP3690859A1
Publication Date: 2020-08-05
Application No.: EP20153072.2
Filing Date: 2020-01-22
Applicant: Stradvision, Inc.
Inventors: KIM, Kye-Hyeon , KIM, Yongjoong , KIM, Hak-Kyoung , NAM, Woonhyun , BOO, SukHoon , SUNG, Myungchul , SHIN, Dongsoo , YEO, Donghun , RYU, Wooju , LEE, Myeong-Chun , LEE, Hyungsoo , JANG, Taewoong , JEONG, Kyungjoong , JE, Hongmo , CHO, Hojin
Abstract: A method for monitoring blind spots of a cycle using a smart helmet for a rider is provided. The method includes steps of: a blind-spot monitoring device, (a) if a video image of 1-st blind spots corresponding to the smart helmet is acquired, instructing an object detector to detect objects in the video image and confirming 1-st objects in the 1-st blind spots; and (b) determining a smart helmet orientation and a cycle traveling direction by referring to sensor information from at least part of a GPS sensor, an acceleration sensor, and a geomagnetic sensor on the smart helmet, confirming 2-nd objects, among the 1-st objects, in 2-nd blind spots corresponding to the cycle by referring to the smart helmet orientation and the cycle traveling direction, and displaying the 2-nd objects via a HUD or sounding an alarm that the 2-nd objects are in the 2-nd blind spots via a speaker.
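The frame conversion in step (b) — relating an object seen from the helmet's viewpoint to the cycle's blind spots via the helmet orientation and travel direction — is a bearing calculation. A minimal sketch, assuming degrees with 0 at the travel direction, clockwise positive, and purely illustrative blind-spot cone positions and widths:

```python
def to_cycle_frame(object_bearing_helmet, helmet_orientation, cycle_direction):
    """Bearing of an object relative to the cycle's travel direction."""
    absolute = (helmet_orientation + object_bearing_helmet) % 360.0
    return (absolute - cycle_direction) % 360.0

def in_cycle_blind_spot(rel_bearing, spot_half_width=30.0):
    """Assume the cycle's blind spots are cones around 135 deg (right-rear)
    and 225 deg (left-rear); both values are hypothetical."""
    return any(abs((rel_bearing - c + 180.0) % 360.0 - 180.0) <= spot_half_width
               for c in (135.0, 225.0))

# Rider looks 90 deg right of the cycle's heading; object dead ahead of
# the helmet, so it sits 90 deg off the cycle's travel direction.
rel = to_cycle_frame(0.0, helmet_orientation=90.0, cycle_direction=0.0)
```

With these assumed cones, an object the rider sees straight ahead while glancing 90 degrees sideways is not in a cycle blind spot, but the same object seen while glancing 135 degrees back-right is.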
-
57.
Publication No.: EP3690845A1
Publication Date: 2020-08-05
Application No.: EP20153042.5
Filing Date: 2020-01-22
Applicant: StradVision, Inc.
Inventors: KIM, Kye-Hyeon , KIM, Yongjoong , KIM, Hak-Kyoung , NAM, Woonhyun , Boo, SukHoon , SUNG, Myungchul , SHIN, Dongsoo , YEO, Donghun , RYU, Wooju , LEE, Myeong-Chun , LEE, Hyungsoo , JANG, Taewoong , JEONG, Kyungjoong , JE, Hongmo , CHO, Hojin
IPC Classification: G08G1/01 , G08G1/0967
Abstract: A method for providing an autonomous driving service platform for autonomous vehicles by using competitive computing and information fusion is provided. The method includes steps of: (a) a service server acquiring individual sensor data and individual driving data through sensors installed on at least part of the autonomous vehicles including a subject vehicle; (b) the service server performing (i) a process of acquiring autonomous driving source information for the subject vehicle by inputting specific sensor data of specific autonomous vehicles among the autonomous vehicles and subject sensor data of the subject vehicle to data processing servers and (ii) a process of acquiring circumstance-specific performance information on the data processing servers from a circumstance-specific performance DB; and (c) the service server transmitting the autonomous driving source information and the circumstance-specific performance information to the subject vehicle, to thereby instruct the subject vehicle to perform the autonomous driving.
-
58.
Publication No.: EP3690797A2
Publication Date: 2020-08-05
Application No.: EP20153297.5
Filing Date: 2020-01-23
Applicant: Stradvision, Inc.
Inventors: KIM, Kye-Hyeon , KIM, Yongjoong , KIM, Hak-Kyoung , NAM, Woonhyun , BOO, SukHoon , SUNG, Myungchul , SHIN, Dongsoo , YEO, Donghun , RYU, Wooju , LEE, Myeong-Chun , LEE, Hyungsoo , JANG, Taewoong , JEONG, Kyungjoong , JE, Hongmo , CHO, Hojin
Abstract: A method for learning an automatic labeling device for auto-labeling a base image of a base vehicle using sub-images of nearby vehicles is provided. The method includes steps of: a learning device inputting the base image and the sub-images into previously trained dense correspondence networks to generate dense correspondences, and into encoders to output convolution feature maps; inputting the convolution feature maps into decoders to output deconvolution feature maps; with an integer k from 1 to n, generating a k-th adjusted deconvolution feature map by translating coordinates of a (k+1)-th deconvolution feature map using a k-th dense correspondence; generating a concatenated feature map by concatenating the 1-st deconvolution feature map and the adjusted deconvolution feature maps; and inputting the concatenated feature map into a masking layer to output a semantic segmentation image, instructing a 1-st loss layer to calculate 1-st losses, and updating decoder weights and encoder weights.
-
59.
Publication No.: EP3690795A1
Publication Date: 2020-08-05
Application No.: EP20153261.1
Filing Date: 2020-01-23
Applicant: StradVision, Inc.
Inventors: KIM, Kye-Hyeon , KIM, Yongjoong , KIM, Hak-Kyoung , NAM, Woonhyun , BOO, SukHoon , SUNG, Myungchul , SHIN, Dongsoo , YEO, Donghun , RYU, Wooju , LEE, Myeong-Chun , LEE, Hyungsoo , JANG, Taewoong , JEONG, Kyungjoong , JE, Hongmo , CHO, Hojin
IPC Classification: G06Q50/30
Abstract: A learning method for detecting driving events occurring during driving, to thereby detect driving scenarios including at least part of the driving events, is provided. The method includes steps of: (a) a learning device, if a specific enumerated event vector, including each piece of information on each of specific driving events as its specific components in a specific order, is acquired, instructing an RNN to apply RNN operations to the specific components of the specific enumerated event vector, in the specific order, to thereby detect a specific predicted driving scenario including the specific driving events; and (b) the learning device instructing a loss module to generate an RNN loss by referring to the specific predicted driving scenario and a specific GT driving scenario, which has been acquired beforehand, and to perform a BPTT by using the RNN loss, to thereby learn at least part of the parameters of the RNN.
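Step (a)'s order-sensitive scan over the enumerated event vector can be illustrated with a minimal vanilla-RNN cell; the final hidden state would then score a predicted driving scenario. Weights, the scalar event encoding, and all names here are illustrative assumptions, not the patent's actual network.

```python
import math

def rnn_scan(events, w_in=0.5, w_rec=0.8, bias=0.0):
    """Apply a tanh RNN cell to each scalar event component in order.

    Returns the hidden state after every step; the last state summarizes
    the whole event sequence, order included.
    """
    h, states = 0.0, []
    for e in events:
        h = math.tanh(w_in * e + w_rec * h + bias)
        states.append(h)
    return states

# Toy enumerated event vector: 1.0 = "event occurred", 0.0 = "no event".
states = rnn_scan([1.0, 0.0, 1.0])
```

Because the recurrence feeds each state into the next, reordering the same events yields a different final state, which is exactly what makes an RNN suitable for scenario detection over ordered driving events (the BPTT in step (b) would then train the weights).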
-
60.
Publication No.: EP3690754A1
Publication Date: 2020-08-05
Application No.: EP20152231.5
Filing Date: 2020-01-16
Applicant: StradVision, Inc.
Inventors: KIM, Kye-Hyeon , KIM, Yongjoong , KIM, Hak-Kyoung , NAM, Woonhyun , BOO, SukHoon , SUNG, Myungchul , SHIN, Dongsoo , YEO, Donghun , RYU, Wooju , LEE, Myeong-Chun , LEE, Hyungsoo , JANG, Taewoong , JEONG, Kyungjoong , JE, Hongmo , CHO, Hojin
Abstract: A method for creating a traffic scenario in a virtual driving environment is provided. The method includes steps of: a traffic scenario-generating device, (a) on condition that driving data have been acquired which are created using previous traffic data corresponding to discrete traffic data extracted by a vision-based ADAS from a past driving video and detailed traffic data corresponding to sequential traffic data from sensors of data-collecting vehicles in a real driving environment, inputting the driving data into a scene analyzer to extract driving environment information and into a vehicle information extractor to extract vehicle status information on an ego vehicle, and generating sequential traffic logs according to a driving sequence; and (b) inputting the sequential traffic logs into a scenario augmentation network to augment the sequential traffic logs using critical events and generate the traffic scenario, verifying the traffic scenario, and mapping the traffic scenario onto a traffic simulator.
-