-
Publication Number: US20210219839A1
Publication Date: 2021-07-22
Application Number: US17059868
Filing Date: 2019-04-10
Applicant: VUNO, INC.
Inventor: Sang Keun KIM , Hyun-Jun KIM , Kyuhwan JUNG , Jae Min SON
Abstract: The present invention relates to a method for classifying a fundus image and a device using same. Specifically, according to the method of the present invention, a computing device acquires a fundus image of a subject, generates classification information of the fundus image, generates an interpretation text on the basis of the classification information, and provides the interpretation text to an external entity.
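The claimed pipeline (classify a fundus image, then generate an interpretation text from the classification information) can be sketched as follows. This is a hypothetical illustration, not the patented implementation; the finding names, threshold, and text templates are assumptions.

```python
# Hypothetical sketch: a classifier yields per-finding probabilities for a
# fundus image, and an interpretation text is rendered from that
# classification information for an external entity (e.g. a clinician).

def interpretation_text(scores, threshold=0.5):
    """scores: mapping of finding name -> probability from the classifier."""
    findings = [name for name, p in sorted(scores.items()) if p >= threshold]
    if not findings:
        return "No abnormal findings were detected in the fundus image."
    return "The fundus image shows findings suggestive of: " + ", ".join(findings) + "."

# Example classification information for one fundus image:
scores = {"hemorrhage": 0.91, "drusen": 0.12}
print(interpretation_text(scores))  # -> The fundus image shows findings suggestive of: hemorrhage.
```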
-
Publication Number: US20210407081A1
Publication Date: 2021-12-30
Application Number: US17360897
Filing Date: 2021-06-28
Applicant: VUNO Inc.
Inventor: Byeonguk BAE , Kyuhwan JUNG
Abstract: According to an embodiment of the present disclosure, a method, performed by a computing device, of assessing bone age using a neural network is disclosed. The method includes receiving an analysis image that is a target of bone age assessment, and assessing the bone age of the target by inputting the analysis image into a bone age analysis model comprising one or more neural networks. The bone age analysis model, which is trained by supervised learning based on an attention guide label, includes at least one attention module for intensively analyzing a main region of the analysis image.
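The attention-guide idea above can be sketched minimally: an attention map computed from image features is supervised against a guide mask marking the main region, alongside the bone-age regression loss. The module shape, loss form, and weighting are assumptions for illustration, not the patented model.

```python
# Minimal sketch (assumed, not the patented implementation): an attention
# module whose attention map is trained toward a guide mask of the main
# region, in addition to the bone-age regression objective.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_forward(features, w):
    """features: (H, W, C) feature map; w: (C,) weights -> attention map (H, W) in (0, 1)."""
    return sigmoid(features @ w)

def guided_loss(attn_map, guide_mask, pred_age, true_age, lam=0.5):
    """Bone-age regression loss plus an attention-guide term (MSE to the guide mask)."""
    age_loss = (pred_age - true_age) ** 2
    guide_loss = np.mean((attn_map - guide_mask) ** 2)
    return age_loss + lam * guide_loss
```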
-
Publication Number: US20210374948A1
Publication Date: 2021-12-02
Application Number: US16963408
Filing Date: 2019-01-18
Applicant: VUNO, INC.
Inventor: Kyuhwan JUNG
Abstract: The present invention relates to a method for reconstructing an image and an apparatus using the same. Particularly, according to the method of the present invention, when a series of first slice images of a subject is input into a computing device, the computing device generates, from the first slice images, second slice images having a second slice thickness different from the first slice thickness of the first slice images, and provides the generated second slice images.
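The input/output relationship described above (first slices of one thickness in, second slices of another thickness out) can be illustrated with a naive averaging scheme. The patent's method is learning-based; this sketch only shows the thickness conversion, and the factor-of-k averaging is an assumption.

```python
# Naive sketch of slice-thickness conversion: each group of `factor`
# consecutive thin slices is averaged into one thicker slice. This
# illustrates the data flow only, not the patented reconstruction model.
import numpy as np

def rebin_slices(volume, factor):
    """volume: (N, H, W) thin slices -> (N // factor, H, W) thick slices
    formed by averaging each group of `factor` consecutive slices."""
    n = (volume.shape[0] // factor) * factor
    return volume[:n].reshape(-1, factor, *volume.shape[1:]).mean(axis=1)

thin = np.arange(8, dtype=float).reshape(8, 1, 1)  # 8 one-pixel slices: 0..7
thick = rebin_slices(thin, 2)
print(thick.ravel())  # -> [0.5 2.5 4.5 6.5]
```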
-
Publication Number: US20200288972A1
Publication Date: 2020-09-17
Application Number: US16759594
Filing Date: 2018-07-18
Applicant: VUNO, INC. , SEOUL NATIONAL UNIVERSITY HOSPITAL
Inventor: Sang Jun PARK , Joo Young SHIN , Jae Min SON , Sang Keun KIM , Kyuhwan JUNG , Hyun-Jun KIM
IPC: A61B3/12 , A61B3/14 , G06K9/46 , G06T7/11 , G06T7/70 , G06T7/00 , G16H50/20 , G16H50/50 , G16H30/40
Abstract: The present invention relates to a method for supporting reading of a fundus image of a subject, and a computing device using the same. Specifically, the computing device according to the present invention acquires the fundus image of the subject, extracts attribute information from the fundus image on the basis of a machine learning model for extracting the attribute information of the fundus image, and provides the extracted attribute information to an external entity. In addition, when evaluation information on the extracted attribute information or modification information on the attribute information is acquired, the computing device according to the present invention can also update the machine learning model on the basis of the acquired evaluation information or modification information.
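The feedback loop described above (extract attributes, collect evaluation or modification information, update the model) can be sketched as follows. The update mechanism shown (buffering corrections for later fine-tuning) is an assumption; the class and callable names are hypothetical.

```python
# Hypothetical sketch: attributes are extracted with a model, and reader
# corrections are folded back in as new training examples via a caller-
# supplied fine-tuning step.

class AttributeReader:
    def __init__(self, model):
        self.model = model          # callable: fundus image -> attributes
        self.feedback = []          # (image, corrected_attributes) pairs

    def read(self, fundus_image):
        """Extract attribute information from a fundus image."""
        return self.model(fundus_image)

    def record_correction(self, fundus_image, corrected):
        """Store modification information provided by an external entity."""
        self.feedback.append((fundus_image, corrected))

    def update(self, fine_tune):
        """fine_tune: callable(model, examples) -> updated model."""
        if self.feedback:
            self.model = fine_tune(self.model, self.feedback)
            self.feedback.clear()
```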
-
Publication Number: US20210398280A1
Publication Date: 2021-12-23
Application Number: US17354861
Filing Date: 2021-06-22
Applicant: VUNO Inc.
Inventor: Byeonguk BAE , Kyuhwan JUNG
Abstract: According to an embodiment of the present disclosure, a computer program stored in a computer-readable storage medium is disclosed. The computer program includes instructions for causing one or more processors to estimate bone age from a bone image. The instructions include: estimating a RUS score for each of one or more partial bone images, generated from a whole bone image, using a partial bone RUS score estimation model comprising one or more layers; and estimating the bone age corresponding to the whole bone image using the one or more RUS scores estimated for the partial bone images. The partial bone RUS score estimation model is trained using labeled partial bone images as training data and by adjusting feature values calculated from the one or more layers.
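The two-stage structure above (per-bone RUS scores, then a whole-image bone age) can be sketched in the style of TW3-like RUS scoring: partial scores are summed and the total is mapped to an age. The score-to-age table below is illustrative, not taken from the patent or any clinical standard.

```python
# Sketch of the assumed two-stage estimate: sum the RUS scores estimated
# for each partial bone image, then map the total to a bone age via a
# monotone lookup table (thresholds here are illustrative only).

def total_rus(partial_scores):
    """Sum the RUS scores estimated for the partial bone images."""
    return sum(partial_scores)

def rus_to_bone_age(total, table):
    """table: ascending (min_total_score, bone_age_years) thresholds."""
    age = table[0][1]
    for threshold, years in table:
        if total >= threshold:
            age = years
    return age

TABLE = [(0, 3.0), (200, 6.0), (500, 10.0), (900, 14.0)]  # illustrative
print(rus_to_bone_age(total_rus([120, 200, 250]), TABLE))  # -> 10.0
```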