-
Publication No.: US20210264271A1
Publication Date: 2021-08-26
Application No.: US17260258
Filing Date: 2019-08-20
Applicant: KONINKLIJKE PHILIPS N.V.
Inventor: Binyam Gebrekidan Gebre , Stojan Trajanovski
Abstract: An adaptable neural network system (1) formed of two neural networks (4, 5). One of the neural networks (5) adjusts a structure of the other neural network (4) based on information about a specific task each time that new second input data (12) indicative of a desired task is received by the one neural network (5), so that the other neural network (4) is adapted to perform that specific task. Thus, an adaptable neural network system (1) capable of performing different tasks on input data (11) can be realized.
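A minimal sketch of the general idea (not the patented implementation): a small conditioning network generates the parameters of a task network from a task descriptor, so the system is re-targeted each time a new task input arrives. The class name, layer sizes and dimensions below are illustrative assumptions.

```python
# Illustrative sketch only: a hypernetwork-style setup in which one network
# generates the weights of another from a task descriptor. Names and sizes
# are assumptions, not taken from the patent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskHypernetwork(nn.Module):
    """Maps a task descriptor to the weights of a small task network."""
    def __init__(self, task_dim, in_dim, hidden, out_dim):
        super().__init__()
        self.in_dim, self.hidden, self.out_dim = in_dim, hidden, out_dim
        n_params = in_dim * hidden + hidden + hidden * out_dim + out_dim
        self.generator = nn.Sequential(
            nn.Linear(task_dim, 128), nn.ReLU(), nn.Linear(128, n_params)
        )

    def forward(self, task_descriptor, x):
        p = self.generator(task_descriptor)  # flat parameter vector for the task network
        i = 0
        w1 = p[i:i + self.in_dim * self.hidden].view(self.hidden, self.in_dim); i += self.in_dim * self.hidden
        b1 = p[i:i + self.hidden]; i += self.hidden
        w2 = p[i:i + self.hidden * self.out_dim].view(self.out_dim, self.hidden); i += self.hidden * self.out_dim
        b2 = p[i:i + self.out_dim]
        h = torch.relu(F.linear(x, w1, b1))
        return F.linear(h, w2, b2)  # task-specific output

# Each new task descriptor re-parameterises the task network on the fly.
model = TaskHypernetwork(task_dim=8, in_dim=16, hidden=32, out_dim=4)
out = model(torch.randn(8), torch.randn(5, 16))
```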
-
Publication No.: US11521064B2
Publication Date: 2022-12-06
Application No.: US16768783
Filing Date: 2018-11-30
Applicant: KONINKLIJKE PHILIPS N.V.
Inventor: Dimitrios Mavroeidis , Binyam Gebrekidan Gebre , Stojan Trajanovski
Abstract: A concept for training a neural network model. The concept comprises receiving training data and test data, each comprising a set of annotated images. A neural network model is trained using the training data with an initial regularization parameter. Loss functions of the neural network for both the training data and the test data are used to modify the regularization parameter, and the neural network model is retrained using the modified regularization parameter. This process is iteratively repeated until the loss functions both converge. A system, method and a computer program product embodying this concept are disclosed.
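A rough sketch of the loop described above, assuming an L2 (weight-decay) regularization parameter; the update heuristic, thresholds and convergence test are assumptions chosen for illustration, not the rule claimed in the patent.

```python
# Sketch of iterative regularization tuning driven by training/test losses.
# The adjustment heuristic (increase lambda when the model overfits, decrease
# it when it underfits) and all thresholds are illustrative assumptions.
import torch
import torch.nn as nn

def train_model(train_x, train_y, test_x, test_y, reg_lambda, epochs=50):
    model = nn.Sequential(nn.Linear(train_x.shape[1], 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=reg_lambda)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(train_x), train_y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(model(train_x), train_y).item(), loss_fn(model(test_x), test_y).item()

def tune_regularization(train_x, train_y, test_x, test_y, reg_lambda=1e-4, tol=1e-3, max_rounds=20):
    prev = (float("inf"), float("inf"))
    for _ in range(max_rounds):
        train_loss, test_loss = train_model(train_x, train_y, test_x, test_y, reg_lambda)
        if abs(train_loss - prev[0]) < tol and abs(test_loss - prev[1]) < tol:
            break  # both loss functions have converged
        # Heuristic: strengthen regularization when the generalization gap is large.
        reg_lambda *= 2.0 if test_loss > 1.5 * train_loss else 0.5
        prev = (train_loss, test_loss)
    return reg_lambda

# Toy usage with random data standing in for the annotated images.
x = torch.randn(200, 10); y = x.sum(dim=1, keepdim=True)
best_lambda = tune_regularization(x[:150], y[:150], x[150:], y[150:])
```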
-
Publication No.: US20200372344A1
Publication Date: 2020-11-26
Application No.: US16768783
Filing Date: 2018-11-30
Applicant: KONINKLIJKE PHILIPS N.V.
Inventor: Dimitrios Mavroeidis , Binyam Gebrekidan Gebre , Stojan Trajanovski
Abstract: A concept for training a neural network model. The concept comprises receiving training data and test data, each comprising a set of annotated images. A neural network model is trained using the training data with an initial regularization parameter. Loss functions of the neural network for both the training data and the test data are used to modify the regularization parameter, and the neural network model is retrained using the modified regularization parameter. This process is iteratively repeated until the loss functions both converge. A system, method and a computer program product embodying this concept are disclosed.
-
Publication No.: US12079989B2
Publication Date: 2024-09-03
Application No.: US17600405
Filing Date: 2020-04-03
Applicant: KONINKLIJKE PHILIPS N.V.
Inventor: Dimitrios Mavroeidis , Stojan Trajanovski , Bart Jacob Bakker
CPC classification number: G06T7/0012 , G06T7/12 , G06T7/187 , G06T7/70 , G06V10/70 , G16H30/40 , G16H50/20 , G06T2207/20076 , G06T2207/20081 , G06T2207/20084 , G06T2207/20092 , G06T2207/30096
Abstract: The present invention provides a method, computer program and processing system for identifying boundaries of lesions within image data. The image data is processed using a machine learning algorithm to generate probability data and uncertainty data. The probability data provides, for each image data point of the image data, a probability data point indicating a probability that said image data point is part of a lesion. The uncertainty data provides, for each probability data point, an uncertainty data point indicating an uncertainty of said probability data point. The uncertainty data is processed to identify or correct boundaries of the lesions.
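One common way to obtain such a probability/uncertainty pair is Monte Carlo dropout; the boundary-correction rule below (exclude high-uncertainty pixels from the lesion mask) and all thresholds are illustrative assumptions rather than the claimed method.

```python
# Sketch: per-pixel lesion probability and uncertainty via Monte Carlo dropout,
# followed by a simple uncertainty-aware boundary correction.
import numpy as np

def mc_dropout_maps(predict_fn, image, n_samples=20):
    """predict_fn runs the segmentation model with dropout active and returns
    a per-pixel lesion probability map for one stochastic forward pass."""
    samples = np.stack([predict_fn(image) for _ in range(n_samples)])
    prob = samples.mean(axis=0)        # probability data point per image data point
    uncertainty = samples.std(axis=0)  # uncertainty data point per probability data point
    return prob, uncertainty

def refine_boundary(prob, uncertainty, p_thresh=0.5, u_thresh=0.2):
    mask = prob > p_thresh             # initial lesion mask
    confident = uncertainty < u_thresh
    return mask & confident            # drop uncertain pixels near the boundary

# Example with a dummy stochastic predictor standing in for the real model.
rng = np.random.default_rng(0)
dummy_predict = lambda img: np.clip(img + rng.normal(0, 0.05, img.shape), 0, 1)
image = rng.random((64, 64))
prob, unc = mc_dropout_maps(dummy_predict, image)
lesion_mask = refine_boundary(prob, unc)
```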
-
Publication No.: US12062225B2
Publication Date: 2024-08-13
Application No.: US17615946
Filing Date: 2020-05-25
Applicant: KONINKLIJKE PHILIPS N.V.
Inventor: Bart Jacob Bakker , Dimitrios Mavroeidis , Stojan Trajanovski
IPC: G06V10/764 , G06V10/772 , G06V10/774 , G06V10/82
CPC classification number: G06V10/764 , G06V10/772 , G06V10/774 , G06V10/82
Abstract: Aspects and embodiments relate to a method of providing a representation of a feature identified by a deep neural network as being relevant to an outcome, a computer program product and apparatus configured to perform that method. The method comprises: providing the deep neural network with a training library comprising: a plurality of samples associated with the outcome; using the deep neural network to recognise a feature in the plurality of samples associated with the outcome; creating a feature recognition library from an input library by identifying one or more elements in each of a plurality of samples in the input library which trigger recognition of the feature by the deep neural network; using the feature recognition library to synthesise a plurality of one or more elements of a sample which have characteristics which trigger recognition of the feature by the deep neural network; and using the synthesised plurality of one or more elements to provide a representation of the feature identified by the deep neural network in the plurality of samples associated with the outcome. Accordingly, rather than visualising a single instance of one or more elements in a sample which trigger a feature associated with an outcome, it is possible to visualise a range of samples including elements which would trigger a feature associated with an outcome, thus enabling a more comprehensive view of operation of a deep neural network in relation to a particular feature.
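A compact sketch of the "synthesise elements that trigger the feature" step, here realised as gradient ascent on the input (activation maximisation); starting from several random seeds yields a range of synthetic samples rather than a single instance. The model, the chosen feature index and the hyperparameters are assumptions for illustration.

```python
# Sketch: synthesising inputs that trigger a chosen internal feature by
# gradient ascent on the input. Multiple random seeds give a plurality of
# examples rather than one instance. All names and values are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10))
feature_index = 3   # the feature (output unit here) whose recognition we visualise

def synthesise_examples(model, feature_index, n_examples=5, steps=200, lr=0.1):
    examples = []
    for _ in range(n_examples):
        x = torch.randn(1, 100, requires_grad=True)   # random seed input
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            activation = model(x)[0, feature_index]
            (-activation).backward()                  # ascend the feature activation
            opt.step()
        examples.append(x.detach())
    return torch.cat(examples)  # plurality of synthesised elements triggering the feature

synthetic = synthesise_examples(model, feature_index)
```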
-
Publication No.: US11612713B2
Publication Date: 2023-03-28
Application No.: US16832509
Filing Date: 2020-03-27
Applicant: KONINKLIJKE PHILIPS N.V.
Inventor: Gary Nelson Garcia Molina , Ulf Grossekathöfer , Stojan Trajanovski , Jesse Salazar , Tsvetomira Kirova Tsoneva , Sander Theodoor Pastoor , Antonio Aquino , Adrienne Heinrich , Birpal Singh Sachdev
Abstract: Typically, high NREM stage N3 sleep detection accuracy is achieved using a frontal electrode referenced to an electrode at a distant location on the head (e.g., the mastoid, or the earlobe). For comfort and design considerations it is more convenient to have active and reference electrodes closely positioned on the frontal region of the head. This configuration, however, significantly attenuates the signal, which degrades sleep stage detection (e.g., N3) performance. The present disclosure describes a deep neural network (DNN) based solution developed to detect sleep using frontal electrodes only. N3 detection is enhanced through post-processing of the soft DNN outputs. Detection of slow-waves and sleep micro-arousals is accomplished using frequency domain thresholds. Volume modulation uses a high-frequency/low-frequency spectral ratio extracted from the frontal signal.
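A sketch of the frequency-domain pieces mentioned above: band power of a frontal EEG channel estimated with Welch's method, and a high-frequency/low-frequency spectral ratio of the kind used for volume modulation. The band edges, sampling rate and threshold are assumptions chosen only for illustration.

```python
# Sketch: band-power features from a frontal EEG channel and a high/low
# frequency spectral ratio. Band edges and thresholds are assumptions.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, low, high):
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 4)
    band = (freqs >= low) & (freqs < high)
    return psd[band].sum()  # simple approximation of power in the band

def spectral_features(signal, fs=100):
    delta = band_power(signal, fs, 0.5, 4.0)   # slow-wave (low-frequency) band
    beta = band_power(signal, fs, 15.0, 30.0)  # high-frequency band
    return {
        "slow_wave_power": delta,
        "hf_lf_ratio": beta / delta,           # could drive volume modulation
        "micro_arousal": beta > 5.0 * delta,   # crude frequency-domain threshold
    }

fs = 100
t = np.arange(0, 30, 1 / fs)
frontal = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.randn(t.size)
features = spectral_features(frontal, fs)
```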
-
Publication No.: US11301995B2
Publication Date: 2022-04-12
Application No.: US16696926
Filing Date: 2019-11-26
Applicant: KONINKLIJKE PHILIPS N.V.
Inventor: Dimitrios Mavroeidis , Bart Jacob Bakker , Stojan Trajanovski
Abstract: Presented are concepts for feature identification in medical imaging of a subject. One such concept processes a medical image with a Bayesian deep learning network to determine a first image feature of interest and an associated uncertainty value, the first image feature being located in a first sub-region of the image. It also processes the medical image with a generative adversarial network to determine a second image feature of interest within the first sub-region of the image and an associated uncertainty value. Based on the first and second image features and their associated uncertainty values, the first sub-region of the image is classified.
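A minimal sketch of the final fusion step only: two feature detections for the same sub-region, each with an uncertainty value, are combined into a classification. The inverse-uncertainty weighting and the decision threshold are assumptions; the abstract does not specify how the two outputs are combined.

```python
# Sketch: fusing a Bayesian-deep-learning detection and a GAN-based detection
# (each a score plus an uncertainty) into a sub-region classification.
import numpy as np

def classify_subregion(bayes_score, bayes_unc, gan_score, gan_unc, threshold=0.5):
    # Weight each detector by its confidence (inverse uncertainty).
    w_bayes = 1.0 / (bayes_unc + 1e-6)
    w_gan = 1.0 / (gan_unc + 1e-6)
    fused = (w_bayes * bayes_score + w_gan * gan_score) / (w_bayes + w_gan)
    combined_unc = 1.0 / (w_bayes + w_gan)
    label = "feature_present" if fused > threshold else "no_feature"
    return label, fused, combined_unc

# Example: the GAN branch is more certain here, so it dominates the fused score.
print(classify_subregion(bayes_score=0.62, bayes_unc=0.30,
                         gan_score=0.85, gan_unc=0.05))
```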
-
Publication No.: US20210326706A1
Publication Date: 2021-10-21
Application No.: US17271036
Filing Date: 2019-08-19
Applicant: KONINKLIJKE PHILIPS N.V.
Inventor: Bart Jacob Bakker , Dimitrios Mavroeidis , Stojan Trajanovski
Abstract: The invention relates to a trained model, such as a trained neural network, which is trained on training data. System and computer-implemented methods are provided for generating metadata which encodes a numerical characteristic of the training data of the trained model, and for using the metadata to determine conformance of input data of the trained model to the numerical characteristics of the training data. If the input data does not conform to the numerical characteristics, the use of the trained model on the input data may be considered out-of-specification (‘out-of-spec’). Accordingly, a system applying the trained model to the input data may, for example, warn a user of the non-conformance, or may decline to apply the trained model to the input data, etc.
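A small sketch of one way such metadata and conformance check could look: per-feature mean and standard deviation of the training data are stored alongside the trained model, and new input is flagged as out-of-spec when it falls far outside that range. The z-score rule and tolerance are assumptions for illustration.

```python
# Sketch: metadata encoding numerical characteristics of the training data,
# and a conformance check on new input before the trained model is applied.
import numpy as np

def build_metadata(training_data):
    return {"mean": training_data.mean(axis=0), "std": training_data.std(axis=0)}

def conforms(metadata, x, max_z=4.0):
    z = np.abs(x - metadata["mean"]) / (metadata["std"] + 1e-12)
    return bool(np.all(z < max_z))  # True if the input lies within the training range

train = np.random.randn(1000, 16)   # stands in for the training data
metadata = build_metadata(train)    # shipped alongside the trained model

sample_in_spec = np.random.randn(16)
sample_out_of_spec = sample_in_spec + 20.0  # far outside the training distribution

for x in (sample_in_spec, sample_out_of_spec):
    if conforms(metadata, x):
        pass  # safe to apply the trained model
    else:
        print("warning: input is out-of-spec for this trained model")
```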
-