-
公开(公告)号:EP4394664A1
公开(公告)日:2024-07-03
申请号:EP22217128.2
申请日:2022-12-29
申请人: Zenseact AB
IPC分类号: G06N3/094 , G06N20/20 , G06N3/045 , G06N3/09 , G06N3/098 , G06N3/091 , G06N3/096 , G06N3/0985
CPC分类号: G06N20/20 , G06N3/09 , G06N3/045 , G06N3/098 , G06N3/0985 , G06N3/096 , G06N3/094 , G06N3/091 , B60W60/00
摘要: The present disclosure relates to methods, systems, a vehicle, a computer-readable storage medium and a computer program product. The method comprises obtaining a cluster of trained ensembles of machine learning, ML, algorithms. The cluster comprises two or more ML algorithm ensembles, wherein each ML ensemble comprises a plurality of ML algorithms that are trained at least partly based on a first set of training data. The method further comprises obtaining sensor data representative of a scenario in a surrounding environment of a vehicle, observed by at least two sensor devices comprised in a sensor system of the vehicle. The sensor data comprises at least two sensor data sets. The method further comprises providing each obtained sensor data set as input to a corresponding ML algorithm ensemble comprised in the ML algorithm ensemble cluster. The method further comprises selecting the ensemble-prediction output of at least one ML algorithm ensemble for which no discrepancy is determined, in order to generate an annotation for one or more data samples of the sensor data set of at least one ML algorithm ensemble for which a discrepancy is determined.
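A minimal sketch of the discrepancy-based auto-annotation idea, assuming each ensemble member is a callable returning per-sample class probabilities and that the sensor data sets are time-aligned views of the same scenario; the variance-based discrepancy score, the threshold and all function names are illustrative assumptions rather than the patent's actual criteria.

```python
import numpy as np

def ensemble_predict(members, samples):
    # Stack per-member class probabilities: shape (n_members, n_samples, n_classes).
    return np.stack([m(samples) for m in members])

def discrepancy(member_preds):
    # Mean per-sample variance across ensemble members as a simple disagreement score.
    return member_preds.var(axis=0).mean()

def auto_annotate(ensemble_cluster, sensor_data_sets, threshold=0.05):
    """ensemble_cluster: list of ensembles; sensor_data_sets: one data set per ensemble."""
    preds = [ensemble_predict(ens, data) for ens, data in zip(ensemble_cluster, sensor_data_sets)]
    scores = [discrepancy(p) for p in preds]
    confident = [i for i, s in enumerate(scores) if s <= threshold]
    uncertain = [i for i, s in enumerate(scores) if s > threshold]
    annotations = {}
    for j in uncertain:
        if confident:
            i = confident[0]
            # Borrow the ensemble-prediction output of a low-discrepancy ensemble as
            # pseudo-labels for the data samples of the high-discrepancy ensemble.
            annotations[j] = preds[i].mean(axis=0).argmax(axis=1)
    return annotations
```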
-
22.
公开(公告)号:EP4343629A1
公开(公告)日:2024-03-27
申请号:EP23154013.9
申请日:2023-01-30
申请人: NavInfo Europe B.V.
IPC分类号: G06N3/0464 , G06N3/082 , G06N3/096 , G06N3/091
摘要: A computer-implemented method for improving generalization when training deep neural networks in online settings is disclosed. The method comprises a general learning paradigm for sequential data referred to as Learn, Unlearn, RElearn (LURE): a dynamic re-initialization method that addresses the problem of generalization of parameterized networks on sequential data by selectively retaining the task-specific connections according to an importance criterion and re-randomizing the less important parameters at each mega batch of training. The proposed method of selective forgetting is crucial for retaining previous information while improving generalization to unseen samples.
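A hedged sketch of the re-initialization step that LURE describes, assuming weight magnitude as the importance criterion and a fixed retention ratio; both choices, and the Gaussian re-randomization, are assumptions made for illustration.

```python
import numpy as np

def lure_reinitialize(weights, keep_ratio=0.8, rng=None):
    """Retain the most important connections and re-randomize the rest of the weights."""
    rng = np.random.default_rng() if rng is None else rng
    cutoff = np.quantile(np.abs(weights), 1.0 - keep_ratio)   # importance threshold
    keep_mask = np.abs(weights) >= cutoff
    fresh = rng.normal(scale=weights.std() + 1e-8, size=weights.shape)
    # Unlearn: drop the less important parameters; relearning then starts from fresh values.
    return np.where(keep_mask, weights, fresh)

# Applied once per mega batch before training continues on the next chunk of the
# sequential data stream.
```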
-
公开(公告)号:EP4343628A1
公开(公告)日:2024-03-27
申请号:EP22197131.0
申请日:2022-09-22
发明人: Madhavan, Saravanan , Pai, Sriram , C, Basavaraj
IPC分类号: G06N3/0464 , G06N3/096 , G06N3/09 , G06T7/00 , G06N3/082
摘要: The present invention provides a method and system for validating the cleanliness of machine parts in an industrial plant. The method comprises determining, by a processing unit, a neural network which is trained to determine whether dirt is present on one or more machine parts based on the analysis of one or more images. The method further comprises modifying a plurality of weights of the neural network to generate a plurality of versions of the neural network. The method further comprises receiving an image of a machine part in an industrial plant. The method further comprises determining, by the processing unit, whether dirt is present on the machine part by applying an optimum version of the neural network. The method further comprises validating, by the processing unit, the cleanliness of the machine part based on a determination that dirt is not present on the machine part.
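A brief sketch of the weight-modification and version-selection flow, assuming the versions are produced by small random perturbations and ranked by a validation metric; the perturbation scheme, the `evaluate` callback and the selection criterion are all assumptions for illustration.

```python
import numpy as np

def make_versions(weights, n_versions=5, scale=0.01, rng=None):
    # Generate candidate versions of the network by small random weight modifications.
    rng = np.random.default_rng() if rng is None else rng
    return [weights + rng.normal(scale=scale, size=weights.shape) for _ in range(n_versions)]

def pick_optimum(versions, evaluate):
    # evaluate(weights) is assumed to return a validation score for one version.
    scores = [evaluate(w) for w in versions]
    return versions[int(np.argmax(scores))]

def validate_cleanliness(image, dirt_detector):
    # dirt_detector uses the optimum version and returns True when dirt is detected;
    # the part is validated as clean only when no dirt is found.
    return not dirt_detector(image)
```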
-
24.
公开(公告)号:EP4307175A1
公开(公告)日:2024-01-17
申请号:EP23183689.1
申请日:2023-07-05
发明人: Nakamura, Akira
IPC分类号: G06N3/0455 , G06N3/0475 , G06N3/0495 , G06N3/094 , G06N3/096 , G06N3/0464
摘要: An electronic device and method for image component generation based on the application of iterative learning to an autoencoder model and a transformer model are provided. The electronic device fine-tunes, based on first training data including a first set of images, an autoencoder model and a transformer model. The autoencoder model includes an encoder model, a learned codebook, a generator model, and a discriminator model. The electronic device selects a subset of images from the first training data. The electronic device applies the encoder model on the selected subset of images. The electronic device generates second training data including a second set of images, based on the application of the encoder model. The generated second training data corresponds to a quantized latent representation of the selected subset of images. The electronic device pre-trains the autoencoder model to create a next generation of the autoencoder model, based on the generated second training data.
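A high-level sketch of how the generational loop could be wired, with the autoencoder/transformer training routines passed in as callables; the callable names (`fine_tune`, `select_subset`, `encode`, `pretrain`) and the fixed number of generations are assumptions, not interfaces from the application.

```python
def iterative_generations(first_training_data, fine_tune, select_subset, encode, pretrain,
                          n_generations=3):
    """Fine-tune, encode a subset into quantized latents, then pre-train the next generation."""
    data, model = first_training_data, None
    for _ in range(n_generations):
        model = fine_tune(model, data)                    # fine-tune autoencoder + transformer
        subset = select_subset(data)                      # subset of images from the training data
        second_training_data = [encode(model, img) for img in subset]  # quantized latent representations
        model = pretrain(model, second_training_data)     # create the next generation
        data = second_training_data
    return model
```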
-
公开(公告)号:EP4283529A1
公开(公告)日:2023-11-29
申请号:EP23151976.0
申请日:2023-01-17
发明人: Park, Sungyeon , Shin, Hyunhak , Song, Changho , Noh, Seungin , Lim, Jeongeun
IPC分类号: G06N3/084 , G06N3/096 , G06N3/0464 , G06V10/20 , G06V10/774 , G06V10/82 , G06V10/98 , G06V20/52
摘要: An object recognition model training method in a computing device is disclosed. In the present disclosure, an object of interest, which is an object targeted for object recognition, is designated, and an object of non-interest, i.e. an object other than the object of interest, is generated and used as learning data for the object recognition model. In the process of training the object recognition model, when an erroneously detected object occurs, the object recognition model may be retrained by automatically converting the erroneously detected object to an object of non-interest, without feeding the erroneous detection back to the user. Accordingly, user convenience in handling erroneously detected objects is improved, which increases the reliability of the object recognition model. This disclosure can be associated with artificial intelligence modules, drones (unmanned aerial vehicles (UAVs)), robots, augmented reality (AR) devices, virtual reality (VR) devices, devices related to 5G service, etc.
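A minimal sketch of how erroneous detections might be folded back in as objects of non-interest, assuming simple integer labels and callables for the detector and the training routine; the label constants and the use of reference annotations to spot the errors are assumptions for the example.

```python
OBJECT_OF_INTEREST = 1
NON_INTEREST = 0

def retrain_with_non_interest(detector, train_fn, samples, reference_labels):
    """Convert erroneously detected samples into objects of non-interest and retrain."""
    relabeled = []
    for sample, truth in zip(samples, reference_labels):
        prediction = detector(sample)
        if prediction == OBJECT_OF_INTEREST and truth != OBJECT_OF_INTEREST:
            # Erroneous detection: keep it in the training set as an object of
            # non-interest instead of asking the user to correct it.
            relabeled.append((sample, NON_INTEREST))
        else:
            relabeled.append((sample, truth))
    return train_fn(relabeled)
```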
-
公开(公告)号:EP4254273A1
公开(公告)日:2023-10-04
申请号:EP23153237.5
申请日:2023-01-25
申请人: Fujitsu Limited
发明人: Katoh, Takashi , Uemura, Kento , Yasutomi, Suguru
摘要: A machine learning program causes at least one computer to execute a process. The process includes estimating a first label distribution of unlabeled training data based on a classification model and an initial value of a label distribution of a transfer target domain, the classification model being trained by using labeled training data which corresponds to a transfer source domain and unlabeled training data which corresponds to the transfer target domain; acquiring a second label distribution based on the labeled training data; acquiring a weight of each label included in the labeled training data and the unlabeled training data based on a difference between the first label distribution and the second label distribution; and re-training the classification model with the labeled training data and the unlabeled training data in which the weight of each label is reflected.
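A short sketch of the label-reweighting step, using a simple ratio of the estimated target-domain distribution to the source-domain distribution as the per-label weight; the ratio form and the assumption that the classifier returns class indices are illustrative choices, not the patent's exact formula.

```python
import numpy as np

def estimate_first_distribution(classifier, unlabeled_x, n_classes):
    preds = classifier(unlabeled_x)                     # assumed to return integer class indices
    return np.bincount(preds, minlength=n_classes) / len(preds)

def second_distribution(source_labels, n_classes):
    return np.bincount(source_labels, minlength=n_classes) / len(source_labels)

def label_weights(first_dist, second_dist, eps=1e-8):
    # Weight each label by how its frequency differs between the two distributions;
    # these weights are then reflected when re-training the classification model.
    return first_dist / (second_dist + eps)
```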
-
27.
公开(公告)号:EP4239528A1
公开(公告)日:2023-09-06
申请号:EP23150576.9
申请日:2023-01-06
发明人: BANDYOPADHYAY, SOMA , BALAKRISHNAN, SRIDHAR , SACHAN, SHRUTI , TADEPALLI, YASASVY , PAL, ARPAN , DATTA, ANISH , LEBURI, KARTHIK , GADEPALLY, SRINVAS RAGHU RAMAN
IPC分类号: G06N3/0455 , G06N3/047 , G06N3/0475 , G06N3/096 , G06N3/088
摘要: Existing machine learning systems require historical data to perform analytics to detect faults in a machine and are unable to detect new types of faults/changes occurring in real time. These systems further fail to identify operational changes due to sensor drift and forget past events that have occurred. The present application provides systems and methods for identifying and classifying sensor drifts and diverse, varying operational conditions from continually received sensor data using continual training of variational autoencoders (VAE) following drift-specific characteristics, wherein sensor drift is compensated for based on identified changes in sensors and degradation in machine(s). A rehearsal technique is performed using either VAE-based generative models trained in previous iterations, or discriminative instances of the original dataset from previous iterations, each configured to generate a dataset corresponding to the current iteration, thus preventing catastrophic forgetting.
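A hedged sketch of the rehearsal step for the continual VAE training, assuming a `previous_vae.sample` interface for generative replay and numpy arrays of identical shape for the retained instances; the interfaces, replay size and mixing strategy are assumptions made for illustration.

```python
import numpy as np

def rehearsal_batch(new_sensor_data, previous_vae=None, retained_instances=None, n_replay=256):
    """Mix the current iteration's sensor data with replayed data from previous iterations."""
    if previous_vae is not None:
        replay = previous_vae.sample(n_replay)          # generative rehearsal from an earlier VAE
    elif retained_instances is not None:
        idx = np.random.choice(len(retained_instances),
                               size=min(n_replay, len(retained_instances)), replace=False)
        replay = retained_instances[idx]                # rehearsal from retained original instances
    else:
        replay = np.empty((0,) + new_sensor_data.shape[1:])
    # Training the VAE on the combined batch counteracts catastrophic forgetting.
    return np.concatenate([new_sensor_data, replay], axis=0)
```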
-
28.
公开(公告)号:EP4198832A1
公开(公告)日:2023-06-21
申请号:EP22213730.9
申请日:2022-12-15
申请人: Tvarit GmbH
发明人: Srivastava, Aditya , Shekhawat, Sanjay , Gupta, Rushil , Kumar, Sachin , Galrani, Kamal , Prajapat, Rahul , Modukuru, Naga Sai Pranay , Agrahari, Rishabh , Barde, Nihal Rajan , Mondal, Arnab Kumar , Prasad, Prathosh Aragola
IPC分类号: G06N3/096 , G06N3/0455 , G06N3/0464 , G06N3/0442 , G06N3/094
摘要: A cross-domain generalization system (200) for industrial artificial intelligence (AI) applications is disclosed. A target encoder subsystem (310) obtains target data from a target machine (204) product and generates lower-dimensional data for the obtained target data using a target artificial intelligence (AI) model. The generated lower-dimensional data correspond to a plurality of target embeddings data (404). The target encoder subsystem (310) further applies the plurality of target embeddings data (404) to a source classifier AI model. A source classifier subsystem (308) predicts a quality of the target machine (204) product by generating class labels for each of the plurality of target embeddings data (404) based on a result of the classifier AI model. The goal of the present invention is to learn features or representations such that the correlation with a label space is similar in both the source and target domains while being invariant to the data distributions.
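A minimal sketch of the inference path only: a target-domain encoder produces embeddings that a classifier trained on the source domain then labels; `target_encoder` and `source_classifier` are placeholder callables assumed for the example, and the domain-invariant training itself is not shown.

```python
import numpy as np

def predict_quality(target_data, target_encoder, source_classifier):
    embeddings = target_encoder(target_data)       # lower-dimensional target embeddings data
    class_scores = source_classifier(embeddings)   # source classifier applied to target embeddings
    return np.argmax(class_scores, axis=1)         # predicted quality class label per sample
```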
-
公开(公告)号:EP4187446A1
公开(公告)日:2023-05-31
申请号:EP22207594.7
申请日:2022-11-15
申请人: Orange
发明人: LI, Wenbin
摘要: The invention relates to a method for training an artificial neural network (RN) so that said artificial neural network (RN) identifies a property value among a plurality of property values, each property being able to take at least two different values.
The method comprises:
- primary training consisting of training (S330) said neural network (RN) to identify at least one target value (UNC1); and
- secondary training for detecting weaknesses of the model trained during the primary training and reinforcing that model by (S370) increasing the learning rate of the network's output neurons associated with the property values most often estimated incorrectly.
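A short sketch of the secondary-training idea, assuming per-output-neuron learning rates, a simple error count per property value, and a fixed boost factor; all three are assumptions made for illustration.

```python
import numpy as np

def per_output_learning_rates(error_counts, base_lr=0.01, boost=5.0, top_k=2):
    """error_counts[c]: how often property value c was estimated incorrectly during primary training."""
    weakest = np.argsort(error_counts)[-top_k:]      # values most often estimated incorrectly
    lrs = np.full(len(error_counts), base_lr)
    lrs[weakest] *= boost                            # reinforce the corresponding output neurons
    return lrs
```
-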
公开(公告)号:EP4425391A1
公开(公告)日:2024-09-04
申请号:EP23166242.0
申请日:2023-03-31
申请人: Infosys Limited
发明人: GANESAN, Rajeshwari , HONNA, Megha
摘要: This disclosure relates to a method and system for managing knowledge of a primary ML model. The method includes generating a set of class probabilities for an unlabelled dataset based on a labelling function. The unlabelled dataset may be associated with the primary ML model, and the primary ML model may employ a first ML model architecture. Further, the method includes transferring the unlabelled dataset and the associated set of class probabilities for training a secondary ML model based on a knowledge transfer technique. The secondary ML model may employ a second ML model architecture. It should be noted that the first ML model architecture is different from the second ML model architecture.
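A hedged sketch of the knowledge-transfer step, where soft class probabilities produced by a labelling function for the unlabelled dataset supervise a secondary model with a different architecture; the cross-entropy form and the `train_step` callback are assumptions for illustration.

```python
import numpy as np

def soft_label_loss(student_probs, teacher_probs, eps=1e-12):
    # Cross-entropy of the secondary model's outputs against the generated class probabilities.
    return -np.mean(np.sum(teacher_probs * np.log(student_probs + eps), axis=1))

def transfer_knowledge(unlabelled_x, labelling_fn, train_step, n_steps=100):
    """train_step(x, soft_targets) is assumed to perform one optimization step of the secondary model."""
    teacher_probs = labelling_fn(unlabelled_x)        # set of class probabilities for the unlabelled data
    for _ in range(n_steps):
        train_step(unlabelled_x, teacher_probs)       # fit the second architecture to the soft labels
```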