-
Publication No.: EP4451230A1
Publication Date: 2024-10-23
Application No.: EP23168202.2
Filing Date: 2023-04-17
Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA, Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V.
IPC Classification: G06V10/774, G06V10/26
Abstract: The present invention relates to a method for training an image segmentation model (ISM) for segmenting images (IMG) of a second type, the model (ISM) having been trained (T100) using a first set (X_s) of images of a first type having labels (Y_s). The method comprises training (T200) the model (ISM) using at least a second set (X_t) of images of the second type having weak labels (Y_t), the weakly-labeled images comprising unlabeled pixels, wherein said training (T200) is performed using a loss function based on: similarity measures (f_t^i · η_t^k) between class prototypes (η_t^k) and features (f_t^i) of the images of the second set (X_t); and on the weak labels (y_t^{i,k}) of the images of the second set (X_t).
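To make the loss concrete, a minimal PyTorch sketch of a prototype-similarity term driven by weak labels is shown below. The cosine normalization, the softmax temperature tau, and the convention that all-zero weak-label rows mark unlabeled pixels are illustrative assumptions, not details taken from the claims.

    import torch
    import torch.nn.functional as F

    def weak_label_prototype_loss(features, prototypes, weak_labels, tau=0.1):
        """Hypothetical prototype-similarity loss on weakly-labeled pixels.

        features:    (N, C) per-pixel features f_t^i
        prototypes:  (K, C) class prototypes eta_t^k
        weak_labels: (N, K) one-hot weak labels y_t^{i,k}; all-zero rows are
                     treated as unlabeled pixels and ignored.
        """
        # Similarity f_t^i . eta_t^k between every pixel and every prototype
        sims = F.normalize(features, dim=1) @ F.normalize(prototypes, dim=1).T
        log_probs = F.log_softmax(sims / tau, dim=1)          # (N, K)
        labeled = weak_labels.sum(dim=1) > 0                  # skip unlabeled pixels
        if not labeled.any():
            return sims.new_zeros(())
        # Cross-entropy of the similarity distribution against the weak labels
        return -(weak_labels[labeled] * log_probs[labeled]).sum(dim=1).mean()

In a full training loop this term would typically be added to the supervised loss computed on the labeled first set X_s; only the weak-label term is sketched here.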
-
Publication No.: EP4120132A1
Publication Date: 2023-01-18
Application No.: EP21185912.9
Filing Date: 2021-07-15
Inventor(s): OLMEDA REINO, Daniel; REZAEIANARAN, Farzaneh; SHETTY, Rakshith; ZHANG, Shanshan; SCHIELE, Bernt
IPC Classification: G06K9/62
Abstract: A computer-implemented method for training a detection model according to an unsupervised domain adaptation approach, said method comprising a set of steps performed for each image of at least one pair of images, the images of a pair respectively belonging to a source domain and a target domain. Said set of steps associated with an image comprises:
- obtaining (E10) one or more object proposals and feature vectors for said image,
- clustering (E20) the obtained object proposals by executing a clustering algorithm,
- determining (E30), for each obtained cluster, a quantity representative of the feature vectors respectively associated with the object proposals belonging to said cluster.
The method also comprises a step of learning (E40) a domain discriminator using adversarial training, so as to align, between the source and target domains, the quantities determined for each pair.
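As a rough illustration of steps E20-E30, the sketch below clusters the object proposals of one image and reduces each cluster to a single representative feature vector; KMeans and the per-cluster mean are stand-in choices, since the abstract names neither a clustering algorithm nor the representative quantity.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_quantities(proposal_feats, n_clusters=8):
        """Cluster one image's object proposals (E20) and compute, per cluster,
        a quantity representative of its feature vectors (E30).
        proposal_feats: (num_proposals, feat_dim) array, num_proposals >= n_clusters.
        """
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(proposal_feats)
        # Per-cluster mean feature as the representative quantity (an assumption)
        return np.stack([proposal_feats[labels == k].mean(axis=0)
                         for k in range(n_clusters)])          # (n_clusters, feat_dim)

For step E40, the per-cluster quantities from source and target images would then be fed to a domain discriminator trained adversarially (for instance through a gradient-reversal layer) so that the two domains become indistinguishable at the cluster level; that part is not sketched here.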
-
Publication No.: EP4099226A1
Publication Date: 2022-12-07
Application No.: EP21176983.1
Filing Date: 2021-05-31
Abstract: A training method for training a CVAE to calculate a prediction Y relative to future properties of agent(s) A_i, and a prediction method to calculate such predictions using the CVAE.
The CVAE comprises a posterior (Po), a prior (Pr) and a decoder (D).
Each of the posterior (Po) and the prior (Pr) comprises an encoder (PoE, PrE), a sampler (PoS, PrS) and an attention mechanism (PoAM, PrAM).
The encoders (PoE,PrE) calculate parameters of conditional distributions of intermediate variables (PoW,PrW), based on past trajectories of the agents.
The attention mechanisms (PoAM,PrAM) output values of the latent space variable Z based on the drawn value of the intermediate variable (PoW,PrW).
The decoder (D) calculates predictions Y based on a value of the latent space variable Z.
Both the posterior distribution q_φ(Z|X,Y) and the prior distribution p_θ(Z|X) are joint distributions based on the past observations of the agents.
Computer program(s), readable medium, prediction system and method.
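The posterior/prior/decoder split can be pictured with the toy CVAE below. It collapses the encoders, samplers and attention mechanisms of the abstract into plain Gaussian reparameterisation over Z, and all layer sizes are arbitrary; it sketches the general CVAE pattern, not the patented architecture.

    import torch
    import torch.nn as nn

    class TinyCVAE(nn.Module):
        """Toy CVAE with posterior q_phi(Z|X,Y), prior p_theta(Z|X) and decoder."""

        def __init__(self, x_dim=16, y_dim=8, z_dim=4, hidden=32):
            super().__init__()
            self.posterior = nn.Linear(x_dim + y_dim, 2 * z_dim)  # -> (mu_q, logvar_q)
            self.prior = nn.Linear(x_dim, 2 * z_dim)              # -> (mu_p, logvar_p)
            self.decoder = nn.Sequential(nn.Linear(x_dim + z_dim, hidden),
                                         nn.ReLU(), nn.Linear(hidden, y_dim))

        @staticmethod
        def draw(mu, logvar):
            # Reparameterised Gaussian sample
            return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

        def forward(self, x, y):
            mu_q, lv_q = self.posterior(torch.cat([x, y], -1)).chunk(2, -1)
            mu_p, lv_p = self.prior(x).chunk(2, -1)
            z = self.draw(mu_q, lv_q)                   # training: Z drawn from the posterior
            y_hat = self.decoder(torch.cat([x, z], -1))
            # KL(q_phi(Z|X,Y) || p_theta(Z|X)) for diagonal Gaussians
            kl = 0.5 * (lv_p - lv_q
                        + (lv_q.exp() + (mu_q - mu_p) ** 2) / lv_p.exp() - 1).sum(-1)
            return y_hat, kl

Training minimises a reconstruction loss on y_hat plus the KL term; at prediction time Z would be drawn from the prior p_theta(Z|X) instead, as is standard for CVAEs.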
-
Publication No.: EP4455994A1
Publication Date: 2024-10-30
Application No.: EP23170809.0
Filing Date: 2023-04-28
Inventor(s): LI, Zhi; OLMEDA REINO, Daniel; SHI, Shaoshuai; SCHIELE, Bernt; DAI, Dengxin
Abstract: A computer-implemented method for test-time domain adaptation of a first depth estimation model pre-trained on source data, the method comprising, for each target image of a set of target data, the steps of: (S10) aligning a scale of the target image to a scale of the source data, thereby generating an aligned image; and (S20) using the first depth estimation model to generate a depth map prediction for the aligned image.
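A minimal reading of steps S10-S20 is sketched below, interpreting the "scale" being aligned as the image resolution of the source data; that interpretation, and the resizing of the resulting depth map back to the original size, are assumptions made purely for illustration.

    import torch
    import torch.nn.functional as F

    def predict_with_scale_alignment(depth_model, target_img, src_h, src_w):
        """target_img: (B, 3, H, W) tensor; src_h, src_w: source-data resolution."""
        _, _, h, w = target_img.shape
        # S10: align the target image to the scale (here: resolution) of the source data
        aligned = F.interpolate(target_img, size=(src_h, src_w),
                                mode='bilinear', align_corners=False)
        # S20: run the pre-trained depth model on the aligned image
        with torch.no_grad():
            depth = depth_model(aligned)                # (B, 1, src_h, src_w)
        # Map the prediction back onto the original target geometry
        return F.interpolate(depth, size=(h, w), mode='bilinear', align_corners=False)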
-
Publication No.: EP4099213A1
Publication Date: 2022-12-07
Application No.: EP21176982.3
Filing Date: 2021-05-31
IPC Classification: G06K9/00
Abstract: A method for training a convolutional neural network to deliver an identifier of a person visible on an image when the image is input to the convolutional neural network, the method comprising:
a. obtaining a training dataset including images of a person and skeleton representations,
b. obtaining feature maps from the convolutional neural network,
c. extracting features using the skeleton representations,
d. forming graphs using the features and processing them in a graph convolutional neural network,
e. calculating a loss,
f. jointly training the two neural networks.
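Steps c-e can be pictured with the head below: features are gathered from the backbone's feature map at the skeleton joint positions, formed into a graph over the joints, and passed through a single graph-convolution layer before an identity classifier. The joint adjacency matrix, layer sizes and pooling are all placeholder choices; only the overall CNN-plus-GCN structure follows the abstract.

    import torch
    import torch.nn as nn

    class SkeletonGCNHead(nn.Module):
        def __init__(self, feat_dim, n_ids, adjacency):
            super().__init__()
            self.register_buffer('adj', adjacency)   # (J, J) row-normalised joint graph
            self.gcn = nn.Linear(feat_dim, feat_dim)
            self.classifier = nn.Linear(feat_dim, n_ids)

        def forward(self, feat_map, joint_xy):
            """feat_map: (B, C, H, W) CNN features; joint_xy: (B, J, 2) integer pixel coords."""
            _, C, H, W = feat_map.shape
            x = joint_xy[..., 0].clamp(0, W - 1)
            y = joint_xy[..., 1].clamp(0, H - 1)
            idx = (y * W + x).unsqueeze(1).expand(-1, C, -1)             # (B, C, J)
            joints = feat_map.flatten(2).gather(2, idx).transpose(1, 2)  # (B, J, C), step c
            h = torch.relu(self.adj @ self.gcn(joints))                  # graph conv, step d
            return self.classifier(h.mean(dim=1))                        # identity logits

The identity logits would feed a cross-entropy loss (step e), back-propagated through both this head and the convolutional backbone so that the two networks are trained jointly (step f).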
-