When we specify a model-model, trainer-trainer, or model-trainer combination, e.g. https://github.com/marrlab/DomainLab/blob/master/examples/conf/vlcs_diva_mldg_dial.yaml
there can be name collisions of the multiplier $\mu$:
`domainlab/algos/trainers/train_matchdg.py` (line 39 in 43d1cda):

```python
self.lambda_ctr = self.aconf.gamma_reg
```

https://github.com/marrlab/DomainLab/blob/6126ddeb2df0fe3a07de458cac68e4ad02da3c66/domainlab/algos/trainers/train_mldg.py#L111C19-L111C39

`domainlab/algos/trainers/train_dial.py` (line 52 in 6126dde):

```python
return [loss_dial], [self.aconf.gamma_reg]
```
In this case, when I set the command-line argument `gamma_reg=1.0`, it simultaneously sets $\mu$ for both MLDG and DIAL to 1.0. We need a mechanism to set different values for the two Trainers.
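A minimal sketch of the collision and of one possible mechanism to resolve it (the stand-in classes, `resolve_gamma_reg`, and the per-trainer mapping are hypothetical illustrations, not the actual DomainLab code or CLI):

```python
from argparse import Namespace

# Stand-ins for the two trainers quoted above: both read the single shared
# field aconf.gamma_reg, so one command-line flag drives both multipliers.
class TrainerLikeMLDG:
    def __init__(self, aconf):
        self.mu = aconf.gamma_reg      # same source ...

class TrainerLikeDIAL:
    def __init__(self, aconf):
        self.mu = aconf.gamma_reg      # ... as here -> collision

aconf = Namespace(gamma_reg=1.0)
assert TrainerLikeMLDG(aconf).mu == TrainerLikeDIAL(aconf).mu == 1.0

# One possible (hypothetical) mechanism: allow gamma_reg to be either a
# scalar or a per-trainer mapping, and let each trainer look up its own name.
def resolve_gamma_reg(gamma_reg, trainer_name, default=1.0):
    if isinstance(gamma_reg, dict):
        return gamma_reg.get(trainer_name, default)
    return gamma_reg

per_trainer = {"mldg": 10.0, "dial": 0.1}
print(resolve_gamma_reg(per_trainer, "mldg"))  # 10.0
print(resolve_gamma_reg(per_trainer, "dial"))  # 0.1
```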
Why this issue is important beyond the immediate collision:
One of the major features of DomainLab is that it can handle a training objective with multiple loss terms, each term weighted by a multiplier. To circumvent setting those multipliers manually, DomainLab provides multiplier schedulers that adapt the multiplier values at each epoch (a rough sketch of the weighted objective follows below).
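As a rough sketch (generic notation, not DomainLab identifiers), the objective that these schedulers manage has the form

$$
\mathcal{L}(\theta) \;=\; \ell_{\text{task}}(\theta) \;+\; \sum_{i} \mu_i \, R_i(\theta),
$$

where $\ell_{\text{task}}$ is the task loss, each $R_i$ is a regularization term contributed by a model or Trainer (e.g. the MLDG or DIAL loss), and $\mu_i$ is its multiplier; the schedulers adapt the $\mu_i$ over epochs so they do not have to be fixed by hand.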
The following file contains the Trainer that schedules the values of those multipliers at each epoch:
https://github.com/marrlab/DomainLab/blob/master/domainlab/algos/trainers/train_hyper_scheduler.py
Inside this Trainer, a hyper_scheduler has to be specified (in the fbopt branch, we define a fully automatic controller for scheduling the multipliers; in the main branch, the schedulers live in the following file):
https://github.com/marrlab/DomainLab/blob/master/domainlab/algos/trainers/hyper_scheduler.py
First, those multipliers (a.k.a. hyperparameters) are initialized:
`domainlab/algos/trainers/train_hyper_scheduler.py` (line 28 in a77dad8):

```python
self.hyper_scheduler = self.model.hyper_init(scheduler)
```
At each epoch, the multipliers (a.k.a. hyperparameters) are updated via the scheduler:
`domainlab/algos/trainers/train_hyper_scheduler.py` (line 63 in a77dad8):

```python
self.model.hyper_update(epoch, self.hyper_scheduler)
```
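To make the init/update flow concrete, here is a minimal sketch of a warmup-style scheduler and how a trainer loop might drive it; this only mirrors the pattern of the two lines quoted above, and the class and method names are illustrative assumptions rather than DomainLab's implementation:

```python
class WarmupScheduler:
    """Linearly ramp each multiplier from 0 to its target value over
    `steps_warmup` epochs (illustrative sketch, not DomainLab's code)."""
    def __init__(self, steps_warmup=100, **mu_targets):
        self.steps_warmup = steps_warmup
        self.mu_targets = mu_targets   # e.g. {"beta_d": 1.0, "gamma_reg": 10.0}

    def __call__(self, epoch):
        ratio = min(epoch / self.steps_warmup, 1.0)
        return {name: target * ratio for name, target in self.mu_targets.items()}


# Rough usage pattern mirroring hyper_init (construct with targets) and
# hyper_update (query new multiplier values at each epoch):
scheduler = WarmupScheduler(steps_warmup=10, beta_d=1.0, beta_y=1.0, beta_x=1.0)
for epoch in range(3):
    multipliers = scheduler(epoch)
    print(epoch, multipliers)
```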
All $\mu$ need to be registered with the multiplier scheduler:
`domainlab/models/model_diva.py` (line 112 in 43d1cda):

```python
trainer=None, beta_d=self.beta_d, beta_y=self.beta_y, beta_x=self.beta_x
```
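To illustrate why registration makes the naming question matter, here is a hedged sketch: assuming hyper_init conceptually collects the named multipliers so the scheduler can update them by name, a trainer-level multiplier would need a key that does not clash with another trainer's. The `HyperRegistry` class and the `gamma_reg_mldg`/`gamma_reg_dial` names below are hypothetical, not part of DomainLab:

```python
# Hypothetical registry: if both trainers registered their mu under the same
# key "gamma_reg", the second registration would collide with the first.
class HyperRegistry:
    def __init__(self):
        self._mus = {}

    def register(self, name, value):
        if name in self._mus:
            raise ValueError(f"multiplier name collision: {name!r}")
        self._mus[name] = value

    def as_dict(self):
        return dict(self._mus)

reg = HyperRegistry()
reg.register("beta_d", 1.0)            # DIVA model multipliers, as above
reg.register("beta_y", 1.0)
reg.register("beta_x", 1.0)
reg.register("gamma_reg_mldg", 10.0)   # hypothetical per-trainer names ...
reg.register("gamma_reg_dial", 0.1)    # ... avoid the clash on "gamma_reg"
print(reg.as_dict())
```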