hrrbay/FACIL

 
 


What is this repository?

This is a fork of FACIL implementing some quality-of-life (QoL) changes that mainly facilitate the implementation of new approaches while keeping the same functionality.

Changes

Implement FACIL as a self-contained package

For easier access to variables previously defined in main_incremental.py and easier relative imports, the code is restructured so that everything is contained in a package called facil. The folder structure is therefore changed to

src/
    facil/
        approach/
        __init__.py
        ...
    main_incremental.py

where the previous code of main_incremental.py is moved into facil.py. Many variables (e.g. loaders, args) are then easily accessible from anywhere within the package, i.e., from any newly added file via facil.x. While this may not always be desirable, it is particularly useful when implementing a new approach and the need arises to access some variables (otherwise not needed) for debugging, sanity checks, or similar.
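The access pattern can be illustrated with a minimal, self-contained sketch. Here a ModuleType object stands in for the real facil package, and the args value is an illustrative placeholder, not FACIL's actual contents:

```python
import sys
import types

# Stand-in for the real `facil` package, registered so it can be imported
# elsewhere; in the fork this is the actual package under src/facil/.
facil = types.ModuleType('facil')
sys.modules['facil'] = facil

# facil.py would set shared state like this after parsing arguments
# (the dict below is an illustrative placeholder, not FACIL's real args):
facil.args = {'nepochs': 200, 'lr': [0.1]}

# any newly added approach file can then simply do:
import facil
print(facil.args['nepochs'])  # 200
```

Because Python caches modules in sys.modules, every file that imports facil sees the same module object and hence the same shared attributes.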

Approach constructors and arguments

Until now, adding new arguments to an approach was somewhat tedious due to the redundancy of defining default values in both extra_parser and __init__. This is also prone to confusion: which default value is actually used?

In this version, all default arguments are therefore set in the super-constructor (i.e. Inc_Learning_Appr.__init__). Additionally, all arguments to a sub-class are passed as keyword arguments (**kwargs). This fully removes the need to manually add approach arguments to the corresponding constructor, as defaults (including member names) are derived from its extra_parser function. We therefore only need to define new parameters in extra_parser; they are set automatically in the super-constructor!

NOTE: This change led to some renaming of used members as they are now exactly the same as generated by the parser.

This is implemented as follows:

# Inc_Learning_Appr (super-class)
def __init__(self, model, device, nepochs, lr, lr_min, lr_factor, lr_patience, clipgrad,
                 momentum, wd, multi_softmax, wu_nepochs, wu_lr_factor, fix_bn,
                 eval_on_train, exemplars_dataset: ExemplarsDataset, **appr_kwargs):
    ... # base-args are set here

    # set approach-specific default arguments
    default_args = self.extra_parser(None)[0].__dict__
    for k, v in default_args.items():
        setattr(self, k, v)

    # overwrite approach-specific defaults with passed kwargs
    for k, v in appr_kwargs.items():
        assert k in default_args.keys(), f'Argument "{k}" is not defined in {self.__class__.__name__}.extra_parser!'
        setattr(self, k, v)

# sub-class (e.g. dmc)
def __init__(self, model, device, **kwargs):
    super(Appr, self).__init__(model, device, **kwargs)
    # all members corresponding to destination-names of arguments in `extra_parser` are already set now!

    self.model_old = None
    self.model_new = None
    
    # get dataloader for auxiliary dataset
    aux_trn_ldr, _, aux_val_ldr, _ = get_loaders([self.aux_dataset], num_tasks=1, nc_first_task=None, validation=0,
                                                  batch_size=self.aux_batch_size, num_workers=4, pin_memory=False)
    self.aux_trn_loader = aux_trn_ldr[0]
    self.aux_val_loader = aux_val_ldr[0]
    # Since an auxiliary dataset is available, using exemplars could be redundant
    have_exemplars = self.exemplars_dataset.max_num_exemplars + self.exemplars_dataset.max_num_exemplars_per_class
    assert (have_exemplars == 0), 'Warning: DMC does not use exemplars. Comment this line to force it.'

# sub-class `extra_parser` (e.g. dmc) -- new arguments only need to be defined here
@staticmethod
def extra_parser(args):
    """Returns a parser containing the approach specific parameters"""
    parser = ArgumentParser()
    # Sec. 4.2.1 "We use ImageNet32x32 dataset as the source for auxiliary data in the model consolidation stage."
    parser.add_argument('--aux-dataset', default='imagenet_32_reduced', type=str, required=False,
                        help='Auxiliary dataset (default=%(default)s)')
    parser.add_argument('--aux-batch-size', default=128, type=int, required=False,
                        help='Batch size for auxiliary dataset (default=%(default)s)')
    return parser.parse_known_args(args)

Adding a new approach

When adding a new approach, members given by arguments in extra_parser do not need to be set manually in the constructor; this is done automatically by the super-constructor. If you nevertheless want to set your own members from the passed arguments, you should use the members (corresponding to the destination names generated by argparse) set by the super-constructor. Check approach/lucir.py for an example.
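The workflow can be sketched with a stripped-down stand-in for Inc_Learning_Appr that reproduces only the default-setting logic shown above. The class names and the --my-lamb argument are illustrative, not part of FACIL, and [] is passed instead of None so the sketch does not read sys.argv:

```python
from argparse import ArgumentParser

# Stripped-down stand-in for Inc_Learning_Appr, reproducing only the
# default-setting logic (class names and --my-lamb are illustrative).
class Base:
    def __init__(self, **appr_kwargs):
        # set approach-specific defaults from the sub-class parser
        # ([] instead of None here, so the sketch ignores sys.argv)
        default_args = self.extra_parser([])[0].__dict__
        for k, v in default_args.items():
            setattr(self, k, v)
        # overwrite approach-specific defaults with passed kwargs
        for k, v in appr_kwargs.items():
            assert k in default_args, f'"{k}" is not defined in extra_parser!'
            setattr(self, k, v)

class Appr(Base):
    @staticmethod
    def extra_parser(args):
        parser = ArgumentParser()
        parser.add_argument('--my-lamb', default=1.0, type=float)
        return parser.parse_known_args(args)

print(Appr().my_lamb)             # 1.0 -- default picked up automatically
print(Appr(my_lamb=5.0).my_lamb)  # 5.0 -- overridden via kwargs
```

Note how argparse turns the flag --my-lamb into the destination name my_lamb, which is exactly the member name set on the instance.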

Logging

In the original version, sys.stdout is redirected in order to log all calls to print. However, this breaks the interactive mode of pdb (i.e. arrow-up history, cursor movement, etc.). In this version, therefore, only sys.stderr is still redirected, and all previous print calls are replaced with logger.log_print, which writes to the log file and also mimics print:

def log_print(self, *objects, print_=True, sep=' ', end='\n'):
    """Log to file with optional print to stdout.

    This function mimics the behaviour of `print` by writing `objects` to `self.log_file`, separated by `sep`.
    If `print_=True`, also calls `print` with the corresponding arguments.

    Parameters
    ----------
    print_ : bool, optional
        if True, call `print(*objects, sep=sep, end=end)`, by default True
    sep : str, optional
        Same functionality as the `sep`-argument of `print`, by default ' '
    """
    if print_:
        print(*objects, sep=sep, end=end)

    self.log_file.write('{}{}'.format(sep.join([str(obj) for obj in objects]), end))
    self.log_file.flush()

Note: This change means print calls are no longer logged. Use facil.logger.log_print instead if you want to print and log at the same time.
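As a minimal runnable sketch of this pattern, using io.StringIO in place of the real log file (the Logger class here is a stand-in, not FACIL's actual logger):

```python
import io

class Logger:
    def __init__(self):
        self.log_file = io.StringIO()  # stand-in for the real log file

    def log_print(self, *objects, print_=True, sep=' ', end='\n'):
        if print_:
            print(*objects, sep=sep, end=end)
        # write the same text to the log, mimicking print's formatting
        self.log_file.write(sep.join(str(o) for o in objects) + end)
        self.log_file.flush()

logger = Logger()
logger.log_print('epoch', 3, '| loss', 0.25)
print(logger.log_file.getvalue(), end='')  # epoch 3 | loss 0.25
```

Since stdout is no longer redirected, pdb's interactive features keep working while the log file still receives every message.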

Logging of gridsearch-arguments

Gridsearch arguments have not been logged so far; this version logs them as well.

Learning-rate list

As it can be desirable to use different learning rates for different tasks, the lr argument is changed to a list of floats, where each item is the learning rate of the task at the corresponding index. The list is truncated or padded with its last value to match the given number of tasks.
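A sketch of this pad-or-truncate behaviour (the helper name is illustrative; the fork's actual implementation may differ):

```python
def fit_lr_list(lr, num_tasks):
    """Truncate or pad `lr` with its last value to length `num_tasks`."""
    lr = list(lr)[:num_tasks]               # strip extra entries
    lr += [lr[-1]] * (num_tasks - len(lr))  # pad with the last value
    return lr

print(fit_lr_list([0.1, 0.01], 4))        # [0.1, 0.01, 0.01, 0.01]
print(fit_lr_list([0.1, 0.05, 0.01], 2))  # [0.1, 0.05]
```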

Sanity checking

You can sanity-check this fork against the original version by running scripts/sanity_check_package.sh. The script compares results with original FACIL for several experiments.

About

Fork of FACIL with some QoL improvements.
