lifelong_methods package
Subpackages
- lifelong_methods.buffer package
- lifelong_methods.methods package
- Submodules
- lifelong_methods.methods.agem module
- lifelong_methods.methods.base_method module
- lifelong_methods.methods.finetune module
- lifelong_methods.methods.icarl module
- lifelong_methods.methods.icarl_cnn module
- lifelong_methods.methods.icarl_norm module
- lifelong_methods.methods.lucir module
- lifelong_methods.methods.mask_seen_classes module
- Module contents
- lifelong_methods.models package
Submodules
lifelong_methods.utils module
class lifelong_methods.utils.SubsetSampler(indices)
Bases: torch.utils.data.sampler.Sampler
Samples elements in order from a given list of indices, without replacement.
- Parameters
  indices (sequence) – a sequence of indices
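For instance, the sampler can be plugged into a standard PyTorch DataLoader to iterate over a fixed subset of a dataset in a deterministic order (a minimal sketch; the toy dataset and indices are illustrative):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    from lifelong_methods.utils import SubsetSampler

    # toy dataset of 10 samples (illustrative only)
    dataset = TensorDataset(torch.arange(10).float().unsqueeze(1), torch.arange(10))

    # iterate deterministically over samples 2, 5 and 7, in that order
    loader = DataLoader(dataset, batch_size=2, sampler=SubsetSampler([2, 5, 7]))
    for x, y in loader:
        print(y)  # tensor([2, 5]) then tensor([7])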
lifelong_methods.utils.get_optimizer(model_parameters: Iterator[torch.nn.parameter.Parameter], optimizer_type: str = 'momentum', lr: float = 0.01, lr_gamma: float = 1.0, lr_schedule: Optional[List[int]] = None, reduce_lr_on_plateau: bool = False, weight_decay: float = 0.0001) → Tuple[torch.optim.optimizer.Optimizer, Union[torch.optim.lr_scheduler.MultiStepLR, torch.optim.lr_scheduler.ReduceLROnPlateau, torch.optim.lr_scheduler.LambdaLR]]
A method that returns the optimizer and scheduler to be used.
- Parameters
  model_parameters (Iterator[nn.parameter.Parameter]) – the model parameters
  optimizer_type (string) – the optimizer type to be used (currently only “momentum” and “adam” are supported)
  lr (float) – the initial learning rate for each task
  lr_gamma (float) – the multiplicative factor for learning rate decay at the epochs specified
  lr_schedule (Optional[List[int]]) – the epochs per task at which to multiply the current learning rate by lr_gamma (resets after each task)
  reduce_lr_on_plateau (bool) – reduce the lr on plateau based on the validation performance metric; if set to True, lr_schedule is ignored
  weight_decay (float) – the weight decay multiplier
- Returns
  optimizer (optim.Optimizer), scheduler (Union[MultiStepLR, ReduceLROnPlateau, LambdaLR])
- Return type
  Tuple[optim.Optimizer, Union[MultiStepLR, ReduceLROnPlateau, LambdaLR]]
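A typical call, assuming a small placeholder network and the documented defaults (a sketch; the per-epoch scheduler stepping at the end follows the usual MultiStepLR pattern and is not prescribed by this module):

    import torch.nn as nn

    from lifelong_methods.utils import get_optimizer

    model = nn.Linear(512, 10)  # placeholder network

    # SGD with momentum, multiplying the lr by 0.1 after epochs 20 and 40 of each task
    optimizer, scheduler = get_optimizer(
        model.parameters(),
        optimizer_type="momentum",
        lr=0.1,
        lr_gamma=0.1,
        lr_schedule=[20, 40],
        weight_decay=1e-4,
    )

    for epoch in range(60):
        # ... run one training epoch using `optimizer` ...
        scheduler.step()  # advance the MultiStepLR schedule once per epoch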
lifelong_methods.utils.labels_index_to_one_hot(labels: torch.Tensor, length: int) → torch.Tensor
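Judging by the name and signature, this presumably converts a tensor of label indices into one-hot vectors of the given length; a hedged sketch of the expected behaviour:

    import torch

    from lifelong_methods.utils import labels_index_to_one_hot

    labels = torch.tensor([0, 2, 1])
    one_hot = labels_index_to_one_hot(labels, length=4)
    # expected, assuming standard one-hot semantics:
    # tensor([[1., 0., 0., 0.],
    #         [0., 0., 1., 0.],
    #         [0., 1., 0., 0.]])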
lifelong_methods.utils.l_distance(input_vectors: torch.Tensor, ref_vectors: torch.Tensor, p: Optional[Union[float, str]] = 2) → torch.Tensor
lifelong_methods.utils.contrastive_distance_loss(input_vectors: torch.Tensor, ref_vectors: torch.Tensor, labels_one_hot: torch.Tensor, p: Optional[Union[float, str]] = 2, temperature: float = 1.0) → Tuple[torch.Tensor, torch.Tensor]
lifelong_methods.utils.triplet_margin_loss(input_vectors: torch.Tensor, ref_vectors: torch.Tensor, labels_one_hot: torch.Tensor, p: Optional[Union[float, str]] = 2, base_margin: float = 1) → Tuple[torch.Tensor, torch.Tensor]
lifelong_methods.utils.get_gradient(model: torch.nn.modules.module.Module) → torch.Tensor
Get the current gradients of a PyTorch model, collected into a single flattened vector.
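For example, the flattened gradient vector can be inspected right after a backward pass (a minimal sketch):

    import torch
    import torch.nn as nn

    from lifelong_methods.utils import get_gradient

    model = nn.Linear(4, 2)
    loss = model(torch.randn(8, 4)).pow(2).mean()
    loss.backward()

    flat_grad = get_gradient(model)  # one vector holding all parameter gradients
    print(flat_grad.shape)  # 4 * 2 weights + 2 biases = 10 elements, assuming every parameter is included
    print(flat_grad.norm())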
lifelong_methods.utils.update_gradient(model: torch.nn.modules.module.Module, new_grad: torch.Tensor) → None
Overwrite the current gradient values in a PyTorch model. This expects a single vector containing all the corresponding gradients for the model, so the number of elements in the vector must match the number of gradient elements in the model.
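get_gradient and update_gradient are naturally used together for gradient-projection methods such as A-GEM: compute the gradient on the current batch, compare it with the gradient on a memory batch, and overwrite the model's gradients with the projected vector before the optimizer step. A hedged sketch (the data and loss are placeholders, and the projection rule is the standard A-GEM formula rather than code taken from this repository):

    import torch
    import torch.nn as nn

    from lifelong_methods.utils import get_gradient, update_gradient

    model = nn.Linear(4, 2)
    criterion = nn.MSELoss()

    def flat_grad_for(x, y):
        model.zero_grad()
        criterion(model(x), y).backward()
        # clone in case the returned vector shares storage with the model's gradients
        return get_gradient(model).clone()

    g_ref = flat_grad_for(torch.randn(16, 4), torch.randn(16, 2))  # memory/replay batch (placeholder)
    g_cur = flat_grad_for(torch.randn(16, 4), torch.randn(16, 2))  # current task batch (placeholder)

    # standard A-GEM projection: drop the component of g_cur that conflicts with g_ref
    dot = torch.dot(g_cur, g_ref)
    if dot < 0:
        g_cur = g_cur - (dot / torch.dot(g_ref, g_ref)) * g_ref

    # write the (possibly projected) gradient back before calling optimizer.step()
    update_gradient(model, g_cur)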
lifelong_methods.utils.transform_labels_names_to_vector(labels_names: Iterable[str], num_seen_classes: int, class_names_to_idx: Dict[str, int]) → torch.Tensor
lifelong_methods.utils.copy_freeze(model: torch.nn.modules.module.Module) → torch.nn.modules.module.Module
Create a copy of the model with all its parameters frozen (requires_grad set to False).
- Parameters
  model (nn.Module) – The model that needs to be copied
- Returns
  The frozen model
- Return type
  nn.Module
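A common use of such a frozen copy in lifelong learning is keeping a snapshot of the network before training on a new task, e.g. as a distillation target (a sketch; the distillation loss itself is only indicated):

    import torch
    import torch.nn as nn

    from lifelong_methods.utils import copy_freeze

    model = nn.Linear(8, 4)

    # snapshot the network before moving on to the next task
    old_model = copy_freeze(model)
    assert all(not p.requires_grad for p in old_model.parameters())

    x = torch.randn(2, 8)
    with torch.no_grad():
        old_logits = old_model(x)  # targets for a distillation-style loss
    new_logits = model(x)
    # distillation_loss = some_divergence(new_logits, old_logits)  # illustrative only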
lifelong_methods.utils.save_model(save_file: str, config: Dict, metadata: Dict, model: Optional[BaseMethod] = None, buffer: Optional[BufferBase] = None, datasets: Optional[Dict[str, iirc.lifelong_dataset.torch_dataset.Dataset]] = None, **kwargs) → None
Saves the experiment configuration and the state dicts of the model, buffer, and datasets, plus any additional data.
- Parameters
  save_file (str) – The checkpointing file path
  config (Dict) – The config of the experiment
  metadata (Dict) – The metadata of the experiment
  model (Optional[BaseMethod]) – The Method object (subclass of BaseMethod) for which the state dict should be saved (Default: None)
  buffer (Optional[BufferBase]) – The buffer object for which the state dict should be saved (Default: None)
  datasets (Optional[Dict[str, Dataset]]) – The different dataset splits for which the state dict should be saved (Default: None)
  **kwargs – Any additional key value pairs that need to be saved
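A hedged usage sketch; the file path, the config/metadata keys, and the method, buffer, and datasets objects below are illustrative and assumed to exist in the surrounding experiment code:

    from lifelong_methods.utils import save_model

    config = {"dataset": "iirc_cifar100", "epochs_per_task": 140}  # illustrative keys
    metadata = {"cur_task_id": 3, "best_metric": 0.62}             # illustrative keys

    save_model(
        "checkpoints/task_3.ckpt",  # illustrative path
        config,
        metadata,
        model=method,       # a BaseMethod subclass instance (assumed to exist)
        buffer=buffer,      # a BufferBase instance (assumed to exist)
        datasets=datasets,  # a dict of the dataset splits (assumed to exist)
        seed=42,            # any extra key/value pairs are saved as well
    )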
lifelong_methods.utils.load_model(checkpoint: Dict[str, Dict], model: Optional[BaseMethod] = None, buffer: Optional[BufferBase] = None, datasets: Optional[Dict[str, iirc.lifelong_dataset.torch_dataset.Dataset]] = None) → None
Loads the state dicts of the model, buffer, and datasets.
- Parameters
  checkpoint (Dict[str, Dict]) – A dictionary of the state dictionaries
  model (Optional[BaseMethod]) – The Method object (subclass of BaseMethod) for which the state dict should be updated (Default: None)
  buffer (Optional[BufferBase]) – The buffer object for which the state dict should be updated (Default: None)
  datasets (Optional[Dict[str, Dataset]]) – The different dataset splits for which the state dict should be updated (Default: None)
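The counterpart of save_model. Assuming the checkpoint file was written with torch.save (an assumption, since the serialization backend is not stated here), restoring the objects in place would look like:

    import torch

    from lifelong_methods.utils import load_model

    checkpoint = torch.load("checkpoints/task_3.ckpt", map_location="cpu")

    # restores the state dicts in place; components passed as None are presumably skipped
    load_model(checkpoint, model=method, buffer=buffer, datasets=datasets)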