mednet.models.classify.model

Definition of base model type for classification tasks.

Classes

Model(name[, loss_type, loss_arguments, ...]) – Base model type for classification tasks.

class mednet.models.classify.model.Model(name, loss_type=None, loss_arguments=None, optimizer_type=<class 'torch.optim.adam.Adam'>, optimizer_arguments=None, scheduler_type=None, scheduler_arguments=None, model_transforms=None, augmentation_transforms=None, num_classes=1)[source]

Bases: Model

Base model type for classification tasks.

Parameters:
  • name (str) – Common name to give to models of this type.

  • loss_type (type[Module] | None) –

    The loss to be used for training and evaluation.

    Warning

    The loss should be set to always return batch averages (as opposed to the batch sum), as our logging system expects this.

  • loss_arguments (dict[str, Any] | None) – Arguments to the loss.

  • optimizer_type (type[Optimizer]) – The type of optimizer to use for training.

  • optimizer_arguments (dict[str, Any] | None) – Arguments to pass to the optimizer, besides the model parameters (params).

  • scheduler_type (type[LRScheduler] | None) – The type of scheduler to use for training.

  • scheduler_arguments (dict[str, Any] | None) – Arguments to pass to the scheduler, besides the optimizer it wraps.

  • model_transforms (Optional[Sequence[Callable[[Tensor], Tensor]]]) – An optional sequence of torch modules containing transforms to be applied on the input before it is fed into the network.

  • augmentation_transforms (Optional[Sequence[Callable[[Tensor], Tensor]]]) – An optional sequence of torch modules containing data-augmentation transforms to be applied on the input before it is fed into the network (typically only during training).

  • num_classes (int) – Number of outputs (classes) for this model.
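
As a minimal configuration sketch, a concrete classifier could be set up as below. The MyClassifier name, the omitted network definition and the specific loss/optimizer choices are illustrative assumptions, not requirements of this API; note the loss keeps a batch-average reduction, as required by the warning above.

import torch.nn
import torch.optim

from mednet.models.classify.model import Model


class MyClassifier(Model):  # hypothetical concrete subclass
    def __init__(self):
        super().__init__(
            name="my-classifier",
            loss_type=torch.nn.BCEWithLogitsLoss,
            loss_arguments=dict(reduction="mean"),  # batch averages, as the logging system expects
            optimizer_type=torch.optim.Adam,
            optimizer_arguments=dict(lr=1e-4),
            num_classes=1,
        )
        # ... define the network layers and the forward pass of the actual model here ...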

training_step(batch, batch_idx)[source]

Here you compute and return the training loss and some additional metrics, e.g. for the progress bar or logger.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch (only if multiple dataloaders are used).

Returns:

  • Tensor - The loss tensor

  • dict - A dictionary which can include any keys, but must include the key 'loss' in the case of automatic optimization.

  • None - In automatic optimization, this will skip to the next batch (but is not supported for multi-GPU, TPU, or DeepSpeed). For manual optimization, this has no special meaning, as returning the loss is not required.

In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model-specific.

Example:

def training_step(self, batch, batch_idx):
    x, y, z = batch              # unpack the batch produced by the DataLoader
    out = self.encoder(x)        # forward pass
    loss = self.loss(out, x)     # compute the (batch-averaged) loss
    return loss
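
For a classification model of this kind, a comparable sketch could look as follows; the self.model backbone, the self.loss attribute and the (input, target) batch layout are illustrative assumptions, not attributes guaranteed by this class:

def training_step(self, batch, batch_idx):
    x, y = batch                 # assumed (input, target) layout of the batch
    logits = self.model(x)       # hypothetical backbone attribute
    loss = self.loss(logits, y)  # configured loss (returns a batch average)
    self.log("loss", loss)       # standard LightningModule logging
    return loss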

To use multiple optimizers, you can switch to ‘manual optimization’ and control their stepping:

def __init__(self):
    super().__init__()
    # disable automatic optimization to control the optimizers by hand
    self.automatic_optimization = False


# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx):
    opt1, opt2 = self.optimizers()

    # do training_step with encoder
    # (compute its loss, then call opt1.zero_grad() and self.manual_backward(loss))
    ...
    opt1.step()
    # do training_step with decoder
    # (compute its loss, then call opt2.zero_grad() and self.manual_backward(loss))
    ...
    opt2.step()

Note

When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.
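
For reference, gradient accumulation is configured on the trainer driving the fit loop; a short sketch assuming a standard Lightning Trainer (the value 4 is arbitrary):

from lightning.pytorch import Trainer

# each loss returned by training_step() is divided by 4 before backpropagation,
# so the accumulated gradient approximates that of a single 4x larger batch
trainer = Trainer(accumulate_grad_batches=4)
trainer.fit(model, datamodule=datamodule)  # model/datamodule defined elsewhere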