mednet.models.classify.mlp¶
Multi-layer perceptron model for multi-class classification.
Classes
MultiLayerPerceptron – Multi-layer perceptron model for multi-class classification.
- class mednet.models.classify.mlp.MultiLayerPerceptron(loss_type=<class 'torch.nn.modules.loss.BCEWithLogitsLoss'>, loss_arguments=None, optimizer_type=<class 'torch.optim.adam.Adam'>, optimizer_arguments={'lr': 0.01}, num_classes=1, input_size=14, hidden_size=10)[source]¶
Bases: Model
Multi-layer perceptron model for multi-class classification.
This implementation has a variable number of inputs, one single hidden layer with a variable number of hidden neurons, and can be used for binary or multi-class classification.
- Parameters:
  - loss_type – The loss to be used for training and evaluation.

    Warning
    The loss should be set to always return batch averages (as opposed to the batch sum), as our logging system expects it.

  - loss_arguments (dict[str, Any] | None) – Arguments to the loss.
  - optimizer_type (type[Optimizer]) – The type of optimizer to use for training.
  - optimizer_arguments (dict[str, Any] | None) – Arguments to the optimizer after params.
  - num_classes (int) – Number of outputs (classes) for this model.
  - input_size (int) – The number of inputs this classifier shall process.
  - hidden_size (int) – The number of neurons on the single hidden layer.
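The defaults above describe a network with 14 inputs, a single hidden layer of 10 neurons, and 1 output. A minimal sketch of an equivalent architecture in plain PyTorch (the hidden-layer activation is an assumption for illustration, not taken from this page):

```python
import torch

# Sketch of an MLP matching the documented defaults:
# input_size=14, hidden_size=10, num_classes=1.
# The ReLU activation between the layers is an assumption.
mlp = torch.nn.Sequential(
    torch.nn.Linear(14, 10),  # inputs -> single hidden layer
    torch.nn.ReLU(),
    torch.nn.Linear(10, 1),   # hidden layer -> num_classes logits
)

x = torch.randn(8, 14)        # a batch of 8 samples, 14 features each
logits = mlp(x)               # shape: (8, 1), raw (un-activated) logits
```

With the default `num_classes=1` and `BCEWithLogitsLoss`, the single output column is interpreted as a binary-classification logit; a multi-class setup would raise `num_classes` accordingly.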
- property num_classes: int¶
  Number of outputs (classes) for this model.
  - Returns:
    The number of outputs supported by this model.
  - Return type:
    int
- on_load_checkpoint(checkpoint)[source]¶
  Perform actions during model loading (called by lightning).
  If you saved something with on_save_checkpoint(), this is your chance to restore it.
  - Parameters:
    checkpoint (MutableMapping[str, Any]) – The loaded checkpoint.
  - Return type:
    None
- forward(x)[source]¶
  Same as torch.nn.Module.forward().
  - Parameters:
    - *args – Whatever you decide to pass into the forward method.
    - **kwargs – Keyword arguments are also possible.
  - Returns:
    Your model’s output
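The warning on the loss above is worth making concrete: the default loss type is `BCEWithLogitsLoss`, whose PyTorch `reduction` argument defaults to `"mean"` (the batch average the logging system expects), while `reduction="sum"` would not satisfy that contract. A small illustrative example with hypothetical logits and targets:

```python
import torch

# Hypothetical raw logits for a batch of 3 samples (num_classes=1)
logits = torch.tensor([[0.2], [-1.3], [0.7]])
targets = torch.tensor([[1.0], [0.0], [1.0]])

# reduction="mean" (the PyTorch default) returns the batch average,
# as the documentation's warning requires; reduction="sum" does not.
mean_loss = torch.nn.BCEWithLogitsLoss(reduction="mean")(logits, targets)
sum_loss = torch.nn.BCEWithLogitsLoss(reduction="sum")(logits, targets)

# The sum is the average scaled by the batch size.
assert torch.isclose(sum_loss, mean_loss * 3)
```

This is why custom losses passed via `loss_type`/`loss_arguments` should keep (or emulate) mean reduction rather than summing over the batch.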