mednet.data.split

Concrete database-split loaders.

Functions

check_database_split_loading(database_split, ...)

For each dataset in the split, check if all data can be correctly loaded using the provided loader function.

make_split(module_name, basename)

Return a database split from the provided module.

Classes

JSONDatabaseSplit(path)

Define a loader that understands a database split (train, test, etc.) in JSON format.

class mednet.data.split.JSONDatabaseSplit(path)[source]

Bases: Mapping[str, Sequence[Any]]

Define a loader that understands a database split (train, test, etc.) in JSON format.

To create a new database split, you need to provide a JSON-formatted dictionary in a file, with contents similar to the following:

{
    "dataset1": [
        [
            "sample1-data1",
            "sample1-data2",
            "sample1-data3"
        ],
        [
            "sample2-data1",
            "sample2-data2",
            "sample2-data3"
        ]
    ],
    "dataset2": [
        [
            "sample42-data1",
            "sample42-data2",
            "sample42-data3"
        ]
    ]
}

Your database split may contain any number of (raw) datasets (dictionary keys). For simplicity, we recommend formatting all sample entries similarly so that raw-data loading is simplified. Use the function check_database_split_loading() to test raw-data loading and to fine-tune the dataset split or its loading.

Objects of this class behave like a dictionary in which keys are the dataset names in the split, and values represent sample data and metadata. The actual JSON file descriptors are loaded on demand using functools.cached_property().

Parameters:

path (Path | Traversable) – Absolute path to a JSON-formatted file containing the database split to be recognized by this object.
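
For example, once the JSON file above is saved to disk, the split can be inspected like a read-only dictionary. The sketch below assumes the file is named split.json and reuses the dataset keys from the example; both names are illustrative only:

import pathlib

from mednet.data.split import JSONDatabaseSplit

split = JSONDatabaseSplit(pathlib.Path("split.json"))

# Keys are the dataset names declared in the JSON file.
print(list(split.keys()))  # e.g. ["dataset1", "dataset2"]

# Values are sequences of samples; each sample is whatever the JSON stores.
for sample in split["dataset1"]:
    print(sample)

Because loading is backed by functools.cached_property, the JSON file is only parsed the first time its contents are accessed.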

mednet.data.split.check_database_split_loading(database_split, loader, limit=0)[source]

For each dataset in the split, check if all data can be correctly loaded using the provided loader function.

This function returns the number of errors found while loading samples and logs more detailed information to the logging stream.

Parameters:
  • database_split (Mapping[str, Sequence[Any]]) – A mapping that contains the database split. Each key is the name of a dataset in the split; each value is a sequence of (potentially complex) objects, each representing a single sample.

  • loader (RawDataLoader) – A loader object that knows how to handle full-samples.

  • limit (int) – Maximum number of samples to check in each dataset of the split. If set to zero, check everything.

Returns:

Number of errors found.

Return type:

int
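
As a sketch, the check could be run on a split loaded as shown above; here my_loader stands for an instance of a RawDataLoader subclass appropriate for the samples stored in the split, and the limit of 5 is only illustrative:

import pathlib

from mednet.data.split import JSONDatabaseSplit, check_database_split_loading

split = JSONDatabaseSplit(pathlib.Path("split.json"))

# my_loader is assumed to be a RawDataLoader subclass instance that knows
# how to load the samples stored in this particular split.
errors = check_database_split_loading(split, my_loader, limit=5)
if errors:
    print(f"{errors} sample(s) could not be loaded")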

mednet.data.split.make_split(module_name, basename)[source]

Return a database split from the provided module.

This function searches for a database split named basename in the directory where the module module_name is installed, and returns an instantiated version of it.

Parameters:
  • module_name (str) – Name of the module where to search for the JSON file. It should be something like foo.bar for a module defined as foo/bar/__init__.py, or foo/bar.py.

  • basename (str) – Name of the .json file containing the split to load.

Return type:

Mapping[str, Sequence[Any]]

Returns:

An instance of a DatabaseSplit.
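
A minimal usage sketch, assuming a module foo.bar ships a default.json split next to its source files (both names are hypothetical placeholders):

from mednet.data.split import make_split

# Looks up "default.json" in the directory where module "foo.bar" is installed
# and returns the instantiated split.
split = make_split("foo.bar", "default.json")
print(list(split.keys()))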