
WIP: Documentation & Typos #1629

Draft
wants to merge 23 commits into master

Conversation

jneuendorf

@jneuendorf jneuendorf commented Mar 21, 2024

This PR is still in progress and not yet "clean for review" - just some things I stumbled across. I opened it already so you know there is progress on certain aspects that others don't have to worry about 😉

Relates to #886

@AntonioCarta
Collaborator

Thank you! ping me once you are ready for a review.

@jneuendorf jneuendorf marked this pull request as draft March 23, 2024 13:37
@jneuendorf
Author

jneuendorf commented Mar 23, 2024

@AntonioCarta Thanks for the quick reply!

I have a question not directly related to documentation: in a codebase using a past Avalanche version, there is this import: from avalanche.benchmarks.utils import make_classification_dataset. From what I can see in the git history, this function was deprecated and no longer exists in Avalanche. What is the equivalent in the current version (0.5.0)?

from avalanche.benchmarks.utils import as_classification_dataset seems to be the successor. However, it's not quite clear to me how it can be used equivalently. _init_transform_groups was used in make_classification_dataset and is still a "private" function in the current code, but there is no public function that uses it (only private ones, e.g. _make_taskaware_classification_dataset). So the question is:

What's the reason that only transform_groups are passed to as_classification_dataset instead of all the kwargs of make_classification_dataset? Would you be fine with reintroducing the former kwargs and calling _init_transform_groups? Otherwise, I think the best alternative is to make _init_transform_groups public so users can call

as_classification_dataset(
  dataset,
  transform_groups=init_transform_groups(
    target_transform=lambda x: x,  # or whatever
    ...
  ),
)

This could also be achieved via a class method (the constructor seems a bit complex already), e.g.

as_classification_dataset(
  dataset,
  transform_groups=TransformGroups.create(
    target_transform=lambda x: x,  # or whatever
    ...
  ),
)

Kind regards 🙂

PS: Providing a dict to as_classification_dataset as in the docs does not comply with the TransformGroup type.

@AntonioCarta
Collaborator

Thank you for highlighting these issues. The methods changed over time because at first AvalancheDataset only supported classification datasets. Now, a classification dataset is an AvalancheDataset where:

  • the data returns triplets <x, y, t>
  • it has a targets DataAttribute
  • it has a targets_task_labels DataAttribute

Therefore, the goal of as_classification_dataset should be to make sure that the data has the correct attributes that are expected from classification datasets. Transformation groups in theory should be defined before this step. But I agree that it may be useful to define them at this stage as long as it's not too complex or confusing.
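For illustration, these three invariants can be sketched in plain Python (the class below is invented for this comment; it only mimics the protocol and is not Avalanche's actual ClassificationDataset):

```python
# Pure-Python sketch of the invariants listed above: items are
# <x, y, t> triplets, and the dataset exposes `targets` and
# `targets_task_labels` attributes. Illustration only.

class TinyClassificationDataset:
    def __init__(self, xs, ys, task_label=0):
        self._xs = list(xs)
        self._ys = list(ys)
        # The two data attributes a classification dataset must expose:
        self.targets = list(ys)
        self.targets_task_labels = [task_label] * len(self._xs)

    def __len__(self):
        return len(self._xs)

    def __getitem__(self, i):
        # Items are <x, y, t> triplets.
        return self._xs[i], self._ys[i], self.targets_task_labels[i]

ds = TinyClassificationDataset([0.1, 0.2], [0, 1])
```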

Would you be fine with reintroducing the former kwargs and calling _init_transform_groups? Otherwise, I think the best alternative is to make _init_transform_groups public so users can call

I think it makes sense to add the arguments to as_classification_dataset. Transformation groups are a bit verbose to create from scratch.

Otherwise, I think the best alternative is to make _init_transform_groups

I like this less. Classification transform groups are different from other transformations, and TransformGroups tries to be agnostic to these differences (as much as possible). Also, it would not solve the verbosity problem.

PS: Providing a dict to as_classification_dataset as in the docs does not comply to the TransformGroup type.

This is probably a mistake from the earlier API version.

@coveralls

Pull Request Test Coverage Report for Build 8382229381

Details

  • 1 of 1 (100.0%) changed or added relevant lines in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 51.806%

Totals Coverage Status
Change from base Build 8098020118: 0.0%
Covered Lines: 14756
Relevant Lines: 28483

💛 - Coveralls

@AntonioCarta
Collaborator

btw, if you have other comments about usability/pain points in the API/documentation, do let me know. Fixing these issues takes a lot of time, so we can't do everything quickly, but we try to keep track of them and improve over time.

@AntonioCarta
Collaborator

Hi @jneuendorf I see that the PR is still a draft, but if there are no blocking issues I think we could merge it. Doc improvements can easily be split into multiple PRs.

@jneuendorf
Author

Hi @jneuendorf I see that the PR is still a draft, but if there are no blocking issues I think we could merge it. Doc improvements can easily be split into multiple PRs.

Hi, your GitHub comment email got lost among the other GitHub emails (Coveralls etc.) 🙈 You're correct that some things could already be merged, since later amendments can go into another PR.

So feel free to review/cherry-pick the changes and decide which ones are ok for you. 😉

@jneuendorf
Author

jneuendorf commented May 23, 2024

EDIT

Following the hint at How to Create an AvalancheDataset, I got it working like this:

import torch
from torch import Tensor
from avalanche.benchmarks.utils import _make_taskaware_tensor_classification_dataset

def generate_class(
    label: int | float,
    n_samples: int = 10,
) -> tuple[Tensor, Tensor]:
    ...

class0_x, class0_y = generate_class(label=0)
class1_x, class1_y = generate_class(label=1)

dataset = _make_taskaware_tensor_classification_dataset(
    torch.concat([class0_x, class1_x]),
    torch.concat([class0_y, class1_y]),
)

print('targets', dataset.targets)
# FlatData (len=20,subset=True,cat=False,cf=True)
# 	list (len=20)

The statement

make_tensor_classification_dataset for tensor datasets [...] all of these will automatically create the targets and targets_task_labels attributes.

guided me in the right direction. However, it was not obvious

  1. that one should use a "private" helper function
  2. that subclassing a TensorDataset with a targets property like below is not equivalent to using a benchmark utility.

I think it would be helpful to extend the documentation with examples for creating datasets from typical sources:

  1. from tensors
  2. from (x, y) tuples
  3. from torchvision datasets
  4. from custom datasets
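As a rough sketch of what case 2 could look like in the docs, here is a plain-Python dataset built from (x, y) tuples that exposes the minimal classification surface (TupleDataset is a hypothetical name for this comment, not an Avalanche class):

```python
# Hypothetical sketch for case 2 ("from (x, y) tuples"): a minimal
# dataset exposing __len__, __getitem__, and targets, i.e. roughly
# the surface that as_classification_dataset expects.

class TupleDataset:
    def __init__(self, pairs):
        self._pairs = list(pairs)
        # Mirrors the targets attribute of ISupportedClassificationDataset.
        self.targets = [y for _, y in self._pairs]

    def __len__(self):
        return len(self._pairs)

    def __getitem__(self, i):
        return self._pairs[i]

ds = TupleDataset([([0.0, 1.0], 0), ([1.0, 0.0], 1)])
```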

I guess some useful information can be found at Benchmarks, so maybe it can be linked from "How to Create an AvalancheDataset".

Depending on which use cases are reasonable and which Avalanche is willing to implement, there could be different convenience utilities - this seems to be the current approach. Maybe all of the dataset/benchmark creation utilities can be boiled down to (1) "from tensors" (I admit I haven't fully understood all the nomenclature of the different dataset and scenario types and experience attributes, e.g. supervised vs. detection vs. supervised-detection, or task labels vs. class labels).


I currently have a problem with training a benchmark from a custom dataset:

My custom dataset is a PyTorch TensorDataset that implements ISupportedClassificationDataset, i.e. it provides the targets attribute.

import torch
from functools import cached_property
# Import path may differ across Avalanche versions:
from avalanche.benchmarks.utils.dataset_definitions import ISupportedClassificationDataset

class ClassificationMixin(torch.utils.data.Dataset, ISupportedClassificationDataset):
    @cached_property
    def targets(self):
        return []  # whatever...

class TensorDataset(torch.utils.data.TensorDataset, ClassificationMixin):
    ...

This way, it (and its modified splits) is compatible with as_classification_dataset(...). However, when I try to train the following benchmark

benchmark = class_incremental_benchmark(
    {
        'train': as_classification_dataset(train_dataset),
        'test': as_classification_dataset(test_dataset),
    },
    num_experiences=1,
)

I get the following error:

...
    cl_strategy.train(experience)
  File "avalanche/avalanche/training/templates/base_sgd.py", line 211, in train
    super().train(experiences, eval_streams, **kwargs)
  File "avalanche/avalanche/training/templates/base.py", line 162, in train
    self._before_training_exp(**kwargs)
  File "avalanche/avalanche/training/templates/base_sgd.py", line 291, in _before_training_exp
    self.make_train_dataloader(**kwargs)
  File "avalanche/avalanche/training/templates/base_sgd.py", line 456, in make_train_dataloader
    self.dataloader = TaskBalancedDataLoader(
                      ^^^^^^^^^^^^^^^^^^^^^^^
  File "avalanche/avalanche/benchmarks/utils/data_loader.py", line 404, in __init__
    task_labels_field = getattr(data, "targets_task_labels")
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'ClassificationDataset' object has no attribute 'targets_task_labels'

For me, this is a hint that there is some protocol mismatch: class_incremental_benchmark accepts a dataset created by as_classification_dataset but its training doesn't work. So am I correct that

  • either ISupportedClassificationDataset (and/or similar protocols) should also require a targets_task_labels attribute
  • or the base SGD should not rely on a data loader that depends on targets_task_labels

?
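The second option could look roughly like the following (plain Python; get_task_labels and default_task_label are made-up names sketching a possible fallback, not Avalanche's actual TaskBalancedDataLoader code):

```python
# Sketch of the second bullet: fall back to a constant default task
# label instead of raising when `targets_task_labels` is missing.
# This is an assumed fix, not Avalanche's current behavior.

def get_task_labels(data, default_task_label=0):
    labels = getattr(data, "targets_task_labels", None)
    if labels is None:
        # Treat every sample as belonging to one default task.
        labels = [default_task_label] * len(data)
    return list(labels)

class WithoutTaskLabels:
    def __len__(self):
        return 3

class WithTaskLabels:
    targets_task_labels = [1, 1]
```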

@AntonioCarta
Collaborator

I think it would be helpful to extend the documentation with examples for creating datasets from typical sources:

yes, we need to update that notebook. Most of the time doing AvalancheDataset(your_data) should work. You don't need the old benchmark builders (from tensors, from files, ...) once you have converted your data into an AvalancheDataset with appropriate attributes (needed to easily split it).

For me, this is a hint that there is some protocol mismatch: class_incremental_benchmark accepts a dataset created by as_classification_dataset but its training doesn't work.

This is a missing feature. Basically, I implemented the datasets without task labels, but many strategies still require them even when not strictly necessary. We have to fix this problem. Early versions of the datasets always had task labels, which was confusing for some people.

@jneuendorf
Author

yes, we need to update that notebook. Most of the time doing AvalancheDataset(your_data) should work. You don't need the old benchmark builders (from tensors, from files, ...) once you have converted your data into an AvalancheDataset with appropriate attributes (needed to easily split it).

Okay. Based on your answer and How to Create an AvalancheDataset I tried using

import torch
from torch.utils.data.dataset import TensorDataset
from avalanche.benchmarks.utils import AvalancheDataset
from avalanche.benchmarks.utils.data_attribute import DataAttribute
from avalanche.benchmarks.utils.flat_data import ConstantSequence

# Create a dataset of 100 data points described by 22 features + 1 class label
x_data = torch.rand(100, 22)
y_data = torch.randint(0, 5, (100,))

# Create the Dataset
torch_data = TensorDataset(x_data, y_data)

avl_data = AvalancheDataset(
  torch_data,
  data_attributes=[
    DataAttribute(y_data, "targets"),
    DataAttribute(ConstantSequence(0, len(y_data)), "targets_task_labels", use_in_getitem=True),
  ],
)

instead of

# ...
avl_data = _make_taskaware_tensor_classification_dataset(x_data, y_data)

This is "with appropriate attributes" but quite cumbersome, and typing does not work because .targets does not exist on AvalancheDataset, for example. Also, the logic in _make_taskaware_tensor_classification_dataset is more powerful, as it handles more cases.

Suggestion 1: Dataset Hierarchy

So it would be really useful to add a dataset hierarchy to the documentation, i.e. the inheritance chain, along with the required data attributes (and more?).

Suggestion 2: Dataset construction

We should think about whether the structure of Suggestion 1 is ideal. If so, it would be consistent to require the user to use the dataset class for the corresponding scenario, e.g.

TaskAwareClassificationDataset(
  # ...
  task_labels=[1, 2, 3],
)

As with the "old benchmark builders", I would try to hide internal complexity from the user, such as DataAttribute or ConstantSequence.

More advanced benchmark builders could still exist with correct typing (e.g. via @overload) - also as classmethods on AvalancheDataset for nicer imports.
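To make Suggestion 2 a bit more concrete, here is a hedged sketch of such a classmethod builder (SketchDataset and from_tensors are invented names; plain lists stand in for tensors to keep the sketch dependency-free, and a real builder would construct DataAttribute/ConstantSequence internally):

```python
# Hypothetical sketch of a classmethod builder that hides
# DataAttribute/ConstantSequence from the user. Names are invented
# for illustration; this is not Avalanche's API.

class SketchDataset:
    def __init__(self, data, targets, task_labels):
        self.data = data
        self.targets = targets
        self.targets_task_labels = task_labels

    @classmethod
    def from_tensors(cls, x, y, task_label=0):
        # This is where a real implementation would build the
        # DataAttribute/ConstantSequence equivalents on the user's behalf.
        return cls(list(zip(x, y)), list(y), [task_label] * len(y))

ds = SketchDataset.from_tensors([0.1, 0.2, 0.3], [0, 1, 0], task_label=2)
```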
