salad.datasets.visda package

Submodules

salad.datasets.visda.detection module

class salad.datasets.visda.detection.CoordShuffle

Bases: object

class salad.datasets.visda.detection.MaxTransform

Bases: object

class salad.datasets.visda.detection.VisdaDetectionLoader(root, labels, transforms=None, joint_transforms=None)

Bases: torch.utils.data.dataset.Dataset

id2label = {0: 'aeroplane', 1: 'bicycle', 2: 'bus', 3: 'car', 4: 'horse', 5: 'knife', 6: 'motorcycle', 7: 'person', 8: 'plant', 9: 'skateboard', 10: 'train', 11: 'truck'}
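
The id2label mapping translates the integer class indices used by the loader back into the twelve VisDA detection category names. Since it is documented as a class attribute, it can be used without instantiating the dataset, for example:

    from salad.datasets.visda.detection import VisdaDetectionLoader

    # Map predicted class indices back to VisDA category names.
    predicted_indices = [3, 7, 10]   # e.g. class indices returned by a detector
    names = [VisdaDetectionLoader.id2label[i] for i in predicted_indices]
    print(names)  # ['car', 'person', 'train']
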
salad.datasets.visda.detection.build_dataset(batch_size, which='train_visda', num_workers=None, encode=True, augment=True, shuffle=True)

salad.datasets.visda.detection.build_validation(batch_size, which='train_visda', num_workers=None, encode=True, augment=True, shuffle=True)
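
A minimal usage sketch, assuming build_dataset and build_validation return PyTorch data loaders over the detection split selected by which (the exact return values are an assumption; check the source):

    from salad.datasets.visda.detection import build_dataset, build_validation

    # Assumed usage: an augmented training loader and a non-augmented
    # validation loader over the VisDA detection training split.
    train_loader = build_dataset(batch_size=16, which='train_visda',
                                 num_workers=4, augment=True, shuffle=True)
    val_loader = build_validation(batch_size=16, which='train_visda',
                                  num_workers=4, augment=False, shuffle=False)
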
salad.datasets.visda.detection.load_datalist(path)

Load COCO ground-truth annotations.

Adapted from https://github.com/VisionLearningGroup/visda-2018-public/blob/master/detection/convert_datalist_gt_to_pkl.py
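
A sketch of wiring load_datalist into VisdaDetectionLoader, assuming the returned datalist is suitable as the labels argument and that root points at the matching image directory (both assumptions; the record format follows the linked conversion script, and the paths below are hypothetical):

    from torchvision import transforms
    from salad.datasets.visda.detection import VisdaDetectionLoader, load_datalist

    # Hypothetical paths; adjust to the local VisDA detection download.
    labels = load_datalist('visda_detection/train_gt.pkl')
    dataset = VisdaDetectionLoader(
        root='visda_detection/train',
        labels=labels,
        transforms=transforms.ToTensor(),  # per-image transform (assumed semantics)
    )
    sample = dataset[0]                    # assumed: an (image, annotation) pair
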

salad.datasets.visda.openset module

salad.datasets.visda.utils module

class salad.datasets.visda.utils.Augmentation(n_samples=1)

Bases: object

class salad.datasets.visda.utils.MultiTransform(transforms, n_samples=1)

Bases: object
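
A sketch of how MultiTransform might be used, under the assumption that it applies the given transform pipeline to an input and produces n_samples independently transformed copies (this behaviour is an assumption; verify against the source):

    from torchvision import transforms
    from salad.datasets.visda.utils import MultiTransform

    augment = transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])

    # Assumed: the object yields two augmented views of the same input.
    multi = MultiTransform(augment, n_samples=2)
    # views = multi(pil_image)   # assumed call signature
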

salad.datasets.visda.utils.get_balanced_loader(data, **kwargs)

salad.datasets.visda.utils.get_class_counts(data)

salad.datasets.visda.utils.get_unbalanced_loader(data, **kwargs)
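
A hedged usage sketch, assuming data is a classification dataset with integer labels, that get_class_counts returns per-class sample counts, and that extra keyword arguments are forwarded to torch.utils.data.DataLoader (all assumptions). train_set below stands for any previously constructed dataset:

    from salad.datasets.visda.utils import (
        get_balanced_loader, get_class_counts, get_unbalanced_loader)

    # train_set is a placeholder for a dataset built elsewhere.
    counts = get_class_counts(train_set)                      # assumed: samples per class
    balanced = get_balanced_loader(train_set, batch_size=32)  # assumed: class-balanced sampling
    plain = get_unbalanced_loader(train_set, batch_size=32)   # assumed: standard sampling
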
salad.datasets.visda.utils.visda_data_loader(path, batch_size, n_src=1, n_tgt=1)

salad.datasets.visda.utils.visda_data_loader_full(path, batch_size, n_src=1, n_tgt=1)

salad.datasets.visda.utils.visda_data_loader_pseudo(path, batch_size, n_aug=1)
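
A sketch of the high-level loaders, assuming path points at the local VisDA data and that visda_data_loader builds loaders yielding n_src augmented copies per source example and n_tgt per target example (the return structure is an assumption; check the source):

    from salad.datasets.visda.utils import visda_data_loader

    # Assumed: loaders for domain adaptation over the data found under path.
    loaders = visda_data_loader('./data/visda', batch_size=32, n_src=1, n_tgt=2)
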

salad.datasets.visda.visda module

salad.datasets.visda.visda.load_dataset(path='./data', im_size=224)
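
A minimal sketch, assuming load_dataset reads the VisDA classification data from path and returns datasets of images resized to im_size x im_size (the exact return value, e.g. whether it is a train/validation pair, is an assumption):

    from salad.datasets.visda.visda import load_dataset

    # Assumed: images are loaded from ./data and resized to 224 x 224 pixels.
    data = load_dataset(path='./data', im_size=224)
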