Multi-Task Learning on Cityscapes
Metrics
RMSE (root mean squared error, for depth estimation; lower is better)
mIoU (mean intersection over union, for semantic segmentation; higher is better)
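The two metrics above can be sketched as follows. This is a minimal NumPy illustration of how RMSE and mIoU are typically computed for dense predictions, not the exact evaluation code used by this benchmark; function names and the class-skipping convention are assumptions.

```python
import numpy as np

def rmse(pred, target):
    """Root mean squared error over all pixels (depth estimation)."""
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def miou(pred, target, num_classes):
    """Mean intersection-over-union across classes (segmentation).

    Classes absent from both prediction and ground truth are skipped,
    a common (but not universal) convention.
    """
    pred = np.asarray(pred).ravel()
    target = np.asarray(target).ravel()
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```

In practice both metrics are averaged over the validation set, and mIoU is usually reported as a percentage.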
Results
Performance of the listed models on this benchmark.
Comparison Table
| Model Name | RMSE ↓ | mIoU (%) ↑ |
|---|---|---|
| swinmtl-a-shared-architecture-for | 0.51 | 76.41 |
| multi-task-learning-as-a-bargaining-game | - | 75.41 |
| multi-task-learning-as-multi-objective | - | 66.63 |