Quantity of datasets doesn't seem like the right metric. The library just needs the datasets you care about, and both libraries have the popular ones. What's more important is integration: if you're training custom TF models, TFDS will generally integrate more smoothly than Hugging Face.
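For example, this is roughly what that smoother integration looks like; a minimal sketch, with MNIST and a toy Keras model standing in for whatever you'd actually train:

    import tensorflow as tf
    import tensorflow_datasets as tfds

    # tfds.load returns a tf.data.Dataset, so it feeds straight into Keras.
    ds = tfds.load("mnist", split="train", as_supervised=True)
    ds = ds.map(lambda image, label: (tf.cast(image, tf.float32) / 255.0, label))
    ds = ds.shuffle(10_000).batch(32).prefetch(tf.data.AUTOTUNE)

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    model.fit(ds, epochs=1)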
I tried Librispeech, a very common dataset for speech recognition, in both HF and TFDS.
TFDS performed extremely badly.
First it failed because the official hosting server only allows 5 simultaneous connections; TFDS ignored that and opened up to 50 simultaneous downloads, which broke the download. I wonder if anyone actually tested this?
Then the preparation step needs a machine with 30GB available and can still fail. This is where I stopped: https://github.com/tensorflow/datasets/issues/3887. It might be fixed now, but it took them 8 months to respond to my issue.
On HF, it just worked. There was a smaller issue with how the dataset was split up, but that has been fixed, and their response was fast and helpful.
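For reference, the HF path was roughly the one-liner below; the config and split names are as they appear on the librispeech_asr dataset card, so treat them as illustrative:

    from datasets import load_dataset

    # "clean" config, 100h training split; names follow the librispeech_asr card.
    # (Decoding the audio column needs the `datasets[audio]` extras installed.)
    ls = load_dataset("librispeech_asr", "clean", split="train.100")
    print(ls[0]["text"])

    # streaming=True avoids downloading/preparing the whole split up front.
    ls_stream = load_dataset("librispeech_asr", "clean", split="train.100", streaming=True)

    # TFDS equivalent for comparison (name/split per the TFDS catalog):
    # import tensorflow_datasets as tfds
    # ds = tfds.load("librispeech", split="train_clean100")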
In this case, I don't see UIs; I see that HF has a single curated 1GB+ dataset. TF, in contrast, has 10 GLUE datasets ranging from a few KB(!) to hundreds of MB in size.
To each their own. I like that TF separates them, since they are separate tasks and combining them is only one use case. At the end of the day, we should just use whatever works best. The ML landscape is far from settled.
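For what it's worth, both libraries let you grab a single GLUE task on its own; roughly (task/config names per their catalogs):

    from datasets import load_dataset
    import tensorflow_datasets as tfds

    # HF: GLUE is one dataset with a config per task.
    mrpc_hf = load_dataset("glue", "mrpc", split="train")

    # TFDS: each task is its own builder under the glue/ namespace.
    mrpc_tf = tfds.load("glue/mrpc", split="train")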
Great resource. My experience has been that any data project is at least 1/3 data collection/preparation, 1/3 using the right tool the right way, and 1/3 asking the right questions and interpreting the outcome.
For computer vision, there are 100k+ open source classification, object detection, and segmentation datasets available on Roboflow Universe: https://universe.roboflow.com
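(If you want to pull one of those datasets programmatically, here's a minimal sketch with the roboflow pip package; the API key, workspace/project names, version, and export format are placeholders you'd copy from the dataset's Universe page:)

    from roboflow import Roboflow

    rf = Roboflow(api_key="YOUR_API_KEY")
    # Workspace/project/version are placeholders from the dataset's Universe page.
    project = rf.workspace("some-workspace").project("some-project")
    dataset = project.version(1).download("coco")  # export format, e.g. "coco" or "voc"
    print(dataset.location)  # local folder containing the images and annotations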
So many of those are tiny datasets - like 30 images of seemingly low quality. I love Roboflow, but those are really hard to work with. I wish there were an open, cost-effective platform for generating datasets.