1. This is awesome. Seriously, I will use this the next time I'm prototyping an image or text model.
2. Is there a way to entirely disable telemetry? I could imagine wanting to use this for models within a company that use sensitive data, etc., but wouldn't want to or be able to use it if the model interface was published either to gradio's internal APIs or to some sharable link that wasn't on a corporate network.
2. By default, we only create a localhost link; no share link is created (with the exception of Colab, since localhost isn't accessible there). You can then set up port forwarding etc. on top of that. Let me know if this answers your question?
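For context, a minimal sketch of that localhost-only launch (the `greet` function is a placeholder; `share=False` and `launch()` follow Gradio's documented API, though the default port may vary by version):

```python
# Minimal sketch of a localhost-only Gradio launch. `greet` is a
# placeholder function; share=False keeps everything on your machine.
def greet(name):
    return "Hello " + name + "!"

try:
    import gradio as gr
    iface = gr.Interface(fn=greet, inputs="text", outputs="text")
    # iface.launch(share=False)  # serves on http://localhost:7860 only
except ImportError:
    pass  # gradio not installed; the sketch still shows the intended call
```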
This is cool. I am playing around with ML myself, and was just looking for a lightweight way to get dynamic input to models. The MNIST handwriting demo in the readme is almost exactly what I wanted.
I have no idea what the architecture is, but it would be really cool to have this dynamic interface play nice with VSCode's notebook (jupyter) functionality, so I can add dynamic inputs right inline with the rest of the code.
Thanks! We render inline on Jupyter/Colab. I'm not sure about VSCode, but it boils down to whether it can render iframes or not. Try saving one of the colabs (as a .ipynb) from https://gradio.app/ml_examples and passing inline=True to launch()?
If you create a small zero, it is classified as a 9. Or perhaps it is overly sensitive to line width? Large circles drawn counterclockwise with the small pen also seem to be misclassified.
Yeah, the MNIST model seems to have issues. I can get it to make mistakes for all line widths and most digits. Maybe the issue with small digits is that no cropping is happening, while the model was trained on MNIST, where all the digits are a uniform size.
Actually, this is probably a good demo for an ML visualization tool, as we were pretty easily able to find these issues. If cropping is the issue, that's also something a test set might not catch.
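If missing cropping really is the culprit, a hypothetical size-normalization step (not part of the actual demo) might look like this in NumPy:

```python
import numpy as np

def center_crop_digit(img, out_size=28, pad=4):
    """Crop a drawn digit to its bounding box, pad it to a square, and
    resize to out_size x out_size with nearest-neighbor sampling -- a
    rough approximation of MNIST's size normalization. Hypothetical
    preprocessing, not Gradio's actual pipeline."""
    ys, xs = np.nonzero(img)
    if len(ys) == 0:  # blank canvas: nothing to crop
        return np.zeros((out_size, out_size), dtype=img.dtype)
    crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # pad to a square so the digit's aspect ratio is preserved
    h, w = crop.shape
    side = max(h, w) + 2 * pad
    square = np.zeros((side, side), dtype=img.dtype)
    y0, x0 = (side - h) // 2, (side - w) // 2
    square[y0:y0 + h, x0:x0 + w] = crop
    # nearest-neighbor resize via integer index sampling
    idx = (np.arange(out_size) * side) // out_size
    return square[np.ix_(idx, idx)]
```

A small digit drawn in a corner of the canvas would then be re-centered and scaled before hitting the model, which is roughly what MNIST training data looks like.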
This looks great; another competitor is Streamlit. You guys need a data grid.
Most Internet tutorials and research successes in deep learning are in NLP and computer vision. But in an actual setting, most of the time people will also use a data grid to show results and/or input data.
Don't know if it's relevant, but an optional interface to existing Jupyter components, e.g. qgrid, could be a quick workaround to introduce features until native components are developed.
It's cool that you can immediately update the output whenever the input changes.
But what if the code inside the function changes? Do you have hot reloading functionality?
Streamlit handles this by re-executing the whole script when detecting any code changes. You can cache certain resources (such as the model), but it's fiddly to get right.
We don't have hot reloading. We avoided caching certain resources for this exact reason; it's a bit finicky. But we're working on a better way to do it.
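The "cache the expensive resource" pattern mentioned above can be sketched in plain Python (the model loader here is a stand-in, not Streamlit's or Gradio's actual API):

```python
from functools import lru_cache

# Hypothetical sketch of resource caching: load the expensive object
# once and reuse it across re-executions. `load_model` and its path
# argument are placeholders for an actual weights/tokenizer load.
@lru_cache(maxsize=1)
def load_model(path="model.bin"):
    return {"path": path, "loaded": True}  # stand-in for a slow load

model_a = load_model()
model_b = load_model()  # same object: the second call hits the cache
```

The fiddly part Streamlit users run into is deciding which arguments count as cache keys and when to invalidate; `lru_cache` only captures the simplest version of the idea.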
Streamlit is a great library, especially for creating full, standalone dashboards. Here's where I think Gradio's focus is different:
- Our UI components are optimized for machine learning models. For example, we make it super easy for you to put in a drag-and-drop image upload for your image classification model, and we'll handle the preprocessing to convert the input image to a numpy array of specified shape & the postprocessing to convert the confidences to nice graphical labels. We're trying to eliminate boilerplate preprocessing & postprocessing as much as possible. Just specify the UI components that make sense for your model in 1 line of code, and then launch() to create the interface!
- We are designing our UI components so that you can get more insight into how your model is performing. For example, here we obscure / crop different parts of an image to explore what in the image might be causing the model to predict a cheetah: https://i.imgur.com/t0Inliy.mp4. We are planning on releasing a lot more features specifically focused on model testing & validation, and would love to hear from you if that sounds useful.
- Gradio integrates seamlessly with jupyter / colab notebooks, so you can use your existing workflow (I don't believe that's true for streamlit)
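The drag-and-drop image workflow from the first bullet can be sketched roughly like this (`classify` and its label names are placeholders; the `Interface(inputs="image", outputs="label")` pattern follows Gradio's documented API, though argument handling varies across versions):

```python
import numpy as np

# Placeholder classifier returning fake confidences for three labels.
LABELS = ["cheetah", "leopard", "house cat"]

def classify(image):
    # Gradio's preprocessing hands `image` to us as a numpy array;
    # here we just fabricate a confidence distribution over LABELS.
    scores = np.random.dirichlet(np.ones(len(LABELS)))
    return dict(zip(LABELS, scores.tolist()))

try:
    import gradio as gr
    demo = gr.Interface(fn=classify, inputs="image", outputs="label")
    # demo.launch()  # uncomment to serve the drag-and-drop UI
except ImportError:
    pass  # gradio not installed; classify() still works standalone
```

The label output component then renders the returned {label: confidence} dict as the graphical bars described above.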
I was just digging around earlier today trying to find whether an SML language server exists (sadly, the answer is no), and this title got my hopes up just a little bit.
Excellent work, OP! It'd be great to have a setup for GPT-n demos. Something like a text field where the user types a prompt, and a sidebar where sentence suggestions are populated every time the user presses the tab key. A suggested sentence could be inserted into the text field by clicking on it in the sidebar.