One of the developers here. We worked three years to make Uppy the best open source file uploader the world has seen. Looking forward to your brutally honest feedback! Happy to answer questions too.
Well done. I remember looking at this project more than 2 years ago and thinking, "Wow, this sounds great for my SaaS." Because you were still essentially in beta and there was missing functionality at the time, I went ahead with another solution and meant to keep in touch with your progress, but, as it tends to happen, I got distracted.
This post is a timely reminder for me, and I will be taking another look at Uppy with keen interest. Thanks for carrying on the good work and getting to 1.0!
The company behind this is Transloadit. We provide tus & Uppy for free, hoping that a fraction of our open source communities will use our hosted versions (of Companion and tus) and optionally our commercial encoding service, because they work well together.
That could make our public coding efforts financially worthwhile. It's a bit of a gamble. We're believers (and biased) in part because we just enjoy working on OSS, and in part because we're quite a lean company and don't need to make boatloads of money to keep investors happy (we've been bootstrapped since 2009, so no investors, and ramen profitable since 2012). My cofounder and I are both devs, and we're not in it to get yachts, but rather to have nice and rewarding work, basically.
Whether it also becomes a financial success remains to be seen. Three years of dev time isn't cheap, so it would be a bit sad if there's no ROI at all, but we'll comfort ourselves with the thought that we can provide a better and more reliable uploader to Transloadit's existing encoding customers (going open has made these projects better thanks to exposure to more minds and environments), making them very happy, even if there are 100x more people who don't make us money.
I am not OP, but in the blog post the author mentions he works for Transloadit, which (after googling it as I wasn't previously familiar with it) is a service which handles lots of different file processing tasks for developers (e.g. image manipulation, audio/video encoding, virus scanning, etc.)
If you are a service selling a product targeted towards developers, I can't think of better marketing than something like this, where tons of users will use it for free, but many users (like me) who hadn't previously heard about your core product will find out about it through this.
EDIT: One thing to add about the marketing angle: another commenter mentioned that this was posted multiple times before reaching the front page, and some of the generic "Awesome! Will definitely try this!" comments here by low-karma users make me think there is some astroturfing going on. That said, I don't really mind it. The author created a useful tool, open sourced it, and I'm now glad I know about it. Kudos to him, and if it helps more people become aware of his business (which obviously funded his creation of this open source tool), more power to him.
Replied to the GP directly about how we find this, you pretty much nailed it!
As for astroturfing: I am certainly guilty of self-promotion. When this post didn't get traction, after a few days I thought maybe Show HN would find this interesting. In addition, my team of ~5 will have upvoted this post (although that probably works counterproductively with HN's algorithms, I couldn't/wouldn't stop them). Other than that we don't deploy any schemes here; what you likely see is that, because we exploded on Reddit over the weekend, its users are posting it to HN too.
Back-end dev here, and long-time HN lurker. I've never heard of Uppy, but I've just bookmarked it and I'm very glad to now know of it.
For the few times that I need to build some frontend, usually in HTML and JavaScript, I need tools that do one thing well. Uppy looks like that type of tool. It is not some huge library like jQuery that I need to study for a week to use properly.
It is for finding _actionable_ content like this that I come to HN. Thank you OP for posting.
I know it's not your responsibility at all, but the only thing that stopped me from using it was the pain in the ass that is configuring S3 itself. I will get to it eventually, because I'm not very happy with Filestack's UI, but it was so quick to get working and I was in a hurry.
If you value time over money, you could also use Uppy in conjunction with Transloadit; our company can handle uploading, encoding, and exporting to S3 for you, too.
If you want to save money, then yes, you could configure direct S3 uploading with Uppy. We do have better examples for that these days (e.g. we now also show back-end code for signing the requests).
However, it seems you've got it working now, so maybe there's no reason to change?
I just tried integrating this into a hobby project of mine. I like the looks of the dashboard, and direct uploads from device work fine, but I don't understand yet how to get other uploads to work.
Am I supposed to be running my own companion server, or is the server shown in the examples supposed to work for my project (I'm seeing CORS errors)?
The Companion server used in the examples (companion.uppy.io) is really only meant for demo purposes, and hence throws CORS errors if you try to use it for your own website.
You have three options:
1. Disable Instagram/Dropbox/Google Drive. Then you can use Uppy without any special server components (it'll just upload to your Apache server or S3 bucket)
2. Enable Instagram and friends but run Companion on your own server. It can be installed as middleware into an existing Express server, or run as a Standalone server on a different port than your webserver.
3. Use Transloadit's hosted Companion server. Requires a paid subscription (but also gets you hosted tus servers for upload handling, and our encoding platform, all of which are globally distributed)
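For option 1, the no-Companion setup can be sketched roughly like this (a hedged sketch assuming the 1.0-era package layout and factory API; the `#uploader` target and `/upload` endpoint are placeholders for your own page element and server route):

```javascript
// Plain Uppy: Dashboard UI plus XHR uploads straight to your own server.
// No Companion needed, because no remote sources (Dropbox etc.) are enabled.
const Uppy = require('@uppy/core')
const Dashboard = require('@uppy/dashboard')
const XHRUpload = require('@uppy/xhr-upload')

const uppy = Uppy({ restrictions: { maxFileSize: 100 * 1024 * 1024 } })
uppy.use(Dashboard, { target: '#uploader', inline: true })
uppy.use(XHRUpload, { endpoint: '/upload', fieldName: 'file' })
```

Swap `XHRUpload` for the AwsS3 plugin if you want direct-to-bucket uploads instead of hitting your own web server.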
Thanks for confirming this! This all makes a lot of sense, but might merit clarifying in your documentation.
When I see sample code, I generally assume that I can copy and paste it onto my own site. So a comment "You can try this code on OUR site, but if you want to use it on YOURS, you need to take care of your own companion hosting" in your sample code would be helpful.
Having read your site, I was certainly aware of all the companion options you enumerated above. However, I did not see your above sentence "companion.uppy.io is really only meant for demo purposes and hence throws CORS errors if you'd try to use it for your own website" (or anything equivalent) anywhere on the site.
I realize that this sentence (while clear, accurate, and quite reasonable) takes a bit of polish to turn into a positive message which won't scare away potential users ;-)
Alternatively, maybe you could set up your demo server to accept requests from randos for exploratory purposes, but with a quota set low enough that it won't be abused for production?
I don’t know Ruby much but Shrine seems great from what I've seen. I’ve been talking to Janko (its author) and it has been a true pleasure to cooperate with him.
You'll hear me say good things about all companies in our space but I'm not so sure what to make of Filestack these days. They run FUD campaigns against the use of open source uploaders https://blog.filestack.com/api/rethinking-open-source-upload... which obviously makes it harder for me to defend them in threads like these.
Great job, guys. I want to ask: does it handle Excel uploads and basic editing of the Excel file? The examples on display only show image uploads. Do you know of such a library? Like the one used by Dataiku for Excel uploads.
One of the listed features is direct uploads to S3. I haven’t looked, but I’d wager it’s either supported or easy to add uploading to other S3-compatible services!
Whether or not this is possible, it seems like a serious DDoS risk. Someone with malicious intent could give you a huge hosting bill. It might be difficult to apply IP-based rate limiting without a server.
With S3, your server supplies the client with signed time-limited upload endpoint URLs (which is a feature S3 supports). So the server is effectively authorizing the client to upload direct to S3. You could do this only for "logged in" users, or use your own IP-based rate-limiting, or whatever else you wanted to do. Front-end direct-to-cloud doesn't necessarily mean "without a server", as you can set it up so it has to be authorized by the server, and this is generally how you'd do it.
I don't know if GCS is built into uppy at present (contrary to another comment, I don't believe GCS could be called "S3-compatible"), but I suspect there's a way to use uppy hooks to add it. As long as GCS also allows storage locations that allow upload only to signed time-limited URLs, the same approach could be used.
Where you put the file on the cloud storage and what you do with it is, I believe, not Uppy's concern. But if you are, for instance, using the Ruby Shrine file attachment library (which is built out with examples to support Uppy, and direct-to-S3, as a use case) -- Shrine strongly encourages a two-stage/two-location flow, where (e.g.) any front-end-uploaded files land in a temporary 'cache' storage, from which on S3 you might use lifecycle rules to automatically delete anything older than X. The files might only be moved to more permanent storage on some other event.
Once you get into it, it turns out all the concerns of file handling can get pretty complicated. But having the front end upload directly to cloud storage can be a pretty great thing, depending on your back-end architecture, for preventing any of your actual app 'worker' processes/threads from being tied up handling a file upload, dealing with slow clients, etc. It can make proper sizing and scaling of your back end a lot more straightforward and resource-limited.
There are two ways to upload directly to S3 buckets (or GCS):
1) You allow append-only access for the world, maybe in combination with an expiry policy. Indeed, only useful for a few use cases, I'd say.
2) You deploy signing of requests, and you only sign for those who are logged in, or otherwise match criteria important to your app. A bit more hassle, and it still requires server-side code (whether traditionally hosted or 'serverless'), but at least your servers aren't receiving the actual uploads, taking away a potential single point of failure and bottleneck.
That said, I'm not sure how serious you are about handling file uploads, but uploading directly to buckets often means uploading to a single region (on aws, a bucket may be hosted in us-east-1 for instance, meaning high latency for folks in e.g. Australia). This may or may not be problematic for your use case, but it did bring us complaints when we had that.
>That said, I'm not sure how serious you are about handling file uploads, but uploading directly to buckets often means uploading to a single region (on aws, a bucket may be hosted in us-east-1 for instance, meaning high latency for folks in e.g. Australia). This may or may not be problematic for your use case, but it did bring us complaints when we had that.
S3 Transfer Acceleration uses CloudFront's distributed edge locations. As the data arrives at an edge location, it is routed to Amazon S3 over an optimized network path. This costs more money, though.
These are valid considerations, but not tied to S3 (or uploading).
It's probably a happy problem if you end up worrying about S3 as a very DDoS-able part of your system.
Running up hosting bills is a scenario that can be addressed with various technical means (like a sibling comment explains). Many people seem to judge the risk times probability as too small to put a lot of preemptive effort into it. It's basically a question of how much damage would be done before your monitoring catches it. AWS has also been known to "forgive" bills that were caused by malicious attackers in some situations.
Not entirely sure if I read your question right, but Uppy does have a scriptable API, so you could do uploads without bringing in all the UI, yes. So you could write your own UI, or have some kind of automation in place.
But maybe you're talking about something else? Happy to dive in deeper.
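A minimal sketch of that headless usage, assuming the 1.0-era API (the tus endpoint URL is a placeholder, and `fileBlob` stands in for whatever Blob/Buffer your code already has):

```javascript
// Headless Uppy: no Dashboard UI, just programmatic uploads.
const Uppy = require('@uppy/core')
const Tus = require('@uppy/tus')

const uppy = Uppy({ autoProceed: false })
uppy.use(Tus, { endpoint: 'https://example.com/files/' }) // placeholder endpoint

uppy.addFile({ name: 'report.pdf', type: 'application/pdf', data: fileBlob })
uppy.upload().then((result) => {
  console.log('successful:', result.successful.length)
  console.log('failed:', result.failed.length)
})
```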
Just curious if this would function as a replacement for SFTP uploads. I assume it would need to be put behind our own authentication and HTTPS certs. SFTP is very scriptable, but seems hard to explain to many users.
It would be nice if this could function as a point-and-click upload point and also as a scriptable upload point for savvier users.
I think you could support both. If you used Transloadit, we could export to SFTP. This way your existing integration could be left untouched, but you'd add Uppy+Transloadit+SFTP export as an additional way of receiving files from users who can't or don't want to operate an SFTP client.
The announcement post mentions that they have a form input type="file" fallback so you should be able to upload with curl, just like a browser that has javascript disabled. Good enough?
Sorry, no. If you really need it today, you could hook up Uppy to Transloadit, and it could export to Azure, but that requires a paid subscription. I am happy to accept a PR for Uppy to enable direct Azure blob storage uploads, however! (Does this work with signed URLs also?)
I'm afraid my Javascript is rather poor but I'll happily look at what it would take to create a plugin for Azure this weekend. Presumably the S3 multi-part uploader and associated companion server would be a good base to understand the flow?
We would like to have Vue support, but we don't have sufficient experience on our team to come up with something really sweet. On the community forum there are examples and posts by the community on making it work (but yes, it's definitely not a first-class citizen like our React/React Native integration). If you have some specific ideas about how it should look, please weigh in there, and maybe with a few enthusiasts we could hammer this out!
It depends (sorry :) on which uploader you want me to compare with; there are different differences with different uploaders. I think we tick all the boxes feature-wise, though. And Uppy is modular, so you can easily leave out what you don't like (e.g. just use Core and roll your own UI, or just use Webcam support, or not at all). If you build with e.g. webpack, that makes Uppy a lot smaller too.
One big differentiator with other open source uploaders is that we go above and beyond to get higher degrees of reliability, to the point of ridiculousness, maybe:
- We use https://tus.io under the hood for resumability to make file uploads survive bad network conditions (train enters a tunnel, you walk to the basement, share something from a club, switch cell towers, walk in range of wifi, have spotty wifi, are in rural areas). Tus is an open standard with many implementations.
- Our 'Golden Retriever' plugin can recover files after a browser crash or accidental navigate-away (full post + video from our hacking trip where we built this: https://uppy.io/blog/2017/07/golden-retriever/)
The reason we obsess over this is that Transloadit (our company) was getting complaints about files not making it to our encoding platform, even though the platform was stable. We realized one out of maybe every thousand uploads just fails due to bad network conditions. That's something you don't notice when you either have a very stable connection or don't upload many files. But it's wild out there, and if you handle 150,000 uploads a day, you can see how many complaints you might end up receiving. So we got a bit frustrated with the state of uploading (downloading had been resumable since HTTP/1.1), and that's how we ended up creating https://tus.io (and then Uppy).
Uploadcare is a competitor of Uppy's parent company, Transloadit. Fortunately the space is large enough that we don't care for cutthroat mentalities. I think so far both companies have said positive things about each other, and that's a tradition I'd like to uphold! I think their file uploader looks very good and I marvel at their marketing. Their engineering team also looks like they could all be respected co-workers. So a hat-tip to them! I don't think many people will have tried both, since Uppy (being recommended for production) is rather new, but I'd also look forward to hearing about experiences with that.
Some differences between the uploader widgets themselves:
- Uppy is open source and can be used with your own back-end. Free as in liberty & pizza
- Uppy is vanilla JS with support for React/Native
- Uppy has resumability (via the open standard https://tus.io) and can recover from browser crashes / accidental navigate-aways. Uploads will just continue where they broke off
- Uppy supports _fewer_ external sources (e.g. 1.0 comes with support for Dropbox, Instagram, Google Drive, but we don't yet have support for e.g. Facebook or Google Photos)
Some differences between the companies/back-ends (should you optionally use Uppy+Transloadit to handle the uploading, fetching from e.g. Instagram, and encoding):
- Transloadit offers more encoding features (Uploadcare is making good progress, though; they recently added video encoding, for instance). Those features can be combined into workflows. So you leave a JSON recipe ("Template") with us that says: for every video, take out thumbnails, watermark some of those, detect faces in others, store those separately, all in one 'job', or as we like to call it, an "Assembly", because it can be a chain of virtually infinite jobs that take the output of other jobs as input for their own
- Transloadit does _not_ offer a CDN, instead only exports results to storage/buckets you own (not sure if we'll add this)
Just a _few_ differences; there are more. But it seems they are getting fewer, and maybe in 5 years we'll be identical companies : ) but yeah, so there's some time left until then in which we can still afford pleasantries :D
We purposely don't pin that down because there are always borderline cases, but if a post got, say, more than 20 points and a bunch of comments, we'd usually treat reposts as a dupe. But if a story is unusually interesting we sometimes relax that.
The restriction lasts for about a year, as the FAQ explains. Then it's ok for it to appear again.
I check hacker news regularly and this was the first time I saw it. Reposts aren’t inherently bad. I’m glad it was reposted and I’ve now had the chance to learn about it.
There is quite a lot of randomness in what gets to the front page of HN. I'm curious to understand better why some days a post doesn't make it but then one day it does. I believe there is nothing wrong with reposting something, as long as you aren't spammy.
This post was actually pretty interesting to read and I'm glad it finally made it to the front page.
What you likely see here is that, because we exploded on Reddit over the weekend, its users have been posting it to HN too. Those posts aren’t mine/ours.
Same here, haven't seen this before. Sometimes whether a post gets attention or not depends on the time of day it was posted - I'm glad the moderation system allows reposts for some articles.
I don't think they were implying that reposts are bad. I guess they were just listing out links of discussions on previous, similar submissions if we want to check them out too.
All Transloadians only started helping out last month; before that it was just the Uppy core team (and contributors). So it's not that many people, and it's not a wrapper either. Well, one could argue it's a wrapper around https://tus.io, but we also had to write that, so short answer: no : )
A glorified javascript wrapper around what? Looking over the feature list...none of them are things that are natively supported in ECMAScript or widely-supported browser standards.