The PUT method requests that the state of the target resource be
created or replaced with the state defined by the representation
enclosed in the request message payload. A successful PUT of a given
representation would suggest that a subsequent GET on that same
target resource will result in an equivalent representation being
sent in a 200 (OK) response.
Placing the PUT body at a different location and returning a pointer to it is not supported by this definition. Furthermore:
Proper interpretation of a PUT request presumes that the user agent
knows which target resource is desired. A service that selects a
proper URI on behalf of the client, after receiving a state-changing
request, SHOULD be implemented using the POST method rather than PUT.
If the origin server will not make the requested PUT state change to
the target resource and instead wishes to have it applied to a
different resource, such as when the resource has been moved to a
different URI, then the origin server MUST send an appropriate 3xx
(Redirection) response; the user agent MAY then make its own decision
regarding whether or not to redirect the request.
That is, to get the effect you intend, you should either (a) use POST (from which you should return a 201 with the final destination, not 200 like you currently do), or (b) issue a 307 redirect to the final destination of the PUT before accepting any content (and subsequently replying with a 201).
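For concreteness, a hypothetical exchange for option (a), with an illustrative host and token (this sketches the recommended pattern, not what the service currently returns):

$ curl -i --data-binary @hello.txt https://files.example.com/uploads/
HTTP/1.1 201 Created
Location: https://files.example.com/uploads/abc123/hello.txt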
"The PUT method requests that the enclosed entity be stored under the supplied Request-URI. If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server. If the Request-URI does not point to an existing resource, and that URI is capable of being defined as a new resource by the requesting user agent, the origin server can create the resource with that URI.
".. The fundamental difference between the POST and PUT requests is reflected in the different meaning of the Request-URI. The URI in a POST request identifies the resource that will handle the enclosed entity. That resource might be a data-accepting process, a gateway to some other protocol, or a separate entity that accepts annotations. In contrast, the URI in a PUT request identifies the entity enclosed with the request -- the user agent knows what URI is intended and the server MUST NOT attempt to apply the request to some other resource."
RFC2616[1] Fielding, et al.
This is a key part of the spec, and you can see Dr. Fielding's intentions for REST in it. Specific URIs referring to representations of entities are a core architectural component of REST[2].
colanderman is correct in his reply. Although RFCs are designed to be very explicit, certain sections are all too often left open to interpretation when vague/ambiguous language is used.
This is not one of those times. It is very clearly defined that PUT, by design, creates a resource at the exact URL provided. The 307 redirect variant mentioned is over-complicating the scenario. The right thing to do is use POST and return 201 Created with a Location header to the created URL.
Just because it works as-is doesn't mean there isn't a better way that strictly follows expected behaviors.
I think the point is to illustrate that you can pipe things into it. There exist programs that work on files, and work much worse or not at all on standard input; this example thus demonstrates more power.
(Though, as alluded to below, the "-X PUT" may be totally superfluous.)
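For reference, both invocations (paths are illustrative); with --upload-file (-T) curl already sends a PUT, so an explicit "-X PUT" is redundant:

# stream a pipe as the request body ("-T -" reads stdin)
$ tar czf - ./logs | curl --upload-file - https://transfer.sh/logs.tgz
# or upload a regular file directly
$ curl --upload-file hello.txt https://transfer.sh/hello.txt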
"requires you to install something" is relative, depending on what's installed on your system. This example is a continuation of the 'elaborate vs. simpler command' theme of this thread for people who prefer HTTPie.
Ha, I made pretty much the same service, curl-compatible and all, except it encrypts the file on disk upon reception (it creates an AES cipher and pipes the file through it while receiving), then sends back the id of the file and the key that allows decryption of said file. If anyone is interested there's plenty of documentation, and even a client that can take screenshots and upload them on the fly.
Very cool, though I find it a little odd. If someone is going the extra mile to store the file encrypted, it stands to reason that trusting the service with the initial unencrypted byte stream is a no-go. The file being uploaded over HTTPS, and your assurance that the stream is piped directly through encryption, is of little consequence if you or an infiltrator of your server were to choose to be malicious.
I understand it's impossible to make the concept work with a very simple shell one-liner and without a further dependency like an openssl binary. I like the concept, but I'd rather perform a one-time install of openssl or similar and copy a 10-20 line bash script to get true security than use your current setup, which comes with a technically flawed security model.
Obviously you are aware of this, and chose the path of convenience and "good for most uses". Kudos for a clean tool!
Obviously it comes down to personal judgment when people use my tools. I mean, they could also do as described in transfer.sh's docs and pipe the data through gpg first (a sketch of that follows). The server accepts any kind of data as long as it doesn't exceed the currently set size limit.
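Something like the following, so the server only ever sees ciphertext (the download URL shape is illustrative):

# symmetric-encrypt locally, upload only the ciphertext
$ gpg -c -o - secrets.tar.gz | curl --upload-file - https://transfer.sh/secrets.tar.gz.gpg
# the receiver downloads and decrypts with the shared passphrase
$ curl https://transfer.sh/<id>/secrets.tar.gz.gpg | gpg -d > secrets.tar.gz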
But indeed I do agree that I can't guarantee, or even prove, that I can't access the files on my server. That's why I offer a quick and simple way for people to set this up on their own machine, so they can run this lightweight server and use it as if it were mine just by modifying a line in the client's configuration file (or changing the URL in the curl commands).
I also made a similar service, mostly for my own needs, so it's kinda spartan; I find it convenient for grabbing stuff off my clipboard and for screenshots/captures:
I recommend just installing cjdns and forwarding the home end. It also encrypts your traffic. You wouldn't want to netcat in cleartext over the internet.
You could always just pipe it through ssh or some other thing. Setting up openvpn or similar is nontrivial - even if you're lucky enough to have a static IP and enough spare time to make it work.
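A minimal sketch, assuming plain ssh access to the other machine and nothing else (host and file names are illustrative):

$ ssh user@remotehost 'cat > incoming.bin' < bigfile.bin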
That would be cool, but the devil is in the details. For one thing, does the "upload" wait for at least one "download" to connect? Or does "upload" go to a cache that is re-used for all subsequent downloads? What are the timeouts? E.g. how long would blah.com hold the upload connection open until (at least one) download opened up?
The other thing that makes it less cool is that it's a hack that would be better off not existing. The ideal case is that every device should have a unique ID, such that we can directly "subscribe" to anything on any device without using an intermediary (well, an application level intermediary anyway; you'll always be using link-level intermediaries). Such an ID might be IPv6, in which case such a service is trivially replaced with ordinary every day ssh/scp.
I think it should be as dumb and simple as possible and do nothing that deviates from normal unix behavior (i.e. a pipe). So it would cache only one chunk of data sent by A and start streaming to the first download request.
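Behaviorally that's close to a single-shot netcat pipe over a direct connection (flag spelling varies by nc flavor; the listener has to be up first, nothing is cached, and only one reader gets the stream):

receiver$ nc -l -p 9000 > file.bin
sender$ nc receiver.example 9000 < file.bin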
Regarding their tagline, "Easy file sharing from the command line": this is also possible with Yandex Disk¹ (specifically, the Linux version here²). It also has fewer limitations than transfer.sh, specifically:
• 10 GB of space for free (with a 10 GB filesize limitation³) over transfer.sh’s 5 GB
• no file expiration date, unlike transfer.sh’s 14-day expiration
Yeah, but the thing is that with transfer.sh you don't have to install any applications on your machine. And if you look at the code, you'll see that the max size isn't being enforced.
chunk.io uses an invalid security certificate. The certificate is only valid for the following names: *.herokuapp.com, herokuapp.com Error code: SSL_ERROR_BAD_CERT_DOMAIN
I've signed up for the mailing list in the hopes of getting an actual contact email for the author/s.
Get users, then start to offer premium features? Off the top of my head:
* Increased filesize limits
* Support for custom domains
* Ability to set download limits or kill date per file
* Analytics about file downloads
* Support for hosted version
I always wonder how sites like these deal with stuff like potentially hosting child porn/other content that is illegal to possess. Does DMCA safe harbor stuff cover it? (I don't see a DMCA notice on the site.)
You can leave out the 'potentially'. I shut down a file sharing service specifically for that reason; it is just about impossible not to become a vector for the transmission of illegal or objectionable content. The DMCA does not cover it, because the DMCA is about copyright, not about content that is illegal to possess regardless of how you got it. That said, the police (at least, the police here) are more than happy to work together with you to help keep the CP peddlers out, but it's a losing battle: there are a very large number of them, you'll be hard pressed to work 24/7 all year long, and since encrypted files can be uploaded you can't be sure of what's being transferred anyway.
Which is a good thing (after all, what is being uploaded is, strictly speaking, none of my business), but it automatically makes you complicit in roughly the same way that running a TOR exit node makes you complicit, and I don't want any part of that. I'm perfectly OK with setting up useful services for very little or no fee at all, but I refuse to become an unwitting and unwilling partner in other people's illegal dealings.
We're lucky that some people are fine with becoming an unwilling partner since without them no internet, no phone, no email, no snail mail, no roads, no electricity, no open source software, no nothing. It's a sad fact of life that pretty much every useful technology, service or piece of infrastructure will be used for illegal dealings of some kind.
It's a matter of proportion to me. If a service is used predominantly to facilitate illegality then I see no reason to continue to run it.
The internet, phones, email, regular mail, roads, electricity, software and so on have predominantly good and productive uses. File sharing websites attract, percentage-wise, more bad than good, or at least more bad than I'm comfortable with.
So in the end it's a moral call, and in this case my decision was to shut it down: the types of crimes involved were serious, the criminals on the service totally outnumbered the 'good guys', and our ability to deal with the assholes was limited.
It's obvious that you're trolling, but just in case you're not: eventually you can expect a push against cash in the name of anti-terrorism and anti-crime campaigns. Obviously those will be founded in bullshit, but that's not going to stop it from happening. Whether or not it will succeed is mostly a matter of how it gets sold to the public.
As for how it is possible for all of the available currency to be contaminated with traces of drugs without that proving that all (or even a majority of) cash transactions are drug related, I'm sure you're smart enough to figure that one out for yourself. But evidence such as this will figure prominently in the kind of discussion that will visit us some years into the future.
False analogy. Creating a file upload service with no authentication and no resources to keep it safe is like installing a fully loaded AK-47 at an intersection and hoping people will just use it to learn how it works, or use it in an emergency to shoot at criminals passing that intersection.
There are free file upload services, and there are ways to upload from the command line (1).
This is a general problem with banning things instead of simply regulating them somehow. How would one scientifically study methamphetamine, for example, in a country where it is illegal to even possess it?
Without arguing for or against CP, what if I wanted to look up evidence on whether use of CP leads to an increase or a decrease in actual child abuse, without setting off red flags everywhere? As a Psych major, that sort of thing would interest me, for example.
It's actually quite well designed and questions such as these were definitely taken into account during the design phase.
I came up with a similar scheme about 15 years ago (as a result of operating that file sharing service) and proposed to the local LE that we set up a service where a 'fingerprint' of an image could be tested against known bad images, and if an image tested positive it would be flagged for review (and on an exact match it would be automatically banned).
The local law enforcement officer thought it was a great idea but it would never fly because even the hashes of the images were considered off-limits for sharing with others and they'd have to share their database of hashes with me if I were to set this up (free of charge).
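The naive exact-match version of that scheme is a one-liner (file names are illustrative); the hard part, and the reason perceptual hashing like PhotoDNA exists, is that a plain digest breaks on any resize or re-encode:

$ sha256sum upload.jpg | cut -d' ' -f1 | grep -qxF -f banned_hashes.txt && echo "exact match, reject"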
Eventually MS came up with PhotoDNA and they're too big to ignore.
Agreed, but given the fact that this feeds into the legal system in a fairly direct way and that I'm only a 'one man shop' by their definition it makes good sense to insist on doing business with a larger party.
Where it goes haywire is that they then have to trust all the employees of that larger party as well but that's logic rather than CYA.
MSFT provides a set of sample images[1], some of which return as if they matched. The sample images are not child pornography, generally pictures of celebrities (which is really strange.)
I know. I helped them beta test that here in NL. Unfortunately it's not just still images; it's also videos, and encrypted still images and encrypted videos. PhotoDNA comes into its own once you actually have an image.
This reminds me of a node.js-based utility I wrote a while back called filebounce[1], except filebounce streams a file over HTTP/HTTPS and does not store the file on an intermediary server. It also supports tailored curl and GUI browser-based interfaces. I use it as a simple way of copying a file without needing ssh, ftp, samba, etc.
Love the idea. But the link always leads to a page with the file on it, not the file itself. Especially for picture sharing, that's not really acceptable for me.
I think the new thing here is that they've released the code on GitHub under the MIT license, so you can self-host the service if you want. I think none of the other projects mentioned here are open source or have a self-hosted version.
I wrote a sprunge clone with a gimmick (the URL contains the sha256 digest of the paste) to learn Go. Ships with a Dockerfile. Probably doesn't scale. Affero GPL.
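For comparison, sprunge's canonical usage (the form field is literally named "sprunge"); a digest-addressed clone would hand back a URL shaped like the hypothetical one below:

$ echo hello | curl -F 'sprunge=<-' http://sprunge.us
# a sha256-keyed clone would return e.g. http://paste.example/<sha256-of-body>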
I might be unusual here, but I typically get stuck when trying to copy files off a remote Windows machine. Most recently it was an aged Windows server with a version of Internet Explorer so old that all the modern file upload sites (e.g. https://file.io/) seemed to have JavaScript issues. Windows' built-in WebDAV support didn't work, because I think it's not installed on Windows Server.
Can anyone come up with a way to use a service like this from an old version of Windows that lacks modern web browser, powershell and so on?
Why depend on a centralized service when there are easy-to-use decentralized alternatives?
$ ipfs add myfile.txt
added QmeeLUVdiSTTKQqhWqsffYDtNvvvcTfJdotkNyi1KDEJtQ myfile.txt
# Access via a public gateway
$ curl https://ipfs.io/ipfs/QmeeLUVdiSTTKQqhWqsffYDtNvvvcTfJdotkNyi1KDEJtQ
# or via a local node
$ curl http://127.0.0.1:8080/ipfs/QmeeLUVdiSTTKQqhWqsffYDtNvvvcTfJdotkNyi1KDEJtQ
1. ipfs.io is not a critical part of the system. If it becomes unavailable, the information can still be retrieved via a local node or another public gateway.
2. With a decentralized system like IPFS, availability of the data does not depend on a single provider. Also, there are no artificial limits on data size and storage time.
This isn't for long-term storage, it's for ephemeral transfers between two well-defined entities. Like if I, as developer A, need to give a large binary blob to another developer B. If we are not in close enough proximity for physical media transfer, something like this might be the next quickest option. There's no need to go through the work of distributing it through the ipfs network.
People who don't even know what a text shell is can just drag a file icon onto the page to upload. No accounts or passwords are involved at either end, so it's about as frictionless as possible.
Its stdin will be the file instead of a pipe, but few programs care about the difference and fewer still in such a way that you'd prefer to use the pipe.
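That is, the only difference between these two is whether stdin is a regular (seekable) file or a pipe ("some_filter" is a stand-in):

$ some_filter < data.txt
$ cat data.txt | some_filter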
It doesn't appear so, no. The raw bytes seem to be dumped to temp files, local storage, or S3 [1][2] without mention of any sort of encryption step (or reading of a secret somewhere). As mentioned below, you could encrypt before uploading of course. Someone please correct me if I misread though.
Using Lisp with Drakma, the following function saves some content under a name:
* (defun save (name content)
    (multiple-value-bind (body status headers)
        (drakma:http-request (format nil "https://transfer.sh/~a" name)
                             :method :put
                             :content (write-to-string content))
      (list body status headers)))
Just drop it on one of the folders in your Dropbox that's also served by IIS on one of your boxen. Then simply link from the appropriate subdomain on your personal domain. Been doing this for 15 years, dunno why nobody else does.
If you can get that script to run on a user's machine, they're hosed whether or not this service exists. Curl or mail to an endpoint in the attacker's control would be equally effective.
Unfortunately, it is not practical to run this at home because of the asymmetric up/download speeds of my internet provider (upload is 20x slower than download).
There are a lot of sites like this. The only thing that's sorta neat about this one is that the filesize limit is so high. Getting around the size limit is fairly easy by using split or something similar, so that's not too big of a deal.
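A sketch of the split workaround (chunk size and names are illustrative):

$ split -b 4G big.iso big.iso.part.
$ for p in big.iso.part.*; do curl --upload-file "$p" "https://transfer.sh/$p"; done
# the receiver downloads each part and reassembles
$ cat big.iso.part.* > big.iso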
Others that come to mind are pomf.se clones like 1339.cf or comfy.moe and nekuneku's current site, uguu.se. Many paste services could also be used to achieve the same result, ix.io and sprunge.us come to mind.
[1] http://tools.ietf.org/html/rfc7231#section-4.3.4