As a sysadmin I have always hated SMB. I loved the Samba project for its effort to open-source an implementation, so I could at least run samba4 in prod and feel confident in its stability, but I always felt it was a bad protocol without many good alternatives. NFS is about the same, even worse when serving Windows clients.
For those on Linux or otherwise not aware: if you see 'cifs', it is (mostly) the same thing as SMB.
One thing I always did was require connections to use the latest SMB version (which mitigates many attacks against old dialects), ensure auth uses the strongest available methods, and, more than anything, run the SMB server behind a well-monitored and well-maintained firewall. That last one is the thing that gets lots of companies.
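To make "require the latest SMB" concrete, here's a minimal smb.conf sketch. Option names are from the Samba documentation; the exact minimum dialect you can enforce depends on your client mix, so treat the values as an example rather than a recommendation:

```ini
# /etc/samba/smb.conf -- hardening sketch, not a complete config
[global]
    # refuse SMB1/SMB2 clients; pick the highest dialect your clients support
    server min protocol = SMB3
    client min protocol = SMB3
    # require message signing and disable weak NTLMv1 authentication
    server signing = mandatory
    ntlm auth = no
```

On the Windows side the rough equivalent is removing the SMB1 feature entirely and enforcing signing via Group Policy.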
I'm leaning to some sort of automounter and a secure protocol. What that protocol is ... kind of doesn't matter.
If you look at, say, Plan 9 OS, it's got a rather interesting 9P protocol which implements a webfs and ftpfs for local fileshares.
What SMB did was, as with much of Windows stuff, conflate a few things:
* The filesharing protocol was mixed in with the transport protocol. That's not all bad, as you have that on the Unix world as well (NFS, AFS). But it means you can't change one without dealing with the other (the usual argument against monolithic design).
* The mounting and share-discovery mechanism was also baked in. This is where things start bordering on lunacy. In the Unix world, you'd deal with this through an independent mount or automount / autofs system. Which is its own brand of lunacy as well, but....
* Licensing. SMB / CIFS shares carry CAL (Client Access License) obligations which I've never been able to comprehend.
There are other bits. Like that if you were to access a CIFS share from, say, a Linux box or Cygwin session, each individual process-based access counts as a separate session (see CALs above, also the per-host session limit of, generally, 10 sessions).
Other options: FTP (though it's probably rightfully dying), HTTP, HTTPS (now we're looking at something remotely sensible), SSH.
There are FUSE filesystems which will work with each of these, though you've got to specify the hosts. You can automate some of that through autofs (on the Linux / Mac side). WebDAV in theory offers a read/write file-based access over HTTP(S), though in practice it's proved difficult.
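As a concrete sketch of the FUSE + autofs combination (host names and paths below are made up): sshfs gives you a FUSE mount over plain SSH, and an autofs map can bring it up on demand:

```shell
# one-off FUSE mount over SSH (requires the sshfs package)
sshfs user@fileserver:/srv/docs /mnt/docs

# to automate it, /etc/auto.master gets a line like:
#   /mnt/remote  /etc/auto.sshfs  --timeout=60
# and /etc/auto.sshfs maps a directory name to the sshfs mount:
#   docs  -fstype=fuse,rw,allow_other  :sshfs\#user@fileserver\:/srv/docs
```

The mount then appears at /mnt/remote/docs the first time something touches it, which is roughly the share-discovery convenience SMB bakes into the protocol itself.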
Generally, this is a space that's oddly lacking in reasonable and good alternatives.
I'd argue more generally that documents-based filesystems are also exceptionally poor at what they ought really be capable of doing.
The need to open and change files directly on a remote machine (while it is online) always seemed strange to me, because companies do not work this way. Documents should be received and sent, not mounted and edited silently. I'm sure there are thousands of info-sharing web projects, from phpbb-likes to specialized light/heavy solutions. But MS stuck with closed, undiagnosable, randomly slow SMB. Traditional file sharing should be killed off as an evolutionary error.
And you're planning on tightly controlling access to those files in a large organization... how? You're going to write custom apps to do diffs against every file overwrite on upload? And oh, by the way, now you've potentially got 200 different copies of the file on various people's laptops.
There's a reason SMB is popular: file sharing via a web server, among large groups of people who need to modify the files, is a complete nightmare.
IMHO, from dealing with it on large corporate client networks, I've never seen SMB solve the problems you mentioned.
Either you have an access problem (which you could solve with authentication + authorization in parent's web-based implementation) or you have a modification problem (which is really a revision control problem that I believe SMB doesn't solve at any version?). Which then leads people to tossing everything into Sharepoint (insert tirade here). Which leads to Microsoft not caring about evolving the SMB spec because it would only cannibalize sales.
So in essence:
SMB cases: "Documents need to be distributed, but not modified by more than one person/team"
Revision cases: "Documents need to be distributed, modified by more than one person/team, and redistributed"
I've seen things like what the parent describes work just fine in production with tens of thousands of people. The leakage to SharePoint has nothing to do with SMB, but with the lack of an easy-to-use search solution.
Email and Sharepoint give you search and avoid the need for hierarchical filing systems. Search is often less effective, but the clerical people who would maintain those other systems are long gone.
People can work out of garbage file dumps; I've seen it too many times to say otherwise. But once their company stops being trash, those habits get formalized into strict processes, responsibilities, and access rights anyway. 200 different copies of a file is a management red flag.
Not being knowledgeable enough about these protocols and their relative advantage/disadvantages, can the people who downvoted this comment also write down why to help others? That always makes more sense when it's not obvious.
I think this is downvoted because it's considered slow and/or hacky and having a fair amount of overhead (especially for SSH -- e.g. User and Key management).
That being said, I think the explanation is not available because ... people aren't really sure about a better option!
If we take the common (IMO complex) Network filesystem protocol implementations as having irredeemable flaws (so SMB and NFS, all versions), the only viable contenders I can think of are block level network device protocols. E.g.: iSCSI, NBD, DRBD, and probably quite a few others. These of course have the disadvantage of exposing block level protocols to clients, leaving the actual filesystem management up to the client.
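A minimal NBD sketch of what that looks like (hosts and paths are placeholders, and newer nbd-server versions prefer a config file over this older positional invocation). Note the filesystem lives entirely on the client side, so you cannot safely share one export between multiple writers:

```shell
# server: export a disk image over NBD
nbd-server 10809 /srv/exports/data.img

# client: attach the export as a local block device and put a filesystem on it
nbd-client fileserver 10809 /dev/nbd0
mkfs.ext4 /dev/nbd0        # first use only -- this wipes the device
mount /dev/nbd0 /mnt/data
```

That last caveat is the real disadvantage: for multi-client sharing you'd need a cluster filesystem on top, at which point the complexity is back.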
Summary: it's always a tradeoff and most of the options suck in one way or another. I personally wouldn't downvote this comment. But maybe someone could enlighten me.
WebDAV didn't take off because no one really felt like giving away a high-quality kernel-space implementation for Windows. And Microsoft viewed it as competition against SMB and its licensing model.
These are terrible alternatives once more than a half dozen people are involved.
SMB isn't about file transfer so much as the system around authorization and authentication. SSH/SFTP gives you highly secure access to unix file systems. There isn't a good interface to advanced ACLs.
Definitely agree, but the grandparent comment wasn't about authorization or authentication; it was about file transfer among a team on a local network. SSH/SFTP gives you that, but as I said it's not always an alternative, for the reasons you mentioned.
Question - wouldn't it be feasible to write a worm that uses the vulnerability to deliver a patch for the vulnerability... and then maybe deactivate itself after several days (the worm / spreading portion)?
Reminds me of a tricky situation I got myself into:
When I was in college, I wrote a simple worm to display a new year greeting on all computers it infects.
Once it infected a computer, it did the following:
1. Replicated itself to as many computers as possible
2. Displayed the greeting (until the user acknowledged it with a key press)
3. Deleted itself (in the hope that it would quickly die out on its own)
I seeded it in one of the computers in our college network.
I didn't expect it to be so effective; it spread very quickly through the entire network.
With the self-delete, I thought it would die on its own. I was wrong. Machines kept reinfecting each other in a perpetual loop.
The only way I could stop it was to write a new version that replicated itself and cleaned up the first version. This new version was still replicating on the network a year later. Since it did nothing visible to the user, I was saved :)
You should have included another functionality to check the current date, like, it should stop spreading and just delete itself if it's already in April. ;)
Good thought :)
When I put out the first version, I hadn't really understood the consequences or its effectiveness; in fact it had my codename in the greeting :(
When it first appeared, people found it amusing; but it quickly went out of control and kept appearing again and again. This annoyed people. I was in trouble. I had to quickly find a solution in that panic.
Yeah, I read about the Morris worm a year later in my networking textbook. It was a good feeling to realize what I had done was similar in nature.
Of course, the Morris worm used a real technical exploit (a buffer overflow was involved, IIRC).
Mine was nowhere comparable to that in sophistication.
Well, you could, and I know someone who did just that in the NT 4.0 days, but you are taking one hell of a risk. It is still illegal even if your intentions are good, and you could truly f'up the machine you are patching. The road to hell is paved with good intentions, after all.
Yes, this technique is sometimes used by malicious software in patching the vulnerability that was used to gain control of the host. Security and software companies typically do not want to use benevolent worms to patch vulnerabilities, though. Autonomous code spreading itself and making changes to people's machines without permission is generally frowned upon.
This reminds me of an old story I read in an anthology called "Stealing the Network: How to Own the Box"; it was a series of modern-day hacking short stories loosely tied together through an overarching theme, and each chapter was written by a different professional in the security field. Fun reads, and an interesting introduction to many of the tools and practices of security pros, if a little fanciful at times. :)
The particular chapter I was thinking of was named "The Worm Turns".
The Adylkuzz SMB variant, a Monero miner which actually came out just before WannaCry, disables SMB1, but that clearly didn't seem to help, probably because (AFAIK) it didn't include a hunter component. In theory, if it had, WannaCry might never have happened, since the vulnerable base would already have been gone.
I've been wondering since this began, is there any reason as to why someone would open up their file server/shares to the internet? I remember reading somewhere that SMB wasn't designed for WANs and seems like a terrible choice to put on the internet (even without the security risk).
You have a corporate network which has SMB everywhere (as it's Windows based). One of your 10,000 users runs "funnyscreensaver.exe", and before you know it your entire network is infected -- it doesn't matter that your firewall blocks incoming or outgoing 137/139/445, or even if it's an isolated network without NAT capability.
People have been using the internet to share files for a long time, it's one of the most common things people want to do.
If the SMB implementations in Windows and Samba had been done with due attention to security requirements of networked software, it wouldn't be especially risky either.
May I point out that with wcry 2.0 probably nothing was clicked: the worm got in via SMB through internet-facing port 445 and then spread internally on the LAN. Although yes, random clicking is never advisable...
Also, this: https://wiki.archlinux.org/index.php/Samba#Block_certain_fil...
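That Arch wiki section is presumably about Samba's veto files mechanism; a sketch (the extension list is just an example):

```ini
# per-share in smb.conf: hide and block common malware-carrying extensions
[share]
    veto files = /*.exe/*.com/*.bat/*.cmd/*.scr/*.vbs/*.pif/
    delete veto files = yes
```

Vetoed names can neither be seen nor created through the share, which stops a lot of casual "funnyscreensaver.exe" distribution at the server.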
Finally, segment your damn networks with vlans/subnets!