Baltimore Blew Off Ransomware Demand Only to Find Data Had Never Been Backed Up (techdirt.com)
190 points by miles on Oct 17, 2019 | 114 comments



I remember hearing lots of back and forth on whether they actually used EternalBlue or not. I'm still not sure whether that's settled. Is it?

"Given the fact that $6 million has already been pulled from parks and public utilities funds to "harden" city systems, the $76,000 demand now seems like a bargain."

That doesn't seem fair at all. They'd still have had to harden everything, and it would still likely have cost millions. From the looks of it they'd at least have gotten their stuff back, probably, but they'd still have needed to put in all the same time, money, and work, wouldn't they?


First of all, it's a terrible article and it's full of typos; why aren't we linking to the source article on Ars Technica?

Secondly, later in the article is the more relevant passage:

>The city figures it will cost $18 million to recover from a rejected $76,000 ransom demand. I guess if you're going to play chicken with extortionists, you might want to make sure your backup plans at least meet min spec.

Either way, it's a terrible article and I'm on the fence about flagging it.


On the contrary, I kinda liked this sarcastic writing...

>>The person in charge of the city's systems was Frank Johnson, who went on leave (presumably permanently) after a post-attack audit found the IT director hadn't done much IT directing.


It's RobbinHood ransomware; I don't think it was EternalBlue either...

https://www.bleepingcomputer.com/news/security/a-closer-look...


I wonder if, by the time we factor in all the costs of putting everything on a computer and online, securely, it's going to look like it would have been a pretty reasonable idea to leave most of it the way it was in the distant past of 30-ish years ago.


The Techdirt article's author suggests they should have just paid the ransom.

Even knowing that paying the ransom is no _guarantee_ you actually recover, and that you have to then harden your system anyway after that (which you should have been doing all along)... I tend to agree.

I am a Baltimore resident, and I've also had that opinion since the day after the ransomware attack was announced -- just pay the ransom. (In part because, from what I've experienced of Baltimore City government, I was pretty sure the backups and recovery/continuity plans were going to be basically nonexistent.)

But on previous HN threads it was a very unpopular opinion; in the threads right after the attack was announced, very few commenters seemed to think it was acceptable or a good idea to pay the ransom.

I'm curious if that remains true?


I work with a crypto exchange and, FWIW, can tell you that most businesses would agree with you. They pay the ransom. And they use common exchanges like Coinbase, Binance, and Bittrex to do so.

The only way not paying the ransom is beneficial (aside from having proper backup systems in place, obviously) is if there were a large public sentiment shift promoting NOT paying ransoms. If society at large came together and decided the majority of entities are NOT going to pay, then ransomware would be less profitable and thus a less favorable project for hackers/scammers.

But that would take a lot of effort, organization, and favorable circumstances to have everyone do that simultaneously going forward. And there would be casualties at the beginning, before the public sentiment cemented itself in the collective consciousness.

But yeah, the only way ransomware will stop is if it stops becoming profitable: which means companies either need to have proper OpSec and backups (so they have no need of paying), or collectively agree that no one will pay ransomware attacks.

Seems like a pipe dream that we'd ever get to that point though. So I imagine companies will continue to fork over the ransoms.

Edit: this got me thinking. Say, for example, the US government outlawed paying ransoms to hackers, and could somehow enforce this law effectively. Wouldn't that pretty much stop ransomware attacks in the US, or wherever a law like that could be effectively enforced?


I think society has come together and decided we're all going to pretend we're not paying and talk publicly about how nobody should ever pay, while mostly everyone is paying.

Society also seems to have come together and decided we'd rather save money than spend it on effective backups and security.


Paying ransom makes it profitable. If you make something profitable there will be more of it as there is an incentive for individuals to continue those attacks.

Honestly, lack of backups is unacceptable. I can accept it if it's a young startup handling some basic data and passing payments to an established business. However, any SE, DBA, or IT director/admin with some experience should know about backups and push management for them. If management refuses to pay for the cost of backups, that individual should note it (preferably with a paper trail) and bring it up when shit hits the fan.

I will write SQL statements prone to injection, I will hack things together when I have to due to circumstances but I inform management of risks and they make the decision to proceed in a certain way knowing the risk. If they try to light my ass on fire 2 years from now, I will open the email and say "told you so". I have actually done that multiple times already and never got burned.


Welcome to Baltimore. Of COURSE not having backups is unacceptable. Other things that are unacceptable include it taking 3 days to get the city to _turn off the pipe_ in response to a report of a massive leak into a river causing wildlife die-off (https://www.baltimorebrew.com/2019/09/13/water-main-break-ca...), spending money to put in a bike lane and then more money to take it out when nothing in particular has changed (at least TWICE https://baltimore.cbslocal.com/2019/04/29/roland-park-bike-l... https://www.baltimoresun.com/maryland/baltimore-city/bs-md-l...), having a Traffic Director vacancy for 5 years running (https://www.baltimorebrew.com/2019/09/19/a-transportation-de...), and nobody qualified doing anything about traffic light timing (https://www.baltimoresun.com/ask/bs-ask-traffic-lights-20191...). I could keep going.

Baltimore is basically a 'failed state' of municipal government.

While you can "blame the voters", being one myself I literally don't understand what the fuck is going on or what to do about it; even when we elect people who seem like they might do something different, it remains the same.

So... if individual people working in IT protected their own asses by documenting that someone told them to do the wrong thing... that's great for them, but what it does for Baltimore is _nothing_. The IT director lost his job (probably? Maybe? So far just on unpaid leave?), which he thoroughly deserves, but that ALSO does nothing for Baltimore, because people like this keep getting replaced, and it still doesn't get better.

Anyway, this actually doesn't have much to do with ransomware in general, but yeah.


> paying the ransom is no _guarantee_ you actually recover

I’ve only helped recover from ransomware once, about 2 years ago. When we ran the decryption program provided after paying, it decrypted ~97% of the data. Some files were just permanently corrupted. Windows Server with ECC memory, FWIW.

Fun note: the owner wanted to reboot after decryption and I yelled “No!” across the room, wanting to clone the known-good data first. Good thing too; a reboot started the encryption all over again… good times.

I heard they were hit again but took my advice and set up Backblaze.


It depends... if the decision makers knew that there weren't solid, tested backups, they should have paid, cut off all external access, and worked to lock things down.

Also, regarding backups: at least three physical devices across at least two distinct geographic locations. Have backup systems push to a temporary drop location, from which a pull system then pulls those backups to a separate location. This provides separation between your internal systems and your backup systems; otherwise, in situations like this, the backups could have been compromised as well.
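
If it helps make that concrete, here's a minimal sketch of the "pull" half of that split, assuming the drop location is mounted read-only on an isolated backup host; the paths and the .tar.gz naming are just placeholders:

    # pull_backups.py - rough sketch of the "pull" side of a push/pull backup split.
    # Production hosts push archives to DROP; this job runs on an isolated backup
    # host (no credentials for it exist on the production network), copies new
    # archives into VAULT, and verifies a checksum. Paths and layout are hypothetical.
    import hashlib
    import shutil
    from pathlib import Path

    DROP = Path("/mnt/drop")      # temporary drop location production pushes to
    VAULT = Path("/srv/vault")    # separate storage only this host can write

    def sha256(path):
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def pull():
        VAULT.mkdir(parents=True, exist_ok=True)
        for src in sorted(DROP.glob("*.tar.gz")):
            dst = VAULT / src.name
            if dst.exists():
                continue                        # already pulled on a prior run
            tmp = VAULT / (src.name + ".partial")
            shutil.copy2(src, tmp)
            if sha256(tmp) != sha256(src):      # the copy must match what was pushed
                tmp.unlink()
                raise RuntimeError("checksum mismatch pulling " + src.name)
            tmp.rename(dst)                     # publish into the vault
            print("pulled", src.name, dst.stat().st_size, "bytes")

    if __name__ == "__main__":
        pull()

The point is the direction of trust: nothing on the production side holds credentials that can touch the vault, so ransomware that owns production can trash the drop location but not the history.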

Aside: this is a perfect example of why government agencies should have policies requiring disclosure of the vulnerabilities they hold after 60 days or so. That gives them a window to exploit as needed, while still limiting the catastrophe when a trove of previously unknown exploits goes into the wild in a breach. While I'd prefer everything be responsibly disclosed to the affected software vendors, I understand that state-sponsored cyber warfare is a thing.


People have principles when it isn't their money. The best route would be to pay it, then divert the money and fix the issues. Think of the 76k as cheap penetration testing.


The problem with this thinking is:

1/ You have to trust the criminals are "good" people and actually took the time to write code to decrypt the data.

2/ Paying the ransom paints you as an easy target for the next exploit.


No, because if it doesn't unlock, you're no worse off than you started.

It might paint you as an easy target, but that's why you increase your defenses. Basically, I'd rather roll the dice and see if the paltry sum gets me a fully working environment while I plan next steps.


Just my personal opinion, but I figure the logical response would be to audit 1) the state of their backups, and 2) the source of the malware. After confirming that all necessary data had indeed been backed up and could be restored, and after blocking the vector used to deliver the malware, they could refuse to pay the ransom and restore from backups. Then they could go about further hardening.

If it was discovered that the backups were faulty (as in this case), or if it were not possible to discover how the infection happened prior to the ransom deadline, I would have advised paying the ransom, since 76k is a relatively small amount to the city of Baltimore. (And then of course continuing to investigate and harden to avoid having to do so again.)


>I'm curious if that remains true?

It's complicated, like any scenario where there is individual benefit but social harm (with healthy mixes of fault and incompetence too). It's so obvious sometimes it can get lost, but the people behind the ransomware are criminals. Ransom paid to them is money that is in fact directly going to supporting criminals, the government and in turn taxpayers aiding them. At "best" they're very pure single type cybercriminals and it "merely" supports them in pushing more ransomware. More often criminals are into a diverse set of activities, ranging from other varieties of cybercrime to being part of a real world physical syndicate, in which case some of the ransom is going to an organization that does stuff like murder/assault/human trafficking/drugs/etc. The strong reactions against just paying the ransom, even if it's cheaper and a clear winner from the individual first order perspective of Baltimore City, stem from this and that hasn't changed. Even if it costs $18 million vs $76k, that is still $0 directly going to the criminals themselves. On some real level, if Baltimore just paid the ransom they would be attacking the rest of us for their own benefit following their own fuckup.

Sometimes there might be other mitigating factors in the intensity of people's reactions, but most of them don't apply here. This wasn't an innocent or complex mistake, it was utter, mind blowing incompetence at the most basic level. It wasn't a matter of economics either, or technological sophistication. They'd be immensely better off if they'd merely aimed at any basic network share and cycled a few USB hard drives into a safe deposit box twice a month. That'd still be embarrassingly primitive and ineffective by any remotely modern standard for any significant organizational entity, yet it'd be something. And since it's a democratically elected government entity the complaints about it falling on citizens fall a bit flat too, because citizens have a duty to be checking on their government in a democracy. And this has been a lesson that has helped push things forward elsewhere at long last, so some good came from it that wouldn't have if the ransom had been paid.

So basically yes, it remains true for a lot of us.


AFAIK everyone who has paid the ransom gets their data back. It's a pretty slick operation with decent support after the ransom is paid.


In addition to what other comments said about it making these attacks profitable, 76k is not nearly a big enough stupid tax for Baltimore.


Don't negotiate with terrorists.


You never know who's swimming naked until the tide goes out.

Would be great if every organization conducted targeted attacks in order to probe the reliability of their assumed safeguards. If your system hasn't been tested by someone who has a real incentive to break it, you have no idea if it is really as secure as you think it is.


We do this. Every place I have worked at I've had the senior admin explain and show the backup plans for servers and desktops and demonstrate various restore requests. I have caught a few with their pants down.

Another important thing is fine-grained permissions on network shares.


We have our own pen testing team in addition to the 3rd parties who also do it. There’s a big DR test this weekend for our 2 largest data centers as well, to make sure backups and failovers work. Plus multiple networks for different products, completely separate from the regular employee network. You even have to sign in if you plug into our Ethernet.


Penetration tests are a huge deal right now. Many companies outsource it to security agencies who then perform the pen tests and report results.


In a previous career I worked for a while with some products that were popular with municipalities and school districts. The scale of underfunding and incompetent leadership in those roles is staggering, and sadly not surprising to me.

Internal politics are always an issue, of course, especially at schools where a handful of luddite teachers/administrators can kill a good idea, but that's also an issue of IT leadership.

I was glad to move away from those products.


I briefly worked for a school district that had 8000 Macs to support. They fired the entire technical team (for reasons unknown) except for me and then proceeded to hire people who had never used a Mac in their life. The woman in charge of the department was like Lorraine Bracco’s character in Hackers - extremely overpaid, could barely figure out how to work a computer, and used God as her password. I’m still disgusted when I think about how corrupt and incompetent that district was and likely still is. A few of the fired guys wound up with a nice settlement, I’m sure the webcam recordings of the superintendent exchanging teaching positions for “favors” probably worked to their advantage.


Reasons unknown? For starters, they were conspiring to covertly spy on their employer.


Nope, the district had no idea they were doing that until after the firing. I did hear the actual reason was because they were considering joining a union, but I didn’t stick around long enough to dig any deeper.


Conspiring criminals tend to do lots of conspiracy-ish and criminal-ly things. If they secretly installed webcams, they likely also were reading users' email and viewing confidential documents like performance reviews, budgets, salaries and student test scores. Their firing was likely highly justified.

The fact that you seem to know all kinds of facts that discredit the school district, the superintendent, and the IT supervisor, yet "didn't stick around" to absorb info that might look bad for the group of employees, makes you a narrator who is difficult to believe is objective and unbiased.


If you don't test your backups, they're not backups. But I guess in this case if you don't have backups at all, you also don't have backups... 1 is 0, 2 is 1, etc...


I used to manage a set of tape backup robots (huge square tapes, I want to say 25 glorious GB apiece, and some Sony DAT devices at 50-75GB apiece) that backed up sister data centers over a few DS-3s back in the early 2000s. The hardware/servers involved were worth around three-quarters of a million dollars.

I only provide that context because I've been as far away from that experience as I can possibly put myself and I can only hope it's gotten a lot better.

Prior to me and during the time that I managed backups, the statement "if you don't have backups at all, you don't have backups" was true for the entire backup set probably about once a week[0]. Those miserable robotic devices would crush tapes, the drives would eat tapes or otherwise fail, the backup servers would be overwhelmed (despite being nearly the largest hosts we had), the software would crash (Backup Exec), or a server would just be ... skipped? And the tapes had a lifetime that was 1/10th what was advertised (and 1/2 what we had planned on[1]).

Our process included a monthly restore test of a few critical pieces of infrastructure and a rotating list of other hosts. I think the one time that we passed might have been worthy of opening a bottle of wine. Most of the time we could get enough data back that the restore wouldn't result in more than a rounding-error financial loss, but for the most part our server team spent a lot of time praying.

That system gave me nightmares for years after I left that position.

[0] It was so bad that our initial plan was to back up DC A using DC B and vice versa, but we opted to back up both from both data centers (one of our requirements was off-site backups).

[1] I think I read the headline of an article a month ago about why "tape is still king" in backups. I couldn't bring myself to click in. I assume things have gotten better, but if they're still selling this product the way they did back then, I'd want real-world numbers on MTBF and the like.


Worked in a helpdesk for a while with a similar system (this was after working support for Iomega). I have no desire to work with anything like this ever again.

Systems push to a drop location. Backup services pull from the drop location(s). Redundant storage, etc. I don't trust "disposable" media at all anymore, and have only limited trust for hard storage (HDD/SSD/etc).

To me, if it's not on at least 3 devices at at least 2 locations, it's not properly backed up.


I would take that a step further and say, "If anything, including automation, can tamper with your backups, they are not backups"

Several companies have gone out of business this year due to getting their servers wiped by an attacker. Their backups were on live servers as well.


Every night we rebuild our dev and qa servers and databases from backups of production which were taken that same evening. If there is something wrong with the backups, the dev team will definitely know about it the next day when they try to start working and find that their development and qa systems are not working.
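
For anyone who wants to copy the idea, here's a rough sketch of the nightly job (PostgreSQL assumed purely for illustration; database names, paths, and the smoke-test table are made up):

    # nightly_restore_check.py - sketch: restore last night's production dump into
    # the QA database and fail loudly if anything is off. Names and paths are
    # hypothetical; assumes the PostgreSQL client tools are on the PATH.
    import subprocess
    import sys
    from pathlib import Path

    DUMP_DIR = Path("/backups/prod")
    QA_DB = "qa_app"

    def latest_dump():
        dumps = sorted(DUMP_DIR.glob("prod-*.dump"))
        if not dumps:
            sys.exit("no production dumps found -- the backup job itself is broken")
        return dumps[-1]

    def restore(dump):
        # --clean --if-exists drops existing objects so QA matches the dump exactly
        subprocess.run(
            ["pg_restore", "--clean", "--if-exists", "--no-owner",
             "-d", QA_DB, str(dump)],
            check=True,
        )

    def smoke_test():
        # trivial sanity query; real checks would count rows in several key tables
        out = subprocess.run(
            ["psql", "-d", QA_DB, "-tAc", "SELECT count(*) FROM users;"],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        if int(out) == 0:
            sys.exit("restore 'succeeded' but the users table is empty")

    if __name__ == "__main__":
        d = latest_dump()
        restore(d)
        smoke_test()
        print("restored and verified", d.name)

Wire something like that into cron and the dev team becomes your restore-testing alarm for free.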


The insane thing is it's 2019 and this should be a baseline when dealing with your backups. It's shocking to know many people don't even do this.


Reminds me of a client's system I was on a few years ago.

They had a backup script on their server that was written some time in the 90's by the looks of it. Used ye old `cpio` command to write to a tape drive; someone's job was to manually go and eject the tape every morning and replace it.

For verification, the script simply checked the exit status of the cpio command, and if it returned 0, it wrote "BACKUP STATUS: SUCCESSFUL" (or "BACKUP STATUS: BAD" if non-zero) to a log and sent an email.

At some point we had to try to restore something, and spent several hours trying to figure out why their backup tape was empty. Some quick digging into the script discovered that the command would indeed write some kind of tarball to the backup tape; it was just zero bytes.

IIRC, I eventually traced it to some problem with the cpio command erroring out if it tried to copy a file larger than 4GB. Suffice it to say, the client was a little shaken that they had gone god-knows-how-many-months without an actual backup.
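
The general lesson, if anyone is writing that kind of wrapper today: don't trust the archiver's exit status, re-read the archive and compare it against the source. A rough sketch of what that check could look like (tar via Python's stdlib instead of cpio, purely for illustration; it assumes the archive was created relative to the source's parent directory):

    # verify_backup.py - sketch of a post-backup check that would have caught the
    # zero-byte "successful" backups: re-read the archive and compare against the
    # source tree instead of trusting an exit status. Paths are hypothetical.
    import sys
    import tarfile
    from pathlib import Path

    SOURCE = Path("/srv/data")
    ARCHIVE = Path("/backups/data.tar.gz")  # e.g. created with: tar -C /srv -czf ... data

    def verify():
        if not ARCHIVE.exists() or ARCHIVE.stat().st_size == 0:
            sys.exit("BACKUP STATUS: BAD (archive missing or zero bytes)")

        with tarfile.open(ARCHIVE, "r:gz") as tar:
            archived = {m.name for m in tar.getmembers() if m.isfile()}

        expected = {str(p.relative_to(SOURCE.parent))
                    for p in SOURCE.rglob("*") if p.is_file()}
        missing = expected - archived
        if missing:
            sys.exit("BACKUP STATUS: BAD (%d files missing, e.g. %s)"
                     % (len(missing), sorted(missing)[:3]))
        print("BACKUP STATUS: SUCCESSFUL (%d files verified)" % len(archived))

    if __name__ == "__main__":
        verify()

Even that only proves the archive lists the right files; a restore test is still what actually proves the data comes back.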


Question:

OK, so I know how to test — manually — that a few randomly chosen files are correctly backed up in my backup systems.

But what if there are larger classes of systemic error in the process I haven't thought of testing? What if some particular file type or directory tree has vanished from the backups, but since it isn't in my manual testing process, I never catch it? Are there best practices for validating that whole directory trees are correctly backed up across the board? Or any form of automated testing for backups (but, presumably, separate from the software that does the backup process)?
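
I don't know of a standard tool for exactly this, but one approach that has worked for me (sketch below; the restore path is made up) is to restore the entire backup to scratch space on a schedule, then diff a checksum manifest of the live tree against the restored tree. A directory or file type that has silently vanished from the backups then shows up as a missing path instead of something you have to remember to spot-check. Files modified after the backup ran will show as differences, so run it against a quiesced copy or snapshot of the live data:

    # tree_diff.py - rough sketch: compare a live tree against a tree restored from
    # backup, by relative path and SHA-256. Anything present in LIVE but absent or
    # different in RESTORED gets reported. Paths are hypothetical.
    import hashlib
    from pathlib import Path

    LIVE = Path("/srv/data")
    RESTORED = Path("/scratch/restore/data")

    def manifest(root):
        out = {}
        for p in root.rglob("*"):
            if p.is_file():
                h = hashlib.sha256()
                with p.open("rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                out[str(p.relative_to(root))] = h.hexdigest()
        return out

    def compare():
        live, restored = manifest(LIVE), manifest(RESTORED)
        missing = sorted(set(live) - set(restored))
        changed = sorted(k for k in set(live) & set(restored) if live[k] != restored[k])
        for path in missing:
            print("MISSING from backup:", path)
        for path in changed:
            print("DIFFERS in backup:  ", path)
        if not missing and not changed:
            print("all", len(live), "files present and identical")

    if __name__ == "__main__":
        compare()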


For key servers, we've been known to clone the entire drive and put the clone in service.

This helped us with several problems:

Backup system didn't record the permissions.

Backup didn't get 'resource forks' or 'alternate data streams'

Backup wasn't complete enough to satisfy copy protection for software that was no longer supported.

Backup process didn't see deep into the folder because there was a hardlink/symlink loop that caused it to stop

Backup didn't get a coherent copy of a bunch of files that was acting like a database

Drive had a CRC error that let it keep running but stopped the backup system from reading the file

A Windows program stored information in the case of the file name, while the Unix backup command assumed it could squish the Windows file names to lowercase.


I would just use ZFS snapshots and replication. They're read-only, and they only use as much space as the difference between the current state and the snapshot.

Plus, you'll have solid data integrity instead of relying on buggy firmware in your RAID controller.
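
For anyone who hasn't used it, the whole scheme is basically two commands, snapshot and send; here's a rough sketch wrapped in a script (dataset names and the target host are made up, and the receiving side should be a separate box whose credentials the sender doesn't hold):

    # zfs_replicate.py - sketch of ZFS snapshot + incremental replication.
    # Dataset names and the target host are hypothetical.
    import subprocess
    from datetime import datetime, timezone

    DATASET = "tank/data"            # local dataset to protect
    TARGET_HOST = "backup@vault"     # receiving box (separate credentials)
    TARGET_DS = "tank/backups/data"  # dataset on the receiving box

    def run(cmd):
        subprocess.run(cmd, shell=True, check=True)

    def snapshot_and_send(previous=None):
        snap = "%s@auto-%s" % (DATASET, datetime.now(timezone.utc).strftime("%Y%m%d%H%M"))
        run("zfs snapshot " + snap)
        if previous:
            # incremental: only blocks changed since the previous snapshot are sent
            run("zfs send -i %s %s | ssh %s zfs receive -u %s"
                % (previous, snap, TARGET_HOST, TARGET_DS))
        else:
            run("zfs send %s | ssh %s zfs receive -u %s" % (snap, TARGET_HOST, TARGET_DS))
        return snap

    if __name__ == "__main__":
        # first run: full send; later runs: pass the previous snapshot name
        print(snapshot_and_send())

Since the snapshots on the receiving side are read-only history, ransomware that encrypts the live dataset just shows up as one very large incremental, not as destroyed backups.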


And there's no way to compress/eliminate snapshots?

Also, redundancy and snapshots are not a backup.


Doing some DR testing is the only way to truly know


This reminds me of the GitLab outage when they realized their backup images existed but were size 0. If you aren't regularly restoring backups to a machine, you are setting yourself up for failure.


I have to wonder if, in macro economic terms, we'd be better off spending whatever is necessary to successfully investigate and prosecute scammers/telemarketers/ransomers.


Some of that is going on. I've heard of some orgs being asked to pay a ransom in hopes of picking up a trail to the perpetrators by following the Bitcoin.

https://www.fbi.gov/news/stories/ransomware-abettor-sentence...


We are best off not paying them regardless: if there is no payout, the bad actors will give up the attack. If you are in a group known to pay out, they'll target you. It doesn't pay to target someone who has good backups (though it might not be worth their checking if infecting everyone is easy).


The attack is inexpensive enough that even if 0.001% pay it's probably still worth doing.

Even if you criminalize paying you'll still have a tiny portion that do.


For takeovers of larger systems, probably not. They're clearly finding vulnerabilities/targets, doing some minimal investigation to set a price to charge them, and some other basic effort. Doing it 10000 times for a single payout probably wouldn't be worth it.


1 / .001% = 100,000


I mentioned a lower value to indicate it definitely wasn't worth it.


>if there is no payout the bad actors will give up the attack

I'd argue that isn't true. Ransomware is a form of terror built around the threat of "losing everything". And that threat of impending loss, combined with "backups are hard, wahhhh" and the fact that no corp I know backs up user machines...

As a numbers game, it pays. Well. And even if it didn't, some still hack for the "lulz".


>>no Corp I know backs up user machines

Most of the orgs I've worked at (for at least the last decade) map the local profile to a network share, which is definitely backed up. The only things not backed up would be files saved outside that folder structure, but the organizations have always been clear about that.


Definitely. Look at all the people pursuing "make money fast" scams. Some people will happily do anything they think might make them a buck. Not paying out will help, of course, but it doesn't eliminate the problem.


When there is no money in it nobody will be developing the exploits that make it easy. There is a large difference between buying a recent vulnerability and exploiting it vs finding a vulnerability.


One problem with that... international borders.


The irony in this is absolutely hilarious. I remember we were all praising the city for having such forethought and for doing their due diligence (unlike most of these cases) and keeping backups of their data.


I wonder how the IT director was hired and who monitored what the department was doing. I worked at a place that hired an MS systems support person, but no one there knew anything about it and he seemed to know more than they did. He was fired a couple of months later, as he actually knew nothing and spent all his time making support calls trying to get someone to tell him what to do. He was on the phone for hours every day. Then, when he was fired, no one changed passwords, so he logged into the DNS server and randomized all the IPs, leaving the whole company without networking. Hopefully not the same person...


https://www.linkedin.com/in/frank-johnson is the guy. Typical suit found in many exec positions, looks perfectly legit, fine credentials, a little heavy on the sales stuff.

CIO for an acutely underfunded department in a large but very poor city is not an easy job.


From the profile's About section:

>I am a visionary leader that quickly grasps near-term initiatives and opportunities while foreseeing long-term opportunities and how to achieve them. I bring a unique blend of expertise in building and developing winning teams as well as building and scaling businesses. ... I have strategic technology discussions with the public and private organizations in the Baltimore City area every day. Put simply, I assist the Mayor discover how technology can deliver value and competitive advantage to Baltimore City and transform the lives of people that live, work and visit Baltimore City.

Try as I might, I would never be able to come up with something like this, even if you held a gun to my head. There surely must be a small industry of HR consultants who specialize in writing this kind of drivel.


Oh, I agree it’s drivel, and it makes a typical HN reader cringe, but this is fairly common language in executive bios.

Taking a career-ending bullet like this is part of what these C-level people sign up for, an occupational hazard, especially in Baltimore.


Baltimore has been through a good number of IT directors who were caught up in scandals/corruption. Not surprising, as in the past ten years we have had two disgraced/corrupt mayors abruptly leave office.


IT directors? Many of Baltimore's mayors, councilpersons, police chiefs, police officers and detectives, public works heads, etc. have been scandalous, crooked, incompetent, etc. in recent years. It's the norm, not the exception. It's a crooked town.


As for corrupt IT directors, read here:

https://statescoop.com/4th-cio-leaves-baltimore-within-five-...

It is a corrupt town; the facts and history clearly show this. It's the area where I grew up, live, and work (all within the surrounding counties), which is solid living! Though Baltimore isn't the only corrupt US city!


What is the end state for Baltimore?


As a city? Probably status quo: continued grift and unaccountability as a once-proud manufacturing town decays.

It's somewhat attractive price-wise simply because it's a very affordable urban East Coast city. There's a vibrant/edgy art scene (including drama, etc.) that's attractive to younger artists, sort of like what Detroit is going through (as I understand it).


Baltimore County, Howard, Harford, and everywhere not Baltimore City is good living. High-paying IT work is readily available, and you can work at many well-known government agencies. Cost of living isn't too bad either; you can get nice, inexpensive new single-family homes near the MD/PA line and commute in (a 30 to 40 minute commute).


Twice I've been at companies where we lost some storage due to a hardware failure and it turned out the backups were missing or didn't work. And both times, nobody was fired.


In fact, if you haven't tried to recover from a backup, it probably doesn't work. Like most recovery/data integrity hardware/software, if it isn't being (regularly) tested, it probably is misconfigured, not running or broken. Like all software.

I've seen many PC RAID hardware solutions that simply didn't work. A drive fails, and the machine becomes unusable, fails to recover when a good drive is inserted, or crashes/hangs and the data is lost.

Yet people keep buying RAID solutions for PCs. I recommend: pull a drive from your RAID hardware and see what happens. If you're afraid to do that, then you need a different solution.


I used to work at a RAID hardware company (since bought by Sun, now Oracle, though I have no idea if that model is still sold). Salesmen were told to demonstrate the system by pulling a random drive from the production array and putting it on top. Fifteen minutes later someone from support showed up with a new disk (not IT, support), and the salesmen made sure they were still next to the machine to show off that support knew about the failure and could replace it quickly.

If you don't have the same confidence in your systems you need to fix that.


Heh. I visited a colleague who had a fancy new NetApp, and he demonstrated pulling a drive (status light change and notification) and reinserting it (status light change and notification).

The next morning he got a FedEx from NetApp... a replacement drive.


What kind of protection layer was in place? I've been using ZFS for years and this type of functionality is standard.


This was hardware raid. There were two parity disks per stripe as I recall. This was about 20 years ago and I didn't work on that system: my memory of the details is probably wrong.


Thanks!


Now at my 3-person startup, we test our backups once a week! Really.

I once took over management of a startup that was just acquired (this was back in 1999). The people there never did backups. Ever. They told me with a straight face: "We have a RAID. It doesn't need to be backed up."


Hardware RAID is a declaration of war on humanity by firmware. Use ZFS!


Small chance, but, is it the same person? https://www.linkedin.com/in/frank-johnson


Yeah, that's him; a different article mentioned he was a sales VP at Intel.


I would take the $18 million "cost" with a grain of salt. It's not easy to come up with an estimate for the actual cost of lost files and delayed revenue.

It could have been some trigger-happy accountant just tallying up stuff left and right.

Some organizations need a disaster to provoke change. Hopefully, they'll do the right thing now and transition to something that works.


>>The person in charge of the city's systems was Frank Johnson, who went on leave (presumably permanently) after a post-attack audit found the IT director hadn't done much IT directing.

Love this writing, man :) :)

Don't know why a city's chief digital officer would go backup-free?!!! Even at worst, Backblaze would have helped :|


I guess a failure to do backups of any kind is beyond any solution we could offer, but if one could take even a single step forward beyond nothing, a read-only (immutable) offsite destination is well worth your consideration.[1]

Most end users looking for "ransomware protection" probably just want to drag and drop some files, like Dropbox, which is why a simple SFTP (FileZilla) solution is nice, but of course you could point any old thing[2] at it if you were more sophisticated (rough sketch below the footnotes) ...

[1] https://www.rsync.net/products/ransomware.html

[2] borg, restic, rclone, git-annex ... rsync ...
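
If anyone wants a concrete starting point for the "point any old thing at it" part, here's a rough sketch of a nightly push with restic to an offsite SFTP repo. The hostname, repo path, and password file are placeholders, and the immutability still has to come from the destination (provider-side snapshots, an append-only server, etc.), not from this script:

    # offsite_push.py - sketch of a nightly restic run against an offsite SFTP repo.
    # Repo URL and paths are placeholders; immutability comes from the remote end.
    import os
    import subprocess

    REPO = "sftp:backup@offsite.example.net:/backups/cityhall"
    PATHS = ["/srv/data", "/etc"]

    def restic(*args):
        env = dict(os.environ, RESTIC_REPOSITORY=REPO)
        # point at a root-only password file rather than hardcoding a secret
        env.setdefault("RESTIC_PASSWORD_FILE", "/root/.restic-pass")
        subprocess.run(["restic"] + list(args), env=env, check=True)

    if __name__ == "__main__":
        restic("backup", *PATHS)                    # push tonight's changes
        restic("check", "--read-data-subset=1/10")  # re-read a tenth of the pack data

Swap restic for borg/rclone/whatever you already trust; the important property is that the machine being backed up can add data to the destination but can't rewrite its history.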


Surprised someone would use or recommend FileZilla after their numerous adware issues.


I'm just thinking of simple SFTP clients ... WinSCP ? psftp.exe ?

Whatever works for you.


To be fair, the CIO was only on the job for 1.5 years when the ransomware hit. It's not known whether or not he had the budget for reorganizing all of the data, etc. Maybe it was on his list of priorities, but it didn't have the funding and was pushed off until the next year. I'm not absolving him of this, because backups are a first-order concern when you're dealing with IT, but it could be part of the reason why it wasn't implemented yet. This is the reality of working in a bureaucracy like city government.


How do we increase IT competence in the public sector? There's no reason important servers should be running on Windows machines with "backups" consisting of making a copy of a file on the same hard drive as the original.


You can't do anything but let it publicly fail. These incompetencies are usually the result of the top not believing that the cost is worth it. Also a well run IT department often gets "nothing's ever broken, what do they actually do?" so you get rewarded for doing break/fix.


Reminds me of this quote in the preface of the Art of War edition I have on my bookshelf -- I find it to be a fantastic little story:

According to an ancient Chinese parable, a lord once asked his physician – whose two brothers were also healers – which of the three was the most skilled. This doctor was renowned for his expertise and ability in healing throughout China, and he replied, “My eldest brother sees the spirit of sickness and removes it before it takes shape, so his name does not get out of the house. My elder brother cures sickness when it is still extremely minute, so his name does not get out of the neighborhood. As for me, I puncture veins, prescribe potions and massage skin, so my name gets out and is heard among the lords.


I am an infosec professional who pitches utilities and public government. This is spot on.

The second call back is always more expensive for them, but sometimes someone needs to touch the stove themselves to see it’s hot.

If you genuinely can’t afford it, I’ll help for free, but there’s only so much time in the day and I won’t eat the cost of stupid or politics.


Is it not also because the government contracting system is FUBAR? I.e., only certain kinds of companies can contract on the big projects, you need certain clearances for certain kinds of work, and there's a pay gap between public- and private-sector SWE and security roles.

Also, aren't there some pretty tough mandates, like x% of companies must be minority-owned, etc.? I imagine it allows those companies to charge extortionate rates because they hit the relevant quotas and nobody else does.


My first job out of college was working for an IT consulting firm on a project for a state government. All of this describes that experience precisely.

> Also, aren't there some pretty tough mandates, like x% of companies must be minority-owned, etc.? I imagine it allows those companies to charge extortionate rates because they hit the relevant quotas and nobody else does.

One thing I saw was that the big players will have employees who fit the desired characteristics, and they just spin those employees off into their own corporation as needed.


Many of these complaints would apply to physical safety too. But we don't just wait for buildings to burn down to teach organizations a lesson about their heating contractors -- we have fire codes, and inspectors with teeth.

Is there any chance that something like this might be done for IT? Or is it all too young to be done sensibly?


>But we don't just wait for buildings to burn down to teach organizations a lesson about their heating contractors -- we have fire codes, and inspectors with teeth.

We did wait for a fire to create building codes; it was just a long time before either of us was born: https://en.m.wikipedia.org/wiki/Triangle_Shirtwaist_Factory_...

There were certainly plenty of fires before that, but this one was terrible enough to cause a public outcry.


Sure, I agree that fire rules are written in blood. But not the blood of every single district. Once we've decided, we impose these rules across many cities, millions of buildings.

An individual city council is not free to learn the lessons again the hard way, no matter how tight the budget and how close the elections. Either it meets the code, or it gets closed down.


I worked in local govt as a consultant, then as their IT Director for a few years. Nowhere near as big of a place as Baltimore though.

The hardest thing for me to do was convince anyone to get me money. They STILL have servers running that were purchased in 2007-2008. (I moved all critical services off, and made damn sure backups worked). It took two years before I was able to convince the board to give me $5000 for server hardware alone. That went towards the most needed hardware too, not my wish list.

This was before I got things set up so that I could buy anything under $500 without board approval, needing just IT Committee approval. The board required everyone in the county to go through the IT Committee for any purchase related to IT: printer ink, a $10 wireless mouse, etc. That was the first thing I got rid of while I was there. The only reason they did that was because it gave them power over people. I've seen them turn down people over $15 purchases...

The other person who replied is pretty correct about politics. The board there HATED tech for no reason other than if you weren't in the "old boys club" you were garbage to them and they treated you like that.

So overall, it really gave me a great view on why so many counties are so dysfunctional and why government works so slow (for most things at least). I don't regret my time there, but I wish it didn't take such a toll on me.

/Edit Also, there's no reason that they couldn't have tested backups. I was the only IT person for a county of 350+ PCs and a hospital, and I still managed to test backups every month. Shadow Copies saved me so much time too. I also used Linux/pfSense when applicable.


Let me put my cynical hat on: legalese ransomware. Codify in law what is the fair price and a whole cottage industry will spring up to do the education for you.


Time for me to figure out if I can register a corporation under the name Thieves Guild...


Consolidate with counties (for larger/populated counties) or the State. Bigger public sector entities are as effective as a comparable corporation.

Municipal government doesn't have a sustainable funding model for things like IT unless they are really small or really big. In my city (~100k people), they have 3 guys and a couple of freelancers that work <100 hours a year. The staff make like $40-55k/year.


Apply for a job. Run towards the fire. Help fix it.


I applied for the Baltimore City director of digital operations position. I interviewed with Frank Johnson and a variety of other city department heads.

I got the impression they were a group of people that broadly understood their problems and were finding it very difficult to steer the city towards good solutions.

Although Mr. Johnson was the City's Director of IT, realistically he had very nominal oversight over many of the city's actual IT departments - which were spread across a series of departments with their own employees, budgets, and resources. This was something he had been actively trying to improve but with limited progress.

I don't know what the best solution is - their hiring process is flawed and it's difficult to remove or replace problem employees. The budget is about 60% of what it probably needs to be and there's no path towards improving it.


That sounds right. Good for you for going out for it. Hiring / firing in government is certainly not helping the situation, and budget fights are always a struggle.


All the worst jobs I've ever had shared one common characteristic.

Knowing exactly how to fix a problem, having the skills to do so, and being absolutely forbidden from actually doing it.

This sounds like one of those "opportunities".


I would argue in this case that there is no "exactly how to fix it" when it comes to public sector technology, that said - you're absolutely right. Organizations often want the competence and then align things against competent people working to accomplish that goal. In order to solve the problem effectively, you've got to be an organizational hacker with some patience as well as a technologist - which requires a slightly different skillset than solving the easier engineering problem.

All that said, laments about "how do we fix government X" will never be solved if we deem the cost of entry too high to even attempt to fix it ourselves. Obviously this is all in addition to and outside of any democratic process that would lead to reform of how these positions were managed, performed, etc.


So I'm a software engineer in the private sector. What would be the path for me to take if I wanted to become an IT director for a local government?

It's actually not something that would be my first choice, but I do personally know people who have been harmed by incompetent/apathetic IT leadership in local governments. It angers me enough that I would consider a career change.


Stop these positions from being political hires. The IT head had a political consultant, and was the highest paid person in the city government. He was hired by the former mayor who is now under FBI investigation.


Increase pay and reduce hiring friction. Most people with good skills aren't willing to do the long waiting game or take the pay cut to work in the public sector directly.


First you'd have to solve politics. Good luck with that.


I am not sure it's just IT competence. If this were the worst thing going on in Baltimore, it would be doing much better as a city, but it's not.


"That can't be real?"

Really? I'll never forget the description given to me by a close friend who had left a "Government IT" job for the private sector: (1) you have standards and practices like everywhere else, but forget one and you're explaining yourself to a judge instead of a boss, and (2) there's never enough money to do all of the things required to meet regulations, let alone make things better, which is why you'll see silly things you haven't seen on banking websites in a decade still prominent as "security features" on state treasury websites.

I found the story about "important data on a desktop drive", and the shock it caused, surprising. Maybe my past life (a decade ago in infrastructure) was unique, but I specifically recall an incident where I was called down to make an old IBM NetVista (mind you, Lenovo had owned that line for a while at this point) boot up[0]. I noticed some numbers on a printed label and a boot error about the CMOS battery, realized they were the drive geometry values, and realized that nobody looking at the problem had ever heard of plugging in those values (or had probably ever touched an IDE controller -- server guys -- and it was ancient technology).

The rest of the story I might not have completely correct as parts of it are assembled third-hand, but this desktop was located in our data center, hooked up to a modem (2400bps) and it handled submitting charges to another carrier to the tune of "a few layoffs" for every week it wasn't functional.

How does this happen? Well, the company went bankrupt and emerged, then was purchased by another company. During that time, a large part of our operations was moved from one state to another, hardware and all (but mostly not the people). This predates all of that, of course. At some point, a NetVista was put in place to test setting up an automated process for billing this carrier -- something carrier imposed (must have been one of the big guys). The developer who set up the test system was successful ... on his final week of work before being laid off. A few months later, the carrier continued working the migration plan and switched things over, and after a short delay, accounting rang the alarm bells. A busy developer stepped in, found the offending system was connected to test and re-configured it to point to prod. Everyone went on their day. And hey, when desktop migrated everyone to Windows XP, they put UPSes at every desk so it literally ran in a cubicle until the Data Center migration (where it failed the first time and the label was printed). Rather than figuring out what, on earth, it would take to fix it, they put it on a shelf and plugged it into the UPS in the rack until I was called several years later.

[0] I was one of two people who were called when everything else was tried. This sort of incident happened to me in very similar ways at least 4 times (once with an old Thinkpad Laptop).


It makes perfect sense!

The same kind of people that would say no to a pocket-change ransom are the same kind of people that would use taxpayer money to not just do a poor job, but to not do it at all!

The kind of people whose shit don't stink.


Never pay a ransom: it just encourages the bad guys to try harder. Soon they will figure out how to corrupt your backups as well (if there is any way to do that - now we are in an arms race). Better to write off the loss now and ensure the bad guys don't continue to think of new attacks.

There is one exception: if the ransom is paid in such a way that the FBI (or equivalent) can track where the money goes and thus arrest the criminals.


Well, when the FBI recommends paying, what options do you have?

https://www.nytimes.com/2019/08/14/opinion/ransomware.html


TL;DR: if your company/city has millions invested in cyber infrastructure, get cyber insurance. :facepalm:

Also, $76,000 is oddly low... like low enough to be the price of a well-established firm doing a pen test on your network...


True, but managers also usually fail to account for the time and cost of remediation after a pentest.

Had the city engaged in the pentest, would there have been an appetite to spend 10-20x that amount on remediation?


Paying the ransom doesn't remove the need to spend money on fixing the problems. And being marked as an organization that pays ransoms raises the security bar, doesn't it?


For personal ransomware, the demands average around $250, lower than you'd think.


This headline gives me anxiety.




