I suspect a lot of folks have never worked with Enterprise / Gov customers and don't understand the restrictions and compliance requirements (like data residency, access control, FedRAMP, TISAX, reliability SLAs, etc.) which you get with Azure but not with some "move fast and break things" startup like OpenAI.
My comment is not dissing the author; I'm just pointing out that what most folks get from Azure is compliance (and maybe safety), and an OSS project can't solve that (unless they're running other models on owned infrastructure or on-prem).
Even beyond enterprise, though, people seem to think that the only thing their data is good for, as far as OpenAI is concerned, is training datasets, but that's just not the case.
The authors of this have built something great, but it doesn't protect you from any of those non-training use cases.
When people say stuff like this on Hacker News, it makes me think even more that they haven't done a lot of work with government, or at least not the parts of the government I'm familiar with. Obviously, there are a lot of governments out there. But the FedRAMP private enclaves with IL5 certification for CUI handling offered by the major cloud providers are a hell of a lot more secure than OpenAI's servers, and for workloads that require it, the classified enclaves are probably close to impossible to breach if you're not Mossad. Data centers on military installations, no connection to the Internet, private DX, hardware-encrypted on the installation, with point-to-point tunneling through the national fiber backbone only, and if you get anywhere near the cables, men in black SUVs suddenly show up out of nowhere to bring you in and figure out why.

I'm not even just saying that as a hypothetical. I've literally seen it happen when AT&T dug too close to the wrong line, one they didn't even know about because it was used for a testing facility the Navy doesn't publicly acknowledge. And the data they really cared about didn't even use that line; it was hand-carried by armed couriers who kept hard drives in Pelican cases.
They may be tedious as fuck to implement and make what should be simple tasks take forever, but there are plenty of compliance checklists out there that really do give you security.
It potentially plugs the contractual and liability gaps, which might be more important (talk to your legal and compliance folks). None of your data is going to launch nuclear missiles; if it leaks, that would be unfortunate, but not as unfortunate as the litigation and regulatory costs you could incur.
Everyone gets popped eventually. It's your job to show you operated from a commercially reasonable security posture (and potentially to account for your third-party dependency graph, depending on regulatory and cyber-insurance requirements).
(I report to a CISO, and we report to a board; thoughts and opinions are my own)
Word of mouth referral into the org, last ~5 years as a security architect/cybersecurity subject matter expert, before that DevOps/infra engineer. 20+ years in tech. I rely solely on network and reputation.
Be interesting to people who can provide you opportunity, and ask whenever an opportunity presents itself. If you don't ask, the default answer is no. Being genuinely curious and wanting to help doesn't hurt either.
Compliance isn’t about preventing problems. It’s about identifying risks and determining who is responsible for mitigating those risks and who is on the hook for damages if the risks aren’t sufficiently mitigated.