
Many teams use topics for this.


Yeah, but you can't really discuss the topic itself, right?

I do think this is a weakness of Gerrit. It doesn't really capture "big picture" stuff nearly so well. At least on GH you can read the top-level comment, which is independent of the commits inside it. Most of the time I was deep in Gerrit doing review or writing patches, it was because the architectural decisions had already been made elsewhere.

I guess it's one of the tradeoffs of Gerrit being only a code review tool. Phabricator also didn't suffer from this so much, because you could just create a ticket to discuss things in the exact same space. Gerrit is amazingly extensible, though, so plugging this in is definitely possible, at least.


On a mailing list, you used to be able to write up the big picture in the "cover letter" (patch#0). Design-level discussions would generally occur in a subthread of patch#0. Also, once the patch set was fully reviewed, the maintainer could choose to apply the patches on a side branch at first (assuming the series was originally posted with proper "--base" information), then merge said side branch into master at once. This would preserve proper development history, plus the merge commit provides space for capturing the big-picture language from the cover letter in the git commit log.
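
For concreteness, a hedged sketch of that flow with stock git commands (branch names and output paths here are made up):

    # contributor: post the series with a cover letter and base information
    git format-patch --cover-letter --base=auto -o outgoing/ origin/master..topic
    git send-email outgoing/*.patch

    # maintainer: apply the series on a side branch, then merge it in one go
    git checkout -b review-topic <base-commit>
    git am outgoing/*.patch
    git checkout master
    git merge --no-ff review-topic   # edit the merge message to carry the cover letter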



Could just steal the key at that point.


Only if the key is generated off-device.


I meant to imply that with this statement:

> and you have a backup of the key


The hard part is not backing yourself into a corner with the feature tree over time. Takes a lot of practice to avoid, and that's something AI could help with. Meanwhile, what their demo does is trivial.


Agreed, sync modelling in Solid Edge and NX can be really powerful, especially when requirements change unexpectedly.


This is data that criminals already publicly released.


If criminals already sell drugs publicly, and you go obtain those drugs and give them away to other members of the public, you will be in trouble. I don't think this is too difficult of a concept to grasp.


Intent matters.


The RADOS K/V store is pretty close. Ceph is built on top of it but you can also use it as a standalone database.


Nothing content-addressed in RADOS. It's just a key-value store with more powerful operations than get/put, and it's more in the strong-consensus camp than the parent's request for coordination-free things.

(Disclaimer: ex-Ceph employee.)


Can you point me towards resources that help me understand the tradeoffs being implied here? I feel like there's a ton of knowledge behind your statement that flies right past me, because I don't know the background behind why the things you're saying are important.


It's a huge field, basically distributed computing, burdened here with the glorious purpose of durable data storage. Any introductory text long enough becomes essentially a university-level computer science course.

RADOS is the underlying storage protocol used by Ceph (https://ceph.com/). Ceph is a distributed POSIX-compliant (very few exceptions) filesystem project that along the way implemented simpler things such as block devices for virtual machines and S3-compatible object storage. Clients send read/write/arbitrary-operation commands to OSDs (the storage servers), which deal internally with consistency, replication, recovery from data loss, and so on. Replication is usually leader and two followers. A write is only acknowledged after the OSD can guarantee that all later reads -- including ones sent to replicas -- will see the write. You can implement a filesystem or network block device on top of that, run a database on it, and not suffer data loss. But every write needs to be communicated to replicas, replica crashes need to be resolved quickly to be able to continue accepting writes (to maintain the strong consistency requirement), and so on.
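
For a feel of what talking to RADOS directly looks like, here's a minimal sketch using the python-rados bindings (pool and object names are made up):

    import rados

    # connect using the standard Ceph config file
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # open an I/O context on a pool
    ioctx = cluster.open_ioctx('mypool')

    # write_full replaces the whole object; it returns only once the write is
    # durable and any later read -- at any replica -- is guaranteed to see it
    ioctx.write_full('greeting', b'Hello, world')
    print(ioctx.read('greeting'))  # b'Hello, world'

    ioctx.close()
    cluster.shutdown()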

On the other end of the spectrum, we have Cassandra. Cassandra is roughly a key-value store where the value consists of named cells, think SQL table columns. Concurrent writes to the same cell are resolved by Last Write Wins (LWW) (by timestamp, ties resolved by comparing values). Writes going to different servers act as concurrent writes, even if there were hours or days between them -- they are only resolved when the two servers manage to gossip about the state of their data, at which time both servers storing that key choose the same LWW winner.
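
The LWW rule itself is tiny; a sketch of what each replica does when merging (plain Python, representation made up):

    def lww_merge(a, b):
        # each write is a (timestamp, value) pair; the highest timestamp wins,
        # and a timestamp tie is broken by comparing the values themselves
        return max(a, b)  # Python's tuple ordering implements exactly this rule

    # two replicas that accepted different writes converge on the same winner
    assert lww_merge((1721400000, "red"), (1721400000, "blue")) == (1721400000, "red")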

In Cassandra, consistency is a caller-chosen quantity, from weak to durable-for-write-once to okay. (They added stronger consistency models in later versions, but I don't know much about them, so I'm ignoring them here.) A writer can say "as long as my write succeeds at one server, I'm good", which means readers talking to a different server might not see it for a while. A writer can say "my write needs to succeed at a majority of live servers", and then if a reader requires the same "quorum", we have a guarantee that the write wasn't lost due to a malfunction. It's still LWW, so the data can be overwritten by someone else without anyone noticing. You couldn't implement a reliable "read, increment, write" counter directly on top of this level of consistency. (But once again, they added some sort of transactions later.)
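
Here's what that caller-chosen consistency looks like with the DataStax Python driver, as a sketch (contact point, keyspace, and table are made up):

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('demo')

    # "one server is enough": fast, but a reader at another replica
    # might not see this write for a while
    session.execute(SimpleStatement(
        "UPDATE users SET email = 'a@example.com' WHERE id = 42",
        consistency_level=ConsistencyLevel.ONE))

    # quorum write + quorum read overlap in at least one replica, so the
    # read sees the write -- but it's still LWW underneath
    row = session.execute(SimpleStatement(
        "SELECT email FROM users WHERE id = 42",
        consistency_level=ConsistencyLevel.QUORUM)).one()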

The grandparent was asking for content-addressed storage enabling a coordination-free data store. So something more along the lines of Cassandra than RADOS.

Content-addressed means that e.g. you can only store "Hello, world" under the key SHA256("Hello, world"). Generally, that means you need to store that hash somewhere to ever see your data again. Doing this essentially removes the LWW overwrite problem -- assuming no hash collisions, only "Hello, world" can ever be stored at that key.
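
The whole idea fits in a few lines; a minimal sketch in plain Python:

    import hashlib

    store = {}  # stand-in for any dumb key-value backend

    def put(data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        store[key] = data  # idempotent: same content always lands at the same key
        return key         # keep this hash somewhere, or the data is lost to you

    def get(key: str) -> bytes:
        data = store[key]
        # reads are self-verifying: the key proves the content wasn't swapped
        assert hashlib.sha256(data).hexdigest() == key
        return data

    k = put(b"Hello, world")
    assert get(k) == b"Hello, world"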

I have a pet project implementing content-addressed convergent encryption to an S3 backend, using symlinks in a git repo as the place to store the hashes, at https://github.com/bazil/plop -- it's woefully underdocumented, but basically a simpler rewrite of the core of https://bazil.org/ which got stuck in CRDT merge hell.

What that basically gets me is that e.g. ~/photos is a git repo with symlinks to a FUSE filesystem that manifests the contents on demand from S3-compatible storage. It can use multiple S3 backends, though active replication is not implemented (it'll just try until a write succeeds somewhere; reads are tried wider and wider until they succeed; you can prioritize specific backends to e.g. read/write nearby first and go over the internet only when needed).

Plop is basically a coordination-free content-addressed store, with convergent encryption. If you set up a background job to replicate between the S3 backends, it's quite reliable. (I'm intentionally allowing a window of only-one-replica-has-the-data, to keep things simpler.)
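
For reference, convergent encryption derives the encryption key from the plaintext itself, so identical content encrypts to identical ciphertext and still deduplicates across users. A minimal sketch (not necessarily how plop does it):

    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def convergent_put(plaintext: bytes):
        key = hashlib.sha256(plaintext).digest()  # key depends only on the content
        # a fixed nonce is fine here: each key encrypts exactly one plaintext
        ciphertext = AESGCM(key).encrypt(b"\x00" * 12, plaintext, None)
        addr = hashlib.sha256(ciphertext).hexdigest()  # content address
        # upload ciphertext under addr; keep (addr, key) locally, e.g. in a symlink
        return addr, key, ciphertext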

Here are some of the more industry-oriented writings from my bookmarks. As I said, it really is a university course (or three, or a PhD).

https://www.the-paper-trail.org/page/cap-faq/

https://codahale.com/you-cant-sacrifice-partition-tolerance/

https://en.wikipedia.org/wiki/Conflict-free_replicated_data_...


I upvoted this, but I also wanted to say that this summary helps me gain a better grounding in an undoubtedly complex topic. Thank you for the additional context.


Thank you for this, I learned a lot from this comment.


Lots and lots of heavy machinery uses Windows computers even for local control panels.


But why does it need to be remotely updated? Have there been major innovations in lift technology recently? They still just go up and down, right?

Once such a system is deployed why would it ever need to be updated?


They're probably deployed to a virtualized system to ease maintenance and upkeep.

Updates are partially necessary to ensure you don't end up completely unsupported in the future.

It's been a long time, but I worked IT for an auto supplier. Literally nothing was worse than some old computer crapping out with an old version of Windows and a proprietary driver. Mind you, these weren't mission-critical systems, but they did disrupt people's workflows while we were fixing them. Think things like digital measurements or barcode scanners. Everything can be easily done by hand, but it's a massive pain.

Most of these systems end up migrated to a local data center and then deployed via a thin client. Far easier to maintain and fix than some box that's been sitting in the corner of a shop collecting dust for 15 years.


Ok but it’s a LIFT. How is Windows even involved? Is it part of the controls?


The real problem is not that it's just a damn lift and shouldn't need full Windows. It's that something as theoretically solved and done as an operating system is not practically so.

An Internet of Lift can be done with <32MB of RAM and <500MHz single core CPU. Instead they (whoever "they" are) put a GLaDOS-class supercomputer on it. That's the absurdity.


An Internet of Lift can be done with <32KB of RAM and <500KHz single core CPU.


You'd be surprised at how entrenched Windows is in the machine automation industry. There are entire control system algorithms implemented and run on realtime Windows; vendors like Beckhoff and ACS only have Windows builds for their control software, which developers extend and build on top of with Visual Studio.


Absolutely correct, I've seen multi-axis machine tools that couldn't even be started, let alone get running properly, if Windows wouldn't start.

Incidentally, on more than one occasion I've not been able to use one of the nearby automatic tellers because of a Windows crash.


Siemens is also very much in on this. Up to about the 90s most of these vendors were running stuff on proprietary software stacks running on proprietary hardware networked using proprietary networks and protocols (an example for a fully proprietary stack like this would be Teleperm). Then in the 90s everyone left their proprietary systems behind and moved to Windows NT. All of these applications are truly "Windows-native" in the sense that their architecture is directly built on all the Windows components. Pretty much impossible to port, I'd wager.


Example of patent: https://patents.google.com/patent/US6983196B2/en

So it's for maintenance and fault indications. It probably saves someone time digging up manuals to check error codes, wherever those may or may not be kept. It could also display things like height and weight.


Perhaps "Windows Embedded" is involved somewhere in the control loop, it is a huge industry but not that well-known to the public;

https://en.wikipedia.org/wiki/Windows_Embedded_Industry

https://en.wikipedia.org/wiki/Windows_IoT


We do ATMs - they run on Windows IoT - before that it was OS/2.


Any info on whether this Crowdstrike Falcon crap is used here?


Fortunately for us, not at all, although we use it on our desktops - my work laptop had a BSOD on Friday morning, but it recovered.


According to reports, the ATMs of some banks also showed the BSOD, which surprised me; I wouldn't have thought such "embedded" devices needed any type of "third-party online updates".


Security for a device that can issue cash is kind of important.


It's easier and cheaper (and a lil safer) to run wires to the up/down control lever and have those actuate a valve somewhere than it is to run hydraulic hoses to a lever like in lifts of old, for example.

That said, it could also be run by whatever the equivalent of a "PLC on an 8-bit microcontroller" is, and not some full embedded Windows system with live online virus protection, so yeah, what the hell.


Probably for things like this - https://www.kone.co.uk/new-buildings/advanced-people-flow-so...

There's a lot of value in Internet-of-Things everything, but it comes with its own risks.


I'm having a hard time picturing a multi-story diesel repair shop. Maybe a few floors in a dense area but not so high that a lack of elevators would be show stopping. So I interpret "lift" as the machinery used to raise equipment off the ground for maintenance.


Several elevator controllers automatically switch to safe mode if they detect a fire or security alarm (which apparently is also happening).


The most basic example is duty-cycle monitoring and troubleshooting. You can also do things like digital lock-outs on lifts that need maintenance.

While the lift might not need a dedicated computer, they might be used in an integrated environment. You kick off the alignment or a calibration procedure from the same place that you operate the lift.


how many lifts, and how many floors, with how many people are you imagining? Yes, there's a dumb simple case where there's no need for a computer with an OS, but after the umpteenth car with umpteen floors, when would you put in a computer?

and then there's authentication. how do you want key cards which say who's allowed to use the lift to work without some sort of database which implies some sort of computer with an operating system?


It's a diesel repair shop, not an office building. I'm interpreting "lift" as a device for lifting a vehicle off the ground, not an elevator for getting people to the 12th floor.


> But why does it need to be remotely updated?

Because it can be remotely updated by attackers.


Security patches, assuming it has some network access.


Why would a lift have network access?


Do you see a lot of people driving around applying software updates with diskettes like in the old days?

Have we learned nothing from how the uranium enrichment machines were hacked in Iran? Or how attackers routinely move laterally across the network?

Everything is connected these days. For really good reasons.


Your understanding of Stuxnet is flawed. Iran was attacked by the US government in a very, very specific targeted attack, with years of preparation to get Stux into the enrichment facilities - nothing to do with lifts connected to the network.

Also, the facility was air-gapped, so it wasn't connected to ANY outside network. They had to use other means to get Stux onto those computers, and then used something like 7 zero-days to move from Windows into the Siemens computers to inflict damage.

Stux got out potentially because someone brought their laptop to work, the malware got into said laptop and moved outside the airgap from a different network.


"Stux got out potentially because someone brought their laptop to work, the malware got into said laptop and moved outside the airgap from a different network."

The lesson here is that even in an air-gapped system the infrastructure should be as proprietary as is possible. If, by design, domestic Windows PCs or USB thumb drives could not interface with any part of the air-gapped system because (a) both hardwares were incompatible at say OSI levels 1, 2 & 3; and (b) software was in every aspect incompatible with respect to their APIs then it wouldn't really matter if by some surreptitious means these commonly-used products entered the plant. Essentially, it would be almost impossible† to get the Trojan onto the plant's hardware.

That said, this requires a lot of extra work. Excluding subsystems and components that are readily available in the external/commercial world means a considerable amount of extra design overhead, which would both slow down a project's completion and substantially increase its cost.

What I'm saying is obvious, and no doubt noted by those who have similar intentions to the Iranians. I'd also suggest that standard controllers such as the Siemens ones used by Iran either wouldn't be used or would need to be modified from standard, both in hardware and in firmware (hardware mods would further bootstrap protection if an infiltrator knew the firmware had been altered and had found a means of restoring the default factory version).

Unfortunately, what Stuxnet has done is to provide an excellent blueprint of how to make enrichment (or any other such) plants (chemical, biological, etc.) essentially impenetrable.

† Of course, that doesn't stop or preclude an insider/spy bypassing such protections. Building in tamper resistance and detection to counter this threat would also add another layer of cost and increase the time needed to get the plant up and running. That of itself could act as a deterrent, but I'd add that in war that doesn't account for much, take Bletchley and Manhattan where money was no object.


I once engineered a highly secure system that used (shielded) audio cables and amodem as the sole pathway to bridge the airgap. Obscure enough for ya?

Transmitted data was hashed on either side, and manually compared. Except for very rare binary updates, the data in/out mostly consisted of text chunks that were small enough to sanity-check by hand inside the gapped environment.


Stux also taught other government actors what's possible with a few zero-days strung together, effectively starting the cyberwar we've been in for years.

Nothing is impenetrable.


You picked a really odd day and thread to say that everything is connected for really good reasons.


Or being online in the first place. Sounds like an unnecessary risk.


Remember those good old fashioned windows that you could roll down manually after driving into a lake?

Yeah, can’t do it now: it’s all electronic.


I’m sure that lifts have been electronically controlled for decades. But why is Windows (the operating system) involved?


but why do they have CS on them? they should simply not be connected to any kind of network.

and if there's some sensor network in the building, that should be completely separate from the actual machine controls.


Compliance.

To work with various private data, you need to be accredited and that means an audit to prove you are in compliance with whatever standard you are aspiring to. CS is part of that compliance process.


Which private data would a computer need to operate a lift?


Another department in the corporation is probably accessing PII, so corporate IT installed the security software on every Windows PC. Special cases cost money to manage, so centrally managed PCs are all treated the same.


Anything that touches other systems is a risk and needs to be properly monitored and secured.

I had a lot of reservations about companies installing Crowdstrike but I'm baffled by the lack of security awareness in many comments here. So they do really seem necessary.


It must be security tags on the lift which restrict entry to authorised staff.


who's allowed to use the lift? where do those keycards authenticate to?


Because there's some level of convenience involved with network connectivity for OT.


That sounds...suboptimal.

I would imagine they used specialized controller cards or something like that.


They optimize for small-batch development costs. Slapping in a Windows PC when you sell a few hundred to a thousand units is actually pretty cheap. The software itself is probably the same order of magnitude, cheaper for the UI itself...


And it's cheap both short- and long-term. Microsoft has 10-year lifecycles you don't need to pay extra for. With Linux, you need IT staff to upgrade it every 3 years, not to mention hiring engineers to recompile the software every 3 years with the distro upgrade.


Ubuntu LTS has support for 5 years, can be extended to 10 years of maintenance/security support with ESM (which is a paid service).

Same with Rocky Linux, but the extra 5 years of maintenance/security support is provided for free.


that's just asking for trouble.

