
It's unfortunate that the development of the web has been so adversarial: a war between individuals who prize privacy and organizations that want functionality.

The law requires certain things. If your protocol doesn't account for those things, then your protocol will be broken to bend to the law's will. It would often be much better to have some small compromise in privacy, rather than lose it all. "All or nothing" has some extreme outcomes.

Yes, some people do want privacy at all costs. But what about the rest of us? We send postal mail in envelopes and leave them sitting in boxes open to the street. Our phone calls traverse networks unencrypted and can be overheard by anyone nearby. Our credit cards and secret PINs are entered at public terminals where they can be stolen. Our laptops sit at home or at work, where they can be broken into and their memory dumped for encryption keys. In practice, 99% of us are completely fine with an acceptable risk of some loss of privacy. We bolster this with laws and punishments should someone violate our privacy. But what we don't do is engineer our lives as if we're all spies hiding from execution.

There are practical changes that could allow for better functionality: not absolute privacy at every conceivable technical level, but still enough privacy that what we care about most remains reasonably private. A mild loss of privacy might be enough to give organizations the functionality they need, and we would lose less to the "all or nothing" consequences.

The thing is, there is an extremely small number of people who have the privilege and power to change things, because they're in the room and we're not. Like the generals carving up Africa because they happened to be in the right room. Personally, I think these decisions have fallen to a few people in a room for far too long. I think we should have public, wide-ranging discussions about how we build the underpinnings of our world. If we don't, the consequences will be more of these "all-or-nothing" outcomes that end up harming more than they help.


> The thing is, there is an extremely small number of people who have the privilege and power to change things, because they're in the room and we're not.

Which rooms? In a lot of cases the situation is that you didn't bother to show up. Not always, but probably more often than you realise.

The IETF Working Group where TLS 1.3 was designed, for example, is just an IETF activity. You can literally just do that; it's actually probably harder to participate in Hacker News.

The "Root Trust Stores" are notionally controlled by a handful of tech businesses. Google, Apple, Microsoft. But, wait, Mozilla also controls one of these "Root Trust Stores" for Firefox and in practice for the Free Unix systems and most Free Software, and what do you know, since they decide behind closed doors we don't know how Google, Apple and Microsoft decide what to do - maybe they each have a thousand smart people deciding - but it sure does seem like they watch what Mozilla does and largely do the same thing. And how does Mozilla decide? An entirely public discussion m.d.s.policy. You could participate in that discussion today.


Might as well ask the hungry to start their own farm


Ideally you'd both remedy the current situation and start building for the future.

The best time to start a business was five years ago; the second best time is now.


For the past year it's been a hirer's market. I've applied to 30 positions I'm perhaps overqualified for, all form-letter rejected. Recruiters ask me for a call; I reply asking what time works, and get ghosted. We actually schedule a time; I'm ready and waiting, and get ghosted. The positions are listed, but nobody's in a hurry to fill them.


> A lot of leadership stuff is universal, but then a lot of it is also dependent on what's needed for the job, and a person's skills and leadership patterns may not be exactly what's needed for the job.

Leadership is what's needed for leadership jobs. It's in the title. All leadership is the same: inspire the troops, block the bullshit, elevate the good shit. How you do that changes by rank.

> The Peter Principle is that if you do a good job you get promoted, as you get promoted it gets harder, eventually you reach a point where your skills aren't enough to overcome the next level of difficulty increase.

The Peter Principle isn't about difficulty, it's about skill set. As you climb the ranks you need a different skill set. The job isn't harder, the job is different.


> The Peter Principle isn't about difficulty, it's about skill set. As you climb the ranks you need a different skill set. The job isn't harder, the job is different.

This is a great observation, and I think it isn't well described in discussions of the Peter Principle. My Dad retired as a Colonel, which required a LOT of political skills. He said he wasn't really interested in the politics of being a flag-grade officer and thought he was too old to learn them. Me, on the other hand, I never progressed past small unit command, and never even got to the point where politics were a major part of my job. In my unit we were all just trying not to get killed and to find opportunities to use the logistics training we received.


> Leadership is what's needed for leadership jobs. It's in the title. All leadership is the same: inspire the troops, block the bullshit, elevate the good shit. How you do that changes by rank.

A big part of leadership, which might be covered under your 'block the bullshit' point, is fighting the higher level managerial battles, and only relying on your lower level staff for their specialized support.

If there is one thing I hate about some managers, it's throwing their employees in to fight political battles with other managers, or high-level executives, while the manager hides in the bushes.

The manager's job is to fight those battles, and yet I've seen them hide from them a lot, while using their workers as shields.


> If there is one thing I hate about some managers, it's throwing their employees in to fight political battles with other managers, or high-level executives, while the manager hides in the bushes.

The “never bring a knife to a gunfight” rule of management.


Assuming you mean the connections between the components: it's a hodge-podge of different models, tools, and techniques. There is no one way to do it, partly because of how different any given system can be from another. Even within software engineering, it really depends on the industry you're in, the application of the software, the stakeholders, and the risks.

But generally speaking, most people only track the connections at design time, as an artifact of the overall architecture. And this isn't great, because modern software systems change constantly, and the entire system development lifecycle is not re-assessed every time a component changes.

So in the best case, with a Waterfall model, you have very well-defined connections in the design, and you have to pray that your SDLC validates that design. But most people prefer Agile, which in practice means "I don't need a well-defined system! #YOLOEngineering". So everything is built ad hoc and nobody even attempts to figure out the entire picture. In that case, Operations may be told to figure it out (they're the ones running it all, so they have the best vantage point), and they tend to implement monitoring and distributed tracing that lets them cobble together a picture of how things are actually working. But that's not fed back into teams' designs; it's just used for addressing problems after the fact.

To be specific: you might use ADRs and manually crafted diagrams to map out the connections, or UML, or some other systems diagramming tool/standard. But often that's created only at a certain level of the system, and doesn't dive deep into component interfaces or tolerances/limits or availability. So the full picture can never be seen from one view, and it's almost never the teams themselves mapping it out.


That's exactly what I meant. For standardization, does Kubernetes help in that regard? For example, when using network rules to whitelist which component is allowed to communicate with which service? I imagine extracting the current rules and building a graph makes discovery easier. No tolerance/limits/throughput or availability data is included, though. The approach is also limited to the cluster level, excluding out-of-cluster communication, while having everything in the cluster may not be that secure.
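Roughly the kind of extraction I'm imagining, as a sketch only: it assumes the official kubernetes Python client, resolves only pod-selector and CIDR peers (namespace selectors and ports are ignored), and just notes CIDR rules rather than mapping them to components.

    # Sketch: build an "allowed to talk to" graph from NetworkPolicy ingress rules.
    # Assumes kubeconfig (or in-cluster) access via the official kubernetes client.
    from collections import defaultdict
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config()
    net = client.NetworkingV1Api()

    allowed = defaultdict(set)  # target pod selector -> peers permitted to reach it

    for policy in net.list_network_policy_for_all_namespaces().items:
        target = f"{policy.metadata.namespace}/{policy.spec.pod_selector.match_labels}"
        for rule in policy.spec.ingress or []:
            for peer in rule._from or []:  # 'from' is a Python keyword, hence '_from'
                if peer.pod_selector:
                    allowed[target].add(str(peer.pod_selector.match_labels))
                elif peer.ip_block:
                    allowed[target].add(f"cidr:{peer.ip_block.cidr}")

    for target, peers in sorted(allowed.items()):
        print(target, "<-", ", ".join(sorted(peers)))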


You're spot on: it would provide limited information. In fact, it may be better to use a network monitor to trace actual network connections and graph that. Old network rules stick around, so a graph of just the rules would show you connections that may not exist. And network rules are often made of CIDRs or port ranges, so they don't tell you which nodes are actually receiving traffic. If the CIDR and port range cover multiple networks with multiple components each, you don't really know what's connected to what. Distributed tracing is basically the same thing from the application layer (and includes network calls).
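For example, here's a minimal sketch of that observation-based view, assuming psutil and enough privileges to see other processes' sockets; in practice you'd run something like this (or conntrack/eBPF tooling) on every node and merge snapshots over time, since a single snapshot misses short-lived connections.

    # Sketch: snapshot established TCP connections on one host and attribute
    # them to the local process, giving a "who actually talks to whom" edge list.
    from collections import Counter
    import psutil

    edges = Counter()
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        try:
            proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except psutil.NoSuchProcess:
            proc = "exited"
        edges[(proc, f"{conn.raddr.ip}:{conn.raddr.port}")] += 1

    for (proc, remote), count in edges.most_common():
        print(f"{proc} -> {remote} ({count} connections)")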

Like yourapostasy says, this kind of post-hoc system design can lead to fallacies, and doesn't contribute to the initial design of the system. If you have nothing else to go on, it helps. But your time is probably better spent investing in formal specifications, and then developing components, connections, and all the operational aspects as implementations and validations of the specification.
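As a toy illustration of that direction (not a real formal method like TLA+, and every service name below is made up): declare the intended communication graph as the specification, then treat whatever your monitoring or tracing reports as an implementation to validate against it.

    # Sketch: the specification is the source of truth; observed connections are
    # checked against it, rather than being reverse-engineered into a "design".
    SPEC = {
        ("web", "api"),
        ("api", "orders-db"),
        ("api", "payments"),
    }

    def undeclared_edges(observed: set[tuple[str, str]]) -> set[tuple[str, str]]:
        """Edges seen in production that the specification never allowed."""
        return observed - SPEC

    observed = {("web", "api"), ("api", "orders-db"), ("web", "orders-db")}
    for src, dst in sorted(undeclared_edges(observed)):
        print(f"violation: {src} -> {dst} is not in the specification")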

Many papers have been published about this, spanning from the 70s to the late 90s, talking about the evolution of software systems engineering. After the 2000s, software engineering became more art than science when the Agile Manifesto gave everyone an excuse to stop caring about rigor.


Oh, ho ho. It is so much more than network dependencies. K8s helps somewhat by pointing in a possible direction, but this is truly an Alice in Wonderland, "just how deep into the rabbit hole do you want to go?" problem space. Note the following is from the big-org perspective; small organizations don't have this problem nearly as bad, but might start seeing it more as we all move into the cloud.

IMHO, the declarative configuration management folks have their heart in the right place, but at their level we've already lost a lot of information and are just shoving around peas on the plate. Post hoc systems information capture is always a lossy, imprecise, empirically-driven affair. Service registries are only scratching the tip of the iceberg.

Everyone is afraid to bite the bullet and start Encoding All The Things, because down that path lie religious wars over what to encode and how to express the encoding. Even with a service registry, I lack information on SLOs, SLAs, RTOs, RPOs, planned outages, A/B (and C/D/E/...) state, ownerships of all kinds, responsibilities of all kinds, architecture, deps of all kinds, onboarding steps and constraints, governance gates, decomm steps and constraints, change approval gates, the timing of each of those, and so on. That's just capturing the information; now imagine the insanity of walking that nightmare graph to find impossible interlocks (which we humans accept by overriding with outages, for example), or of figuring out just how long it should take to accomplish a given set of related goals.
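To give a flavour of what even a tiny slice of that encoding might look like (every field and service name below is invented), plus one of the graph walks you'd want, here detecting a circular dependency as a crude stand-in for an "impossible interlock":

    # Sketch: a service record carrying a few of the fields above, and a
    # depth-first search over the dependency graph to find cycles.
    from dataclasses import dataclass, field

    @dataclass
    class Service:
        name: str
        owner: str
        slo_availability: float            # e.g. 0.999
        rto_minutes: int                   # recovery time objective
        maintenance_window: str            # e.g. "Sat 02:00-04:00 UTC"
        depends_on: list[str] = field(default_factory=list)

    REGISTRY = {
        "checkout": Service("checkout", "payments-team", 0.999, 30, "Sat 02:00", ["ledger"]),
        "ledger":   Service("ledger", "finance-eng", 0.9999, 15, "Sun 01:00", ["identity"]),
        "identity": Service("identity", "platform", 0.9999, 15, "Sat 02:00", ["checkout"]),
    }

    def find_cycle(registry):
        """Return one dependency cycle as a list of service names, or None."""
        visiting, done = set(), set()

        def visit(name, path):
            if name in done:
                return None
            if name in visiting:
                return path[path.index(name):] + [name]
            visiting.add(name)
            for dep in registry[name].depends_on:
                cycle = visit(dep, path + [name])
                if cycle:
                    return cycle
            visiting.discard(name)
            done.add(name)
            return None

        for name in registry:
            cycle = visit(name, [])
            if cycle:
                return cycle
        return None

    print(find_cycle(REGISTRY))  # ['checkout', 'ledger', 'identity', 'checkout']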

We currently handle this as an industry through blunt-force trauma on the problem space itself, while contorting ourselves, Matrix-like, to absorb as little of it in return as possible, through a hodge-podge of techniques, tools, processes, and exasperation. At this point, I'm not exactly certain we'll fully address this space without a Culture Mind-level AI (said tongue in cheek; I really do think there is some promising work being done in this field, it is just a grind).

