Hacker News new | past | comments | ask | show | jobs | submit | xg15's comments login

The superfans seem to be alive and well, at least here on HN. Just open some arbitrary thread here about a bad Apple policy decision.

Last instance of this that I noticed was the sudden, unannounced loss of support for self-signed certs for IMAP ("totally reasonable, who still uses those anyway?"), but there were earlier threads, e.g. about the possibility of 3rd-party clients for iMessage ("huge security risk!") etc. etc.

I find it noticeable because there is often such a jarring difference between those threads and the usual prevailing stances of the HN community on these issues. I'm pretty sure Google, Microsoft, or Facebook would be raked over the coals for the same decisions.


Apple fanboys earned their mockery, but the pendulum is definitely in the other direction right now (at least on HN). Simply disagreeing in Apple's favor on anything is enough to be branded a superfan. I get accused of it, and I've never even owned an iPhone or Mac.

In this thread, the anti-Apple replies lean much more toward the childish internet-shitposting style that is atypical of HN. E.g.:

"shilling"

"simping"

"I see your reading comprehension skills seemed to have magically changed yet again in the last five minutes."

And so on.


I think what was meant was the direction of travel of the wave as one axis and the height as the second axis. The third axis would then be the direction along the length of the wavefront, as seen from above.

The assumption was that this third axis was irrelevant with respect to the wave's breaking behavior and max height - so simulating waves in a narrow channel of water would be the same as simulating waves in an ocean (as far as max height is concerned).

This paper now showed the assumption is wrong, and interactions parallel to the wave front (or coming from yet other directions) also influence the max height.

At least, that's my understanding.


My SO is in the physics field, and I can tell you what is almost certainly the sad real reason: it was easier to publish/finish the work with only two dimensions. The results were "good enough"; after that it became standard, and people just built on or referenced that work so they too could publish a paper.

There was an article on top of HN for a while this week talking about academic fraud and such that has more unfortunate info on the cutthroat nature of research.


Physicists are famous for simplifying reality. See 'spherical cow in a vacuum'. This isn't fraud or laziness. It is just that reality is complicated and messy, and these sorts of simplifications are often necessary for making any sort of progress on a problem.

I mean, there is nothing wrong with simplifying your assumptions - spherical cows etc. - if it lets you avoid a whole lot of additional complexity and still gives a useful model - and I think that was the case for the "2D waves" assumption. So I wouldn't immediately call it fraud.

The danger occurs if people take the model as orthodoxy and dismiss any deviation as impossible.

I think in the case of "freak waves" you could actually watch the change in public attitude over the last few decades: they used to be seen as physically impossible, basically sailor's yarn - until eyewitness accounts kept accumulating. Eventually, we got empirical evidence in the form of satellite images, and they were accepted as a real, if unexplained, phenomenon. And now we seem to be getting the first verifiable theories that offer causal explanations and allow us to predict and reason about them.

All in all, this seems like a good example of scientific progress to me (except it would be nice if, in the future, we could get to this point earlier, with fewer lives lost).


All models are false but some are useful.

Spoken like a true LLM.

Can you post a screenshot? I don't see anything, but I don't think that player lets me find the correct location either...


Looks like a 3D-printed 40-tooth gear.

https://sariel.pl/2009/09/gears-tutorial/


Yeah, that looks 3D-printed. I wonder, though, why he would 3D print a standard part that is also easily obtainable online. (Unless he already had a printer available and it was easier to print the gear than to buy one.)

OT: I find it interesting that the Technic branch of Lego seems to increasingly separate itself from the rest of the franchise, design-wise - to the point they got rid of the actual bricks!

In older models, there used to be the occasional "flying" section that was built exclusively out of axles, beams and linkages, but the main support structures were still mostly made out of "traditional" Lego bricks (albeit with holes in them).

With recent models, they seem to have made the "flying" style the norm and the standard bricks the exception.

I wonder if this is some indication of Technic becoming its own thing independent of Lego.

(I only noticed the design changes, I have no idea if there are some company politics behind it - but if there is more information I'd be interested to know)


...apparently the community term is "studless" and the design change already occurred in the late 90s or early 00s!

I feel old now.

https://bricks.stackexchange.com/questions/1912/why-does-leg...


And the Apple fanboys are loose again...

Regardless of your opinion on PKI and self-signed certificates, shouldn't we at least be bothered by the fact that Apple just switched off this feature without any communication whatsoever? The community was literally left in the dark about whether this was an official policy change or a bug.

Google, in situations like this, at least put out some corpospeak press release officially "sunsetting" the feature and provided an official deprecation timeline, so users had time to adapt.

Apple is apparently just leaving their users stranded and unable to access their email.
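For context, clients that still support this typically let you trust your own self-signed certificate explicitly instead of disabling verification outright. A minimal Python sketch of that approach (the host name and PEM path below are hypothetical placeholders):

```python
import ssl
from typing import Optional

def imap_context(ca_file: Optional[str] = None) -> ssl.SSLContext:
    """Build a TLS context for an IMAP client.

    Pointing ca_file at your own self-signed certificate (PEM) makes the
    client trust exactly that cert - unlike disabling verification, a
    swapped-in MITM cert would still fail the handshake.
    """
    ctx = ssl.create_default_context()
    if ca_file is not None:
        ctx.load_verify_locations(cafile=ca_file)
    return ctx

# Usage (hypothetical server and cert path):
#   import imaplib
#   conn = imaplib.IMAP4_SSL("mail.example.com",
#                            ssl_context=imap_context("my-server.pem"))
```

Note that hostname checking stays on, so the self-signed cert still has to name the server it's used for.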


I suspect it's worse than that.

Since the UK's Investigatory Powers Act 2016, I've noted that every web browser is necessarily an end-to-end encrypted communication system.

This isn't compatible with what all the spy agencies want. The US can kinda get past that with the reporting obligation for anyone publishing on an app store controlled by a US company. (As a British citizen living in Berlin, I find the corresponding checkbox when publishing apps mildly infuriating.)

Now that Apple is obligated to allow competitors, that doesn't work. Or perhaps the agencies finally noticed that this problem applies to websites and not just apps (perhaps web apps are finally good enough?)

So the agencies find another way — and this time it comes with an obligation to not report what they're doing.

This smells like that other way.

Might not be correct, but intelligence agencies' long-standing history means it's not paranoia.


You'd have to manually trust the MITM cert again? Which you certainly would not do, as you know you didn't create a new self-signed cert at that moment.

I love how the entire free PKI ecosystem is now relying on one single company.

It’s not. There are Let's Encrypt, ZeroSSL, BuyPass, SSL.com, and Google Trust Services[0]. The ACME protocol is standardized, and you can point your client at any of these at any time; other providers can begin providing certificates at any time as well. Some tooling[1] even uses other providers by default.

[0] https://acmeclients.com/certificate-authorities/ [1] https://github.com/acmesh-official/acme.sh/wiki/Change-defau...
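To sketch how interchangeable they are: an RFC 8555 client only needs a different directory URL. The URLs below are assumptions taken from the providers' public docs - verify them against each CA's documentation before use.

```python
# ACME directory endpoints for several public CAs (assumed; check each
# provider's docs). Any standards-compliant client can be pointed at any
# of them - the protocol is the same.
ACME_DIRECTORIES = {
    "letsencrypt": "https://acme-v02.api.letsencrypt.org/directory",
    "zerossl": "https://acme.zerossl.com/v2/DV90",
    "buypass": "https://api.buypass.com/acme/directory",
}

def directory_for(ca: str) -> str:
    """Return the directory URL to hand to an RFC 8555 ACME client."""
    return ACME_DIRECTORIES[ca]
```

Switching providers is then a one-line config change in most ACME clients.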


Haaretz reports that the devices were purchased only recently - and heated up before detonating. [1]

So, that sounds like it could indicate either a supply-chain attack or malware targeting the battery.

[1] https://www.haaretz.com/israel-news/2024-09-17/ty-article-li...


How much stuff is there to fuck with in a standard, untampered-with pager? Seems unlikely this was pure cyber (and some novel battery hack). And if you need supply chain interception to carry out the attack in the first place, why wouldn't you insert explosives? There's a history of these kinds of attacks.

Yeah, agreed. Also agreeing with the sibling posters that the videos that emerged look nothing like batteries catching fire but rather like actual detonations. Nothing an untampered pager should be able to do.

I think microservices can be useful for nontechnical reasons: they turn the "org chart becomes architecture" effect from an unseen, poorly understood force into something explicit that you can observe and manage.

Instead of multiple teams working on the same codebase and stepping on each other's toes, each team can have clear ownership of "their" services. It also forces the teams to think about API boundaries and API design, simply because no other way of interaction is available. And it incentivizes building services as mostly independent applications (simply because depending on more services makes development and testing harder) - which in turn makes your service easier to develop against and test in (relative) isolation.

However, what's of course a bit ridiculous is to require HTTP and network boundaries for this stuff. In principle, you should get the same benefits with a well-designed "modulith" where the individual modules only communicate through well-defined APIs. But this doesn't seem to have caught on as much as microservices have.

My suspicion is that network boundaries as APIs provide two things that simple class or interface definitions don't. First, stronger decoupling: microservices live in completely separated worlds, so teams can't step on each other's toes with dependency conflicts, threading, resource usage, etc. There is a lot of stuff that would be part of the API boundary in a "modulith" that you wouldn't realize is, until it starts to bite you. Second, with monoliths, there is some temptation to violate API boundaries if it lets you get the job done quickly, at the expense of causing headaches later: just reuse a private utility method from another module, write into a database table, etc. With network/process boundaries, this is not possible in the first place.

It's a whole bunch of very stupid reasons, but as they say, if it's stupid and works, it ain't stupid.


>org chart becomes architecture

Tbh, it didn't work for us: our org chart changes more frequently than the codebase's architecture (people come and go, so teams are combined, split, etc. to account for that; many devs also like rotation, because it's boring to work on the same microservices forever), so in the end basically everyone owns everything. Especially when, to implement a feature, you have to touch 10 microservices - it's easier and faster to do everything yourself than to coordinate 10 teams.

>Second, with monoliths, there is some temptation to violate API boundaries if it lets you get the job done quickly, at the expense of causing headaches later: just reuse a private utility method from another module

This is solvable with a simple linter: it fails at build time if you try to use a private method from another module. We use one at work, and it's great.
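A toy sketch of such a check, assuming Python and the underscore-prefix convention for private names (the module names below are made up; real tools in this space, like import-linter, do much more):

```python
import ast

def find_private_cross_module_imports(source: str, current_pkg: str) -> list:
    """Flag `from other_pkg.x import _private`-style imports.

    A build step that runs this over every file and fails on a non-empty
    result enforces the module boundary described above.
    """
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom) and node.module:
            top_level_pkg = node.module.split(".")[0]
            if top_level_pkg != current_pkg:  # import crosses a module boundary
                for alias in node.names:
                    if alias.name.startswith("_"):  # ...and grabs a private name
                        violations.append(f"{node.module}.{alias.name}")
    return violations

code = "from billing.db import _raw_conn\nfrom orders.api import create_order"
print(find_private_cross_module_imports(code, current_pkg="orders"))
# prints ['billing.db._raw_conn']
```

Wiring this into CI gets you the "fails at build time" behavior without any network boundary.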

