Hacker News new | past | comments | ask | show | jobs | submit | cachvico's comments login

It's incredibly ironic


I think it's fair and reasonable to assume that the AI companies will at some point start licensing their source content. Whether that happens through government/legal oversight or voluntarily remains to be seen, but OpenAI is already beginning to do so:

https://searchengineland.com/openais-growing-list-of-partner...


Google has been using unlicensed source content for their search snippets for 20 years, and they seem to be doing fine with it (with the exception of a few news publishers).


The idea with internet search was to get people to find the source of the information they were searching for. As a matter of fact, a lot of information indexing was requested at the source. Google respected the bargain for a while, until they started to obfuscate getting to the source with AMP and their info snippets shown directly in the search results, bypassing the redirect to the source. Then they stopped displaying that info at all, not even on the nth page of search results. The broth has been getting sour for a while now. Some people never wanted crawlers indexing their sites, and there were numerous discussions about how those robots.txt files were ignored.

So what I see here is a historical trend of broken bargains, which is more or less digital theft.


Thanks for the link, I appreciate it. I suppose the issue is that this just further enshittifies the internet into a small handful of walled gardens. Big players get their payday, because they could feasibly sue OpenAI and generate them enough headache. But the vast amount of content on the internet was not built by a small handful of media companies, but rather by masses of small creators. It is their work that OpenAI is profiting from and I have yet to see a credible suggestion on how they will compensate them.


The likely and rather sad outcome of all this is that small creators stop publishing, because what is the point if their work is just going to be regurgitated by some AI for $20/month?


So basically you're saying that the 3 day gap between your first post and your demonstrated understanding is actually reasonable, assuming you spent those 3 days immersed in understanding the code & design at play.



This is (predictably) wrong.


hah


It can be done, it was done, but it doesn't scale.

If you want layers of failsafe and redundancy (as we would do it today), it requires higher level abstractions, e.g. writing in at least C if not C++ or Rust, instead of hand-coded assembler like they did back then.

So yes, simple got us there, but it's not useful to repeat the exercise like that again.


This really proves my point.

I suggest that maybe the key to success is limiting ourselves to simpler TTL logic and making up for it by adding 10% additional material. Immediately somebody responds that TTL logic is the old way and can't be as good as modern C, let alone Rust.

So now instead of a few hundred large, durable transistors and relays, which shrug off radiation, heat, and voltage spikes and have few enough states that they can be formally proved correct, we need delicate 30+ MHz microprocessors which need special radiation hardening, will go up in smoke if their signal lines transiently exceed 10 volts, and run a couple million lines of code.

The arguments here for Rust aren't even wrong, which is the problem. In theory Rust would be better than TTL logic in every way: easier, cheaper, lighter, more capable, more logging, updatable. Professionally, TTL is an argument that can't be won and is therefore career-limiting to make, so finesse wins out.

Yet large projects of every type keep 'mysteriously' failing due to "unforeseen difficulties".


With a statement like that one wonders if Hezbollah leadership itself has been infiltrated.


Israel has an intelligence agency that's generally recognized to be quite competent. I'm sure it would have taken them approximately 5 minutes to learn from their many spies that a new form of communications was being used.




Not really: https://www.nytimes.com/2023/11/30/world/middleeast/israel-h...

They just didn't realize how real this particular plan was. It's quite likely Hamas is regularly bluffing similar attacks in many potentially-compromised channels. If they then see Israel react, they've received info about Israel's capabilities & tactics – and maybe even ways to misdirect Israeli forces with future feints. And any real plans may stay hidden amongst dozens/hundreds of other "false alarms".


Hamas =/= Hezbollah, even though they have a common enemy and their alliances overlap.


Yes, but I believe the post to which I was replying used "the initial attack" that Israel "failed to see coming" to refer to Hamas' October 7th attack.

Hence my reference to Hamas' likely chaffing of Israel's intel, and Israel's false-alarm fatigue, before that "surprise" attack.

Of course Hezbollah likely uses similar tactics - but if as deeply pierced as this latest attack implies, they'd have to wonder if any comm successes they thought they had were just Israel leading them on. I doubt Hezbollah has surprised Israel recently!


The IRGC itself has clearly and spectacularly been infiltrated; read the details of the Haniyeh attack. So, yeah, at this point I don't think anyone in the IRGC network can trust anybody else. The messaging here is pretty intense.


Theoretically, pagers are simpler devices, meaning it would in principle be much easier to analyse both the hardware and software for issues, unlike a mainstream phone OS, which has a bigger software attack surface and can have at least some zero-day attacks known to a state-level actor like Israel.

Although, if there really were explosives inside the pagers, it seems Hezbollah didn't do this.


I mean Israel is behind Pegasus, so it's not exactly a secret that they can get into any cellphone. Israel didn't need to infiltrate Hezbollah, they just needed publicity.


You can also find out what your target is using, find some exploits in it and publicize them, then offer a super good deal on an upgraded model.


It can be pretty awkward to work with, although I've not found anything comparable, free, and cross-platform.

Sometimes the FK (foreign-key) links in cells break. Other times views don't get invalidated when pressing refresh until a reboot. There are different refresh buttons all over the place for different contexts. Lots of minor quibbles despite almost weekly updates. It does have a gamut of features, though, and it's free and cross-platform, so it's one of those love-hate relationships for me.


Haskell?


Last meetup I saw it was an OCaml meetup.


Cardano?


Similarly, I found a bug in clang around 2010 that would only happen at max optimization. I actually did manage to track down the root cause: an array access would fail if the index was > 255. It went something like this: on ARM (this was building for iPhone), the LDR (load register) instruction can encode the memory offset (the array index) as an immediate within the instruction itself if it fits within 1 byte; otherwise the offset has to be loaded from a memory location pointed to by a given register. One of the two cases was faulty.

Was just about to report this, but my Mac got upgraded to the next version of OS X, which magically solved the problem. What does the OS upgrade have to do with compiling? In the world of Macs, Xcode was also upgraded along with the OS, and in the newer version it was already fixed. Dangit!


Brings a whole new meaning to bus factor


I’ve never met a GPU that could survive getting hit by a bus.

