bicolao's comments | Hacker News

With XCOM 2, if you miss a tile on the last turn, you lose the mission. It's not great execution even if it does apply time pressure. Long War and other mods handle this much better by bringing in reinforcements: you can't stay for long, but you won't lose just because you miscounted the tiles.


seems easily worked around with base64 and friends.


Gentoo Linux


I think you can see a modern CPU as a network. There are some beefy servers doing all the heavy lifting, which is what outsiders see. But there are also a few smaller servers here and there monitoring the system (or even responsible for powering on the entire network).


Author here. This is very much the case for a computer system as a whole also. Basically a network of cooperating microprocessors, including in I/O peripherals etc.

PCIe in particular is literally a packet-switched computer network - it has a physical layer, data link layer, and a transaction layer which is basically packet switched. There are even proprietary solutions for tunnelling PCIe over Ethernet.
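To make the layering concrete, here is a rough sketch as plain structs. The field names and widths are simplified and do not match the actual spec encoding; the point is just that a transaction-layer packet gets wrapped by the data link layer and then framed by the physical layer, like a conventional network stack:

  #include <cstdint>
  #include <vector>

  struct Tlp {                             // transaction layer: reads, writes, completions
      std::uint8_t  type;                  // e.g. memory read, memory write, completion
      std::uint16_t requester_id;          // bus/device/function of the requester
      std::uint64_t address;               // target address for memory requests
      std::vector<std::uint32_t> payload;  // data for writes and completions
  };

  struct LinkPacket {                      // data link layer wraps the TLP
      std::uint16_t sequence;              // per-link sequence number for ack/replay
      Tlp           tlp;
      std::uint32_t lcrc;                  // CRC covering the sequence number and TLP
  };

  // The physical layer then serializes this, adds framing, and spreads the
  // bytes across the lanes of the link.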


To make it even funnier: Digital's last Alpha CPU, the EV7, which was essentially the ancestor of the AMD K8 (which finally brought "mesh" networking to mainstream PCs), actually had an IP-based internal management network!

Each EV7 computer had, instead of a normal BMC, a bigger management node connected to a 10 Mbit Ethernet hub (twisted-pair Ethernet, fortunately :P), and this network was then connected to things like I/O boards, power control, system boards... including each individual EV7 CPU. Each connected component had a small CPU with an Ethernet interface responsible for interfacing that specific component to the network, and part of booting the system involved prodding the CPUs over Ethernet to put them into an appropriate halt state from which they could start booting.


This kind of thing with functional domains accessible over Ethernet existed in at least one laptop as well, where you could connect to the "nodes" once you busted into it (my article): https://oldvcr.blogspot.com/2023/04/of-sun-ray-laptops-mips-...


And you have smaller ones that basically PXE-boot the bigger ones and manage the power, cooling, etc. It's datacenters all the way down.

As someone who used to do embedded work, there's a reason I felt most at home in Erlang and Elixir.

Their processes, which share nothing and use message passing, are really close to how building and coding for an embedded platform looks.


Curlhammer 40k is the ultimate one.


It's part of the language [1] [2]

> Builtin operators && and || perform short-circuit evaluation (do not evaluate the second operand if the result is known after evaluating the first), but overloaded operators behave like regular function calls and always evaluate both

[1] https://en.cppreference.com/w/cpp/language/operator_logical [2] https://en.wikipedia.org/wiki/Short-circuit_evaluation


Note that (as cppreference will also tell you) this only applies to the built-in operators.

If the operator has been overloaded, which is easy to do in C++, then too bad: the overload doesn't short-circuit.
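A minimal sketch of the difference, using a made-up type Tri with an overloaded && (the names are illustrative only, not from any real library):

  #include <iostream>

  struct Tri { bool value; };

  // An overloaded && is an ordinary function call, so both arguments are
  // evaluated before the body ever runs; there is no short-circuiting.
  Tri operator&&(Tri a, Tri b) { return Tri{a.value && b.value}; }

  Tri loud(bool v, const char* name) {
      std::cout << "evaluating " << name << "\n";
      return Tri{v};
  }

  int main() {
      // Built-in &&: the right-hand side is never evaluated, prints nothing.
      bool x = false && (std::cout << "built-in rhs\n", true);

      // Overloaded &&: both loud() calls run, both lines are printed.
      Tri y = loud(false, "lhs") && loud(true, "rhs");

      std::cout << x << " " << y.value << "\n";
  }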

I think if you can't figure out a way to preserve the short-circuit feature, then having a way to overload this operator in your language is stupid. It is, I would say, unsurprising to me that C++ did it anyway.

ETA: It feels like in, say, Rust you could pull this off as a trait LogicalAnd implemented on a type Foo, with a method that takes two parameters of type FnOnce() -> Foo, and then the compiler turns (some_complicated_thing && different_expression), where both sub-expressions are of type Foo, into something like:

  {
    let a = || some_complicated_thing;
    let b = || different_expression;
    LogicalAnd::logical_and(a, b)
  }
To deliver the expected short-circuiting behaviour in your implementation of the logical_and method, you just don't execute b unless you're going to care about the result.
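For what it's worth, you can get much the same effect in plain C++ today by taking the right-hand side as a callable; what you can't do is hang it on the && token itself. A hedged sketch (logical_and and expensive are made-up names, not a standard facility):

  #include <utility>

  struct Foo { bool value; };

  // Takes the right-hand side as a callable so it can decide
  // whether to evaluate it at all.
  template <typename F>
  Foo logical_and(Foo lhs, F&& rhs) {
      if (!lhs.value) return lhs;       // short-circuit: rhs is never invoked
      return std::forward<F>(rhs)();    // only runs when the result matters
  }

  Foo expensive() { return Foo{true}; } // stand-in for some costly computation

  Foo example(Foo a) {
      // Equivalent in spirit to `a && expensive()`, but lazy on the right.
      return logical_and(a, [] { return expensive(); });
  }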


Or this [1]. A bunch of gauche.* modules are not listed there.

[1] http://practical-scheme.net/gauche/man/gauche-refe/Module-In...


> A disadvantage of the RISC design was that since programs required more instructions, they took up more space in memory. Back in the late 1970s, when the first generation of CPUs were being designed, 1 megabyte of memory cost about $5,000. So any way to reduce the memory size of programs (and having a complex instruction set would help do that) was valuable. This is why chips like the Intel 8080, 8088, and 80286 had so many instructions.

> But memory prices were dropping rapidly. By 1994, that 1 megabyte would be under $6. So the extra memory required for a RISC CPU was going to be much less of a problem in the future.

I think this is still a problem? Not memory size, but the speed of the memory bus.


> A disadvantage of the RISC design was that since programs required more instructions,

It's important to understand that, while that was the case back then, today RISC-V has the best 64-bit code density, whereas the best 32-bit density is still Arm Thumb-2, with RISC-V a close second, and actually better with the current B and Zc extensions, which are soon to be ratified.

This is achieved with very little added complexity in the decoder, which becomes a net win if there's any ROM or cache on the chip.


Caches are also very much size-constrained.

Compressed instruction formats phase in and out of favor to this day, trading size for decode complexity.


Windows is not affected because it uses IBRS instead, which is less performant than retpoline. So the question is: is the IBRS overhead more or less than 28% now?


It raises the question: was speculative execution a good idea in the first place?


I highly suspect that a comprehensive accounting would show that any basic hardware optimization like speculative execution has saved the total worldwide computing industry hundreds of millions of dollars, if not more. At large scale, optimizations add up to enormous value.


For the metric of performance/Watt and performance/cost, absolutely.


It's one of the fundamental techniques that allow high-end processors to run fast. If you're happy with a desktop processor that runs at the speed of decades-old processors, maybe you don't need it.


Compared to what alternative?


Slower processors with more efficient software? For as far as we've come with hardware, the actual user experience hasn't changed anywhere near as dramatically, because faster machines just allowed people to push out slower software. It's the same way that hard drives moving from MBs to GBs to TBs just meant that games and applications bloated up to fill all available disk space, and websites bloated up to consume the increased bandwidth as we went from 56k to broadband.


The reason why games are tens or hundreds of gigabytes in size is that modern textures are high-resolution and models are high in detail; if you took away high-capacity storage, then you'd just end up with worse-looking games. The reason why Web sites are big is largely images, videos, JS, and CSS; if you took away broadband, you'd end up with worse-looking and less functional Web sites. It's like the broken window fallacy in economics: the fact that having limitations gives smart developers an opportunity to work cleverly within them doesn't change the fact that limitations leave everybody worse off overall.


> The reason why games are tens or hundreds of gigabytes in size is that modern textures are high-resolution and models are high in detail;

not "the reason", just "a reason". A lot of it is just laziness. Not compressing audio and video or doing a very poor job of it. For example a Fortnite update took the game from 60GB to 30GB without making it look like garbage so how did they do it? Optimization. Why didn't they do it sooner? Because they didn't care.

> The reason why Web sites are big is largely images, videos, JS, and CSS; if you took away broadband, you'd end up with worse-looking and less functional Web sites.

Video and images are again "a" reason (and again the issue is often poor compression), but so are bloated JS frameworks, user tracking, and ads. Even very simple websites can bloat until they're larger than full novels (there are some good examples of this here: https://idlewords.com/talks/website_obesity.htm). You could cut the ads, the tracking, and the JS bloat without any impact on the content delivered or the functionality.

People just don't want to put in the effort to lower file sizes, which is why people turn to things like repackers, who can cut download sizes by more than half. Somehow they manage, and for free no less, but game publishers can't?

We could do much better without sacrificing anything (that users care about) in the finished product. If we had to go back to slower processors, people would be forced to care enough to write better code and optimize for speed. At least until some new trick for more speed was developed, at which point very little in our lives would be faster, but the code would be slower again.


Also, it does no good to compare the best software of the past with the average or worst software of the present. We tend to forget the average software of the past, and I would guess that most of us weren't exposed to the worst of it.

Inefficient software has been a problem for practically all of the history of personal computing. I'll illustrate with two anecdotes:

I was in high school when Office 97 came out. I only have a vague memory of this, but I do remember one of my classmates complaining that it was sluggish on whatever computer he had at the time.

The first commercial product that I shipped, in 2002, was a desktop application written in Java (though, as shipped, it only ran on Windows). I didn't do most of the original development; it was developed by an outsourced development shop, and then I had to take over the project. Whether on my underpowered 366 MHz laptop or my boss's much more powerful machine, the app took considerable time to start up, so much so that we put in background music during some of the startup process (the app was a self-contained audio environment for blind people, so that was our equivalent of a splash screen).

I never really dug into what caused the slow startup, but in hindsight, I would guess that it was the late-binding nature of Java, particularly the fact that classes had to be loaded individually as they were first used, often from multiple jar files, not to mention loading native code from multiple DLLs. The peanut gallery here may say the app should have been a statically linked native executable, but for all practical purposes, that would have meant C++, and if that had been a hard requirement, the app would never have shipped at all.

And while we struggled to sell that particular app, it did have some happy users (indeed, for some of the target users that we did manage to reach, the app was positively life-changing in its impact), so I don't regret that we shipped it in its inefficient state. If the same app were developed today in Electron, with any decent build setup, I'm guessing it would be up and running in a few seconds.

Whether in the 90s, the 2000s, or today, most development teams have never had the resources to produce the highly optimized gems from the past that we so fondly remember and pine for. But the software that we do manage to ship is used and even enjoyed anyway. And, to bring this back to the original discussion, advances in hardware enable more under-resourced teams to ship more useful and enjoyable software that would otherwise not exist.


> for all practical purposes, that would have meant C++, and if that had been a hard requirement, the app would never have shipped at all.

This is a really good point. Slow software certainly has its place. Not everything needs to be as optimized as possible. I don't think that the loss of speculative execution would put us back so far in terms of performance that it would hurt slower languages like Java or Python, but I think it might encourage putting more effort into optimization and probably create more interest in lower-level languages. It might even lead to new creative approaches to speeding things up. That said, I'd really rather processors stay fast if they can do it while still being secure.


I see it as a ladder of reducing complexity. There's you, power Joe, and regular Joe. You can write a program in assembly or C with syscalls and char pointers, but then power Joe cannot. You both can write a program in C# or Java with their runtimes, but then regular Joe cannot. All of you can write Electron and PyQt.

If we didn’t climb from KBs/MHz to GBs/GHz, only few vendors could ship their software, and that would suck even more.

For some reason there is no simple compiled language with a simple but powerful runtime that could do what Electron does in KBs/MHz. It's not unrealistic, and I think the problem lies within us, in our methodologies and our tradition of overcomplicating everything we touch. So anyone who tries to build such a ladder has to cut through layers and layers of nonsensical abstractions, sacrificing performance and memory here and there, and only then do you get something that business people can use.


Could the alternative have been many simpler cores, and an earlier move to high core counts?


Well, it is the bread and butter of any consumer CPU after the Pentium. Without it you'd be running at the speed of early-90s chips, i.e. not even fast enough to play MP3 files without specialized hardware.


> since you can't really decrease your temperature you might as well learn to live at the upper extreme

There is a MinuteEarth video about this on YouTube, if I remember correctly, and the reason is, well... more reasonable. Our bodies work better at higher temperatures, but of course if they get too high everything breaks down. So the optimum temperature is somewhere closer to the upper limit.

