> ...there has been a virtuous cycle between software and hardware development. CPU hardware improves, which enables faster software to be written, which in turn...
This is the exact opposite of the experience I've had with (most) software. A new CPU with a higher clock speed makes existing software faster, but most new software written for the new CPU will burn all of the extra CPU cycles on more layers of abstraction or poorly written code until it runs at the same speed that old code ran on the old CPU. I'm impressed that hardware designers and compiler authors can do their jobs well enough to make this sort of bloated software (e.g. multiple gigabytes for a word processor or image editor) succeed in spite of itself.
There are of course CPU advancements that make a huge performance difference when used properly (e.g. SSE, multiple cores in consumer machines) and some applications will use them to great effect, but these seem to be few and far between.
It's probably fair to call that a reasonable characterization of Go's "home problem domain", though. Contrary to the popular belief that "cloud" means you can just write crappy code and throw cheap hardware at it, when you are truly working at cloud scale you actually try to write software as lean and mean as possible, because everybody cares about 10x and 100x differences in the amount of hardware a particular cloud service takes.
Yes, desktops continue to be gluttonous hogs, because you'd rather have your software now than glorious software three years from now (which you can't have anyhow because the company went out of business trying to polish it instead of releasing it). Lately "mobile" is really pushing it, I think. But in Go's wheelhouse, efficiency has actually manifested.
I've often thought that there is less difference than you'd think between embedded programming and cloud programming; both groups of programmers may literally be counting cycles and watching their L1 caches. It's those in the middle, who have more power sitting around than they know what to do with, who can afford to be a bit "lazy".
I'm not sure users really care how many gigabytes their word processor takes up. How fast it is probably matters more to them. And while wasting CPU cycles on abstraction layers isn't a great way to make a super fast program, if the program still runs at 60fps and took half as much time to develop, then maybe they're worth it.
Of course, when you end up with some standards-driven monstrosity like a modern web browser, you do seem to have a lot of unnecessary abstraction layers and also it's slow.
No, it'd be way faster to just do it all in assembly. These fancy "high level languages" and "memory management" and "libraries" are just cons foisted on poor unsuspecting programmers by middle management and enthusiastic marketers. Real Programmers (TM) don't need any of that shit.
You don't seem to understand what "abstraction" means in computer science. Hint: it has nothing to do with memory management. High level languages like Python/Ruby actually have fewer abstractions than lower level languages like Java because they don't need them, and Python/Ruby programmers tend to want to get stuff done rather than write an ode to the Gang of 4 in XML.
Well, if I asked your average brain-dead Java developer, it would be to make your code more "generic" so you don't have to change a single line of code when requirements change - just tweak some XML somewhere!
And if there is one thing that Java developers are not, it is productive. I will usually be finishing off a project in Python while they are still coding getters/setters on their AbstractProxyFactoryFactory class.
Yes, Java developers still hand-code their getters and setters. What era are you from, again? Also, those "brain-dead" Java developers still write code that smokes your dog-slow Python code regardless of how meticulously you hand-crafted it. So yeah, I would be bitter too.
Uh, the one following hordes of brain-dead Java developers hand-coding getters and setters?
> Also, those "brain-dead" Java developers still write code that smokes your dog-slow Python code regardless of how meticulously you hand-crafted it.
And is delivered 2 years later, requires 5 times more people and costs 10 times more to develop. And Python can be plenty fast if you use the right libraries.
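To be concrete about "the right libraries", here's a rough sketch (assuming NumPy; it's an illustration, not a benchmark) of the same sum of squares done as a pure-Python loop and as a vectorized array operation:

    import numpy as np

    n = 1_000_000

    # Pure-Python loop: every iteration bounces through the interpreter.
    total = 0
    for x in range(n):
        total += x * x

    # NumPy: the loop runs in compiled C over a contiguous int64 array.
    arr = np.arange(n, dtype=np.int64)
    total_np = int((arr * arr).sum())

    assert total == total_np

For numeric work like this the vectorized version is typically one to two orders of magnitude faster, which is the point: the interpreter is slow, but the ecosystem often isn't.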
> So yeah, I would be bitter too.
What do I have to be bitter about? I get paid well to write Python, C and Go on bespoke and interesting back-end systems and don't have to attend daily stand-ups with brain-dead Java developers and Oracle DBAs and listen to them duke it out over whose fault it is that queries are running slow. No thanks.
>Uh, the one following hordes of brain-dead Java developers hand-coding getters and setters?
I can only imagine what they think of you, and it's probably not very good. You must be a nightmare to work with and be around. Which is probably why they keep you away from people.
>And is delivered 2 years later, requires 5 times more people and costs 10 times more to develop. And Python can be plenty fast if you use the right libraries.
Sounds very anecdotal. This seems to be your modus operandi. I could say the same thing about worthless Python developers. No wonder that language is in decline and seldom used in the enterprise.
>What do I have to be bitter about? I get paid well to write Python, C and Go on bespoke and interesting back-end systems and don't have to attend daily stand-ups with brain-dead Java developers and Oracle DBAs and listen to them duke it out over whose fault it is that queries are running slow. No thanks.
Again with the idiotic anecdotes. You must have been exposed to a work environment that didn't suit you, but you somehow think it's the blueprint for all of them. But make no mistake, you are a bitter man, and if Java contributed to that then I'm thankful for its existence.
I agree that IDE-generated code is a bad sign (though honestly a lot of use of getters/setters is brain-dead - they make sense for a library but in application code public fields are fine). But Kotlin means paying all the costs of using Scala (which is already production-ready and more widely supported) but getting very few of the benefits.
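For what it's worth, the same reasoning is why Python people usually skip accessors entirely - a small sketch (not from any particular codebase): start with a plain attribute, and if you later need validation you swap in a property without touching a single call site.

    class Account:
        def __init__(self, balance):
            self.balance = balance  # plain attribute: callers just use acct.balance

    class CheckedAccount:
        def __init__(self, balance):
            self.balance = balance  # goes through the property setter below

        @property
        def balance(self):
            return self._balance

        @balance.setter
        def balance(self, value):
            if value < 0:
                raise ValueError("balance cannot be negative")
            self._balance = value

    # Call sites are identical either way, so the accessor boilerplate can
    # wait until it actually earns its keep.
    acct = CheckedAccount(100)
    acct.balance += 50

In Java you can't make that swap without breaking callers, which is exactly why the getter/setter convention pays off for libraries but buys little inside application code.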
You're comparing libraries here. There are libraries in Java which are not over-engineered. That being said, even if all Java libraries were over-engineered, it would not make you correct. Correctly crafted abstractions make you more productive.
It's just far easier to write bloated software than it is to write efficient software. Particularly in the "just get it done" atmosphere most programmers operate under.
I would say it's a trade-off between write-time efficiency and runtime efficiency. The trend has been that runtime efficiency matters less and less. Compare the modern idea of "efficient" software to the binary that early machines were programmed in.
With Moore's law possibly ending and current memory access speeds though, it may be starting to matter a lot moore.
> This is the exact opposite of the experience I've had with (most) software. A new CPU with a higher clock speed makes existing software faster, but most new software written for the new CPU will burn all of the extra CPU cycles on more layers of abstraction or poorly written code until it runs at the same speed that old code ran on the old CPU.
You could also see it in a positive light: the higher processor speed allows more abstraction layers, which makes development easier and faster (if done right, of course).
ORMs and web frameworks make it much, much faster to develop a CRUD web app than the alternative of writing everything yourself. Of course, you pay with performance, but you gain in time to delivery, which in turn translates to more features in the same time.
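As a rough illustration of that trade (assuming SQLAlchemy; the model and names are invented for the example), this is more or less what a full CRUD cycle looks like through an ORM, versus hand-writing the connection handling, the SQL, and the row-to-object mapping yourself:

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)  # emits the CREATE TABLE for you

    with Session(engine) as session:
        session.add(User(name="alice"))                            # INSERT
        session.commit()

        user = session.query(User).filter_by(name="alice").one()   # SELECT
        user.name = "bob"                                           # UPDATE, flushed on commit
        session.delete(user)                                        # DELETE
        session.commit()

Every one of those lines becomes SQL you didn't have to write or keep in sync with the schema by hand; the cost is the extra machinery sitting between you and the query plan, which is exactly the runtime-for-delivery-time trade described above.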
I know it's a common complaint amongst a certain set to remember some bizarre version of the good old days when software was lean and mean and usable and did all the work our modern software does, but that time literally never existed. What you call bloat, most people either call "usability and features" or simply don't notice at all. The fact that you (apparently) don't like usability and features and prefer to call them bloat doesn't actually give any veracity to your opinions - certainly not enough to make such assertive unsupported statements.
That's not true. There are many websites today that have identical or less functionality than in the past, and they're just SLOW. So many sites I visit do an insane amount of work just to load up a static site. And they scroll poorly, they feel laggy. There's no new functionality, except as far as the developer goes - they're now doing databinding on the client, loading content at runtime (vs sending back rendered HTML), etc.
Edit: I'd also add "on the web" continues to be an excuse for slow, unresponsive software. Even in ~96 or so, I remember folks getting excited. "Look at this online frog dissection thing!" ... It was crappier than what you could do with even a small download. But it was on the web so it was hot. Same thing now.
Websites are a great example since the bloat is clearly visible on the developer's end. I decided to try using Foundation last week. The last time I worked on a website I just generated some static HTML markup and filled it in with my own hand-built CSS. Nobody would give an award for that design, but it was functional enough.
The Foundation install instructions first ask for three dependencies: Ruby, node.js, and git. Subsequently, through gem and npm, dozens of other dependencies were installed. The framework pushes multiple .js dependencies onto the served pages, and heavily encourages using either Compass or libsass to generate the final CSS, making it weigh considerably more. It even had tooling to automatically rebuild as a background process.
All of this, and I basically just modified the example templates slightly to help me build a layout. In theory all sorts of other things could be done with that framework, but the nature of the software stack we have, at least in this domain, favors including the kitchen sink when you need a cup of water.
Don't get me started on node. A dozen thousand files of dependencies, because every function needs its own module containing at least 6 files. And with RequireJS, it takes 30s+ to build this site even though it's not doing anything earth-shattering. And that's before running uglify or any such minifier. I don't get it.
It could still be a case of one person's bloat being another's feature. Though for a website, the "other people" could be advertisers, the people who develop the site, and the people who publish content to it. There was a Mozilla report not too long ago that found that just disabling tracking (not ads) reduced page load times by almost half.
In many cases it's not. In fact, I've recently worked on several projects where the frontend is stupidly heavy for no reason at all. Just sloppy or over-engineered code.
I know the full extent of their capabilities - one of them is just a corporate website with no interactivity. It's just dumb. The previous version was plain static HTML, but as part of the "responsive design overhaul" it turned into this behemoth that makes several dozen requests to open the homepage. Nuts.