The cloudy layers of modern-day programming (vickiboykis.com)
280 points by antirez on Dec 6, 2022 | 165 comments



Like many other commenters (of a certain age?), I too have this unsatisfied feeling about a particular kind of modern software development. The kind where you never really dig down and design anything, you just plumb a bunch of stuff together with best practices you find on Stack Overflow.

Many commenters are attributing this problem to the modern high-level tools we now have access to. But I don't think this is the crux of the issue. You can face the same issue (you're plumbing things, not designing a system) whether you are working with low-level or high-level components.

Heck, you could be working on a hardware circuit, but if the only thing you had to do was make sure the right wires, resistors, capacitors, etc. were in place between the chips, you're still just doing plumbing work.

To me, one of the most satisfying things about programming is when you can build something great by starting with a concept for your lower-level primitives, your tools, and then work up through the higher levels of design, ultimately having the pieces you designed fit together to form something useful to the world.

This building-things-to-build-things idea is even satisfying in other areas. Just gluing a bunch of wood together to make a piece of furniture is fine, but building your own jigs and tools to be able to do the kind of cuts that enable the end design you envision is way more satisfying, and opens up the design space considerably.

If I had to lament anything (and perhaps this is what's most in alignment with the post) it's that most of the high-level primitives you touch these days tend to be sprawling, buggy, unfocused, and just generally not of high quality or performance. It's possible for high-level primitives to avoid these pitfalls (e.g. SQLite, the canonical example) but it does tend to be the exception.

I think there is still plenty of interesting and satisfying software engineering work to be done when starting with high-level libraries and tools. You just need to think about how to use their properties and guarantees (along with maybe some stuff you build yourself!) to enable the design of something more than just the (naively-plumbed) sum of the parts.


Challenge yourself to use stdlib only, for a while or for the foreseeable future.

It sounds unrealistic and I'm going to get flamed, but hear me out. It works. Most of my development these days is in reasonably complete languages like Go, Rust, Zig, and various scripting languages, so your mileage may vary if you're writing in something like Rune, Hare or Carbon that is still taking shape.

If you think I'm crazy but have a lingering skepticism, challenge yourself to spend one day, one week, or one month using the stdlib only. If that is unrealistic in your setting, a compromise could be to only use libraries that you created yourself. You won't come out of it as an enlightened samurai monk with a celebrity HN presence, but you will look at the libraries you want to use afterwards with an immense sense of scrutiny that didn't exist before.

For those who think this is just 100% bull, consider how we survived before Google, YouTube, and Stack Exchange existed.


I'm not gonna flame you, but I will note that, as someone who gets paid to use my judgement to decide on the optimal trade-off between quality, time spent on the project, and its future maintainability... I feel like all three will suffer quite a bit with this self-imposed "handicap".


This is the main crux of the issue IMO: feature output velocity. With the enforcement of sprint-scale development scope, you really don't have time to iterate on a wide-reaching and supportive base layer of software infrastructure so you reach for tools that will get you what you need within the timeframe demanded by whoever hired you.


As someone who's been fortunate enough to work on and lead these kinds of projects (and watch coworkers work on them), I've come to a near-opposite conclusion, which is that sprints reveal how bad the ROI on this work tends to be.

The way many of these projects go is that someone very smart works with subject matter experts to map out the problem space. The smart person (or people) then goes and begins building this set of primitives and integrating the product, adjusting as they go along and accruing some warts.

After 6-12 months we have this beautiful tool that improves developer velocity, new features are easy to code; then disaster strikes. It turns out the map of the problem space was wrong! A bunch of things the team believed to be invariants aren't! Suddenly business needs are forcing developers to tear down walls in their beautiful abstraction castle until all that's left is a tangled maze no other developer has a hope of understanding.

Now the project is a millstone around the dev team's neck rather than the velocity boost they'd hoped for.

The way these projects more often succeed is some senior engineer pastes together an abstraction layer and sends it out into the world. It gets heavily abused for years until finally the team says "We know this sucks, and there's a lot of business value if we make it nice, let's invest a bunch of time in retrofitting this" and fights like hell to make the business case. IMO this tends to lead to better projects and value (though unfortunately many companies make the fight harder than it should be).


Fiction and non-fiction writers have been struggling with these issues long before software was a concept. Their solution: "writing is re-writing".

I didn't mean to imply a dichotomy. I don't think an intense planning phase up front solves the problem. But feature-sprints tend to crowd out refactor-sprints because of the demands from above for new features. Downtime working on backend stuff is not perceived by the customer as anything benefiting them. But here, the backend is a codebase no one enjoys working with and the customer is the C-suite expecting X features this month because X or X-1 features were pushed last month.


So what is the optimal way (or ways) from the beginning?


Ultimately, "It depends" and no answer will satisfy all cases.

My general advice is that software engineering has a lot of well worn patterns for problems, stick to those as much as possible. Their great advantage is that any experienced software engineer will recognize them and onboard quickly, allowing you to focus on those parts of the problem specific to your project/company.

In most cases, whatever common pattern you shoehorn your problem into will suffice for the purposes of the business. It will be ugly, and have warts, but will be generally maintainable and not often touched. Again, if this turns out to be wrong after it's been battle tested and shown value, you can begin to migrate away to something new.

There are exceptions to the above, and many companies don't actually go through the motions of following well-worn patterns even when they think they do, e.g. many companies with public APIs make a common set of mistakes that we've known how to solve for over a decade, and that we've known are easy to solve if you deal with them up front.


I am thinking very hard about the CAP theorem while working on a billing system for a cloud API right now, and it is an absolute joy. No, it won't deploy in version 1, 2, or 3, but it might in version 4, and if it does, it will be glorious.

You can find cool technical problems anywhere as long as you are willing to take the path less traveled.


> You can find cool technical problems anywhere as long as you are willing to take the path less traveled.

After doing that a few times, I'm no longer sure if the reward of tackling cool problems to create more robust, better, faster components is worth the stress of missing deadlines.

Looking back on what value the better work materializes and what is, per YAGNI, usually wasted, I just had a thought: perhaps the right way is to take the easy/dumb way and focus all available time/effort on optimizing it for performance - instead of abstraction and extensibility. Because in my experience, nobody ever extends the code the way you envisioned - if they do it at all, they do it by first refactoring it to suit their own idea. And nobody ever goes back to fix performance. Therefore, making things abstract and extensible is mostly wasted work - but making things fast pays back for as long as the code is in use.


I am doing this for a particular purpose, though, that no billing system I have seen has. I hate metered API billing, and I don't want to use it. In particular, I don't want a customer running up a $10,000 bill and calling me for a refund. It probably will cost me O($1,000) to give them a refund, between the processing fees and the lost compute, and I will be out that money. Most companies that would ask for a refund also won't pay the bill when it arrives (which is, I think, why AWS is so liberal with refunds).

Instead, I want to do credit-based billing: you buy credits, and when you get to 0 credits, you are cut off (with an auto-refill option for the "metered billing experience," but with strict spending limits). This is, in my opinion, a much better UX than metered billing. From a distributed systems perspective, it's isomorphic to "metered billing with a hard spending cap," which may be the ultimate version.

The problem with credit-based billing (and why nobody does it) is that if you have a service in multiple datacenters, you have to consistently update a database to make sure that you don't drop below 0 credits, and that is very slow. However, by fiddling with the CAP theorem the way CockroachDB/Spanner do, I think we can do credit-based billing and make it feel like the AP system that metered billing is, and behave like a CP system only when we absolutely need to.

Also, this is basically all enabled by the fact that AWS has a precise time service.

In theory, version 1 of the API will be only in one region, version 2 will have active/passive redundancy, and around version 3 or 4 I want to switch to active/active in several DCs to give low latency and high reliability.
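To make the idea concrete, here's a minimal sketch of how the fast path could stay local while only the refill touches the consistent store. This is just my reading of the scheme, not the poster's actual design; all the names here (CreditLedger, LocalMeter, lease_size) are made up for illustration:

  import threading

  class CreditLedger:
      """Stand-in for the consistent (CP) store, e.g. Spanner/CockroachDB."""
      def __init__(self, balance: int):
          self._balance = balance
          self._lock = threading.Lock()

      def reserve(self, amount: int) -> int:
          # The one strongly consistent operation: atomically carve off
          # up to `amount` credits as a local lease. Never goes below 0.
          with self._lock:
              granted = min(amount, self._balance)
              self._balance -= granted
              return granted

  class LocalMeter:
      """Per-datacenter meter; the fast (AP-feeling) path never leaves the region."""
      def __init__(self, ledger: CreditLedger, lease_size: int = 1000):
          # lease_size should comfortably exceed the largest per-request cost.
          self.ledger = ledger
          self.lease_size = lease_size
          self.local_credits = 0

      def charge(self, cost: int) -> bool:
          if self.local_credits < cost:
              # Slow path: behave like a CP system only when the lease runs dry.
              self.local_credits += self.ledger.reserve(self.lease_size)
          if self.local_credits < cost:
              return False  # hard cutoff: the customer is out of credits
          self.local_credits -= cost
          return True

The trade-off in a scheme like this is that credits stranded in an unused lease can make a customer look broke slightly early; that's the price of never dropping below 0 globally.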


> I hate metered API billing, and I don't want to use it.

> Instead, I want to do credit-based billing: you buy credits, and when you get to 0 credits, you are cut off (with an auto-refill option for the "metered billing experience," but with strict spending limits). This is, in my opinion, a much better UX than metered billing.

How do your customers feel about that? Have you researched if your customers are comfortable spending their money upfront to buy credits they might only use much later (if ever)? For small amounts of money that might not be a big deal, but if we're talking about thousands of dollars that might look different.


Going one step further, challenge yourself to _not_ use stdlib, just raw language constructs.

Need a hashtable? Write one.

You won't come out with a celebrity HN presence, but you may gain enlightened samurai monk status.
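For anyone tempted, a hashtable really is an afternoon exercise. A toy sketch (open addressing with linear probing, grow-by-doubling, no deletion), not anyone's production code:

  class HashTable:
      def __init__(self, capacity: int = 8):
          self._slots = [None] * capacity  # each slot is None or a (key, value) pair
          self._count = 0

      def _probe(self, key):
          # Linear probing: start at hash(key) and walk until we find
          # the key or an empty slot. A load factor <= 0.5 guarantees one exists.
          i = hash(key) % len(self._slots)
          while self._slots[i] is not None and self._slots[i][0] != key:
              i = (i + 1) % len(self._slots)
          return i

      def put(self, key, value):
          if (self._count + 1) * 2 > len(self._slots):
              self._grow()
          i = self._probe(key)
          if self._slots[i] is None:
              self._count += 1
          self._slots[i] = (key, value)

      def get(self, key):
          slot = self._slots[self._probe(key)]
          if slot is None:
              raise KeyError(key)
          return slot[1]

      def _grow(self):
          # Double the capacity and re-insert everything.
          old = self._slots
          self._slots = [None] * (len(old) * 2)
          self._count = 0
          for slot in old:
              if slot is not None:
                  self.put(*slot)

(Python already has dict, of course; the point is the enlightenment, not the artifact.)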


Going one step further, challenge yourself not to use raw language constructs, but dive deep into assembly.

Need a conditional? Time to jump around.

I wrote a web server that scales to over 1 million requests per second, and I found out it's much more maintainable, scalable and environmentally friendly for our company.


That sounds like it was a lot of fun but I'm wondering what assembly gave you that was not available in e.g. C? My understanding is that most compilers can out-optimise the average developer, so are you an above-average developer (well I guess you are) or did assembly enable you to do something that was difficult in a higher level language? Or was it more about the challenge (which I'm totally on board with by the way)?


> My understanding is that most compilers can out-optimise the average developer

Do you happen to know where (what source) you got that from? I'm genuinely curious, as to my knowledge, compilers are generally still easily fooled by things that leave them unable to vectorize code or factor out conditional jumps.

To give an example here, see Mike Acton's talk below (from about 43:10 onwards) in which he describes the compiler failures.

https://youtu.be/rX0ItVEVjHc?t=2590


No specific source I could point to really, just that I've been hanging around message boards like this for a long time. I didn't mean to say it's impossible to beat a compiler or they don't have any blind spots, but thanks for the link - sounds interesting :-)


> not to use raw language constructs

Nit: assembly is a language. The phrase you are looking for is probably "structured programming".


Touche


I don't think you can compare the internet of the 90s to today's internet. The expectations and scale are nowhere near similar.


Challenge yourself to not even use stdlib once in a while. There are some interesting insights to glean about how much room for improvement we have even at the very bottom. https://youtu.be/BrBb0mqoIAc


I wrote entire systems with Turbo Pascal, and then Delphi, out of the box. Many others did the same with Visual Basic 6, or the Microsoft Office 2000 suite with VBA, before the .NET infection took hold and Microsoft lost its mind.

All before Google, YouTube and Stack Exchange.


A side effect of this is that almost all job position advertisements are disgusting to look at. They are all about this kind of mindless glue-code programming, but wrapped in marketing speak to make it look like "you get to use awesome bleeding edge latest technologies" when in reality it is "you have to figure out how to configure 10 different things to work together to sort of kind of produce the intended behavior".

In the last 3 years I don't think I ever once saw a job description on any popular job board advertising that you will do some actually interesting programming. The only ones I've seen have been on Twitter, but from companies doing things in areas I have no experience in (e.g. game engine programming).


I suspect that this is why leetcode tests are so prevalent.

They basically test for distance from school, and not much else, as the algorithms aren’t really reflective of real-world work, which, as the article states, is really fairly simple “glue,” binding together prefab sections.

If someone is good at, and energized by, writing “from scratch,” and "learning the whole system," then they are actually not what you want. You want people that are good at rote, can learn fairly shallow APIs quickly, and are incurious as to why things work.


I have exactly the same problem.. got sucked into "the cloud" 4-5 years ago at my current employer. Now I desperately want to get another job. Something with preferably no or minimal cloud involved. The trouble is that the jobs that sound interesting don't reflect my expertise.. Should I try to start from 0 with a junior salary? That does not make sense with a family. I don't really have an idea yet.. but I urgently need to change something, because my current work is killing everything I ever felt for software development.


Try to find work in visual effects; lots of interesting work, and they're on-prem. Anyone with a software background can get in!


Right, but the hours are long and the pay is low. VFX is even worse than games from a career perspective.


That's not true in programming. I was 9-5 for 7 years there, over multiple studios. My last studio (one of the most known ones) actually had 4-day weeks.

It is definitely true for VFX Artists. And you will definitely get a pay cut compared to tech (50%-ish TC)


Or, just don't work in webapps. Get into embedded programming. Or join a games studio.

I have a friend who is writing code to run on a sort of exoskeleton meant to benefit disabled people and help them walk. He has never in his life "deployed to the cloud" and wouldn't have the foggiest idea of how to do it.


It’s only a matter of time before someone realizes how helpful a GPS transmitter would be in an exoskeleton for disabled people.

Now you have a cloud component.


You know that all GPS transmitters are in space, right? GPS is a unidirectional technology where GPS receivers don't (and can't) talk back to GPS satellites in any way.


If the caretaker wants to get the coordinates back, then you have to go through a server at some point. I think GP was thinking along the lines of something like Find My iPhone, where the GPS coordinates are sent to the cloud. You will need a mobile baseband radio alongside the GPS receiver.


Yeah, you are right of course. I was thinking of a GPS receiver with a mobile connection to send data to a central server, and it turned into ‘GPS transmitter’ :)


I wouldn’t call GPS the cloud


Yeah it's way beyond that


That sounds nice, but how do you get into embedded if your experience is 5 years, 10 years or more in, say, distributed systems/cloud/web programming/etc.?


Easy - just apply, and mention that you have experience, but also that you have a reference from this site. This site has the greatest minds in terms of embedded experience, and any company worth working for will instantly know to give your application a closer look.


Yes. Whenever I work on a "serverless" app, I spend more time messing around with IaC tools like Terraform than I do writing actual application code. It's sad.


>Heck, you could be working on a hardware circuit, but if the only thing you had to do was make sure the right wires, resistors, capacitors, etc. were in place between the chips you're still just doing plumbing work.

A lot of modern hardware design feels like that, take a microcontroller, some peripheral chips, connect them together, and copy the datasheets for whatever support passives they need.


I came to this same conclusion last week when I started writing my own webgpu renderer. I went into it with no knowledge of graphics and without using libraries. Having to create my own generic abstractions for pipelines, passes and buffers has been a massive creative and educational experience. I haven't felt this satisfaction from programming in years from my day job.


Go is mostly devoid of this. The general approach is all stdlib and the odd extra dependency. Of course, a lot of fields are inaccessible.

Lisp can enable fantastic development speed, allowing you to build your own primitives. Racket's ecosystem is all high quality too.

I believe Julia has a less buggy ecosystem, allowing you to pipe together natively written ML things, as opposed to Python's dumpster-fire ecosystem.


> I believe Julia has a less buggy ecosystem

Love the Julia language. But its ecosystem is definitely less complete, less documented and arguably more buggy than the Python equivalents.

This is not a failing on Julia's part; the Python ecosystem is 50x bigger and backed by multiple of the largest IT/ML firms on the planet.


On your hardware tangent vs. cloudgunk...

People who are really serious about software should make their own hardware. - Alan Kay

... via https://github.com/globalcitizen/taoup


As some other commenters have said, this is really a self-inflicted problem. The author has chosen to do an EDA task which is very manageable on a laptop via a convoluted stack of cloud services - possibly just to illustrate a point. But even if this all worked smoothly, the fact that it is far removed from "software engineering" has more to do with the fact that it's a data analysis project. If it were about writing firmware for audio hardware, it would look very different.

Nitpick: ! for shell commands is an IPython feature, not Jupyter. It doesn't work with other kernels.
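For anyone bitten by this, the portable route is the standard library, since `!` only exists where IPython is the kernel. A minimal sketch (the directory name is just an example):

  # In an IPython-backed notebook you can write:
  #   In [1]: !ls data/
  # With any other kernel (or plain Python), shell out via the stdlib instead:
  import subprocess

  result = subprocess.run(["ls", "data/"], capture_output=True, text=True)
  print(result.stdout)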


It's also well known that pandas is memory-inefficient, and Dask would probably have done the memory estimation they described for them (after the author dismissed it). They're really just showing they don't understand these tools that well.

https://wesmckinney.com/blog/apache-arrow-pandas-internals/
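For what it's worth, that kind of estimate is close to a one-liner in pandas, and Dask exposes the same call lazily so the whole file never has to fit in memory first. A sketch, with a hypothetical file name:

  import pandas as pd
  import dask.dataframe as dd

  # pandas: load everything, then report bytes per column
  # (deep=True counts the actual string contents, not just pointers)
  df = pd.read_csv("events.csv")
  print(df.memory_usage(deep=True))

  # Dask: build the computation lazily and materialize only the totals,
  # reading the file chunk by chunk
  ddf = dd.read_csv("events.csv")
  print(ddf.memory_usage(deep=True).compute())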


Good article, great points. The Knuth quote on lack of creativity is right on the money. It's why I've been drifting away from hands-on programming even though I'd rather not. I still love programming as much as I ever did as a teenager. Late at night when everyone is asleep and I can work on my own code, it's as much joy as it ever was. What's different is that at $DAYJOB back in the 90s and 00s it used to be just as fun all day.

Nowadays when it's just gluing frameworks together and configuring AWS services... it doesn't really feel any different intellectually than cleaning toilets. Sure the pay is better but as a creative challenge they're pretty much on par.


> Nowadays when it's just gluing frameworks together and configuring AWS services... it doesn't really feel any different intellectually than cleaning toilets.

Comparing your six figure white collar job to basic janitorial work is pretty damn cringe and pretty objectively untrue.


> Comparing your six figure white collar job to basic janitorial work is pretty damn cringe and pretty objectively untrue.

I grew up doing hard farm/ranch labor outside in 95-105 degree TX heat and humidity. I agree with the ancestor comment that manual labor is a hell of a lot more satisfying and stimulating than gluing together AWS services with IAM/RAM snippets from Stack Overflow and updating some design doc about it. If it paid adequately I might do manual labor in the day and solve actually challenging and fulfilling technical problems at night. Programmers don't get to program much anymore :(


You're right. Cleaning toilets is at least a laborious task.

Your standard CRUD applications and web services are largely just a rigamarole of reciting the right incantation and duct taping bits together. It's immensely non-stimulating work when done properly.

This isn't an insult by any means. It's a testament to the triumphs of decades of engineering efforts to turn the process of orchestrating extremely complex electronic systems spanning continents into a largely trivial task for most projects.


> web services are largely just a rigamarole of reciting the right incantation and duct taping bits together

Not only this, but there'll always be an a**ole to say that we're doing that wrong, and add a few more steps in between just to make the process "better".


They're not saying the pay is the same. They're saying the intellectual aspect of the work is not terribly different.


OK, what if we say plumbing then? Same idea, and the pay is within an order of magnitude at the median.


Personally, I think it's pretty hard to make that kind of comparison accurately without having professional experience in both fields. One could also argue that it's as intellectually stimulating as being a doctor, but how do we actually know that?

I've cleaned toilets professionally, and I'll say once and for all: writing software, no matter how monotonous or boring, is nothing like cleaning toilets. And I'd be willing to bet that anyone trying to make such comparisons has never had to do that kind of work.


Some developers even make more than plumbers.


> it doesn't really feel any different intellectually than cleaning toilets

There is a big difference: one is cleaning shit, the other is creating more shit :)


What scares me about ChatGPT is less so that I’ll lose my job (though it’s possible), but more so that I’ll be using language models to work at higher and higher levels of abstraction doing mainly configuration tweaking. Some of the particular pain points expressed in the article should be removed with AI in the loop development, but it’s another step away from “real programming”, which is what attracted most of us to this field. Yes, creating things is the end goal, but I’m terrified that I won’t be able to extract nearly as much joy when my job becomes largely prompting GPTn to swap out frameworks and UI paradigms for me like magic.


How much joy do we really get from day-to-day work if we are really honest? I am using ChatGPT to help me get things done (Node.js programming) for my startup. It's getting me closer to having this client project done so I will have more time for my own internal AI thing I am adding to the main service.

No one is stopping you from writing 6502 assembly code in your spare time. That's actually still a somewhat popular hobby.


Well, as a matter of fact my current job is embedded, so I admit I’m being a bit whingy.


This.

I already started using ChatGPT to solve problems because I can’t be arsed to read through ten vendors worth of documentation. It wrote me a fairly complete and accurate chunk of code the other day to solve a problem.


I've found it's often like pairing with a junior developer who is familiar with whatever problem you're describing and types insanely fast, and that's without learning too much how best to prompt it. A recent discovery was that you can ask it what problems may exist in its code, then ask it to fix them.


I have to admit that having it give short summaries of framework docs and models felt like drugs in my veins.


what were some prompts you asked?


- how to filter tabs in a firefox extension (turns out some APIs are only accessible in background scripts). the fun part is that it gave me an obsolete use case, so I told it "it's wrong, firefox uses promises now", and it fixed itself and used the new API.

- something about django custom inlines; the answer was mild, but it integrated various aspects of the framework in a short answer, which helped a lot (django is particularly horrendous, I can't suffer its strange style, so that played a part too)


It's already like this. At my first job in the 90s I wrote our own linked list classes, a logging framework and a persistence layer. Now it feels like I write CSS and YAML all day.


You still get to do something as close to the metal as writing raw CSS? I'm reliably told that's cavalier and you should be writing something that compiles to CSS.


Think about that guy who wrote his own operating system as a hobby.


RIP Terry Davis <3

https://templeos.org/


I was actually thinking about Linux, but this applies too


+100 prayers


By the way, the HiSOFT that created DevPac 4 (assembler/debugger) for the ZX Spectrum is still around. According to their about page they now build websites?!? https://www.hisoft.co.uk/


You could switch to a job where you’re not writing css and yaml?


oh yea.

using a software framework like django for "rapid application development" gives me a feeling closer to writing configuration files than to actually "programming" (where "programming" is writing hardcore algorithms).

but don't get me wrong, I liked doing that, and I got paid to do it. But let's call it what it is: that python code (django app) was really django framework config.

this is a similar phenomenon but worse; at least back then I could look around all of the actual code of the program for which I was writing configuration as code.

then, the thing about hardcore algorithms is that they need to be written once, and then everybody can use them. this is a giant problem. as I think about this, such hardcore algorithms are digital artifacts, so they are subject to the same problems all other digital artifacts (media, videos) are: a problem also known as software piracy. but software has been about composing proven algorithms together since day 1. the problem by this point is socio-economic, not technical.

I see the whole debate around "who will pay for critical infrastructure software" as another instance of "how should artists make money in the digital era".

But this comment (which I'm editing) is already at 0 points. somebody doesn't like what I'm trying to say, but I knew this already.


> they need to be written once

Well, if you were writing abstract pseudo-code, then maybe. In practice, the same algorithms are often reimplemented multiple times.

> and then everybody can use them

ah, if only that were true... implementations are often not that flexible, nor are programmers that keen on utilizing existing implementations.

> this is a giant problem

Problem? To the extent it's true - it's a boon, not a problem. Imagine if whenever you made a chair - suddenly everyone could get a chair without taking your own chair or spending any time and resources on chair construction. It's a miracle!

> a problem also known as software piracy.

1. Piracy is when people on ships with guns rob other ships. Arrgh, matey!

2. Sharing and copying software or other media is a good thing, not a problem.

PS - I haven't downvoted you. Point taken about django "config-programming".


well sure, I completely agree that it's indeed a boon. a HUGE boon.

the problem I describe is not a technical one, but a social one. and it's only a problem due to the current way society works. It's only a problem for some people (e.g. me); those who are well satisfied by this "status quo" don't see any issue beyond lacking enforcement of IP 'rights' and the need for better DRM, copy protections, and other things like that.

you're focusing on the technical aspects (plus you seem to be deliberately using the wrong definition of piracy).

I'm talking about the social economic aspects: I'm saying that this boon brought about by digital technology is only benefiting a select few. I tend to think about this 'boon' as potential that we collectively seem to be choosing to forego; I think it's my life's mission to do everything I can to avoid this "foregoing" of the great potential unleashed by digital technology; I feel like I'm swimming against the current most of the time.


> I'm saying that this boon brought about by digital technology is only benefiting a select few.

If you mean how most people in the world are under stricter social control with the advent of technology than they were in primitive tribal society (or maybe even middle-age agrarian society), then maybe.

But if you mean the benefit from being able to make these copies - then definitely disagree: People get to have tons of free (libre & gratis) software, and lots of free (gratis) cultural products like images, audio, video and text. Are you bemoaning what we do with this glut, and perhaps the problem of over-abundance, shallowness of cultural taste etc.? Otherwise I'm still missing your point.

About piracy: I'm using the right definition, it's the copyright crooks who are using the wrong one :-P


well, I'm not considering software specifically, but any and all "digital" media; including and without any special focus on software.

I suppose software is the least affected by this alleged "problem". after all, the 'free and open culture' movement spun out[1] of the free (and open) software movements.

[1] citation needed.


I think you nailed it with configuration vs programming. modern software development is configuration. notice that "development" is not programming either. so this trend has been going on for a long time


Good development is 99% configuration.

The problem is that the tools and frameworks we use have such bad configuration.


it's config until you need an option not in the 'language' and then you have to get real dirty


I think there was a survey, fairly recently, that found that a top language in source code was YAML.

That says something.


If you feel like "modern programming is a boring configuration mess", then ask yourself: what have you done in order to have an interesting job, projects, challenges, etc.?

I mean, if you decided at some point that $big_salary for throwing JSONs via REST from a CRUD app is what you want to do until you pay off your loans (e.g. a decade), then it's fine, but don't be shocked that you aren't doing bleeding-edge R&D at some fancy place.

Like what prevents you from putting in the effort for a year? Two? And switching from X to Y?


Capability trap. Not everyone is capable of effectively working 2 jobs at once, and generally speaking no matter how big that $big_salary is you aren't going to have more money or time.

Also there are lots of ways for a company to say they are doing interesting/meaningful things or whatever and effectively not be doing them. And no matter how great the work you may be doing, you don't want to be doing it if it doesn't pay enough.


So the only way is to complain your whole life?


Most of the time life is "choose the least shitty option" and not "choose the best option".

That's just the world we live in.


Let's not reduce this article to "complaining"; it didn't just say "modern software development sucks" and be done with it. The author gave an overview of what she thinks is wrong about the field, with a fairly detailed example and references. Even if you disagree with her take, these kinds of articles force us to reflect on the state of the industry. There is nothing wrong with that.

Should we be mindlessly churning away taking no issue with the state we're in?


tbf I've been mostly focused on HN's comment section's opinion


I mean, if you can’t, or don’t want to change your situation, it’s an effective outlet. There’s also more than enough people feeling the same to commiserate, so you’ll never be alone.


"leave the situation or accept it. All else is madness."


Every project I worked on was JS/Node hell. Then I decided to switch to Elixir/Erlang and now I’m much happier. Of course, there are uninteresting or badly engineered projects but overall it’s much better.


There's a true joy in working with some languages. It's like using a great pen or driving a well-made car. You get focus and flow.


F#. Wield the power of .NET with functional programming and beautiful syntax.


> If you feel like "modern programming is a boring configuration mess", then ask yourself: what have you done in order to have an interesting job, projects, challenges, etc.?

These two things don't connect to each other at all


How so?


How do they not connect? How do they connect?


Yup - how do they not connect in your opinion?


It's not a yes or no question, I'm asking you to explain how these two separate things have anything to do with each other. The burden of proof is on you, you're the one who said it and you're asking me to prove a negative.


wut? what "proof" are you talking about, there's nothing to prove.

The connection is simple: If you don't like something, you're free to change it (in this case)

If you feel like your job is easy and you feel bored, then you're free to quit and apply to place which will challenge you.

If you expect that some magic will happen and one day you will come to your job and be tasked with coding a spaceship, then you're optimistic as hell.

So, the question stands - what did you do in order to improve your situation? if nothing, then do not expect magic to happen because that's unlikely.


> Like what prevents you from putting effort for year? two? and switching from X to Y?

I don't think it only takes 1 year to switch to something interesting. Any R&D lab wants you to have a PhD with published papers.


The first step you can take is to actually start working with this on a daily basis.

Maybe over the years you'll gain the expertise to work in research, idk.


for sure. beats complaining about having to write CRUD.


> then ask yourself - what have you done in order to have interesting job, projects, challenges, etc?

This thousand times for all the whiners. Learn an esoteric skill, get hired for esoteric job.


Do architects reinvent the I-beam every time they design a new building? No, of course not.

The reason society works is because you can reuse abstractions that other people have already invented. It allows you to scale. Without economies of scale, you end up with Baumol's cost disease, which is extremely obvious in the US in industries like child and elderly care.

We don't want most software devs to be doing anything but gluing stuff together. If they weren't, we really screwed up somewhere.


I like this allegory because it puts the complexity into a visual mindset and brings to light an interesting question about the nature of our abstractions. For example:

Are our abstractions I-beams or Pre-Fabs[0]?

We all know that the rise of pre-fabs is, at its heart, the story of cheap developments all lazily (and hastily) thrown together in arrangements that are of low quality and mid-to-low beauty, and that do not last for very long.

Skyscrapers of the early 20th century stand today (with I-Beams) and are considered by many to be beautiful, maintainable etc;

People want pre-fabs since they're cheap, so the economy will always be in the pre-fab.

But pre-fabs have a limited shelf-life, and reconstruction is more expensive than spending a little extra up front.

[0]: This is the sort of pre-fab I am talking about: https://en.wikipedia.org/wiki/Prefabricated_building#/media/...


I-beams are programming languages, libraries, and compilers.

Pre-fabs are massive frameworks where mixing them always looks hack-glued together. Same as if you glue two prefabs from different vendors.

That said, industrial plants aren't gothic cathedrals. They are a collection of buildings plopped together.


I think that's a pretty good categorisation that I could definitely agree with.

So I guess with that allegory you have to ask: am I building a factory, an office, a home or a cathedral?

You can then plan your architecture accordingly.


Except the I-beam isn't only owned by a single company and is interoperable. (Unlike modern clouds and SaaS, which are neither of these things.)


I-beams aren't abstractions; they are a well-designed standard that can consistently meet certain expectations.


I-beams are both an individual abstraction and part of the greater abstraction.

Your catalog of beams, bolts, brackets, weld patterns, rebar, piles, and concrete pour standards are all abstractions over extremely difficult subfields of materials engineering and structural engineering.

They exist so that your engineer can focus on building the structure using parts and resources with known, standardised behavior under the conditions the building will be put in.

Of course you'll see engineers break away from these abstractions when they need to for a given structure but those abstractions do exist and are commonly used so that a given structural engineer doesn't also need a PhD in materials engineering and countless other specialized fields.


An abstraction is something that lets you hide complexity. An I-beam is exactly what you see; there's nothing hidden.


There absolutely is a ton hidden. Your ASTM A992 structural steel W-beam (wide I beam) or S-beam (standard I-beam) is abstracting a ton of details with regards to:

- the construction process.

- material composition.

- thicknesses of the flange.

- width of the flange.

- depth/height.

- max length.

- shear strength.

- tensile strength.

- elongation (stretch/sag when approaching the point of failure).

- corrosion resistance.

- tolerances.

- cost per foot.

And the list only goes on.

A structural engineer can say "I have a specific structure in mind and need a beam that can withstand XYZ conditions" and pick out the matching beam from a table without considering any of the details as to how one would actually achieve those properties and withstand those conditions.

Alternatively an engineer can know their budget, what grade of steel, and what size of beam they want to use ahead of time and simply design a structure using those abstracted pieces in coordination with the architect.

Beams and other construction elements are the structural engineer's equivalent of a Hardware Abstraction Layer (coming from the perspective of an engineer who works close to the metal when I can). It lets the engineer abstract most of the minutiae regarding working with the hardware itself by giving the engineer a largely uniform interface for choosing how their product interacts with the world. Generally the abstraction holds but sometimes it's leaky and details show through. Likewise sometimes the abstraction is insufficient and the engineer has to do some custom work that breaks away from the established path to meet the constraints.


Would you rather do some library gluing, or reinvent a thousand wheels with every project? The latter is neat the first few times. Whatever your preference, your value as an engineer is much higher if you can glue. Imagine if a carpentry workshop gives a carpenter a fully-fledged set of industrial power tools, but the carpenter insists she can recreate all the other tools with just her whittling knife because it's in the 'true spirit of carpentry'.


I'd rather re-invent some wheels. The problem with VendorOps as I see it is that quite often the vendored "solutions" aren't solutions: they do not meet the requirements! Yet … they sort of get like 20% of the way there, so they get adopted nonetheless, and the devs toil away on trying to get glue code to push it the remaining 80% of the way.

But if we had a system that we owned, then it could be adjusted to fit the requirements, elegantly. But we don't, so we can't.

The other problem is the "Ops" part: vendor owned systems are opaque AF, and when something goes wrong, impossible to debug. Then you become a support ticket monkey, praying you can convince the powers that be on the other end that a. it is truly their stuff that's broken, not yours and b. we pay for it, so yes, you should support it.

When a. or b. fails, then you end up writing yet more glue code to try to work around the bugs and outages that your vendor just doesn't give a shit about.


We got random failures on our API gateway to lambda connections, and the answer we got back from the support agent was something like “automatically retrying on failure is industry best practice”.

I just wanted to shout at them to fix their damn system, but of course we ended up implementing retries instead…


For a while AWS Glue had an issue where the running job counter would permanently increment. This was a problem if you only wanted one copy of a job running. The advice support gave us was to increase the allowed count by one. I saw references to this issue that were years old. I think it is fixed now because I haven't seen it happen in months.


> Would you rather do some library gluing, or reinvent a thousand wheels with every project?

Wheels. Absolutely wheels. Library gluing is what gets us garbage like Electron that needs to die in fire.

Imagine if a carpentry workshop gives a carpenter a bunch of IKEA kits and tries to conflate wanting to make furniture that isn't cost-cut prefabricated crap with insisting she can recreate all the other tools with just her whittling knife because it's in the 'true spirit of carpentry'. (Honestly, if anything, comparing Electron to IKEA is an insult to IKEA - there are cases where using IKEA is actually reasonable, they just aren't actually carpentry.)

(You use libraries when it makes sense to use libraries, just like you use 2x4s when it makes sense to use 2x4s. Sometimes you can make the whole thing out of 2x4s, just like you can make a whole program out of:

  tr -cs A-Za-z '\n' | tr A-Z a-z |
    sort | uniq -c | sort -rn | sed ${1}q
but if you're just gluing (screwing?) 2x4s together, you're going to get bad results when you need something that's not a 2x4.)


Ideally a healthy blend of both. Wheels where it relates to your company's core competencies or where there's a gap in the market. Glue for everything else (you don't need to invent an infrastructure provisioning solution unless you're an infrastructure provisioning company - there are plenty of mature solutions). Other places, like application libraries, it might make sense.


Wheels aren't licensed rather than owned and charged per revolution... yet.


Shit they stumbled onto my YC pitch deck


I work in CS education, and I've often wondered what this means for how we're preparing students. I don't teach frameworks or cloud services (and would be woefully under-qualified to do so), I teach the topics that have long been thought of as "foundational" for moving forward in computer science. The logic I've always clung to is that if students have a strong understanding of how to build things from scratch, they can apply that as they move towards more modern development tools. More and more I'm questioning the accuracy of that belief.

Perhaps we simply need to more clearly separate the goals of studying computer science from studying programming/development. For the time being, however, I'm left feeling like I may be doing students a disservice avoiding the reality of what "modern-day programming" has evolved into.


Sincere thanks for caring for your students!

In my experience, though, the set of {software development jobs that can be performed exclusively with the notions acquired in CS} is pretty much the same as the set of {boring software development}.

CS graduates tend to work on information systems, and we all know information systems are the epitome of boring software: https://thedailywtf.com/articles/programming-sucks!-or-at-le...

That's why R&D departments are full of people who learned a trade and later taught programming and software development themselves, and the bits with no business value are outsourced to consulting companies.


I agree that most of the routine software work feels like, umm, filling forms? The way I've made peace with it is to accept that the work that pays the bills is going to be boring. After considering ourselves to be special as software developers for many years, maybe we accept that it is not so special after all, like flipping burgers.

Two ways out of the existential dread: one, start a company around a problem statement that feels exciting to you, so that you get to pick your tools, processes and what not. But this is not a realistic/affordable option for most people. Also, if your startup achieves even a moderate level of success, you'll be back to solving boring problems.

The other is to find pockets of software development outside work, where you can still solve interesting problems, like developing small games. Joseph White, who created the famed programmable fantasy game console Pico-8, alludes to this (https://youtu.be/87jfTIWosBw?t=1080)! It was an explicit goal to build a tool for solving cute, interesting problems, because routine software development feels like gluing things together. The whole YouTube talk linked above is worth watching. But the trick here is to forgo commercial motivations. People make fun, cute little games on Pico-8 and distribute them as cartridges. Occasionally there's a hit like Celeste. But otherwise it's a small community of people solving problems and building things just for fun.


> like flipping burgers.

Programming an information system is like flipping burgers; developing industrial software is not.


I worked on what you could classify as industrial software. I was still flipping burgers. With a rusty spatula. The cloud people at least get slightly nicer spatulas.


One thing I had to learn, to preserve my sanity here, was to simply not argue about certain topics, such as software architecture. It was like there was this wall of people who would emerge from the shadows any time I argued for better architecture, because it wasn't directly responsible for Important Enterprise Business Features.

Now I realize there’s probably more than one flavor of engineer:

1. software eng: ships features, prefers to use ready-made abstractions

2. infrastructure-ish eng: ships things that enable shipping of features, prefers to invent abstractions

The things they value are very different! I suspect HN tends more toward SW engineers (because statistics) with some stuff for infrastructure engineers.


I think in any moderately large business you'll have both.

1. Is usually implementing business logic for the product
2. Is generally working on things #1 uses to work productively

I'm not sure about the composition on HN, but I see a lot of #2 in "DevOps"/SRE roles. Usually these roles are skipped for small startups but at some point they get big enough they need dedicated people to take them on


That's not my experience with SRE and DevOps; often it's just consuming tools written by others. It's basically being a sysadmin with a git repo.


#2 is distinct from pure DevOps/SRE. Those roles have co-opted the infrastructure meaning to be more along the lines of "making sure things don't catch fire at 3am." Which is valuable, but, even farther removed from programming.

Still, they value the same things that the "pure infrastructure" people I were referring to do simply because they end up being on the hook for how it behaves.


It's not far from the top-down vs bottom-up distinction either. Having a wide enough infra base helps the upper layers adapt faster and more cleanly.


Very true. That so few people talk about this topic saddens me. I have had success switching mindsets when I get stuck in one.


It's a strong skill to know how to balance this.


I don't really understand the problem. Or maybe it's more that I don't think this problem exists as an existential one for more than the set of people for whom it's an existential problem.

Programming and software engineering and their subdomains and superdomains are no longer a priesthood or an academic exercise or playground for Levy-style oldskool hackers, or rather, no longer remotely those things exclusively.

Most of what most people who fit in the supercategory do is work. The specifics of the work vary. If they aren't the ones that give you joy, changing categories is a good and readily available solution.

If you want to hack on beautiful code, there are 1000x more ways to do so today, spanning the purely aesthetic, defined in as many ways as you like (live coding, avocational language design, exercises in new tricks for old tools like the demoscene and emulator worlds), to the aggressively useful (small tools made by small teams or solo devs upon which empires balance).

You don't need to be a mid-level IC at MassiveCorp unless you can't give up access to the mana-tap of massive scaling. In which case maybe that IS your thing and the complaint is flat.

The joy hasn't left. Maybe it's just harder for the author to find? Or the problem is simply that no matter what you do for work, it's work, "chase your bliss" being of course a lie, because work IS work, and dissatisfaction like water will always find its level.


It sounds about right. At the end of the day, most developers work to produce features. Features and business domains are far less likely to be exciting than technical problems, but the business domain is way more important to adding value. Both need to be kept in check, but I'm sorry to say, technical achievement will not put food on the table or revenue on the balance sheet. At the end of the day, you almost always need to actually sell what you're doing (in one form or another).


It's very unclear to me why the writer ran everything through GCP and Colab. Having everything in a single environment with near-100% uptime is certainly still possible by renting a VPS, which is usually a much cheaper alternative anyway.

There are also many other alternatives that could be used if memory efficiency is the goal, e.g. polars/data.table instead of pandas, an OLAP database instead of BigQuery, etc. While I agree that pasting together configs is tedious and annoying, a lot of this type of work is due to developers themselves trying to replicate new trends at $bigco$ instead of optimizing for their own needs.


I agree with the sentiment in the article, but I also believe this kind of development is inevitable. How else are you going to create increasingly complex high-level software if not by gluing together pieces that have already been written? Are you going to create your data layer from scratch every time?

I just don't see how you can arrive at today's productivity without running into this issue. Sure, it's not fun. Just like the author, it makes me miserable too. I have much more fun writing a parser or database from scratch as a side project. But it looks like you have to pick one: Do you want to be productive and get stuff done in time, or do you want to have fun and make art?

It's not a problem. It's an inevitable tradeoff.


> How else are you going to create increasingly complex high-level software if not by gluing together pieces that have already been written? Are you going to create your data layer from scratch every time?

No, but I understand why you’d think that. Both the author and commenters make strange conclusions about the situation.

The problem is much easier to phrase than that. We don’t have good building blocks and tools. What does good mean? It means reliable and performant implementations, predictable behavior, well designed API surfaces (does one thing well), ability to debug and inspect, among other things.

The basic tools are already like this. Standard libraries, compilers, unix tools, file systems, certain battle tested databases. They’re all open source & move very slowly.

However, we also have countless proprietary cloud services, dependency hell in languages like JS and Rust, services scattered over different networks, regions and vendors.

So.. how did we end up here? I think it’s a combination of different reasons:

- More data both in total and per time unit. Much more people online. The world also developed towards ad tech, ML and video which demands a lot more than text and images.

- Horizontal scaling is the most cost-effective (due to physics) which prompts orders of magnitude more complex distributed systems.

- Cloud is the only option for many and gets pioneered by massive rent-seeking cynical corporations, so we lost both FOSS and any serious opportunities for standardization and simplicity. The IBM years are back.

- Money is so heavily involved, so ad-tech giants slurp up the talent and use them for market dominance, tech contributions become a side effect.

- Package managers and GitHub makes distribution easy, but results in a ginormous amount of dependencies and vendors.

So how do we fix it? We need to change culture and attitudes. A couple of tricks: Choose your deps carefully. Prefer FOSS, check issues – are these good authors? Is the API surface good? Well documented? Only use what you need, don’t let trendy blog posts FOMO you into using anything. You probably don’t need click metrics, A/B testing frameworks, minification, uglification, code splitting. You probably don’t need microservices either, or 99.999% uptime. Chances are you don’t even need multiple machines, or more than basic monitoring.


I've been comparing much of modern-day programming to assembling a kit from a knockoff Lego brand. It seems like it's easy, but then there's a lot of cursing and annoyances as the pieces don't quite fit together.


Knockoff Lego quality is getting very very good these days.


I suspect this effect is a big part of why Rust keeps winning the most loved language polls - because it has some of that old school programming vibe that got lost somewhere along the way


Just give it enough time: incentives will eventually drive any language with large developer adoption toward wide-spread appeal, vs. staying a niche language that has little value proposition to a company beyond an excuse to keep some key developers who are bored happy doing their side-projects.


Although I sympathise with the sentiment, that's just half of the story. Today, I can deploy a highly available cluster in minutes using any cloud technology + kubernetes and my favourite web framework. True, I need to learn half a dozen frameworks, and _it feels_ like I'm not working on the core problem, but had I tried to implement an equivalent system without those tools, it would have taken me months, and I would have ended up with an ad-hoc, half-tested, non-reusable mess.


> Today, I can deploy a highly available cluster in minutes using any cloud technology + kubernetes and my favourite web framework.

I think the point is (at least for me it is), when you do that, have you done anything new that few or nobody else has done before?

No, it's just a well worn path which thousands of organizations have done already.


We blaze through those well-worn paths so that we can quickly get back to the interesting things. I orchestrate clusters at a high level and still get to solve difficult, stimulating problems every day.

The world of software development is immense. You can treat it like a job, or like a creative path, but to synergize both requires a lot of effort. Complaining that there aren't interesting programming jobs comes across as uninspired. Go look for them!


> We blaze through those well-worn paths so that we can quickly get back to the interesting things.

I could sign up for that! But what interesting things? That cluster will in nearly all companies just be used for yet another CRUD API supporting yet another social or adware platform. Yawn, please no.

If you are aware of companies doing actual intellectually creative programming anymore, please share them. I have no doubt they exist, but they are becoming way too rare.


In the past 1.5 years, either as part of a team or by myself, I have worked on...

- a multiplayer web VR platform with hundreds of challenging problems across the entire stack

- an AI-powered information engine and ontological system with layered insights

- a multi-platform realtime communications research engine

- a web3 Patreon clone & a web3 universal messaging system

- a decentralized identity management application

- a decentralized, self-managed, open source YouTube/TikTok clone

- three game development competitions

- a novel computer vision research project for a sports company

- a graph-based end-user programmable productivity suite

- Multiple prototypes for new social media platforms

- a collection of web-based precision tools like an archiver, URL shortener, etc.

- an AI-generated blog

- an AI-generated wiki & knowledge base

- a few more things I'd have to think about

I love what I do, I love waking up to code, I love going to sleep thinking about architectural problems. I love spending all day in a deep state of problem solving.

If you have a specific kind of creative programming in mind, I can help you look around for something more fulfilling.


This has been something that's been bothering me more and more over the years. I often feel like I'm just standing on the shoulders of the giants that came before me. Occasionally, I do still get that feeling of satisfaction when I've built something that's *mostly* mine.

But a lot of times, I'm just connecting black boxes and slapping a coat of paint on top (aka our branding). And it leaves me with this sort of guilty feeling that I'm not actually doing anything valuable.

Ultimately I still do it because the money is great and overall it's a very privileged position to have in today's world.

But I think this is where having a hobby project comes into play. I can do things "my way" and not worry about needing to use the most efficient or industry standard components.


Though I too get a little frustrated with the lack of doing deep programming sometimes, I'm extremely grateful that I don't have to manage a machine with a database. You know what sounds really boring, at least to me? Making sure the machine has power. And cooling, and proper backups, etc.

I am so, so grateful that someone else does all of that stuff, and I can pay cheap cheap rates for some space on that machine, and a few CPU cycles on another one, etc.


I don't think the industry was supposed to create this magic environment where everybody gets to be a hacker and have good, artistic fun programming. Unfortunately, integrating with as many ready-made solutions as possible seems to be a pragmatic business decision. It doesn't come free, obviously: we give up a lot of control to vendors, reasoning about system performance gets hard because so many things are outside of our control, and programmers don't get to enjoy themselves at all. But such are the economics of this market.

I hate it. But I also don't know if we can do anything about it. For example, the company I work at heavily uses AWS Lambda to build mobile app backends and, as a backend developer there, I feel very uneasy about how little I understand and control the system running our critical business logic. But it works well enough, and the cost savings are amazing compared to dedicated hardware, so I can't really make a good case for why we shouldn't be doing it.

Sometimes I wonder if other professions get to have fun. Maybe I should explore design to scratch that artistic itch somehow.


3-4 day work weeks and a strong social safety net.

Let capitalism be capitalism, and give people the energy and time to do things without having to justify every little thing to some pencil pusher just to make anything happen at all.


Great critique! Reminds me a bit of

Back to the '70s with Serverless http://evrl.com/devops/cloud/2020/12/18/serverless.html (2020)

which I linked from

Summer Blog Backlog: Distributed Systems https://www.oilshell.org/blog/2021/07/blog-backlog-2.html (Kubernetes is our generation's Multics)

I'd say the key issue is that the cloud "abstractions" are leaky, and they don't compose. This leads to combinatorial explosions of code and frameworks.


I've had to embed a distributed cache into our product, which is a server customers run themselves... I got it working pretty quickly, then spent lots of time testing everything in Docker containers to make sure the distributed part of it was working.

Finally, I started testing it on Kubernetes because a lot of customers seem to use that in production. That forced me to work on the following things (after I had to learn details of how the cache works and how to configure it correctly, use its API, design a good integration for it into our product etc.):

- Kubernetes basics, like what a pod, service, deployment etc. is.

- Helm charts for configuring everything (lots of YAML).

- Terraform to kick off the infrastructure in the "cloud".

- Minikube to run things locally.

- Using Docker Desktop's Kubernetes, as Minikube didn't work well on my Mac M1 for some "reason".

- Installing Lens, a UI to make sense of the hundreds of pieces I was now juggling.

- Configuring the cloud environment with the right permissions for devs to be able to use it.

- Network configuration so nodes can "talk" to each other, without exposing things on the Internet for no reason.

And probably a few dozen more little things that ate up all the time I had. Designing and writing the code to make things work locally and in a Docker container (running Docker Compose) was pretty easy and took me a couple of weeks, including a lot of testing… but once I started with the k8s and cloud stuff, I've been on it for a few months now :( and things are still not working reliably in "certain cases" (everything that changes seems to break something). It's a real nightmare and it took all the fun out of the project for me. It really makes "programming" something else entirely, something I would be very happy to leave to a good sysadmin.


Quote: "Although it’s something I’ve only thought about recently, top computer scientists have been recognizing this as a problem for at least the past ten years"

10 years? Try since the dawn of computing; this has always been a problem. Want to know the most famous example, older than 50 years? Here it is:

In 1969, when the Apollo moon landing was broadcast live by TV networks around the world, the picture was grainy black-and-white, despite the fact that both NASA and the TV stations already had color transmission. Why? Because NASA's transmission format (which was closer to our current HD – imagine an HD-like format existing in 1969!) was incompatible with the NTSC/PAL systems of the time.

And what was the "API" (or, in the article's terms, the "VendorOps") that married these two incompatible formats? A projector aimed at a wall, with every TV station filming that wall instead of receiving the real transmission. What a shame. NASA's handling of the original reel films was also abysmal, to say the least (https://www.nasa.gov/feature/not-unsolved-mysteries-the-lost...)


Summary:

The cloud takes inefficiency to a whole new level.

Cloud vendor tools are designed to sell cloud services, not solve developer problems.

Jupyter notebooks are buggy and unnecessary (you can execute Python without a browser) and won't work with medium-sized data sets.

Excel handles larger datasets than pandas, and handles small datasets more efficiently.

You could parse the 2 GB of Goodreads data in a raw text file on a 486 about as fast, but fetching the data over a 14,400 bps modem would take all day.
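
For a sense of scale: a single streaming pass uses constant memory no matter how big the file is, so a couple of gigabytes is no problem even on modest hardware. A rough Python sketch, assuming a hypothetical reviews.jsonl with one JSON object per line and a numeric "rating" field:

    import json

    # One streaming pass; memory use stays flat regardless of file size.
    count, total = 0, 0.0
    with open("reviews.jsonl", encoding="utf-8") as f:
        for line in f:
            total += json.loads(line)["rating"]
            count += 1

    print(f"{count} reviews, mean rating {total / count:.2f}")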


Not every pilot gets to be a fighter pilot. We have more software jobs than ever, but a lot of them are about gluing things together. I think this is more about perceptions and expectations than about actual approaches or the state of the industry.

Not that there isn't a need for optimization and for replacing glued pieces with bespoke ones. But don't do it prematurely just because it's more exciting; it's regularly not advantageous.


I think this problem is almost entirely self-inflicted (by either a person or an organization; apologies to those in organizations who recognize the insanity and are powerless to stop it).

BigQuery and Colab are fantastic tools, largely because they let less technical folks try things out with very little friction, and because BigQuery scales to ridiculous amounts of data, making formerly impossible queries possible. The example fits neither case: the author is quite apparently very technical, and the data is minuscule.

The author could have applied some economic thinking and correctly concluded that they could run this exploration on a laptop using a regular old file and Jupyter (or just normal Python, if they want to avoid the IPython infrastructure). Post-exploration, they can run their application on a properly specced VM. I don't know whether BERT requires a GPU these days or runs well enough on a CPU, but either way these resources are easily available on GCP without all the vendor-ops the author talks about.
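
In that spirit, the entire exploration step can be a handful of lines against a local file. A sketch with made-up file and column names, nothing BigQuery-shaped about it:

    import pandas as pd

    # Poke at a local extract; no warehouse needed at this size.
    df = pd.read_csv("goodreads_books.csv")
    print(df.shape)
    print(df["average_rating"].describe())
    print(df.groupby("language_code")["average_rating"].mean().head(10))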

To cap things off, the author says at the end that "in no previous universe would I be able to try out Stable Diffusion". Stable Diffusion requires 6 GB of GPU RAM and ~10 GB of storage; that was modest for commodity hardware even 5 years ago.

I am not a cloud-decrier by any means, I love the cloud, I use it every day, but most of what I use is basic storage (buckets) and VMs. With just a little bit of abstraction it's easy to make something that runs in the cloud run locally just as easily. That also makes it easier to switch clouds if you find a better deal somewhere else.
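
A minimal sketch of the kind of abstraction I mean (the names are mine, not from the article): application code talks to a two-method interface, the local backend is trivial, and a cloud backend wrapping a bucket client would implement the same two methods.

    from pathlib import Path
    from typing import Protocol

    class Storage(Protocol):
        def read(self, key: str) -> bytes: ...
        def write(self, key: str, data: bytes) -> None: ...

    class LocalStorage:
        # Filesystem-backed; a bucket-backed class would mirror this API.
        def __init__(self, root):
            self.root = Path(root)

        def read(self, key):
            return (self.root / key).read_bytes()

        def write(self, key, data):
            path = self.root / key
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_bytes(data)

    # Application code only ever sees the Storage interface, so swapping
    # clouds (or going local) is a one-line change at startup.
    def save_report(store: Storage, report: str) -> None:
        store.write("reports/latest.txt", report.encode("utf-8"))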

TL;DR: use the right tools for the job


BERT is kind of old hat by now though, isn't it? I suspect that if he builds this, it will be overwhelmed by a similar system in a few months when GPT-4 or LaMDA 2 or whatever comes out, which can digest pirated novels directly and has some fix for identifying/preventing made-up information.

I wonder actually, how far could you get today with ChatGPT for recommendations, or with character.ai to actually just generate the entire customized book chapters for you while you watch.


I am a fullstack web dev, for about 8 years now.

These days I am learning Arduino and thinking of learning AVR programming.

Is there a career path for this kind of intersection?


Absolutely! But pay may vary because of "market reasons" (whatever that may mean).

Some suggestions for your study;

1) Use C/C++ only and nothing else. See my past posts for books full of example code. This will allow you to carry your knowledge across MCU families.

2) Starting with Arduino and moving on to direct AVR programming is a great approach. Get Elliot Williams' book Make: AVR Programming for the latter.

3) Arduino programming is deceptively easy but extremely powerful if mastered properly. Do your sample programs first on Arduino, then redo them using straight AVR C. This will teach you how to prototype something quickly using Arduino and then, when you have strict cycle/memory/power/latency requirements, how to drop down to the metal and program exactly what is needed.

Finally, from the career PoV, you will become one of the few who can do and understand everything from bare metal all the way to processing in the "Cloud". As an example: read sensor data using an Arduino, send it over the Internet to the "Cloud", apply ML algorithms, and do "Predictive Analytics". This is Industry 4.0/IIoT, etc.
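
And the "Cloud" end of that pipeline can start out tiny. A stdlib-only sketch (Python here just for brevity – the device side would of course be C; the endpoint and field names are made up) that accepts JSON sensor readings over HTTP and keeps a rolling average as a toy stand-in for the analytics step:

    import json
    from collections import deque
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from statistics import mean

    readings = deque(maxlen=100)  # rolling window of recent samples

    class IngestHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            sample = json.loads(self.rfile.read(length))
            readings.append(sample["temperature"])
            body = json.dumps({"rolling_mean": mean(readings)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8000), IngestHandler).serve_forever()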


Thank you for the response. I just got Make: AVR Programming. Seems like very interesting stuff!


Embedded. It generally pays less, however, AFAIK.


I kind of get the complaint. But this article's example doesn't really feel right to me. I'd like to see the author pull off this project on her own 10 years ago; I'm not sure it would even have been possible. She'd have spent years writing some of this from scratch. Instead she finished it, figured it out, and had time left to write a blog post about it.


This seems to touch more on the complexity of productionizing a system than on the cloud itself. I think you still run into a lot of these same problems getting your code to run anywhere else, especially when the management and operation of the system is split between many people and teams.


Am I the only one that was wondering about OP’s table of data manipulation technologies?

I’d be inclined to process everything locally right up until the 1TB mark, and that’s only because my computer doesn’t have enough disk space.


> Goodreads, or rather parent company Amazon, has decided to deprecate theirs.

That’s really the money quote, right there.

When we depend on the work of others, we also depend on the people and organizations behind that work. You really can't get much bigger and more solid than Amazon (or Google, who are also infamous for drowning their babies), so "bus factor" is really a meaningless metric when deciding whether or not to depend on a service. I've been experiencing rug-pulls for most of my career (I'm an Apple developer, and Apple does this frequently).

I recently had to stop working on the app that I'm developing and spend a month and a half writing a new backend server and SDK, because the developer of a server I had depended on ghosted our project. I understand why they did it (life happens), but the reasons make no difference: they still left us in the lurch. I actually have a huge amount of respect for the author, count them as a personal friend, and sincerely wish them nothing but the best.

That project ticked a lot of Buzzword Bingo boxes: Postgres, postgis, containers, Django, Python, etc. Also, the main author is fairly young.

But it was painfully slow. It was killing our app. It’s entirely possible that we could have sped it up, but it required working closely with the author for a while, and that relationship withered, as noted.

So I wrote my own server, using “boring” tech; PHP, MySQL, etc. It’s lickety-split fast, secure, simple, well-documented, maintainable, adaptable to multiple data inputs (the original server was for a single data source), and doesn’t require a container to deploy. It’s also fairly easy to switch the DB tech, and it can be deployed on a wide range of low-cost servers, which helps, as it’s a free app, for a nonprofit.

I'm not against dependencies, despite frequent sneers and insults assuming that I'm some "out of touch boomer" clutching at the past. I think that modular software development (a very old technique, by the way) is how we can do high Quality, at scale, and rapidly. In fact, that's exactly how I work.

It’s just that I tend to write my own dependencies. I have a fair bit of faith in myself.


Loved this article! Yes, I think it accurately describes 90% of my consulting engagements!


Young woman yells at cloud.



