90% of software engineering done today is integrating poorly documented APIs (twitter.com/austen)
454 points by tosh on May 1, 2022 | 280 comments



90% of my software engineering is fighting with incomprehensible framework magic that automates simple tasks which don't need to be automated, to "save me" from just writing some simple code. It's literally Rube Goldberg all the way: the most overcomplex contraptions to achieve the simplest of tasks.

"You don't want to write "raw" html/js/sql/Java" Uhm I thought that was my job.


> "You don't want to write "raw" html/js/sql/Java" Uhm I thought that was my job.

It's not your job, any more than a firefighter's job is to be a hose operator, or a nurse's job is to be a syringe operator.

Your job is to produce software that meets the requirements of the business, as a member of a larger engineering organization which needs to be able to function well enough to deliver that software.

Why are frameworks popular? Because they provide a standardized way for multiple engineers, and multiple engineering teams, to collaborate effectively on the same product -- and they make it easy to hire people who already know the frameworks.

If everyone is writing "raw" JS/HTML/SQL/Java/whatever, they are actually building their own individual, unstandardized frameworks.

That's not good for supporting software when the author leaves, and it raises the barrier enormously for onboarding new people. Above a certain scale it's just not workable.

You can be cynical about this and say that it reduces engineers to interchangeable cogs in a machine. Or, you can understand why things are the way they are, understand how to succeed in this reality, and build a fulfilling career path for yourself by providing value that a mere cog cannot. I've worked with a lot of successful, happy engineers who are not cogs, and what separates them from the others is not coding skill -- it's that they all understand this reality.


> If everyone is writing "raw" JS/HTML/SQL/Java/whatever, they are actually building their own individual, unstandardized frameworks.

That is only "technically" true, in the way that if you develop any kind of sufficiently complex program then you will have, by necessity, built something which could be called a framework.

Realistically, though, your point is invalid and misses the mark.

To say that programmers shouldn't write pure code without an existing framework is akin to saying that a house builder shouldn't use bricks and lumber to build a house; they should buy a pre-fab kit instead. Which is, of course, nonsense unless you want the shittiest-quality, most generic house possible.


> then you will have, by necessity, built something which could be called a framework.

What they call "building your own framework" is simply what I think of as programming. You design a program, and it will have a structure. Why is that "forbidden", and why are programmers thought of as unable to do it?


Because many (not all) devs use frameworks to cover for their own weaknesses in software design. There’s something deeply comforting in the faith that a few Very Smart Framework Authors have laid a gold-plated path of success for them to follow.

What they overlook is the time and energy wasted in the framework cottage industry: making dependencies talk to one another, aligning version numbers, major version upgrades being forced on them (or risk being insecure) and the time spent searching Google, SO, and GitHub issues for support.


This really doesn't ring true to me. The biggest value of frameworks in my mind is that I don't have to waste time doing software design for things where my solution has no reason to be unique. Directory structure, how I interface with the database, how I write to logs, how I route HTTP requests to endpoints in my code are all certainly things I could solve, but I'd rather spend time on things that are unique to the problem I'm solving, and accepting a perfectly suitable standard for everything else.

As a bonus, I know that every dev I hire with experience with that framework is also going to know how these things are set up, and they can focus their onboarding time on what actually needs to be unique to the problem we're solving.

Regarding the downsides:

- a framework doesn't necessitate additional libraries any more than rolling your own does (arguably this is worse without a framework, because unless you don't use libraries altogether you have a hodgepodge of libraries to solve common problems that don't have a lot of incentive to be cross-compatible, and a framework really can't be incompatible with itself)

- Any half-way popular framework is going to ship security updates for past major versions (and again, if you're using libraries this is more of a headache than updating a single framework)


Agree. I've never seen a poor dev reach for a framework to solve a problem. Poor devs do nothing for days and then ask a bad question at a stand up.


I would much rather waste my time on a well documented industry framework that I can use when I leave than learning a framework written by the architect astronaut because he thought his problem was a special snowflake.


> because he thought his problem was a special snowflake

I think this is the heart of the issue. Of course everyone thinks their problem is a special snowflake, otherwise they wouldn't invest resources into solving it. The job of us developers is to distill the essence of it and generalize the rest. But it's something I've always struggled to communicate with the business side. Either I "just don't get it", or they take generalization to mean I want to drown them in configuration.


Because someone else is going to come along behind you and need to maintain what you wrote. If you're using a standard, just about anybody can sit down and pick up where you left off within an hour or two


> If you're using a standard, just about anybody can sit down and pick up where you left off within an hour or two

Yeah, it just doesn't work that way. You need to understand the domain and understand the system, and that takes a while to learn in a new job. This is just a fact for every job, software engineering included. Stop this weird fantasy of replaceable cogs on the assembly line; you are not Henry Ford.


This may have been true at one point, however new frameworks are the "standard" every few years, and the pool for anything > a junior dev depletes quickly.

Documentation or (preferably and) built in language convention are the winners here as frameworks come and go.


Because if you move from one project to another you want to see a familiar implementation of a Request class or ORM from a framework, rather than reverse engineer a custom-built, undocumented, untested replacement for it.

It may make sense if the custom framework is an order of magnitude better, but if it is the same as a standard framework or worse, then it is not worth spending time on it.


Because your framework eventually becomes a badly written, not as widely tested version of a known framework. When you leave, it will be a lot easier to hire someone who knows React than someone who knows jseban.js.

I’ve had to come in before and deal with someone’s custom ORM or logging framework.

I’ve made the mistake myself of writing something that was a cross cutting concern and then later found out that there was a popular third party package. I replaced my implementation as soon as feasible.


> Because your framework eventually becomes a badly written, not as widely tested version of a known framework. When you leave, it will be a lot easier to hire someone who knows React than someone who knows jseban.js.

Why are you assuming that everything a software engineer creates will always be bad? That's an unbelievably deprecating attitude. Do you ever see any other profession do this? UX? Product? Writers? "Everything we try to do will always be bad" and we have to try to save us from ourselves? What?

And this "framework" would be custom made to solve exactly the use cases at hand, and nothing else by definition would be better at those specific use cases.

And why would you optimise so much for onboarding and hiring anyway, at the expense of so much continuous wasted time and less output?

When my product owner quit, the new guy was working in parallel for three months, because "of course, they have to learn the ropes". That's normal for every single job basically, except software engineering.

Why not just spend a couple of days, or a week, to give an introduction to your new hires, so that they understand and learn how you have structured your program?

The only "problem" with that, is that you have to stop acting out this farce of the computer genius with the superior intelligence, in your hiring and onboarding process.


>Why are you assuming that everything a software engineer creates will always be bad?

Not necessarily, but it usually ends up badly. I have no doubts most of my peers could write a fine custom framework, but it's going to take a whole team of them at least a year to come up with something decently ergonomic, tested and documented. How often is the business side of the company going to be on board with spending that amount of resources on a solved problem?

Instead developers decide to fly under the radar and do it anyway, but then run out of steam while the framework is still an untested, undocumented prototype and the pressure to shift focus to features becomes too great to resist. And by that they have doomed the whole company to a bottomless swamp of technical debt, with seniors getting fed up and leaving and being replaced with juniors with no clue of what's going on, making the onboarding problem magnitudes greater than it was in the first place. And if someone has the genius idea of a big rewrite, we all know where that's going, especially since the only people they have been able to retain the whole time are either inexperienced juniors or mediocre seniors, so the cycle repeats again.


> I have no doubts most of my peers could write a fine custom framework, but it's going to take a whole team of them at least a year to come up with something decently ergonomic, tested and documented.

No, because they would not make a framework, it would not require all of this work, because it's not designed to be generic and reused by others.

Simply write a program. Not a framework. Has this idea been completely lost? Solve your own use cases only. Don't try to generalise it and optimise it for reuse by someone else.


An abstraction doesn't care what you call it, or how many times it gets used. It still needs to be carefully designed, tested and documented, and this almost never happens in industry.


It’s not about the person being a “genius”. A popular framework has been well tested, is written by a group of people, and is well documented; a community has grown up around it, there are books, training videos, etc.

Would you write your own operating system or programming language because it might be better suited to solve your particular use case or use a general purpose language?


> What they call "building your own framework" is simply what I think of as programming.

If you want an analogy, this is like a builder claiming that he is going to make all bricks and nails himself. You don't usually write your own standard library, why would you write your own framework?


I think there's a very important and very real difference between "building your own framework" and "building your own bricks and nails". A programmer isn't exactly equivalent to a construction worker. Rather, your team is the construction worker, the structural engineer, and the architect all at once.

The framework isn't bricks and nails. It's a prefab house plan. You follow the instructions and you get a house out on the other end. If you need to go outside what the prefab covers, you have to fight the framework because now you are drawing over top of an already printed out structural plan to try and do something different.

The equivalent of bricks and nails in SWE is using libraries. Use a library to do a thing. You still need to design the program (drawing the floor plan), formalize how the parts interact (drawing the structural plans), and actually put the pieces together (building the actual house) but you don't need to worry about the small details like how to perform X algorithm (make the bricks), write a messaging standard (make the nails), or handle the intricacies of concurrency, distributed data structures, or drawing to a user interface (making the concrete and mortar). Your libraries do that for you. All you need to do as a developer is focus on writing neat functional kernels that do "the hard stuff", wrapping all your "hard stuff" and libraries together in some OO class to implement a certain functionality, and then stringing the bits together with some imperative glue to tell all the pieces to work together. Past that is orchestration & administration and that's Ops' responsibility.
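A rough sketch of that shape, with made-up names purely for illustration (the "kernel" is a pure function, the class wires it to a library stand-in, and the glue at the bottom just strings the pieces together):

  // the "hard stuff": a pure functional kernel, easy to test in isolation
  function orderTotal(order, taxRate) {
    const subtotal = order.items.reduce((sum, item) => sum + item.price * item.qty, 0);
    return subtotal * (1 + taxRate);
  }

  // the wrapper: ties the kernel to a storage dependency (a stub here)
  class OrderService {
    constructor(db, taxRate) {
      this.db = db;
      this.taxRate = taxRate;
    }
    async totalFor(orderId) {
      const order = await this.db.getOrder(orderId);
      return orderTotal(order, this.taxRate);
    }
  }

  // the imperative glue: string the pieces together
  const fakeDb = { async getOrder(id) { return { items: [{ price: 10, qty: 2 }] }; } };
  new OrderService(fakeDb, 0.2).totalFor(42).then(console.log);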

Unless all you need is a cookie-cutter result, don't use a cookie cutter. Fighting the cookie cutter will be far harder than using the right tools for the job (cleanly separated, well documented, and interoperable libraries) to solve specific problems.


Except it's not like that: software is not standardised, it's custom made.

And software engineers already have their toolset. The real analogy would be telling all the builders that they now have to use those shitty multi-tools that are 10 screwdrivers and a hammer all in one.


For 90% of SWEs (and closer to 99% of anyone doing front-end web stuff) it really isn't that custom. You're not solving problems that are difficult except those which your team creates for themselves by making bad choices as they service the fairly routine and unexciting feature requests that come in from product. And that's not to say that is a bad thing, all teams make bad choices and have to improve them over time, even when they are using highly opinionated frameworks. It's an unavoidable part of software development, even incredible teams and devs need to refactor.

The point of using frameworks is that you can be sure that many of the bad choices the people who wrote it made early on have by now been tidied up. You don't have to worry that Dougie the intern's custom HTTP router is all sorts of ballsed up when you go to set up your first DELETE endpoint, because Dougie didn't get to touch anything that fundamental to the stack.

I do get off the boat when it comes to leftpad and stuff like that, don't ever pull external dependencies for trivial things. But an ORM or a UI framework? Those problems have been solved hundreds of times, absolutely no need for anyone to solve them again unless you're that good that thousands will move to using your solution instead of the existing ones.


The analogy is now officially making the argument less clear


You are limiting yourself to typing out code, rather than using what is amazing about a human brain: the ability to see coherence in random shit and improve it.

If you feel your job is to find neat ways of writing Java or C# or wtfever, you are limiting yourself. You should be looking to offload any repetitive/tedious task to automation, and from there figuring out how to leverage ML to make it efficient.

The "work" you speak of is going to be obsoleted in the next decade, if it isn't cutting edge CS theory it should be done by the machine.


The house building analogy is pretty good and I think that frameworks are pretty close to how house building works right now. Pre-fab is like using cookie cutter components and throwing them together into a website using something like Wordpress or Squarespace. On the other hand, frameworks are like using common tools, foundations, scaffolding, and (literally) frameworks to build a custom architectured house.

Of course, building your own home is possible but that could be a shack made up of sticks and mud or a stack of bricks and mortar with no base/foundation which can fall apart at any moment.


> On the other hand, frameworks are like using common tools, foundations, scaffolding, and (literally) frameworks to build a custom architectured house.

This is exactly the problem: you are trying to use a standardised blueprint to build a custom architectured house. You should probably make a choice here: either go standard or go custom.

> Of course, building your own home is possible but that could be a shack made up of sticks and mud or a stack of bricks and mortar with no base/foundation which can fall apart at any moment.

Why would a highly skilled builder build a shack of sticks and mud, or a house without foundation? Why do we assume that every software developer is an incompetent clown?


I agree with your point, but the analogy fails in that prefab house elements don't add complexity the way software frameworks do, which is the main problem in my opinion.


> To say that programmers shouldn't write pure code without an existing framework (...)

That's not what's been argued.

The overall thesis is that developers end up using higher-level abstractions, and between wasting time writing and debugging your custom high-level abstractions and simply adopting feature-complete, stable, and standard high-level abstractions, you'd have a hard time trying to justify wasting time rolling your own when others already did all that work.


> between wasting time writing and debugging your custom high-level abstractions and simply adopting feature-complete, stable, and standard high-level abstractions, you'd have a hard time trying to justify wasting time rolling your own when others already did all that work.

That's only true when all the abstractions you need have already been implemented by others. Most of the time though, in addition to off-the-shelf abstractions, you need to build new abstractions which are specific to the business problem at hand; and shoehorning those abstractions on top of an existing framework which was not created with them in mind may be more work and may provide a less robust, way more complex solution.


> That's only true when all the abstractions you need have already been implemented by others.

Not really. More often than not, any framework already ships with far more features than the ones you'll be using, with the added benefit of already being production-ready and extensively tested not only by the maintainers but also by everyone else already using it.

When you roll your own... Well, good luck.

> Most of the time though, in addition to off-the-shelf abstractions, you need to build new abstractions which are specific to the business problem at hand (...)

You're confusing implementing higher-level abstractions which allow the low-level details to be ignored with actually implementing your own business logic.


> You're confusing implementing higher-level abstractions which allow the low-level details to be ignored with actually implementing your own business logic.

The grandparent definitely isn't, and that's a bit of an uncharitable interpretation. I'm sure there must be a tiny minority of developers out there that doesn't ever have to write higher-level abstractions, but in the real world, writing both "business logic" and higher-level abstractions is the norm for most jobs. Only toy projects and the most menial development jobs escape that.

Even frontend, which someone above erroneously claimed is "99% not custom", really is custom. The two major libraries, React and Vue, and the rising one, Svelte, really aren't frameworks at all, and in practice require not only a lot of custom stuff but also higher-level abstractions. Again, the exception is toy projects and menial dev jobs.


Frameworks can be limiting, but I've never seen a framework that doesn't allow you to implement business logic on top of it. At the end of the day frameworks execute code that you write, and you can break out of the concepts your frameworks introduce and do whatever you want.

Just because 10% of your codebase is uniquely unsuitable to the 'standard' introduced by the framework doesn't mean that you can't use it for the remaining 90%. And even for that 10%, it's probably not so alien that some of the functionality provided by the framework is still relevant.


It's really just a cost-benefit analysis of whether it's worth it to build your own solution or build on top of somebody else's. I think a few years of experience gives everybody an idea of the general tradeoffs of each approach; some people will just tend to lean one way while others lean the other.


Not op but I have to disagree with this sweeping assessment. I think there is room for both approaches.

I for one believe that the best solutions I've produced over the years were things that side-stepped the popular or established frameworks and were more customized and focused solutions. They gave the business an undeniable edge over the cookie cutter competition, and by being well versed in the full stack, my team was able to implement new features at a fraction of the cost.

Anyone who knows the underlying technology can pick up our work in not much more time than learning any off-the-shelf framework simply because our solution has fewer moving parts and the relationships between the business logic and the bedrock are direct and intuitive.


> I for one believe that the best solutions I've produced over the years were things that side-stepped the popular or established frameworks and were more customized and focused solutions. They gave the business an undeniable edge over the cookie cutter competition, and by being well versed in the full stack, my team was able to implement new features at a fraction of the cost.

Yes but that's different from being a cowboy coder or a cynic who rejects the concept of frameworks in the first place. You built a new framework customized to your business needs. To do that, you needed to understand the context of the business and your tech stack, understand why existing frameworks didn't fit, and build a new one that works well. It's a good example of not being a cog, which is what I was encouraging in my comment.


That's just it though - I wouldn't call anything we've built a "framework" at all. All of the projects were much simpler at their core than most frameworks I've ever dealt with.

Of course, "framework" might have different definitions for different people. To me, by definition, a framework tries to be more generic than a custom solution, has more features that appeal to more than one use-case, and is designed to be flexible and fit multiple projects.

The solutions I've had major success with were none of those things because the benefits we got (deeper knowledge of our stack, faster iterations, and features no one else could touch) way outweighed any perceived benefits of using something mostly off-the-shelf.


> I for one believe that the best solutions I've produced over the years were things that side-stepped the popular or established frameworks and were more customized and focused solutions.

No one can judge their own code. We all think we've written great stuff that other people hate working with. Even your team can be bad at judging it, because they can reach out to you if they get stuck. Those are awful ways to judge how good code is.

In my experience the best measure of good, robust, well-written code is the opinion of the team maintaining it a year after you've left. If those people are singing its praises and happily working on it rather than complaining about tech debt and saying they want to do a rebuild, then you can say you did a good job.


You're confusing a developer thinking their code is well written with a tech team evaluating the value, reliability and flexibility of a system.

The 'common framework' approach works great if you're solving a common problem. That's why we use libraries for certain modules or functions of a system. When your business is, even just operationally, different in some way, bespoke is often the best way. Make it good and it doesn't matter a sniff if it's an established public framework or not.


> When your business is, even just operationally, different in some way, bespoke is often the best way. Make it good and it doesn't matter a sniff if it's an established public framework or not.

There was a post recently on Hacker News talking about how startups have an "innovation limit". That is, if you try to innovate too much, your scope grows beyond your reach. I think the idea applies here as well. Building everything yourself takes a lot of time. If you can position yourself to leverage the advancements of others, you can move much faster.


Absolutely. I am a big force at my place for 'just buy' when others are all 'build everything in house'. If there's no genuine advantage to having something custom then don't do custom; buy something that works.


> When your business is, even just operationally, different in some way, bespoke is often the best way.

Most businesses think themselves different, esp. in their IT infrastructure. It's closer to hubris imho.

If you have an internal/back office app, it doesn't require a custom UI framework to be created.

The only place where a custom framework should be created is if the domain of the software is custom.


I'm not talking about back office admin systems. I'm talking about genuine business operations systems. And YES, most are more different than they appear on face value.


I'm not judging my own code. I'm basing my statements off of the results-oriented perspective of the businesses I've helped. And based on their positions in the market, both at the time my team implemented the projects I'm referring to, as well as long after most of us have moved on and the projects are in maintenance mode or retired, I would objectively say we were highly successful.


> I'm basing my statements off of the results-oriented perspective of the businesses

there are different criteria for judging. Business success is one (good) criterion, but there's also the criterion of good craftsmanship (which may contribute to business success indirectly by making future maintenance easier). This one is hard for a non-technical business stakeholder to judge, but easy for the maintainers.


People often seem to overlook how different businesses operating in the same space can be purely because of a particular specialization or culture.

Custom is best sometimes. And that's ok. A well-thought-through and well-implemented 'bespoke' framework (literally just a system) has no value-based disadvantage over a public, well-known framework or approach.


That's a seriously self-deprecating attitude; that can never make for a healthy working environment.


They gave the business an edge until someone else had to use your framework instead of a popular, well-documented one.


> If everyone is writing "raw" JS/HTML/SQL/Java/whatever, they are actually building their own individual, unstandardized frameworks.

It depends.

If you have a nail, and you are using a jackhammer of a framework to pound it in, every developer that comes after you has to learn how to use that jackhammer unless they have previous experience with it. And honestly, most complex frameworks have so many different potential usage patterns that future developers will still have a heck of a learning curve. (I've never seen two uses of Redux that remotely resembled each other! After using Redux for 3 years I can easily find Redux code where I have literally next to no idea what it is doing!)

If you use a hammer, even if it is a bespoke hammer, future developers just have to learn how to pick up that hammer.

Obviously at some point it pays to switch to using an Industry Standard(tm) solution, but for many problems plain old JS or HTML may very well be appropriate.

For example, I was making ad campaign landing pages for my now defunct startup. To ensure speed I wanted them to all be static HTML. I could have used a static site generator of some sort, configured templates and all that jazz, or, instead, I could just write HTML + CSS for 3 landing pages. (Actually I only had to write the CSS once).

Now if I needed to programmatically generate dozens of landing pages, then sure, adopting some static site generating framework may have been worthwhile. But in my case it wasn't. Raw HTML was the correct solution.


> To ensure speed I wanted them to all be static HTML

Feels like people shoehorn Gatsby too much into those kinds of situations. Bless you for keeping the dream alive - every time I see well written and small HTML/CSS/JS code on a landing page I want to scream.

Despite knowing React and being familiar enough with Vue, Angular etc. I still get imposter syndrome when I see a site's markup and see weird artefacting that comes from some framework that I have never heard of.


aka, right tool for the job.

There's no mantra that should be followed without thought in software engineering. Frameworks help in a lot of cases and should be used, but this decision requires someone with experience to make, and that experience has to have been hard earned from previous failures or projects.


I don’t think you have a lot of experience with long software projects. The pros of frameworks are outweighed by their cons pretty quickly. This effect is accelerated by shorter support times and an ever growing number of frameworks.


Maybe it depends what tech you work in? In the Java and .NET sphere things are pretty much stable over many years. We hardly had to do any work to move our systems (quite large ones) to new versions of the same well-supported frameworks over the past decades, and the fact that everyone in these ecosystems knows the popular frameworks and design patterns well means it scales well to many and new people.


Do you have any examples of web apps or websites with millions of users, and don't use any frameworks? The only one I can think of is HackerNews, funny enough.


The more users, the less likely you are to use an off-the-shelf framework.

Netflix: https://netflixtechblog.com/building-the-new-netflix-experie...

Facebook: https://blog.risingstack.com/the-history-of-react-js-on-a-ti...

Edit: find out what framework (although it didn’t work on some urls I tried): https://www.wappalyzer.com/


I may be wrong since this article is from 2018, but it seems like Netflix uses React for everything but their homepage [1]. Facebook uses its own framework since, well, that framework is React. But their framework isn't some special-purpose framework built for Facebook's purposes either; it's a general-purpose framework used by tons of companies, including Netflix. So I don't think many web tech companies are going to find success building their own special-purpose front-end frameworks, as some earlier comments seem to suggest.

[1]: https://medium.com/dev-channel/a-netflix-web-performance-cas...


It's really more about the size of the company -- those companies have enough engineers to have large teams dedicated to their various frameworks and build systems.

Some of the popular open source frameworks are derivatives of what these companies had already developed internally for themselves, e.g. React.


I think that the decision on whether to use a framework is very hard.

When using a framework, you get most of the benefits right away, and most of the costs happen much later, when abandoning the framework is costly.

Besides pure technical quality, you have to consider more murky issues: who wrote the framework? how long will they be around for? Does their problem match your problem? Will your interests remain aligned in the future? Is there a succession plan? Will new hires be familiar with it? What's the learning curve? Does it integrate with the other N tools you are using?

Ideally, the decision to use the framework or not should be made by someone who has used it before. This is often not the case.

My personal preference is to always use the simplest tools that get the job done.

I wish that hiring technical writers to write docs was more common. Most of the new libraries I have to use have documentation that is so poor, I have to use stack overflow + write test cases.

Currently, I am in the middle of taking over a Go code hairball from someone who left, and who loved external dependencies. Out of 31 direct dependencies, 11 are not at release 1.0 yet. Many have poor documentation. I believe that when I am done with it, our code will have fewer dependencies, and be half the size.


And how to avoid ending up with a:

"General-Purpose Tool-Building Factory Factory Factory"?

"Why I Hate Frameworks": https://www.gwern.net/docs/cs/2005-09-30-smith-whyihateframe...


I agree with most of this, but

> it raises the barrier enormously for onboarding new people.

is.. a little ironic. I've never met any of the gazillion frameworks out there that have made it easier to get people up to speed, despite promises to that effect.


> Why are frameworks popular?

Because they make it very quick to get a 70% working system that you can sell to people (internally too) before the people with the capacity to build a 100% working system get their first version.


In other words: they make sense for making prototypes.


And then your prototype gets into production, and you can never rewrite it.

So, no, they do not make sense for making prototypes. They do work when you want a 70%-solved problem and won't care about never reaching 100%. In the other cases, you still need to use them, because if somebody gets to those 70% first, you will have problems, but they make no sense at all.


> Your job is to produce software that meets the requirements of the business

In “raw" html/js/sql/Java.


Honest question - how does one provide value in this reality? It’s something I’m trying to understand.


The business itself is like a computer process. There are flows of information and resources. There is state. There are subroutines. There are bottlenecks and bugs and wasted efforts and errors that may or may not be getting detected or mitigated. There are various intents that customers and stakeholders might have that are unreasonably difficult to fulfill under the current layout of things.

You add value when you optimize, debug, and refactor the business operation. Increasingly in old-economy companies, but especially in Silicon Valley companies, the business operation is largely defined in code. So this is something software engineers can do, through well-chosen code changes.

Outside of startups, engineers rarely do this alone. There are executives, business analysts, product managers, data scientists, etc. with very similar tasking. But you want to get into a position where these people see you as a full intellectual partner, not as an implementation resource to type in their already-baked ideas.


Start simple. Educate co-workers on why a specific approach is good. Build culture. Slowly but surely you should all be working towards the same goal. It's a slow process but with many moving parts. Take on a single piece at a time. That's the only thing that works.


This is the worst type of working environment, when everyone on the team is trying to "educate" and evangelise and "build culture" which is nothing but sucking up to the manager, instead of trying to be realistic about solving issues and delivering value.


At least if I'm understanding your question properly, frameworks only provide the frame; you have to fill in the rest.


His argument was more that there is hose-operating tooling (and tooling for that tooling) that many fire departments are using, but if you want to adjust the nozzle aperture, well, that's not implemented yet and you need the schematics.


> Why are frameworks popular? Because they provide a standardized way for multiple engineers, and multiple engineering teams, to collaborate effectively on the same product -- and they make it easy to hire people who already know the frameworks.

Hmm, I'd say that there are frameworks that actually make things easier and better, and some that are much like the incomprehensible Rube Goldberg machines that the previous poster complained about.

For an example of this, you needn't look any further than the Spring framework in Java, perhaps the ultimate example of enterprise bloat: poorly understandable (unless you spend all your time digging into the internals), overly abstracted non-solutions to problems, and that's even before you pull in 50-100 dependencies to make your dated monolith do whatever the business wants.

There is an absurd amount of XML configuration needed, except that sometimes you need to mess around with annotations in your source code as well, except for those cases when instead you'll want to just overwrite some class with your custom logic inside of the methods, which will be initialized by some Eldritch abomination of class loading and reflection, or maybe you'll need to change a bunch of properties in a config file, with all of this leading to long and arduous error messages that will make you waste hours digging through Stack Overflow.

I was in charge of migrating an older Spring app to Spring Boot, which attempts to alleviate some of these issues (and there was also the fact that the particular Spring version was EOL at that point), and after wasting more time than I'd like to admit on error after error after error, it was decided to just update Spring itself as far as possible and let it chug along in a barely alive state.

The framework wasn't a way of managing risks or "getting things done"; it ended up being a risk in and of itself and was a massive pain once the system needed to be supported, maintained and also have new functionality developed for 5-10 years or so. That's not to say that there aren't better frameworks out there, but there definitely are some that will be more trouble than they're worth, unless you want to have a person or two full-time just to wrangle them into order and continuously maintain and update them, which for many of the smaller orgs out there is unacceptable.

> If everyone is writing "raw" JS/HTML/SQL/Java/whatever, they are actually building their own individual, unstandardized frameworks.

I've also seen bespoke frameworks be used, and those were generally worse - while there was the occasional good idea in them and they could be decent to work with in a limited set of circumstances, getting help when something went wrong with them was impossible and you were stuck reading the source and mucking about in it. Why is that bad? Because with the larger and more popular frameworks out there you can instead look up solutions for problems or integrations that someone else has written.

I'll admit that pulling in dependencies carries certain risks in and of itself, but you'll very quickly realize how nice it is to let someone else step on the rakes before you and learn from their problems and mistakes, instead of attempting to translate code comments from Lithuanian to your own language when prod is running into issues, because the original devs are long gone and there is no documentation, nor examples of how to use anything in place.

Thus, it is better to use frameworks that have tens of thousands of collective man hours put into them, or even more, as long as they don't suck.

So, in the end, it's probably better to use frameworks that keep you closer to the code that is actually running, like Spring Boot (where at least parts of the config can be done in plain Java), or even look at ones like .NET (the newer ones are a bit like Spring Boot), rather than DSL hell.

You can put a breakpoint in your Java code and see why something isn't initializing properly. You cannot put a breakpoint into your .properties, .xml or .yaml file (or even feasibly find where to put into the over-abstracted configuration read mechanism, or find how it connects to the bit that is actually initializing your data).


> Why are frameworks popular? Because...

That's an extremely open question, and I wouldn't agree that frameworks, in any way, "provide a way for people to collaborate" any more than a whip does. They seem to me to limit the ability to write the code you want for no value other than providing confirmation bias.

The idea that the same thing being output on the screen will have two implementations by two uncommunicative engineers that vary in such a way that they become impractical, impossible, or even at all harder to debug is simply a myth. I highlighted the word "engineer" because that's the key point - if you are hired to solve a problem, you are probably going to solve it the easiest and most convenient way. You are trusted to write code, not use an API. I find it hard to buy the idea that 20-year-old aspx code is any easier to debug than pure JavaScript written around that time.

As far as why frameworks have become the norm, I would personally put it down to, in no particular order, the following ideas:

- Some are genuinely useful - functionality is limited in the parent environment's API and must be implemented in some way. This is rare, in my (mild) experience.

- The company had learned the lesson that contractors (especially ones that win the bidding wars) write horrific and unmanageable code. Therefore, it decided that a cost-free way to alleviate this issue is to force all engineers, permanent or on-contract, to use a specific framework.

- The above point spread between companies as people (directors of engineering) shifted around, word spread that someone was able to make significant cost savings etc.

- They are in place to allow one to follow idioms better. I recently interviewed for a company that followed, quite closely, the Clean Architecture book by Robert C. Martin, with their own internal framework. They had a good plan in mind - they wanted components to be interchangeable, usually at compile time, and needed to ensure that they followed procedure in order to write code that was manageable at scale in order to achieve this. They made their own framework and have used it for 10+ years. This proper, elegant use of a framework is very rare.

I do not mean to say that frameworks are useless - however, jQuery has always been absolutely worthless to me in my personal endeavours - which often included quite bespoke DOM manipulations - the exact place that jQuery is meant to shine. To me, it is a great example of something that was phased in as a Band-Aid solution to people writing terrible code as they had no ownership of the product.

I will admit that I use little helpers, such as

  function $(usuallyASwearWord){
      // Yes I really use element by id for this when writing my own fun code. Not query selector. 
      return document.getElementById(usuallyASwearWord);
  }

and I wouldn't be using these if jQuery had never exploded in popularity - but it's just a skeuomorph. A cute little oddity that has some minor, pointless, boring history/flavour behind it. I don't think that jQuery has provided the world with any significant net effort savings, and it is far from the only framework that will have this legacy.


I've started and quit web dev projects about 4 times in the past 5 years. I'm a distributed systems dev. The complexity of the recommended toolchain is mind-boggling (and I consider my day job, which I have a decade of experience doing, incredibly complex). I would have said I hate React.

Finally started another project. No frameworks this time. Started with HTML and a single JS file until I needed more, and solved the problems step by step. I'm at the point that I now see the benefits of react, but only by doing it this way could I possibly learn why.


>I'm at the point that I now see the benefits of react, but only by doing it this way could I possibly learn why.

That's why I feel bad for the new generation of developers. When I started developing in the 2000s it was enough to know low-level abstractions, some design patterns, etc. Along the way I've seen the context in which e.g. MVC was revolutionary, and how all the latest frameworks actually solved a lot of real problems (SQL queries scattered through PHP garbage all over the backend, jQuery/MooTools/ExtJS/CoffeeScript/AngularJS, supporting IE6, etc.).

A lot of modern tools make sense to me because I know the context they came out of. I feel we've done a shit job of preserving this context and the lessons that led to current frameworks. This will eventually lead people to rediscover all the mistakes that were hit in the past, but also saddle us with legacy burden, because it's hard to distinguish what's relevant and what's a result of historic constraints.


I have completely given up on web dev these days.


Backend is worse, they’ll make you use the Spring “framework” that makes React look like a dream.


We have a basic template that, when empty, takes about 20 minutes to compile on a very new laptop and builds to a 200MB+ jar.


Something sounds wrong, but I don't know exactly what you mean by "basic template"? Like a Spring boot example app?


A couple annotations and a base application class and you're up and running with a REST service. That's considered hard?


Translated to React: a single render function and you're up and running with a webpage. That's considered hard?
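Literally a sketch along these lines (React 18's createRoot, no JSX, and assuming the page has an element with id "root"):

  import React from 'react';
  import { createRoot } from 'react-dom/client';

  // one component, one render call -- that's the whole "hello world" surface
  function App() {
    return React.createElement('h1', null, 'Up and running');
  }

  createRoot(document.getElementById('root')).render(React.createElement(App));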


Spring is a simple MVC framework, it follows the same patterns as similar Ruby, Python, C#, PHP etc frameworks. React very much blazes its own complicated trail.


That statement is beyond absurd. The API of React is several orders of magnitude smaller than Spring and similar MVC frameworks with the same features. There is even a React replacement with feature parity, Preact, that clocks in at around 3kB.


Spring is a powerful but pretty complicated DI framework. That MVC stuff is also there but for many use cases (microservices, batch, CLI) it’s not really helpful.


Well, nowadays anyway. It's come a long way from the point where an AbstractSingletonProxyFactoryBean seemed like a viable solution to a problem in the framework.


One major problem React solves is HTML injection. It used to be an interview question to work at our place, and since React, developers don’t often realize that old frameworks like Freemarker didn’t escape HTML. By default.


I remember HTML escaping being a problem in the early 2000s, but promptly solved. It's been solved in every language for over a decade. React didn't do jack to solve it because it was already table stakes and had been for over a decade.


Genuine question, what do you mean escape html by default?


If someone submits a text field with the contents `<bold>Injection</bold>` that will then be reflected on the site (e.g. a comment section), React will display the literal string rather than bolded text.
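Roughly like this (a sketch; `userInput` stands in for whatever came out of the form):

  import React from 'react';

  const userInput = '<bold>Injection</bold>';

  // interpolated text is escaped by default: the tags show up as literal characters
  const safe = React.createElement('div', null, userInput);

  // opting out has to be spelled out explicitly, and the prop name warns you
  const unsafe = React.createElement('div', {
    dangerouslySetInnerHTML: { __html: userInput },
  });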


Ah I see! Thanks


This is what I have always really disliked about many web development frameworks. Their goal is either a) to do things magically in a specific way or b) to do so with a bunch of configuration files. Spring comes to mind as a particularly egregious case that does both. Spring does things automatically, until you need to add a configuration file for every change. You need to add this wrapper and this library for this other case. I have always wanted a web development framework that works more like a game development framework, where you still code everything you need but it doesn't hide the part where you make a window or add a listener. That makes it a lot easier to find where things are going wrong and how to change the configuration, as opposed to going to Stack Overflow and learning you need to add this dependency which then does things behind the scenes.
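Node's built-in http module is close to that spirit: you create the server and attach the listener yourself, and nothing is wired up behind your back (a minimal sketch):

  const http = require('node:http');

  // nothing hidden: you make the server, you write the listener, you start it
  const server = http.createServer((req, res) => {
    if (req.method === 'GET' && req.url === '/health') {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('ok');
      return;
    }
    res.writeHead(404);
    res.end();
  });

  server.listen(8080);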


This is actually why Rails was so successful. The world of Java was frameworks full of configuration files and Rails went with the "convention over configuration" approach.

You jump from Rails project to Rails project and they all work the same way. You know where everything is immediately. It's fantastic for teams and long term maintenance between different devs because the learning curve of each project is virtually non-existent.

But Rails has gotten bigger and more complicated over the years, which makes it tougher to make the case.


> You jump from Rails project to Rails project and they all work the same way. You know where everything is immediately.

Except lots of projects today use very diverse structures, and the meat of the code is often in complex libraries like ActiveInteractors, Trailblazer Operations, stuff from the Dry-RB ecosystem, etc., or sometimes (hopefully) simple Service Pattern classes, if you're lucky. Sometimes all of them at the same time. For lots of companies, Rails projects in the real world have ceased being easy to grasp for a while now.


Yea, this has been my experience as well. That's what I meant by the "bigger and more complicated".

Certainly set the standard for a long time though.


Node.js + Express is what you want. I'm not a Go developer, but it seems to me that the Go ecosystem also has what you want.


Can you give more details? My understanding is that Node + Express is really low level, providing almost nothing except a set of chainable functions based on routes. To get any actual functionality (storing to databases, authentication, users, sysadmin, etc.) requires writing everything yourself.
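My mental model of its entire surface is roughly this (a sketch, happy to be corrected):

  const express = require('express');
  const app = express();

  app.use(express.json()); // even body parsing is opt-in middleware

  app.get('/widgets', (req, res) => {
    res.json([]); // storage, auth, users, admin -- all on you
  });

  app.listen(3000);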

The reason I like batteries-included frameworks is I get all that for free, and ideally it's already following best practices so I don't have to worry about security, about account recovery, about versioning new data/features, about scaling, etc.


> don't have to worry about security, about account recovery, about versioning new data/features, about scaling, etc

With the exception of security (I take it you mean authentication), everything else is unlikely to be provided by a framework like Spring directly - you'd have to choose something to provide it or write it yourself.


Rails provided many of those things. I believe Django does. Meteor used to provide many of them.


I tell every junior colleague two things:

1. There is software engineering, and then there’s working at the digital factory.

2. A good way to set yourself on the path that leads to the former is to get intimately familiar with the HTTP protocol. A good way to start is to reject fancy frameworks, and to get comfortable reading RFCs.

Being the guy who really knows HTTP is a powerful career move. Becoming that person almost inevitably kicks off a virtuous cycle that steers bright/motivated devs well clear of writing REST endpoints for the rest of their working lives.


I don't think I can really articulate why but this smells a lot like the "real programmers use [neo]vim" or "real developers write C++" kind of gatekeeping that was pretty widespread and even expected/accepted a few decades ago, but really has no place now.

Being intimately familiar with HTTP is fine, reading a lot of RFCs is fine, but there are plenty of well respected, well compensated, influential engineers who do neither, and have no need for either one.


I see what you mean, but you’ll note that I’m neither saying that it’s the only way, nor am I saying that it’s required. What I’m instead doing is trying to cultivate a deep understanding of one’s tools and protocols, which is absolutely the hallmark of a competent engineer.

I concede that this may superficially resemble the aberration you describe, but insist that it’s actually very different.


My experience is that to do any kind of engineering and most kinds of science really well, you need to have a deep understanding one layer down and one layer up.

The HTTP comment seems off-base. HTTP is neither particularly important nor interesting as a protocol. You can spend a half-hour with netcat and have a few conversations with your web server, and you're set.
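(Or the same exercise in a few lines of Node if netcat isn't handy - a sketch:)

  const net = require('node:net');

  // speak raw HTTP/1.1 to a server by hand, netcat-style
  const socket = net.connect(80, 'example.com', () => {
    socket.write('GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n');
  });
  socket.on('data', (chunk) => process.stdout.write(chunk));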

The concept more broadly is spot-on. The "I don't need to know [x] to do my job" crowd is universally mediocre at their job.

For a programmer, the set of things one layer down is important, be that cache/memory hierarchies, low-level programming, operating systems, databases, compilers/interpreters, or otherwise. So is one layer up, which is usually business and product understanding.

I'll also mention: Mediocrity at one's job is okay. There are plenty of wonderful people who have work-life balance and do interesting things outside of their job, and just want a pay check. There are people happy to be working on systems I'd be miserable working on -- payroll, banking, inventory, etc. -- and I'm glad they're around. A few of them are among my favorite people too.


Respectfully, if you think http is unimportant or uninteresting, then you don’t know it very well. It’s exceedingly interesting from a linguistic point of view, to name but one; the notion of verb + complement of the object is fundamental to http, and sheds light on where it tends to be the _wrong_ choice. This inevitably leads to non-trivial design discussions.

And the comment about netcat happens to be exactly what I'm advocating.

As for mediocrity, sure. But it should be voluntary, not imposed by a lack of options.


I think this: "Respectfully, if you think http is unimportant or uninteresting, then you don’t know it very well" is fairly presumptuous.

HTTP has no interesting theoretical basis, and doesn't really generalize to much. Like most early internet protocols, it's kind of random and ad-hoc, with incremental hacks and improvements like keep-alive added later. You can pick up any RFC -- POP, IMAP, or whatever -- and get just as much out of it. That's not nothing; they're worth knowing, but here's a short list of protocols and data standards which have a theoretical basis and will teach you something deeper:

- git. Brilliant data standard.

- SQL. And relational database theory.

- SGML. Yes, the original. Much juicier than XML in design.

- Cryptographic protocols. Pick up a copy of Applied Cryptography, and read the chapter.

- A survey of the early RPC and object embedding protocols (OpenDoc, OLE, etc.)

- TCP/IP

- Any of the various compression and/or error correction protocols.

- Printer standards, especially PostScript

- Video, audio, or image compression standards.

.. and so on.

Some of those require months of study.


I would say that mediocrity is okay as long as you don't force others to follow it. Sometimes I feel as if there is some sort of "militant mediocrity" happening in some corners of our profession, where anyone even remotely interested in digging one layer down is not only looked down on as not being enlightened enough, but also seen as committing the capital sin of acting against business interests and walking outside the predetermined boundaries of their programming caste. Cue horror stories of cowboy coders and NIH.


That's a good way of looking at domain knowledge "one layer up". I have been terrible at this in the past and I have been making an effort to improve. Business knowledge seems to provide the most leverage, especially in fields like finance.

My problem is I find the lower-level stuff way, way more interesting. Heck, I'm doing a Udemy course on network programming which arguably won't help my career; I just have a secret shame about not really knowing networking. (Plus it's really interesting.)


If I'm hiring a SWE that will touch anything web I expect them to know the basics of HTTP and to be capable of referencing/digesting an RFC. It's a common interview question for these roles to outright ask, "how does the internet work?" gesturing to a whiteboard. It's a reasonable stance to take.

Overall I agree with your gate-keeping comment - that's how people interview/hire. I disagree that it "has no place" - if I'm hiring a mid-to-senior engineer that's web-capable I am 100% gate-keeping to make sure they have an intimate understanding of HTTP + are capable/comfortable reading RFCs.


> how does the internet work?

It's like a series of tubes. That's why SWE work involves so much plumbing and getting deep into all sorts of $#!+.


While Internet is a series of tubes, SQL is a sewer: You need a lot of layers to interact with it because it’s very dirty. Real engineers have Liquibase over Hibernate and completely hide SQL from developers. Also it’s extremely slow, I don’t know why old people like it. That’s why NoSQL works better when you have large databases, like a thousand documents or more.

Also, React is not slow at displaying tables. If it’s slow, you can use plugins to hide cells which are not in the viewport.

(Old farts like me manage a billion records and display tables of 7k rows x 100 columns in a webpage in mere milliseconds, a joy that React developers will never understand. Why do you need more than 10 lines if you have pagination, they say).


I would say that if a web dev can articulate HTTP verbs, their uses, and how to design a cohesive naming strategy and documentation scheme, that's all they need to be competent. There is no need to read the HTTP RFCs for most design tasks. I can understand those requirements for Google, AWS, Facebook devs, but 90% of the work out there doesn't need that level of understanding to make something decently performant.
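
To make "verbs plus a cohesive naming strategy" concrete, this is roughly the level I mean (a minimal sketch using Go 1.22+'s net/http pattern routing; the resource names are made up):

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()

        // One noun per resource; the verbs carry the semantics.
        mux.HandleFunc("GET /orders", listOrders)          // list / search
        mux.HandleFunc("POST /orders", createOrder)        // create
        mux.HandleFunc("GET /orders/{id}", getOrder)       // fetch one
        mux.HandleFunc("PUT /orders/{id}", replaceOrder)   // full update
        mux.HandleFunc("DELETE /orders/{id}", deleteOrder) // delete

        log.Fatal(http.ListenAndServe(":8080", mux))
    }

    func getOrder(w http.ResponseWriter, r *http.Request) {
        id := r.PathValue("id") // Go 1.22+ path wildcard
        _ = json.NewEncoder(w).Encode(map[string]string{"id": id})
    }

    // Stubs so the sketch compiles.
    func listOrders(w http.ResponseWriter, r *http.Request)   {}
    func createOrder(w http.ResponseWriter, r *http.Request)  {}
    func replaceOrder(w http.ResponseWriter, r *http.Request) {}
    func deleteOrder(w http.ResponseWriter, r *http.Request)  {}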


90% is an exaggeration. Even someone doing CRUD has to be familiar not only with HTTP verbs, but also some headers, response codes, query strings, request bodies, understand how caching works, etc, at some point. The only way to avoid it is only working for years on the simplest of projects.

They don't have to read RFCs in bed before sleep, but they should at least know the gist of what they contain.


All those things that you listed can be learned without reading the RFCs though, no? But yeah, I agree with the list you wrote. That's what I had in mind.


That's why I wrote "They don't have to read RFCs in bed before sleep, but they should at least know the gist of what they contain."


I bet this is what a poor junior's interviews look like out there:

Interview 0:

"I expect them to know the basics of HTTP and to be capable of referencing/digesting an RFC"

Interview 1:

"All devs should be up to speed on the mysterious world of character sets, encodings, Unicode, all that stuff"

Interview 2:

"Any real programmer can reverse a string in C without a buffer"

Interview 3:

"I expect anyone working with k8s to understand the cgroup fundamentals."

Interview 4:

"No really. What is html5?"


Why would you need to be capable of referencing a HTTP RFC?

Are you doing something fancy with HTTP?


Would it count as “fancy” to find out whether header names are case sensitive? Plenty of useful things in there.
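
(For what it's worth: per the HTTP RFCs, field names are case-insensitive, and decent client libraries bake that in. A minimal Go sketch, since net/http is handy:)

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        h := http.Header{}
        h.Set("content-type", "application/json") // stored under the canonical key "Content-Type"

        fmt.Println(h.Get("CONTENT-TYPE")) // "application/json": Get canonicalizes the lookup too
        fmt.Println(h["content-type"])     // []: raw map access skips canonicalization
    }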


I certainly don't think that's a thing someone needs to know to be a useful web developer. I've certainly never needed to care, although I wouldn't classify myself as web in particular.


I've never needed to look up if headers were case sensitive, it's just never caused an issue in my 10 years experience.


Referencing, no. But being able to look up the answer to a tricky edge case in the RFC is a valuable skill.


That's a really odd set of things to gatekeep on.


What, fundamental knowledge and capability?


Fundamental for those whose job is building web servers, sure. I'd expect nothing less from the authors of Apache, nginx, etc.


Well respected and compensated engineers usually have a depth of experience that extends beyond their REST framework. That’s what the parent was advocating.


> "real programmers use [neo]vim" or "real developers write C++"

Real programmers know how to do both even if they don't do them exclusively.


The idea that you're not a real programmer if you can't use a specific editor is idiotic.


The idea that not knowing something makes you smarter is idiotic.


Feel free to quote where anyone here is saying not knowing something makes you smarter.


If reading RFCs is a gate, then it is a pretty bad one. They are public, free, well known, and generally well written.


Also unnecessary to read for 99% of webdev work.


Why is gatekeeping always considered to be a bad thing?


Gatekeeping is not always a bad thing, but OP implemented a clear sample of the No True Scotsman fallacy to draw a rather arbitrary and mostly useless line to put themselves in the imaginary "real programmers" group.


It does feel like the barrier to entry for that kind of deep knowledge career is increasing these days.

Maybe it's because I live outside the United States, but it seems like it's much harder to build a "pathway" into work that demands that intensive, deep understanding of engineering on computers. At least one that'll pan out in my 9-5.

The supply of problems that require deep knowledge vs framework Kung Fu... my impression from ground level is they're tilting more towards GitHub martial arts every year.

The supply of companies outside the United States with a deep interest in these topics is sparse, to say the least.


I find the best path, for me, is to be the guy who knows HTTP pretty well but I don't let anyone else know. Once they find out I know things I get tapped to fix all the old problems instead of build new problems, then I have to leave because the work becomes miserable.


Yes and I tell junior developers the only way to be good is to know assembly language like I had to know. /s


This strikes a nerve with me. Lately my work has started taking apart our website and shifting functionality to 3rd party vendors in an effort to limit liability. We're going to end up with something far less versatile, uglier and ironically less secure than what we started with. My focus is shifting to a bunch of API stuff that I just don't find very rewarding.


I feel you.

In my role as a software architect in a small company I've been booting out various "frameworks" and overly invasive libraries, because it turned out they are not delivering much, while requiring attention and forcing us to bend our code to satisfy their demands.

First ones to go were various cloud libraries: AWS, Google Cloud, CloudStack. It's all plain REST. I estimate that the code that talks to Google Cloud directly is shorter than it was when the library was involved.

Docker went (it's REST with a few quirks). The Kubernetes client went (same). The GitHub client went.

gRPC went (fortunately we're not required to talk gRPC anywhere except a few endpoints, and it turned out that Protobuf is not hard to implement if you don't have tons of schemas, and gRPC is HTTP/2 + Protobuf).

Finally I booted the Mongo client library, because all we needed was a few requests, and the implementation of the Mongo wire protocol + BSON ended up costing us ~300 lines of code.

Remaining libraries earn their keep: there is cryptography, XPath, JSONPath, compressors, a userland IP stack, complex serialization formats. I thank the authors of these libraries, and for some reason these libraries are also the least complicated and most compatible version-over-version.
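
To give a flavor of what "it's plain REST" means in practice, this is the kind of thin wrapper that replaces an SDK call (a minimal sketch against a hypothetical endpoint; a real one needs auth refresh, retries and error mapping on top):

    package cloud

    import (
        "context"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // Instance is the subset of the response we actually care about.
    type Instance struct {
        ID   string `json:"id"`
        Name string `json:"name"`
    }

    // ListInstances talks to a made-up REST endpoint directly instead of
    // pulling in a vendor SDK. Auth here is just a bearer token header.
    func ListInstances(ctx context.Context, c *http.Client, baseURL, token string) ([]Instance, error) {
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, baseURL+"/v1/instances", nil)
        if err != nil {
            return nil, err
        }
        req.Header.Set("Authorization", "Bearer "+token)

        resp, err := c.Do(req)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()

        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("list instances: unexpected status %s", resp.Status)
        }

        var out struct {
            Items []Instance `json:"items"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            return nil, err
        }
        return out.Items, nil
    }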


When you booted the Mongo client library, what did you use in its place? Were you connecting to a cluster or a single node? Lightweight clients are always interesting because of the trade-offs they make.


Every time I see a framework advertise how quickly you can bootstrap with it, I now consider that a red flag. The more automagic, the more you can't ever figure out why something goes wrong. It's been said that most products spend 99% of their lifetime in maintenance. If a framework wants to impress me, show me how clear the stack traces and IDE integration are. Show me how crystal clear it is to trace the execution path.


What are you talking about?

What's not to like about the SimpleBeanFactoryAwareAspectInstanceFactory [1]?

[1] yes, this exists


That I couldn't find the advanced one or the generic legume factory and ended up having to write my own as the owner wasn't currently accepting pull requests?



Writing raw anything is not necessarily your job.

However, a lot of people have an interesting view of things like frameworks, ORMs, and other abstractions. To some, it feels like pure all-or-nothing; but what if it didn’t have to be?

Even the biggest hater of SQL database abstractions will admit there are reasons why someone would want them. But I wonder if there is an answer somewhere in the middle that could go both ways. For example, sqlc[1] provides basically all of the type safety that overbearing database frameworks might, but it lets you write plain old SQL just as you normally would, and integrates cleanly with your existing migrations solution. It doesn't solve everything, but I have a feeling more people would like tradeoffs like these: an enormous amount of complexity does exist to, say, parse SQL and emulate DDL transactions for the sake of typing, but it stays entirely in the tool itself, and the resulting code and interfaces are completely simple and obvious.

I believe that there are multiple better middle grounds waiting to be discovered in any given niche where the options currently feel like barely-worth-it trade-offs.

[1]: https://sqlc.dev


> Even the biggest hater of SQL database abstractions will admit there are reasons why someone would want them.

There is only one reason, and it's that people don't understand how to use SQL, and that's because it's not taught in computer science degrees, and this also has strong ties to the battle between Sun and Oracle.

And this makes database developers expensive and hard to hire.

There is no reason to use an ORM if you know SQL and database modelling. The relational model is by far the most powerful tool to solve CRUD use cases, which "web development" is, and that's just simply a fact.

What the ORM will do for you is automate some boilerplate mapper code, which might be a bit boring to write, but is very simple. The downside is that it will make all of your database modelling, querying, access etc. orders of magnitude more obscure and complex compared to just using SQL. And now you will waste enormous amounts of time fighting with the framework to debug the ORM and find out what kind of weird SQL statements it generates, whereas you would have complete control over this if you just wrote your own.
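
To be concrete, the boilerplate mapper in question is roughly this kind of thing (a minimal sketch with Go's database/sql; the table and columns are made up):

    package store

    import (
        "context"
        "database/sql"
        "time"
    )

    type User struct {
        ID        int64
        Email     string
        CreatedAt time.Time
    }

    // GetUserByID is the kind of mapper an ORM would generate: a parameterized
    // query plus a Scan into a struct. Boring to write, but fully transparent.
    func GetUserByID(ctx context.Context, db *sql.DB, id int64) (*User, error) {
        const q = `SELECT id, email, created_at FROM users WHERE id = $1`

        var u User
        if err := db.QueryRowContext(ctx, q, id).Scan(&u.ID, &u.Email, &u.CreatedAt); err != nil {
            return nil, err
        }
        return &u, nil
    }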

The obvious solution that would benefit everyone is to simply train your developers to use SQL, if they don't already know it, and let them take a course if they need to. Not to slap on some stupid framework and make a huge unmanageable mess of the whole system.


> There is only one reason, and it's that people don't understand how to use SQL

Unfortunately, I immediately think you have misunderstood. While ORMs and other database abstractions may be a boon to folks who don’t know SQL, or at least some may think that, it is not the only reason that people use them. It may not even be one of the bigger reasons.

Type safety is first and foremost a great example. Wouldn’t it be great if you could literally not ship SQL code that has typos, syntax errors, or other query errors? Unit tests catch those, but you have to actually write them, and they still can’t help if you hold things wrong. (Like using sprintf for SQL queries, god help us. Yes, that is a mistake no professional should make. But if you add enough layers of complexity, you might just fool someone who knows better into doing basically that with extra steps. There are some ways to combat this of course, like PEP 675[1].) In some environments you might accomplish this with compile-time linting or advanced metaprogramming. However, ORMs also do this right out of the box generally, at least in languages where it can be done.

But even then, that doesn’t scratch the surface of what’s possible. Mind you, with sqlc, we’re still talking about making raw SQL queries. But those SQL queries contain tons of information that gets duplicated by the programmer. For example, it is possible to statically know what types of inputs and output columns any given query will have based on just the schema and the query itself. sqlc gives you this information. It generates correct wrappers for SQL code using only raw SQL queries as input.

This is not the only benefit of abstractions of course. And sqlc doesn’t give you all of the benefits of an ORM. However, that’s exactly what I’m trying to say: a lot of people have an idea in their head about what an abstraction must entail, but there’s probably better tradeoffs that are genuinely not susceptible to the same issues that you’re imagining. Case in point, I have a feeling you interpreted my post to only be regarding ORMs, but even the text you quoted is more general—I said “database abstractions” for a reason :)

[1]: https://peps.python.org/pep-0675/
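
To make that concrete, the sqlc workflow is roughly: you write an annotated SQL query, and the tool emits a typed Go method for it. The sketch below is hand-simplified from memory of the docs, so the names and exact shape of the generated code are illustrative; see sqlc.dev for the real output.

    package db

    import (
        "context"
        "database/sql"
    )

    // In query.sql you write plain SQL plus an annotation:
    //
    //     -- name: GetAuthor :one
    //     SELECT id, name, bio FROM authors WHERE id = $1;
    //
    // and sqlc emits a typed method along these lines, with the parameter and
    // result types inferred from the schema and the query itself:

    type Author struct {
        ID   int64
        Name string
        Bio  sql.NullString
    }

    type Queries struct{ db *sql.DB }

    func (q *Queries) GetAuthor(ctx context.Context, id int64) (Author, error) {
        row := q.db.QueryRowContext(ctx,
            "SELECT id, name, bio FROM authors WHERE id = $1", id)
        var a Author
        err := row.Scan(&a.ID, &a.Name, &a.Bio)
        return a, err
    }

A typo in the query or a column that no longer exists fails at generation/compile time instead of in production, and you still read and write plain SQL.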


I disagree.

How would you be able to validate data that you can map to your tables without hitting the db?

If you wanted to add a new row of data to one of your tables, you would have to hit the db, get row columns and make sure that data fits.

Or you could write out your table structure separately outside of the db, and validate against that, but that would just leave you with two sources of truth which is an antipattern.

Or you could just use an ORM.

I'm fluent in SQL, and understand the solutions that ORMs solve - it's not just a pure abstraction over sql queries, but the tooling and utility provided wrapping around that is what makes it so powerful.


I've known SQL for close to 25 years. Composing queries programmatically is much easier and less error-prone than raw SQL when using an ORM.

In the case of C# Entity Framework, the ORM abstraction- LINQ - is a first class language feature with full compile time type checking and auto complete.

The same LINQ query can work across databases and in-memory collections, including RDBMSs and Mongo. Eventually, in any object-oriented language, you're going to end up doing "something" to convert relational data to objects.


> Composing queries programmatically is much easier and less error prone than raw sql when using an ORM.

This is only true for the most trivial of use cases that will not really bring any value to your users. As soon as you get more advanced feature requests, you need the power of the relational model, which is now abstracted away behind a much less powerful interface.

Edit: more advanced, or even just a higher number of orthogonal use cases against the same data.


Have you used LINQ? LINQ is basically just a slight variant of SQL. All data is related; the only difference with a document-type model like Mongo or ElasticSearch is how it is stored and retrieved.


I've only been introduced to it, but I did get the impression that LINQ is probably the best SQL abstraction, and that it's much better than Hibernate, which is what I have the most experience with.

I think the whole framework situation in general is much better in Microsoft land because it's one company that designs the whole ecosystem and overall experience, and they're also probably the people with the most experience in software development. I even think VB6 is superior to the standard open source web development stack used today. The only part where MS seems to be behind is version control.


Oh Hibernate sucks. LINQ is built into the language not a bolted on library.

How can MS be behind in version control? TFS has had git support for ages and MS owns GitHub.


I found it significantly easier to set up table structures and migrations through ORMs and abstractions as opposed to raw sql scripts.

Moreover, many ORMs give you the tooling to validate data before hitting the db, which is also another huge plus.

I mainly work with Python on the backend, and I've recently switched from SQLAlchemy + marshmallow + alembic to SQLModel and both have been amazing to work with.

I would never dream of setting up tables through plain raw sql ever again.

If it is the case that I'm moving on to a different language (Go may be it), I'm willing to learn its ORM, granted that it deals with migrations and validations out of the box.


I get this sentiment. But I think it's not so much that there's a framework, but rather that the framework is an ill fit for the problem to be solved. So you "fight" with it.

In general, it seems like the easier it is to do something in the scope of the framework, the harder it is to bend the framework to do something outside its scope.

We've all seen this with frameworks that make 95% of the work really easy, and the remaining 5% is like pulling teeth. You're bending the thing where it's not meant to be bent.

(And that's when new frameworks are made. :) )

But if you go lower, under the framework, it's way more flexible. As a dev, you can explore and create instead of fighting. Which is great.

But really we're just on a different turtle. Raw JS has a really hard time with some problems when you try to bend it wrong.

Our job is to take the right technologies for the job and make them work, no matter how abstracted they are. The pain is merely from the wrong tool. And, truly, it sucks when you have to do that.


We use a pretty jank framework at my professional gig, and are thankfully finally beginning to plan a migration to the much better documented framework it rips off. A full rewrite is almost never worth it, but between the already poor and now mostly out of date documentation, and the apparently diminishing community, there's not really a better route.

All that said, every other framework I've ever used in my personal projects has fine documentation and community support, including their peripheral deployment tooling. My experience at work seems like an outlier to other tools I see. Comments like the original post and OP make me think everyone is working in legacy systems based on aged tech from the cowboy era.


Yes, this. There's a reason I avoid complex abstraction layers that helpfully "hide" the underlying platform from me. So, yes, in my hobby project that's a server application in Java, I use a simple, easy to understand micro-framework (sparkjava.com), a simple template engine (pebble), and a homebrew SQL query builder on the backend, and vanilla JS (compiled from TypeScript) on the frontend.

Every dependency is a liability, especially the kind you can't remove without rewriting your entire project. Adding a library to my project is a very serious thing for me.


Company: “We don't want you to write code, because finding ppl who understand the code is hard, while finding ppl who eat generic pulp is easy”


As a recently-retired Magento developer of ten years... this speaks to my very soul. I've sworn to never use Magento again.


> "You don't want to write "raw" html/js/sql/Java" Uhm I thought that was my job.

I mean it isn't your job and it shouldn't be. Your job is to make software that handles some business process with consistency, or becomes an extended memory for some business people.


Right. The hard line on this comes from aerospace. If A won't connect to B, you check the spec. If A isn't compliant with the spec, A is wrong and the vendor of A has to fix it. Same for B. If you can't tell who's wrong from the spec, the spec is wrong. This is why you can unplug a Pratt and Whitney engine from an aircraft and plug in a Rolls-Royce engine.

Now, this worked a lot better when aerospace companies and the USAF had real clout with manufacturers. If it wasn't in compliance with the spec, you put a red tag on it and rejected it. The supplier didn't get paid.

The way it works today is that if the supplier is bigger than you are, it's your problem.


And that kind of spec'ing and compliance is ultimately rooted in legal threats: loss of life leading to loss of money.

Until software actively kills people (enough that it costs someone a lot of money), we won't see movement in this area.


To your point, most software comes with a license that says "this comes with no guarantee for any purpose, we are not liable under any circumstances".

I remember reading licenses as a teenager and wondering about their legality.


The problem of software specification in that comprehensive sense is unsolved.


That's how computer hardware used to work: see "IBM PC". I have no idea if it still works that way. I haven't seen the insides of a computer for ages now.

Hardware is constrained by the costs of manufacturing at scale. I can't make my own computer as I want it (can't find a hand-cranked punched-card reader, or a steam-powered dot matrix printer. Oh why).

With software though, everyone can write whatever they like and send it out there, where someone is bound to pick it up and reuse it. A little like the way Covid-19 spreads, I guess. No constraints, no respect for specs.


I don't really think a lot of what goes on under the umbrella of 'software development' can really be classified as 'engineering'. Some of it certainly can, but once you reach a certain point of abstraction away from the underlying hardware, it's more like 'composing' than 'engineering', or if you want to have a more prestigious label, 'architecturing' or maybe 'designing'.

https://study.com/academy/lesson/what-is-engineering-definit...


As a software engineer, I never understood this stance.

Engineering is composition. Engineers look in books to figure out which existing construction technique to use for building the bridge. They're there to ensure there is a bridge still standing in a year, and the best way to do that is composition. That requires knowing the tools, techniques and QA requirements, which is what software development is too.

They don't invent new materials and designs. That's what doctors/researchers and architects do. Or computer scientists in our case.


That's a weird definition of engineering. I think you are confusing implementation (construction technique) which is of course valuable, with the more general principle of designing in safety margins, redundancies, and tolerances for specific loads/forces. This is a lot of applied math.


But there's a difference between "I have learned how to use applied mathematics to design a specific physical deliverable while ensuring safety margins and tolerances" and "I am contributing to the state of the art in the general theorems used by my entire industry." Some civil, chemical, mechanical etc. engineers do both! But most engineers spend most of their time on the former.

And what is software engineering but exactly that dynamic? Relational algebra and distributed systems design are most certainly based on applied mathematics, and one uses tooling built around the underlying theorems to ensure that parts mesh together in a reliable way that scales under load. Some engineers take their learnings from specific projects and write papers, and that's amazing, but that doesn't mean that if you don't have an Arxiv account you're not an engineer.

We're not just fire-and-forget script writers; we're engineers and we draw on decades of an industry's worth of deep knowledge to ensure our systems are rock-solid. I hope everyone who considers themselves "just a software developer" finds a team that appreciates this.


Those safety margin and tolerance calculations are based on existing models (what I called "tables"), combined to form a new whole. What I would call composition.

Whether math is being applied doesn't seem like an important distinction to whether something is composition or not. I reason about computational complexity when I build distributed systems, just like civil engineers reason about tolerances. But I'm still assuming a best-of-class sorting algorithm and a reduction of some domain problem into a shortest-path algorithm. The same they do about I-beams and spans.


The same thing can be said about "computer science", perhaps to an even greater degree. Anyone who comes even remotely close to performing CS almost certainly isn't using anything resembling the scientific method, and the difference between engineering and applied CS is blurry to say the least.

Maybe the vast majority of us are just programmers making the internet go.


Calling it a science does seem to have been an odd decision. It looks more like a branch of mathematics, from the outside at least.

Engineering... I dunno. The digital realm doesn't seem, to me, to have enough of a split between "design" and "actual device," so we can't really split the job into the engineer's task and the manufacturer/builder/etc.

One annoying thing about this whole "software engineering" discussion is that it often casts the engineers as gatekeeping or being snooty and not wanting to include the programmers. It just seems like a different type of job, though, still perfectly respectable and decently prestigious. Nobody gets angry that we can't be called "Software Chefs" or "Software Conductors."


> It looks more like a branch of mathematics, from the outside at least.

It was a branch of mathematics!

https://en.wikipedia.org/wiki/History_of_computer_science


> Anyone who comes even remotely close to performing CS almost certainly isn't using anything resembling the scientific method, and the difference between engineering and applied CS is blurry to say the least.

Actually that's one of the things Sussman noted about modern software development: that you have to "perform basic science" on your framework and library code by running experiments, rather than building up things from first principles.

So maybe computer science will become a science, after all!


I suspect that software engineers are not the only engineers who spend most of their time on tasks that you wouldn't classify as engineering.


Agreed. It feels like fitting pre-cut puzzle pieces. Reading code/documentation has become the most important skill.

The number of working software engineers who don't understand basic things like paging or the TCP/IP protocol or file systems is unnerving.


What could be the reason though? Too much abstraction these days? I have worked with experienced devs that have a background in Computer Science, who don't grasp basic concepts (especially when it comes to computer networking). Could it be that the emphasis in current CS education is more on theory rather than application in the real world? These days, there are more Software Engineering curriculum style studies popping up at Universities that get rid of the hardcore academic CS fluff you'd only need if you plan to stay in academia, and focus on real world applications. Perhaps that would help more.


There's too much stuff to memorize nowadays... it's getting more complex every decade. E.g. if you have to load up in your brain XYZ security details about a certain cloud provider, and also understand caching, scaling, load balancing etc. in a cloud environment, why would a person actively keep their 'OS paging' knowledge fresh in the mind? It's wasted real estate. Maybe certain developers with photographic memory can do it, where they never forget _anything_ they learn, but most developers I work with have forgotten what they haven't used in 5-10 years.


My point is that many software engineers don't really understand things you mentioned: load balancing, caching, scaling, etc. in depth and proceed with a high-level understanding to use tools that implement the low-level logic without fully understanding it. I'm not saying it's all of them but a significant number of them use the black-box approach.


If you haven’t already, you should read the crossover project by Hillel Wayne in which he interviewed a bunch of engineers that became software engineers. It’s really good. https://hillelwayne.com/post/are-we-really-engineers/


I can't agree with that and think it's too simple a declaration. I have seen very high level code that is well architected and follows engineering discipline. Whatever is underneath it is no different from what separates an ICE from the buttons you click to change the tune of the engine.


Engineering involves designing within constraints. That's not a thing that makes software "engineering" different.

https://www.teachengineering.org/populartopics/designprocess


It's a mix of composing and plumbing. I wouldn't call it architecting unless there were enough degrees of freedom that weren't locked down by earlier choices, and the designs are over quite coarse operations.


What do you call architecting?


It's akin to an elementary school craft project as well. "When in doubt, use more glue stick." UI design ends up being cutting out more and more interesting shapes from construction paper.


There’s the term software craftsmanship which comes a bit closer to the truth.

So maybe “software engineer” should be deprecated in favor of “software craftsman^H^H^Hperson”. ;)


Writing, perhaps?


I think the emphasis on the documentation is a bit of a red herring. Sure, documentation for most things could definitely be better. But when you are building a system that is a composition of subcomponents, the bulk of the complexity is almost always going to come from the interactions. Those aren't things that can be readily captured in the documentation of any single component.

This conclusion is actually pretty obvious when you consider it from a testing/dimensionality perspective. If you have 2 components with N inputs, an interesting application using both may have close to 2N inputs. Sounds pretty tame, but the size of the space grows _exponentially_ with dimension. And for a bounded space, almost all of the volume will be near the surface, which would mostly correspond to non-trivial combinations of the inputs. I think we tend to lump most of the code needed for this complexity into "business logic".
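
For a quick back-of-the-envelope on the "volume near the surface" point: for a unit hypercube, the fraction of the volume within eps of the boundary is 1 - (1 - 2*eps)^d, which races to 1 as the dimension grows. A tiny Go sketch:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        const eps = 0.05 // "near the surface" = within 5% of some face
        for _, d := range []int{2, 10, 50, 200} {
            // The interior cube has side (1 - 2*eps); the boundary shell holds the rest.
            frac := 1 - math.Pow(1-2*eps, float64(d))
            fmt.Printf("dimension %3d: %5.1f%% of the volume is near the surface\n", d, 100*frac)
        }
    }

At 2 dimensions that's about 19%; by 50 dimensions it's already over 99%.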


Ah, a variant of the classic 'the code documents itself'. Which is true unless you come to a dark place where stuff stops working randomly and you have no clue why. Or you interact with other systems which are just black boxes for you.

It's fairly trivial to write a few lines of (bad) code that are really hard to grok for anybody else, and even for you after a few months or years.

What good documentation does, and it can take just a few lines here and there, is explain why things are supposed to happen or not, expectations, corner cases etc. I haven't heard any good argument against that in the past 20 years, and I do this diligently even on code that is purely for me. Always thanking myself when debugging something old.


I didn't mean to diminish the value of documentation. It's hugely valuable. My point was that even if the documentation for all systems was good, the bulk of work in a mature software environment would still be integration, since that's where most of the complexity emerges.


You need well-defined interfaces to be able to reason meaningfully about the interactions. That usually requires comprehensive documentation, and for the implementations behind those interfaces to actually stick to the documented interface contracts.

A sibling subthread notes that it has become more of an experimental science than a mathematical science. That is getting at the heart of the matter. You lose the ability to reason about the interfaces; instead you have to poke a stick at them and see how they react, and build an incomplete and often inconsistent mental model based on that, instead of being able to rely on the interface and its documentation to provide you with a comprehensive mental model that is all you need to reason about and predict their behavior.


Well-defined and well-documented are usually orthogonal properties.

You hit the nail on the head with mention of contracts.

We need wider adoption of tooling like Pact. (Ironic link to Pact docs follows.)

https://docs.pact.io


Well, if it's well-defined but not documented, then you can't tell the definition and can't judge whether it is well-defined. If it's documented but not well-defined, then that means that the documentation is incomplete (there is "undefined" behavior) and/or inconsistent. Having a complete and consistent definition (which is what I would call "well-defined") is almost the same as having complete (describes all behaviors) and correct (the actual behaviors match the documented behaviors) documentation.


The system should have its own documentation though. It's fine if that documentation is basically just explaining all the interactions (especially if that's the bulk of your system). But that doesn't mean the individual components shouldn't have good documentation.


This seems so Twitter. No link to a resource that would even mention a study that says it is 90%, rather than say 10% .

Ok it might be that 90% of YOUR work is integrating poorly documented APIs, then why don't you say so. Or say "I think it may be the case that ...".

This bothers me because I've seen similar Twitter posts being discussed here where the Twitter post is clearly the writer's opinion but presented as a numerical fact.

Is it that people who communicate over Twitter think their opinion is the truth therefore no need to present any (links to) evidence? Does Twitter make people dumb, or just attract dumb people :-)


Is it being presented as a numerical fact? I think it's actually fairly clear that this is just the author's opinion and is not meant to be taken literally. In general things I find that signify this include using very round numbers, mentioning it as the title in a longer piece that is clearly just opinion, and not including wording like "research has shown" or "we know that."

That said, it would be great to get some actual data on this. Seems kind of hard to measure though -- while my personal projects do indeed have a good amount of API integration work, I'd find it hard to exactly say what percentage. And how are we even defining an API? Are we including eg a language's standard library?


This is simply untrue. It’s not even true in spirit. It ignores all UI programming, game development, and line of business code.

This might be true for an extremely narrow band of startup and consumer facing web programming.

It might be true for the generally low quality programmers getting churned out of Austen’s bootcamp, because they can’t do much more than the stated activity.


Every time I've gotten to do anything interesting in a UI, I have had to dive in and read the source code of the library I have to appease. Documentation simply has not been available, was explicitly marked as non-existent, or in the more hilarious cases, was outright lying.

Maemo media player? The only way to get results was to commit multiple layer violations and manually chase through at least two chains of d-pointers. (Opaque structs that are explicitly not documented.)

My own poker tracker? Welcome to Xlib internals. Mind reels at the memory alone.

Other phone software? Deep insides of Android source tree, coupled with Qt, and always - ALWAYS its -private headers.

Wayland browser? Dissect the insides of Glib, GTK+, WebKit, Pango and Cairo.

The list goes on and on. As far as I am concerned, UIs are the blighted forest where no sane person ventures alone, and few come out sane.


It can quickly spiral out of control. My best advice is to just hold back. My experience is that most people don't care about a fancy UI; they just want something that works for them.

If you can build this from common basic components it should turn out OK. I think the trick is discovering what your application needs: what kind of interface it should be and what goes where. Then it's mostly the same thing over and over again until it's not. Then this exploratory phase starts all over.

I like to draw experience from game engines. They typically build success around a few key components that have well defined constraints. As long as the right constraints are in place it's possible to make a fun experience. If you have the wrong constraints and you find yourself constantly battling the system then stuff is all wrong and you need to find a new approach.


> It ignores all UI programming, game development, and line of business code.

It is certainly not the case that UI programming and LOB code categorically does not involve integrating poorly documented APIs.

I'd be surprised if that was true categorically of gamedev, either.


UI programming - let's take websites - the DOM can be considered an API.

Game development, using literally any pre-existing function can be considered an API (libraries are APIs).

This really boils down to saying - the average line of code is used many times. How many, I guess, depends on how poorly designed the APIs are.

If the average function takes longer to use than it did to write (I'm skeptical) - then it would be <9 usages per line. If it takes less time to use a function than to write it - then it would be >9 times.


It's just someone trying to find the next edgy tweet to please their followers. Making such broad generalizing statements in a few characters is rarely successful.


same can be said of the parent's comment. just say something contrarian and not back it up.


you haven't worked in the industry long enough if you haven't encountered poorly constructed/documented API integration from vendors.

the ideal world you described does not exist on this planet.


It's APIs all the way down to the ones and zeros.


reminds me of what Sussman said re MIT moving from Scheme to Python for 6.0001

> Nowadays you muck around with incomprehensible or nonexistent man pages for software you don’t know who wrote. You have to do basic science on your libraries to see how they work, trying out different inputs and seeing how the code reacts. This is a fundamentally different job, and it needed a different course.

https://www.wisdomandwonder.com/link/2110/why-mit-switched-f...


it really is becoming more like an experimental science than a mathematical science.

it's become easier to run experiments instead of deducing things from first principles... What's really surprising is that everything about the program is constructed from first principles!

It's kinda like if math became 'math archeology' i.e. imagine if maths were really about trying to understand clay tablets and papyrus writings in order to do calculations.


>>it's become easier to run experiments instead of deducing things from first principles...

Is this really surprising to you? I mean, I always believed that our deduction skills as humans are really limited and primitive. Just look at Conway's Game of Life, or any game for that matter (chess?). The emergent properties of those games with a few basic rules are so elusive that we need generations of geniuses to work on them to discover anything beyond the simplest things. How can we expect anything better when the rules get really complex and fuzzy (as in real life)?


I guess we're all the Mechanicus of Warhammer 40k now


We're becoming more and more like the Adeptus Mechanicus from Warhammer 40k. The secrets of the technology we are using are lost to time, and we use "prayers" in the hope one sticks.


Reminds me of this talk by Jonathan Blow that I have come to appreciate[1]. Perhaps I am an odd case, but I seem to stumble onto bugs very frequently.

[1] https://www.youtube.com/watch?v=ZSRHeXYDLko


I've thought this as well. Trade "compliance" and "industry best practices" for "tech heresy" and boom, we're well on our way. At least at certain shops anyway.


Or the Zentraedi from Macross, who knew how to operate but not how to maintain their battleships, mechs, and other equipment (purposely done so they would maintain loyal battle-thralls to the Protoculture).


Dang, they got rid of Structure and Interpretation of Computer Programs? I notice the URL for their course site still has 'sicp' in it: https://sicp-s1.mit.edu/spring22/information

Makes sense though, I guess. Python is also used a lot in machine learning stuff, and that appears to be where a lot of things are heading now.


Funny, I thought of the same quote. I loved watching their old MIT lessons and reading SICP.


If you phrase it that way, you make it seem that the poor documentation somehow has something to do with it. It does not, and thinking that way leads to horrible mistakes like HATEOAS.

90% (not really, but some arbitrary and probably admittedly high percentage) of software engineering is integrating API's. You could state that as a problem but it's not really. Integrating API's is just a highly effective thing to do as a programmer. Consider the alternative, if 90% of software engineering is writing things from scratch that likely already exist elsewhere.

Integrating API's is also not a trivial sort of cost that we need to eliminate from our daily work to be more efficient. Integrating API's requires the same sort of deep thought and careful consideration as the other types of programming we do in our profession. Sure, it could often be made easier, but really the part where we have to figure out the finicky details of some API is not the hard part. The hard part is figuring out if and how we can make our use of the API fit within our requirements. That doesn't change even if all API's were some click-and-drag Lego block system.


How many of those APIs are actually wrappers around some database tables? This could be replaced with ONE API, some access controls, and a published schema.

That is until someone makes a table of "issued API commands" and then we've come full circle with more complexity.


You might be interested in thin.dev (no affiliation)


> This could be replaced with ONE API

GraphQL?


All I can say is thank god most of the stuff I work against has the source published on Github, and most of the rest is in decompiler-friendly languages.

This leaves the completely inscrutable and mostly undocumented REST APIs that I've got to trial-and-error at, always asking whether I did something wrong, the other party made a breaking change, or it's just a Tuesday and the infrastructure is derped up.


Nah, that's just 20%. The other 80% is dealing with the Scrum Agile bullshit.


It's ‘Agile’ as in ‘People's Democratic Republic’.


There should be a web of 100% JSON^1 endpoints. Let users choose their own clients. The JSON could be used in any way the user chooses. Maybe they want to make simple HTML from it. Or maybe they want to embellish it with remote Javascript-powered interactivity. It could be their choice.

What I see now as an ordinary end user, not a biased web developer, is myriad websites that are mostly just remote Javascript-powered shells for generating requests to JSON endpoints (and, via remote Javascript, doing nasty stuff with advertising and tracking). As someone who prefers text-only, more and more, albeit slowly, the "real" web is becoming JSON endpoints, not the HTML pages with links to remote Javascripts and no useful data.

To me, "poorly documented APIs" sounds like a euphemism for people hiding public data behind a gauntlet of opinionated and manipulative design geared toward optimal advertising and tracking. They are not focused on providing simple, direct access to public data. They are more focused on dictating "UI" and "UX" and forcing acceptance of advertising and tracking.

Overall though, I think the rise of endpoints is good. Hopefully, in time, the Javascript and other remote, unwanted cruft will fall away. Maybe someday the web will become a more efficient means of sharing public data. In the meantime, we have Gemini and other alternatives.

1. Or a better text-only format.
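
As a sketch of what "let users choose their own clients" could look like: fetch the JSON endpoint and render it however you want, e.g. as bare-bones, script-free HTML. The endpoint and field names below are hypothetical.

    package main

    import (
        "encoding/json"
        "html/template"
        "net/http"
        "os"
    )

    // Item's fields are made up; they just need to match whatever the endpoint returns.
    type Item struct {
        Title string `json:"title"`
        URL   string `json:"url"`
    }

    var page = template.Must(template.New("page").Parse(
        `<ul>{{range .}}<li><a href="{{.URL}}">{{.Title}}</a></li>{{end}}</ul>` + "\n"))

    func main() {
        // Hypothetical JSON endpoint; any client, text-only or otherwise, could consume it.
        resp, err := http.Get("https://example.com/api/items")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var items []Item
        if err := json.NewDecoder(resp.Body).Decode(&items); err != nil {
            panic(err)
        }

        // Render plain HTML with no scripts attached -- or swap in any other view you prefer.
        if err := page.Execute(os.Stdout, items); err != nil {
            panic(err)
        }
    }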


Seems very difficult because "attention stealing" companies extract most of their value from doing nasty additional stuff in javascript or through cookies.

Yes, it would be great if alternatives like Gemini grew. I would be willing to lose a lot of "features" of web browsers if I could be sure about what exactly I am doing when I connect to a server, what exact exchange of information is going on, and which parties are involved.


Personally, I share your preference. But we are an incredibly small minority, and to build things that actually help a lot of people (and thus earn a lot of money) we have to have empathy for ordinary users.


The important idea here, IMO, is that users are not effectively restricted to choosing from amongst a small handful of clients issued by the same companies that sell online advertising services or otherwise profit from user data mining, online ads and tracking. IMO, it is this effective control over the client that is the root cause of so many of the complaints HN commenters have about the www today and the direction it is heading, namely because such control over the client itself has become a necessity in the user-hostile strategies of these corporations. They are now relying on that control.


One idea I have is <link rel="data"> (or <a rel="data">), if your web page is a data-oriented web app. You can include it in your own web pages if it is appropriate to do so. (This way, you can have a default view but allow the user to choose their own instead if they wish to do so. Comment 31228726 mentions they have to have empathy to ordinary users; <link rel="data"> will allow it to work both ways, I think.) (It could use any format, whether JSON, CSV, SQLite, etc; you can indicate this as the type attribute. And, you can have more than one, as different types.)

For API documentation (or data format documentation), you can have <link rel="documentation"> (or <a rel="documentation">) too; if you limit it to a simpler subset of HTML or to plain text, then it can be easily read (or printed out, if wanted) in simpler viewer programs too.

I have other ideas too, such as <html application="...">; you can use this to identify instances of a web app that may be on multiple servers; this can be used for user CSS and for other purposes (e.g. substituting native code if the client has it in order to improve efficiency of computation, or using software that the end user has installed (or has written and/or customized) specifically for this use).

And then there is the "feature" attribute, to indicate that it implements a feature that the end user might already have in which case it can be ignored (I had written a list of some of these, e.g. feature="usercss" and feature="uploadname").

Another idea I have is a Interpreter header, but that will require that web browsers implement it (unlike the other three things above, which are backward compatible). This can allow to serve any file format and it can be displayed in the modern kind of web browsers while simpler ones could also display them in their own way; it can also reduce the number of requests required for some things, improve caching in some ways, allow polyfilling video/audio codecs in a cleaner way, etc. (It could also allow a user to install command-line programs to convert a file if you are using WebAssembly with only a small subset of the Interpreter API.)

There also need to be better web browsers written. Half of the things will need to be changed (often, implemented differently than the specification says; e.g. do not allow enumerating devices, but if audio input is requested, the browser might ask the user to enter a system command to produce the audio, which might be one that converts an audio file to stdout, or that converts audio from a microphone to stdout, etc). A better way is one meant only for advanced users, assuming that you know what you are doing, with better interoperability so the user can specify UNIX pipes, etc., without trying to stop you from doing things; another thing that would be helpful is the ARIA view mode. Some things were also mentioned by the FSF, such as that the end user should be allowed to substitute their own JavaScript code in a web page; I think that is also a good and necessary idea. The web developer console is already a good end-user-oriented feature, but by itself it is not good enough. Having other web browsers of course does not mean that all users will have to use them; we will need to have the choices and controls! (The ability for the end user to write native extensions in C (and run them as native code) would also be helpful; WebExtensions are too limited, and might not be as efficient as a C program.)

It might seem to require writing a new browser in C, but it could be possible to modify existing ones (NetSurf seems the best candidate at this time, though it might not be good enough); that would require changing half of the program, and you would need to maintain it and pay attention to upstream changes to decide whether to include them and to get them to work.

Furthermore, other people can provide the API endpoints for many things even if the original sites do not have them, and keep them not too complicated, so that you can then use your own client. When there are a lot of them, you may be able to access them using SQLite virtual table extensions (even accessing two sources at once in a joined query, if wanted), Gemini, Telnet, Gopher, JSON, CSV, NNTP, IRC, etc. Programs that support such features could be made, too.


It was always like that though. It’s just now “API” means “an HTTP endpoint serving JSON”.

And before it was SOAP.

And before it was MS COM.

And before it was some mainframe app in COBOL.


Don't forget CORBA. Actually on second thought, please do.


>And before it was some mainframe app in COBOL.

Yeah, but at least you could speak with that guy over the phone... for the next 3 days, and often even get the source code, aka the tapes ;)


As a notable side point, much actual mechanical-related (and other) engineering is integrating poorly documented physical parts. That software engineering is finally getting to the point where you open up the part catalog and pick your parts is a sign of maturity in the industry.


95%+ of SaaS engineering seems to be some form of that. I've been sick of it since day one but it's where the big bucks seem to be.


I suspect this is true, but it's worth noting that often we're in control of poorly documented APIs A & B in the first place, and could probably have avoided much of the pain of later integrating them.


Good APIs are incredibly hard to build because you're always fighting upstream against the feature farm.


I remember doing my first electronics-related projects using discrete transistors and other elements. Now it is mostly microcontrollers, integrated circuits and blocks. Which is a very good thing: the thing I can do now in a matter of days would take months or years to implement using the old approach.

Poor documentation is bad of course, but the fact that one can assemble a sizeable project by integrating various existing APIs/libs holds huge value from a productivity standpoint.


Silly tweet. Most time is spent building UIs and business logic.


Ask yourself: why do you need programmers to build UIs? Poorly documented APIs. There is no reason you'd need programmers to build UIs otherwise. Either the API the UI is working against is so poorly documented that only a programmer can understand what to do with it, or the UI tooling itself is so bad that only a programmer can understand it.


Austen runs a coding bootcamp. I'd bet this is true for his students.


Aren't UIs, at least on the web, essentially integrations against APIs? Which are poorly documented...


I'm currently working on a communications product that spans a lot of on-device APIs to use them in interesting ways and in novel combinations.

Part of the challenge is that complete documentation of behaviors is very difficult. Some behaviors depend on OEM and MNO choices. Some depend on drivers and hardware. They all depend on user choices for enabling various radios on a device. All of these choices can shift over time, and not just on OS updates. Even running hardware/firmware changes on a particular model can change behavior. Many of these things will appear in different combinations on new devices and different user environments.

Internet APIs have fewer excuses, so challenges are more annoying in some cases. But it's always challenging to build apps that are resilient to loosely specified behavior. Such is life.


And IME 50% of that is dealing with generic error messages, or worse completely silent errors.


I would argue that we shouldn't be writing APIs at all and should be writing constraint- and goal-oriented programs which can be remotely tested for resource utilization. Basically, we need to stop thinking about reinventing the wheel just because we don't like the wheel we're given, and figure out how to adapt it to our uses in a manner that doesn't require explicit human-written code. I know this is all just wishful thinking on my part, but it would be nice to have programs just be a set of constraints and goals, as I mentioned earlier, and then have the compiler and runtime figure out the details of execution on their own.


There are two ways to overcome this misfortune. One, work on product features. Yes, we'd still integrate poorly documented APIs, but we'd innovate on solving new and tough problems, and our technical challenges come from managing complexity and building new concepts. Two, go deep and focus on productivity. Yes, we'd still integrate poorly documented APIs, but along the way we identify opportunities to build something that is meaningful and makes our lives better. Our technical challenges come from getting unusual insights to optimize something seemingly impossible.


Skimming through the comments, since when does API equate with HTTP API? Aren't we always integrating some API? Standard library anyone? What is poorly documented anyway? Put it another way, what is NOT an API?


I guess the interesting question is the ratio of “glue” and layout code to business logic and algorithms. There’s definitely too much glue (and somewhat relatedly, too much branding and unique “design”) in software, IMO.


Isn’t that what software is? Framework APIs, HTTP APIs, Hardware APIs, SDK APIs. Is the tweet author suggesting crud or web APIs only then? I think it’s always been a meme that most software work is glorified CRUD.


I read that comment as: Nobody is hiring technical writers to do documentation. Side question: Are companies even hiring technical writers, or is it something most budgets don’t account for?


Most companies - no. There is very little tech writing and of that, it's of very poor quality. It's now normal for API documentation to be generated from the definitions, with a few explanatory comments thrown in (eg. see the Android API doco.)

I've even seen this regarded as a good thing! The software makers don't "waste" time, and they've created an "opportunity" for 3rd parties - an opportunity for someone to write a "Dummies guide to XXX".


>Are companies even hiring technical writers?

In more traditional tech businesses, where customers expect proper documentation, YES.


The networking industry hires a ton of tech writers to write manuals. Admittedly the quality has dropped like a rock over the last 10 years.


Programming is an onion, it has many layers of abstraction layered on top of each other.

You can work at any level you want but it helps to understand how the onion peels.

In many senses there are programs that write programs and in that regard all of our jobs are automatable.

The higher levels are closer to "no codification" than the lower ones at this time. Remember when we had web developers writing HTML? Now it is JavaScript.

As you learn to stitch APIs together and use frameworks, take the time to peel the onion and learn how it all works.


Google SmartLock/Onetap was probably my worst ever experience. Facebook login, while documented, was also terrible due to all the errors and bugs.


True or untrue in the same way as saying 90% of art is not new. Or many things.

Almost everything humans do isn't actually novel. It's remixing things we've done before, overwhelmingly in almost-but-not-quite-exactly the same way as many others.

tbh I think that's fine. But it can be a bit depressing when you first realize it about something you care about.


Hire API technical writers. Involve them in API design processes. A developer portal should not be an automated afterthought.


I work at a business that's in the business of connecting APIs. I'd say the bulk of the work is the tooling around these in a generic enough way that it makes sense for our processing. IMO this is a fun thing to work on.

Some dev frameworks on the other hand that turn 4 lines of CORS directives into 5 lines of library code + 14 dependencies annoy me.


Documentation can be like going to college and university and what I mean by that is it can blinker your thought processes.

The best way to learn is to see what errors api's & language functions throw at you.

Then you know where it won't work!

The people who write docs typically also don't code, but those that do code and document tend to produce the best docs, though it's not a given.


IMO this whole thread hinges on how one chooses to define an 'API.' You can certainly make it true if you define it broadly enough.

0. Drag-and-drop / visual programming tools as APIs

1. Libraries/frameworks as APIs

2. Standard libraries / built-in abstractions of a programming language as APIs

3. Programming languages themselves as APIs

4. IDEs/editors as human-machine APIs

etc.


100% this. It's entirely impedance-matching rather than building anything worthwhile for 90% of the day.


When I saw this I thought about what Steve Jobs said once, something like: "We think about the problem and work out the tech for it backward". If the best solution is to use some APIs, then go for it. The focus should be on the problem. We like the tech we are using, but these are just tools like any others.


... to 0% documented enterprise applications. But, you know, it's "clean code", right?


http://langsec.org/

Interfaces are more important than implementations.

That the "implementation" is just duct-taping other interfaces together proves the point!


It’s the other way around. Implementation is and always has been everything. Performance, security, reliability, efficiency - these are all implementation aspects where the same interface can cover a world ranging from total crap to excellence.

Langsec is something I strongly agree with btw, but it’s not interface-supremacy and the world doesn’t look like that.


I'm not disputing that implementations are more lines of code than interfaces.

What I am saying instead is that interfaces and composition need more attention. Shitty APIs are a lack of attention to interfaces. Shitty marshalling ultimately stems from a lack of attention to compositionality, and the cost is that people are incentivized to commit anti-langsec sins.

> these are all implementation aspects where the same interface can cover a world ranging from total crap to excellence.

This is narrowly true, but completely misleading. You have to focus on incentivization and emergent behavior.

Good interfaces can be used safely by lazy people. Bad interfaces will never be used well, even by the most careful people.

Every implementation rests on the interfaces below it, and so quality (or its absence) percolates upwards.
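A toy sketch of that asymmetry, with hypothetical names (TypeScript only for illustration):

    // Bad interface: hands every caller a raw string and hopes they all
    // parse and validate it correctly. Lazy callers won't, and that's
    // where the anti-langsec sins (ad-hoc parsers everywhere) come from.
    function getExpiryRaw(): string {
      return "2022-05-01T00:00:00Z";
    }

    // Good interface: parse and validate once, at the boundary, and hand
    // back a value that can't be misused. The lazy caller is safe by default.
    function getExpiry(): Date {
      const parsed = new Date(getExpiryRaw());
      if (Number.isNaN(parsed.getTime())) {
        throw new Error("upstream returned an invalid expiry timestamp");
      }
      return parsed;
    }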


Almost perfect, just need to remove that extraneous "today".


Microsoft's poor Azure documentation is keeping me in business.


Sure, that we all know.

Go ahead and answer why and what you're doing about it.


...And Austen Allred's Lambda School (or however they rebranded to escape the terrible publicity) will 100% teach you how to do that and nothing else, for $30,000.


Today? You don’t think that people writing kernels back in the day were dealing with CPU “APIs” too? Etc etc.


A big reason for the success of the nocode tool space, imo -- there's no standard for full-stack plugins unless you're using django/rails in like, 2013, with no frontend JS and no external services

API doesn't mean 'drop-in compatibility', whereas nocode is designed to integrate with everything in your stack with relatively low lift by the buyer


Node.js recreated DLL hell in the browser.


Responding only to the title: not true in my case. In my case, 100% of software "engineering" is this.


70% integrating APIs thanks to SaaS.

30% fiddling with yaml files thanks to modern devops tools.

The remaining 0% is real software engineering.


DRY has a cost. And that cost is things like this: rigid frameworks, rigid libraries, etc.


Currently working on a "the code is the documentation" project and it's a fucking slog.


This is why I still use LDAP. And I spend my days at work happy.


Well, it certainly will be if you hire his graduates.


I'm part of the remaining 10 percent.


No true Scotsman.


I agree 90%.


I don't resonate with this at all, and think this attitude is toxic. To take the engineering analogy: this is like reducing civil engineering to "90% of building a bridge is making shapes out of cement of varying quality". How you use cement of varying quality to build a bridge up to spec, that's the engineering part, and it's hard. Cement will always be of varying quality, because it's not unobtainium that you can mold into any bridge you want without effort.

Nowadays, I am in the business of writing CRUD e-commerce applications, using PHP, wordpress and jquery of all things. In the past 20 years I've worked on anything from bare-metal embedded to building search engines and doing "decent scale" data engineering. I'm basically dividing software engineering into two broad categories.

One is what I call "copying bytes". You get some business-relevant input, and as a software engineer, your job is to copy those input bytes to their business-relevant destinations. For example, you receive an order from the customer (line items + shipping address), and you have to copy those line items and shipping address to the fulfillment department, recording a few log entries along the way. You might calculate some tax or apply a coupon along the way, slightly modifying those bytes, and forward a few tracking metrics here and there, but roughly, you are just wrapping the bytes making up the order in fancy ways.

APIs for "copying bytes" are one of the following (sketched in code after the list):

* "if you call readBytes, I will return someBytes",

* "if you call writeBytes, I will store some bytes somewhere"

* "please tell me when someone calls writes some bytes"

* "hey, someone just wrote some bytes"

The other is what I call "inventing bytes", or "transforming bytes in really complicated ways". This is the realm of funky algorithms, data science, mathematics, complicated stuff. This is taking an input 3D model and outputting some CAM instructions. This is taking a search string and computing features to quickly locate relevant information in indexes. The coding part of this style of software engineering is the tiniest part of the actual undertaking. Often, the hard thinking is done by people at universities or research labs. One could wonder if they're software engineers at all, and not "3D model transformation engineers", or "search index data structure engineers", as often their code needs to be rewritten by someone who knows how to do it pragmatically (say, port from the ugliest hacky matlab to java). The APIs here are weird, and frankly I don't really ever deal with them, so can't say too much as to their quality.

Chances are, you fall into the "copying bytes" category. Which bytes to copy, when, where, what log entries to add along the way, how to scale the copying, how to ensure the copied bytes are transactionally aligned with other copied bytes, that stuff is complicated. But it's turtles all the way down, one copyBytes API upon another, until you hit the hardware, and boy are the hardware APIs to copy bytes utter nonsense.

You might think it's inane. You learned how to balance trees, how to schedule processes, how to do some differential curve geometry, how to write compilers and how to design database engines, how to implement distributed consensus, so why don't you get to write all those beautiful algorithms as part of your day job? Thing is, clever people have already done so, and often they made a business out of it so that you can pay them for their clever thinking, hoping that they were indeed more clever than you, and save yourself some time.

If you are not in the business of thinking up clever ideas or making pretty things like video games, but in the business of making money, you will probably just write code to copy bytes, using code from other people that also copies bytes, until those bytes finally reach the rare unicorn that makes better bytes out of them. (Here's a secret: if you do adtech or e-commerce or banking or embedded software, that actually doesn't happen. The business value of your software is to copy bytes to the right destination at the right time.)

People want to transfer data for all kinds of different reasons, in different ways, with different constraints. That means that even the best-written API documentation will have to be generic enough to cover what was intended, and, with some luck, what people are actually using it for. Most probably, it won't fit your use case to a T, and you call it "poorly documented" when really it means that people have different problems than yours. I actually think API documentation is insanely better than it ever was. All the APIs I had to deal with lately come with copypasta for 15 languages, a snappy searchable index, a plethora of tutorials and articles and videos.

But here's why I find my job copying bytes with subpar APIs singularly fulfilling:

Copying bytes is not easy, especially in today's distributed world. Doing it well and reliably is a tough problem. Figuring out how to do it reliably, securely and at scale requires a lot of thought. Figuring out which data to transfer to whom needs business understanding, knowledge of people and systems thinking. It requires planning a project, managing teams, dealing with changing requirements. It requires planning for maintenance, keeping a codebase healthy, mentoring other engineers, aligning with stakeholders, dealing with legacy code.

The APIs I ultimately use are the least of my problems. Some are nice, some are terrible, some are really problematic, but they don't have much to do with the "engineering" part of my job. My engineering is drawing pictures with boxes and arrows. The boxes are other people's systems, the arrows are other people's APIs. What arrows I draw and where they point to and what the little bubbles along them contain, and which boxes I choose to use, that's what I do as an engineer.

As a concrete example of a hard problem, fully implemented by gluing together shoddy APIs: fulfilling an order on an e-commerce store.

* seamlessly show payment process and checkout status to the customer using a slick UX in their browser (APIs: some shoddy REST, some janky web socket or whatever, some hacked react and some dubious CSS. Ultimately just shoveling the status code from the payment processor to the user's browser, potentially adding some curly braces and escaping a string or two.)

* log the transaction and provide metrics and analytics to business and ops (some inane pixel data, some putrid tracking JS, some shoddy SQL, maybe some janky JSON going to a SQS queue)

* process the payment information and order information and display it in the customer's account section / backend CRM / fedex (I'm not even going to list how subpar the APIs are here)

* transform the shipping address into a proper shipping label with a barcode (have fun with zebra and ZPL; see the sketch after this list)

* dispatch the line items to the required fulfillment centers, provide picking information to the warehouse workers, with tracking, metrics, inventory handling, ...
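To make the shipping-label step concrete, a hedged sketch of what "have fun with ZPL" looks like. The field positions and label layout below are invented; real templates depend on the printer and label stock:

    // Build a minimal ZPL label from a shipping address and tracking id.
    // ^XA/^XZ open and close the label, ^FO positions a field,
    // ^A0N selects a scalable font, ^FD is the field data, ^FS ends the field,
    // ^BCN prints a Code 128 barcode.
    function shippingLabelZpl(name: string, address: string, trackingId: string): string {
      return [
        "^XA",
        `^FO50,50^A0N,30,30^FD${name}^FS`,
        `^FO50,100^A0N,30,30^FD${address}^FS`,
        `^FO50,180^BCN,100,Y,N,N^FD${trackingId}^FS`,
        "^XZ",
      ].join("\n");
    }

Everything upstream of that string is, again, copying bytes from the order into the right slots.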

Designing something like this so that it works correctly, efficiently, with visibility, reliability, scalability, and fulfills business needs, and sparks joy for every human involved. That's engineering.


Youtube, thanks for all the fish


That’s like your choice man.

Use services with well-worn paths. Or build them yourself. Make more sustainable architectural decisions and you should be able to get that 90% (lol) number much much lower.


That's a pretty big assumption that the developer gets to choose their tooling and APIs. A lot of times you don't have the choice and just have to use what the other 99% of the codebase is using, even if it's terribly documented.


I'll try to do that the next time I'm going to create a banking product combining the services of the reserve bank, the tax authority and the stock exchange in my country of 4 million people!

Well, I suppose it's a bit of a choice to work on that kind of product, but an awful lot of economic activity stems from that kind of product.



