Hacker News new | past | comments | ask | show | jobs | submit | w3news's comments login

When you build an API, please start with the OpenAPI specification before you write any code for your API. It can be iterative, but for every part, start with the OpenAPI spec and think about what you want from the API: what you want to send, and what you want to receive.

It is like the TDD approach: design before you build.

Writing or generating tests after you build the code is the same thing: it is guessing what the code should do. The OpenAPI specification and the tests should tell you what it should do, not the code.

If you have the specification, everyone (including AI) can write the code to make it work. But the specification is about what you think it should do. Those are the questions and requirements that you have about the system.
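For illustration, a minimal spec-first fragment for a single endpoint might look something like this (path and schema names are made up):

```yaml
openapi: "3.0.3"
info:
  title: People API
  version: "1.0.0"
paths:
  /people/{personId}:
    get:
      parameters:
        - name: personId
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested person
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Person"
components:
  schemas:
    Person:
      type: object
      properties:
        id:
          type: integer
        name:
          type: string
```

Everything here is a design decision you can make, and argue about, before a single line of handler code exists.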


I get the feeling you may not have gone 0-1 on an API before. In general, you have 1 consumer when you're starting off, and if you're lucky your API gathers more consumers over time.

In that initial implementation period, it's more time-consuming to have to update a spec nobody uses. Maintaining specs separately from your actual code is also a great way to get into situations where your map != your territory.

I'd instead ask: support and use API frameworks that allow you to automatically generate OpenAPI specs, or make a lot of noise to get frameworks that don't support generating specs to support that feature. Don't try to maintain OpenAPI specs without automation :)


How is adding 10-20 lines, depending on how many structures you're creating, and then re-running a generation tool (or simply just running a build command again depending on your build configuration) time consuming? I've written OpenAPI-first services both at Big Tech for services handling crazy amounts of RPS and at tiny seed startups where we release the API and literally nobody uses it but our app. Sure I've run up against the occasional sharp edge/incompatibility with some form of nested structure and the generator we used but it was usually a minor diversion and represented 20-30 min of wasted time for the occasional badly-behaving endpoint.

I'm even writing a side project now where I'm defining the API using OpenAPI and then running a generator for the echo Go framework to generate the actual API endpoints. It takes just a few minutes to create a new API.


They are advocating for exactly that: "I'd instead ask: support and use API frameworks that allow you to automatically generate OpenAPI specs"


I don’t agree with you. Write a spec, use generators to generate your servers and clients, and use those generated objects yourself.

The point is twofold: you test your API immediately AND you get a ton of boilerplate generated.

So many products out there just feel like a bunch of separate things with a spec slapped on top. Sometimes the spec doesn’t make sense. For example, the same property across different endpoints having a different type.

Save yourself time and do it right from the get go.

> Maintaining specs separately from your actual code is also a great way to get into situations where your map != your territory.

So yeah, write your spec once and generate all servers and clients from it…


I agree with this as well!

OpenAPI spec seems intended to be consumed, not written. It's a great way to convey what your API does, but is pretty awful to write from scratch.

I do wish there was a simpler language to write in... JSON-based as well that would allow this approach of writing the spec first. But alas, there is not, and I have looked a loooot. If anyone has suggestions for other spec languages I'd love to learn!



oh thanks a lot for sharing - I was looking for something just like this! Something like this + hurl is the perfect combination to sketch out APIs imo


Simpler than YAML?


OpenAPI specs can save weeks even on small projects, when you need to autogenerate multiple clients in different languages after the API part is ready btw


I don't want to write OpenApi. Yaml is a terrible programming language, and keeping it in sync with actual code is always a nightmare.

I've been using a tool to generate OpenApi from code, and am pretty happy with that workflow. Even if writing the API before logic, I'd much rather write the types and endpoints in a real programming language, and just have a `todo` in the body.

You can still write API-driven code without literally writing OpenApi first.


You can write an OpenAPI spec in JSON. You can use Jsonnet to generate your spec from whatever input you need.


JSON is a different kind of yuck to have to author by hand, especially in the volume an api spec tends to be.


I don't like writing structured formats by hand either - at some point, you either need to, as they say in France, split or get off your seat

Either don't write it by hand, i.e. use a generator for the structured format, as the comments advocate and the article is about.

Or, just say you'll never have a spec.


Yes, so use Jsonnet or generate it from some intermediate representation using some alternative method. What’s the problem?


Why though when I can just generate it from my actual code and not have to maintain two copies of my api spec?


Jsonnet introduces another (flawed) language?


You are correct about YAML, but OpenAPI is not YAML -- it just commonly uses it for the textual representation. As others mentioned, JSON is an alternative, although it doesn't make writing the spec directly much easier.

Sadly, there is a distinct lack of tools to make spec-first development easier. At the moment, Stoplight [0] is the only game in town as a high quality schema editor, but it requires payment for any more significant usage.

[0] https://stoplight.io/


Absolutely, and yes YAML is trash.


Which tool?


I'm thinking using this tool, and having your test suite run through it might work?

At least for people comfortable with doing test driven development.

Write your requirements for your API-driven code as tests first, then document those APIs by running the tests through this tool.


It's going to be very language/framework dependent.

I'm using aide for a Rust/Axum server: https://github.com/tamasfe/aide


100% agree with you... taking the time to go design-first greatly improves the quality of the final API...

But as some comments below point out, an OpenAPI spec is a pain to create manually, which is why TypeSpec from Microsoft is such a great tool. It lets you focus on the important bits of creating a solid API (model, consistency, best practices) in an easy-to-use DSL that spits out a fully documented OpenAPI spec to build against! See https://typespec.io/


What’s wrong with designing an API by writing its code? Code itself is a design tool (and usually any decent programming language is a better design tool than YAML)


As someone who documents APIs: it's easy to tell which APIs were designed with intention and which ones were designed on the fly. In part because it's much, much easier to document the former :)


Unfortunately OpenAPI specs suck to write manually.

Generating OpenAPI spec from the server code has always felt significantly better for me.


I completely agree as a general design principle, but I still think there’s a place for the above tool.

Example: I used to work at a place that had a massive PHP monolith, developed by hundreds of devs over the course of a decade, and it was the worst pile of hacky spaghetti code I’ve ever seen. Unsurprisingly, it had no API spec. We were later doing tonnes of work to clean it up, which included plans to add an API spec, and switch to a spec-first design process (which we were already doing in services split from the monolith), but there was a massive existing surface area to spec out first. A tool like this would’ve been useful to get a first draft of the API spec up and running quickly for this huge legacy backend.


The API library I wrote for my last couple of projects required the developer to fill in the OpenAPI spec specifics, and that spec was part of the API itself, making it difficult to add something to the API that wasn't also in the spec.

Incoming request params became validated and cast object properties. Outgoing response params were validated and cast according to spec.

In the end I think it worked really well, and loved not needing to maintain the spec separately. The annoying bit was adjusting the library when the spec changed.

And some gnarly bits of the spec that weren't easy to implement logically.

At any rate, it also made for a similar experience of considering the client experience while writing/maintaining the api.


I prefer going the other direction in practice, autogenerating the spec from the code e.g. with drf-spectacular for Django.


Waste of time imo if you use a framework like fastapi which generates the spec for you


Exactly this. I've been a Python guy, which is apparently not the main language used by most API developers, or what? Is there nothing like FastAPI in JS land? I do start my APIs by writing the OpenAPI spec, only it's written in Pydantic inside FastAPI, and it turns out this also creates the actual API lol.


As a curiosity, how do you feel about languages/frameworks where APIs can be pretty self-documenting? For example, Java/JAX-RS creates pretty self-documenting APIs:

    @Path("/people")
    public class PeopleApi {
        @Path("{personId}")
        @GET
        public Person getPerson(@PathParam("personId") int personId) {
            return db.getPerson(personId);
        }
    }
It's easy to generate a spec for a JAX-RS class because it has the paths, parameters, types, etc. right there. There's a GET at /people/{personId} which returns a Person and takes a path parameter personId which is an integer.

If we're talking about a Go handler which doesn't have that information easily accessible, I understand wanting to start with a spec:

    func GetPerson(w http.ResponseWriter, r *http.Request) {
        personId, _ := strconv.Atoi(strings.TrimPrefix(r.URL.Path, "/people/"))
        person := db.GetPerson(personId)
        body, _ := json.Marshal(person)
        w.Write(body)
    }

    func GetPerson(c echo.Context) error { // or with something like Echo/Gin
        id, _ := strconv.Atoi(c.Param("id"))
        person := db.GetPerson(id)
        return c.JSON(http.StatusOK, person)
    }
In Go's case, there's nothing which can tell me what the method takes as input without being able to reason about the whole method. With JAX-RS, it's easy to reflect on the method signature and see what it takes as input and what it gives back, but that's not available in Go (with the Go tools that most people are using).

This isn't meant as a Go/Java debate, but more a question of whether some languages/frameworks basically already give you the spec you need to the point where you can easily generate an OpenAPI spec from the method definition. Part of that is that the language has types and part of it is the way JAX-RS does things such that things you're grabbing from the request become method parameters before the method is called rather than the method just taking a request object.

JAX-RS makes you define what you want to send and what you want to receive in the method signature. I totally agree that people should start with thinking about what they want from an API, what to send, and what to receive. But is starting with OpenAPI something that would be making up for languages/frameworks that don't do development in that way naturally?
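As an aside, the "reflect on the method signature" mechanism is easy to demonstrate in any language with runtime type introspection; here's a sketch in Python (the handler name and types are hypothetical), showing that a spec generator never has to look at the function body:

```python
from typing import get_type_hints

# Hypothetical handler: as with a JAX-RS method, the names and types
# a spec generator needs all live in the signature.
def get_person(person_id: int) -> dict:
    """Fetch a person by id (stub)."""
    return {"id": person_id}

hints = get_type_hints(get_person)
# A framework could walk this mapping to emit OpenAPI parameters
# and response schemas without parsing the function body.
params = {name: t.__name__ for name, t in hints.items() if name != "return"}
```

This is essentially what JAX-RS-style frameworks do via reflection, and what the plain-handler style in Go can't offer without struct tags or code generation.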

----------

Just to show I'm not picking on Go, I'm pretty sure one could create a Go framework more like this, I just haven't seen it:

    type GetPersonRequest struct {
        Request  `path:"/people/{personId}"`
        PersonId int `param:"path"`
    }
    func GetPerson(personRequest GetPersonRequest) Person {
        return db.GetPerson(personRequest.PersonId)
    }
I think you'd have to have the request object because Go can annotate struct fields (with struct tags), but can't annotate method parameters or functions (but I could be wrong). The point is that most languages/frameworks don't have the spec information in the code in an easy way to reflect on like JAX-RS, ASP.NET APIs, and some others do.


I absolutely hate the approach of scattering routing instructions everywhere via annotations. Nothing beats a router.go file with all the endpoints declared in the same place. Routing annotations are a bad idea that caught on just because it looks clever.

Looking for the handler for `GET /foo/{fooID}/bar` is terrible in a codebase using annotations.


At work they force me to use NestJS. Want to make a new GET endpoint? Find the controller class, add a method, add a get decorator, add an authentication decorator, add a param decorator, add openapi decorators, and if you are feeling helpful, add openapi decorators to every property of every object you take in or return.

I hate decorators so much, just let me use regular data as code.


For the happy path, the Java code works great, but a good OpenAPI spec also includes the following:

- examples, they are a pain to write in Java annotations.

- multiple responses, ok, invalid id, not found, etc.

- good descriptions, you can write descriptions in annotations (particularly post Java 14) but they are overly verbose.

- validations, you can use bean validation, but if you implement the logic in code it's not easy to add that to the generated spec.

See for example this from springfox https://github.com/springfox/springfox/blob/master/springfox...

It's overly verbose and the generated OpenAPI spec is not very good.


You don't need annotations for descriptions, they get picked up from javadoc-style comments which you should have anyway. Same with asp.net.


You are right, for Spring Boot, the relatively new springdoc supports javadoc[1] as descriptions, which is better than the annotation.

[1] https://springdoc.org/#javadoc-support


Your example doesn't look any worse than an OpenAPI YAML spec, given how easily/frequently you can reach 10+ indentation levels for a trivial spec.

You might be able to add descriptions easily, but expressing types in YAML is much more verbose than in a decently typed language.


swaggest allows you to define your inputs and outputs, and generate docs from them.


I like the idea: think about what you really need, keep it simple.

I also try to move from centralized computing to decentralized computing more and more. That can mean letting the client do more of the computing. The same goes for storage: does it need to be stored centrally, or can the user store it?

Many times people think we should have a central system with all the truth. But let it live with the users.

It also makes the central part simple, and because the central part is simple, it is easier to scale.


So true. We build large complex frameworks, abstractions over abstractions, to try to make things easy to build and maintain. But I think the problem is that many developers using these frameworks don't even know the JavaScript basics. Of course there are smart people at these large companies, but they try to make things easy instead of teaching people the basics. We over-engineer web applications and create too many layers that hide the actual language.

Twenty years ago, every web developer could learn to build websites by just checking the source code. Now you see the minified JavaScript after a build and nobody understands how it works; even the developers who built the web application don't recognize the code after the build.

I love JavaScript, but only pure JavaScript, and yes, with all its quirks. Frameworks don't protect you from the quirks; you have to know them so you don't create more, and with all the abstraction layers you don't even know what you are really building. Keep it simple, learn JavaScript itself instead of frameworks, and you'll downsize the JavaScript codebase a lot.


Pretty sure the situation wouldn't change if it wasn't minified.

Recently I had to add a couple of mechanics into sd-web-ui, but found out that the "lobe theme" I used was an insufferable pile of intermingled React nonsense. Returned to sd-web-ui default look that is written in absolutely straightforward js and patched it to my needs easily in half an hour.

This is a perfect example based on a medium-size medium-complexity app. Most sites in TFA are less complex. The delusions that frontend guys are preaching are on a different level than everything else in development.


Too bad that we have to hack our car to customize it. We can reinstall computers very easily and choose the OS we like, but we cannot do the same with our car. On old cars you could modify everything: grab your tools and do what you want. Modern cars are too closed; you depend on what the factory allows you to do. Modern cars are also too complex, with too many gadgets. Please keep it simple: it is a car, not an entertainment device.


I think it's good to separate the drivetrain from the infotainment in these discussions. Hacking is a different matter in a Tesla, where software heavily impacts driving; the Tesla deliberately doesn't deliver all the power, because it's too much. There have been people who have gotten service-mode access and disabled traction control etc., and many wrecks resulted from spinouts.

On the other hand, the infotainment can be rebooted even while driving. The drivetrain is much more protected and controlled, for a reason.


I love to build extensions. It is such a nice thing that they made website source easy to read and manipulate for your own usage, and you can even share your modifications by building an extension. It is just like your newspaper: you can write on it, cut pieces out, etc. You can do what you want with the site, for yourself. The newspaper is designed how they like it, but you can also grab your scissors and pen and change it for yourself.


There is a standard for browser extensions. I also built browser extensions before the standard. So you can now build a browser extension that works in Chrome, Firefox, Edge, and Safari. But indeed, you can also use some APIs specific to a single browser. That is really bad, like building a site for only a single browser. The base should be compatible. And because you can always see the extension source code, you can modify a version for yourself that works well in your browser. (And you can share it again, of course.)


Good old times, when the web was simple, and more decentralized


ESM is a defined default that you find in the ECMA specifications. That is why everybody should migrate. Node.js is also moving to ESM as the default. If some tools don't do it, don't use them anymore. That is why I don't use Jest anymore; Node.js also has a good test runner now. (Useful for packages and backend systems; I think for frontend systems with e.g. React there are better test suites to help with frontend-specific stuff like the DOM.)


That is why I use native functions: no TypeScript syntax, no Jest. For testing I just use the native test runner, and that works great with ESM. Jest indeed still doesn't have good ESM support; you can do some Babel tricks, but that makes the process too complex. Best is to find an alternative, like the native test runner. TypeScript also tried some things that were not stable at ECMA yet. It was too early with import/export and didn't use the native way. CommonJS is also still TypeScript's default compile target, but Node is moving to ESM as the default. I hope TypeScript will use more native functions that are already available instead of its own way.

For me, TypeScript syntax and Jest are a no-go.

TypeScript is useful to check types, but I just write JavaScript with JSDoc and check it with ESLint in TypeScript mode.


Why not use Node the way it is designed? Then ESM works great.

For type checking during development, you can do it with ESLint and JSDoc in TypeScript mode. You get the same type checking you have in .ts files. You can even import types from TypeScript files, like .d.ts.

Best of both worlds: no transformation of the code, and during development you have some help from TypeScript.

