Problems with Swagger (novatec-gmbh.de)
192 points by senand on April 4, 2017 | 158 comments



My problem with Swagger is almost the opposite... it solves the problem (APIs are very complicated to use!) by embracing this complexity with more complexity and more tools. Rather, I believe the solution is a push to just have simpler APIs.

It's crazy to me that it's harder to write a Swagger file than it is to write the API itself. And there's a lot of tooling that benefits from Swagger, but... I've found they all work 80%. Codegen, documentation, etc get 80% of the way there.

(Also, OAS 3 has linking, which is very similar to your hypermedia complaint)


Do people write Swagger files by hand? That sounds awful. I was under the impression everyone used tools like I use in Clojure (compojure-api[0] and ring-swagger[1]).

[0] https://github.com/metosin/compojure-api

[1] https://github.com/metosin/ring-swagger


In my experience (we have a product where people upload Swagger files), most people write it by hand. There are two reasons:

1) Many people use Swagger to design (rather than document), meaning the Swagger comes before the code

2) Most people just prefer to write it out, since the tooling otherwise isn't necessarily great (especially for non-developers)

We ran a poll and "by hand" (either on Swagger Hub/Apiary, or locally) won by a landslide.


How does a non-developer know how to document or design an API?


Perhaps the first pass of the doc is written by a developer but it is later updated by a tech-writer, or you need to produce a translated version.


I do and have, yes. We generally push for design-first specs, which means there's nothing in place to generate the Swagger file yet. If the API is simple, as APIs should be, then it's not that awful, really.
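
For a sense of scale, the skeleton of a minimal definition carries very little information. Here it is as a Python dict for illustration (the names are made up; in practice you'd write the equivalent YAML by hand):

    # A minimal hand-written Swagger 2.0 skeleton, shown as a Python dict;
    # "Pets API" and "/pets" are hypothetical.
    minimal_spec = {
        "swagger": "2.0",
        "info": {"title": "Pets API", "version": "1.0.0"},
        "paths": {
            "/pets": {
                "get": {
                    "responses": {"200": {"description": "A list of pets"}}
                }
            }
        },
    }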


I didn't realize Swagger was meant to be an intermediate format.


It's not necessarily an intermediate format, but it reflects information already encoded in your route definitions. It seems silly to maintain the same information in two different locations.


Hence the reason for tools like swagger-inflector (http://swagger.io/writing-apis-with-the-swagger-inflector/) that let you drive routing directly from the OpenAPI definition.


I've implemented Swagger with several APIs and agree that it's crazy complex and time-consuming to write Swagger files manually.

I believe the best use-case for Swagger is to develop the API (perhaps just defining the routes with payload and response, but without controllers), and then auto-generate the Swagger files. This way the API consumers always have up-to-date documentation, and there is only one place which represents the current state of the API.
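
For example, some Python frameworks derive the spec from the route definitions themselves; a minimal sketch with FastAPI (one framework that does this; the Pet model and /pets route are hypothetical):

    from typing import List, Optional
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Pet(BaseModel):
        name: str
        tag: Optional[str] = None

    @app.get("/pets", response_model=List[Pet])
    def list_pets():
        # The route, payload, and response model are all the spec needs;
        # FastAPI serves the generated document at /openapi.json (UI at /docs).
        return [Pet(name="Rex")]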


Swagger YAML is hard to write? Hmm... relative to the code that services it or consumes it, I find it's pretty trivial.


Exactly. I wrote a project (https://github.com/EverythingMe/vertex) where you define the API with a semi declarative syntax in Go, and it just translates to swagger so you can generate documentation and playgrounds effortlessly. It worked very well.


Are there any good alternatives?

I used REST, which basically forced me to dig deep into online documentation when I wrote a client, but was rather straightforward to implement a server with.

I used SOAP, which forced me to write a giant WSDL file, but was rather nice to work with on the client side.

GraphQL seems to obliterate this problem with its auto discovery mechanisms, but I don't know too much about it.


This is GraphQL's killer feature imo. We describe our schema in easy-to-understand GraphQL Schema Language, use that text file to actually initialize our server, and the included GraphiQL UI automatically provides full documentation and interactive query tool. Our users no longer need nearly as much hand-holding to grok the data model.


I had these same issues. It took me considerably more time and effort to write a Swagger spec and get the UI to actually behave than it did to write my entire API and some simple docs in markdown.

I also tried out the "codegen" and a few other projects that generate boilerplate from a spec (for Python) - the code it generated was frustrating, lengthy, and much more complex than the simple endpoints that I quickly wrote from scratch.


> It took me considerably more time and effort to write a Swagger spec and get the UI to actually behave than it did to write my entire API and some simple docs in markdown.

How long did it take to write API consumer libraries in twenty languages and update every one on API change?

If you don't care about that, then Swagger isn't a good idea for you. But I'd think really hard about whether you should care about it if you think you don't.

> the code it generated was frustrating, lengthy, and much more complex than the simple endpoints that I quickly wrote from scratch

Sure--but you didn't have to write it.


Is whatever value people are getting out of client libraries provided by something generic like http://unirest.io ?

When does it make sense to issue API-specific client libraries for a plain ole RESTful API?


All the time? Most "RESTful" (meaningless term, by the way) APIs have a series of convoluted steps that are annoying and tedious to hand-roll.

Authentication is a good basic example. Different APIs will have different requirements for authentication headers. Successful authentication frequently requires multiple steps, like an initial request to get a session token. Many APIs will also require your request to be signed according to their specifications, and to stuff that signature in a header with a special name.
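
For instance, hand-rolling one signed request ends up looking something like this Python sketch (the signing scheme and header names are made up, but representative of what many APIs demand):

    import hashlib
    import hmac
    import time
    import requests

    # Hypothetical signing scheme: timestamp + HMAC-SHA256 over method,
    # URL, and timestamp, stuffed into specially named headers.
    def signed_get(url, api_key, secret):
        ts = str(int(time.time()))
        msg = ("GET" + url + ts).encode()
        sig = hmac.new(secret.encode(), msg, hashlib.sha256).hexdigest()
        headers = {"X-Api-Key": api_key, "X-Timestamp": ts, "X-Signature": sig}
        return requests.get(url, headers=headers)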

Every API expects to receive data in its own format. I don't want to have to make a bunch of validators that sit on top of your data model. These can and should be provided, and Swagger makes that automatic.

As eropple states, if something is making me do all of that by hand, I'm already looking for something else that will allow me to just say "import project_lib; project_lib.authenticate(); ..."

If you're just talking about something that is read-only and that only emits a single field I care about, sure, no need for a library. Something like the free version of ipinfo.io would be a good example.

Stuff that's more complicated than that, yes, it needs a client library.


> When does it make sense to issue API-specific client libraries for a plain ole RESTful API?

Any time you have a statically-typed language consuming your API. Having to write my own Jackson declarations to pull your API into a JVM project, or my own DataContract barf for a CLR one, is a quick way to make me hate you, and me hating you means I'm already looking for an alternative that isn't you that gets out of my way.


I sort of thought the opposite, that the client libraries are what is getting in the way.


So you enjoy writing a bunch of boilerplate class files that are manual translations of somebody else's doc?


No, I just like to get started coding API calls.


One could argue that clients can roll their own damn client implementations, and that autogenerating client libraries is the folly. This is just REST, no?


One could argue that, but one would be creating a really sucky developer experience that can be avoided with comparatively little work.

- "Just REST" doesn't actually encapsulate meaningful behaviors by itself. It's by no means complete. Swagger is a partial patch on this by trying to reduce the scope of what your API is supposed to be doing, and it's not perfect, but it's better than "welp, throw Grape at it."

- Most statically-typed languages are a pain in the ass when it comes to HTTP responses because those responses can't be reified without types, and returning JSON blobs that don't have mapping types on the other end super sucks. I can do it myself if I absolutely have to--but making me waste my time doing it is silly.


> "Just REST" doesn't actually encapsulate meaningful behaviors by itself.

Yes it does: open a socket, speak HTTP, get some data back. Yeah you'll have to feed it into a JSON parser or whatever but apparently people consider that a lot of work now?
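
In Python, for instance, the entire "client" is a couple of lines (hypothetical endpoint):

    import json
    from urllib.request import urlopen

    # Open a connection, speak HTTP, parse the JSON that comes back.
    with urlopen("https://api.example.com/things") as resp:
        things = json.load(resp)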

I do agree on the points about statically-typed languages. Finagling dynamic types in such a system is a PITA. But I do not see the need for autogenerated tooling if you're consuming REST from a dynamic language; all you've automated is the HTTP call and data parsing (which takes very little time to write oneself).

Maybe I've been living under a rock the past 10 years but REST has never required very much effort, either to implement or consume. Then out of nowhere I begin encountering all these weird-ass tools -- Swagger (and who the F named that one, Old Spice?), Grape, whatever. All this crazy software to do all these crazy things when... it's just HTTP and JSON over the wire :/

At some point people need to ask themselves, 'how much abstraction is too much abstraction?'

Swagger's real value-add to me is in a standard documentation format for REST APIs, but the last instance I used of Swagger made my work more difficult :/


One can argue that, but in practice, this often means that the clients will just take their business and go elsewhere.


> I also tried out the "codegen" and a few other projects that generate boilerplate from a spec (for Python) - the code it generated was frustrating, lengthy, and much more complex than the simple endpoints that I quickly wrote from scratch.

For Swagger Codegen, you can also customize the mustache templates in whatever way you want (e.g. replacing 4-space indentation with tabs in the C# client templates). The auto-generated code may sometimes look lengthy, but there are usually reasons for that, based on feedback from developers. Please share more with us via https://github.com/swagger-api/swagger-codegen/issues/new so that we can further improve the output.

You may find Swagger Codegen a huge time saver in use cases in which the endpoints are growing (e.g. from 20 endpoints in v1 to 40 endpoints in v2) and you need to provide API clients in multiple languages (e.g. Python, Ruby, PHP, C#, etc.).

Disclosure: I'm a top contributor to Swagger Codegen


> I also tried out the "codegen" and a few other projects that generate boilerplate from a spec (for Python) - the code it generated was frustrating, lengthy, and much more complex than the simple endpoints that I quickly wrote from scratch.

Personally, I don't understand why codegen is even necessary as opposed to a pure object-oriented client that works directly from the Swagger spec. Before I knew Swagger existed, I actually implemented such a client based on my own made-up spec.[0] It's relatively naive, but I attribute that more to my relative youth in development than to the concept itself.

[0] https://github.com/haikuginger/beekeeper


>I had these same issues. It took me considerably more time and effort to write a Swagger spec and get the UI to actually behave than it did to write my entire API and some simple docs in markdown.

It's definitely going to be faster to just write it at first. Swagger, and everything like it, is an investment.

>I also tried out the "codegen" and a few other projects that generate boilerplate from a spec (for Python) - the code it generated was frustrating, lengthy, and much more complex than the simple endpoints that I quickly wrote from scratch.

That's auto-generated code for you. Can't rival humans (yet), but that doesn't automatically mean it's not useful.

Compare hand-rolled ASM with compiler-rolled ASM for an example. Human-written ASM will often be much more compact and readable, sometimes even more performant, but then you have to write ASM instead of letting the compiler do it for you.


At the risk of linking to another tool which will only get you 80% of the way there...

OpenAPI-GUI [1] [2] is a visual editor / creator for OpenAPI / Swagger 2.0 definitions. It runs entirely client side in the browser.

It's certainly useful enough to start an OpenAPI definition by providing all the boilerplate. Later, as the definition increases in complexity you can cut-and-paste in the editor of your choice.

[1] https://mermade.github.io/openapi-gui [2] https://github.com/mermade/openapi-gui


This is why we developed the Postman Collection format. It's very simple and expressive. More importantly, it's a way to impose _fewer_ constraints on the API designer, while also being richly documented.

Here's an example of a very simple collection: https://github.com/postmanlabs/postman-collection-transforme...

But, you can also go and have something like https://github.com/postmanlabs/postman-collection-transforme..., which is very well documented. The generated documentation for this is available too: https://docs.postman-echo.com

Disclaimer: I work at Postman.


> It's crazy to me that it's harder to write a Swagger file than it is to write the API itself. And there's a lot of tooling that benefits from Swagger, but... I've found they all work 80%. Codegen, documentation, etc get 80% of the way there.

Completely agree. This was one of the reasons I liked using hapijs and joi in the past; they combine to validate and set up your HTTP endpoints, and the Swagger documentation is literally generated from that code rather than being some separate thing you have to write. Unfortunately, outside of that very specific space, very little seems to exist.

Hapijs has its own issues and it's rarely my go-to Node framework anymore, but it's hard to beat doing documentation that way.


Have you looked at the protobuf gRPC implementation? I find it way simpler than Swagger. It also generates your server code as well.


100% agree, if you can afford to use gRPC. It's much simpler than having to use Swagger.


Isn't it more work to write tests than code? Not sure that's the best argument against a technology.

You can easily hang yourself many different ways. The idea with Swagger includes some rules, which is what makes it useful. If you don't want that, then why use it?

It would be nice if you didn't have to write HTML to write a web page, but that's a constraint that has pretty well known benefits to end users.


I use swagger annotations to generate a swagger json automatically from my REST services. It is quite effortless once set up.


Excellent points that are like a breath of fresh air.


Check out Stoplight.io for a good tool for managing swagger documentation. It's a lot easier than managing the YAML itself.


I've found swagger codegen to be really, really inconsistent between different implementations. A few of them - I recall we had a team using Qt - didn't even generate compilable code. When I looked into the infrastructure of the codegen project, I found... mustache.

Check it out yourself: https://github.com/swagger-api/swagger-codegen/tree/master/m...

Mustache is fine for doing a little view rendering, but for language generation... it's really obnoxious to use. Say you want to customize the output. Well, now you're basically replacing one of those magic .mustache files. And what's the model you use to generate those mustache files? Well, you got to look through the codebase for that.

I ended up just not using swagger-codegen, and created my own StringTemplate based system, which got the job done a lot faster. The swagger core model was really janky to build logic around, however, so this system was really implementation specific.

In the end, were I to do it all over again, I would have probably just built a different mechanism. And honestly, building your own damn little DSL and code generators for your use case will probably be faster than integrating Swagger. Especially if you do not use the JVM as part of your existing toolchain.

I've not found anything to support multiple languages easily. If I were to do something today, I'd probably create a custom API DSL, custom code generators, with an output for asciidoctor (which is awesome) and example and test projects that test the generated code. Once you get the pipeline going the work is pretty straightforward.


> I've found swagger codegen to be really, really inconsistent between different implementations. A few of them - I recall we had a team using Qt - didn't even generate compilable code. When I looked into the infrastructure of the codegen project, I found... mustache.

Yup, some generators (e.g. ActionScript, Qt5 C++) are less mature than the others (e.g. Ruby, PHP, C#). For the issues with Qt5 C++ generator, please open an issue via http://github.com/swagger-api/swagger-codegen/issues/new so that the community can help work on it.

> Mustache is fine for doing a little view rendering, but for language generation... it's really obnoxious to use. Say you want to customize the output. Well, now you're basically replacing one of those magic .mustache files. And what's the model you use to generate those mustache files? Well, you got to look through the codebase for that.

Instead of looking through the codebase, you may also want to try the debug flags (e.g. debugOperations, debugModels) to get a list of tags available in the mustache templates: https://github.com/swagger-api/swagger-codegen/wiki/FAQ#how-...

I agree that mustache may not be the best template system in the world but it's easy to learn and developers seem pretty comfortable using it.

> Especially if you do not use the JVM as part of your existing toolchain.

One can also use docker (https://github.com/swagger-api/swagger-codegen#docker) or https://generator.swagger.io (https://github.com/swagger-api/swagger-codegen#online-genera...) to leverage Swagger Codegen without installing JVM.

Thanks for the feedback and I hope the above helps.

(Disclosure: I'm a top contributor to the project)


Mustache is the most limiting (not to mention ugly) template language I've seen in a long time. It allows no logic whatsoever, and even writing simple if statements is tedious. We have a Swagger spec from which we generate Asciidoctor documentation via a Gradle plugin, which works very nicely, and generate basic POJOs for Java and POCOs for C#. That... does not work very well for our purposes.

We wanted to produce named enumerables by using custom extensions and found no way of doing it with Mustache. It didn't help that our custom extension YAML was passed into Mustache as serialized JSON. One of our developers took it upon himself to make it work and ended up writing his own simple version of the Codegen which works well enough for us. He tried modifying one of the backends preparing data for Mustache but then said rewriting it on his own was just simpler.


While Swagger might not be perfect (some pain points are addressed with OpenAPI v3) it works IMHO pretty well for us (Zalando) and myself doing API first:

* use a decent editor to write the YAML, e.g. https://github.com/zalando/intellij-swagger

* do not write any boilerplate code and do not generate code (if that's possible in your env), e.g. by using https://github.com/zalando/connexion (Python API-first)

* follow best practices and guidelines to have a consistent API experience, e.g. https://zalando.github.io/restful-api-guidelines/

Most importantly Swagger/OpenAPI gives us a "simple" (everything is relative!) language to define APIs and discuss/review them independent of languages as teams use different ones across Zalando Tech.


Easy is subjective, simple is objective. You probably meant easy ;)


I really like the idea of HATEOAS, but I have never seen hypermedia controls done in the wild at any company I've worked for, nor on any client projects. I think it's very cool, but a lot of development patterns don't consider it.


I agree that HATEOAS is never deployed anywhere, but I think I'd go further than that.

It's impossible for me to see how it would be possible to write a HATEOAS client, and I can't in practice see anyone doing so.

Optimizing for HATEOAS seems to me to be optimizing for entirely the wrong metrics, and a complete waste of development time and effort.


> It's impossible for me to see how it would be possible to write a HATEOAS client, and I can't in practice see anyone doing so.

There's nothing complicated about a client that understands Hypermedia links. You start at the root, it'll give you a set of links to follow, and you recurse. Here's a browser that can take a HATEOAS-compatible API and will let you work your way through the API: http://dracoblue.github.io/hateoas-browser/
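
A minimal link-walking client is just a few lines; here's a Python sketch, assuming HAL-style "_links" objects (real hypermedia formats vary, and the root URL is hypothetical):

    import requests

    # "Start at the root and recurse": print every reachable resource once.
    def enumerate_api(url, seen=None):
        seen = set() if seen is None else seen
        if url in seen:
            return
        seen.add(url)
        print(url)
        for rel, link in requests.get(url).json().get("_links", {}).items():
            enumerate_api(link["href"], seen)

    enumerate_api("https://api.example.com/")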

> Optimizing for HATEOAS seems to me to be optimizing for entirely the wrong metrics, and a complete waste of development time and effort.

Correct. HATEOAS is a neat parlor trick: point a dumb client at your API and watch it enumerate all the possible operations. Wow! The problem is that APIs aren't consumed that way, they're consumed by humans who need to decide which API calls to make based on user actions in the application... so developers will resort to consulting documentation in order to piece together the correct calls in the correct order. That's not something you can encode through Hypermedia. At best you can just model the relations between calls, but not the "how" or "why" of everything.


I meant a user friendly client, not something aimed at API developers. Sure you can create something which is aimed at Developers, but it's not presenting the data in a way any user would understand, or in a way that a UX expert can flex.

Fundamentally, your display logic should not be linked to your API schema, but HATEOAS essentially enforces that, because you can't predict what links will be available.


> your display logic should not be linked to your API schema

Why not? The whole point of HATEOAS is to design your API schema in function of your application flow. You know, just like we do on websites.


> I meant a user friendly client, not something aimed at API developers. Sure you can create something which is aimed at Developers

Just for clarity, are you talking about API Developers or API Client Developers? I would see the primary beneficiaries of HATEOAS being developers consuming your API - i.e. client developers.


A few counterpoints as I drift by:

The HAL hypermedia format is in use at Comcast, powering the Xfinity TV API. Not only does it take advantage of hypermedia and HATEOAS, it also splits requests/responses in order to make the most use of HTTP caches. https://boston2016.apistrat.com/speakers/ben-greenberg

Dunno if you're serious about not seeing how one could write a HATEOAS-driven client library or if I'm being trolled, but here's one that I wrote a couple of years ago: https://github.com/gamache/hyperresource There are many others as well (especially for the HAL format).


I'm definitely not trolling - it's just that's not how clients work.

I don't write a client by doing a random walk through the API until I get to the information I need; I want to call the API that I know I need to give me the information that I want, and for performance, I ideally want it all to be returned in a single call.

e.g. If I want user B's playlist, I don't want to have to crawl:

    /friends/ => ['b': { 'links': {'info': '/friends/619'} }]
    /friends/619 => {'links': {'playlist': '/friends/619/playlist' } }
    /friends/619/playlist
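
Spelled out as client code, that's three round trips for one playlist (a Python sketch against a hypothetical host):

    import requests

    BASE = "https://api.example.com"  # hypothetical host

    # round trip 1: list friends, find the link to b's info
    friends = requests.get(BASE + "/friends/").json()
    # round trip 2: fetch b's info, find the playlist link
    friend = requests.get(BASE + friends["b"]["links"]["info"]).json()
    # round trip 3: finally fetch the playlist itself
    playlist = requests.get(BASE + friend["links"]["playlist"]).json()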


But it's not a random walk. And it sure beats custom APIs returning random bags of data for any given UI screen.

Sometimes you do a few more web calls than you might otherwise. If you can get away with it, then great, because the benefits are plenty.

Hell, even just having the URI of where the resource came from is useful enough. That's before you've introduced related resources.


At this point aren't you still hard coding commands, but using "links" rather than URLs? And doesn't the dependency on every link in the chain outweigh one - easily maintained - URL?


You're hard-coding 'terms' from a hypermedia 'vocab'.

The thing is, if you use well-known terms, your client will work against ANY API that uses those terms, not just the one that you coded it to work against.

Not only that, but the APIs will have flexibility to move things around, or delegate functionality to other hosts by linking through.

As for the cost of walking the chain, once a target resource has been found, its URI can be cached, and you only need to re-walk if the resource 404/410s.


every web browser you use is a hateoas client

you get some html with embedded links and then the browser automatically goes and fetches css, js, images...

the remaining links it just presents to you, the user, to follow or not as you choose

hateoas is not a complicated idea. it's not meant to replace SOAP or gRPC or thrift. it's something else


Except the browser really isn't. It has strict behavior, and the list of what happens as it loads that hypermedia is deterministic and known to both the client and server. The difference between what happens when the browser sees a "stylesheet" link reference and a "icon" one is significant, and not something the browser is expected to figure out on its own.

The HATEOAS idea is that you throw that out, just use some arbitrary XML (sorry, "hypermedia") to return your application state, and that this somehow magically empowers the client to be able to puzzle out all the things that can be done to your state.

Except it can't. It's just arbitrary junk if you don't have a schema and a spec. And it will always be so. Discoverability is (1) not something you can get from a data structure and (2) something best provided by documentation, not runtime behavior.


You obviously need SOME spec or schema to be able to understand what to do with hypermedia, but it really comes down to how many things you have to define. It lets you define a smaller number of pieces and their functionality, and then lets you compose those smaller pieces into larger functions, and even create new combinations after a client is created without having to update the client.


I think you have a completely wrong idea about HATEOAS. The application is certainly expected to be able to handle the data format, not figure out by magic. As Fielding writes in his dissertation, REST components communicate by transferring a representation of a resource in a format matching one of an evolving set of standard data types. The client is certainly supposed to understand these data types, that's why they must be standard (like HTML). The dynamic part comes from the formats themselves, which may have variable or optional elements depending on the state of the resource.


Someone needs to fix the wikipedia page on HATEOAS then, because it says exactly the opposite of what you just did in its third sentence.

(One of the other problems with Fielding's work is precisely this word-salad of new jargon and usages, leading to exactly this kind of what-does-it-REALLY-mean-anyway confusion. But that's an argument for a different thread.)


From the wikipedia page: "The media types used for these representations, and the link relations they may contain, are standardized."

As for Fielding's work having a word-salad of new jargon and uses, I frankly didn't get that by reading his dissertation, which I found quite clear. There are a few concepts (Resources, Representations), but I think they make sense in the context.


The browser is driven by advanced AI wetware that understands the semantics of the data and can make decisions on what to do next.

I think REST/HATEOAS purists have always overplayed the browser example.

Pure machine to machine interaction is hard to automate.


You can use game AI techniques like needs-based-AI to create smart, resilient clients. What fun!


> Optimizing for HATEOAS seems to me to be optimizing for entirely the wrong metrics, and a complete waste of development time and effort.

It depends; it's just a way to reduce coupling between a webservice and its client apps. Instead of hardcoding all the URL endpoints into the client, the client has to follow the links provided by the webservice instead. Whether this layer of indirection has advantages for you depends on your use case. You could compare it to using DNS instead of static IPs. Static IPs work fine, but you end up with more coupling than using DNS.


If I remove the reference to a link in the response, it doesn't remove the client's need to:

a) know how to retrieve that link (i.e. what resource do I need to retrieve in order to get that response)

b) require that link to exist (they still need that link).

Requiring that a client know that, to get data Bar, they need to retrieve resource A, follow the link at foo[n]._link to retrieve resource B, and then follow the link at _link.bar feels like a total non-starter, and actually increases coupling.


I am not sure I get your example. But yes the client still has to know what to do in advance.


>But yes the client still has to know what to do in advance.

According to Roy Fielding[1] (who came up with the concepts of REST and HATEOAS):

"A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience"

So if an application doesn't rely on "standardized media types", and "the client still has to know what to do in advance", then it seems like whatever you're doing, it isn't HATEOAS after all.

[1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...


I don't think you and your parent mean the same thing. The client "has to know what to do in advance" in the sense that the human building the application needs to know what user interface they want to build, and know how the API works (in advance) in order to use it to populate the interface.

If you have an extremely dumb interface that just displays fields in API responses, then ok... but that's a very trivial UI that accounts for... basically no useful products ever.


It's called a browser.


I'll be honest, I stopped reading the article as soon as it mentioned Hypermedia, because I instantly saw the problem as the author being upset that Swagger is not Hypermedia.

I have a really hard time seeing APIs ever reaching that level of standardization, and I'm tired of having the Hypermedia discussion. I generally support logically organized APIs with good documentation. As long as those things are true, I'm happy for the API.


HATEOAS if your API needs to maintain state AND you want links AND you want a standard. A game API would be a good use case.

REST if your API should not maintain state between requests. If you want links as a part of your response (and want to use a standard) use e.g. HAL or JSON API.

I view them as mutually exclusive.


TL;DR version:

The first problem is that Swagger encourages codegen, and in static languages, said codegen is often unnecessarily restrictive when parsing input data. Adding a new enum member, and what that does to an existing Java client (which maps it to a Java enum that is now missing a member), is given as an example.

The second and third problems are actually one and the same, and that is that Swagger doesn't do Hypermedia. If you don't know what that and HATEOAS are, this entire part is irrelevant to you. If you do know, and you believe it's a fad, then you wouldn't agree with any points in that complaint. If you do know and like it, then you already know what it is about (it's basically just rehashing the usual "why HATEOAS is the only proper way to do REST" narrative, with Swagger as an example).

The last problem is that if you do API first (rather than YAML first), it's overly verbose, and can potentially leak implementation details into your spec.


At its core, the complaint is just sour grapes from not planning ahead. You wrote the API with the guarantee of an enum, and then broke the guarantee and expected all the code relying on that guarantee to be fine. It doesn't work that way.


Not quite; the point it's trying to make is that having an enum is not necessarily a guarantee that no new members will be added in the future, or at least it shouldn't be.

I think it's half-right, in the sense that this is true for enums that are used for input values only. If the API adds a new enum value, it can also add handling for that value, so existing clients should just work (they just never use that new value). But if the enum is present in any output data, then adding a new value to it is a contract break, because existing clients can see it and don't know what they're supposed to do with it.
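
One way a client can tolerate output-side additions is to map unknown members to a sentinel; a Python sketch (the Status enum and its members are made up):

    from enum import Enum

    class Status(Enum):
        PENDING = "pending"
        SHIPPED = "shipped"
        UNKNOWN = "unknown"  # sentinel for members this client doesn't know

        @classmethod
        def _missing_(cls, value):
            # Called when the server sends a value added after we shipped.
            return cls.UNKNOWN

    assert Status("refunded") is Status.UNKNOWN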


This seems more like, "issues with how Swagger is used in Java". A lot of Java developers are used to the SOAP APIs of yesteryear, and thus try to create clients with Swagger when they should be using gRPC or Thrift.

In other language paradigms, I haven't faced this issue. Swagger is _just_ documentation, and a nice front-end on top. The Java annotations definitely make it easy to generate, though, I'll give it that.


Thrift and protobufs are underappreciated. Better integration in something similar to the Swagger Editor would give these a much more comfortable home and allow them to see adoption in the web world, where people generally expect things to be a little softer.

I've never really liked the REST paradigm, so I'd be pleased to see it die.

My biggest complaint with Thrift: they still make you do some convoluted hacks to get a two-way ("streaming") connection in the RPC, and when this is discussed, they usually kibosh it pretty quickly by saying it's an unsupported case and that you can find some hacks online, but they don't want to talk about it any further.

This may not have been a big problem for them before gRPC was released for protobufs, but it's definitely something that's worthy of attention and response now. I know lots of people who are going with protobufs instead because of this.

The other thing is that while Thrift boasts a lot of language compatibility, several of these are buggy.


> I've never really liked the REST paradigm, so I'd be pleased to see it die.

Don't hold your breath. Fielding's thesis is already 17 years old, so one would expect its philosophy to endure by the Lindy Effect if for no other reason.


If the "Lindy Effect" was a natural law and not simply a shorthand to refer to enduring popularity, nothing would ever die out; its lifetime would continually double. Wikipedia notes this: Because life expectancy is probabilistically derived, a thing may become extinct before its "expected" survival. In other words, one needs to gauge both the age and "health" of the thing to determine continued survival.

There are lots of things in tech that we just stop doing one day. They get replaced by a different hot new thing. I'm sure REST will not go extinct for a very long time, but it definitely could go cold, just like its popular predecessors.


Perhaps REST, as the thing you do over HTTP with HTTP verbs, won't be around.

But the architectural principle called REST will be around forever, because it's essentially the same thing as functional programming.

That REST is cache-friendly corresponds exactly to the way that pure functions are memoisation-friendly.


> A lot of Java developers are used to the SOAP APIs of yesteryear,

Hmm way to paint Java developers as old dinosaurs...

> and thus try to create clients with Swagger when they should be using gRPC or Thrift.

They're trying to create clients with Swagger because they've made a REST API (yes you can do that in Java!). If you're using gRPC or Thrift, you're not making a REST API.


REST, if one insists on using it, should really be layered over top of something saner like gRPC or Thrift.

Personally, I've always found REST troublesome and overhyped. There are always a few incidents where you spend hours trying to figure out why something isn't working, before realizing you had the wrong method on the request. There's no reason the thing you actually want to do to a resource should be tucked away in some header that's not always easy to access or see; it's just asking for trouble.


What HTTP servers and clients are you using that don't clearly log the request method? I've never seen one where it's easier to read the body of the request than the method.


How about urllib, the Python default?

    import urllib.parse, urllib.request

    def handle_thing(thing):
        # data= silently turns this into a POST; drop it and it's a GET
        data = urllib.parse.urlencode({'stuff': 1}).encode()
        r = urllib.request.Request(url='http://example.com', data=data)
        return urllib.request.urlopen(r)
Just one example.

I know the tooling has improved somewhat since REST has become extremely common, so this is less of an issue now than it used to be (for example, most people use the Python requests module now, which makes it harder to use the wrong method (though many other HTTP libs still have the older urllib-style design)), but it's still annoying in principle. Combine with the fact that people tend to have different ideas about what the HTTP verbs and response codes mean, and it's pretty yucky.

Compare with Thrift, where you define an interface, list the possible exceptions, and generate stubs that auto-handle all of this communication exchange for you. All you have to do is make sure that you're calling the correct function, which should be pretty obvious.

This differs from setting the correct method in the HTTP headers in a couple of ways: first, HTTP clients usually assume a default method of GET. With a different protocol, there is no default "method", your action has to be defined somewhere. There will be no assumption (unless you code something implicit like this on top).

Second, a more conventional method has increased code locality, meaning the code that affects the operation is likely to be in the same source file/area. You'll normally be calling an ordinary function name like SaveThing inside the application logic flow, and it will be easier to debug, easier to realize the problem, etc.; the operation to perform is not tucked away in some other contraption that affects the headers.

Is it possible to design REST codebases so that such errors are hard to cause? Sure. But why do it the REST way and make it harder on yourself?

It should be just as easy to see what function you're asking the API to perform on a resource as it is to see what you're sending it. The operation I want to perform is an intrinsic part of what I'm doing. There's no reason to separate it and make it hard. I'd even prefer url-based actions, like example.com/string_save, because then at least the resource and operation are defined in the same spot.

A simple JSON envelope that has an "operation" key separate from a "data" key would make this easy, but then it's not in your header anymore, so it's not "real REST".
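
Something like this, say (a hypothetical payload; "string_save" is just an illustrative action name):

    import json

    # The envelope style described above: the operation travels in the
    # body, right next to the data, instead of in the HTTP method.
    payload = json.dumps({
        "operation": "string_save",
        "data": {"id": 42, "value": "hello"},
    })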


[flagged]


Uncivil comments aren't allowed here, and especially not personal attacks, which we ban users for. Please don't post like this again.

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/newswelcome.html


>You're going on about the value of "conventional methods" in a criticism of using HTTP verbs?

Yes? These have some intrinsic value that is defined in the domain. There is no potential for vocabulary contamination because everyone comes into the domain with a clean slate. An ambiguous, abused common vernacular is worse than a clean domain-specific one.

>Understanding the difference between a GET and a POST isn't some new-fangled idea, it's decades old. Like Tim Berners Lee old.

GET and POST usually work out OK; it's stuff like HEAD, PUT, and PATCH that people usually argue about, not to mention response codes.

GET, POST, and the HTTP intricacies are fine as a concern for HTTP clients. REST has shot through that and made it something that everyone has to worry about.

The issue is not only that no one can agree on how to do REST, but it's also, as stated in my other comment, that REST makes it harder to see what's going on. It requires important information (the verb) to be tucked away into something that takes extra steps to access. I'm not saying it's impossible to access that; I'm just saying it's error prone to do it this way.

>I didn't even know what Thrift was until I Googled it, and found out it was developed at Facebook. Okay.

I hate most "Project-By-BigCo" projects, but not everything that comes out of Google or Facebook is automatically evil.

Thrift is now controlled by the Apache Foundation, not Facebook. And it's a very common IDL, but I'll refrain from returning your snark. ;)

>I'm stunned. Do you write code that runs on the internet?

I'm not the only one. [0]

>How did you even get employed?

Just lucky I guess.

[0] https://twitter.com/skamille/status/588713316358475776


The edit timeout has expired on my other comment, but it just occurred to me that, since you were unfamiliar with what is arguably the most-used IDL today, you may not understand that an interface definition language like Thrift defines the interface, not just the objects. You'll have a section like this (adapted from the Apache Thrift tutorial file [0]):

    service Calculator extends shared.SharedService {
       void ping(),
       i32 add(1:i32 num1, 2:i32 num2),
       i32 calculate(1:i32 logid, 2:Work w) throws (1:InvalidOperation ouch)
    }
which lists the method name, parameters and types, return type, and possible exceptions. The objects are defined elsewhere in the file (or in an include). The method name is not just a value that is randomly assigned by the developer (and how could it be? the interface has to name the things so they can be referenced).

To be a firm REST religionist, as you seem to be, you must not have worked with it very often, but you can see that an actual IDL, and Thrift is just one of several, would make things much easier than the loose "My REST is purer than your REST" dick-measuring contests.

[0] https://git-wip-us.apache.org/repos/asf?p=thrift.git;a=blob_...


I'm not even sure what you mean. RPC is fundamentally different from REST, and I don't know how you can layer one over top of the other.


RPC is not fundamentally different from REST. REST is a form of RPC. (/me ducks tomatoes thrown among boos and hisses from the crowd)

The difference is that the processes behind REST speak with HTTP. You're still doing a "remote procedure call"; you're asking for a remote process to execute some function on your behalf and return the result. RPCs facilitate the same exact thing. How is this a "fundamental" difference?

[I'm speaking here of the practical difference, not some difference that was originally hypothesized in the dissertation.]

Thrift contains both an RPC and an IDL, but they don't necessarily have to be used together. Protobufs is just an IDL; gRPC is the RPC, which was released just a year or two ago.

You would layer Thrift and REST by putting a REST API over the top of an interface defined in the Thrift IDL. You could also run the Thrift RPC for Thrift-compatible clients.


RPC doesn't have a Uniform Interface.

Say you have a user profile, which has a Gravatar associated. Sure you can have a getUserProfile() procedure that fetches the user information, but what about the image? You can write a getUserAvatar() procedure that proxies it, but that's wasteful.

In a RESTful system, you have a Resource Identifier (URL) that you can indicate as a hypermedia reference (link), with which the client can use a standard data-fetching verb (GET) to retrieve it directly from the other server.

Of course, in a practical setting the getUserProfile() procedure would return the URL, but that's just an admission of the limitations of RPC vis-a-vis REST.
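
In code, the RESTful version is just link-following; a Python sketch, assuming a HAL-style payload (the host and link names are hypothetical):

    import requests

    # The avatar is a link the client GETs directly from wherever it
    # lives, rather than a getUserAvatar() procedure that proxies it.
    profile = requests.get("https://api.example.com/users/42").json()
    avatar = requests.get(profile["_links"]["avatar"]["href"]).content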

On the other hand, REST itself has its own problems and limitations, and is certainly not adequate for every use case, as Fielding's dissertation mentions at length.


REST doesn't have a uniform interface in practice, either.

I haven't read the dissertation so I can't really comment on the hypothetical REST (though it is on my reading list now, and not that it really stops anyone else), I can only comment on what, in the real world, passes for a "REST API".

"REST" principles certainly sound nice on paper, but for the most part, it's clear that they're completely implausible to realize in a wide-scale, meaningful way. After over a decade of pro-REST propaganda, people still can't even tell if their interface is "RESTful" or not.

The parts of "REST" that have worked are the two simple basics of HTTP: GETting a resource to read it, or POSTing a resource to write it. Nothing else has really stuck or can be expected to have a uniform meaning (and even POST's behavior will vary, with some doing an upsert-style operation and some accepting it only for new writes and using PUT and/or PATCH for edits). 200 OK means it worked most of the time, but sometimes people will return a plaintext error with it. 404 might mean that the resource is not found, or it might mean that the route is not found/no longer valid (or that it was called with invalid or improperly encoded parameters). There are a bunch of esoteric codes that are used to mean a lot of different things, always depending on who the implementer is.

So the "uniform interface" is just that everyone is using HTTP, to mean all sorts of different things. In practical terms, it doesn't really amount to much, except a lot of blathering over whether something conforms with a theoretical ideal that everyone has already demonstrated they're unwilling to conform to.


REST APIs must be hypertext-driven[1].

The term "REST," like so many others, now "means" something totally different than what the inventor of the term intended.

[1]: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...


REST has been extremely successful, having scaled to billions of services, like the one we're using right now.

"RESTful APIs" have mostly been a failure, I fully agree with you on that. The difference is that I think it's a cultural problem; when people like Licklider sensibly thought about a future wherein computers would learn to talk to each other (of which REST is a first step), as static protocols obviously didn't scale, we proved him wrong by sheer brute force. Watching something like Bret Victor's The Future of Programming, it's hard to miss the giant indictment of our whole profession.

That said, since I know the term is hopelessly lost, I don't push for REST either. Our service uses XML/JSON-RPC.


There is absolutely no reason why you can't implement the same REST concepts on top of another RPC mechanism.


Of course you can. But x86 machine code doesn't have Monads just because Haskell can be compiled to it. In both cases, the concepts have to be broken up to be implemented, as they are not meaningful at that level.


Can you please explain which REST concept cannot be implemented? I think the problem that REST/SOAP solve is that the "levels" you speak of have already been implemented by someone else. I am talking of serialization, transport-level details (e.g. SOAP over JMS), etc. If even transport-level abstraction has been taken care of by the implementers, then which concept, do you think, cannot be implemented?


In my team we established a process where our first step is to write down the spec of the API, in swagger format.

The spec is the source of truth, and is written manually in YAML (that's not that painful). The implementation comes later. Unit tests check that it conforms with the spec. [1] We also have a fake API, almost completely autogenerated from the spec, that's quite useful for preliminary testing of the clients.
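
The conformance tests boil down to validating live responses against schemas pulled from the spec. A rough Python sketch of the idea (illustrative only; we use the Java validator linked below, the /pets endpoint is hypothetical, and Swagger schemas are only approximately JSON Schema):

    import requests
    import yaml
    from jsonschema import validate

    with open("swagger.yaml") as f:
        spec = yaml.safe_load(f)

    # Pull the declared 200-response schema for GET /pets out of the spec.
    schema = spec["paths"]["/pets"]["get"]["responses"]["200"]["schema"]

    def test_get_pets_conforms():
        body = requests.get("http://localhost:8080/pets").json()
        validate(instance=body, schema=schema)  # raises on drift from the spec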

Client code generation wasn't up to my expectations, but I've experimented with applying a handwritten template to a parsed spec and that seems viable.

Swagger as a format might have its quirks, but the tooling is there, and having it as the authoritative source of truth paid off for us.

[1] https://bitbucket.org/atlassian/swagger-request-validator


We use Swagger heavily and I can say the following:

1) We use the tolerant reader pattern. It is entirely possible to generate tolerant client code where adding a new enum element does not cause issues. The problem here is not Swagger but poor codegen.
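
A tolerant reader amounts to something like this Python sketch (the Order fields are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class Order:
        id: str
        status: str  # deliberately a plain string, not a closed enum

    def read_order(payload: dict) -> Order:
        # Take only the fields we understand and ignore the rest, so new
        # fields or new enum values from the server don't break the client.
        return Order(id=payload["id"], status=payload.get("status", "unknown"))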

2) We use hypermedia extensively in all of our APIs; I don't see how this is impacted by Swagger. Hypermedia is a runtime discovery mechanism, and so, other than declaring the link structures in your schema, it has no place in Swagger. We don't use HAL, and wouldn't recommend it either.


Totally agree on the codegen bits.

> To mitigate this problem just stop generating code and doing automatic deserialisation. Sadly, I have observed that’s a very hard habit to break.

That conclusion really irks me. In Java, if you're using AutoMatter for example, you're one annotation away from discarding unknown fields, and many other serialisation frameworks offer the same ability. It's not a default (which IMHO it should be), but it's trivial to fix.


I'm surprised RAML isn't suggested as an alternative. http://raml.org/


Or API Blueprint. Markdown is a really nice format for documenting your interfaces and you don't really need special tools to render the documentation for the web. Just checking the file into GitHub creates a link to human-readable documentation.


Does it address the issues in the post?



> To mitigate this problem just stop generating code and doing automatic deserialisation.

No! WTF?! Just use generators that produce code that is resilient to a changing API. Why would you get rid of a huge productivity boost because the generator doesn't produce code you like? That's trading two hours of work to change the generator for many more hours of reproducing the proper API, especially if you have many to support.

I stopped reading right there. My personal biggest issue with Swagger is that they threw the baby out with the bathwater, reproducing XSD in YAML for no good reason. The UI they produced was nice, and that is probably the best feature of Swagger. But the data format doesn't solve a new problem IMO; it just created a new standard that we all have to deal with.

What is that now? Corba IDL, XSD, protobuf, thrift, avro, Swagger, Raml... I'm sure because of the flaws in each of those, we really should just use the new OpenAPI.

Or just get rid of them all and go bare metal with Cap'n Proto. Oh but don't use code generators for any of those, because that would make it way too easy to support all of them with a single API </sarcasm>.


> The rest of the URIs are provided by Hypermedia.

I know this sounds awesome but in practice, it's really useful to have my swagger UI exposing the endpoints for our front-end developers to consume. What a pain it'd be for me to tell them "hit / and see what you get!"

Having HAL links between resources is great and this discovery aspect of HATEOAS makes a lot of sense in development. But having a single entrypoint "homepage" to the API, when it comes to swagger, doesn't make sense.

I ran into this when I asked another department for the endpoint to hit for certain data. I was given a swagger page with a bunch of "/status" endpoints that would then reveal additional endpoints. Who knows what rabbit hole I was sent down. I just needed the endpoint and the necessary parameters.

If I were a third party or some outside developer consuming the API, it kind of makes sense. But our internal swagger docs really should reveal all the endpoints. I would feel like a big asshole if I asked my front-end co-worker to just "hit /status" and see if they get what they need!

Disclosure: I don't use Swagger codegen. I only use the Docblock markup to document my API and generate the swagger.json that I display on our docs page.


An alternative is to just ditch these complicated documentation formats altogether. Put down your pitchforks, I'll explain.

The author of REST, Roy Fielding, stated that only the entry URL and media type should be necessary to know in advance. Swagger would be considered out-of-band information: if it's necessary to have this documentation beforehand, then it doesn't follow this constraint. Interactions must be driven by hypermedia, an idea which has a very unmarketable acronym.

The alternative that is suggested is to document media types, not APIs. If HTML were not a standard and every web page had to define its own technical specifications, the web wouldn't have taken off, because it wouldn't have been interoperable. Interoperability is key, it reduces friction from transitioning between web pages on different servers. How HTTP APIs are built today is wasteful, there are vendor-specific tooling costs up front and it doesn't scale.


What is an example of a well done API that functions the way you describe?


I would never write Swagger by hand; why should I when I can have it generated? We are using Swashbuckle[0] to generate Swagger for our ASP.NET Web API, which has been a great experience. We can explore and test the API in Swagger UI. I have been sprinkling a bit of hypermedia on top of this with HAL, mainly just for having links. I have never met anyone wanting to go the full HATEOAS route, but simple links can go a long way. Swagger UI has been great for this, as HAL alone isn't really expressive enough to document what a link means. On the consumer side, I have been using NSwag[1] to generate clients, with good results.

[0] https://github.com/domaindrivendev/Swashbuckle [1] https://github.com/NSwag/NSwag


> why should I when I can have it generated?

Because maybe you work on a team where half are creating an API and half are creating a client, and if you write a Swagger spec first, you can both be working at the same time, against the same contract, and just meet in the middle? And if you're working on the consumer side of things, you can take that spec and stand it up against a mocking engine that will now give you something to test against while your API team finishes their work? Just because you would rather generate Swagger doesn't mean there's not a reason to write it by hand before writing code.


One can still do what you are describing and have the Swagger spec generated. On my platform, I would just specify data types and the interfaces, and have Swashbuckle parse this and spit out the Swagger spec. No need to hand-code Swagger while creating the contract up front. After this step, one could work at both sides of the contract independently as you describe.


So many problems in programming are caused by trying to replace code with configuration. I've learned over time that the DRY principle can be harmful. Avoiding repetition is only good when it remains equally readable and powerful.

Defining a DSL is almost always a better idea (but more difficult) than defining a configuration format.


What is the difference between a DSL and a configuration format?

I mean the essential difference. ;-)


People often don't make a crucial distinction: are you developing an API that happens to be accessible via HTTP, or are you developing a Web Service?

For an API where you control both the server and the client, you don't need to use REST.

For a Web Service, where you don't control which clients are used, you are better off with a RESTful implementation supporting hypermedia. Especially if you are interested in keeping clients happy and not giving them the middle finger shaped like an incompatible-v2 version of your service.


API Blueprint is much, much cleaner: https://apiblueprint.org

Here's a renderer: https://github.com/danielgtaylor/aglio

It's less feature-rich than Swagger but the format is much less of a nightmare.


I stopped reading after the author's silly interpretation of enums and why they "weren't" useful. The rantings of a beginner don't make for a good critique.


I read "YAML, which I call XML for lazy people"... and closed the tab.


why?


So here's something I've never quite understood when reading about HATEOAS:

> Without hypermedia, the clients would probably have to parse the payload to see if there are some kind of status and then evaluate that status to take a decision whether the order may or not be canceled.

In the given example, wouldn't the client still need to check whether there actually is a `cancel` link (and know that it _can_ be there), and decide whether or not to call it? In other words, isn't it unavoidable that there's business logic in the clients?
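
Concretely, the check in question would look something like this Python sketch (assuming a HAL-style order payload; the host and link names are hypothetical):

    import requests

    # The client still decides what the presence of "cancel" means;
    # it just keys off the link instead of a status field.
    order = requests.get("https://api.example.com/orders/7").json()
    cancel = order.get("_links", {}).get("cancel")
    if cancel is not None:
        requests.delete(cancel["href"])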


"..swagger repeats itself a lot with redundant annotations, it also leads to having more annotations than actual code."

(Shameless plug) For this exact problem, we developed a Java library that makes it easy to create RESTful services and is highly integrated with Swagger. Here it is: https://github.com/buremba/netty-rest

We also tried hard to stick with Swagger Codegen, but it's far from stable, so eventually we ended up creating Slate documentation that demonstrates the usage of the API with high-level HTTP libraries for various programming languages.

We convert the Swagger spec to a HAR representation, create example usage of each endpoint with httpsnippet from the HAR (https://www.npmjs.com/package/httpsnippet), and embed it in our Slate documentation using our Slate documentation generator (https://github.com/buremba/swagger-slate).

Here is an example: http://api.rakam.io/


> The original problem swagger tries to solve is: API documentation. So, if you need to document an API, use a format that is created for that purpose. I highly recommend using Asciidoctor for writing documentation. There are some tools for Java that help you with this. However they are somehow technology specific. For instance Spring REST Docs works very well if you are using Spring / Spring Boot.

I have used Asciidoctor + Spring REST Docs to document a complex REST API, and my experience was almost completely the opposite, for a number of reasons.

1.) Asciidoctor is powerful but exceedingly tedious to edit. It's especially painful to use for documenting an API in advance--wikis like Confluence are far easier to use and also allow commenting.

2.) Spring REST Docs generates snippets of asciidoc text that you must laboriously incorporate into the parent asciidoc. It's particularly painful when combined with Maven (another source of pain for some of us). Anybody who has asked "where do I look in target to find my output?" followed by "now that I found the output, how do I make it go somewhere else instead?" knows what I mean.

3.) The unit tests that Spring REST Docs depends on to generate output are hard to maintain for anything beyond trivial cases. I've spent countless hours debugging obscure bugs caused by Spring IoC problems. Also, the DSL used to define the output of examples is hard to understand--just getting URLs to show https schemes and paths takes time.
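For readers who haven't seen it, this is roughly what that DSL looks like (a sketch; the endpoint, field names, and snippet name are invented). The asciidoc snippets are produced as a side effect of a MockMvc test:

    import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.document;
    import static org.springframework.restdocs.payload.PayloadDocumentation.fieldWithPath;
    import static org.springframework.restdocs.payload.PayloadDocumentation.responseFields;
    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

    // Inside a test class configured for REST Docs, with an autowired MockMvc;
    // the generated snippets land under target/generated-snippets.
    mockMvc.perform(get("/orders/{id}", "1234"))
           .andExpect(status().isOk())
           .andDo(document("get-order",
                   responseFields(
                           fieldWithPath("id").description("The order identifier"),
                           fieldWithPath("status").description("Current order status"))));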

Finally, I would disagree that Swagger is purely designed to document existing REST interfaces. We're using it to design interfaces in advance. It's not perfect, but it works better than the other tools I have found, let alone informal specification via a wiki. Spring REST Docs is close to useless for design.


It looks like the perfect workaround is to use some middleware to autogenerate the Swagger docs, at least to keep code and documentation in sync. After a while I found myself doing the opposite, using Swagger to autoconfigure endpoints (https://github.com/krakenjs/swaggerize-express) and even mongodb models (https://github.com/pblabs/swaggering-mongoose). Here is my actual reference architecture: https://github.com/pibi/swagger-rest-api-server

Best thing about this approach is the clean separation between API definitions and the implementation (all over the stack), so the teams can just discuss about how to organize the resources and how to use them.


Here is a cached version as the site is currently giving me an HTTP 500: http://webcache.googleusercontent.com/search?q=cache:uQABqVC...


To be honest, I didn't know that Swagger could generate an API. Still, I wouldn't use anything that generates code from some description (the attempts I've heard of didn't work out well in the past).

In the projects I worked on, Swagger was used to generate a callable API from the existing implementation: add annotations (which are ugly :/, especially in Scala, where such Javaism hurts the eyes even more), generate the JSON, open it in Swagger UI, and let devs play with the API.

What hurt me recently was the 3.0 release - basePath got broken, so calls were generated with a double slash (`host//api/call` etc.), and OAuth2 is completely broken, so I cannot authenticate requests. 2.0 works with no issues, though I find it sad that the vanilla version I downloaded uses a hardcoded client_id and client_secret.


My problem with Swagger is the Swagger editor. It works sometimes, but sometimes everything is just red for no reason. Tried with Safari, Chrome, Firefox on different Macs. Am I the only one who thinks this tool is unusable?

Edit: Cheers to you guys at Novatec, have to take a look on InspectIT again.


It's definitely buggy. I've had more luck with it in Firefox than other browsers. Refreshing and changing a line or two [usually] snaps it back into shape.

Export your file frequently.


I think Swagger is nice--codegen makes it simple to generate API clients for various languages/frameworks. Saves a lot of time and potential sources of errors. If there is something easier/better/more reliable, then I am all ears, but Swagger keeps getting better.


Why are folks using Swagger or other tools for supposedly simple REST services at all? It's not like Swagger is a "standard" or something, and from the comments it appears using tooling for REST is a zero-sum game or even net loss.


I was engaged in a smallish project recently where they used Swagger as the shared resource that synced server dev with mobile dev. It didn't work that well, as you can imagine. I had to wait for the server folks to update/fix the Swagger docs all the time. They also thought they need not tell me when a breaking change occurred, because "you will see it in Swagger anyway". So I pulled changes regularly and generated iOS code; suddenly it would break, and then I had to hope that someone on the server team was available for consultation.


We started using it for its documentation and developer-console generation abilities at first. I have to say that I am much happier when I get given a Swagger definition for an API: it lets me generate a client pretty much instantly, and as long as the definition is written well, it more often than not halves my integration time.


I don't know if you're aware of the SOAP vs REST debate 10 years ago, or Web Services vs CORBA before that. Client generation, or any code generation at all, was seen as a big no-no in the REST camp, so I find it ironic that today this is used as Swagger et al.'s saving grace. What I frequently see is that inherently action-oriented protocols are crammed into resource-oriented REST facades, only without any formal interface descriptions at all, and without formal transactional or authorization semantics. OTOH, JSON REST services can't be used without browser-side glue code at all, so the Web-mismatch argument isn't working either. Makes you really wonder if there is any rationality or progress in middleware usage at all.


Most developers use Swagger wrong. But this is the right approach: https://github.com/swagger-api/swagger-node/

Swagger is a contract. From the contract you can generate documentation, clients, input/output data validation, mock responses, integration tests, and more besides, I'm sure.

If you start development from the Swagger spec, you can keep everything in sync. That way you can't forget to update documentation, validation rules, tests, or whatever else you generate from Swagger. That way you do the work once! No work multiplication!

It makes development So Much Easier!
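As a hedged example of that workflow (file and output paths are placeholders), the stock swagger-codegen CLI can produce several of those artifacts from the same contract:

    # One spec in, several artifacts out.
    swagger-codegen generate -i api.yaml -l java          -o gen/java-client
    swagger-codegen generate -i api.yaml -l nodejs-server -o gen/server-stub
    swagger-codegen generate -i api.yaml -l html2         -o gen/docs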


I also looked at Swagger but found it needlessly complex for my needs. I opted for API Blueprint coupled with aglio to generate HTML docs, which also gave me a nicely prepped Postman collection, including testing :)


I think Koa (or Express) is the best tool for developing an API spec/mock/doc that's usable and useful from the get-go. You can have an endpoint working in seconds, and it's easy to add all the different scenarios you need. Dropping in middleware is easy too. And if you write your code well, it's self-documenting.

And ultimately, it's documentation for developers. I think it's as easy for a non-JS dev to parse and understand an endpoint controller as it is to parse whatever freaky-deaky API documentation someone has cobbled together.


Whilst Swagger Codegen doesn't create perfect libraries and some of the generators don't support all the features, once it's set up, it's just as easy to create a client for every language as it is for just one.

For example, we create multiple client libraries, HTML documentation and a partial server (so we don't have to manually write the parameter parsing and model serializers).

Another advantage is you can start consuming the API as soon as the API design is agreed, by using a generated mock server instead of waiting for the real one to be implemented.


In terms of API documentation, the biggest problem is making sure the documentation is in sync with the actual code. I'm looking into using JSON Schema [1] along with Swagger and Dredd [2]. Making it all language-agnostic is key. If anyone is doing anything similar, please share your experience.

[1] http://json-schema.org/

[2] http://dredd.readthedocs.io/en/latest/


We have hundreds of APIs, and we use RAML + json schema.

Since Swagger (OpenAPI) seems to be gaining ascendancy, I recently (some months ago) looked at migrating off of RAML, but at that time the Swagger guys had a philosophy that they would only support a subset of json schema.

I get their reasons - they want to be able to generate code. But the things they didn't support (like oneOf, needed whenever you have a set of resources of varying types) are a show-stopper for many APIs with even moderately complex payloads.
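For example, a payload that can carry one of several resource types has no faithful encoding without oneOf. A sketch (the type names are made up), as a JSON Schema properties fragment:

    "properties": {
      "payment": {
        "oneOf": [
          { "$ref": "#/definitions/CreditCardPayment" },
          { "$ref": "#/definitions/BankTransferPayment" }
        ]
      }
    }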

For us at least, it's more important to have a single source of truth for our APIs than to be able to generate code from the specs. Hence we remain on RAML (which seems great - it just looks like it's losing the popularity contest).


We also use RAML + json schema primarily as the single source of truth. We use https://jsonschema.net to generate json schema by providing it examples.


in my opinion oneOf is bad for interoperability


Ensuring documentation is in sync with code is straightforward if you have tooling in place to generate documentation from code. For example, at edX we use Django REST Framework and django-rest-swagger. Our Swagger docs are always up-to-date.


This is a bit off-topic, but I feel that Swagger etc. (and actually most API docs) are too human-centric. You can't auto-chain APIs together without a lot of developer intervention. The one thing I saw that allowed that was SADI services [1], an awesomely pragmatic use of the semantic web. Pity it did not get picked up by Google etc.

[1] http://sadiframework.org/


I wish there were a way to easily generate a Swagger spec from a Java project as a build artifact out of the box, instead of having to serve the spec from a dedicated API endpoint. There are some plugins, such as swagger-maven-plugin [0], that do give you this functionality, though.

[0]: https://github.com/kongchen/swagger-maven-plugin
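For reference, this is the rough shape of that plugin's configuration (from memory of its README, so treat the element names as approximate and check the project page):

    <plugin>
      <groupId>com.github.kongchen</groupId>
      <artifactId>swagger-maven-plugin</artifactId>
      <configuration>
        <apiSources>
          <apiSource>
            <!-- package to scan for annotated resources; placeholder -->
            <locations>com.example.api</locations>
            <!-- where the swagger.json build artifact is written -->
            <swaggerDirectory>${project.build.directory}/swagger</swaggerDirectory>
          </apiSource>
        </apiSources>
      </configuration>
      <executions>
        <execution>
          <phase>compile</phase>
          <goals><goal>generate</goal></goals>
        </execution>
      </executions>
    </plugin>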


Agreed with some but not all. In essence, anything can be abused. Swagger being URI-centric is a well-documented, well-argued problem. And there's no support for hypermedia either. Asciidoc and Spring REST Docs shine there.

But the annotations are not a problem, IMHO. Swagger doesn't make things more complex; they already are. Swagger also doesn't make reading the docs difficult as such, apart from the URI problem above.

Also, Spring REST Docs learned from the mistakes of Swagger, so the comparison is a bit unfair on Swagger.


Check out Spring REST Docs http://docs.spring.io/spring-restdocs/docs/current/reference... . Always up-to-date documentation that doubles as tests, supports HATEOAS, and more.


For those that find it tedious to write swagger by hand, I've been using Stoplight[1] at work and it's been working pretty well. You can create the spec from scratch, or use their proxy to auto-document your api.

[1]: https://stoplight.io/


I am working on auto-generating clients, with built-in subscription-based service discovery, shock absorbers, and circuit breakers, based on OpenAPI (Swagger). We design our APIs with a customer focus and therefore haven't run into some of these problems so far.


Problems with Swagger: it is really unfortunately named. I have never used Swagger; to me it projects the aura of a quickly hacked weekend project whose authors did not put in any effort to at least name it properly. Yes, I know it is irrational.


Swagger is also a partial solution. I'd rather provide an API client library and clearly document how to use the client library and not the underlying REST API.


XSD is bulletproof. Why don't we just keep using XSD?


Because people don't want bulletproof. They want quick wins that work straight away.


"For every complex problem there is an answer that is clear, simple, and wrong." :)


Quicker, easier, more seductive


Maybe because it's for XML and not JSON, which most REST APIs use.


If you don't use namespaces, you can validate JSON against an XSD. When I'm doing a REST API in Spring, I'll use an XSD and XJC (the XSD-to-Java compiler) to generate model classes. The API endpoints are able to process those models as XML or JSON with one line of config.
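A minimal sketch of that setup (type and element names invented): XJC turns a complexType like the one below into a POJO, which Spring can then marshal as either XML or JSON depending on the Accept header:

    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:complexType name="Order">
        <xs:sequence>
          <xs:element name="id" type="xs:string"/>
          <xs:element name="status" type="xs:string"/>
        </xs:sequence>
      </xs:complexType>
      <xs:element name="order" type="Order"/>
    </xs:schema>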


One problem... it takes Swagger thirty-freakin-seconds to load its auto-generated whatever for Magento Community, and it has to do this every freakin time!


That's... odd. Even running M2 off of DevBox on my Mac the swagger endpoint renders in 5-10s (first visit).


Oh, man. There is so much wrong with this article. Here we go:

> The documents are written in YAML, which I call XML for lazy people

No. That's ridiculous. XML (and JSON, for that matter) is designed to be read and written by machines, not humans. (If the design goal was actually primarily for humans to read and write it, the designers failed. Miserably.) YAML is a nice middle ground, in that it can be unambiguously parsed by a machine, but is also fairly pleasant and forgiving for humans to write.

> The enum thing

This is a problem with the code generators, not with Swagger as a spec. Any API definition format that allows enums (and IMO, all should) will have this "problem".

Language-native enums are way better to deal with than stringly-typed things. An alternative might be to generate the enum with an extra "UNKNOWN" value that can be used in the case that a new value is added on the server but the client doesn't know about it.
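A hedged sketch of that UNKNOWN approach in Java (the enum name and values are invented), using Jackson's catch-all annotation:

    import com.fasterxml.jackson.annotation.JsonEnumDefaultValue;

    public enum OrderStatus {
        PROCESSING,
        SHIPPED,
        CANCELED,

        // Fallback for values added on the server after this client was generated.
        @JsonEnumDefaultValue
        UNKNOWN
    }

    // Note: Jackson only uses the fallback if the ObjectMapper enables
    // DeserializationFeature.READ_UNKNOWN_ENUM_VALUES_USING_DEFAULT_VALUE;
    // otherwise an unrecognized value still throws during deserialization.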

However, I would consider adding a value to an enum to be a breaking API change, regardless of how you look at it. What is client code expected to do with an unknown value? In some cases it might be benign, and just ignoring the unknown value is ok, but I'd think there are quite a few cases where not handling a case would be bad.

I agree with the author that "adding a new element to the structure of the payload should NEVER break your code", but that's not what adding an enum value is. Adding a new element to the structure is like adding a brand-new property on the response object that gives you more information. The client should of course ignore properties it doesn't recognize, and a properly-written codegen for a Swagger definition should do just that.

> Nobody reads the documentation anymore.

The author admits this issue isn't specific to Swagger, and yet harps on it anyway. What?

> No Hypermedia support ... Which means, you can change that business logic whenever you want without having to change the clients... Swagger is URI Centric

Oh god. I don't care what Roy Fielding says. No one has embraced hypermedia. It's irrelevant. Move on.

Being able to change biz logic has nothing to do with hypermedia. That's just called "good design". That's the entire point of an API: to abstract business logic and the implementation thereof from the client.

Regardless, the entire idea of being able to change your API without changing the clients is just silly. If you're changing the API purely for cosmetic reasons, just stop, and learn how to be a professional. If you're changing the API's actual functionality or behavior, the code that calls the client needs to know what the new functionality or behavior is before it can make use of it, or if it's even safe to make use of it. I imagine there are some small number of cases where doing this "automatically" is actually safe, but the incidences of it are so vanishingly small that it's not worth all the extra complexity and overhead in designing and building a hypermedia API.

APIs are not consumed by "smart" clients that know how to recurse a directory tree. They are consumed by humans who need to intelligently decide which API endpoints they need to use to accomplish their goals. Being able to write a dumb recursing client that is able to spit out a list of API endpoints (perhaps with documentation) is a cute trick, but... why bother when you can just post API docs on the web somewhere?

This section is irrelevant given my objections to the last section.

> YAML generation (default Java codegen uses annotations, codegen via this way will leak implementation details)

Well, duh, don't do it that way. Do API-first design, or at least write out your YAML definition by hand after the fact. If nothing else, it's a good exercise for you to validate that the API you've designed is sane and consistent.

> Swagger makes a very good first impression

Yes, and for me, that impression has mostly remained intact as I continue to work with it.

> What are the alternatives?

Having worked with both Spring and JAX-RS, I find it hard to take someone seriously if they're strongly recommending it as a better alternative to something as fantastic as Swagger. Also note that the author previously railed on the reference impl Java tool for its reliance on annotations... which... same deal with Spring and JAX-RS.


JSON follows JavaScript syntax, which is specifically meant to be written manually, i.e. by humans. (This is one of the problems of JSON, by the way: look at the commas, for example, especially the infamous problem of trailing commas being illegal. This is definitely not meant to be written by machines.)

XML is indeed for machines, that is, the markup part of it. YAML may be more readable, but note that the specification of YAML is about three times as large as that of XML (and the XML specification also describes DTDs, a simple grammar specification language). XML design goals are explicitly stated in its specification, you're free to take a look.


Any non-trivial format that's meant for humans MUST at the least allow comments and should not force humans to write needless characters that would be trivial for a machine to do without (e.g. quotes around key names). Also, I'm flipping the bird to any format or language that doesn't allow me, a human, to use multi-line strings. And in JSON I can't even at least concatenate a few of them into a longer string. JSON is definitely not meant to be written by humans.


> The Problems with Swagger

> This is a Swagger misusage problem, not a problem from Swagger per sé.

The TL;DR is a list of gotchas and best practices the author has learned from using Swagger.


Almost.



