Interfaces Generally Belong with Users (neugierig.org)
89 points by kogir on Nov 23, 2019 | 39 comments



So you just copy and paste subsets of the Backend methods and their signatures into separate interfaces that only have the methods needed for the user function and its test? That doesn't sound very good.

I'd rather have one global "backend" interface and have mocks use that. If the type system gives me issues with unimplemented methods, then that's a problem with the type system itself -- allowing partial interfaces (through keywords or something) should be a first-class feature.


If I take a complete Backend but only use 1 of 10 methods, that relationship is better expressed as an interface with that single method. Constraining the possible behavior of that code via this kind of contract makes it easier to understand (the signature says more by permitting less) and test (the interface provides a natural mock point).
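To make that contrast concrete, here's a minimal Go sketch (Backend, UserName, and profileGreeting are invented names for illustration, not the article's):

```go
package main

import "fmt"

// Hypothetical full backend with many methods (only two shown).
type Backend struct{}

func (Backend) UserName(id int) string { return fmt.Sprintf("user-%d", id) }
func (Backend) DeleteEverything()      {}

// The caller only needs UserName, so it declares just that.
type userNamer interface {
	UserName(id int) string
}

func profileGreeting(b userNamer, id int) string {
	return "Hello, " + b.UserName(id)
}

// A test double only has to implement the one method.
type fakeBackend struct{}

func (fakeBackend) UserName(id int) string { return "alice" }

func main() {
	fmt.Println(profileGreeting(Backend{}, 7))     // real implementation
	fmt.Println(profileGreeting(fakeBackend{}, 7)) // test double
}
```

The signature of profileGreeting now documents exactly what it can do with the backend, and the fake needs one method instead of ten.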


Can’t speak for Go, but TS does provide the `Pick<T, K>` construct, essentially meaning “from a type T, I only require a subset of properties K”. This avoids needing to duplicate partials of the core interface.

It does now strongly couple you to the core interface—you can’t put this function in a separate module that’s unaware of the core interface anymore—and now if the core interface changes, the function implementation itself will break instead of the call sites (which could be a good or a bad thing).


>through keywords or something

The whole point is to avoid this. Currently in many languages you could trivially generate method stubs that throw errors or some such thing but that still slows down iteration and opts you into far more baggage than necessary.


This is good advice, but the benefits are not entirely explained.

The approach is that clients should declare their own interfaces, and libraries/backends should implement those interfaces.

- So instead of:

    app > depends on > api
- It's

    api > depends on > app interface
    &
    app > depends on > app interface
Basically another version of [1] "The Dependency Inversion Principle", which allows you to break hard dependencies into lightweight interfaces.

[1] https://en.wikipedia.org/wiki/Dependency_inversion_principle
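A minimal Go sketch of that inversion, with both "packages" collapsed into one file for brevity (Storage, Render, and apiClient are invented names):

```go
package main

import "fmt"

// --- app side: declares the interface it needs ---
type Storage interface {
	Load(key string) string
}

func Render(s Storage) string {
	return "value: " + s.Load("greeting")
}

// --- api side: implements the app's interface ---
type apiClient struct{ data map[string]string }

func (c apiClient) Load(key string) string { return c.data[key] }

func main() {
	c := apiClient{data: map[string]string{"greeting": "hello"}}
	// The api conforms to the app's contract, not the other way around.
	fmt.Println(Render(c))
}
```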


I extend this to explicitly have an adaptor in the middle:

    app > depends on > app interface

    service > implements > api 

    adaptor > composes (api >> app interface)
That way the app doesn't need to know about the API, nor the API about the app. The wiring between the two is managed by the adaptor, which is free to be implemented on the client/app side or the vendor/service side.
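A sketch of that adaptor arrangement in Go (all names here are hypothetical):

```go
package main

import "fmt"

// App side: the interface the app depends on.
type Greeter interface {
	Greet(name string) string
}

// Vendor side: the service's own API, unaware of the app.
type VendorAPI struct{}

func (VendorAPI) FetchSalutation(who string, formal bool) string {
	if formal {
		return "Good day, " + who
	}
	return "Hi, " + who
}

// Adaptor: composes the vendor API into the app interface.
type vendorAdaptor struct{ api VendorAPI }

func (a vendorAdaptor) Greet(name string) string {
	return a.api.FetchSalutation(name, false)
}

func welcome(g Greeter) string { return g.Greet("Ada") }

func main() {
	fmt.Println(welcome(vendorAdaptor{api: VendorAPI{}}))
}
```

Only vendorAdaptor knows both sides, so swapping vendors means rewriting one small type.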


I'm super confused about what the point is here. There seems to be something about how abstract interfaces (i.e., interfaces that are separated from the implementation) are good for testing (and more?) because there is less coupling (also at build time) to other modules.

It might be hard for me to get as a C programmer, because we make manually decoupled interfaces all the time (we create header files explicitly). In a sense all interfaces in C are abstract, and only the linker matches actual implementations to the interfaces. I believe with explicit header files there aren't any of the issues mentioned in the post.


Nope. The author is a C programmer too.

The post is talking about who writes the interface that client code relies on. Is it the author of a package (e.g. header files) or the user of the package?

In Go and TypeScript it's easy for users to write their own interfaces describing just the functionality they rely on. This makes testing a lot easier.


Before someone “but Haskell”’s me:

... In Go, Typescript, and many other languages, but sadly lacking in many of today’s most popular languages...


How does the compiler know that the implementing class can be used where the interface is required?

Is there still something like "Backend extends ProfilePage", as in Java?


> How does the compiler know that the implementing class can be used where the interface is required

Structural typing.

The instance does not assert to the compiler an exhaustive list of all interfaces it claims to conform to; instead, the compiler determines, during compilation, whether the concrete instance being passed meets the requirements of a declared interface. The net result is that an implementation need not have known about an interface when it was written; it need merely match the interface’s requirements when called upon to do so.

It is, effectively, statically enforced duck typing.
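A small Go illustration of this: the interface below happens to match fmt.Stringer, but Point was written without referencing it and still satisfies it.

```go
package main

import "fmt"

// An interface declared long after the implementing type was written.
type Stringer interface {
	String() string
}

// Point knows nothing about the Stringer interface; it merely has a
// method with a matching signature, so the compiler accepts it.
type Point struct{ X, Y int }

func (p Point) String() string { return fmt.Sprintf("(%d,%d)", p.X, p.Y) }

func describe(s Stringer) string { return "got " + s.String() }

func main() {
	fmt.Println(describe(Point{1, 2}))
}
```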


Not like "extends" or "implements".

If a class implements all the methods of the interface (as seen at compile time - same method names, arguments, returns) then that's it, period. The compiler will allow you to use it. Beyond that, the interface doesn't "know" about the class and the class doesn't "know" about the interface.

So it doesn't matter the interface didn't even exist when that class was written.


Sounds a lot like Python's duck typing to me. (Although Python would coerce the types if they don't match.)


It is. "Structural typing" is basically "compile time duck typing".

One major notable advantage is that you are checking the existence of every method you may call, not just the ones your tests happen to use.


Javascript (without typescript) also has duck typing. The difference with structural typing is that the interface is formalized and enforced at compile time.

Python 3.7 added compile-time structural typing using what they call Protocols. It works very similarly to TypeScript. See https://www.python.org/dev/peps/pep-0544/


Similar in some ways, but duck typing is a misnomer because it doesn't involve typing.

These are type classes, or what Rust calls Traits.


> duck typing is a misnomer because it doesn't involve ...

dang, the joke was so close


It's great to see this blog post get published. Evan is one of the most creative engineers I've gotten to work with.

Go's structural interfaces are often harped on, but this is exactly a case where they are very useful. Perhaps an approach is to use named interfaces for variance in implementation strategy, and structural interfaces for usages as described in the blog post. I wouldn't discount how often this pattern comes up.


I think allowing extensions on existing classes/structs can also solve this problem without having structural typing.

ie the client can create a new interface and explicitly implement it for existing classes/structs.

I think Swift and Rust allow this.


Great post. The author makes their point clearly and concisely - and what's more, they're absolutely right. The recommendation of this post might seem strange at first, since in some ways it contradicts some other programming heuristics. But it can make dependencies much more clear and explicit.

Though, Go and Typescript are not necessarily the best languages in which to fully embrace this idea. This is a scenario where type inference is very useful.

As other comments say, it can be a hassle to define sub-interfaces for each function, which can lead to a fair bit of duplicated text. But in a language with type-inference for function types (instead of only within-function type inference), the used interfaces can be inferred instead of explicitly written down, providing a lot of flexibility and allowing this principle to be followed in more cases.


BTW:

>>> the obvious thing is to do in a nominally typed language like Java or C++ is the "extract interface" refactoring, where you create an interface [...]

well, first of all, I still think that a library (even an internal one) should expose only interfaces by definition. It's not only for the sake of the tests; it's also because it forces you to keep boundaries "clean".

and second... what I normally do is throw in Mockito[0] and let it do the subclassing, instantiation, and definition of custom methods in 2-3 lines...

[0] https://site.mockito.org/

I'm not sure that "making it easier to define interfaces a posteriori" is a good selling point for structural typing.


This is a good article about structural interfaces which makes some positive points. I don’t have a rebuttal per se, but the testing part of this article is an interesting side topic.

In the specific example given, a real-life developer would write all the code to get the UI up and running, test that it works, and probably just ship it.

What do you need to write tests for? Well: there’s nothing wrong with writing tests and a better developer would include at least some testing with their new product feature. They’ll certainly want to know if their product breaks in production, and to be alerted automatically, detecting any regressions.

If you write a test around the backend code with a test double / mock / fake for the backend, then indeed it will be the case that when you break the code that interacts with the “backend”, your test dashboard will alert you with “BackendTest FAILED”. But goodness me it’s a lot of work to make the code testable like that: work to rethink the code into a testable form, and more importantly cognitive load on the next person who comes along and has to read your now not-so-simple backend fetching logic. The linked article certainly shows how to reduce the impact of making the code testable, but you’d still have to make the mock, which is effort as well as more lines of code to go wrong.

A much better tool in this sort of situation is an integration test. It mirrors what you do as a developer when you first write the code, for starters. And if your real-life backend is sufficiently stable that the integration test is reliable, or can at least be ignored when the backend is in a known-broken state, then an integration test combined with hygienic version control is a much better way of isolating regressions to particular diffs.

There are a lot of assumptions needed to make this work — linear commit history, good testing tools, developers making focused and logical commits which change one thing at a time — but having that kind of infrastructure and culture is far more beneficial to an organization anyway, with the bonus that you don’t need to write complex code that is clouded with test infrastructure wizardry mixed in with the business logic.


> But if you're not using a framework, the obvious thing is to do in a nominally typed language like Java or C++ is the "extract interface" refactoring

I've seen such statements here and there, and it makes me feel I'm missing a whole subculture of our industry. Seriously, are we learning things by IDE operations nowadays? Is selecting text and applying an option from the "automatic refactor" menu a first-class programming concept nowadays?


What you quoted doesn't mention an IDE at all. "Extract interface" is just the name of a series of steps that you can accomplish with or without an IDE.

Martin Fowler's book 'Refactoring' gives names to a whole bunch of these. That might be where the OP is coming from.


This is wrong, bad advice, for the same reasons that structural typing is wrong and nominal typing is right. It goes against all OOP and modularity principles. Interfaces should never be declared at the use site because an interface is a contract promised by the implementor.

As for the example, it feels contrived and unprofessional. There should never be a monolithic "Backend" type. Functionality should be separated into services, each one with an interface, and each one injected dynamically for easy testability and upgradeability.


> [this is wrong] because an interface is a contract promised by the implementor.

I think the point the article makes is that you can instead think of an interface as a list of requirements demanded by a user (so your criticism here is a bit of a tautology). The article backs this up by describing concrete advantages of that approach.

Maybe you could share why (or whether) you think these don't apply, or how they contrast with your preferred approach?


It's all in the books. Gang of Four, etc. Learn from good books instead of bloggers who have a type called "Backend" and make brilliant discoveries that DI and nominal typing are bad despite the whole industry using them successfully.


> It's all in the books. Gang of Four, etc.

No worries, I've read the usual suspects. Taking your example of Gang of Four though (and this applies similarly to the other usual suspects), it definitely does not discuss this observation (and its potential trade-offs), given it predates common use of structural type systems by far.

I think as professional engineers it behooves us well to prefer actual argument and exposition of engineering tradeoffs over authority ("whole industry does it", "learn from good books").


> I think as professional engineers it behooves us well to prefer actual argument and exposition of engineering tradeoffs over authority ("whole industry does it", "learn from good books").

That really depends on what you want to achieve. e.g. if you want to retain the possibility to fire a whole team and replace it with newcomers, or external contractors, at any point in the development cycle, it is muuuuch safer to stick to what is taught in most schools (which is "traditional OOP").


What's the big deal? Structural typing explores the idea that the implementer contract and the user requirement are separate, and tries to gain some composability out of that. You still have the public methods as the implementer contract.


> It goes against all OOP and modularity principles.

Sounds good to me, then!


I hate the flavour of OOP present in C++/Java/C# as much as the next man, but the principles are sound. Defining the interface at the user side is like a conflict of interest, like a kangaroo court. It's just bad architecture.


> Defining the interface at the user side is like a conflict of interest, like a kangaroo court. It's just bad architecture.

You're wrong.


I think the bulk of this post is speaking to the interface segregation principle.

Also, I would not use the advice below as a _blanket rule_ as the author puts it.

> interfaces generally belong in the package that uses values of the interface type, not the package that implements those values.

It could be a good rule for many applications, but I find it difficult to swallow as a universal rule. I feel there are so many possibilities where this might not apply; for example, perhaps you have a module comprised entirely of interfaces in order to decouple a layer of implementation.


Too bad author forgot to mention the tradeoff.

This pattern violates DRY the hard way. I think the most-seen example is Go, where the stdlib has no "http.Client" interface, only a struct.

So every user of an http client at some point writes the same copy-pasted code for the interface, so it can be used for that pattern of mocking. I'm not saying it's bad - I'm also doing it. It's just a design decision.
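The commonly copy-pasted interface is usually a one-method "Doer" matching *http.Client's Do; a sketch, with an invented fake for testing:

```go
package main

import (
	"fmt"
	"net/http"
)

// The interface every package ends up redeclaring for itself:
// *http.Client satisfies it, and so can any fake.
type Doer interface {
	Do(req *http.Request) (*http.Response, error)
}

// A fake that never touches the network.
type fakeDoer struct{ status int }

func (f fakeDoer) Do(req *http.Request) (*http.Response, error) {
	return &http.Response{StatusCode: f.status, Request: req}, nil
}

// check reports the status code for a GET of url via any Doer.
func check(d Doer, url string) (int, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return 0, err
	}
	resp, err := d.Do(req)
	if err != nil {
		return 0, err
	}
	return resp.StatusCode, nil
}

func main() {
	code, _ := check(fakeDoer{status: 200}, "https://example.com")
	fmt.Println(code)
}
```

In production code you would pass a real *http.Client to check; the duplication the comment describes is that each package declares its own copy of Doer.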


If that interface was in the http package, that would be a problem. Let's say they want to add a new method to http.Client. Would they add it to the interface? If no, then it would be difficult to test code that uses that new method. If yes, then all existing mocking code would stop compiling, because those mocks would no longer implement the interface.


To me the case described in the article is actually one of the rarer uses of interfaces: you're the only one using a handful of methods from a single class which has many more.

If you're using all/most methods of the class, you'd be better off if the interface was defined directly by the class.

If there are many different users of the same subset of methods, it's better if the class designer declares that interface.

If there are multiple classes implementing the same interface, it's almost crucial that the implementers agree on and define the interface.

So I would say, the statement should be 'in general, interfaces belong in the package that implements value of the interface type; but don't forget that sometimes they can be declared by the code that uses them'.


> but I find it difficult to swallow as a universal rule

But the author doesn't claim it's a universal rule; the author claims that the rule applies generally. It's good to think about the cases where it doesn't apply, but the author did phrase it in a way that expresses this.


It seems the larger point here is that more evolved type systems allow for a better bidirectional decoupling of interfaces and implementations. Rust has a lot of things that make this easier yet governable. The commenter (jstimpfle) who mentioned C is also right - C, properly used, allows one to do the same things, since it's not bound by OOP strictures.



