Interfaces – The Most Important Software Engineering Concept (robertelder.org)
157 points by nkurz on Feb 26, 2016 | 42 comments



When I think of interfaces, I recall being told to think about how and why a DVD works.

The DVD would have been manufactured in a factory somewhere in China, by a bunch of people who do not know you. And yet, the DVD fits perfectly into a drive that you bought, probably manufactured in the United States, by people who know neither you nor the DVD manufacturer.

The software to read the DVD is written by yet another set of people, in yet another country, none of whom will have talked to any of the parties mentioned above. And yet, the DVD is read perfectly, with error correction to adjust for any minor scratches.

The DVD plays video and audio data, created by people (probably in Hollywood) who also have not met any of the parties above. And yet, it reproduces the sound and picture exactly as the original directors/composers intended.

And all of this costs less than 50c a disc, with a drive that can be bought for less than $30.

That is the power of interfaces.


Power of capitalism too.


I have been skeptical of the notion of "interface" lately.

First of all, there is a comment from Erik Meijer who said something like "an interface without laws is useless" (he was referring to "interface" as a concept in OOP languages, and by "laws" he meant algebraic laws). Basically, what he wanted was the algebra, not just the interface.

And if you look at it from the Curry-Howard correspondence perspective, where programs are proofs and types are theorems, then, since a type is an interface, you could say that theorems are the interfaces of mathematics. So there already is a very precise notion of what an interface is.

On the other hand, I also kinda like the notion of a DSL as an interface, as described in the recent article here on HN: http://degoes.net/articles/modern-fp/

The main contention here seems to be: should the interface just use names (akin to philosophical nominalism) and leave them open to interpretation, or should it somehow encode the properties of the things it describes (akin to philosophical realism)?


Indeed, this quickly becomes obvious after writing Haskell for a while. Lawless typeclasses just seem clunky and get less reuse compared to their lawyer'd up brethren.

You can also get a very slight taste of this in C#, Java, etc., where larger interfaces seem clunky and get less reuse than smaller interfaces. In C#, if an interface has some nice properties, the extension methods on that interface will typically allow for a combinatorial explosion of generic utility functions. So this might be one way to judge the "algebraicness" of interfaces. You see this a lot in the LINQ collections libraries, which seem to have put some thought into laws.

Unfortunately, in these languages you can only go so far due to the lack of higher-kinded polymorphism (being able to abstract over the T in T<A>, rather than only over the A in a concrete Type<A>).
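
For what it's worth, here's a minimal Haskell sketch of the difference (a stand-in for the standard Monoid class, with the laws in comments): the laws are what make the generic helper at the bottom trustworthy for every instance.

    import Data.List (foldl')

    -- A lawful interface: every instance is expected to satisfy
    --   mappend' mempty' x        == x
    --   mappend' x mempty'        == x
    --   mappend' x (mappend' y z) == mappend' (mappend' x y) z
    class MyMonoid a where
      mempty'  :: a
      mappend' :: a -> a -> a

    instance MyMonoid [b] where
      mempty'  = []
      mappend' = (++)

    -- Written once against the interface; the laws are what let you reason
    -- about it (e.g. re-associate or parallelise the fold) for any instance.
    mconcat' :: MyMonoid a => [a] -> a
    mconcat' = foldl' mappend' mempty'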


One thing that baffles me a little bit about LINQ in .NET is that it's got a dual identity.

On one hand you've got the IEnumerable<T> interface, which supports the fluent syntax and is extremely easy to riff on with extension methods. (The "fluent syntax" itself is just a bunch of extension methods.)

On the other hand you've got the mechanisms you need to support the query syntax, which behave more like duck typing. If you implement a couple of methods with certain names and signatures, not all of which are formalized by an interface, then the compiler will let you use query syntax with objects of that type. It also turns out that the stuff you need to implement is roughly the bind and return operations from a monad... but not exactly. In particular, the method they use in place of bind is a bit more complicated, and not in a way that adds any real expressive power. It just makes the signature a bit more irritating to support.
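
Roughly, in Haskell-style signatures (the second one is my own transliteration of the monadic SelectMany overload, with the extra result selector it carries):

    -- Monadic bind:
    bind :: Monad m => m a -> (a -> m b) -> m b
    bind = (>>=)

    -- Rough shape of LINQ's SelectMany with a result selector: it fuses bind
    -- with a final projection, which adds no real expressive power (passing
    -- \_ b -> b as the selector recovers plain bind) but does make the
    -- signature more annoying to implement.
    selectMany :: Monad m => m a -> (a -> m b) -> (a -> b -> c) -> m c
    selectMany m f sel = m >>= \a -> f a >>= \b -> return (sel a b)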

In other news, re: the lack of higher-kinded polymorphism - that's something that the maintainers of F# have forcefully resisted adding to the language. The argument, which I've yet to take the time to look into deeply, is that typeclasses interact poorly with formal interfaces, so you shouldn't allow a language to have both. I'm a bit skeptical of that one, though, since I'd assume there would be wailing coming from the Scala community if it were really that bad.


I'm guessing F# may also want to limit constructs which cannot interop with C#. Not sure if Scala's HKTs are usable from Java.


Maybe, but I'm inclined to guess that it could probably be made workable without much more pain than what it takes to interact with F#'s curried functions from C#.

Usually the recommended approach for making F# code usable from C# is to add an object-oriented wrapper layer to the F# library. That ends up being much, much easier in practice than trying to deal with F# code on its own terms from C#.


"... add an object-oriented wrapper layer to the F# library."

Do you have any sample code that serves as an example? It would be immensely helpful. Thanks.


Could you please give an example for someone not up-to-date on Haskell?


Abstract types have existential type. ML/Coq modules are just existentially-typed "package" data. Haskell type-classes are just another variation on the same concept, with only one instantiation of the type-class allowed at each internal type so the compiler can search for instances coherently.
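
A tiny Haskell sketch of the "existential package" view (the names are made up; the point is that the representation type s never escapes):

    {-# LANGUAGE ExistentialQuantification #-}

    -- An abstract "counter" packaged with its operations; clients can use
    -- the operations but can never learn what the hidden type s is.
    data CounterPkg = forall s. CounterPkg s (s -> s) (s -> Int)

    -- One possible instantiation, hiding Int as the representation.
    intCounter :: CounterPkg
    intCounter = CounterPkg (0 :: Int) (+ 1) id

    -- Client code is written entirely against the packaged interface.
    readTwice :: CounterPkg -> Int
    readTwice (CounterPkg z inc get) = get (inc (inc z))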

http://dl.acm.org/citation.cfm?id=45065&dl=ACM&coll=DL&CFID=...

Which reminds me, I should finish reading Tomer Ullman's PhD thesis to see if he managed to integrate existential types into his theory of causal-role concepts and theory formation.


I somehow wanted to say that. Interfaces are only the surface of an algebra. Having an iterator interface doesn't mean a lot, but when you study linearly recursive structures you somehow understand the car / cdr, next / hasNext relationship. Otherwise it looks very awkward and shallow.

That said, even without this, interfaces force you to think in terms of minimal assumptions, which is a very important property.
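
Something like this, maybe (a Haskell sketch; next/hasNext collapse into a single operation on the linearly recursive structure):

    -- The whole "iterator" interface for a list is really one operation:
    -- either there is a next element plus the rest, or there is nothing.
    uncons :: [a] -> Maybe (a, [a])
    uncons []       = Nothing
    uncons (x : xs) = Just (x, xs)

    -- Any linear traversal can be written against that single operation;
    -- this is the car / cdr (or next / hasNext) relationship made explicit.
    sumAll :: [Int] -> Int
    sumAll xs = case uncons xs of
      Nothing        -> 0
      Just (x, rest) -> x + sumAll rest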


Hi, I'm the author. I submitted this yesterday, but I didn't even get 1 upvote, so I'm glad to see it here today as I think it is a very important topic for discussion. Feedback from the submission on Reddit has suggested that this article is too long, which I now acknowledge. I'd be happy to hear any critiques you have.


Good article. Interfaces and modularity are core concepts, worthy of much attention. Especially questions like how to do functional decomposition, finding the right abstractions, and good interface design.

I'll chew on your statements about the success of Python. Though my first love was LISP, I'm now far more comfortable leaning on static typing and composition.

---

The best book on software design I've ever read was written by two economists.

Design Rules: The Power of Modularity

http://www.amazon.com/Design-Rules-Vol-Power-Modularity/dp/0...

This book didn't change how I program so much as changed how I think. Like the difference between making and criticizing art. Whereas SICP gave me new mental models, Design Rules gave me new philosophies. More like Design of Everyday Things did.


Thanks for the link. Let me repay you with one:

http://www.amazon.com/Design-Essays-Computer-Scientist/dp/02...


I read it and didn't find it to be too long, but the sections on copyright and patent seemed out of place. It was a pleasant read and brings up important concepts for consideration.

Reminded me of this https://www.youtube.com/watch?v=5tg1ONG18H8


> An interface is the intersection between the system and the environment.

> An implementation is the system minus the interface.

These definitions seem at odds with each other. The first sentence would imply that the interface is part of the implementation, but the second says that the implementation and the interface are exclusive. What do you feel you gain by defining things in this way, as opposed to saying something like "the interface is the portion of the implementation with which the environment interacts"?


Yes, this was something that bugged me as well. In the article, I touched on this by noting that it makes sense to also think of the "contract" definition of an interface. Meaning that the "contract" is really just a set of intentions or guarantees, so it doesn't really 'take up any space'. It would almost seem appropriate to use infinitesimals from calculus when trying to measure how much of the system is composed of the interface. A good interface should be clearly visible from the outside, but it should be as lean as possible to avoid overconstraining what the system can do.


Nice! I haven't read it yet, but just from the title it's made me think that interfaces are at the very least among the most important things I've learned in the last five years.

Dependency injection probably competes for the top spot (though interfaces work really well there too)

Anyways, looking forward to reading it


Why does the line "add_numbers 3 4 = 7" qualify as part of the interface for the Haskell function, while the interface for C/Python is taken to include only the name/arity/type signature of the function?
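
(For context, and assuming the article's example looks like this: in Haskell such a line is itself a legal defining equation, pattern-matching on the literals 3 and 4, so it can sit in the source as both an example and a partial specification. The type signature below is my own guess.)

    add_numbers :: Int -> Int -> Int
    add_numbers 3 4 = 7          -- special case spelled out literally
    add_numbers x y = x + y      -- general case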


Awesome write up.

While I agree that dynamically typed languages, with their loosely defined API requirements, are more difficult to scale, it's not difficult to add type checking and/or provide a well-structured public API where necessary.

I think statically typed languages go too far in the other direction. Their type and API definitions are too strict, leading to a ton of unnecessary effort (i.e. boilerplate), increased surface area for potential bugs, and overly restrictive limits that require 'creative' workarounds to effectively write code.

I think there's a 'happy medium' to be found where type checks are required for certain inputs and a clearly defined API can be established without the need for private/internal/public syntax artifacts.

I've been playing with this a bit in JS lately: using model definitions to specify structure and enforce validation, and defining facades with the ES6 module import/export syntax to define public APIs. Finer-grained control (i.e. private vs. internal) can be achieved using closures that provide internal interactions while hiding the private implementation details.


> I think statically typed languages go too far in the other direction. Their type and API definitions are too strict, leading to a ton of unnecessary effort (i.e. boilerplate), increased surface area for potential bugs, and overly restrictive limits that require 'creative' workarounds to effectively write code.

Given this description of statically typed languages, I'm guessing you haven't tried OCaml or Haskell.


Whilst I've found Hindley-Milner systems much nicer to use than, e.g., Java, they can still require boilerplate and "creative" workarounds which wouldn't be necessary in dynamically typed languages.

For example, I've recently been struggling with a Haskell library that makes heavy use of `TypeRep` values, which is fine when everything's done in a single process invocation, but very awkward when attempting to serialise/deserialise (e.g. to pass data between processes, or to suspend a computation and resume it later).


I was specifically referring to statically typed OOP.

You're right, I haven't tried OCaml or Haskell so I can't make a qualitative judgement on how easy/hard type coercion is in either.


The need for type coercion is a very good indicator of a bad design. Haskell just forces you to deal with it right now instead of never.


Hi Robert,

I'm not sure what you'd like to see discussed from the article. To me, it read like an overview of something I already knew.

It's one of those things where if you know it, you don't need to hear about it. If you don't know it, hearing about it won't make sense anyway.

I can see this being part of a course for beginners, but frankly, people will figure out what interfaces are for and what leads to problems by writing enough code and/or seeing how good libraries/frameworks tackle the problems they had difficulty with, and learning from that.

My 2 cents.


I suppose that is reassuring to hear. I expected that a few of the points might be controversial because I haven't heard others explicitly state them. I wasn't sure if they were just my opinion or something that everyone else thinks too.


Polymorphism is very powerful, because it lets old code call new code.

But abstraction layers have costs. (Explicitly declared) interfaces are a cost (time to introduce into the system, complexity, lines of code, impeded refactoring).

Therefore, don't use interfaces (i.e. introduce abstraction layers) until needed (cost is justified).

By the way, if you manage to come up with a layer of abstraction that doesn't have as much explicit cost, it's a cheaper layer of abstraction. So dynamic languages' "duck typing" feature allows interfaces to emerge gradually, without explicit code to introduce them. It's arguable, but IMO that's a better approach.

I disagree with the article's definition of a leaky interface: "A Leaky interface exists when the interface is prone to being ignored during any communication between the system and the environment."

A leaky interface is an interface that has non-obvious, non-declared behaviors, and the code has to rely on those behaviors to deliver value/functionality.

It can be something as simple as an inconsistent implementation that leaks an underlying implementation detail. It doesn't mean that the interface is prone to being ignored. In fact, it can be used quite a lot, just in slightly different ways.
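
To make that concrete, a hypothetical Haskell sketch (class and names invented): the declared interface says nothing about ordering, but one implementation happens to return keys sorted, and callers quietly start depending on it.

    -- The declared contract: insert keys, get keys back. Nothing about order.
    class KeyStore s where
      insertKey :: String -> s -> s
      allKeys   :: s -> [String]

    -- This implementation happens to keep keys sorted: an undeclared,
    -- non-obvious behaviour that leaks through the interface.
    newtype SortedStore = SortedStore [String]

    instance KeyStore SortedStore where
      insertKey k (SortedStore ks) = SortedStore (ins k ks)
        where
          ins x [] = [x]
          ins x (y : ys)
            | x <= y    = x : y : ys
            | otherwise = y : ins x ys
      allKeys (SortedStore ks) = ks

    -- A caller that is only correct because of the leaked ordering guarantee.
    firstAlphabetically :: KeyStore s => s -> Maybe String
    firstAlphabetically s = case allKeys s of
      []      -> Nothing
      (k : _) -> Just k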


The most important software engineering concept is indirection. Interfaces are an expression of this concept.

[p.s. edit: structural indirection, to be precise.]


This article would have helped tremendously back when I was struggling to wrap my head around the purpose of interfaces. Great visualizations and certainly a unique but also very logical way of looking at interfaces. Great article.


I think coupling and cohesion stand out as fundamental software design principles. (I wouldn't say it's "interfaces" as I can imagine interfaces with tight coupling or low cohesion.)


"Coupling" I understand, as sort of opposed to "modularity". What do you mean by "cohesion" in this context?


The ideal is low coupling and high cohesion. That's supposed to mean your system is composed of parts that can be understood separately. Low coupling means that the innards of each module are isolated from the others. High cohesion means that each module presents a clear and distinct purpose.

Thinking in terms of interfaces (in the general sense) helps towards this ideal. If an interface is cluttered with dozens of functions, it's a sign that it could be refactored into more cohesive units.
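
A toy Haskell sketch of that refactoring (invented names): the commented-out class mixes concerns, while the split ones each present one clear purpose.

    -- Low cohesion: storage, rendering and logging jammed into one interface.
    --
    --   class Widget w where
    --     store  :: String -> w -> w
    --     load   :: w -> Maybe String
    --     render :: w -> String
    --     logMsg :: String -> w -> IO ()
    --
    -- Higher cohesion: each interface has a single distinct purpose, so a
    -- type (and its callers) can depend on only what it actually needs.
    class Storable w where
      store :: String -> w -> w
      load  :: w -> Maybe String

    class Renderable w where
      render :: w -> String

    class Loggable w where
      logMsg :: String -> w -> IO ()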


> Low coupling means that the innards of each module are isolated from the others.

Isn't coupling just the complexity of the dependency graph? I would say depending on an interface is coupling as well. In my experience, complex static type systems encourage highly abstracted interfaces which are sometimes even more cancerous dependencies (that doesn't apply to really basic and practical things like Iterables which come with the language).

For this reason, IMHO an even better ideal than coding against interfaces is relying on concrete datatypes that are just a given, like structures and arrays of integers.


Maybe my understanding is idiosyncratic, but I see low coupling as most essentially meaning that the modules interact at small and well-defined boundaries, which gives you flexibility when you want to do things with the units like test them, replace them, etc.

Depending on an interface can be a kind of coupling, sure, but I think it depends. I think abstract interfaces, or at least statically typed data types, can be great for understandable systems...

In Haskell, there's work being done on allowing some kind of module signature, so that you can have several implementations of the same API be compatible with the same source code. I'm interested to see how that will play out.
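
From what I've read, the signatures look roughly like this (an .hsig file; the names are just illustrative). Any module that provides these declarations can be mixed in as the implementation:

    -- Str.hsig: an abstract interface that library code is compiled against
    signature Str where

    data Str
    empty  :: Str
    append :: Str -> Str -> Str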

You could see a Java-style interface as a specific global name given to a set of method signatures. Using that name incurs a dependency on the library that defines it. Sometimes it makes sense to redefine the interface in your own library, and then you can use adapters to make other implementations compatible—but this is all kind of boilerplate stuff that you wouldn't need if Java was more flexible with interfaces...


Thanks. I think I confused low coupling with loose coupling.

Where do I find information about the Haskell module thing?


I'm not totally clear about the concept either. :)

It's called Backpack.



That the code that is stuck together forms a single unified whole instead of representing a mish-mash of unrelated ideas.


Opinion: if interfaces are the most important software engineering concept, then the most important interface is the one used to control I/O. Different approaches. Fun to think about.


Programmer interfaces surely will look different from interfaces consumed by the general public.


As far as the software engineering takeaway goes: simply put, an interface allows two disparate pieces of code to work together. One compiled piece of code, for example, can interact with another piece of code that could be written and compiled years after the first.
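
A small Haskell sketch of that (invented names): the generic function below can be compiled long before the new type exists and still works with it, purely through the interface.

    -- "Old" code, written and compiled against only the interface.
    class Describable a where
      describe :: a -> String

    describeAll :: Describable a => [a] -> [String]
    describeAll = map describe

    -- "New" code, added years later in another module or package; the old
    -- describeAll works with it without ever having seen it.
    data Robot = Robot

    instance Describable Robot where
      describe _ = "a robot"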


Thanks for posting!



