Hacker News
Don't overuse interfaces (hovland.xyz)
118 points by kiyanwang on May 10, 2017 | 91 comments



Is it common to have an interface on everything? I would have thought you'd just have interfaces where you can have multiple parts fitting the same socket. That's how I do it, and I thought that was common and common sense?


I feel like I see this all the time at work. Honestly, only a couple of years into working as a software engineer, I'm already burnt out on people wildly over-engineering things, future-proofing, and seemingly feeling a need to make as many comments as possible on CRs to improve their craft / defend against the horrors of 'bad code'.


I'm with you. I think partly it comes from education — Java programming books (I can't speak to C#) concentrate on teaching ways to deal with complexity and evolution, but they don't teach you that it's okay to handle simple situations simply. There's an exaggerated fear of what happens if you need to change code that isn't already future-proofed. It's an unbalanced approach. There's no YAGNI, no appraisal of the cost in obfuscation and mental overhead, no discussion of how to accommodate change in code without preemptively engineering for it. It's just, here are all the horrible unexpected things that will happen to your code, and you should have all this design pattern apparatus in place to anticipate every possible change.

As a result, beginning programmers start their careers thinking that good code is filled with inheritance hierarchies and layers of indirection, and they think that getting better as a coder means getting better at adding more and more of those protective layers to ensure their code is prepared for every possible eventuality. Which is insane. It's like sending your kid to school dressed in a raincoat and rubber boots every day when you live in Arizona, instead of looking at the weather forecast (or just guessing) like the other parents. One day every year you'll be right and they'll be wrong, and if you can convince people that makes you smart, you have a promising career as an enterprise Java consultant.

I think it all goes back to the overstated fear of change. I think the fear of adapting straightforward code to new requirements must come from another time when code was harder to change, because nothing in my professional experience supports it. According to this fear, the only way to survive is to guess correctly at how your code will need to evolve in the future and preemptively design for it. All of the inheritance hierarchies and layers of indirection must already be in place before you discover that you need them, or else something very bad happens. Therefore the implementation costs and the overhead in maintenance and confusion are gladly paid. How does this make sense? I've never suffered greatly from code that was too simple, too straightforward, not built out enough. I've suffered many times from code built out in a slightly wrong direction that was the best possible guess at the time. The feeling of safety and prudence that people get from preemptively complexifying their code seems like delusion to me.


Another use case where you might just have one implementor is to break circular dependency chains. You can have your module foo that depends on some core, and a module foo-api that depends only on what is needed to represent your interfaces, and now a module bar can depend on the core and foo-api and make use of services provided by foo. Similarly foo could depend on bar-api and make use of bar services, no issues now...
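
A minimal sketch of that idea in Java, with hypothetical names (FooService standing in for the "foo-api" interface): bar compiles against foo-api only, so no module cycle forms even though foo and bar can use each other's services.

```java
// Lives in the "foo-api" module: only the interface, no implementation deps.
interface FooService {
    String greet(String name);
}

// Lives in the "foo" module, which may depend on heavy core libraries.
class Foo implements FooService {
    public String greet(String name) { return "hello " + name; }
}

// Lives in the "bar" module: depends on foo-api, never on foo itself.
class Bar {
    private final FooService foo;
    Bar(FooService foo) { this.foo = foo; }
    String run() { return foo.greet("bar"); }
}

public class ApiModuleDemo {
    public static void main(String[] args) {
        // Wiring happens in a top-level module that can see both sides.
        System.out.println(new Bar(new Foo()).run()); // hello bar
    }
}
```

The same shape works in reverse with a bar-api module, which is what breaks the cycle.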


And testing as well. But I think the main idea is the contrast between thinking of interfaces as an asset or as a cost. I think of interfaces as a cost you pay to get something better back, like testability or circular-dependency removal. But many developers and tech leads see interfaces as a good in and of themselves, an asset that pays dividends over the course of the project.


I generally fall into the latter group, but if the documentation on the interface says "See FooImpl for more details" then that's surely a red flag :p


I'm super interested in hearing what you think are the advantages to that approach.

The cons I see are there are now two places where documentation needs to be kept in sync.

And reduced readability because of abstraction. For instance, if I'm going through the MailChimp code base, it's easier to understand how a MailCampaign interacts with the surrounding code base than something like IClickable. If I see a MailCampaign in code, I instantly know what domain object it represents and have some idea about what logic to expect in it. With IClickable I have no idea.

I'm anxiously awaiting a thoughtful discussion of interfaces.


I'm not sure I understand your example; does MailCampaign implement IClickable? Surely all the places you're seeing IClickable are places where the important activity somehow involves generalized clicking?

I would say that the subset of "good in and of itself" from interfaces -- ignoring those other benefits like multiple-implementations and extendable/reusable/mockable code -- usually relate to separating the mental task of defining what you wanted to exist versus what you were able to achieve so far.

In some ways an interface is similar to test-driven development, since it allows developers to create a scaffold of requirements and goals (interface methods and comments) before getting bogged down in the implementation details, and iteratively cycle between those two viewpoints.

In contrast, classes without an interface are at a higher risk for following a sort of least-effort evolution, such that someday a developer answers: "Why does it work that way? I dunno, that's just how we got it working."

> The cons I see are there are now two places where documentation needs to be kept in sync.

Ideally the in-file documentation is different though: The interface says "this method will take an X and create a new Y based on it", whereas the implementation communicates "I fulfill the interface's requirements by <stuff that makes me interesting and unique>."


Perhaps you could contribute to this open source project? https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...


I had hoped for constructive or thoughtful replies, but yours is just trolling.


True, that was not very constructive, apologies. However, I was comparing your criticism of the previous poster's slight exaggeration of the name IClickable with the beauty that is FizzBuzzEnterprise.

Interfaces have many good uses in many situations, but existing by themselves, with one implementation, is not one of them. (Note: by themselves. Unit testing, multiple implementations, and so on are reasons beyond the interface in and of itself.) And you can very well do the mental separation while coding without adding an unnecessary interface (unless it has other uses, of course).


Indeed, this is the main example of the Dependency Inversion Principle.


> Is it common to have an interface on everything? I would have thought you'd just have interfaces where you can have multiple parts fitting the same socket. That's how I do it, and I thought that was common and common sense?

Combine zealotry with the internet and blog posts about how you "should always", and you can have yourself a cargo-cult programming practice of your own choosing.


I sometimes have multiple interfaces and one common implementor, when the same object needs to be passed to different places that need to know about different aspects of its functionality. I.e. one part fitting multiple sockets.

Having 1:1 mapping between an interface and a concrete class seems kind of pointless to me.
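
The "one part fitting multiple sockets" shape, sketched in Java with made-up names: each consumer declares only the aspect of the object it needs to know about.

```java
// Two narrow "sockets" over one implementor. Names are illustrative.
interface Drawable { String draw(); }
interface Saveable { String save(); }

// One concrete class fits both sockets.
class Document implements Drawable, Saveable {
    public String draw() { return "drawing"; }
    public String save() { return "saving"; }
}

public class MultiSocketDemo {
    // This caller knows nothing about saving...
    static String render(Drawable d) { return d.draw(); }
    // ...and this one knows nothing about drawing.
    static String persist(Saveable s) { return s.save(); }

    public static void main(String[] args) {
        Document doc = new Document();
        System.out.println(render(doc) + ", " + persist(doc)); // drawing, saving
    }
}
```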


I've known people who took the "interfaces are good" to the cargo-cult where they would have a POCO and make an interface for it with getters and setters. No functionality and basically only one possible implementation because it's assumed to be backed by a POCO everywhere. But they "coded to the interface" so it's all good.
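
A sketch of the anti-pattern being described, in Java terms (IPerson/Person are invented names): the interface merely mirrors the data holder's getters and setters, so only one sensible implementation can ever exist, and the extra layer buys nothing.

```java
// Anti-pattern illustration: an interface that just mirrors a dumb data holder.
interface IPerson {
    String getName();
    void setName(String name);
}

// The one and only plausible implementor; "coding to the interface"
// here adds a file and an indirection, not flexibility.
class Person implements IPerson {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class PocoInterfaceDemo {
    public static void main(String[] args) {
        IPerson p = new Person();
        p.setName("Ada");
        System.out.println(p.getName()); // Ada
    }
}
```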


In C#, it's just about the only way to unit test, aside from making methods abstract/virtual or using something like TypeMock (paid).

The author suggests using virtual, but if your intent is not to allow child classes to override the implementation, this is certainly not "pure".

Visual Studio & ReSharper makes the code generation (Refactor > Extract Interface) and maintenance upkeep trivial anyway.


You don't need to mock a POCO at all; it's just a collection of values with no functionality. I'm not part of the anti-mocking brigade, but for POCOs you should always just use the real thing.


POCOs and functionality are orthogonal to one another.

Plain Old CLR Object means no special inheritance chains or attributes, just a "plain old" class.

By way of example: this contrasts with old Entity Framework and other ORMs that made you inherit from a certain kind of class, breaking the object's original intent and hurting you when it came time to port to another framework, serialize, or maintain your own class hierarchy.

Even what DDDers would call a Value Object could and should still have relevant business logic attached. For testability and portability you don't want your classes married to any particular framework, but your objects are still responsible for hiding relevant information.


Unless you use a pattern like SimpleMock! http://deliberate-software.com/simplemock-unit-test-mocking/


If your code is a library that will be used by other people, using interfaces allows them to provide their own part for any socket even if your library only has one implementation.

It is often used by test code, for example, that wants to use a fake version of something the library provides.


> It is often used by test code, for example, that wants to use a fake version of something the library provides.

Which, by the way, is generally preferable to mocking.
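
A hand-rolled fake, sketched in Java with assumed names: unlike a mock configured per test, the fake is a small but real implementation of the interface, and tests drive it directly.

```java
// The interface the "library" exposes (names are invented for illustration).
interface Clock { long now(); }

// A fake: a genuine, if simplified, implementation built for tests.
class FakeClock implements Clock {
    private long t;
    FakeClock(long start) { t = start; }
    public long now() { return t; }
    void advance(long by) { t += by; } // tests control time explicitly
}

// Code under test, written against the interface.
class Stopwatch {
    private final Clock clock;
    private long started;
    Stopwatch(Clock clock) { this.clock = clock; }
    void start() { started = clock.now(); }
    long elapsed() { return clock.now() - started; }
}

public class FakeDemo {
    public static void main(String[] args) {
        FakeClock clock = new FakeClock(1000);
        Stopwatch sw = new Stopwatch(clock);
        sw.start();
        clock.advance(25);
        System.out.println(sw.elapsed()); // 25
    }
}
```

Because the fake has coherent behavior of its own, tests read as scenarios rather than as lists of stubbed expectations.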


It is something that people who hear about unit testing and discover dependency injection often do. Much testing advice is about testing small units of code, and with dependency injection you inject a mocked interface, which must be good, right?


I hardly think you can blame inexperience when people have been treating "program to an interface, not an implementation" as axiomatic for years.


I can blame inexperience for people not understanding that the "interface" in that sentence has nothing to do with the interface keyword in their programming language.


Unfortunately in C# you can only Mock Abstract/Virtual methods and Interfaces.

It's pretty annoying TBQH.


I agree fully from a purist's PoV: pure OO means I get to redefine anything I want when I inherit, a la Smalltalk.

In practice though I like to think that the re-use of my objects and class hierarchies is a critical part of their design. Just about anything I'm gonna have to override to make a simple mock will also be an issue for other implementers. So why not use the opportunity to abstract and refine the objects internal logic to make mocking the relevant functionality cleaner and more obvious? Create an internal "OnProcessComplete" method which can be overridden easily instead of mocking the whole "Process" command.
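
The "small internal seam" idea above might look roughly like this in Java (names invented): tests override one protected hook instead of mocking the whole process() call.

```java
// Sketch of a small overridable seam instead of mocking everything.
abstract class Processor {
    // The full workflow stays intact and untouched by tests.
    final String process(String input) {
        String result = "processed:" + input;
        onProcessComplete(result); // the narrow, overridable seam
        return result;
    }
    // Default is a no-op; subclasses (including test doubles) hook in here.
    protected void onProcessComplete(String result) { }
}

public class HookDemo {
    public static void main(String[] args) {
        final StringBuilder seen = new StringBuilder();
        Processor p = new Processor() {
            @Override protected void onProcessComplete(String r) { seen.append(r); }
        };
        p.process("x");
        System.out.println(seen); // processed:x
    }
}
```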

Over-mocking is a major code smell and productivity thief. I find that having to manually set it up keeps me honest about cost/benefit, while also looking at an important design aspect that otherwise could be ignored.


TypeMock Isolator[0] and JustMock[1] can mock nonvirtual methods and static methods. Fody Ionad[2] is a free tool that can replace static methods. The reason most mocking frameworks don't support this is that the implementation is complex[3].

[0] https://www.typemock.com/isolator

[1] http://www.telerik.com/products/mocking.aspx

[2] https://github.com/Fody/Ionad

[3] http://mattwarren.org/2014/08/14/how-to-mock-sealed-classes-...


I can't see anything changing until the OSS frameworks have similar support. For that matter, MS should take the lead for something that could provide across the board performance improvements on their ecosystem. They don't seem to care about tooling to improve their own platforms.


I don't know C#, but I know that in C++ I have been burned because I mocked a real class but didn't override some function, so my test ended up using a mix of the real class and the mocks, testing something other than what I intended. Often the tests pass, but there is a lurking bug.


SimpleMock is a pattern for unit test mocking that uses no interfaces or libraries! It's simple and works very well. http://deliberate-software.com/simplemock-unit-test-mocking/


Sometimes "where you can have multiple parts fitting the same socket" isn't quite as you initially imagined it.


It is common sense, but it's not as common as you think. I've had multiple tech leads ask me to wrap a class in an interface to "abstract" it.


It is not hard, in this field, to find people who take it as axiomatic that if X is a good idea, then only X all the time is the only sensible way to go.


I suppose this is the money quote:

> Please note that I’m not saying that interfaces are always bad. When they add value, they are useful. I’m only saying that interfaces that mirror one and only one class implementation is waste.

That's not really controversial or interesting.

"Don't write interfaces that have only one implementation" is something I've heard many times. The only well-accepted exception to that rule is interfaces that are expected to be implemented by, say, users of a library you are writing.


I don't find it controversial, and it isn't altogether interesting, but I would like it if it were better communicated. I've cleaned up code that dogmatically followed an interface-per-class policy.


It's quite controversial in the real world where "each service class has an interface" is official codified dogma.

OTOH in certain specific cases I see the value of the interface even for just one class, when I want to clearly separate the contract (which I'm bound to fulfill now and in the future) from the implementation-specific comments.

Let's say that in my little pet programming language I have just one List type. I want to clearly separate the contract (an indexed, ordered sequence) from the "incidental" implementation details (constant-cost random index access).
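
That contract/detail split could be sketched like this (Sequence/ArraySequence are hypothetical names): the interface documents only the promise, while the class documents the incidental costs.

```java
/** Contract: an indexed, ordered sequence. Nothing here promises O(1) access. */
interface Sequence<T> {
    T get(int index);
    int size();
}

/** Incidental detail: this one happens to be array-backed, so get() is O(1). */
class ArraySequence<T> implements Sequence<T> {
    private final T[] items;
    @SafeVarargs
    ArraySequence(T... items) { this.items = items; }
    public T get(int index) { return items[index]; }
    public int size() { return items.length; }
}

public class SequenceDemo {
    public static void main(String[] args) {
        Sequence<String> s = new ArraySequence<>("a", "b");
        System.out.println(s.get(1) + " " + s.size()); // b 2
    }
}
```

A linked-list implementation could honor the same contract tomorrow; only the incidental-cost comment would change.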


I have worked at shops where, without fail,

  EveryClass : IEveryClass


Because, of course, you might expand it later.

Sure, I'm adding this shitty console based logger, but since it's wrapped in ILogger I'll later come and write a FileBasedLogger, a DBLogger and a KafkaProducerLogger for when we go webscale.

Except, that never happens.


  //Names changed slightly to protect the wicked
  public interface IProvideBoolean {
    Boolean True { get; }
    Boolean False { get; }
  }

  public class BooleanProvider : IProvideBoolean {
    public Boolean True => true;
    public Boolean False => false;
  }

So.... SOLID.... If there's ever a third value for Boolean, I Will Be Ready.



Finally ready for trinary computing!


Then you go back and refactor the code to pull out the interface! I so hate interface abuse.


If you use names like Logger instead of ILogger, you don't even have to change all the names when you refactor to an interface! If you end up wanting a FileBasedLogger, you just rename the current implementation to StdoutLogger, create a Logger interface, create a FileBasedLogger, and change instantiation sites to build the instance you want.
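
The refactor described, sketched in Java with the same hypothetical logger names: the old class is renamed, the interface takes its name, and only instantiation sites change.

```java
// After the refactor: "Logger" is now the interface, keeping the old name,
// so every existing "Logger log = ..." declaration still compiles.
interface Logger { void log(String msg); }

// The original concrete class, renamed from Logger to StdoutLogger.
class StdoutLogger implements Logger {
    public void log(String msg) { System.out.println(msg); }
}

// The newcomer that motivated the refactor (file I/O sketched as a buffer).
class FileBasedLogger implements Logger {
    final StringBuilder file = new StringBuilder(); // stand-in for a real file
    public void log(String msg) { file.append(msg).append('\n'); }
}

public class LoggerDemo {
    public static void main(String[] args) {
        Logger log = new FileBasedLogger(); // only this line chose differently
        log.log("hello");
        System.out.println(((FileBasedLogger) log).file); // hello
    }
}
```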


> If you use names like Logger instead of ILogger, you don't even have to change all the names when you refactor to an interface!

But you want to, that's the point. You're making a drastic change to your code base, surely you want to inspect every place where that type is used.


Using IClass naming is considered best practice in the .Net world AFAIK. (I think Visual Studio and/or Resharper hints very clearly about this.)


> Using IClass naming is considered best practice in the .Net world AFAIK.

No, it's the opposite. IClass is the norm, for legacy reasons going all the way back to the early COM days (it all started with IUnknown[1]).

[1] https://msdn.microsoft.com/en-us/library/windows/desktop/ms6...


I know, I'm demonstrating one reason I think that's a bad best practice :) I recognize there's nothing most people can do about that, though!


There are some (rare) occasions when private members drag in dependencies that you don't want the user of your class to know about.

You can also do "pImpl", which is similar, if not worse, in terms of boilerplate, but doesn't allow DI.

Just don't use these techniques unless you can prove they're needed (e.g. private members drag in problematic dependencies, #include <some_volatile_header_file.h>).


The rule of three works out pretty well. If you don't have three of something then you don't really have a group you can attribute a set of behaviors to. If and when a third one does present itself, you usually have to rework things because you were very wrong about the way you organized things.


> That's not really controversial or interesting.

It is to me. So far it seems that everyone takes this for granted. I disagree: I have yet to see a problem created by the "each class has an interface" rule, and I have seen countless problems created by classes without one.

So: what problems are caused by interfaces with only one implementation?


In this context, I see interfaces as mostly a psychological exercise. In that sense, it can have some value.

Some people think nothing of adding just one more method to a class. Sure, it's unrelated, which they (may) realize on some level, but for the sake of expediency, it's going in the class. Some of those same people would pause and think a minute before putting the same method into an interface. They wouldn't feel it belongs there.

Sure, in an ideal world, everyone is 100% rational and 100% enlightened and they realize it's really the same thing. But in the real world, people are a bit irrational at times. People often feel that $99.99 is a lot less money than $100.00, for example.

The point is, having to include it in an interface can be an occasion for reflection on whether it really belongs there. It's not unlike how when you move, you go through your belongings, and that triggers a process of justifying to yourself whether you really still need that item. In theory, there is no reason why you can't do that at any other time. In practice, it serves as an occasion that encourages the process to actually happen.

That said, if you don't go through this process of examination, and you just create a Foo, think "I'm supposed to have an IFoo", and then mechanically copy everything over, you're not getting any value out of it.


The bigger point is that all programming is meta-programming in the sense that input traces different paths through your concrete statements based on context - which is nothing more than a fancy word for reduction over previous inputs (and it's useful to consider "program statements" to be inputs). A concrete input requires a concrete set of statements to process, selected out of the universe of possible statements. In the OOP style we have a specific way to organize those statements, which defines the "coordinate system" of our combinatorial problem.

Strongly-typed languages like Java or C# really obscure this simple truth, in my opinion. What the OP is talking about is one specific way that it can be obscured. Which is certainly true as far as it goes. But it's not going far enough!

The real problem is that OOP is totally under-constrained for end-user applications. (Although it's perfectly adequate, I think, for modeling formats and protocols).

The only really good way to do applications is as dynamic functions, because there is self-similarity between your program statements and the way you organize them, making your combinatorial problem far more straightforward. Plus, if you do it right, your function set can fit into on-die cache, and you know enough about your inputs that you can allocate a fixed, small memory space for whatever combination of statements is required for a given input (or gracefully error out when your assumptions about input are violated; ideally without even unloading your code).


I think you over-estimate the share of combinatorial problems in software.

There’re a lot of them indeed. But outside niches like compilation and text parsing, they are far from majority.

Some problems are bandwidth-bound, like Photoshop: if you have a 1GB image, you have to process the complete data set no matter the combinatorics.

Other problems are IO-bound, like databases: if the user wrote a query that needs to do a sequential scan over a table, you'll have to do that I/O.

For both classes of problems, OOP is just fine.

Another thing is, OOP is better at explicitly managing state. If you design an app your way, viewing your code as reduction over previous inputs, I expect you'll struggle implementing state-management functionality like undo/redo stacks, and saving/loading your stuff in those formats and protocols.


The combinatorial problem is universal. In OOP we organize our processes using objects. Classes (collections of methods, which are collections of statements) are runtime-invariant.

In the steady-state, your process handles input as a method invocation (the main(String[]) of a CLI program or the handleRequest(req) method of a servlet, for example). In both cases, the thread associated with the input travels through your code, going deeper, then shallower, deeper again (the depth is the stack frame count).

Each time we go deeper (each time we conditionally invoke a method) we are solving a combinatorial problem across our own code-base. We are effectively crafting, at runtime, a unique program (concrete list of statements) to deal with this input. We don't normally learn OOP this way, or think of it this way, but it really is what we are doing. But there are some benefits to thinking in this way.

For example, what is an interface? Like all indirection, it's like lubricant. It decouples one set of statements from another, and makes it easier to substitute (or slide) out the second set. And a factory? Well, it produces bundles of statements that conform to an interface. But since we are strongly-typed, the factory is only selecting from a finite set. (In fact, the factory pattern is a perfect example of how OOP practitioners are doing combinatorics all the time, but we don't know it, and we do it crudely.)
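
Read that way, a factory might be sketched like this (Handler and its implementations are invented names): a runtime selection of one concrete "bundle of statements" from a finite, strongly-typed set.

```java
// The interface: the "lubricant" decoupling callers from any one bundle.
interface Handler { String handle(String input); }

// Two concrete bundles of statements conforming to the interface.
class UpperHandler implements Handler {
    public String handle(String input) { return input.toUpperCase(); }
}
class ReverseHandler implements Handler {
    public String handle(String input) {
        return new StringBuilder(input).reverse().toString();
    }
}

public class FactoryDemo {
    // The factory: the combinatorial choice, made explicit over a finite set.
    static Handler forKind(String kind) {
        switch (kind) {
            case "upper":   return new UpperHandler();
            case "reverse": return new ReverseHandler();
            default: throw new IllegalArgumentException(kind);
        }
    }
    public static void main(String[] args) {
        System.out.println(forKind("upper").handle("abc")); // ABC
    }
}
```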


> In OOP we organize our processes using objects

We organize our state using objects.

Many programs have to deal with complex mutable state. The extreme cases are OS kernel state, or game state in an AAA videogame. For such programs, OOP is the only approach allowing people to reason about what’s going on inside these things.

If you don’t have complex mutable state, other approaches might become more practical. One example is compilers; their state is mostly immutable. Another is pure computations without state, especially symbolic computations, like what happens in Maple.

> the thread associated with the input travels through your code, going deeper, then shallower, deeper again

In many programs, input handling is a minor part of overall software complexity. Sometimes most of the code doesn't even run on the input thread.

> decouples one set of statements from another, and makes it easier to substitute (or slide) out the second set.

Interfaces abstract away arbitrary stuff (statements, data, hardware devices, etc.), providing a strongly-typed API to use that stuff, and hiding the complexity of the implementation. In some cases (COM interfaces), the statements you're selecting were written in another language, are implemented in another DLL loaded on demand, run in a different process that starts up on demand, or, for DCOM, even run on a different machine.

> since we are strongly-typed, the factory is only selecting from a finite set

In C#, some class factories take your strongly-typed interface, take other parameters you pass to the factory, and implement the interface at runtime. Because there is an infinite set of possible class-factory parameters, that makes the set of statements infinite as well. BTW, this approach is often used in .NET RPC libraries, like WCF.


If you opened your mind only a little further you would see what I am saying, friend. All call chains are input driven. Attach a debugger to anything you've made and stop the program: at the top is an input. The tick() methods in an AAA game are handling input, the input of a ticking clock. A long-running process, by definition, does not change state without input. "Process organization" means how the memory associated with the (operating system) process is divided, and that memory includes both (mutable, transient) object state and (immutable, invariant) program code. Your comments about code generation are pedantic, and I will ignore them, since only a very small fraction of code written in Java, C#, or any other high-level OOL does dynamic code generation, or even reflection (and for the very good reason that it breaks the invariants assured by the OOP paradigm).

Unfortunately, OOP doesn't allow people to reason about complex systems, primarily because call-chains are totally unconstrained. An object can contain references to any other object held by the process, and this reference list is dynamic at runtime. In the presence of instantiation indirection it's even worse, even the concrete type of those references is unknown except at runtime.

There are code smells, and then there are architecture smells. One big architecture smell is when the incidental complexity of a solution vastly outstrips the intrinsic complexity of the problem. I have never (and I've been doing this a very long time) seen an OOP code base less than 10x the complexity of the business problem it was built to solve. It's sad to me that instead of asking why, they keep blaming the programmer/architect and inventing new complexity to mitigate old complexity.


> only a very small fraction of code written in Java, C#, or any other high-level OOL does dynamic code generation, or even reflection

The overall fraction might be small; however, the vast majority of software still contains a substantial amount of code that does. Most serialization and DB access libraries do. For C#, this includes all framework-provided serializers (XML, data contract, etc.), the ubiquitous Json.NET, and my own one: https://github.com/Const-me/EsentSerialize

> OOP doesn't allow people to reason about complex systems

Respectfully disagree, 'coz I have never seen complex systems that aren't OOP. Even most compilers are OOP, take a look: https://github.com/llvm-mirror/llvm https://github.com/gcc-mirror/gcc/tree/master/gcc https://github.com/dotnet/roslyn

> An object can contain references to any other object held by the process

This is true for very small class of objects, like GUI controls with loosely typed event handlers, or objects that hold lambdas/functions that may capture anything at all (but the latter is not strictly OOP).

An object can only reference other objects of types that can be stored in its properties/fields. For many objects especially simple lower-level ones, this means objects can reference no other objects at all.

> have never seen an OOP code base less than 10x the complexity of the business problem it was built to solve

My guess is, you’re systematically underestimating complexity of the business problems solved by the code you look at.


Most of my career has been spent building stateless web applications over relational databases in Java. About half enterprise, and half consumer facing internet. Sometimes using Hibernate, sometimes using JDBI, sometimes leaning heavily on SPs. Sometimes generating HTML, and sometimes exposing REST-like API. The various runtime accoutrements of this architecture, like caches, queues, health-checks, proxies, linux, etc are quite familiar to me. My time as a freelancer gave me particularly broad appreciation of real world solutions.

In every case, codebase complexity was dominated by administrative overhead. My passing experience with apps written in C#, ASP, and PHP tells me that it is the same story there, too.

This is my subjective view. One way you might understand my perspective (and perhaps even share it, to some degree) is to consider the act of adding a column to a database table, and how a typical OOP/RDB webapp codebase must change to incorporate a new DB value. (And this is only the tip of the iceberg, since usually adding a feature requires a view, an index or two, plus column and table changes.)


I’m a full-time developer since 2000. Over those 17 years, I only briefly did web development, Java or relational databases. Mostly I’ve been doing system programming, game development, embedded development, robotics. When I did networking, it was usually lower level than REST. I have minimal experience with proxies and linux, I’m mostly a Windows developer. Lately I’m working on Windows desktop software, the area is CAD/CAM.

As you see, almost nothing common with your background.

For the kind of software I work on, if the code is 10x more complex than necessary, the software might fail the performance targets. Complex code is also prohibitively harder to maintain and debug. And esp. in embedded, quality requirements are just higher: web requests come and go, but embedded software needs to work reliably for days without restarts.

I think the reason why I'm generally OK with other people's OOP-style code I see in the projects I work on, is that the code is just better than what's in an average web app. Especially if the web app ain't Google or Facebook; AFAIK for those guys, software quality translates to substantial savings on electric bills.


Do you have any examples of an application which is written this way (as dynamic functions?)


Most compilers :)


>Do you use Dependency Injection? Of course you do, you’re a responsible programmer, and you care about clean, maintainable, SOLID code with low coupling.

If you can overcome this cringeworthy intro about DI, the rest of the article is not as bad as this.


Thanks, because of your comment I'll take a second look.


Didn't think that was cringy; it's kinda funny and hits the nail on the head.


Because "Dependency Injection" is "of course" recognized as necessary for "responsible programmers", who "care about clean, maintainable, SOLID code with low coupling"?


I find interfaces are both over and under used. As others have noted, they are overused for things like mocking where we should be trying to create better alternatives (like TypeMock), but they are under used at the method level. I always come across code where the best solution is a function like:

void DoFoo(IInterface bar)

or

T DoFoo(T bar) where T : IInterface

But what I find instead is the same functionality packed into an elaborate and delicate inheritance hierarchy. It's like an object implementing multiple interfaces is still alien to most developers.
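
The C# signatures above have a direct Java analogue (names here are invented): accept the capability as a method parameter, with a generic bound when the concrete type must be preserved, instead of building an inheritance hierarchy.

```java
// A small capability interface; no common base class required.
interface Priced { double price(); }

class Book implements Priced {
    public double price() { return 10.0; }
}

public class MethodLevelDemo {
    // Plain interface parameter: the "void DoFoo(IInterface bar)" shape.
    static double withTax(Priced item) { return item.price() * 1.2; }

    // Generic bound preserving the concrete type: "T DoFoo(T bar) where T : IInterface".
    static <T extends Priced> T cheapest(T a, T b) {
        return a.price() <= b.price() ? a : b;
    }

    public static void main(String[] args) {
        Book b = cheapest(new Book(), new Book()); // still a Book, not just a Priced
        System.out.println(withTax(b));
    }
}
```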


Rust (and Go) present an interesting challenge to people who have cargo-culted object-orientation. In Rust, data is dumb, and traits (aka 'interfaces') provide meaning. So you can go polymorphic with 'trait objects' or monomorphic with trait constraints on generics (i.e don't have to pay for virtual dispatch when it matters). The cool thing is that both languages allow interfaces to be _added_ in consuming code, they're not part of the definition as in Java.


Yeah I think rust did a really good job of avoiding the worst OO features but keeping the most useful parts. Combining data and logic was the most damaging idea the programming world has had. Once they're separated the code is easier to read, easier to test and easier modify.


Exactly. For example, in C#, it is still customary to use List<T> everywhere, although what the developer really needs is just IEnumerable<T> (which also signals that the method will not mutate the collection).


> so you follow ReSharper’s advice and do this (replace Queue<T> with IEnumerable<T>)

Actually, ReSharper's advice is sometimes a disservice. Most collection classes have their own methods, optimized for that container.

It once bit me in the arse really hard: after following ReSharper's advice I got around a 3x slowdown of the algorithm overall (that IEnumerable was in a tight loop, which was slowed down really hard).


That surprises me.

If you were calling a method that didn't exist in IEnumerable, why would resharper suggest changing the type to IEnumerable?


Arguably, you might want to specify the concrete type in some cases. For instance, both List<T> and HashSet<T> implement ICollection<T>.Contains, so code written to the interface will work with either. On the other hand, the HashSet implementation runs in constant time and the List implementation runs in linear time. Depending on what you're doing, that might make things a lot slower. So you could certainly make an argument that you should just force the caller to pass in a HashSet in the first place, especially if the method is protected or private.
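The same trade-off exists in Java, where ArrayList and HashSet both implement Collection.contains. A minimal sketch (class and method names are made up):

```java
import java.util.*;

public class ContainsCost {
    // Written to the interface: works with any Collection,
    // but the cost of contains() depends on the concrete type.
    static boolean hasId(Collection<Integer> ids, int id) {
        return ids.contains(id); // O(1) for HashSet, O(n) for ArrayList
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        Set<Integer> set = new HashSet<>();
        for (int i = 0; i < 100_000; i++) { list.add(i); set.add(i); }
        // Same call site, very different cost per lookup.
        System.out.println(hasId(list, 99_999)); // true, after a linear scan
        System.out.println(hasId(set, 99_999));  // true, via one hash probe
    }
}
```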


The method exists in both IEnumerable<T> and the concrete collection (I don't remember the exact class, but it was something simple like List`1).

The difference is that when you use IEnumerable<T>, the method (provided as an extension on IEnumerable) has to call GetEnumerator() and allocate the enumerator on the heap. But the concrete collection has its own implementation, which works faster than the generic one.


Yeah, I believe you are talking about enumerating over the interface versus the concrete class?

(For others) Most of the .NET Framework's generic collections implement the enumerator as a struct, which doesn't get allocated on the heap and subsequently garbage collected, but the explicit interface implementations use a class, which does.


Exactly.

The actual suggestions are based on Microsoft Guidelines :-

https://msdn.microsoft.com/en-us/library/dn169389%28v=vs.110...


I suspect it's more to do with the optimisation of the method. For example, calling List.Count in a tight loop is much faster than IEnumerable.Count().

I don't use ReSharper, so I don't know if it watches out for this, but no doubt there are plenty of other examples.
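Java has the same distinction: List.size() is O(1), while all an Iterable promises is iteration, so counting one is O(n). A small sketch (names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class CountCost {
    // All an Iterable guarantees is iteration, so counting means walking it.
    static int count(Iterable<?> items) {
        int n = 0;
        for (Object ignored : items) n++;
        return n;
    }

    public static void main(String[] args) {
        List<Integer> xs = new ArrayList<>(List.of(1, 2, 3));
        System.out.println(xs.size());  // O(1) field read: 3
        System.out.println(count(xs));  // O(n) walk: 3
    }
}
```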


This guy's at Dependency Injection revelations Stage 2.

In Stage 1 he realised that a bad understanding of DI has resulted in a ton of pointless Interfaces.

At Stage 2 he realised he didn't need those interfaces to unit test.

Wait until he reaches Stage 3 and realises that DI itself is fairly pointless, and that bad coders are still going to make a massively coupled mess even if you force them to use DI.

For example, one of our controllers has 20-odd "services" injected, with each of those services then having a bunch of other services/DALs/etc. injected, creating massively coupled dependencies. So much for DI cleaning up your code.

DI code is no less coupled than if you just coded it cleanly in the first place.

The only good thing I've found it for in C# is that it passes the EF context around per request without you having to manage that, so all your services are using the same context.
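To illustrate the point that injection only moves construction out of the class rather than removing the dependency edges, here's a minimal Java sketch with hypothetical service names:

```java
// The controller depends on these services either way; DI only moves
// the construction out of the class, it doesn't remove the edges.
class BillingService {}
class EmailService {}

class OrderController {
    private final BillingService billing;
    private final EmailService email;

    // Constructor injection: a container (or a test, or a human)
    // supplies the graph, but the graph itself is unchanged.
    OrderController(BillingService billing, EmailService email) {
        this.billing = billing;
        this.email = email;
    }
}

public class Wiring {
    public static void main(String[] args) {
        // "Poor man's DI": the same dependency graph a container would
        // build, with the coupling made visible rather than removed.
        OrderController c =
            new OrderController(new BillingService(), new EmailService());
        System.out.println(c != null); // true
    }
}
```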


> bad coders are still going to make a massively coupled mess even if you force them to use DI.

DI isn't meant to be a silver bullet, and bad coders are going to make a mess whatever technique you try to impose on them.


Often I've seen interfaces overused for unit testing in C# and Java. To improve the situation, my current team came up with SimpleMock[1], a pattern that allows for behavior replacement without interfaces or a complex black-box mocking library. It works in any language with lambdas and mutable fields: so just about anything.

[1] http://deliberate-software.com/simplemock-unit-test-mocking/
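A sketch of that idea in Java, as I understand the description (behavior held in a mutable lambda-typed field; the names are hypothetical, not the actual SimpleMock code):

```java
import java.util.function.Function;

// Replaceable behavior lives in a mutable function-typed field,
// so a test can swap it without an interface or a mocking library.
class UserService {
    // Default implementation; in real code this might hit a database.
    Function<Integer, String> fetchName = id -> "user-" + id;

    String greet(int id) {
        return "Hello, " + fetchName.apply(id);
    }
}

public class SimpleMockDemo {
    public static void main(String[] args) {
        UserService svc = new UserService();
        // "Mock" by overwriting the field with a canned lambda.
        svc.fetchName = id -> "stub";
        System.out.println(svc.greet(7)); // Hello, stub
    }
}
```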


The source of his post seems to be the use of interfaces in DDD and DI. I also use a lot of interfaces there, together with generic interfaces and base classes.

The author forgot that all interfaces can be DI'd when the class is an implementation of IService (or IRepository), so you don't have to register all the classes separately.

This should make it more obvious: http://simpleinjector.readthedocs.io/en/latest/advanced.html...

And

http://stackoverflow.com/questions/15581505/generic-abstract...

For the rest, I agree to use interfaces when appropriate. But I'm not sure this post adds a lot of value on HN. Perhaps it's because I'm a C# developer and this should be basic knowledge in that domain (e.g. books and Stack Overflow). Can someone elaborate on that?


Disadvantages of this approach are that everything has to be virtual and you can't use decorators.


I try to use interfaces that have single methods only.

The problem with multiple methods is that you can abstract away the function calls, but you can't ensure the caller invokes the methods in the correct order (if that matters, and it often does).
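In Java, a single-method interface is a functional interface, so any caller (or test) can satisfy it with a lambda, and there's no call-ordering protocol to get wrong. A minimal sketch:

```java
// A single abstract method: nothing to call out of order.
@FunctionalInterface
interface Notifier {
    void notifyUser(String message);
}

public class SingleMethod {
    static void onError(Notifier n) {
        n.notifyUser("something failed");
    }

    public static void main(String[] args) {
        // Any lambda fits; no hidden state, no ordering to enforce.
        onError(msg -> System.out.println("LOG: " + msg));
    }
}
```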


The caller should be able to call the methods in any order, it sounds like you have some horrible stateful code.


You are wrong. Most interfaces in non-functional languages are stateful.

Take the JDBC-interface: It's certainly not possible to call setTransactionIsolation(), executeQuery(), commit(), rollback() etc. in any order.

Or list: add, remove, set, clear etc.

It's still an interface.


That is a stateful interface I'd consider bad. That's mostly because of the limitations of java though. Opening a transaction with a function argument and a commit/rollback return would have been much better.
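Something like this in Java (using a stand-in for the connection to stay self-contained, not real JDBC): the transaction protocol lives in one higher-order function, so callers can't commit or roll back out of order:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A stand-in for a connection; real code would wrap java.sql.Connection.
class FakeTx {
    final List<String> ops = new ArrayList<>();
    void execute(String sql) { ops.add(sql); }
}

public class TxScope {
    // The begin/commit/rollback protocol lives in ONE place; callers
    // physically cannot commit before beginning or forget to roll back.
    static List<String> inTransaction(Consumer<FakeTx> body) {
        FakeTx tx = new FakeTx();
        tx.ops.add("BEGIN");
        try {
            body.accept(tx);
            tx.ops.add("COMMIT");
        } catch (RuntimeException e) {
            tx.ops.add("ROLLBACK");
        }
        return tx.ops;
    }

    public static void main(String[] args) {
        List<String> ops = inTransaction(tx -> tx.execute("INSERT ..."));
        System.out.println(ops); // [BEGIN, INSERT ..., COMMIT]
    }
}
```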


What about my list example?


Making methods virtual so that you can mock them prevents the compiler from inlining them, which can impact performance. I'm not sure if that's advisable.


This is good advice in general, but not always practical.

I'm writing an Android app using Kotlin (where everything is final by default).

I use mocking extensively when I test, and I had to choose between making an interface for [almost] everything, or explicitly opening everything. I went for the former.

I know mockito has tools now to deal with final classes, but the last time I tried it, it wasn't working so well (at all) for Android.


C# is in a similar position, classes aren't final by default but their methods are.


Yes, I agree with that. Low coupling and high cohesion is the golden rule.


This is fairly dangerous advice and amounts to: "Hey folks, I think it's a great idea to friction-weld parts of your code together."

It's not YAGNI or duplication. If you can't understand why we program to an interface and not an implementation, then you very seriously need to go look at your codebase and try to extend a core bit of functionality; one that usually comes to mind is Authentication/Entitlement.



