One powerful feature I don't see used very often is simulating typeclasses via constraints on generic types:
public static IEnumerable<CalculatedTax> CalculateTaxes<T>(this T t)
where T : IOrderItems, IAddress {}
This adds an extension method to any type that implements those interfaces. Using this technique I have been able to make my interfaces much smaller and have generic functions which apply to many different types.
Here's what it might look like in Haskell:
calculateTaxes :: (OrderItems a, Address a) => a -> [CalculatedTax]
To be fair, this doesn't really simulate type classes; it just lets you specify in a method signature that a particular type implements more than one interface. To simulate type classes, you would need the ability to declare statically that an existing type, potentially one you have no control over, implements an interface, and then have the static extension methods apply to that type automatically. This is, in effect, what Scala's implicits do, and it's what allows them to simulate type classes.
Indeed, it really only simulates the function type constraint aspect of Haskell typeclasses. Fortunately most of the time I have direct access to the implementation, and can throw an interface on a type when I need it. This is effectively the expression problem. You can always solve it by using a wrapper class, although it's a bit inconvenient.
You just have to include a using statement for the namespace it's in; it's an extension method. Look them up if you're unsure.
I didn't know you could do that with them; it's pretty cool. Although it's kinda what abstract classes and inheritance are for, but you can only inherit from one class in C#.
Here's a full example in some production code. I ended up doing it this way to simplify some crazy legacy code. There were a bunch of different classes that provide order items and extended prices in different ways. With this setup as long as those types implement the necessary interfaces, they get those extension methods for free.
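The production example itself didn't make it into this thread, but a self-contained sketch of the setup being described might look like the following. The interface members, the WebOrder class, and the tax rates are all made up for illustration; only the interface names, CalculateTaxes, and CalculatedTax come from the snippet above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical minimal versions of the interfaces from the snippet above.
public interface IOrderItems { IEnumerable<decimal> ItemPrices { get; } }
public interface IAddress { string State { get; } }

public record CalculatedTax(decimal Amount);

public static class TaxExtensions
{
    // Available on any T implementing BOTH interfaces -- the "typeclass"-style
    // constraint from the original snippet.
    public static IEnumerable<CalculatedTax> CalculateTaxes<T>(this T t)
        where T : IOrderItems, IAddress
    {
        // Made-up flat rates, purely for illustration.
        var rate = t.State == "CA" ? 0.0725m : 0.05m;
        return t.ItemPrices.Select(p => new CalculatedTax(p * rate));
    }
}

// An unrelated class picks up CalculateTaxes() for free just by
// implementing both interfaces.
public class WebOrder : IOrderItems, IAddress
{
    public IEnumerable<decimal> ItemPrices { get; init; } = new List<decimal>();
    public string State { get; init; } = "";
}
```

Calling it is then just `new WebOrder { ItemPrices = new[] { 100m }, State = "CA" }.CalculateTaxes()`; any other class implementing both interfaces gets the same method with no further wiring.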
I would hope most C# developers are aware of most of these features and how they facilitate the writing of good C# code, but this article seems like a good jumping off point for gaining greater understanding of the nuances of C#.
A better title for this article would have been "Less Understood Features of C#," as things like nullable types, boxing and unboxing using as/is, readonly variables, Nullable<T>, type inference and most of the rest of the article's content should not be hidden from anyone who intends to have more than a passing knowledge of the language. Most of the things mentioned in the post do benefit from the additional explanation so they can be understood well, though.
One of the features not listed here is a little-talked-about (or perhaps just overlooked by me) feature of 4.0: covariance and contravariance.[1] One of the annoying things I've always had to deal with on collection classes is knowing that an instance implements some interface (because it is where T : ISomeInterface) but still having to cast the collection in methods. 4.0 fixes that with the <out T> directive. I still haven't completely wrapped my head around the concepts but they seem pretty powerful to me.
It took me a while to understand it because it requires you to think differently about methods & types. Basically, a method can have inputs (parameters) and outputs (return values). So if you have an interface:
public interface IEnumerable<T>
{
IEnumerator<T> GetEnumerator();
}
The T type is only used in output positions across all the members. Therefore it can be declared with the `out` directive (i.e. public interface IEnumerable<out T>). Covariance & contravariance don't work with classes, only with interfaces & delegates. Consider this delegate:
public delegate void Action<in T>(T obj);
This can be declared as contravariant (the `in` directive) because it only uses T in its inputs. So an Action<object> can be passed into a method that takes an Action<string>: anything that can handle any object can certainly handle a string.
Also, note that this doesn't work with a delegate like T Func<T>(T obj), because there T is both an input and an output. However, it does work with TReturn Func<in T, out TReturn>(T obj), because neither T nor TReturn appears as both input and output.
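A small sketch tying the above together (Animal/Cat and the helper method are made-up names; the variance conversions themselves are standard C# 4.0 behaviour):

```csharp
using System;
using System.Collections.Generic;

class Animal { public virtual string Name => "animal"; }
class Cat : Animal { public override string Name => "cat"; }

static class VarianceDemo
{
    public static string Run()
    {
        // Covariance (out T): an IEnumerable<Cat> is usable as an IEnumerable<Animal>.
        IEnumerable<Cat> cats = new List<Cat> { new Cat() };
        IEnumerable<Animal> animals = cats; // legal only because of <out T>

        // Contravariance (in T): an Action<Animal> is usable as an Action<Cat>.
        string seen = "";
        Action<Animal> handleAnyAnimal = a => seen = a.Name;
        Action<Cat> handleCat = handleAnyAnimal; // legal only because of <in T>
        handleCat(new Cat());

        // Func<in T, out TResult>: contravariant input, covariant output.
        Func<Animal, Cat> f = a => new Cat();
        Func<Cat, Animal> g = f; // both conversions at once
        return seen + "/" + g(new Cat()).Name; // → "cat/cat"
    }
}
```

Note that none of these assignments compile against the pre-4.0 invariant versions of these types; the `in`/`out` annotations are what make them legal.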
Does this make more sense? It's a hard concept to fully understand.
Hidden features? Really? yield, the ?? operator, the @ prefix for verbatim strings, nullable types, etc. are considered "hidden" / advanced features?
I don't want to sound snobbish, but at least 90% of that list I consider basic C# knowledge. If you don't know how to make an enumerable with yield, or insert some debug code with DEBUG, or guard a resource with using(), what exactly do you know?
I agree that 'hidden' isn't exactly an accurate description, but that is a pretty good list of keywords that commonly get reinvented by less experienced C#-ers. It would be more accurately described as a list of underused idioms, but the title doesn't really detract from its value.
It's unlikely you always want to handle an exception where you happen to be doing file I/O. You still want to close the file and dispose of the handle but you might want to handle any exceptions higher up in client code. Or you might want a file I/O exception to fly high and cause your program to exit.
The finally block is always executed, then the exception is propagated. So
try { dobadthing(); }
finally { cleanup(); }
calls dobadthing(), which throws an exception; then cleanup() is called in case some recovery can be done, then the exception is thrown as if there were no try block.
Something higher up can catch it, if need be. This way you can write a function which is supposed to throw exceptions in cases of failure, but not leave resources hanging around. An example would be executing arbitrary SQL; you might want to do:
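The original snippet didn't survive, but here's a minimal sketch of what it might have looked like. FakeConnection is a made-up stand-in for a real DB connection so the example is self-contained; the point is only the try/finally shape:

```csharp
using System;

// Stand-in for a real DB connection, so the cleanup behaviour is visible.
class FakeConnection
{
    public bool IsOpen { get; private set; }
    public void Open() => IsOpen = true;
    public void Close() => IsOpen = false;
    public void Execute(string sql)
    {
        if (sql.Contains("BAD")) throw new InvalidOperationException("syntax error");
    }
}

static class SqlRunner
{
    public static void ExecuteSql(FakeConnection conn, string sql)
    {
        conn.Open();
        try
        {
            conn.Execute(sql); // may throw on bad SQL
        }
        finally
        {
            conn.Close(); // always runs, so no dangling open connection
        }
    }
}
```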
And the effect here is that you have a function which still throws if your SQL is bad, but doesn't have the side-effect of leaving you with an open connection dangling about. That feels like a safer function to call.
try/finally isn't something I find myself using very often at all, I think because the IDisposable pattern is actually translated into it - the code above is (nearly?) equivalent to:
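That snippet is also missing, but presumably it was the `using` form of the same function. A self-contained sketch (Connection is a hypothetical IDisposable stand-in for a real DB connection):

```csharp
using System;

// Hypothetical IDisposable connection; Dispose() plays the role of Close().
class Connection : IDisposable
{
    public bool IsOpen { get; private set; } = true;
    public void Execute(string sql)
    {
        if (sql.Contains("BAD")) throw new InvalidOperationException("syntax error");
    }
    public void Dispose() => IsOpen = false;
}

static class SqlRunner
{
    public static void ExecuteSql(Connection conn, string sql)
    {
        // The compiler expands this into try { ... } finally { conn.Dispose(); },
        // so the connection is released even if Execute throws.
        using (conn)
        {
            conn.Execute(sql);
        }
    }
}
```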
so I guess IDisposable is the preferred way in MS' codebase.
(All code is typed late at night into a textbox and totally untested. ;) )
PS: @politician confirms that `using` is translated to try/finally (http://news.ycombinator.com/item?id=3440969) so that's why you don't see it much -- `using` is syntactic sugar for it.
IMO the big benefit of 'using' statements is that they incorporate RAII-type semantics (the idea of scoped resource lifetimes) into C#, which normally doesn't have them, via objects that implement the IDisposable interface. It can be cleaner and less error-prone to have the language perform cleanup for you automatically than to rely on the user to do it himself.
As I argued elsewhere[1], exploiting the way C# handles `foreach` is a better way of achieving RAII, IMHO; it forces the client code to do the right thing rather than hoping it will remember to use the `using` statement (or to call Dispose() manually).
Cost of not disposing isn't really high enough to be worth such code-obfuscating contortions.
The garbage collector will eventually clean such objects up anyway (via their finalizers). The main real cost of letting the collector handle it is that the object ends up surviving for an extra GC cycle. It's sloppy practice, and it means memory consumption will be higher if your program rapidly generates a lot of short-lived IDisposables, but otherwise the difference will probably be trivial.
I'm not saying to avoid IDisposable (just the opposite in fact).
The `foreach` statement and the `using` statement just so happen to desugar to similar code when it comes to handling IDisposable. The C# compiler will guarantee that Dispose() is called on the iterator and that code in the `finally` block will execute... even if you exit the loop early due to an exception or a break/goto. Just like the `using` statement.
The benefit is that it's even more of a pit-of-success feature than the `using` statement because you can't possibly forget to call Dispose().
It's also only slightly more work for the library author and slightly less work for the client but the client is going to use the code many more times than the author will write it.
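A minimal sketch of the pattern being described (Lines.Read is a made-up name): the resource is acquired inside an iterator method and released in its finally block, so the Dispose() call that `foreach` inserts on the enumerator guarantees cleanup, even on break or exception:

```csharp
using System.Collections.Generic;
using System.IO;

static class Lines
{
    // The client can only consume this via foreach (or manual enumeration);
    // either way, disposing the iterator runs the finally block below,
    // so the reader can never be left open.
    public static IEnumerable<string> Read(string path)
    {
        var reader = new StreamReader(path);
        try
        {
            string? line;
            while ((line = reader.ReadLine()) != null)
                yield return line;
        }
        finally
        {
            reader.Dispose();
        }
    }
}
```

Client code is then just `foreach (var line in Lines.Read("log.txt")) { ... }`; there's no separate handle to forget to dispose, which is what makes it more of a pit-of-success design than `using`.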
It's not an error, it's an exception. Also, the whole point of the original comment was to not catch the exception, but to have it propagate up the stack, while providing safe cleanup anyway.
The stack trace is generated at the moment the exception is created.
Unnecessarily catching exceptions you don't intend to deal with is a habit that is much more dangerous to stack traces, because it's easy to miss the huge semantic distinction between `throw;` and `throw ex;`.
If an exception isn't caught, the caller will receive an opportunity to catch it, and so on until it hits the top of the call stack and the default handling occurs (a nasty, unhelpful error message is displayed).
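The distinction in question is presumably between rethrowing with `throw;` and `throw ex;`; a minimal sketch (Thrower and its methods are made-up names):

```csharp
using System;

static class Thrower
{
    static void Inner() => throw new InvalidOperationException("boom");

    public static void RethrowPreservingTrace()
    {
        try { Inner(); }
        catch (InvalidOperationException)
        {
            // `throw;` rethrows the original exception object;
            // the stack trace still points into Inner().
            throw;
        }
    }

    public static void RethrowResettingTrace()
    {
        try { Inner(); }
        catch (InvalidOperationException ex)
        {
            // `throw ex;` rethrows *from here*, so the trace is reset
            // at this line and the Inner() frame is lost.
            throw ex;
        }
    }
}
```

Both versions compile and both propagate the same exception type, which is exactly why the difference is easy to miss until you're staring at a trace that stops at the catch block.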
DebuggerStepThrough is really useful, but only use it on code you are 100% sure is correct; otherwise, you're in for a difficult and frustrating debug session 6 months later.