C# 6: First reactions (msmvps.com)
117 points by yulaow on April 5, 2014 | 65 comments



I'm not sure about this direction. It looks like a rather large syntax explosion to cover a bunch of corner cases, and C# doesn't want to become Perl. That's my initial gut feeling, but perhaps that will change.

Does anyone know what the dollar syntax is? Similarly, the new dictionary initialiser syntax? Finally, what is private protected?

Edit: these are from https://roslyn.codeplex.com/wikipage?title=Language%20Featur...


Given C# Dictionary access as Dict["key"], the equivalent using dollar syntax is Dict.$key.

C# Dictionary initialization is currently something like this:

    var d = new Dictionary<string, string>
    {
      { "key", "value" },
      { "but this one", "has spaces" }
    };
The new syntaxes appear to be these:

    var d = new Dictionary<string, string>
    {
      $key = "value",
      ["but this one"] = "has spaces"
    };
(Also, with C# 6's new abilities around constructor type inference, the type parameters may now be superfluous. I'm not certain, though.)

The access modifier "private protected" appears to mean "only derived types that are also in the same assembly."
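For illustration, a rough sketch of what that combination would permit (the type and member names here are made up):

    // Assembly A
    public class Widget
    {
        private protected int Counter;          // the proposed modifier
    }

    public class LocalWidget : Widget           // derived AND same assembly
    {
        public void Bump() { Counter++; }       // OK
    }

    // Assembly B
    // public class RemoteWidget : Widget       // derived, but different assembly
    // {
    //     public void Bump() { Counter++; }    // error: Counter not accessible
    // }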


What's the value of using symbols instead of string literals to reference dictionary keys?

It replaces a 'magic string' with a 'magic variable name', which is worse than the normal solution for magic strings, which is to move them to a constant.

If I have code in one place that says:

   dict["key"] = "value";
and code elsewhere that says:

   var value = dict["key"];
I can make that code better today by adding

  const string KEY = "key";
and then using dict[KEY] in both places. I can change the key in one place, and if I misspell the constant name in either location then the compiler tells me I have a problem.

With the new syntax, I can now write

   dict.$key = "value";
and

   var value = dict.$key;
but if I go ahead and change one of these:

   dict.$newkey = "value";
I'm not going to get a compiler warning when I try to read $key (am I?). Unless there's some smart tooling support, where's the win here?

Key symbols aren't becoming a new type - I don't get to write

   var key = $key;
   var big_key = ["key with spaces"];
(so far as I'm aware - though I actually quite like the idea of a datatype that represents this kind of non-manipulable string identifier)

I guess the new .$ syntax combines with the null-propagating ?. operator, so you could have nested dictionaries and access them with:

dict?.$subdict?.$subsubdict

which looks... horrible. And suffers from the fact that there's no null-propagating equivalent index access operator, ?[], so I can only use this trick on keys that can be written as C# symbols.

I really feel like I'm completely missing what the use cases are for this capability...


I'm thinking it is the first step to a data type like Ruby's symbols. For the case of spaces I could see the underscore (_) getting used.

    var bigKey = $key_with_spaces;


But if they were adding symbol support, it would have to work for accessing actual members, not for just faking string indexed members as if they were members. Symbol support would let you do things like this:

  public double GreatestExtent {
    get {
      Symbol directionOfGreatestExtent = $X;
      if (this.Y > this.X) directionOfGreatestExtent = $Y;
      return this.#directionOfGreatestExtent;
    }
  }
(I'm making up some sort of syntax on the spur of the moment here)

Clearly this can be written easily enough without symbol support, but you could argue that the intent of the logic is clearer here than it is in some alternative approaches.

But the thing is that the kind of symbol they've given us isn't a syntax where $foo is a literal which means 'a symbol which accesses the member foo on the target', but instead a syntax where $foo means 'a symbol which accesses the indexer on the target which takes a single string as an argument and passes the string "foo" to it' - just like ["foo"] does.

If I have my co-ordinates stored in a dictionary, I don't need this symbol support to be able to write that kind of logic:

  public double GreatestExtent {
    get {
      string directionOfGreatestExtent = "X";
      if (this["Y"] > this["X"]) directionOfGreatestExtent = "Y";
      return this[directionOfGreatestExtent];
    }
  }
In fact, it isn't even possible to really make use of the new $key support in that scenario because, as I said, they're not providing any mechanism to store the $key value and reuse it; you're forced to mix string literals and these key symbols:

  public double GreatestExtent {
    get {
      string directionOfGreatestExtent = "X";
      if (this.$Y > this.$X) directionOfGreatestExtent = "Y";
      return this[directionOfGreatestExtent];
    }
  }
I don't see how being able to make that change to this code is a step forward...


Good explanation, thanks.

I don't understand the advantage of the dollar syntax though. One character shorter, and makes it look more object-y, obscuring the perf difference.


(Two characters shorter.)

At least in Python, I would love to have syntax like that. (I would use :: instead of .$, but maybe :: is already used in C#.)

Maybe it's just because I work with JSON a fair amount, but I frequently have an it-might-as-well-be-an-object that happens to be stored in a dict. And in theory, this means I need to do foo['bar']['baz'], which is hard to read and unpleasant to type. In practice, I often use a class which overloads attributes as being dict keys, so I can use foo.bar.baz, but this has its own problems. foo::bar::baz would be great.


If you have an 'it might as well be an object' in C#, it's trivial to change it to an object with anonymous types, and then blam, you've got all the advantages of statically typed objects.
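For example (names invented), the compiler then checks the member names and keeps the static types for you:

    // Dictionary version: keys are just strings, so typos compile fine.
    var person = new Dictionary<string, object>
    {
        { "Name", "Ada" },
        { "Age", 36 }
    };

    // Anonymous type version: member names are checked at compile time
    // and each value keeps its static type.
    var typed = new { Name = "Ada", Age = 36 };
    Console.WriteLine(typed.Name);   // misspelling Name here won't compile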

I, as the next programmer, would hate you if you started trying to use C# as a dynamic language by abusing dictionaries; you'd be losing all the advantages of C# while still suffering the disadvantages.


My gut reaction is that I'm not sold. With HTTP requests for example, I want to say `if 'id' in params`, whereas `if hasattr(params, 'id')` feels unnatural. But I also know that I'm accessing `params['id']` at programming time, so it would be great to shorten that to `params::id`.

(And this ignores converting to JSON. The library I use is designed to be passed dicts. You can teach it to accept arbitrary objects, but that's kind of a hack.)


I can see the use case for JSON, but hardly for anything else. What's wrong with just using a class here? Seems very similar to how it would work in JavaScript.


Any time you have a dict with stringy keys that you know when writing the program. HTML attribute lists, database records, http request parameters, config files.


I feel the same way. Too many corner cases, and not many that I deal with very often.

So far null propagation seems like the most useful feature for cleaning up cluttered code I'm used to dealing with.

Auto-property initializers are nice, but the syntax looks a bit ugly and I don't see them making that big of a difference in readability. I would rather see something that tackles the ugly-but-necessary construct I see the most: lazy-loading getters. Generic memoization of methods and properties would be even more useful. And more useful still would be something that could be used for both memoization and transparent caching:

  [Cache(For = 1000 /*milliseconds*/)]
  public GetSomethingFromDB(){ ... }
That would be tough to implement well, though, unless they added Python-style decorator attributes, which would give us memoization, caching and a myriad of other things.
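For context, the kind of plumbing that attribute would replace looks roughly like this today (just a sketch - Memoizer and LoadFromDb are invented names; needs System and System.Collections.Concurrent):

    static class Memoizer
    {
        // Wraps a function so a cached result is reused until it is older than ttl.
        public static Func<TIn, TOut> WithCache<TIn, TOut>(Func<TIn, TOut> fn, TimeSpan ttl)
        {
            var cache = new ConcurrentDictionary<TIn, Tuple<DateTime, TOut>>();
            return input =>
            {
                Tuple<DateTime, TOut> hit;
                if (cache.TryGetValue(input, out hit) && DateTime.UtcNow - hit.Item1 < ttl)
                    return hit.Item2;

                var fresh = fn(input);
                cache[input] = Tuple.Create(DateTime.UtcNow, fresh);
                return fresh;
            };
        }
    }

    // Usage:
    var cachedLookup = Memoizer.WithCache<int, string>(LoadFromDb, TimeSpan.FromSeconds(1));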


You might find these two links interesting: http://stackoverflow.com/questions/3180685/how-can-i-make-me... https://bitbucket.org/Yrlec/funccache

The first one is an answer I wrote on SO and the other is a library I wrote a couple of years ago. Both are different solutions to the caching problem you describe.


To answer this: the nice thing about Roslyn that isn't explained in this blog post is that it's also an API you can extend with plugins. Your project can contain them, and they will have access to the AST at compile time, altering the compiled logic.

So in your example you could create one to detect your attribute and inject custom logic instead. You can unleash the full power of AOP at the compilation level.


I'll probably try that, but I'm worried about maintainability of such solutions.


Lazy<T> is quite useful for lazy-loading/memoization.


Yes, but cleaner syntax for it would be nice. Right now it looks quite messy:

  private Lazy<string> _someVariable = new Lazy<string>(SomeClass.IOnlyWantToCallYouOnce);
  public string SomeVariable {
    get { return _someVariable.Value; }
  }
Imagine the same things with a long lambda inside.


... and writing your own Lazy<T> with time- or otherwise-limited cache validity is not a problem, also.


I think it's a consequence of the fact that programming in the large is complex: we all like our little features that reduce boilerplate while allowing for higher abstractions.

In languages that allow for full macros, many of those features can be implemented as libraries instead, which creates its own problem: everyone ends up with their own little DSL.

Anyway, looking at the history of computing, all mainstream languages that started out simple, as a movement against some sort of complexity, ended up becoming more complex release after release.


I looked at this earlier - on the one hand, I like that they're directly tackling known pain-points in the language. The declaration expression "out var x" thing is, imho, great. The TryParse syntax has always been a problem because you have to declare the variable the line before, which means no type inferencing and whatnot. Also, good for casting as they show.
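A minimal before/after of the TryParse case, as the proposal reads (input and Use() are placeholders):

    // Today: declare first, call second, and no type inference on 'parsed'.
    int parsed;
    if (int.TryParse(input, out parsed))
    {
        Use(parsed);
    }

    // With declaration expressions, the declaration moves inline:
    if (int.TryParse(input, out var value))
    {
        Use(value);
    }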

Using static is great - finally we've liberated the verbs in the kingdom of nouns.
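Something along these lines, if I've read the proposal right (the preview writes it as a plain using of the type; it ended up shipping as "using static"):

    using static System.Math;    // C# 6 spelling as shipped

    class Point
    {
        public double X, Y;

        // Sqrt, Atan2 and PI resolve without the Math. prefix.
        public double Dist() { return Sqrt(X * X + Y * Y); }
        public double AngleDegrees() { return Atan2(Y, X) * 180.0 / PI; }
    }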

However, some of them have really ugly syntax. I know they wanted to get rid of the boilerplate problem of declaring and setting into a read-only member, but the primary constructor syntax is monstrous - declaring local variables inside of your constructor? Really? I feel like just having a "readonly set" access modifier on properties (both auto and explicit) would have been sufficient.

The new dictionary accessor seems silly.

Exception filters seem just as pointless in C# as they did in VB (nesting an "if/else throw" into a catch clause wasn't too hard).

The crazy null ?. operator I'm unsure about... that one I'll have to use. I've run into the problem enough times to see why they did it, though.


Exception filters aren't pointless. They preserve the stack trace, meaning the debugger will break at the original throw and not at the "throw;" in your catch. (I'm on mobile so I don't have a code sample, sorry; I hope you get what I mean.)


Throw in a catch block with no exception argument preserves stack-trace.

That is

catch(Exception ex) {throw;}

Rethrows without meddling with ex.


> Throw in a catch block with no exception argument preserves stack-trace.

No it doesn't. If you then let that exception fall out as unhandled, the stack trace will be centered on the second throw, not on the original.


This case is confusing and a lot of people miss the subtle difference.

In one case you write:

try { Throws(); } catch(Exception ex) { DoHandling(); throw ex; } // BAD!

This one does lose the initial site of the first exception, and throws it as a new exception at this location, centering the stack trace on this line.

This is what you want to do:

try { Throws(); } catch { DoHandling(); throw; }

Notice you do not name, or capture the exception. This causes your handling to execute, but crucially it re-throws the original exception untouched, with the original stack trace, centering on the original exception thrown location.

So you can easily lose that information if you re-throw naively. Good code tools should yell at you for doing this most of the time. Note in some cases you want to add information, so you can throw a new exception with new information, but set the InnerException property to the original causing exception.

See http://msdn.microsoft.com/en-us/library/system.exception.inn...
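In other words, something like this (ConfigLoadException and the other names are invented for the example):

    try
    {
        LoadConfig(path);
    }
    catch (IOException ex)
    {
        // Add context, but keep the original exception - and its stack
        // trace - attached as InnerException.
        throw new ConfigLoadException("Could not read config: " + path, ex);
    }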


This case is confusing and both you and the documentation are wrong. :)

You're confusing the Exception.StackTrace with the actual CLR stack trace. The minute you catch an exception in the CLR the accurate trace is destroyed. This is pathological for crash dumps, where it's imperative that you have an accurate trace for debugging.

If you don't believe me, go into VS and do the following.

1. Throw an exception.

2. Make sure that exception is not listed in exceptions to break on.

3. Catch and rethrow that exception using "throw;".

4. Let that exception filter out of the program unhandled.

5. Run in debugger.

Notice where your stack trace is centered on -- the "throw;" call.


Well huh...

I appreciate the further information. I did try this myself, and the difference is clearer now. The exception object had the StackTrace with the right info, but the debugger highlights the throw; line.

If the Exception.StackTrace has all the juicy details, why on earth not break there?

Yes, I was certainly thinking about the information available in the exception object itself. It's odd that it's set up this way, but I get that it makes sense semantically: if other handler code has run, you need to trace things back yourself, probably with a debugging step-through session, or turn on 'first chance exceptions' to get the most 'on the ground' information.

To break immediately on an exception before handlers are invoked: http://msdn.microsoft.com/en-us/library/d14azbfh.aspx

In the spirit of learning, thanks for your contribution!


Yup, as I said, the problem is crash dumps. If this only affected debugging it would be a minor annoyance, but it destroys information in client crash dumps which get delivered by Watson, effectively making dumps useless.

Exception filters run before the stack is unwound, so if you crash your process in an exception filter the stack is preserved.
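For anyone who hasn't seen the syntax, a filter looks roughly like this (the preview builds wrote the filter with "if"; it ended up shipping as "when" - the exception type and Handle() are placeholders):

    try
    {
        DoWork();
    }
    catch (MyException ex) when (ex.Code == 42)
    {
        Handle(ex);
    }
    // The filter expression runs *before* the stack is unwound, so if it
    // returns false the exception keeps propagating with the original throw
    // site intact - which is exactly what you want in a crash dump.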


... then fix the "throw;" call, don't bolt on a language feature to work around the fact that "throw;" mucks with the stack trace when it's not supposed to do that.


CLR semantics are frozen as far as C# is concerned.


I thought that was called try...finally


Yes, as noted, it's a three part structure:

try { MayThrowException(); } catch(MyExceptionType ex) { HandleIfException(ex); } finally { AlwaysRunCleanupRegardless(); }


No, finally is always executed. The catch handler only executes when there's an exception.


I think it depends on what you think "preserves" means. See here [1]:

    If the exception was rethrown in a method that is
    different than the method where it was originally
    thrown, the stack trace contains both the location
    in the method where the exception was originally
    thrown, and the location in the method where the
    exception was rethrown
Putting aside same method/different method, the original stack trace is preserved, but StackTrace is augmented with the location the exception was rethrown.

There's also a code analysis warning for misuses of throw that uses "preserve" in its title [2].

It says in your profile that you work on Roslyn and I imagine you're familiar with how the exception syntax works with the CLR, so maybe you know something I don't know.

[1] http://msdn.microsoft.com/en-us/library/system.exception.sta...

[2] http://msdn.microsoft.com/en-us/library/ms182363.aspx


> It says in your profile that you work on Roslyn and I imagine you're familiar with how the exception syntax works with the CLR, so maybe you know something I don't know.

I do, actually. :)

See https://news.ycombinator.com/item?id=7542701.


Seems like 6.0 is all about reducing boilerplate. Those default constructors and the null checking will probably save me hours of typing, since ctors that do nothing but set stuff, and null checks on properties all the way down, are such common code patterns.
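For what it's worth, the two features in question look roughly like this as proposed (primary constructors are still only a proposal, so treat this as a sketch; Customer and Address are invented):

    // Primary constructor + getter-only auto-properties with initializers:
    public class Customer(string name, Address address)
    {
        public string Name { get; } = name;
        public Address Address { get; } = address;
    }

    // Null propagation: the whole chain yields null if any link is null.
    var city = customer?.Address?.City;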


Hejlsberg is continuing the tradition from Delphi that "less is less." Golang, stay disciplined and focused, my friend.


Looks good, but I don't know if I'll be able to keep these fresh enough in my head to use them without looking them up. Trying to keep up with the explosion of features in the curly-brace languages is getting really hard. Moving between low-level firmware (C), desktop apps (C++/C#) and Android (Java), something has to give; my code will probably continue to just look really 2005-ish.

I really like what the monadic null checking does, but aesthetically it looks pretty ugly right now.


Thanks for the feedback, Jon!

FYI, we have discussion threads on http://roslyn.codeplex.com.


This is perhaps a bit trollish but hopefully someone will get some enjoyment out of it.

http://tomasp.net/blog/2014/csharp-6-released/


Great read actually and definitely on topic - thanks for posting :)


I'm not as familiar with C# as I am with VB.NET, but this (the "before" scenario) strikes me as a little odd (incorrect, or not the whole story, maybe?):

Readonly properties require less syntax.

Before

private readonly int x;

public int X { get { return x; } }

The only scenarios where I require readonly properties are where I set or otherwise calculate a private variable at runtime, and then enforce readonly to the public getter.

In the example above, it looks like the private variable is readonly, and must therefore be initialized with a value that cannot be changed. Am I reading this correctly?


Yes; they can only be written to by a constructor.
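For comparison, the C# 6 "after" version collapses both lines into a getter-only auto-property, assignable only from the constructor (or an initializer) - Thing is just a placeholder name:

    public class Thing
    {
        public int X { get; }

        public Thing(int x)
        {
            X = x;    // allowed here; read-only everywhere else
        }
    }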


I think declaration expressions are a mistake, and proper support for tuples (and pattern matching) should have been put in instead. It feels like an unbalanced expression, with results on both the left and the right:

    var success = int.TryParse(s, out var x);
I'm a bit 'meh' about primary constructors, auto-property initializers and getter-only auto-properties. They seem a bit messy and incomplete. Personally, I don't use properties anymore and just use readonly fields with constructor initialisation (which is enough to capture the property-setting logic). What I would like to have seen in this area is 'readonly' classes, with mechanisms for cloning objects using named parameters for partial updates:

    readonly class ReadOnlyClass(public int X, public int Y, public int Z)
    {
    }
Usage:

    ReadOnlyClass obj = new ReadOnlyClass(1,2,3);
    ReadOnlyClass newObj = obj.X = 10;

Instead of the current system for creating immutable classes in C#, which becomes quite unwieldy and error-prone as the number of fields grows.

    class ReadOnlyClass
    {
        public readonly int X;
        public readonly int Y;
        public readonly int Z;

        public ReadOnlyClass(int x, int y, int z)
        {
            X = x;
            Y = y;
            Z = z;
        }

        public ReadOnlyClass SetX(int x)
        {
            return new ReadOnlyClass(x, Y, Z);
        }

        public ReadOnlyClass SetY(int y)
        {
            return new ReadOnlyClass(X, y, Z);
        }

        public ReadOnlyClass SetZ(int z)
        {
            return new ReadOnlyClass(X, Y, z);
        }
    } 
Usage:

    ReadOnlyClass obj = new ReadOnlyClass(1,2,3);
    ReadOnlyClass newObj = obj.SetX(10);
I think 'using' static members will be a huge win for creating functions which appear to be part of the language. For example I use the following to do type inference on lambda expressions:

    static class lamb
    {
        public static Func<T> da<T>(Func<T> fn)
        {
            return fn;
        }
    }

    var fn = lamb.da( () => 123 );
I've tried to make it look like part of the language by making it all lowercase, but it's still a little odd. Being able to 'using' a static class would be perfect for this kind of stuff. To be used sparingly, obviously.
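With the proposed feature (shipped as "using static"), that helper could be pulled into scope directly - assuming lamb is visible where the using goes:

    using static lamb;           // brings lamb's static members into scope

    var fn = da(() => 123);      // now reads almost like a keyword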

Expression bodied members, yes please! So pretty...

    public double Dist => Sqrt(X * X + Y * Y);
Typecase and guarded cases would be lovely too. Basically anything that enables expression-based coding in C#. It would be nice if the LINQ query keywords were extended too: count, take, skip, tolist. It's slightly less pretty having to wrap the expression in brackets to call .ToList().


I would love to see pattern matching and at least some kind of tuple support. I don't know if it's ever going to happen in C# though...

The `using` static is actually really good news. I have seen people who seem to be really mad about it, but sometimes it would be nice to have, for example:

    // Something like Scala's 'Predef'
    public static class Id<T>
    {
        public static Func<T, T> Identity { get { return x => x; } }
    }
and then do:

   using Id;

   var grouped = list.GroupBy(Identity);
Considering you can already open modules in F# (which are compiled down to static classes), it seems that feature found its way to C#. So maybe there is still hope for pattern matching and tuple support :)

An equivalent of F#'s record types would also be awesome, which is what you sketched as a 'readonly class':

    type Person = { FirstName : string; LastName : string }
    let single = { FirstName = "Jane"; LastName = "Doe" }
    let married = { single with LastName = "Smith" }


Agreed on all counts. Not sure what the C# team's appetite is for another classification mechanism (record types), but it could be seen as an opportunity to fix some of the problems of the past (mutable types, nullable object references).

Perhaps C#'s record type could enforce immutability on not just the record but anything it's assigned to (maybe by using 'let'). That would allow the old mutable way to co-exist with language enforced immutability. It doesn't seem like it would clutter up or modify any existing code either, so if you don't like it, you never need to see it.

Not sure how that would work when assigning to a class's member property/field. Perhaps you opt out of immutability (of the reference) with a 'mutable' keyword:

    class AClass
    {
        public mutable ARecordType Value;
    }
Functional languages also 'extend' the behaviour of a type by defining functions that work with the type. But with an object-oriented language like C# you'd still expect the functionality to come with the type I think. So I suspect it would have to look something like this:

    type Vector
    {
        float X;
        float Y;
        float Z;

        float Length => (float)Math.Sqrt(X*X+Y*Y+Z*Z);

        Vector Add( Vector value ) => new Vector { X+value.X,Y+value.Y,Z+value.Z };
    }
So it turns the entire class into one big constructor (with everything public by default). All non-expression fields should be provided when you instantiate the object:

    let obj = new Vector { X=1, Y=2, Z=3 };
I'd be very happy with that. Maybe I'll fork the compiler and have a go... if I can just find the time.... gah


Expression bodied properties: the syntax feels slightly off for me because I can't really read the => the way I do in expression lambdas, as 'goes to' - it lacks the argument list. I can see why the syntax is as it is, there's no really good alternative, but it feels wrong.

There's no clear 'functional programming' benefit to the property syntax, either. Even though the syntax implies that Dist is logically some kind of a Func<double>, I still won't be able to write Func<double> f = Dist, because Dist is of type double.

I'm also concerned about the idea (only 'planned' for now, I believe) for the same syntax for methods which take arguments, though. That feels like it risks pushing developers into the pit of failure by encouraging them to write public methods that contain no parameter validation logic, just evaluation of a return value. It also means that the instant you want to add more complexity to a method you're forced to completely rewrite it in the conventional curly brace syntax.

Then there's the halfway world of using the syntax for a method which takes no arguments:

  public double Dist() => Sqrt(X * X + Y * Y);
where now I can write Func<double> f = Dist...


Although I 100% agree with you on the expression based properties, the argument against that would be that it doesn't follow the norms for the language as a whole. First class functions are definitely an afterthought for C#, so making this a Func<double> would go against that I reckon:

public double Dist => Sqrt(X * X + Y * Y);

> I'm also concerned about the idea (only 'planned' for now, I believe) for the same syntax for methods which take arguments, though.

Where's this? I must have missed it. Or could you show what it would look like?


Well note that the feature is described as "expression-bodied members", not "expression-bodied readonly properties". In Mads's "Future of C#" presentation where most of these features were first floated, he described using the syntax for declaring methods as well as properties:

   public Point Move(int dx, int dy) => new Point(X + dx, Y + dy);
Which is logically consistent with the property syntax, but as I say, I have my reservations about it.


> Personaly I don't use properties anymore and just use readonly fields with constructor initialisation (which is enough to capture the property setting logic)

Yes it's enough if the property setting logic is only executed once during the lifetime of the object. So, it only works if they're supposed to be immutable.

Plus, public fields cannot be used for data binding.

Plus, you can't debug or log whenever a field value is read.

Also, you can end up having to convert it into a property, and this involves various compatibility risks.

Meanwhile, there is no plausible scenario where you realize you must convert a property into a field.

What do you gain instead, with your design choice?


> Yes it's enough if the property setting logic is only executed once during the lifetime of the object. So, it only works if they're supposed to be immutable.

They are. Dig out some of the talks by Rich Hickey on immutability; he puts it pretty concisely, better than I ever could. But primarily it's about capturing snapshots of your world at a particular point in time so that you have a consistent view of the world. So all objects should be immutable.

http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hic...

> Plus, public fields cannot be used for data binding.

JsonConvert.SerializeObject()

JsonConvert.DeserializeObject()

That's all I need, or a .Select(). I don't use WinForms or any of the standard ASP.NET databinding stuff; it's ugly as sin. Even if I did, isn't there a LinqDataSource or something these days? Not sure if that would do the job, but I'd assume it would work with the results of a LINQ expression. Anyway, I don't really need that, and being able to reason about my code is more important.
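For what it's worth, Json.NET copes fine with readonly-field types in my experience; it serialises public fields and matches constructor parameters by name on the way back in (Coord is an invented example):

    public class Coord
    {
        public readonly int X;
        public readonly int Y;

        public Coord(int x, int y) { X = x; Y = y; }
    }

    var json = JsonConvert.SerializeObject(new Coord(1, 2));   // {"X":1,"Y":2}
    var back = JsonConvert.DeserializeObject<Coord>(json);     // uses the ctor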

> Plus, you can't debug nor log whenever a field value is read.

Never caused me a problem; there's always the code that reads from it. Stick your breakpoint there. Also, when you use immutable structures throughout, your bugs tend not to be in the 'values' (i.e. the immutable structures).

I tend to be breakpointing in the ctor if anything to check the setter logic or input values. Once the object is set, that's it. No need to keep checking.

> Also, you can end up having to convert it into a property, and this involves various compatibility risks.

Fields and properties are incompatible? How?

> What do you gain instead, with your design choice?

* I can pass an object to another method and know it's not going to change it without my permission.

* I don't have to analyse the inner workings of a class to find out how the state is going to be changed when I call a method on it.

* I can pass the same object to two threads running in parallel, knowing that there won't be problems with the two threads writing to the same object.

* I can easily write transactional code and 'rollback' when it fails.

* I can write pure functions which makes it easier to test my code.

It's much easier to reason about a data structure if you know it won't change, basically. There are huge advantages to that. Personally I find it makes my code more reliable and easier to compose and manage long term.

I agree it's not all sweetness and light using immutability with C#. There's more boilerplate than necessary and there's very little in the language to help the process.

Personally I think it's worth it, YMMV.


> So all objects should be immutable.

YMMV as you say, but I think that your needs are very specific if that's your base principle. Objects in C# are mutable by design, so you're kind of working against the language here. It sounds like your approach is closer to functional programming (but again, I don't know the context)

> I don't use WinForms or any of the standard ASP.NET databinding stuff

Nor WPF, Silverlight... long story short, hardly any popular .NET framework (nothing wrong with that, it just goes to show that your case is not mainstream).

> Fields an properties are incompatible? How?

See http://csharpindepth.com/articles/chapter8/propertiesmatter.... - "Compatibility issues".

> Never caused a problem, there's always the code that reads from it. Stick your breakpoint there

As long as it's only one or two places and as many breakpoints, sure :) life's beautiful then.

> I can pass an object to another method and know it's not going to change it without my permission.

Achievable with properties (with private setter), so I wouldn't count it as a benefit.

I see where you're coming from, but to me you are pointing out the advantages of immutability, not of choosing fields over properties. I think you are not wrong in preferring immutability, of course - I know it has its perks - but you are wrong in assuming that one is synonymous with the other. You can implement immutability with properties just as well (and you can still use a readonly field as the backing field).


> YMMV as you say, but I think that your needs are very specific if that's your base principle.

Not really. This is for a very large web-based medical practice-management system and tertiary services. It mostly sits on top of stripped-back ASP.NET, but yes it's a framework I developed myself because 9 years ago when this project started ASP.NET was awful.

So yeah, whilst it's non-standard, it's not rocket science, or some bizarre parallel C# universe. I still need to get data from the server to the client in an efficient way, I still have controllers, server pages, database CRUD, etc.

> Objects in C# are mutable by design

Objects are mutable by default, but there also exists a keyword called readonly which has been there since v1.0 of C#, so I would argue that it's just as much of a language feature. It not being the default doesn't make it any less of a valid approach. One of the most used library types is immutable: String.

> It sounds like your approach is closer to functional programming (but again, I don't know the context)

I try to follow that route, yes. Clearly it's not always possible because of the limitations of the language or the .NET framework, and that's fine, I'm not going to be dogmatic over it. I think however that immutability where possible helps code quality overall.

> Fields and properties are incompatible? How?

> See http://csharpindepth.com/articles/chapter8/propertiesmatter..... - "Compatibility issues".

That's very interesting, but none of those are really issues for me, and I'd argue they'd be minor issues for most people too:

> You lose binary compatibility

We always compile from source

> You can use a field for ref parameters, whereas you can't (in C# at least) use a property in the same way.

Passing fields by ref is nasty; I wouldn't want to see anyone on my team doing it, and obviously passing a readonly field by ref is pretty pointless anyway. So, non-issue for me.

> Involving a mutable structs

Don't use mutable structs.

> You lose reflection compatibility

Call GetFields instead of GetProperties. Non-issue internally. I guess if a third-party lib didn't support fields then it might be a problem; so far it hasn't been.

> Achievable with properties (with private setter), so I wouldn't count it as a benefit.

No it isn't, because the method you called could call a method on the object you passed, affecting the underlying state of the object. If you work on small code-bases and you understand the implications of every function you call then great, but the added peace of mind knowing that this structure you passed to a method isn't going to be changed (now or by a programmer in the future) is very powerful.

It also makes method signatures tell you explicitly the behaviour of the code within, which makes it easier to look at the surface of a library and know what it does, i.e.:

    MutableState state = new MutableState(...);
    MutableLib.DoSomething(state);
    // ... any code run after DoSomething is now flying-blind, has state changed?

    public class MutableLib
    {
        public static void DoSomething(MutableState state)
        {
            // What happens here?  Does MutableState get changed?  How can you know when the 
            // method returns 'void' and takes a mutable object?  You have to look at the code
            // to know for sure.
        }
    }
Then the immutable version. Notice how the signature to DoSomething is explicit in its purpose.

    ImmutableState state = new ImmutableState(...);
    var newState = ImmutableLib.DoSomething(state);
    // ... any code run after DoSomething is entirely aware that state has changed

    public static class ImmutableLib
    {
        public static ImmutableState DoSomething(ImmutableState state)
        {
            // What happens here?  Well it doesn't matter to the caller, they know
            // they get a new ImmutableState back, and if they chose to ignore it
            // then the original state object they passed in will be unaltered.
        }
    }
It also allows for free transactional behaviour with rollbacks, which can be super useful:

    ImmutableState state = new ImmutableState(...);
    try
    {  	
    	var newState = ImmutableLib.DoSomething(state);
    	newState = ImmutableLib.DoSomethingElse(newState);
    	newState = ImmutableLib.DoAnotherThing(newState);

    	// Commit
    	state = newState;
    }
    catch
    {
        // If an exception fires then state won't have changed. 
    }

> I see where you're coming from, but to me you are pointing out the advantages of immutability, not choosing fields over properties.

You're right that I'm primarily arguing for immutability, but I would also argue that properties can't achieve immutability because the object can always change itself. A private property can always be changed by a member function. Clearly you can make classes that are immutable for all intents and purposes by using properties; I just prefer the explicit nature of 'readonly' - it's a language feature to be used in my humble opinion.

There's also the argument that properties are an extra unnecessary abstraction for this case. With a readonly field/property it is only set once, so having a layer of getter/setter logic is unnecessary. You only need that logic once at construction.


> What happens here? Well it doesn't matter to the caller, they know they get a new ImmutableState back, and if they chose to ignore it then the original state object they passed in will be unaltered.

Yes if they choose, but how are they supposed to make this choice? :)

You can't make an informed decision about it UNLESS you know what the difference is between the ImmutableState instance returned by DoSomething and the original one.

We're back to the same problem, then. We can't be sure what DoSomething does without peeking into its implementation.

That it produces another instance in process may be nice, but the core problem is the same.

Can I know for sure how the state of my MutableState object changed?

Can I know for sure what the difference is between the ImmutableState I passed to you and the other ImmutableState you made me pull out of your black box? I'm flying blind just the same.

> It also allows for free transactional behaviour with rollbacks, which can be super useful:

Agreed, but there is more than one right way to achieve it. This could also be done with mutable objects and deep copies. Just make yourself a backup clone before you go wild.

The cost of your approach is that all behaviour and logic is moved into some ImmutableLib class, so you're creating various "Libs", "Managers", "Services" and the like, which means that you're essentially writing procedural code.

This OOP anti-pattern is known as anemic domain model.

http://www.martinfowler.com/bliki/AnemicDomainModel.html

Thanks for the link to Rich Hickey's presentation; it seems to be some food for thought. I'll watch it when I have some free time on my hands, and maybe it will affect the way I see it.


I've actually switched to using a Reply<T> for anything which models the TrySomething pattern.

So it looks like:

  var possibleInt = TryParse(s);
  if(!possibleInt.Success)
  {
    //Fail out early
  }
  var actualInt = possibleInt.Data;
(Our Reply objects also allow for messages to be passed up, so that various layers can report details of the error they bumped into, and the presentation layer can then display them.)
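In case anyone's curious, the shape of such a Reply<T> is roughly this (a sketch; the real one also carries the messages mentioned above):

    public class Reply<T>
    {
        public bool Success { get; private set; }
        public T Data { get; private set; }

        public static Reply<T> Ok(T data)
        {
            return new Reply<T> { Success = true, Data = data };
        }

        public static Reply<T> Fail()
        {
            return new Reply<T>();    // Success defaults to false
        }
    }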


Looks like you're part of the way to implementing the Error or Either monad.


Is there a list anywhere of all the new functionality?



How tragic it is that VB gets type cases, guard cases and exception filters well before C#.

I guess that's why F# is starting to hit the prime time.


Some of these don't seem to be documented anywhere that I could find.

I posted an SO question about some of them, if anyone has any input on the matter: http://stackoverflow.com/questions/22881465/what-do-these-ne...


I've written a little bit more about them using the examples from CodePlex on my blog ( http://blog.filipekberg.se/2014/04/04/microsoft-open-sources... ).

There were also some interesting discussions on the C# sub-reddit: http://www.reddit.com/r/csharp/comments/225g4b/microsoft_ope...

Hope that helps and answers some of the questions you have!


Lots of good stuff in there. I don't use C# day-to-day anymore, but I always wanted string interpolation.
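For reference, the interpolation syntax ended up looking like this when C# 6 shipped (the early previews used a backslash form rather than the $ prefix):

    var name = "world";
    var greeting = $"Hello, {name}! 2 + 2 = {2 + 2}";
    // "Hello, world! 2 + 2 = 4"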


http://damieng.com/blog/2013/12/09/probable-c-6-0-features-i...

... so, in short, nothing definite yet, but the list above should be more or less accurate.


Well, it is nice that C# 6 is becoming cross-platform; can someone please explain the license behind it to me?

Microsoft usually goes with licenses that take away some of your rights. Is it the Apache 2 license or something else? What are the limitations on this project?

Thanks in advance.


The new compiler is Apache 2.


Whatever new features C# includes, people will still hate them.



