- no NULL: just use option types and pattern matching
- how powerful pattern matching is
- pipelining
- units of measure: we all know the billion-dollar disasters caused by distributed teams all thinking in their respective units; these should always be in the signature of a method (see the sketch below)
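A minimal sketch of the units-of-measure and option/pipeline points (the measure names and functions below are just illustrative):

    [<Measure>] type m   // metres
    [<Measure>] type s   // seconds

    let distance = 120.0<m>
    let time = 16.0<s>
    let speed : float<m/s> = distance / time
    // let wrong = distance + time   // does not compile: metres vs seconds

    // Options + pattern matching instead of NULL, chained with the pipeline operator.
    let tryHalve n = if n % 2 = 0 then Some (n / 2) else None

    [1; 2; 3; 4]
    |> List.map (fun n ->
        match tryHalve n with
        | Some half -> sprintf "%d halves to %d" n half
        | None -> sprintf "%d is odd" n)
    |> List.iter (printfn "%s")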
But when starting out I can still remember how I struggled with the rather mathematical int -> int -> int notation for signatures; I often failed to quickly spot the error in a function call.
Brace placement just felt wrong when nesting function calls (which can still be necessary despite pipelining). I understand why, but for the most part it just felt off coming from Java/C#.
Unfortunately it is a rather niche language. Being a .NET language does not help here, as its main features, such as algebraic data types, are largely incompatible with C# syntax.
I am just glad that more and more of its features are being adopted by C#. The only downside is that C# now suffers from feature bloat :(
F#'s selling point to me is type providers. Being able to write a SQL query and have the IDE/compiler know the result types is magic. Screw ORMs, this is how it's supposed to work.
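Roughly what that looks like with the FSharp.Data.SqlClient SqlCommandProvider, from memory; the table, columns, and connection string here are made up, and the exact API varies a bit between provider libraries:

    open FSharp.Data

    [<Literal>]
    let ConnStr = "Data Source=.;Initial Catalog=Shop;Integrated Security=True"

    // The provider connects at compile time, validates the SQL and
    // generates a result type with Id : int and Total : decimal.
    type GetOrders =
        SqlCommandProvider<"SELECT Id, Total FROM dbo.Orders WHERE CustomerId = @customerId", ConnStr>

    let printOrders customerId =
        use cmd = new GetOrders(ConnStr)
        for row in cmd.Execute(customerId = customerId) do
            printfn "%d: %M" row.Id row.Total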
I have to admit I only used F# on a small project a few years back, but the type providers made the job super simple. The job was basically generating some fancy reports from multiple data sources, and being able to just specify database connection strings and have everything work in the IDE/compiler was a huge win. Setting up CI was a bit of a PITA because I remember needing to spin up a DB to create the schema; there was some option to do it from the schema directly, but I didn't want to waste time.
What I was missing: "dotting into", in the sense that I have an object or some data I want to manipulate and can quickly see what can be done with it; that discoverability is harder with data in FP.
After getting into FP some years ago I have significantly altered my programming style in C#, using extension methods much more for pure functions, and recently non-nullable reference types, pattern matching, etc.
- computation expressions [0], which can be used to easily create a DSL (domain-specific language), so you can define workflows and keywords to abstract big business logic
- currying [1] and partial application, which let you call functions with only some of their parameters and pipe the resulting function along. It's like a much more flexible version of Func<T,...> as a C# method parameter. (See the sketch after this list.)
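A rough sketch of both ideas; the tiny "maybe" builder and the function names here are just illustrative, not taken from [0] or [1]:

    // A minimal computation expression: the classic "maybe" builder.
    type MaybeBuilder() =
        member _.Bind(x, f) = Option.bind f x
        member _.Return(x) = Some x

    let maybe = MaybeBuilder()

    let tryDivide a b = if b = 0 then None else Some (a / b)

    let result =
        maybe {
            let! x = tryDivide 100 5   // 20
            let! y = tryDivide x 4     // 5
            return x + y               // Some 25
        }

    // Currying / partial application: supply only some arguments and pass the rest along.
    let add a b = a + b            // int -> int -> int
    let addTen = add 10            // int -> int
    let bumped = [1; 2; 3] |> List.map addTen   // [11; 12; 13]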
Once you get comfortable with these, you'll find yourself using them all over.
If you have any specific questions I can probably answer them; one of the projects I lead at work has ~500 kloc of F# which interops with C# and unmanaged languages. The largest value-add for us is probably the units of measure though, since the domain involves a lot of physics.
I think that's one of those difficulties that is hard to understand for people already used to it, which is why it might be hard to find material explaining it to you the way you would like.
As someone who was taught programming with OCaml as my first real language, I find currying intuitive (the rightmost type is the result, everything else is an argument), but I sometimes have difficulty with functions that take other functions as arguments and are sufficiently complex that their behaviour is not obvious from their type signature. But that's inherent complexity, not really a problem with currying.
I don't know; I think once you're passing in the results of functions rather than the functions themselves to curry, the complexity is not centralised in the same way but broken down.
It's a bit like occasionally needing to create intermediate variables to grok precisely what is being made at a specific point in highly chained code (I do this occasionally in LINQ too). My brain is only small and cannot keep track of things clearly
I personally found F# to be a decent alternative to Elm for small browser apps with Fable, Elmish/Feliz, Feliz.Plotly etc.
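For anyone curious, the Elmish (model/update/view) shape is roughly this; a minimal counter sketch from memory, assuming the Fable + Elmish.React + Feliz packages (element and function names may be slightly off):

    open Elmish
    open Elmish.React
    open Feliz

    type Model = { Count: int }
    type Msg = Increment | Decrement

    let init () = { Count = 0 }

    let update msg model =
        match msg with
        | Increment -> { model with Count = model.Count + 1 }
        | Decrement -> { model with Count = model.Count - 1 }

    let view model dispatch =
        Html.div [
            Html.button [ prop.text "-"; prop.onClick (fun _ -> dispatch Decrement) ]
            Html.span [ prop.text (string model.Count) ]
            Html.button [ prop.text "+"; prop.onClick (fun _ -> dispatch Increment) ]
        ]

    Program.mkSimple init update view
    |> Program.withReactSynchronous "app"   // id of the host element in index.html
    |> Program.run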
You can generate typed interfaces for any libs you're missing that have TypeScript definitions using ts2fable; that's the kind of pragmatic tooling you need if you're veering off the well-trodden path in commercial development. Ionide in VSCode is excellent (though not perfect) and the compilation times are decent.
In terms of community (which is not unimportant when it's that small), even the compiler Gitter folks were super helpful when I had some reflection questions. Overall I'd use it again for small browser apps; for backend I guess it depends on your company and domain.
While I love F# too, I think what hinders adoption is:
- training in FP: for an enterprise, one of the most crucial points is imho recruiting
- tooling: while the tooling for F# is OK (though not ReSharper grade by far!), it is still lacking
What I often see, since refactoring is not that well supported in F#/FP languages, is the functional equivalent of spaghetti code, where function call after function call is piped along without following SOLID principles / Clean Code style.
I don't think you can apply the same principles to OO and FP but in terms of refactoring, my experience is that pure functional code is a joy to test and maintain. Small pure functions (again functions can only be pure functions, otherwise you're really talking about procedures), are by their nature easy to test, compose, cache, parallelise and pipeline. If that sounds bad to you, then fair enough.
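To make that concrete, the thing that makes pure functions a joy to test is that a test is just an assertion on a return value; a toy sketch:

    // Small pure functions: trivially composed with >> and tested without any scaffolding.
    let normalise (s: string) = s.Trim().ToLowerInvariant()
    let tokenise (s: string) = s.Split(' ') |> Array.filter (fun w -> w <> "")
    let wordCount = normalise >> tokenise >> Array.length

    // "Tests" are plain assertions on values: no setup, no mocks, no hidden state.
    assert (wordCount "  Hello   World  " = 2)
    assert (wordCount "" = 0)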
A good programmer can write FORTRAN code in any language.
Much as I loved LISP, back in the day, I hated working on other people's code. The author's prior (preferred) programming language was so obvious. And advocating consistent idiomatic LISPy house conventions never proved worthwhile.
Ditto pretty much every language since. So much so, I've come to reject metaprogramming and reflection for team projects. These days, JavaScript (gods save us) is a language-dialect construction kit, with every person and cohort creating their own opinionated mutant unicorn variant. As much as C++ ever was.
In contrast: I do like Lua. But I almost prefer just using it. Its one virtue is simplicity. It has less potential for individual artistic expression. When maintaining someone else's Lua goo, I spent much less time bending my mind to the author's mental frame.
Perhaps we should revert to BASIC, Pascal, or Java 1.0.
Perhaps future parsers will permit subsetting grammars, to better define and enforce house rules.
Perhaps we'll all revert to just banging the rocks together. Ook, ook.
I've seen a lot of praise for F#, but trying it out left me somewhat disappointed. Overall it was nice to program in, but there was a constant feeling of awkwardness which, it seems, was caused by F#'s ties to the rest of the .NET world. I don't remember the details too well, but some of the examples are:
- NULL values are still a problem because they can be introduced any time you interact with the .NET framework. I got hit by this almost right at the beginning when first using the language: I defined a data type with non-optional fields and used it as an argument to an ASP.NET Core controller method, only to realize that my non-optional fields don't enforce anything and ASP.NET will happily set them to NULL when the data aren't present.
- There are multiple slightly different ways to define data types (classes, records, discriminated unions) and it's not immediately clear what the consequences of using each are. I remember having to switch some of my data type definitions to classes because otherwise I couldn't make ASP.NET Core automatically deserialize them.
- You still have to learn in detail how all of the C# and .NET concepts map into F# to work with .NET and other libraries. For example, there is a weird duplication of collection types, sometimes you have to do a weird casting dance with the :> and :?> operators when working with interfaces, and dealing with out parameters also took me quite some time to figure out.
- Namespaces, classes and modules also seem to have some confusing redundancy.
I've tried out F# only once, for a single project, so a lot of my complaints are probably due to my inexperience with the language, but all in all it seems that a lot of the suffering is inflicted by the association with .NET, and the language could have been a lot nicer if it wasn't tied to it (I'm not dismissing the benefits of that tie, though).
Some of the issues (such as null values sneaking through the standard library) are a price to be paid for using a runtime such as .NET with a large ecosystem of existing libraries. This can be avoided to some degree by writing/using thin wrappers around the troublesome integration points that translate nullable values to option types. I agree, it is a source of friction. I believe other functional languages that target existing platforms (such as Scala) have similar issues.
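A thin wrapper at such a boundary can be as small as this (function names are illustrative):

    open System

    // Translate a possibly-null .NET return value into an option at the boundary.
    let tryGetEnv name =
        Environment.GetEnvironmentVariable name   // may return null
        |> Option.ofObj                           // null -> None, otherwise Some value

    // .NET out parameters come back as tuples in F#, which also wraps neatly.
    let tryParseInt (s: string) =
        match Int32.TryParse s with
        | true, value -> Some value
        | _ -> None

    match tryGetEnv "HOME" with
    | Some home -> printfn "HOME is %s" home
    | None -> printfn "HOME is not set"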
Other points you mention (like being forced to upcast values to interfaces) have been resolved in recent versions of F#.[1] The language continues to evolve and improve with annual major releases. My biggest pain point is actually the tooling around the language, but that situation has been improving as well.
The bad of F# is trying to find work in F#. I have the freedom to choose F# at work but it often seems like the wrong choice. If I want a web app I'm pulling in so many extra dependencies.
The upside is you can use all the .NET Core libraries.
The downside is then you have to know how to write wrappers/helpers for them and probably don't get to use all the cool stuff that comes with F# because you're using a library written for C#.
It is by far my favorite language, but I'm also super lucky I have relatively low code demands (basic ETL/CRUD stuff) and the ability to choose what I use in my environment.
I hope to god it gets more traction because I think it's amazing, but being second child to C# hurts it in a lot of small ways. Especially because "well we don't need that library, just use the netcore one" is such a common reaction in the community, but it's really not ideal.
> The downside is then you have to know how to write wrappers/helpers for them and probably don't get to use all the cool stuff that comes with F# because you're using a library written for C#.
For any long-term project you should own your abstraction anyway, i.e. a wrapper/interface. Not only do you need it to test properly, you never know when that one hour spent creating your abstraction will pay you back in ease when an external dependency inevitably needs to be replaced.
I think there's a fundamental mistake in architecting around pluggable dependencies. I've never worked in a codebase that was improved by treating a dependency as a thing that could be replaced at any time; in fact it only made me question why dependencies were expected to change so often. Maybe you want to spend more time prototyping if even the basic foundation of your software is so volatile.
You know? It's not the same as keeping your data layer and HTTP layer separate, or your local state separate from your rendering layer, and encapsulating business logic. Nor is it the same as building a modular system designed around your business's domains, so you can change or replace those as the needs of the business shift over time.
For example, we encapsulate our email sending service (Sendgrid). Sendgrid has a pretty good client library, so other than providing a slightly more idiomatic F# API, our wrapper layer doesn't do much.
However, (1) sending email is a very straightforward service, so if Sendgrid ever becomes expensive, unreliable, or otherwise limited, replacing it with a different service is quite realistic (as opposed to e.g. a database or caching layer, where you would probably also have to review all the _logic_ if you switch to a different engine). And (2), we don't want to send real emails during tests (especially e.g. fuzz or property-based tests, which run en masse), so you actually want an honest-to-god mock here.
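In F# that abstraction can literally be a function type; a rough sketch (the names are illustrative, not our actual code):

    // The domain only ever sees this signature.
    type SendEmail =
        {| To: string; Subject: string; Body: string |} -> Async<Result<unit, string>>

    // Production implementation would call the Sendgrid client library here.
    let sendWithSendgrid (apiKey: string) : SendEmail =
        fun msg -> async {
            // ... call Sendgrid with apiKey and msg ...
            return Ok () }

    // Test double: records messages instead of sending them.
    let recordingSender (sent: ResizeArray<_>) : SendEmail =
        fun msg -> async {
            sent.Add msg
            return Ok () }

    // Business logic depends only on the function type, so either works.
    let notifyUser (send: SendEmail) email =
        send {| To = email; Subject = "Welcome"; Body = "Hello!" |}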
It's not so much that you expect them to change, but when developing with unit tests it's a necessity. And it makes it possible to have the right degree of in-memory, component, and integration testing. It forces you to think about the interface with that dependency and ask the right questions, instead of potentially leaning on some other domain's abstraction a lot. It's no different than thinking in modular design.
The added benefit is that when, not if, you have to replace some dependency it's simple.
PHP and JS in the largest project, C# in small GUIs. The GUI builder saves so much time, and I think they could find any reasonable developer to replace me. If I use too much F# they would have a hard time finding someone.
I also use a lot of SQL; I think it's fantastic for data manipulation. There are some legacy Java apps as well.
I don't mind working with most languages (the only exception is Perl, I'm not a fan). I just love working with F#: so much less boilerplate, and the type safety. It's so much more fun.
But I want everything I do to last, and that means reducing the number of dependencies and build tools I introduce. Fable is amazing, but a SAFE stack app will bring in a huge number of extra dependencies and I'm not sure what maintenance would be like in 10 years. As it is, I'm working with a stack of Docker containers that compile an old version of PHP and drag in legacy libraries.
Many of the points here don't necessarily have to do with the language, at least to me, and could easily have occurred in other languages; coding style, for example. Sure, the language uses some sensible defaults and is somewhat pragmatic about when to be pure and when not to be. But in the end programming can get complex. Even if the language steers you towards practices that lower complexity (a productivity win), you still need to understand what is going on and use the right tool when appropriate.
On a side note, the performance thing seems to be a myth to me. There's no reason C# or F# should be faster or slower than one another, with enough effort, given the same underlying runtime. Many of the functional features are moving to be lower/zero cost in F# (inline lambdas, state machines, etc). They all compile down to IL in the end, so as long as you can get the IL you want, does it matter? Sure, it may be easier to express more performant code in one or the other depending on the domain. I could argue many cases where F# is easier to write performant code in (numerics, generic algorithms, explicit inline functions, inline vs ref, tail recursion, compile-time polymorphism, avoiding virtual dispatch by preferring static functions, etc.) and vice versa (native interop, until recently specialised async/await, etc.).
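As a tiny example of the compile-time polymorphism point (a sketch):

    // 'inline' resolves (+) per concrete type at each call site:
    // one definition, no boxing, no virtual dispatch.
    let inline addThree a b c = a + b + c

    let i = addThree 1 2 3               // int
    let f = addThree 1.0 2.0 3.0         // float
    let d = addThree 1.0m 2.0m 3.0m      // decimal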
> On a side note the performance thing seems to be a myth to me.
I've always thought about the performance advantages as more related to how FP allows for easy parallelisation, as your functions are pure, there's no mutability, etc.
Yes F# could be faster than C# but the compiler is missing some advanced optimisations. Right now you can do those by hand but often the code will read like C# once you are done.
I wonder if an FP compiler should actually be able to produce faster code than for imperative style: imho knowing intention and context could lead to far superior optimizations (a for loop with a mutable var for summing vs. List.sum).
Yes it definitely can! The most important optimisation (imo) is to replace pure code on immutable data-structures with imperative code on mutable data-structures.
Why not just write imperative code? Because it’s really hard to reason about.
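Concretely, the two versions of the summing example from the parent comment:

    // Declarative: states the intent.
    let total = [1 .. 10_000] |> List.sum                // 50005000

    // Imperative: spells out the mechanism a sufficiently smart compiler could derive from that intent.
    let totalImperative =
        let mutable acc = 0
        for x in 1 .. 10_000 do
            acc <- acc + x
        acc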
Maybe this is a problem of our tooling or terminology not being good enough, but I don't agree with the idea that code that passes the available acceptance tests at a given point in time is "right". A codebase is a living thing that needs to be maintained, so it's not enough for the code to be "accidentally right"; it needs to do the right thing for the right reasons such that it will still do the right thing as it evolves.
I'm mostly a C# programmer by trade, and having tried F# multiple times (last time by doing half of the AoC in F#), I haven't felt that I'd be ready to make the jump.
My main issues were:
- Much of the functional stuff like map/filter is available in C# as Linq, which granted, is not as nice, but the difference is not huge.
- Much of the new stuff in C#, like ValueTuples, does make it into F#, but with godawful syntax. I don't think this is the language's fault but rather the result of Microsoft's inattention.
- I'm still not sold on the idea of functional languages. Much of the stuff in them, like map/filter, option chaining, high-quality type inference, and pattern matching, has shown up in procedural languages (they are all available in C# with the exception of option chaining), while the downsides, like immutability making some algorithms impossible to implement, are significant. Imo, local mutation is not that bad, and functional languages actively prevent it, while global mutation is still a thing, which tends to be more problematic
- The generated .NET IL is not great. seq turns into an IEnumerable, list is some sort of linked list, I'm pretty sure you'd get pretty terrible performance if you did this.
C# type inference is not really comparable in any way to F#'s ML-style global type inference. You can realistically write a non-trivial F# program without a single type annotation. C#'s type inference is local: the type of a variable depends solely on the type of the expression on the right. F# uses global type inference, where all expressions of a program are used to infer the types of variables. This means that you can e.g. create a list with just [], and then later append a string to it, which the compiler uses to infer the type of the list.
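Roughly:

    // No annotations anywhere: the empty list starts with an unknown element type,
    // and the later cons of a string is what pins it down to string list.
    let greet names =
        let mutable pending = []                        // 'a list, type not yet known
        for name in names do
            pending <- ("Hello, " + name) :: pending    // now string list, names : seq<string>
        pending
    // inferred: greet : seq<string> -> string list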
Both approaches have their upsides and downsides (a program without a single type annotation is really hard to read without the help of an IDE), but they are clearly very different.
>high quality type inference, [...] available in C#
This is not what I would say at all. The only thing C# offers in type inference is the ability to use "var" when initializing a variable, to save yourself the tedium of writing an obvious type. I have heard something about generics too, but I can't remember exactly what it does.
This doesn't hold a candle to F#'s ML-like inference: omitted function argument and return types, automatic generalization, and the ability of the compiler to tolerate individual expressions not being obviously typed right away, as long as function-level context eventually makes it clear what everything is. C#'s type inference is pitiful and trivial by comparison.
If I remember correctly, in F# all methods are generic unless you say otherwise, while in C# you have to mark them as such explicitly. Also there are things like not being able to do generic operator overloading, but in practice C#'s type inference is good enough that you don't need to manually write out the types most of the time.
You can write stuff like
var totalLength = Strings.Select(x => x.Length).Sum();
in practice, where everything is generic, yet you don't need to specify the types explicitly.
Having to specify function argument types and the return type is a design choice; I think it makes the code more readable than having to figure it out on your own, but I see the appeal of the other approach as well.
To summarize, in some ways F# is better than C#; however, for most of these things the difference is not large enough to matter. I realize that this is not a fair comparison, as far more work and attention was spent on C#, but still, it is what it is.
> Imo, local mutation is not that bad, and functional languages actively prevent it, while global mutation is still a thing, which tends to be more problematic
F# also allows and encourages local mutation. In fact, it even allows global mutation without any issues.
Different data structures have different tradeoffs and are used in different contexts. Usually arrays are faster than a linked list, but I've seen arrays used pretty badly too, with GC pressure and LOH pauses; hence all the ArrayPool/MemoryPool stuff added recently, which mitigates it a little, except when you need to hold the data a bit longer than the current method/call. A simple linked list would have GC'd much more easily than a contiguous block of memory (i.e. GC speed/stability being more important than the slight gain in iteration speed, and a lot safer, especially in high-allocation paths). F# offers extra collections suited to some patterns you may want (e.g. recursive loops are easier with linked lists than arrays), but it doesn't set out to provide its own mutable collections, given .NET provides those already.
C# doesn't really have type inference beyond some simple cases, nor some of the features that sometimes give F# a performance advantage that I've used. Once you're used to it, I find I have to hand-translate some solutions into C# the way I would have written them in F#, with duplication, to get the same performance, depending on the problem.
If you are used to C#, that's fine. Using both, I still think F# is usually the better language these days if you're coming into it from scratch, although if you aren't, maybe the "sunk cost" isn't worth it?
> The generated .NET IL is not great. seq turns into an IEnumerable, list is some sort of linked list, I'm pretty sure you'd get pretty terrible performance if you did this.
Functional languages have completely different data structures. What is intuitively fast in C# (arrays) may be terribly slow in FP. For example, Erlang (and hence Elixir) is far slower than C# in single-threaded tasks. But immutability of data allows for incredible parallelism. F# allows mutability, but the default is immutable data.
I remember quite some time ago, when ASP.NET could handle a few thousand connections, the Elixir team achieved 2 million concurrent connections with just a few tweaks to the Elixir codebase.
> the downsides, like immutability making some algorithms impossible to implement
F#, like OCaml, has mutability. You can use references for local mutability, and if it makes your code easier to write or clearer, no one will bat an eye. Immutability seems to me to be mostly a Haskell obsession. ML languages have always been very pragmatic on this matter.
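For example (a small sketch):

    // Local mutation stays an implementation detail; callers still see a pure function.
    let sumOfSquares (xs: int list) =
        let mutable acc = 0
        for x in xs do
            acc <- acc + x * x
        acc

    // A reference cell when you want a first-class mutable slot.
    let counter = ref 0
    counter.Value <- counter.Value + 1
    printfn "%d" counter.Value   // 1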
> I haven't felt that I'd be ready to make the jump. My main issues were:
- Much of the functional stuff like map/filter is available in C# as Linq, which granted, is not as nice, but the difference is not huge.
That sounds a lot like "I don't see a need for functional languages because I'm already using them as libraries" ;-P
> Imo, local mutation is not that bad, and functional languages actively prevent it, while global mutation is still a thing, which tends to be more problematic
Many functional languages allow for local mutation as "cell" objects, if you want to abandon the purely functional paradigm for some encapsulated parts.
And you can do global mutation in a functional way, you just need to handle it with functional state-change idioms that maintain the referential transparency of the code (e.g. the State monad).
> I like F# because it's the best programming training language I've found.... It still works great for making me think about and become a better programmer, no matter what language I use in everyday work.
I can understand how trying a range of languages can make you a better programmer, by better appreciating the dimensions of language design and how it fits together, etc., but I have a harder time accepting this stronger, more specific claim. There's not really any support for it in the article.
Of all the languages I've used nothing is quite as fun as refactoring an F# app down to nothing. Expressive power is king in F# with the whole .NET ecosystem behind it.
Ah yes, superb content on FP/F#!
He helped me a long way towards understanding FP. Thanks for mentioning it; I can definitely recommend it to anyone who wants to get into FP/F#!
I feel this way about Haskell, and I understand why someone would feel this way about F#. No matter what language I use, I now constantly think about how to avoid side effects, how to minimize mutability, how to ensure my code flow is readable and not cluttered by irrelevancies (notice how the author keeps bringing up "noise"?). It's similar to the way that learning TDD led me to think about testability in every design regardless of language.
That's not to say that pure functional languages is the only way to these lessons, of course.
>I now constantly think about how to avoid side effects, how to minimize mutability, how to ensure my code flow is readable and not cluttered by irrelevancies (notice how the author keeps bringing up "noise"?)
Sure, but you would get that from pretty much any FP language. This article seems to be making claims for F# in particular.
C# is much further towards readability on the readability/conciseness scale, and I don't expect that to change. Sure, it gets many similar features over time, but compare e.g. pattern matching and how much clunkier it is. Or pipelining.
The good: it is an ML language with the goodies of the .NET ecosystem.
One of the languages I enjoy coding every now and then.
The bad: it appears the .NET team never really knows how to "sell" F#, and anything related to it comes as an afterthought, if at all.
So in MS shops that actually make use of it, it gets relegated to writing libraries, or lives in a parallel universe with wrappers or competing stacks for anything .NET.
F# is practically dead. I've worked with a lot of projects that jumped on that hype wagon. I don't know of any company that uses F# as of today. From a holistic viewpoint F# is complete garbage compared to C#. Could we have a working pure attribute in C#, though?
We've built a high-performance in-memory data transformation engine in F# and it works like a charm in thousands of installations around the world every day and supports business-critical processes in many organizations.
> which aren't pet projects and the empty set are equivalent
Wow. So much hubris in that sentence. I used to work with a startup in Sydney that still regularly uses F# for middleware/backend work. In the 1.5 years I was there, I saw the number of users of our product go up 3x but with very little delta in defects, rework, etc. F# wasn't the only reason for this of course, but it helped.
I've worked for two different finance companies that are entirely on F#. One has a huge customer base and sells the product as an on-prem hosted solution; the other hosts internally.
I have been consulting at a world-renowned hedge fund that has a lot of F#, including nightly processes that have apparently been running for years with zero maintenance. You'll never hear them talk about it or see them at meet-ups though. The CTO is a bit of a maverick; I'm not suggesting that this type of quiet adoption is common.
Sadly, in my experience it is common for F#, given its finance background, at least historically, particularly in London/Europe. There is a culture in these environments, having worked in a few of them, of not showing your cards and of quietly being a success. It's an "arms race" with your competitors, after all; often only people jumping between companies in the industry have an idea of what the other competitors are using, sometimes despite "non-compete" clauses. Meetups/conferences/articles showing your work are often explicitly banned, or at best not encouraged, by the companies for commercial reasons.