This is a great course. I love it and recommend that any developer take it.
That said, if all you want is to "switch from imperative language X to Scala" (e.g. from Java), and you want to learn the "non-functional" stuff in Scala first, then keep in mind that this course teaches functional programming first and Scala second. You can write very imperative, object-oriented code in Scala if you want, but this course doesn't focus on that part of the language (not because it's bad, but simply because the course is about functional programming).
With that said, go and enroll; it's one of the best Coursera courses I've taken. Great videos, very interesting programming assignments (many are adaptations from SICP, I later learned), and great forum discussions.
Of course you're right, but I actually thought the strength of the course was that it teaches you FP first, Scala later. Scala I'm still undecided about, but I loved FP.
Another complementary course is "Paradigms of Computer Programming"
https://www.edx.org/course/louvainx/louvainx-louv1-01x-parad... which is taught by Peter Van Roy. The course covers the functional, declarative, and dataflow programming paradigms.
This is indeed a great course.
The videos are well done, and Peter Van Roy's explanations are very clear. Even seasoned programmers could learn a lot from this course.
I concur, this is a really good course. I've since switched to Clojure, but I still think I gained a lot from working through this course. It's a really great introduction to functional programming paradigms, and well worth it even if you don't plan to continue using Scala.
I took both classes, but I really didn't care for Reactive as much. The first Coursera class really turned me on to the elegance of functional programming with Scala's unique type system, but much of that elegance is really lost dealing with some of the structures introduced in the second class, in my opinion.
Reactive is split into three "subclasses". The beginning part, taught again by Odersky, was a pretty useful extension of what was taught in the first class. As with the first class, the lectures were very well thought out, although some of the examples abandon the beauty of side-effect-free programming, which was a letdown after really being turned on to that style in the first course.
The middle section on Futures and ScalaRx was pretty rough. Those lectures, done by Erik Meijer (I believe), were less clear and less well planned. I don't know if there's an impedance mismatch between Scala and the reactive style or if ScalaRx is just the wrong abstraction, but the joy of Scala was completely lost for me in these two lectures. I'm used to Javascript's Promises/Deferreds, which are essentially the same thing as Scala's Futures/Promises, but the former are far more intuitive syntactically.
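For anyone who hasn't seen the Scala side, here's a minimal stdlib sketch of the Promise/Future split (illustrative only, not from the lectures):

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// A Promise is the write side, its Future the read side --
// roughly resolve() vs .then() on a JS Deferred.
val p = Promise[Int]()
val doubled: Future[Int] = p.future.map(_ * 2) // read side: compose
p.success(21)                                  // write side: complete once
println(Await.result(doubled, 1.second))       // prints 42
```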
I did, however, find the final three lectures on Akka to be very well taught. I had no experience on actor model programming, but I came away very intrigued by the possibilities.
A complaint that spans all three sections is that the assignments could be better focused on the concepts at hand. Each can require a fair bit of constructing your own test and debugging frameworks to figure out how to pass the rather opaque and unhelpful automatic grading system. I eventually lost patience and quit doing the assignments after floundering with the tools.
I concur. I loved the Functional Programming Principles in Scala course and recommend it to everyone I can. However, I'm not at all sold on the Reactive Programming course.
Erik Meijer's lectures -- at least in the first iteration of the course; maybe they've gotten better -- were riddled with errors, confusing exercises, and an overall lack of coherence with the rest of the course. It pains me to say this because Erik seems like a cheerful guy and I really wanted to like his lectures, but they are a mess.
Even Roland Kuhn's lectures, which are pretty good and a lot clearer than Erik's, didn't manage to sell reactive programming to me. One glaring problem was that the actor model seems to throw away most of Scala's static type checking, which we had learned in the previous functional course. Suddenly it's OK to pass whatever message to actors, in a way that seems closer to dynamic typing.
I've always found this to be a strange duality in the Scala community. On the one hand they praise strong, static type checking in general Scala programming, and on the other hand they praise an almost untyped paradigm in the actor model. And few people seem to take issue with this.
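To make the contrast concrete, here's a minimal stdlib sketch (no Akka dependency) of why classic receive blocks feel dynamically typed:

```scala
// A classic Akka receive is essentially a PartialFunction[Any, Unit]:
// the compiler accepts any message, and unhandled ones fail at runtime.
val receive: PartialFunction[Any, Unit] = {
  case name: String => println(s"hello, $name")
  case n: Int       => println(s"got $n")
}

receive("world") // handled
// receive(3.14) compiles fine but would throw a MatchError at runtime
assert(!receive.isDefinedAt(3.14))
```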
I did start that course, but after the first 2-3 videos I couldn't find the time to continue working through it. Back then I was working on a customer project that required a lot of attention. I may want to try it again the next time that it is being offered.
Compared to 2.10, the 2.11 release is nothing special: some optimizations, a bunch of bug fixes, and deprecations (that hopefully lead to a slash-and-burn of little-used language features in 2.12).
Curious to test out build times in 2.11; it sounds like some minor gains have been made there, with more to come in the 2.11 release cycle as the new scalac optimiser is integrated (http://magarciaepfl.github.io/scala/).
That's really exciting to me. (So is slash and burn, though.) Give me another two years of compiler improvements, bug fixes, and IDE/tooling improvements!
I'm actually in a really good place IDE-wise, have a stripped down Eclipse (Platform + RCP build) with 2 Scala plugins, Scala IDE and Scala Worksheets.
SBT does the heavy lifting (have automatic build turned off in Eclipse) while Eclipse provides the dev environment.
Really snappy, zero spurious errors, it's like night and day compared to 3 years ago, woo hoo ;-)
async/await and removal of the 22-field limit for case classes and tuples are the big ones, I think. (edit: the limit is still in effect for Tuple, apparently; only removed for case classes.)
In database-heavy (or Actor ask-heavy) code dealing with a lot of Futures, async/await has the potential to significantly influence code style. for-comprehensions often don't cut it when you're dealing with a Future[Option[User]] and need to pull in their assigned roles from a Future[Seq[Role]].
val userOption = db.get(userId) flatMap {
case None => Future.successful(None)
case Some(user) =>
Future.sequence {
user.roleIds map(db.get(_))
} map { roles =>
Some(user.copy(roles = roles.flatten))
}
}
vs:
val userOption = async {
for {
user <- await(db.get(userId))
roles = await(Future.sequence(user.roleIds map(db.get(_))))
} yield user.copy(roles = roles.flatten)
}
> for-comprehensions often don't cut it when you're dealing with a Future[Option[User]] and need to pull in their assigned roles from a Future[Seq[Role]].
If you aren't familiar with monad transformers, yes, it can be tricky. However, it's trivial using an Option monad transformer (OptionT[Future, A]) to get the same semantics.
for {
  user <- OptionT(db.get(userId))
  mappedRoles = user.roleIds.map(db.get(_))
  roles <- mappedRoles.sequence.liftM[OptionT] // lifts into OptionT[Future, A]
} yield user.copy(roles = roles.flatten)
Look at http://github.com/scalaz for already-built monad transformers (Either/State/Option/Writer) that work with the standard library Future, along with tons of other goodies. I actually actively dislike the async stuff, as it gives you another way of doing the same thing at a less powerful level of abstraction.
I've used Scalaz a bit (don't remember exactly why, but something to do with Future transformations), but I found it crushed the compiler, especially combined with IntelliJ.
I like the async/await stuff. Especially after attending the ScalaDays presentation on it. The idea that it produces a state machine in the background feels like it's very easy to reason about.
I actually (personally) find for-comprehensions probably the least useful feature of Scala. IME they rarely produce the most readable code with just a couple of transformations in play, and it's not often I find myself dealing with compatible types in the more complex cases.
So I guess I consider async/await the readable/prettier alternative to direct mapping that for-comprehensions mostly fail to deliver on. for-comprehensions are probably Scala's second biggest wart IMO (not harmful, more just mostly useless). YMMV. Sort of like `__DATA__` or `=BEGIN/=END` in Ruby.
For comprehensions are probably Scala's most powerful feature aside from higher-kinded types.
You may not see the advantage of a for comprehension when sequencing a few operations over Future. However, when you have a large number of calls to sequence, along with filtering (which for comprehensions can do), it's indispensable.
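A tiny sketch of what the filtering buys you (illustrative only, over List rather than Future):

```scala
// A guard in a for comprehension desugars to withFilter, so
// filtering and mapping stay in one expression:
val squaresOfEvens = for {
  n <- (1 to 10).toList
  if n % 2 == 0
} yield n * n
// equivalent to: (1 to 10).toList.withFilter(_ % 2 == 0).map(n => n * n)
println(squaresOfEvens) // List(4, 16, 36, 64, 100)
```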
I disagree. I can only speak for F#, but for me the comprehension syntax is mostly an aesthetic choice:
[1..100] |> List.filter (fun e -> e % 2=0)
|> List.map (fun e -> e * 2)
or:
[for e in 1 .. 100 do
if e % 2=0 then
yield e * 2]
Personally I prefer the first one since it's compositional and reads more like a dataflow. But I'm not familiar with Scala, so maybe I'm missing something.
I've actually gotten to the point where I think for comprehensions are a code smell. The only time I prefer the for syntax is when I have a large list of monad chains.
Large lists of monad chains almost always indicate some sort of poor factoring of the code.
I'm curious why you'd call for-comprehensions powerful though. AFAIK they're just sugar over map/flatMap/filter.
IME it's almost always more succinct and more readable to just call the methods you want directly.
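For reference, the desugaring is mechanical (a quick sketch):

```scala
// Two generators desugar to flatMap + map:
val sums = for {
  x <- List(1, 2)
  y <- List(10, 20)
} yield x + y

// same thing, calling the methods directly:
val sums2 = List(1, 2).flatMap(x => List(10, 20).map(y => x + y))
assert(sums == sums2) // both are List(11, 21, 12, 22)
```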
Plus, you can say: map over an Option and transform both cases. You could also map then getOrElse, but readability suffers if your map is multi-line IMO. In the for-comprehension version you can't transform the None case.
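The dual-transform case I mean, sketched with fold (names here are just illustrative):

```scala
// fold handles both Option cases in one expression; a for
// comprehension over Option can only transform the Some side.
def describe(userId: Option[Int]): String =
  userId.fold("anonymous")(id => s"user #$id")

println(describe(Some(7))) // user #7
println(describe(None))    // anonymous
```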
I use for-comprehensions with Extractors in testing, because whatever. It's a test. So:
for {
Some(user) <- db.get[User](userId)
} yield otherStuff
Is fine in that case.
Pattern Matching and Lifting are probably Scala's best features off the top of my head. Type Classes a close third.
But for-comprehensions are just sugar; they don't enable you to do anything you couldn't do without them, and they actually make some flows impossible to write. I find that you can usually tame a nested mess with partial functions and a collect(), or a fold() to handle your dual transform.
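For example (a toy sketch of the collect() case):

```scala
// collect applies a partial function: it filters and maps in one
// pass, so nested Some/None handling collapses into the pattern.
val maybeRoles = List(Some("admin"), None, Some("user"))
val upper = maybeRoles.collect { case Some(r) => r.toUpperCase }
println(upper) // List(ADMIN, USER)
```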
That's just me though. Only been at the Scala job for a little over a year.
edit: @noelwelsh
I would nest, yeah. But I'd see it more as a refactoring opportunity. Should authorization be in a for-comprehension? I'd instead add an AuthorizedAction in Play that authenticates and provides a User from cache. So your example would look more like:
I think I'd have to agree with another poster that doing all that inside of a for-comprehension would look like a code-smell to me.
More than that, is map/flatten the right tool for the job for all this? Even if I wanted to do it inline, I'd probably prefer:
val perm = loginActions.mandatoryAuth(req)
val queryString = req.mandatoryParam[String](uuidParam).toClientProblem
(perm zip queryString) map {
case (Some(perm), CachedUser(user)) => actions.user(Read(perm, user))
case _ => BadRequest()
}
It's definitely subjective. I wouldn't fault anyone for using the for-comprehension (though I would encourage them to consider if it should rather be an Action), but describing it as "powerful" just doesn't sit right with me for some reason.
Plus while you'll see for-comprehensions in the wild on occasion, I think it's a stretch to call them idiomatic. Unless you were going to constrain yourself to projects with ScalaZ as a dependency I suppose.
When you're doing functional programming you represent (almost) everything as a value. Say you're working in a concurrent system (e.g. a web app), so you're dealing with Futures everywhere. Are you going to write 4 or 5 nested flatMaps? It's unreadable. For comprehensions are much easier to parse. Here's an example from real shipping code:
for {
perm <- loginActions.mandatoryAuth(req)
queryString <- req.mandatoryParam[String](uuidParam).toClientProblem.fv
user <- stringToUser(cache.user, queryString).fv
result <- actions.user(Read(perm, user))
} yield result
Then you get into nested monads (e.g. Either can represent a computation that succeeds or fails, which you want to contain inside a Future) and you use monad transformers to squish them into one single monad, to avoid nesting for comprehensions.
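To show the mechanics without pulling in Scalaz, here's a hand-rolled and deliberately minimal OptionT sketch (the library versions are far more general; all names here are illustrative):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Wraps Future[Option[A]] so that one for comprehension can sequence
// through both the Future layer and the Option layer at once.
case class OptT[A](value: Future[Option[A]]) {
  def flatMap[B](f: A => OptT[B]): OptT[B] =
    OptT(value.flatMap {
      case Some(a) => f(a).value                // continue with the inner value
      case None    => Future.successful(None)   // short-circuit on None
    })
  def map[B](f: A => B): OptT[B] =
    flatMap(a => OptT(Future.successful(Some(f(a)))))
}

// Usage: no nested for comprehensions, one flat chain.
val result = for {
  a <- OptT(Future.successful(Option(2)))
  b <- OptT(Future.successful(Option(20)))
} yield a * b
println(Await.result(result.value, 1.second)) // Some(40)
```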
For me the most exciting part is that they're bringing in support for Java 8. Though lambdas don't impress anyone already using a functional language on the JVM, the introduction of Nashorn and the new java.time API (heavily inspired by Joda-Time) are pretty awesome.
Java 8 is the most exciting release since Java 2 in my opinion.
I've bumped up against that limit pretty bad while trying to deserialize json. Shapeless did a lot to solve the problem, but I'm sure glad that limit has been removed.
For that very reason, there is no generated unapply method for case classes with more than 22 parameters. This doesn't prevent pattern matching on the case class, because the pattern matcher knows it is a case class and extracts the parameters directly, without calling unapply.
As a matter of fact, I'm not sure now. I was looking for confirmation that the tuple limit is removed in Scala 2.11 but couldn't find it, whereas statements that the 22-parameter limit for case classes is removed are all over the place.
AbstractFunction, Function, Product and Tuple are all still limited to 22.
Increasing the limit would generate too much bytecode, and in the current design lifting it entirely is just impossible (without runtime code generation, a custom classloader, etc.).
So why did they choose to allow case classes to have arbitrary arity? Not that I'm complaining; I just don't understand the inner workings of Scala that well.
Indeed, because the fallback is HList, and when I have tables with up to 200 columns it kills the CPU and crashes Scala IDE to the point where you just can't work with it.
Go to meetups. This is fully general advice if you wish you had a job programming $LANGUAGE or $FRAMEWORK or whatever. Go to meetups for the job you want, not the job you have.
This is a tough one. I started working with Scala full time by convincing my boss that this was the way to go. Let's just say, without going into too much detail, that I've proved my point, because our original stack now looks miserable in comparison. From what I can see (at least in the UK), Scala jobs are more of the high-end ones and usually have to do with finance and/or "big data", which is why there aren't many of them. Also, I think Scala is not for everyone, which is why it's usually used by seasoned professionals who have mastered more than one language.
A good start would be putting your contact info in your "about" on HN. I know a few people here in SF looking for contractors who'd probably be happy to send some work your way...
I haven't seen any job boards specific to Scala, but there do seem to be more jobs popping up looking for Scala experience.
You could also work at evangelizing Scala in your current (presumably) Java shop. I've found success with this approach by getting other developers interested in the language and mentoring them, especially during the steeper parts of the learning curve. If you can pique the interest of a good portion of the development team it's often not too difficult to get a new language introduced through smaller non-critical or non-production systems, which is a good foothold with which to get the benefits visible to the wider group. YMMV of course.
Do the Coursera courses and demonstrate an interest. When we hire we look for developers with 2-3 languages under their belt. Only one has to be professional for a jnr/mid candidate. Most other Scala shops I know of work the same way.
Even this low bar eliminates 90% of candidates so it won't take much for you to shine at this level.
If you're snr then you might find yourself actually having to introduce Scala at a Java shop. Run dojos, lunchtime lightning/brown bag talks and lever it in as a testing framework or throwaway prototype. If that doesn't work, start questioning what you did to deserve being called snr.
We're hiring for Scala development near Los Angeles (Ventura County). We don't expect you to have prior professional Scala experience; a sharp dev can get up to speed and start writing decent Scala fairly quickly. (Profile has my contact info.)
Great to hear that's been fixed. I saw an issue that was a manifestation of this, and in particular it made Akka's experimental typed actors entirely unusable for my case.
Is this the first major Scala release that doesn't introduce a huge language or library change?
Transitioning to 2.11 should be smooth; aside from some library deprecations, not much has changed.
To be fair, 2.10 introduced quite a few new features (my favorite being string interpolation). To name some more: implicit classes, value classes, language imports, reflection & macros,...
2.11 set the tone for the remainder of the 2.x cycle: smaller, faster, stabler. 2.12 will focus on Java 8 support and making it (even) easier to learn and use Scala.
We're also working on making the compiler a better platform for others to innovate on -- originally via compiler plugins, now using reflection & macros. A lot of cool stuff is happening outside core Scala, such as scala.js, and we hope to spur on that trend.
I really hope that you guys aren't planning to pull a Python 3 with the 3.x series. We're using Scala quite heavily in our production systems, and the naysayers will have a great "I told you so" moment if we end up sitting on a ton of critical Scala code which no longer compiles in a future version.
We've been thinking about this a lot, even though Scala 3 is a couple of years out. Our current thinking is to bring the 2.x series as close to 3.0 as possible, with the remaining breaking changes being compelling enough to switch. Please share your ideas/concerns over at scala-internals!
Part of the solution will be tooling, and the team at EPFL has started prototyping a migration tool that generates patches to turn a well-typed Scala 2 program into the equivalent one on Scala 3. I believe our type system and the fact that we're a compiled language will make a big difference compared to Python.
Wow, this is the first time I've heard that Scala 3 is actually in the works (vs. Dotty as research that may incrementally find its way into Scala 2).
Naturally, tradeoffs will be made. Are you guys at a point yet where you can reveal what we're going to _lose_ in terms of functionality and flexibility?
I know the core Scalaz developers had a bit of an uproar on Twitter when Dotty was first revealed (due to the simplified/less powerful type system in Dotty that may make some scalaz magic very difficult to pull off).
Otherwise, improved tooling, build times, Scala 2 sans les warts, etc. will be a boon for the language.
So, Scala 3.0-M1 in 2016? Give us the inside word ;-)
I kind of hope the Scala guys do that, actually. There are too many ways to do certain things. "list.map(_+2)" is legal while "list.map((_+2)+3)" is not; instead, you need an explicit anonymous function: "list.map(x => (x+2)+3)". Why is the _-style even legal if it's so inflexible? Why not require full functions all the time? Why have two ways to do the same simple thing?
The _ style is legal because it's really useful. Most of the time you're mapping with small functions, and the extra few characters really add up - more than enough to be worth the extra learning.
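Concretely, a quick sketch of the rule that trips people up:

```scala
// _ expands at the innermost enclosing expression, so (_ + 2) + 3
// would mean (x => x + 2) + 3, which doesn't type-check; nesting
// needs an explicit parameter.
val xs = List(1, 2, 3)
val a = xs.map(_ + 2)             // fine: shorthand for x => x + 2
val b = xs.map(x => (x + 2) + 3)  // explicit lambda for the nested case
println(a) // List(3, 4, 5)
println(b) // List(6, 7, 8)
```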
It would also be nice if "<-" were replaced with "in". It would just be more helpful and easier to remember. Special symbols should be avoided at every opportunity.
https://www.coursera.org/course/progfun