Awesome -- I'm super glad to see there's going to be some leadership by the folks from the "Scala, a language for making nice software" contingent to balance out the echo chamber of the "Scala, where if you don't have a graduate degree you're going to feel like a moron" contingent.
Edit: To clarify what I mean -- http://underscore.io/blog/posts/2015/06/10/an-introduction-t... exemplifies, to me, what is wrong in Scala-land. If Abstract Algebra is prerequisite knowledge for tools I 'really should be using', something has gone horribly wrong.
That's... actually simple stuff. Have you had a look at Scalaz? Or Shapeless? The whole point of Cats is to solve the overkill of Scalaz, where you sometimes do need algebra, by making abstractions simpler and safer to use. Remember, nobody is forcing you to use this wizardry in Scala. You can write it in a simple, matter-of-fact better Java style; or you can leverage the power of abstractions to create something advanced. You don't have to indulge in applicative functors and coproducts just because. Use them wisely, if at all!
If you have the prerequisite knowledge. Most people don't. And then somehow it always manages to lead to codebases that look like `a ~=>>> b`.
nobody is forcing you to use this wizardry in Scala
Except that sorting out things in open source code absolutely does entail dealing with this. With JS, Python, or Java, I can dive into just about any open-source codebase, and at least get some clue as to what's going on. I think a large part of why Scala's open-source situation is pretty awful is that so many Scala codebases are a bunch of dudes waving their [censored]s around, using libraries that consist of inscrutable squiggles and meaningless terms, just to demonstrate how smart they are.
You can write it in a simple, matter-of-fact better Java style; or you can leverage the power of abstractions to create something advanced.
This is a false dichotomy. When I look at code written by Martin Odersky, I don't see people trying to brag about their grad degrees through their code. And the reason this project makes me so happy is that it really feels like "the good people" of Scala reclaiming the community from the LambdaBros.
If it's something a sophomore with no background beyond calculus can handle, it is simple stuff. It's ridiculous that treating software engineers as though they have a bachelor's degree in STEM makes you a "LambdaBro."
It's disingenuous to say that this is included in a "bachelor's degree in STEM" -- it's part of a specific course of study.
Part of the reason this issue pisses me off so much is that I have a degree from one of the top CS programs in the world. I was definitely more of a theory than a systems person. And yet it still feels like a pile of unnecessary ego-stroking gibberish.
I feel comfortable saying most people who make software are less-experienced in this area than I am. When I run into this, I can at least say: "This is bullshit", and not feel the imposter syndrome of "It's just me -- I'm not smart enough for this" that I imagine most people feel.
Beyond that, functional programming is, by and large, easier than imperative programming. I love writing Scala -- It's honest about side effects, honest about concurrency, and has one of the better type checkers around. And I'm sick of seeing people try to "fence off the riffraff" by raising the barrier of entry to the Scala community.
Beyond that, functional programming is, by and large, easier than imperative programming.
You say this but dislike even a bit of abstract algebra? I see a conflict here; anyone who likes FP, and more so typed FP, should be familiar with simple concepts like monoids, functors, and monads. Honestly, I don't understand how someone can write in a functional language and not know these concepts... they're literally Haskell 101. That doesn't even scratch the surface of what FP can do...
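To make concrete just how small the monoid idea is, here is a minimal sketch in plain Scala (names like `combineAll` are my own, not the Cats API):

```scala
// A monoid is just: a combine operation and an identity element,
// where combine is associative. That's the whole definition.
trait Monoid[A] {
  def empty: A
  def combine(x: A, y: A): A
}

// Two instances: integer addition and string concatenation.
val intAddition: Monoid[Int] = new Monoid[Int] {
  def empty = 0
  def combine(x: Int, y: Int) = x + y
}

val stringConcat: Monoid[String] = new Monoid[String] {
  def empty = ""
  def combine(x: String, y: String) = x + y
}

// One generic fold then works for every monoid.
def combineAll[A](xs: List[A], m: Monoid[A]): A =
  xs.foldLeft(m.empty)(m.combine)
```

That's the whole concept: pick any type with an associative combine and an identity, and generic code like `combineAll` works for all of them, e.g. `combineAll(List(1, 2, 3), intAddition)` gives `6`.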
"Honestly, I don't understand how someone can write in a functional language and not know these concepts"
I totally support @jowiar on this. I have about 4 years of experience writing Scala and I don't think there was even one day when I needed to know monoids, functors, and monads :) I have a blurry idea of what monads are about but, honestly, I could have done without it.
In my work, when I do code reviews, I emphasize simplicity and readability, and I wouldn't want to see any of my coworkers using scalaz or a similar library. It's not true that if you don't like it you don't have to use it. If one person on a team of developers starts to use these libraries, it eventually propagates: after some time someone has to write code that calls the code written with scalaz, then another person has to build on that, and so on. And then it's enough that an experienced developer moves to another project and a new one takes her place (not a rare situation - it happens every few months). The new developer looks at the code and is unable to say what is happening there without first learning what all those cryptic operators and weird class names are about.
The strength of a programming language lies not in how few times you have to push keyboard keys in order to write an application. It lies in how practical it is for a team to collaborate using it. If a team decides that the language is hard to read and hard to learn, they will search for another one. If enough teams decide that, the language dies.
To me, functional programming's advantage comes from avoiding reasoning about state. Everything else is gravy.
Beyond that, it's very easy to go down a rabbit hole of terminology. Javascript programmers write functors all the time, but I don't know if I've ever seen the word "functor" written in a JS context by anyone other than raganwald. They're just functions, and the fact that they return functions is damn useful.
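For what it's worth, the pattern being described - functions that return functions - is just higher-order functions, and it looks the same in Scala; a throwaway sketch (all names hypothetical):

```scala
// A function that returns a function - no special vocabulary required.
def adder(n: Int): Int => Int = x => x + n

val add5 = adder(5) // a freshly built function

// The same shape shows up as lightweight configuration:
def logger(prefix: String): String => String =
  msg => s"[$prefix] $msg"

val warn = logger("WARN")
```

Calling `add5(10)` gives `15`, and `warn("disk full")` gives `[WARN] disk full` - useful, whatever you call it.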
Hmmm in hindsight you're probably right. Though he does refer to libraries using abstract algebra as having "meaningless terms," which doesn't really sound like a positive opinion about the subject. It's important to have the correct terms, and watering them down just creates ambiguity. If software abstractions stem directly from math, there's no reason to hide the fact. But, like you said, it seems like his point is that using abstract math in libraries raises the barrier of entry for new developers and that they shouldn't need to learn abstract math to be able to be productive, not that he personally dislikes the topic.
It definitely is possible to go overboard with operators, making a codebase very confusing. Case in point: the lens library. Those operators are optional, though, and the best practice is to just use the actual named functions.
If they're just terms, then it doesn't matter that they're terms from abstract math. However, this way there's the benefit that it's super easy to find resources, since you have all of the mathematical literature.
I didn't say it's included, I'm saying anyone capable of a bachelor's degree in STEM is capable of picking it up. And if you think you were a "theory" person, but don't know the basics of abstract algebra, then you have an incredibly naive idea of what theoretical computer science entails. Hint: there's a lot of abstract algebra.
I do understand your point, there is a lot of ego-stroking going on in the community, and I loathe it as well... but this page? This is just a demonstration of Either, one of the simpler types! I honestly think you just saw the word "algebra" and that was enough to alarm you, which I kind of understand as well, but still. This isn't λ-bro stuff. This is one of the good ones.
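To underline how ordinary this is: the whole idea fits in a few lines of plain Scala with the standard-library Either, no Cats required (a sketch of mine, not from the linked post; right-biased `map` assumes Scala 2.12+):

```scala
// Plain standard-library Either: a value that is either an error (Left)
// or a result (Right). No category theory required to use it.
def parsePort(s: String): Either[String, Int] =
  try {
    val n = s.toInt
    if (n >= 1 && n <= 65535) Right(n)
    else Left(s"port out of range: $n")
  } catch {
    case _: NumberFormatException => Left(s"not a number: $s")
  }

// Since Scala 2.12, Either is right-biased, so map/flatMap just work
// (earlier versions needed .right.map).
val ok  = parsePort("8080").map(_ + 1) // Right(8081)
val bad = parsePort("http")            // Left("not a number: http")
```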
Understanding for loops and if statements is a prerequisite for many tools. Why is understanding some basic algebra completely unreasonable? Cats and Scalaz allow you to represent things cleanly - surely that's a win. Can they be abused? Yes, of course. But please don't throw the baby out with the bathwater.
> Understanding for loops and if statements is a prerequisite for many tools. Why is understanding some basic algebra completely unreasonable?
That's a slippery slope. When you are staring at a huge stack of accumulated knowledge, you need to decide when you can start ignoring the details past a certain level.
It's not required to know assembly to write code in a modern language, but it helps. It shouldn't be required to know category theory to write code in a modern language either.
But Cats and Scalaz implement category theory concepts. You would only use either library if you understand what they do and why you need them. Otherwise, it's like using OpenCV to search through a database and then complaining about the complexity and overhead.
The problem, though, is the context. The article I linked presented the assorted category theory tools as things the community 'really should be using'.
And doing that has an effect, in that it increases the number of concepts that a person needs to grok to be a functioning member of the community -- to be able to dive into open-source code, discuss code with other participants, etc. Adding concepts should only happen when it's clear that the benefits are larger than the costs -- not just for the individual, but for the community. It requires far less knowledge to dive into a Ruby or JS codebase than it does a Scala codebase. And adding "category theory" as a necessary prereq for being part of the Scala community just makes things incredibly exclusive.
What would you suggest? Banning everyone who likes functional programming from the internet, so they don't post about things you don't like? To be fair, I have seen very few projects that use scalaz or cats.
The thing is, the basics of abstract algebra are easy to grasp, and you don't need to spend a lifetime studying; it takes a couple hours to get started with these simple (but powerful) concepts.
Why would someone in elementary or middle school need to learn that though? I think anyone in high school could easily learn a bit of abstract algebra.
I hear you. I wouldn't say you need to understand any category theory. Just be able to read types, and you are fine. The problem is, people aren't used to that.
What with the recent TypeSafe news, this is quite reassuring. I love Scala and would have been devastated if it ended up going into financial backing limbo.
Typesafe/Lightbend continues to back Scala as it always did, no change there. The Scala Center is a great additional move to get more people / resources working on Scala :-) So while "the recent Typesafe news" is nothing to worry about, the Scala Center news is very awesome for the entire Scala community :-) (disclaimer: Akka team here)
And optimistically, maybe a way to give some of the other community stakeholders, like Typelevel, a more official seat at the table. But I haven't heard anything about what Typelevel feels about this, so maybe I'm speaking too soon.
The negative opinions were overblown. A lot of them were complaints that something that's impossible in other languages is hard or difficult to learn in Scala, or complaints about real but superficial problems. The language has plenty of warts and there are some bad libraries and tools out there, but all these things are superficial and transitory; the core is sound and I expect all successful future languages to have a very similar design.
To give an equally sincere response: dynamic languages are dead, and languages without higher-kinded types are dying - they're too useful to do without, and Scala shows they're practical. We're still discovering more things you can do with Scala.
I think the language we'll be using in ten or twenty years' time won't be Scala - or it will be a Scala that's changed unrecognizably from the Scala we know today. But I think that whatever language it is will have a strong type system with type inference and higher kinds and some kind of typeclass functionality and probably also traditional-OO subtyping (failing that at least Go-style delegation), will allow method calls without brackets and concise lambdas with something like "_", will have a culture of not using C libraries via FFI, and will be strictly evaluated. If you wrote a clean new version of Scala today there's a lot you'd leave out (you might end up with something that looked rather like Ceylon), but I think those things are here for good. Maybe it'll be Idris, maybe something else.
I agree completely. It might be interesting to spell out what this core is: it is the successful marriage of ML-style functional languages with class-based OO.
The core idea of ML is this: strongly typed with parametric polymorphism and full sum and product types, type inference, first-class functions, first-class state, (almost) first-class modules, and exceptions as a first-class non-local control mechanism.
The key innovation Scala has over its predecessors OCaml and F#, which also attempted this merger, is that Scala recovers ML-style functional programming as a special case of OO programming, while the predecessors went the other way, which didn't work so well.
Much of Scala's complexity comes from trying to encode ML features using subtyping and class-based OO. So I think the jury is still out on whether Scala represents a successful marriage. There are other ways to add first-class modules (objects) to ML; the most interesting is probably the new "1ML" paper. Personally I would rather have OOP encoded in a functional language than the other way around. This would avoid many of Scala's warts, such as a nominal core (e.g. Function1 through Function22), methods not being first-class, pattern matches that cast at runtime, etc.
Depends on the criteria of success. Maybe OO and typed FP can be merged successfully in more than one way.
I agree that 1ML is interesting. Some of the warts come from JVM/Java compatibility. In what sense would you argue that OOP can be successfully encoded on top of a typed lambda calculus? Finally, the relative simplicity of DOT [1] indicates that Scala's core is not that complicated.
The criteria of success for me is simplicity and generality. The part of OOP I am interested in is first-class modules, being able to package up types and functions acting on those types into records.
Dotty is not the core of the current version of Scala. It does look interesting, but personally I would prefer an approach that does not use subtyping. My point is only that Scala does not represent the last word in FP+OOP.
I didn't say Scala represents the last word, I said it's a "successful marriage" of "ML-style functional languages with class-based OO". I implicitly assumed that subtyping is part of of class-based OO.
I expect Scala to gravitate more towards DOT in the future, as early mistakes in Scala's design will be ironed out.
It might be interesting to see how the form of OOP that you are interested in can be expressed in DOT, for example by removing subtyping from D_{<:}. Have a go at it!
I didn't mean to imply you had said it was the last word. The form of OOP that I would be interested in would probably use row polymorphism instead of subtyping. It might indeed be possible to encode this in DOT.
I think understanding the expressivity of the DOT approach vis-a-vis the more conventional 1ML approach should be interesting.
1ML is based on F-omega, which has higher kinds but not dependent sums or products, while D_{<:} has path-dependent products but no apparent higher kinds, if I understand the paper correctly.
Interestingly though, Scala's model is basically Javascript, with a sound type system. As much as classes have been discredited as the fundamental structure of a system's architecture, they still prove their usefulness in Java and all the dynamic languages, and now a new crop of languages, like Swift, Rust, and Kotlin. So, I'd suggest that the Scala route may well be the path forward, especially post-DOT.
True, there's nothing in Scala that's like the dynamic inheritance/delegation of JS. But I'd say that in practice, people use Javascript's prototype functionality to implement static class- or mixin-based OOP.
Well sure, but I'd say Scala's model is more similar to any of Java, Python, Ruby or C++ than it is to Javascript. So I don't see that it's a "Javascript model".
Not true. The rise of Scala itself is probably the clearest counterexample; there is no socioeconomic case for it, it's succeeded purely on its technical merits.
Quite a bit of Scala's design was influenced by the goal of Java/JVM compatibility. This was clearly a purely "socioeconomic" decision. Without this compromise, Scala would never have taken off.
I disagree. There are any number of failed JVM languages, and no non-JVM equivalent of Scala that I'm aware of. The biggest effect of the JVM on Scala was a "free" high-quality (and high-performance), multithreaded, garbage-collecting runtime (contrast with OCaml) which is an engineering-driven decision.
What talks are you referring to? Can you tell me what I have written that you disagree with? The first 1ML paper was last year, that's not "old" relative to Scala, which first appeared in 2004.
For instance in http://www.infoq.com/presentations/data-types-issues at 26:26 the topic of mapping modular to functional vs. functional to modular is presented. It wasn't some arbitrary decision, but based on the lessons learned earlier.
Since then, DOT has been proven to be sound, dotc can bootstrap itself, and large subsets of "Scala 2" can already be compiled with dotc.
Dotc also sports a very interesting design with its mini-phases + phase-fusion approach, denotations, less desugaring, and a more immutable AST. The speed improvements have been very promising even though no performance tuning has happened so far.
TASTY is on track as a standard IR to express code as an entity that can be produced by different compilers and compiled to different targets like (JVM, JS, ...).
SBT recently gained a compiler interface to drive compilation with dotc, and dotc gained support for producing Scala.js IR.
What happened to 1ML since the paper was published?
He is comparing Scala to OCaml, whose module system dates back to 1995, not any current research. My point was that Scala is not the only way to do first-class modules. Subtyping is my biggest issue with Scala and he admits that (variance) is "hard to get your head around" in that talk. 1ML is a research project, which probably would get a lot more funding if it talked about "objects" and "classes".
It's not the only way to do first-class modules, it's just the only currently known way where both modular and functional sides are actually useful without introducing completely separate sub-languages.
Of course it's a lot harder than building one side and then claiming that the other side sucks because the whole paradigm sucks (but we still wanted some research grants (looking at you, OCaml)).
Scala's OO+FP approach allows me to appreciate both sides where they have their strengths. This is in contrast to languages like Haskell which can declare that FP is the best thing ever (because that's the only way to write programs in Haskell ("but, but, you can do these terrible contortions and write things that look like OOP in Haskell" – let's just skip that point, it doesn't add anything to the debate)).
> it's just the only currently known way where both modular and functional sides are actually useful without introducing completely separate sub-languages
This isn't true anymore and 1ML was my counterexample. I sympathise with your point that Scala is a working shipping product.
I prefer to try that myself before coming to such a conclusion. Where is the 1ML compiler? I only found some outdated proof-of-concept 45kB interpreter with version 0.1.
Somebody once said that objects vs. modules is like the working class vs. the ruling class: objects do all the work, modules get all the credit.
In case people were wondering, the 1ML project can be found here: https://www.mpi-sws.org/~rossberg/1ml. The key idea is that ML-style modules (which in the past were thought to need a separate module language) can be coded up in System F-omega (the "workhorse of modern compilation").
I don't think any of the three main people behind it (Russo, Dreyer, Rossberg) work on modules all that much right now. That may explain the relatively slow pace of development.
I actually find non-fatal exceptions to be a poor fit in Scala - much better to have errors be values. I predict successors will not bother with exceptions at all. Even for Java interop, in a language with union types all you'd need is a special marker type Thrown(t: Throwable). I'm kind of surprised Ceylon didn't take this route.
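A rough sketch of what that marker-type idea could look like, written in Dotty/Scala 3 union-type syntax (`Thrown`, `readInt`, and `describe` are hypothetical names of mine, not an existing API):

```scala
// A marker wrapping a caught Throwable, so errors travel as values.
case class Thrown(t: Throwable)

// The union type Int | Thrown says: a result, or a recorded failure.
def readInt(s: String): Int | Thrown =
  try s.toInt
  catch { case e: NumberFormatException => Thrown(e) }

// Callers dispatch on the union with an ordinary pattern match.
def describe(s: String): String = readInt(s) match {
  case n: Int    => s"parsed $n"
  case Thrown(e) => s"failed: ${e.getMessage}"
}
```

No exception ever escapes: the failure is just another value the type system forces you to handle.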
Exceptions are probably the least canonical part of (the sequential part of) the ML-Scala continuum. Maybe it will fall by the wayside. With HKTs monadic error handling becomes relatively painless. I have no strong opinion on this matter.
I think sequential programming won't be the deciding factor. Error handling becomes more difficult when concurrency is involved. It's unclear whether the monadic approach scales to concurrency. The Erlang experience seems to suggest that more complicated approaches might be appropriate.
Maybe. Tooling is coming in leaps and bounds for dynamic languages. Codebases are also trending way down as companies move to "microservices". Both bode really well for dynamic languages.
I actually think the 10 year future is going to be dominated by languages where you don't need to think in terms of types at all but you get all the benefits behind the scenes.
> Tooling is coming in leaps and bounds for dynamic languages.
Yeah, but it tends to involve reimplementing a type system - if anything this is a sign of how mature and easy type systems have become. I think we may see languages with optional/best-effort typing for a while longer, but there will be no more pure dynamic languages; every new language will at the very least have a python3-style language-level standard for how to record type annotations so that tools that work with types can interoperate. And once you have that, it seems stupid not to have the language check at least the simple cases.
> Codebases are also trending way down as companies move to "microservices". Both bode really well for dynamic languages.
I think that's effect rather than cause. If you're writing a big system in (say) Ruby, microservices become the only way to achieve the required level of isolation.
> I actually think the 10 year future is going to be dominated by languages where you don't need to think in terms of types at all but you get all the benefits behind the scenes.
I thought that for a while, but types a really useful tool for thinking with. In Scala I could write Python-like code that wouldn't need any explicit types (they'd all be inferred), but instead I use types to help express the design.
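A small illustration of that trade-off (a toy example of mine): both versions compile with full inference, but the second uses types to name the domain.

```scala
// Python-like: everything inferred, no annotations needed.
val users = List(("alice", 93), ("bob", 47))
val top = users.filter(_._2 > 50).map(_._1)

// Same logic, but the types express the design: the tuple becomes
// a named domain concept, and the signature documents intent.
case class User(name: String, score: Int)

def topUsers(us: List[User], threshold: Int): List[String] =
  us.filter(_.score > threshold).map(_.name)
```

Both forms produce `List("alice")` here; the difference is how much of the design the reader can recover from the code alone.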
I think the most cutting criticisms of Scala are the ones that pointed out your exact quote here: "If you wrote a clean new version today, there's a lot you'd leave out"
That said, the Scala community seems to be moving away from many of the dumber ideas (mostly along the lines of respecting readability over 'expressiveness').
> That said, the Scala community seems to be moving away from many of the dumber ideas (mostly along the lines of respecting readability over 'expressiveness').
I don't think it's even that. I mean the symbolic method names are an unfortunate piece of community history that can't die soon enough, but other than that a lot of warts are just failed experiments or things that we now have a better way of doing or even things where a compiler flag gets you the right behaviour but the default is the bad backward-compatible way of doing things - Scala grew organically to reach the point it's at now, and it shows.
If anything I think Typesafe/Lightbend listened to the critics too much and put too much effort into backward compatibility. The best thing for the language would be to remove a lot of old features, many of which are barely used in any case. On the plus side they've taken the positive step of modularizing the standard library and moving pieces out of the core.
The one negative that is 100% real, and is slowing language adoption, is the problems with its community. Everything else is pretty easy to fix IMO.
There are a variety of 'factions' in the Scala community, with very different agendas. They disagree about which parts of the language are useful, and which ones to outright cut. Research-language focus vs. practical use in the enterprise. From "easy to learn" to "here, to learn Scala, first learn category theory and Haskell". From wanting a large language with batteries included to wanting a really tiny standard library. And many in those groups not only hate each other's guts, but will very happily claim that whatever the other groups are doing is not worth using under any circumstances.
With a situation like that, it's very easy to find criticism of every single piece of the language, and of every single company that has supported Scala in any way. Twitter Scala, Scalaz Scala, and Typesafe Scala are all different: I have never seen this level of division outside of the olden days of LISP. So of course everyone complains about it becoming too big: if you cut some features of the language, entire factions disappear. The complaints are purely political.
So it's very easy to get ideas of doom and gloom from Scala, when it's being used in production in a lot of places, to solve very real problems. Akka, Scalding, Kafka and Spark power a lot of things, but the infighting has to end, and that will take a lot of attitude changes everywhere.
I think even this is overblown. Verizon makes heavy use of Scalaz, but not all projects are done in 'scalaz' style, and this works out OK. There isn't always time to onboard new folks to a fully functional style (which is the overall company culture/preference), and 'typesafe scala' is certainly very pleasant to work with.
That's one of the strengths of the language. You can be productive from day one writing 'java++' style scala, and slowly move to the functional side at the speed and level of 'purity' you desire.
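As a tiny illustration of the two styles living side by side (a toy example, not from any particular codebase):

```scala
// "Java++" style: explicit loop, local mutation - perfectly valid Scala.
def sumOfSquaresImperative(xs: List[Int]): Int = {
  var total = 0
  for (x <- xs) total += x * x
  total
}

// Functional style: same result, no mutation.
def sumOfSquaresFunctional(xs: List[Int]): Int =
  xs.map(x => x * x).sum
```

Both give the same answer; a team can start with the first and drift toward the second at its own pace.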
"There are only two kinds of languages: the ones people complain about and the ones nobody uses."
-- Bjarne Stroustrup
I've seen many programming languages become popular (or fail to) over the years and this quote has always seemed to hold true.
With Scala there was the first wave of people working in it that claimed that it was a grand panacea for all the problems in software (as is always the case with new languages).
However, it wasn't until people started to seriously claim that the language was certainly doomed that it was clearly a success.
In general I have found that Stroustrup's quote is, counter-intuitively, a good way to determine whether the next hot new language will really stick. Furthermore, I have to admit that even some of my favorite programming languages fail this test, and honestly these languages are extremely unlikely to ever achieve mainstream success.
And if you really think about it, it's not so counter-intuitive. Programming languages don't show their real limitations until you are very deep in a large complicated project. The frustrations of the beginner are never the same as the frustrations of an expert and only an expert can really feel that a language is "doomed". This sense of "doom" is often just the realization of the once language X zealot who now sees that this new language is not a true panacea. But this moment of disillusionment is also the moment an idealized programming language has proven itself a practical one. The more people that feel this loss of faith in their favorite new language, the more people are building large, practical, real-world software projects with it.
If you look at Stack Overflow surveys, you'll notice that Scala users are among the happiest users of any language and are very willing to continue using it. It's growing slowly but steadily, and Spark is also driving adoption. With some exceptions, most criticism seems to come from those who haven't had much exposure to Scala and trot out the same tired old material time and again. Once you grok Scala it makes more sense. It's a bit like complaining that Mandarin is hard to understand if you are English. It takes time to learn, but it's worth it if you stick it out.
Nothing against Scala here at all. But I've heard that if you try to learn Mandarin as an adult westerner, you will never get to a point of conversational comfort in the language, let alone sound like you have native mastery of it: it's too complex and has tonality that westerners don't have anything to compare to. So this is a particularly poor choice of analogy if you're trying to make a case for Scala.
People say that about Mandarin, and really language learning as an adult in general, but the truth is most people don't put themselves in the situations to learn a foreign language nor do they exert the consistent effort required to acquire it.
The biggest problem with Mandarin is not the tones but the fact that you can't fall back on written language to help you out with anything at all. It's like learning two difficult languages at once. If you really want to learn it, though, it's entirely possible. All it requires is a human brain and consistent effort.
Actually, it's a good analogy because Mandarin's difficulty is also overblown. It's hard, yes, but I have no trouble being understood by the Chinese, and any Westerner living in China who makes the effort inevitably comes to that point as well.
Yes, scala is big and complex, but IMO, it remains by and large coherent. That might be the ultimate reason why scala developers from all kinds of different backgrounds didn't abandon it even after facing many of its shortcomings and bugs.
I think scala will remain a very favorable language for any team that needs all of the following:
1. the ability to do functional programming with a strong(ish) type system
2. the ability to do actor-based parallel/distributed programming
Can you specify what you mean by too big or too complex? As far as concurrency goes, it's simpler than Java. You have Futures and Akka; that's it.
You should read Java Concurrency in Practice; the entire book is spent explaining all the concurrency mechanisms Java has. And that's a very good book (one of the top on my list). In Scala, I just need to read through a few pages of documentation, and the rest is simply practice in mixing and matching patterns and a few configuration settings.
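For comparison, those "few pages of documentation" boil down to something like this minimal sketch using the standard library's Futures (the function names are made up, and the `Await` at the end is only for demonstration):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Hypothetical async steps standing in for real I/O.
def fetchUserId(): Future[Int]       = Future { 42 }
def fetchQuota(id: Int): Future[Int] = Future { id * 2 }

// Futures compose with for-comprehensions - no locks, no callbacks.
val quota: Future[Int] =
  for {
    id <- fetchUserId()
    q  <- fetchQuota(id)
  } yield q

// Blocking here only to show the result; real code keeps composing.
val result = Await.result(quota, 5.seconds)
```

The same `map`/`flatMap` vocabulary used for collections drives the concurrency, which is most of why the learning surface is so small.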
I use Scala frequently for internal microservices development. I also write a fair amount of Clojure. I believe that a lot of good and robust software is built in Scala, but that it suffers from a kind of complexity fetishization that offsets, for debugging purposes, a lot of the positive effects of robust typing. Compiler speed is also a problem for me personally.
Given my choice of options, I will choose to build new systems in almost any other JVM language, including stock Java 8. However, the language isn't going anywhere, and is worth learning. I find that feedback is fastest in Clojure. Your mileage, as always, may vary.
Clojure is growing steadily. It just doesn't have rocket ship growth. It is certainly mature enough to use in production systems and many companies do.
It has a few adoption issues though. The strong-typing crowd doesn't like its dynamic typing, it's not a C-like language, and it doesn't have a large company backing it.
F# is perceived in the industry as a testing ground for good ideas for C#, and as a second-class citizen.
Scala isn't “becoming” too big and too complex. It already is. But, if you dig deep enough into the implementation details, you will find that pretty much every production-ready language is too big and too complex.
I wonder if people are having the same experience I had when I first encountered Scala.
I was the newest dev on a team with an existing code base built around Spray, Akka, and Slick. And they wrapped an actor around Slick so it "fit better". I've written tons of asynchronous code (Java, C++, Linux, and Windows), so that wasn't what got me. It was just how much time the original devs spent making sure everything was an actor, even in areas that didn't need to be.
At that point, having never seen Scala, I attributed it to the language itself rather than to the use of those libraries.
Having now spent time with just the language, I like it. My introduction was jarring only because of the "everything must be asynchronous" nature of the code base I "learned" on.
You'll get a completion certificate for free. What you won't get is an accredited certificate that you can show off on LinkedIn, to potential employers/investors etc.
The thing is that most of the things people don't like about Scala will actually disappear without anyone doing anything.
One of the greatest problems is compilation time, but as computers get faster this will diminish.
On top of that, the team is working on better Scala <-> Java interop and a newer compiler that will bring compilation times down, too.
Improved incremental compilation of #scala in the making yields 10-40x compilation speedups: https://github.com/sbt/sbt/issues/1104#issuecomment-188529825
and
After turning on the java8 backend in Scala, our bytecode size decreased by 50% and compile times decreased by 75%. Amazeballs!
Indeed, after I turned the java8 backend on, I got reduced compile times (after changing code).
We actually use Scala as an improved Java, so we probably don't use that many "heavy" features.
Looking forward to the reactions regarding compilation times when sbt 1.0 and Scala 2.12 land. I hope a broad spectrum of code gets these compilation-time improvements.
Sure, but Scala 2.12 replaces anonymous classes with Java 8 functional interfaces. That means a vast reduction in generated bytecode, and thus not only faster compile times but perhaps also better run-time performance (though HotSpot, miracle of engineering that it is, does a pretty good job of optimizing most anything you throw at it).
That said, Scala 2.12 will still compile relatively slowly compared to, say, C, Go, Java, and similar languages not high up on the abstraction ladder. The real compiler speed-up will come with Dotty (Scala 3). Martin Odersky estimates that Dotty will be up to 4x faster than present-day Scala.
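As a small illustration of the 2.12 change: a Scala function literal can now be used directly where a Java SAM (single abstract method) type is expected, and it compiles via the same invokedynamic machinery javac uses for Java 8 lambdas rather than as a separate anonymous-class file:

```scala
object SamDemo extends App {
  // In Scala 2.12+, a function literal converts to any Java SAM type,
  // here java.lang.Runnable -- no anonymous inner class is emitted,
  // which is where the bytecode-size reduction comes from.
  val task: Runnable = () => println("hello from a lambda")
  task.run() // prints "hello from a lambda"
}
```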
I've been working with Scala for the last two months and I like it. The things I don't love are the long compile times (lots of time wasted here) and the lack of community-accepted/official coding practices. If sbt ran scalastyle by default on packaging/testing, that might help enforce a consistent set of good common practices in community code.
It's just been released, but I like the idea of adopting a tool along the lines of gofmt. Coding-style discussions are such a bike-shedding magnet; it's good if you can just not think about it (and are able to follow the style anyway, regardless of IDE).
I'm the author of scalafmt; happy to hear there's interest in the project! Scala has a very rich syntax. I remember being unsure, when first coding in Scala, where to put newlines or how far to indent. My hope is to make that problem go away.
Scalafmt is indeed new, just released 0.1.0 last week. However, it's already become useful for me with the IntelliJ plugin. Scalafmt is practically my full-time job for 2 more months so you can expect it to get more mature soon. Feature requests and bug reports are very welcome.
What would also be great is a way to announce Scala jobs regionally. We are still looking for a good way to find job candidates, but since we are a small company it's really hard to find people, especially Scala developers.
I would guess that your problem is more geographical. Mildly amusing anecdote - a friend of mine went to an interview for a dev position (without doing proper due diligence) at a company just outside of London, but the location was quite isolated - he ended up on a farm complex where this company had their office. He said he could hear actual farm animal noises next door during the interview and these people were desperate to offer him the job, but for obvious reasons he declined.
What are the obvious reasons? Is a farm a disreputable place to work in the UK? Mostly it's confusing, not concerning. I would imagine a farm is harder to keep afloat than a startup.
We are in the south of Germany and the company I'm working for is named 'envisia GmbH'.
And that's the problem. Offering jobs on the internet (even for money) is awful and expensive. Also, this region doesn't have many programmers, so it's really hard for a small company to find people.
We Scala people have gotten a lot of mileage out of pre-existing scaffolding we've inherited from the JVM ecosystem, as well as newer pieces that grew up independently of Scala:
- Both Maven and SBT make it really easy to add third-party package repositories. Whether this is a good thing or not, this somewhat obviates the need for a centrally managed package registry.
- Many organizations run an internal "repository of repositories", such as Artifactory or Sonatype Nexus (which is the product that backs the OSS Sonatype repository). Such installations decouple management of package repositories from projects that use such repositories, which even further removes the need for a central authority.
- SBT can pull in packages directly from GitHub or any other Git-accessible repo. While this isn't used much, I suspect that in many ad-hoc setups (such as my personal projects) there's a lot to be gained from not having to bother with "properly" publishing packages that are not meant to be published.
- It is also incredibly easy to publish artifacts of Scala projects to Amazon S3, Bintray, even GitHub pages. There's a lot to be said about self-managing a small piece of infrastructure as opposed to handing over control to some other, possibly disinterested central authority.
I'm sure that not everyone will agree with all of the above, yet combinations of the above approaches all exist in the wild, making centralization less desirable and/or necessary.
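For instance, the first and third points above look something like this in a build.sbt (a sketch; the repository URL and project coordinates are made up):

```scala
// build.sbt -- illustrative only; URLs and names are hypothetical.

// Adding a third-party or internal repository (e.g. a Nexus/Artifactory
// installation) is a one-liner, with no central registry involved.
resolvers += "Internal Nexus" at "https://nexus.example.com/repository/releases"

// Depending on a project straight from a Git repository, skipping
// formal publishing entirely -- handy for ad-hoc or personal setups.
lazy val util = RootProject(uri("https://github.com/example/util.git"))

lazy val root = (project in file("."))
  .dependsOn(util)
```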