One of the goals of Project Amber [1][2] is to move the language towards a more data-oriented programming model. With Records, patterns, sealed classes, etc., it should feel much less verbose over time. And unrelated to your concern but addressing some of the learning overhead, see Paving the Onramp. [3]
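To make the "data-oriented" point concrete, here is a small sketch of the style records, sealed classes, and pattern matching enable together. The `Shape` hierarchy and `area` method are illustrative names, not anything from Project Amber itself:

```java
// Illustrative sketch: a sealed hierarchy of records, consumed with
// pattern matching in instanceof (all finalized Java features).
public class DataOrientedSketch {
    sealed interface Shape permits Circle, Rectangle {}
    record Circle(double radius) implements Shape {}
    record Rectangle(double width, double height) implements Shape {}

    // Pattern matching in instanceof keeps the logic free of explicit casts.
    static double area(Shape s) {
        if (s instanceof Circle c) return Math.PI * c.radius() * c.radius();
        if (s instanceof Rectangle r) return r.width() * r.height();
        throw new IllegalStateException("unreachable: the hierarchy is sealed");
    }

    public static void main(String[] args) {
        System.out.println(area(new Rectangle(3, 4))); // prints 12.0
    }
}
```

The data (records) and the operations on it (plain static methods) stay separate, and the sealed interface lets the compiler know the hierarchy is closed.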
Any word on when some of Project Amber's features will come out of preview? I get excited each JDK release for some of those features, but it seems like in most releases the preview count just gets bumped, and a few more features get added to the preview holding pattern.
Text blocks, var, records, sealed classes, and pattern matching in instanceof have been out of preview for some time, but two more features -- record patterns (https://openjdk.org/jeps/440) and pattern matching in switch (https://openjdk.org/jeps/441) -- are about to come out of preview.
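For anyone who hasn't tried the two features mentioned, here is a sketch of what they look like combined (record patterns plus pattern matching for switch, both finalized in JDK 21). `Point` and `describe` are made-up names for illustration:

```java
// Sketch of record patterns (JEP 440) and pattern matching for switch (JEP 441).
public class SwitchPatterns {
    record Point(int x, int y) {}

    static String describe(Object o) {
        return switch (o) {
            // Record pattern with a guard: destructures Point and tests x == y.
            case Point(int x, int y) when x == y -> "on the diagonal";
            case Point(int x, int y)             -> "point (" + x + ", " + y + ")";
            case String s                        -> "string of length " + s.length();
            default                              -> "something else";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Point(2, 2))); // prints: on the diagonal
        System.out.println(describe(new Point(1, 5))); // prints: point (1, 5)
    }
}
```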
Is there any work to make records actually usable out of the box?
Things like copying or creating derived records are a huge pain (or a slight pain with code generators), while other languages solved this long ago (even JS and C#).
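To illustrate the pain point: Java records have no built-in copy/with syntax (the kind of thing C#'s `with` expressions or Kotlin's `copy` provide), so every derived copy needs a hand-written "wither". `ServerConfig` and `withPort` below are made-up names:

```java
// Sketch of the boilerplate needed today to derive a modified record:
// one hand-written "wither" method per field you want to change.
public class RecordCopyPain {
    record ServerConfig(String host, int port, boolean tls) {
        // Without language support, this must be repeated for every field.
        ServerConfig withPort(int newPort) {
            return new ServerConfig(host, newPort, tls);
        }
    }

    public static void main(String[] args) {
        ServerConfig base = new ServerConfig("example.org", 80, false);
        ServerConfig derived = base.withPort(8443);
        System.out.println(derived);
        // prints: ServerConfig[host=example.org, port=8443, tls=false]
    }
}
```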
Sure, but you could refer to the lineage of a dozen languages. Most of the world runs Java, and evolving it takes care and consideration: not alienating a massive user base, and ensuring that it evolves in the right way rather than chasing fashions and trends.
Scala is super complex, introduces breaking changes all the time, and is super slow to compile; multi-language projects are also complex, and the decisions that are right for Scala may not be right for Java.
The "super complex" and "introduces breaking changes all the time" comments are unsubstantiated FUD. Scala has evolved a lot in the last few years, particularly in the area of binary compatibility. It's a wonderful language and I can only recommend others try it. This from a programmer very happy with Scala.
And this is from a programmer that is not happy with Scala. Every single time I've upgraded the compiler there has been a breaking change; they don't strictly follow semver. Most recently, upgrading from the 2.11 to the 2.13 compiler, they made breaking serialVersionUID changes (I know, don't use Java serialization, but that wasn't my decision), and none of it was noted in the release notes.
When it comes to super complex just look at any of the type signatures of the standard library for collections:
def ++[B >: A, That](that: GenTraversableOnce[B])(implicit bf: CanBuildFrom[IndexedSeq[A], B, That]): That
Comparing this to Java it is "super complex". IntelliJ can't even figure out the types sometimes.
Scala 3 came out not too long ago and fixed plenty of shortcomings. I do recommend giving it another try.
Also, the reason Scala’s collections have such complex signatures is that it is hands down the best collections library of any language I have used.
Right and I don’t want to have to deal with half the community being split. Or half my coworkers doing it one way. These are just things I don’t want to deal with. I said my reasons were petty! But they’re my reasons.
I don't know about the serialization issue but I don't doubt you had it.
> def ++[B >: A, That](that: GenTraversableOnce[B])(implicit bf: CanBuildFrom[IndexedSeq[A], B, That]): That
I should point out that this is the Scala 2.12-and-earlier signature. `CanBuildFrom` has been gone from the Scala collections since Scala 2.13; the equivalent method is now roughly `def ++[B >: A](suffix: IterableOnce[B]): CC[B]`. In fact, the collections were redesigned for Scala 2.13 primarily to simplify method signatures, following community feedback.
And then when a dependency of one of your dependencies is broken against the latest version. I know some of this has been cleaned up, but it is one of the main reasons I no longer use Scala - I have a rule about how long I'm willing to spend on build issues vs. actually writing code, and Scala was always on the wrong side of that.
re: Complexity - At least the signatures for the core collections have been cleaned up a fair amount. That said, the richness of the type system and the prevalence of operator overloading always made it feel like a language you could be really productive in once you knew the language and the current codebase really well, but was really hard to just read through unfamiliar code and know what is going on.
> And then when a dependency of one of your dependencies is broken against the latest version
How is this any worse than Java? My most vexing dependency-hell issues have involved breaking API changes to Hamcrest matchers and Apache Http Client; more recently Jackson-databind. All of those are Java libraries, brought in via transitive dependencies, usually from Java libraries.
There was definitely a large and vocal part of the community that wanted that, but I think early on there was a lot of tension between Scala being "better Java" and "Haskell for the JVM", and that probably hindered a lot of adoption.
Sure, but what about when you want to pull a Kotlin library into a Scala application? It works, but usually only works well if the library author limited themselves to the subset of the language that interops with Java (the language).
More features in Java (the language) give other JVM languages a larger set of tools to design interop support around, while Java remains a place for these features to incubate without those languages bearing the overhead of the JEP process. Sometimes these language-level changes come with JVM modifications to support them as well, letting other languages clean up their implementations.
It’s terrible, but at least I rarely have to touch the config. I guess the silver lining is that it’s so bad we don’t use it for anything besides dependency management, so the configs are simple, copy-pasted between projects, and rarely touched.
SBT is the worst thing about the Scala ecosystem. Just stick with Maven (or Gradle, if that's what you're using), enable the Scala plugin, and start trying Scala in more flexible leaf areas of your program (e.g. integration tests, ancillary tools, data migrations). See if it feels right for you.
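For a Maven build, "enable the Scala plugin" amounts to a fragment like the following, using the community `scala-maven-plugin` (the version numbers below are illustrative; check for current releases):

```xml
<!-- Sketch: adding Scala compilation to an existing Maven build.
     Versions shown are examples only. -->
<dependencies>
  <dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.13.12</version>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>net.alchim31.maven</groupId>
      <artifactId>scala-maven-plugin</artifactId>
      <version>4.8.1</version>
      <executions>
        <execution>
          <goals>
            <goal>compile</goal>
            <goal>testCompile</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With that in place, Scala sources under `src/main/scala` and `src/test/scala` compile alongside your Java code, so you can keep Scala confined to leaf areas like tests and tooling while you evaluate it.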
[1] https://openjdk.org/projects/amber/
[2] https://inside.java/tag/amber
[3] https://openjdk.org/projects/amber/design-notes/on-ramp