I mean, I'm so used to working in Java, which has the same behaviour, that maybe familiarity with that kind of code has made me totally unable to see the problem, but I still don't understand it.
First, let's state the basics: the meaning of the + operator is overloaded. For numeric types, it sums the operands; for strings, it concatenates them. These are two different meanings in two different contexts.
For the numeric types, the language will not protest if you add a float to an int (and I think most programming languages won't either). The rules are quite clear: the int is coerced to a float, the two floats are added, and the result is a float.
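In Java terms (my assumption, since the thread started there), that widening looks like this:

```java
public class Coercion {
    public static void main(String[] args) {
        int i = 5;
        double d = 5.5;
        // The int is widened (coerced) to double before the addition;
        // the result is a double, never an int.
        double sum = i + d;
        System.out.println(sum); // prints 10.5
    }
}
```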
For the string operation, you could require all the operands to have the same type, but you could also apply the same kind of coercion: convert everything (yes, it works for objects of any sort) to a string, then concatenate.
OK, it may be odd mathematically, but let's see it for what it is: very handy syntactic sugar in the form of an overloading of the meaning of +, which obeys simple rules, and thus has no potential to mean something other than what the programmer meant.
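A quick Java sketch of that coercion in action: + converts the non-string operand to a string first, for any type, then concatenates.

```java
import java.util.ArrayList;

public class Concat {
    public static void main(String[] args) {
        // The double 5.5 is converted to "5.5", then concatenated.
        System.out.println("5" + 5.5);                        // "55.5"
        // Works for any object, via its string conversion.
        System.out.println("total: " + new ArrayList<Integer>()); // "total: []"
    }
}
```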
The thing is, it's not just an overload; it's an overload and a typecast. Those are two separate questions, and OP is really complaining about the typecast, not the overload.
The ints vs. floats thing is kind of a red herring. In that specific case, at least you're still talking about numbers. The argument for having contagion for 5 + 5.5 and overloading for "5" + "5.5", but throwing an exception for "5" + 5.5, is that the first is a pair of numeric types and the second is a pair of strings, but the third is a pair of unrelated types. You can say white is lighter than black, and a feather is lighter than an anvil, but you can't say that white is lighter than an anvil, because that's nonsense.
Incidentally, this is why, say, Python and Ruby (and I imagine Lisp) people insist upon the distinction between dynamically typed and weakly typed. If I fire up a node.js repl, I find that '50' + 5 is '505', but '50' - 5 is 45, and am reminded why JavaScript drives me nuts. In the extreme you get things like [this](http://phpmanualmasterpieces.tumblr.com/post/33198366857/lay...) and [this](https://www.destroyallsoftware.com/talks/wat). I realize that Java probably doesn't do anything near that bad, but the point is that it's a question of balancing convenience vs. error prevention, and we're talking about a very strictly-typed language, which will naturally attract people who tilt toward wanting the error detection. Even dynamic languages that swing the other way on most things often find that you don't need to make your math operators cast numbers into non-numeric types (and risk the resulting runtime errors) to have convenient string handling.
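For comparison, Java's version of this (a sketch): it accepts the concatenation the same way JavaScript does, but rejects the subtraction at compile time instead of guessing at runtime.

```java
public class MixedOps {
    public static void main(String[] args) {
        // Same surprise as the node.js repl: the int is converted and concatenated.
        System.out.println("50" + 5);     // "505"
        // Unlike JavaScript, there is no string "-" overload to fall back on:
        // System.out.println("50" - 5);  // does not compile: bad operand types
    }
}
```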
In fact, I think Ruby's string interpolation is actually better syntactic sugar than Java's `+` cat overload. Consider that I want to print a string along the lines of "1 + 5 = 6" but with arbitrary integers. Here's how it might look in Ruby:
puts "#{x} + #{y} = #{x+y}"
It's hard to imagine a syntax much better than this, especially for short strings, since the shape is basically identical to the string it's creating.
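For contrast, the closest Java equivalent I can think of (Java has no interpolation) is String.format, which keeps the shape of the string but pushes the values off to the end:

```java
public class Interp {
    public static void main(String[] args) {
        int x = 1, y = 5;
        // %d placeholders stand in for the integers; the values trail behind.
        System.out.println(String.format("%d + %d = %d", x, y, x + y)); // "1 + 5 = 6"
    }
}
```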
Python 3 is less pretty, but still reasonable:
print("{} + {} = {}".format(x, y, x+y))
print(x, "+", y, "=", x+y)
Meanwhile, neither Ruby nor Python makes you declare types as a rule. In fact, one of the non-backward compatible changes in Python 3 was to make it so that 1/2 returns 0.5, since that's how numbers work and it's probably what you meant (there are still ways to specify integer division if you want). However, in either language, if you write "5" + 5, they will raise a TypeError because that doesn't actually mean anything, is likely to be a bug, and is always easily rewritten in a concise but less ambiguous way. Speaking of which:
> have no potential to mean something else than what the programmer meant
is just false. There is a very obvious way for it to mean something other than what the programmer wanted: when the programmer forgot to cast a string to a number and tried to add it to a number. This is particularly a risk in dynamic languages or those using type inference (it's harder to pull off in languages where you have to explicitly declare all your variable types, which may make it a non-issue in Java specifically).
I think you're mixing casts and conversions. At least in the parlance that I'm used to, you can only cast a number to a String or a String to a number in a memory-unsafe language (and it's rarely what you want and a bit dangerous). When you do this, the system will take your word that the data is actually a String and interpret those literal bits in memory as one. In a memory-safe language, this cast would, of course, likely raise an exception. In Scala, you can cast any object to any type with that object's asInstanceOf method.
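A quick Java sketch of the memory-safe case: the cast through Object compiles, but fails at runtime, because the runtime knows the bits really are an Integer.

```java
public class CastDemo {
    public static void main(String[] args) {
        Object o = Integer.valueOf(5);
        try {
            String s = (String) o; // the downcast is allowed by the compiler...
            System.out.println(s);
        } catch (ClassCastException e) {
            // ...but the runtime checks the actual type and refuses,
            // rather than reinterpreting the Integer's bits as a String.
            System.out.println("ClassCastException: an Integer is not a String");
        }
    }
}
```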
In Scala (and many other languages), you can convert numbers to Strings and back. In fact, any object can be converted to a String via that object's toString method (which every object has as a quirk inherited from Java). Strings can also be converted to numbers via toX methods (e.g. toInt, toFloat, etc.). Of course, these conversions will raise exceptions if the content of the String does not match the format of the numeric type you are converting to.
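In Java terms (as I understand it, Scala's toInt and friends are thin wrappers over this kind of thing), the conversions and their failure mode look like:

```java
public class Conversions {
    public static void main(String[] args) {
        // Any value -> String: toString under the hood (here via String.valueOf).
        String s = String.valueOf(42);   // "42"
        // String -> number: parse methods, which validate the format.
        int n = Integer.parseInt("42");  // 42
        try {
            Integer.parseInt("5.5");     // wrong format for an int...
        } catch (NumberFormatException e) {
            // ...so the conversion raises instead of silently mangling the value.
            System.out.println("NumberFormatException: \"5.5\" is not an int");
        }
        System.out.println(s + " " + n);
    }
}
```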
This all matters because it is how the + method on String works: it doesn't (unsafely) cast its argument to a String. Rather, it (safely) converts it via its toString method. You may dislike the idea this method exists (I certainly do, and wish it could be deprecated now that Scala has String interpolation), but it is just a method that happens to be defined on the standard library's String type and not a major defeat of the type system.
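You can see that it's conversion rather than a cast by handing + an object with its own toString (Point here is a toy class of my own, not anything from the standard library):

```java
public class ToStringConcat {
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public String toString() { return "(" + x + ", " + y + ")"; }
    }

    public static void main(String[] args) {
        Point p = new Point(1, 2);
        // + doesn't reinterpret p's bits as a String; it calls p.toString()
        // and concatenates the result.
        System.out.println("p = " + p); // "p = (1, 2)"
    }
}
```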
You start with an integer and end up with a string. The intermediate steps required to get from one to the other aren't particularly relevant to the point I was making.
The string concatenation API for Java IS terrible, but see modersky's post for the reasoning.
Just don't mistake a bad API for some kind of belief that Scala in general idly converts between types. The weakest point is APIs that use methods defined in java.lang.Object (e.g. toString and equals); they can universally use these methods without restricting the type.
Note that if you define a type as a knowledge of what actions can be performed on an object, then no type safety has been lost; toString() is universally available, though it may not do exactly what you want.