Nice to see pragmatism playing a role here. A big problem with Java has always been the "design by committee" nature of it where we end up with some theoretical best case that satisfies everyone's egos but turns out to utterly suck when used in practice.
From another point of view ... it's a shame we needed so many years to come up with "let's do it like C#".
I think checked exceptions are just ahead of their time. They make sense when you statically check your code against contracts, which is why they showed up again in the spec# research project. http://en.wikipedia.org/wiki/Spec_Sharp
Java got them the wrong way around - unchecked should have been the default, with checked available when you need them.
Because Java does them the other way around, you either get code bloat from the large number of exceptions you are catching and handling subtly differently, or you have a generic catch(Exception e){..} block. Neither is ideal.
Frankly, this is one complaint about Java I have never understood. I myself have missed checked exceptions in C# when I have had to port Java code to C#.
I don't think this is the case here. The question isn't whether you "add" or "remove" something. The question is whether the old programs will still work or not.
If I understand checked exceptions correctly, removing them will only allow more programs to compile. The old ones will still be valid as their (now optional) try-catch/finally blocks won't go anywhere.
throws RuntimeException and try/catch around RuntimeException would still be valid, so I can't see how changing all checked exceptions to runtime would affect the flow of the code.
If users wanted to they could declare 'throws TheirParticularRuntime' all the way up the stack and it would work like checked exceptions, right?
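Right, though the compiler won't enforce it. A minimal sketch (the exception and method names here are made up): a RuntimeException subclass can still be listed in a throws clause as documentation, and callers may catch it, but they are no longer forced to.

```java
// Sketch: a hypothetical unchecked exception declared in a throws clause
// purely for documentation; the compiler does not require callers to handle it.
public class ThrowsDemo {
    static class QuotaExceededException extends RuntimeException {
        QuotaExceededException(String msg) { super(msg); }
    }

    // Legal but optional declaration for an unchecked exception.
    static void reserve(int amount) throws QuotaExceededException {
        if (amount > 100) {
            throw new QuotaExceededException("amount " + amount + " over quota");
        }
    }

    public static void main(String[] args) {
        try {
            reserve(150);
        } catch (QuotaExceededException ex) {
            System.out.println("caught: " + ex.getMessage());
        }
    }
}
```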
Because a huge proportion of the time there's nothing useful that you can actually do when you catch an exception, other than print the stack trace and exit. And because people are forced to catch the exception instead of letting it bubble up and stop the program, half the time they just swallow it and ignore it, which is even worse.
try {
    // do stuff
} catch (TheCheckedException ex) {
    // ignore it
}
then the code continues even though it could be in a bad state if an exception did occur.
It's a much better solution to let it propagate up the call stack until it can be dealt with appropriately (e.g. by ending the application, giving a 503 error on a web page request, etc.) Unchecked exceptions will propagate up the call stack by default if you don't do anything, making it harder for people to write bad exception handling code.
I like checked exceptions because you can't not deal with the error condition. Swallowed exceptions are a code smell that any half-decent developer will notice, whereas failing to check for some return value is much more subtle. Java allows you to have both, since you can design your exceptions as unchecked, or rethrow a checked exception wrapped in a runtime exception.
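The wrap-and-rethrow option looks like this, using the real java.io.UncheckedIOException wrapper; the method name and file path are made up for illustration:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class WrapDemo {
    // Callers are not forced to handle IOException, but the original
    // cause is preserved on the unchecked wrapper for diagnostics.
    static String readConfig(String path) {
        try {
            return new String(Files.readAllBytes(Paths.get(path)));
        } catch (IOException e) {
            throw new UncheckedIOException(e); // rethrow as unchecked
        }
    }
}
```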
Yes, this! The most irritating debugging I've had to do in Java code is when someone catches an exception, but fails to handle it effectively, resulting in another exception being thrown later. If you don't know how to handle it effectively, just add the 'throws' clause. It's less typing than try/catch, so I don't understand why people choose to make extra work for themselves and for me.
I see a lot of comments here that seem to assume catching the exception is the only option.
Exactly. In Eclipse it's so ridiculously easy to make the method pass it on by adding the throws clause to your method. I find checked exceptions really useful every day :)
They tend to force you to handle an exception at the wrong level. Even if you handle them correctly, you still have to catch and wrap them at a low level to avoid leaking an implementation detail into method signatures. IDEs tend to be overly specific about the exception types a method can raise when auto-fixing method declarations.
In the end you write boilerplate and fight the IDE.
Finally a decision was made. The decision process is too slow; this is why Java in 2011 still lacks features that languages like C# have supported for years.
Excellent to have this finally resolved. I'm sad to not see something a little closer to the ruby block syntax like in some of the early proposals, though. Special syntax for lambdas as the last argument to a function lets you write very readable code for a host of useful cases.
Nota Bene: A Ruby Block is not the same thing as a Ruby Lambda, although conversion between them is easy. Yielding to a block has somewhat different semantics than calling a lambda, especially with respect to non-local return.
I think stating it that way trivializes the amount of work that _has_ gone into lambdas.
There's been a lot of other, more fundamental things discussed over that 2-3 year period about their implementation; way more than syntax.
Yes, to some extent there were debates about how the implementation would work, but also about how they would interact with all the various other features of Java, most of which were not constructed with lambdas in mind: generics, exceptions, inheritance, and so on, not to mention lots of fights over the syntax.
For instance, there was a long battle over whether () would be enough to invoke a lambda, or whether some sort of apply() or invoke() or .() method would be required. A lot of the problems with () come down to differences between the way fields and methods are inherited in Java, especially with respect to shadowing. I don't even know how it all came down, but it gets nasty; a lot of this stuff just wasn't designed with lambdas in mind.
I see that the proposed lambda expressions can omit type annotations on the parameters. Does that mean that Java 8 will have at least some limited form of type inference?
If they will be based on anonymous inner classes, then yes, they'll capture the variables from the surrounding scope. Those variables however will be 'final', which means that you can't re-assign them, even if their values are still liable to side-effects (e.g. setting fields, mutating collections, etc).
They are not based on anonymous inner classes (the implementation is likely to be much different) but they have the same restrictions as BDFFL_Xenu described.
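A small sketch of that restriction (the class and variable names are mine): the lambda can freely call methods on a captured object, but reassigning the captured variable itself is a compile error.

```java
import java.util.ArrayList;
import java.util.List;

public class CaptureDemo {
    static List<String> collect() {
        List<String> log = new ArrayList<>(); // effectively final reference
        Runnable task = () -> log.add("ran"); // OK: mutates the captured object
        // log = new ArrayList<>();           // would not compile: log must be
                                              // (effectively) final to be captured
        task.run();
        return log;
    }
}
```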
There's a JVM language out there that initially allowed try-statements without catch or finally clauses, e.g.
try {
    // for a short time only to decide
}
But one of its despots, five years after the language was first released, changed the syntax to make a catch or finally clause compulsory, e.g.
try {
    // for a short time to decide
} finally {}
Because I had used these standalone try-statements a lot, I had to go through all my code and add empty finally clauses to them all.
Scala, on the other hand, still allows standalone try-statements. And Scala and C# also have a better closure syntax than that language, and are more worthy to be copied by Java 8.
> But one of its despots, five years after the language was first released, changed the syntax to make a catch or finally clause compulsory
Are you the person who believes the Groovy developers are personally working to invalidate your web tutorials so they can discredit your fork of the language?
Of course, Groovy also differentiates itself from Guava by having a dedicated language syntax, which is no doubt why the cartel don't want to standardize it. If it was standardized, they couldn't make some random blogger's sample code stop working by suddenly requiring all try-statements to have an empty finally clause.
Perhaps you can educate me on your better reasoning, but the only purpose I can see for an empty try block is to limit the scope of variables in a method. Quite frankly, if you find yourself in need of artificially limiting the scope of variables in a method, you've probably done too much in the method and need to refactor.
Try blocks are for just that: trying something you know ahead of time might fail. There's no point in using the try paradigm if you don't also intend to handle the expected failure.
An empty try block allows us to repeat variable declarations with the same name in a long stretch of scripty-style code.
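For comparison, plain Java accepts a bare block statement for exactly this, so the same name can be declared fresh in each scope; a toy sketch:

```java
public class BlockScopeDemo {
    static String demo() {
        StringBuilder out = new StringBuilder();
        {
            int result = 1 + 1;           // first scope
            out.append(result);
        }
        {
            String result = "redeclared"; // same name, fresh declaration, new type
            out.append(',').append(result);
        }
        return out.toString();
    }
}
```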
> You can always use braces to separate a block without try at all.
You can use braces in Java, but not in that language I'm talking about: it'll throw an error saying "Ambiguous expression could be a parameterless closure expression, an isolated open code block, or it may continue a previous statement; solution: Add an explicit parameter list, e.g. {it -> ...}, or force it to be treated as an open block by giving it a label, e.g. L:{...}, and also either remove the previous newline, or add an explicit semicolon ';'".
The standalone-try looked far more elegant than:
dudlabel: {
    // do something
}
Better for that despot to re-enable standalone try-statements in Groovy than petitioning the Java 8 designers that thin arrows look better than industry-standard thick arrows.
> An empty try block allows us to repeat variable declarations with the same name in a long stretch of scripty-style code.
Either I'm missing something here, or this is really as bad as it sounds.
I understand you want nested scopes that end before the method's scope ends, but cannot for the life of me think of a defensible code example. Why not just make them into methods?
In C++, a block is often a way to ensure automatic variable destruction. This is important because in C++ there is often important work going on in the destructor, for example in the RAII idiom.
Example:
...
...
{ // critical section
    QMutexLocker locksInConstructorAndUnlocksInDestructor(&mutex);
    a = doStuff(a,b,c);
    b = doOtherStuff(a,b,c);
    c = andAnother(a,b,c);
}
..
..
It would be overkill to make this block a function, especially when code in critical section changes many variables.
I did say scripty code. My typical scripting session from scratch starts with opening a simple editor and starting to type in code. The structure is loose, perhaps it's testing something I've written in another file in a statically-typed language. As the code evolves, I slowly give it more structure by reworking it. At some stage I MIGHT put some code into methods.
But before that stage, nested scopes are useful because I'm far less likely to pass the wrongly-named variable into a function when I'm cutting and pasting code. An example coding snippet...
If I use testdata1, testdata2, result1, result2, etc, I might forget to rename a variable after cutting and pasting, and think a test works when it doesn't.
It sounds like you want support for a bad practice. Each of those code blocks is a logically distinct task, and it would serve readability to factor them into functions. Further, you're abusing a language construct to do something other than its purpose.
I've never programmed in Scala, so I'm going to write Python code that represents what I think is a better way of doing that kind of testing:
def test1(data):
    result = SomethingToTest(data)
    assert result == result.getSth()

def test2(data, out):
    result = SomethingToTest(data, out)
    assert result == result.getSth()

out = open("blah")
test2("abcdefg", out)
test2("hijk", out)
test1("hijk")
The problem is not that copy-pasted code may be incorrect. The problem is that you're copying and pasting code and expecting it to be correct. The way you're nesting scopes is confusing.
But for a small function (which a lambda typically is), the syntactic overhead of "new Runnable() { public void run()" swamps the actual logic. When I read this version, it's hard for my eyes to find what work is actually being done because there are so many braces and brackets and words. The big deal of lambda syntax is that it lets you focus on the code that does the work (the doStuff) instead of the adapter (the "new Runnable() { public void run()") that the library needs around that code.
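A side-by-side sketch, with a made-up doStuff() standing in for the actual work:

```java
public class SyntaxDemo {
    static int counter = 0;

    static void doStuff() { counter++; } // hypothetical work

    public static void main(String[] args) {
        // Pre-lambda: the adapter dwarfs the one line of work.
        Runnable before = new Runnable() {
            @Override
            public void run() {
                doStuff();
            }
        };

        // Java 8 lambda: only the work remains visible.
        Runnable after = () -> doStuff();

        before.run();
        after.run();
    }
}
```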
it's hard for my eyes to find what work is actually being done
Aside from whether a lambda expression or an anonymous class is the better choice overall, this just happens to be syntax you're not familiar with. It's been years since I've done any work in Java, but I read this over some coffee and didn't even slow down for it.
Lambdas will be nice for the situations where there is more boilerplate than implementation, but the anonymous inner class in java isn't going to die.
I think the difference between capturing values at call time and full closures only matters if you are mutating variables. My closures rarely do that, since my code is fairly functional-ish, so for me it does not matter.
I'm more familiar with C++ lambdas, and in those you must specify how you will capture variables (no capturing, by reference, by value, or some mix of those).