Everything about Java 8 (techempower.com)
168 points by mhixson on March 27, 2013 | hide | past | favorite | 179 comments



I'm more excited by Java 8 than I was by at least the last several versions. Java 7 was especially underwhelming.

The lambdas are the new hotness that everyone is aware of. But the work done on streams is almost equally impressive.

There are several other small additions and reductions in pain that, when taken as a whole, make this update significant. For example, we finally will have a native JodaTime-like replacement for Calendar, Date, and the supporting utilities. I've long considered the need to enforce external thread safety on DateFormatters to be one of the silliest design failures in Java.
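As a minimal sketch of the difference (using the java.time API from JSR-310 as it's shaping up; names may still shift before release): DateTimeFormatter is immutable and thread-safe, unlike SimpleDateFormat, so it can be shared freely across threads.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class TimeDemo {
    public static void main(String[] args) {
        // DateTimeFormatter is immutable and thread-safe, unlike
        // java.text.SimpleDateFormat, which must be externally synchronized
        // or confined to a single thread.
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd");
        LocalDate date = LocalDate.of(2013, 3, 27);
        System.out.println(fmt.format(date)); // prints 2013-03-27
    }
}
```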

Disclosure: this summary was written by a colleague of mine.


I've been hacking with Java 8 for a while now and can say that one of the most appreciated features in my stack is lambdas. As soon as you are used to them (I had my first introduction to lambda-style programming with blocks in ObjC), you miss them badly when they are not there (C, for example).


A year or so ago, I wrote what was effectively a small subset of the JDK8 streams stuff. Incredibly useful, but hard to get right. It's good the core APIs will be part of the JDK so that, like the collections API, they may be adopted with some trust that others will do the same.


As a C# developer now experimenting with Java, I still miss some things, even in Java 8. Some probably have technical reasons behind them, but...

- Getters and setters, C# style. That is, instead of

    private int foo;
    public int getFoo() { return foo; }
    public void setFoo(int value) { foo = value; }
write this:

    public int Foo { get; set; }
which is both easier to write and makes code more readable and understandable.

- Mandatory exception declarations in methods. I still don't understand why this is necessary. It ends up either leading to stupid errors (see the following example from the article)

    void appendAll(Iterable<String> values, Appendable out)
            throws IOException { // doesn't help with the error
        values.forEach(s -> {
            out.append(s); // error: can't throw IOException here
                       // Consumer.accept(T) doesn't allow it
        });
    }
, or to lots of methods just throwing Exception (bad practice, I know, but it's easy to fall into), or to useless empty try-catch blocks around methods that won't throw an exception (because you've checked) and where you don't want to add the throws declaration.

- Modifying external variables in a lambda expression. Just as everyone is used to in Javascript, Ruby, C#... It's pretty useful, even more so when you're using lambdas for common operations such as forEach.

- Operator overloading. When you're using data classes (for example, models from a database) where different instances actually represent the same object, overriding == is more comfortable than foo.equals(bar). Also, the downside of using .equals is that, if foo is null, you will end up with a nice NullPointerException out of nowhere.

- Events! I'm amazed at how easy it is to do events in C# versus how hard (and bloated) it is in Java. Take this as an example: http://scatteredcode.wordpress.com/2011/11/24/from-c-to-java...

- Pass by reference. Just as if you were passing a pointer in C. I find it surprising how few languages implement this feature, which is pretty useful...

And I'm sure I'm missing something...

Edit: added three bullet points.
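Re the operator overloading bullet: one partial mitigation already in Java 7 is Objects.equals, which is null-safe on both sides. A minimal sketch:

```java
import java.util.Objects;

public class EqualsDemo {
    public static void main(String[] args) {
        String foo = null;
        String bar = "x";
        // foo.equals(bar) would throw NullPointerException here.
        // Objects.equals (since Java 7) tolerates null on either side.
        System.out.println(Objects.equals(foo, bar));   // false
        System.out.println(Objects.equals(null, null)); // true
    }
}
```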


> Getters and setters, C# style. That is, instead of

I just stopped using getters and setters altogether unless I have a real use for them. The most common reasons are that they are required by some third-party library, or that they are the external interface for whatever the module of code is doing. For all internal classes, if the field needs external access, just make it public.

Most of the arguments I hear for using getters and setters are the result of the tendency in the Java world to over engineer everything. e.g. "What if you want to change the internal name in 8 months? or add validation years from now? or add some logic to the getter? (ignoring the fact that would no longer make it much of a 'getter')"

The only argument for using getters/setters everywhere that holds water with me, because it is actually painful, is mocking field values. But there are ways around that, and the trade-off in code readability with public fields is well worth it.


I really disagree that C# getters and setters are less readable than a public field. Compare:

    public string Field { get; set; }
to

    public string field;
I felt I had to reply to this comment to discourage this practice for a couple of reasons:

1) By convention anyone reading your code will think this is very strange, and it will force them to spend additional time reading your code to understand why you are breaking such a strong convention.

2) Using properties really will make your life easier when you need to track down a state change in your program. Detecting a state change on a public setter is a lot easier than detecting state change on a public field.

3) In C#, at least, using public fields where you should be using public properties will make creating an interface on existing code significantly more difficult - as interfaces can only define public properties, and not fields.

In short please don't do this. It's really bad practice just to save a couple extra characters, and if I inherit your code someday I'll probably want to strangle you.


1 + 2 are exactly why I do it. If you're muddling around with the internal state of another object, there should be alarm bells in your brain triggering you to think "should I really be doing this?"; it should look weird.

Hiding fields behind method calls obscures what you're really doing. I like that syntax highlighting changes the color on field access and that there aren't parentheses at the end of the call. This is what I meant by readability: I instantly know that I'm muddling around with the inner state of another object. Figuring out what's really going on with getters and setters is made far worse by the common practice of 'overloading' getters and setters to accept / return different types than the internal state representation.

3 - I would consider defining field getters in an interface poor form. There is a reason interfaces (in Java at least) do not allow field definitions. Defining a 'getter' in an interface is just circumventing that protection and a strong indication the code should be refactored in a different way.


Things consuming an object shouldn't care how it does what it asks, only that the object does it.

If knowing that something is a field rather than an opaque getter is important to the consumer, then the consumer knows too much.

You know you're messing with the state of an object when you do object.Something. That it's being messed with by a method call or a field is immaterial.

The syntax for consuming a field or a property is exactly the same. I don't see how fields ring alarm bells while properties don't.


In C#, properties are handled just like fields:

    class Foo {
        public string Bar { get; set; }
    }

    var foo = new Foo();
    foo.Bar = "test";
But I still don't get why you wouldn't use getters/setters or properties. For example, let's say you want to get the number of items in a list. How'd you do it without something like this

    public int Count { get; private set; }

?


I think it's partly a defensive tactic to stop others playing fast and loose with your object state.

I used to write classes and then add fields and auto generate getters/setters.

Now I don't, I just make all the state private and if I find myself wanting a getter/setter I ask myself why.

Too much use of getters/setters is just reinventing the problem that private state was meant to avoid.

So for example, rather than:

    if (email.getState() == EmailState.SENT) {

I have:

    if (email.wasItSent()) {


What if a 3rd-party API only works on Beans? (i.e. doesn't support private properties).


With dependency injection frameworks, you don't really need setters anyway. I add getters only as needed, unless it's a public API.


2 things: 3rd-party API that only works on "beans" and thin object (not your Services or DAOs that use DI heavily).


You can't do much with 3rd-party APIs. For thin objects, a best practice is to keep them immutable by having only getters, but then you generally end up with Builder boilerplate.


The boilerplate getters and setters in java do get old very quickly, and C#'s properties handle this nicely. On the surface, it seems like something minor and superficial, but when you are seeing the unnecessary verbosity of getters and setters day in and day out, it becomes frustrating not to have something like C#'s properties.

I definitely would like to see a similar feature in java. In the meantime, I am pleased with the features in java 8 after being underwhelmed by java 7's new features.


Me too. However, I don't think Java needs much to become an appealing language (for language-geeks).

It's already going to get lambdas. Add some shortcuts for getters and setters (Lombok-style, maybe) and collections literals, then you get surprisingly close to Groovy (and JPA @Entity classes will become 1/6 of their sizes). And just like Strings have special + and +=, Bignums could have some compiler help (as they already do in JSP EL).

It will still be behind C#, but not by much. At that point I could agree that Java and C# have different philosophies and Java will never get true operator overloading, but it doesn't matter much if the most important use cases are covered.


> It's already going to get lambdas. [...] then you get surprisingly close to Groovy

The memory footprints for Java 8 lambdas and Groovy's closures are, er, quite different.


Which is better?


That is why I think Java IDEs became so popular; they are almost a part of the language because they do some of this lower-level refactoring for you.


At least with respect to checked exceptions, my read of modern popular convention is simply to avoid using checked exceptions unless there is a very strong use-case. In our own code, we have transitioned to using predominantly unchecked exceptions with no decrease in code quality that we have yet perceived.

Agreed on assuming the trivial getters and setters if they are not expressly defined. That would be nice.

As for modifying variables in lambdas, you need to have a final reference available, meaning you can't modify a primitive. However, you can accomplish roughly the same thing by using a final reference to a boxed (or if parallel, atomic) primitive.
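A minimal sketch of that workaround, using AtomicInteger as the effectively-final holder:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class LambdaMutationDemo {
    public static void main(String[] args) {
        List<Integer> values = Arrays.asList(1, 2, 3, 4, 5);

        // A local int can't be mutated from inside a lambda (it must be
        // effectively final), but an effectively final reference to an
        // AtomicInteger can hold mutable state, safely even in parallel.
        AtomicInteger sum = new AtomicInteger();
        values.forEach(v -> sum.addAndGet(v));
        System.out.println(sum.get()); // 15
    }
}
```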


I don't think there are technical reasons behind, but rather philosophical ones.

- Getters and setters are bad design. In clean object-oriented code, the internal state of a class is not exposed to the outside. Thus, getters and setters should not be encouraged.

- Exceptions: yes, it would be nice to be able to throw checked exceptions in that example. But the rule that they need to be declared is one of the basic design decisions Java has taken with the intention of always forcing the programmer to handle them. It won't change. (You can still fall back to unchecked exceptions if necessary.)

- Modifying external variables: can be done with a work-around, e.g. "final int[] x" to have a modifiable int. Not sure why that is the case though.

- Operator overloading: considered bad practice for high-reliability code. Generally, Java favors explicit statements in order to prevent mistakes.

- Events: some of the listener stuff in the example could be moved to a library, making it smaller. But still a missing feature in Java.

- Pass by reference: this is normally used as a workaround to create methods with multiple return values. In Java, one could again use the wrapping-array workaround instead, as in the external variables point. I'd love to have methods with multiple return values in Java.
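A sketch of the usual Java alternative for multiple return values: a small immutable result class (the names here are illustrative):

```java
public class DivModDemo {
    // A tiny immutable result type stands in for multiple return values.
    static final class DivMod {
        final int quotient;
        final int remainder;
        DivMod(int q, int r) { quotient = q; remainder = r; }
    }

    static DivMod divMod(int a, int b) {
        return new DivMod(a / b, a % b);
    }

    public static void main(String[] args) {
        DivMod result = divMod(17, 5);
        System.out.println(result.quotient + " " + result.remainder); // 3 2
    }
}
```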


> Operator overloading: considered bad practice for high-reliability code. Generally, Java favors explicit statements in order to prevent mistakes.

Considered by whom?

The example gjulianm gave, where == is safe if nulls are flying around but calling .equals() on a null expression results in an exception, seems a clear counterexample.

I’d also argue that if your code is mathematical in nature, it is both quicker and more reliable to write, review, and maintain concise, mathematically natural syntax like a+b*c than verbose alternatives like a.add(b.multiply(c)).
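For a concrete sense of the verbosity, here is a + b*c written with BigDecimal, which gets no operator support in Java (a minimal sketch):

```java
import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.1");
        BigDecimal b = new BigDecimal("2");
        BigDecimal c = new BigDecimal("3");

        // Without operator overloading, a + b*c becomes:
        BigDecimal result = a.add(b.multiply(c));
        System.out.println(result); // 7.1
    }
}
```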


Considered by people who have used C++ and seen some awful problems creep in because of unnecessary operator overloading.

Your last point actually highlights the problem - it is not more reliable because you are assuming what "+" and "*" do.

EDIT: An old blog post about the issue: http://cafe.elharo.com/programming/operator-overloading-cons...


>> natural syntax like a+b*c than verbose alternatives like a.add(b.multiply(c)).

> it is not more reliable because you are assuming what "+" and "*" do.

You also assume what "add" and "multiply" do.


Yes, but I think it is far worse in the case of built-ins; it is explicitly obvious that "add" is not part of the language, which is not the case with "+". Admittedly this is much more of a problem for new programmers than experts.

To be fair, I think a lot of my prejudice against operator overloading is summed up by Bruce Eckel (who explains why it is arguably misguided in the case of Java):

"[Java designers] thought operator overloading was too hard for programmers to use properly. Which is basically true in C++, because C++ has both stack allocation and heap allocation and you must overload your operators to handle all situations and not cause memory leaks. Difficult indeed. Java, however, has a single storage allocation mechanism and a garbage collector, which makes operator overloading trivial -- as was shown in C# (but had already been shown in Python, which predated Java). But for many years, the party line from the Java team was 'Operator overloading is too complicated.'"

http://www.artima.com/weblogs/viewpost.jsp?thread=252441


> Your last point actually highlights the problem - it is not more reliable because you are assuming what "+" and "*" do.

How is that any different to assuming what add() and multiply() do?

In many ways, I’d say the latter is worse, because it’s not immediately obvious from just the function names whether the operation mutates the object it’s called on or returns a new object with the result while the originals remain unchanged. Both behaviours can be useful. Both are widely used in libraries, with those exact function names. You just have to read the documentation and learn how each library you use works, which is particularly... entertaining... if your project uses multiple libraries and they don’t all follow the same convention.

For + and * operators, someone already decided what the convention should be. If you just make them work like the built-in versions, the semantics are already intuitive to anyone who studied basic arithmetic at school.

Edit: I did take a look at the article you cited. It appears to be an elaborate troll, actually consisting of little more than a set of unlikely examples (I wouldn’t write a function called divide() to “divide” two matrices, so why would I suddenly feel the urge to write a / operator?), a set of assertions without proof (some of which are just begging the question), and a final ad hominem that conveniently discards one of the most obvious demonstrations that operator overloading can be useful on the basis that anyone making that argument just doesn’t understand (and ignores counterexamples like OCaml, where basic arithmetic operators really are different for basic numerical types, which is a significant pain point in the language).


> For + and * operators, someone already decided what the convention should be. If you just make them work like the built-in versions, the semantics are already intuitive to anyone who studied basic arithmetic at school.

I think it's exactly this that is misleading - your intuition may not be the same as the developer's. I think people assume more from "+" than they would from "plus()", and if you don't think about it you may falsely assume you are using a language built-in.

I'm not saying that operator overloading can't be useful, but in C++ developers did create horrible code by using /, *, + in their classes to 1) show off that they could and 2) to save typing. If they had to name their methods, I hope there is a good chance they would have done better than divide(), multiply() etc.

Regarding the article, I agree it is somewhat guilty of hyperbole, but I don't think calling it a troll is remotely fair.

BTW I don't understand what you mean by "can be useful on the basis that anyone making that argument just doesn’t understand", which makes me feel like I've wandered into a double joke ;)


Fair enough, perhaps calling the article a troll was a little harsh, as I don’t know how the author intended to come across. However, the article did seem to consist of one weak argument after another: an unlikely strawman problem here, a blatant exaggeration there, a quick logical fallacy to tie them all together. I’m afraid I didn’t find it a convincing case at all.

BTW, I meant my final comment to be parsed as (conveniently discards one of the most obvious demonstrations that operator overloading can be useful) (on the basis that anyone making that argument just doesn’t understand). The author didn’t actually explain why that case was “completely different” or what someone who proposed that example didn’t understand so that “their opinions can be discounted”.

In fact, I would suggest that his entire case about built-in operators being unambiguous even though they support multiple types can be annihilated using only the equality operator and the words “floating point”: in C, you still can’t tell whether the expression a==b is reasonable or almost certainly a bug without knowing the types of a and b, which AFAICS is exactly analogous to the problem the author is so keen to attack with operator overloading.


     In clean object-oriented code, the internal state of a class is not exposed to the outside.
One could argue clean OO code also follows the uniform access principle. I guess everything being a method is uniform. In which case why are public fields allowed?

The verboseness of writing getters and setters doesn't realistically stop people from writing them. It doesn't educate them as to why writing lots of them might be a bad idea. It doesn't suggest a better design. It just makes code harder to read and people put up with it, occasionally grumbling to themselves.

It's this kind of philosophy that's prevented java from having a terse lambda syntax for so many years, which in turn has left it out in the cold as API tastes have changed.


> In clean object-oriented code, the internal state of a class is not exposed to the outside.

So then you don't write getters and setters for that stuff in C#. I don't get your point here.


The point is that a language should not add features that make it easier to write bad code.


The point of getters and setters is to avoid exposing internal state to the outside. In this way you're not getting direct modification of fields, you're sending a signal: "hey Date object, get me your representation as an integer number of seconds since epoch", the outside doesn't care whether the Date internally uses the seconds since epoch representation, or whether it translates to this representation just for this method.


Getters / Setters don't automatically mean that the internal state of an object is exposed. For example, an object may internally have mutable linked-list of something, but a Getter that returns an array (immutable) of immutable members.

What is the alternative to Getters / Setters (or, properties, which is basically just syntactic sugar) in an OO language?


That's being way too dogmatic.

The problem with taking encapsulation to an extreme is that you are trading off against flexibility. The more you hide your state the less any external party can implement their own logic using that state, and the more separate concerns have to be bundled into the class hiding the state because they can't be dealt with anywhere else. Everything in software design is a tradeoff.


I wonder what those dogmatic programmers do when they are confronted with a problem in the real world where they need to find a balance between "encapsulation" and "single responsibility".

I guess their brains simply deadlock.


> Getters and setters are bad design. In clean object-oriented code, the internal state of a class is not exposed to the outside. Thus, getters and setters should not be encouraged.

How can this be a general statement? Some state can be internal, some can be public. Java supports both. Or how do you suggest that state should be accessed? Because I don't understand how a system can work if no state is shared between objects.


At least one paradigm is that you ask an object to do something for you with its internal state and return the result, instead of getting its internal state and doing the operation yourself. And instead of setting state directly, you have methods that let an object know something happened, with data optionally associated with the event. This paradigm really decouples objects, which is good, but it is fairly time-consuming to pull off and doesn't necessarily lend itself well to being extensible. I generally use this paradigm for mission-critical business logic. If I'm doing something UI-related, on the other hand, my classes are festooned with getters and setters just because I need to iterate fast and get it done.
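A minimal sketch of that style (class and method names are purely illustrative):

```java
public class TellDontAskDemo {
    // "Tell, don't ask": the object answers questions about itself
    // instead of handing out raw state for callers to inspect.
    static final class Email {
        private boolean sent;               // state stays private
        void markSent() { sent = true; }    // "something happened" signal
        boolean wasSent() { return sent; }  // query, no raw state exposed
    }

    public static void main(String[] args) {
        Email email = new Email();
        email.markSent();
        // Instead of: email.getState() == EmailState.SENT
        System.out.println(email.wasSent() ? "sent" : "pending");
    }
}
```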


> Thus, getters and setters should not be encouraged.

Damn, so why are they encouraged in Java? 11 out of 10 frameworks insist that you write getters/setters so properties can be accessed from outside.

Java designers need to get more practical. If getters/setters are unavoidable, I'd rather have them without so much boilerplate in my code.

No need to go the C# way. Just enough to avoid repeating the name and type 3+ times each would be great.


And the two things I notice Java has that C# does not (static methods on Interfaces, and functional interfaces) have completely viable alternatives in C# (extension methods, and Action/Func, respectively). This really smacks of catch-up.


Speaking of C#, it's missing one feature critical for me: it does not run on the platforms I am (or my employer is) using. There are some attempts like Mono, but in a practical sense it does not. That is sort of a no-go right there.


You can tweak the JVM but you can't tweak the CLR. While it's not Java vs C# the language, the VM/runtime and the language are almost tied together (Mono is an exception).


What I'm missing the most from C# is actually the "var" keyword.


The checked exception concern leaves me scratching my head. Without checked exceptions, the forEach implementation has no way to guarantee that the lambda won't throw, and has to be exception-safe at all times -- in the case of stateful code, this can generate a lot of boilerplate.

Rather than write all that boilerplate, most people won't actually write exception-safe code across the board, which allows strange state-dependent failures to creep into the system.

The unexpectedly-unwound-stack problems that Java's compiler is highlighting are still there, just hidden.

Better, in my mind, to not use unchecked exceptions at all.


It's a pain that operations can fail, but handling state when they do fail is a problem that exists whether your exceptions are checked or unchecked. Forcing all exceptions to be checked doesn't seem to help with the problem, but would rather cause a lot of code to simply declare that it throws Exception, rather than listing off all possible exception types that it doesn't handle (e.g. out of memory, SIGTERM received, SIGINT received, etc)


Failure types are generally classifiable; part of clean module design is classifying your exceptions in a way that allows you to declare more or less granular exception types based on the actual failure mode(s) possible in your code.

The 'throws Exception' behavior only emerges when people don't take the time to define how code can fail, which is something that is necessary to fully define an interface, regardless of whether or not you have language support for declaring failure modes.


Given that most code degrades into either wrapping checked exceptions into runtime exceptions or most methods having "throws Exception" declared...I think it's safe to say Checked Exceptions don't work well in the real world. There is a reason no other language created in computer history uses them.


> Given that most code degrades into either wrapping checked exceptions into runtime exceptions ...

This is a self-referential argument; people who are disinclined to take the time to define how code fails are disinclined to define how code fails.

The code can still fail, the failure cases still must be defined, but now the compiler won't warn them that they've failed to do so, and the result is that the failure cases are undefined and unknown, and the code is less reliable.

Without defining failure modes, the system is always in an unknown state: any undefined failure will leave any stateful code in an unknown state. Failure modes must be defined either in code or in documentation, and writing documentation takes longer; it must also be maintained (and read) manually.

Since defining failure modes manually takes more effort than using the compiler's support for checked exceptions, the reality is that unchecked exception advocates are really arguing for not defining failure modes at all; this is simply laziness.

> There is a reason no other language created in computer history uses them.

Many languages simply don't use exceptions at all, since having what amounts to an 'atomic goto' isn't necessarily considered an advantageous feature. Not using exceptions is the only choice that doesn't introduce a significant likelihood of program failure due to littering code bases with what amount to unchecked stack unwinding gotos.

That said, your statement is not really correct. Languages like Haskell provide checked exceptions via library features. C++ has an optional checked exception mechanism, but they aren't enforced at compile-time (unfortunately).


> Forcing all exceptions to be checked doesn't seem to help with the problem

I think this is a common misperception of Java - not all exceptions need to be checked exceptions. The author makes a decision when defining the Exception class whether it is going to be a checked exception or an unchecked exception.


You might like project lombok re: getters and setters. It has a feature that implements this using annotations. True, this is an additional external dependency for a project when it'd be nice if it was a core language feature, but as external deps go it isn't a killer one -- the library is not huge and doesn't spiderweb out to a bunch of other deps.

http://projectlombok.org/features/GetterSetter.html

(There's also a lombok-pg project that offers more extensions.)


We actually evaluated Lombok for this reason (and others), and eventually decided against continuing with it for a few reasons:

* Every team member must install the eclipse plugin (which, if you already have any eclipse vm args set, also requires you to add a couple of extra arguments).

* No reasonable way to add javadoc comments to getters and setters without explicitly writing those getters and setters (kind of defeating the purpose of lombok).

* Sonar (which we use for static code analysis) will now report those private fields as a major warning as it thinks they are unused private fields. You can make those warning go away by adding //NOSONAR on each line, but yeah..

* Relies on internal java APIs. While the authors seem to be quite good and stay on top of things, I fear that this is still inviting issues down the road.

We felt that the cons outweighed the pros in this case, despite some of the beauty in lombok.

Disclaimer: I am a colleague of the author of the linked article.


Given the desire for lambdas to enable simpler concurrent programming, and the added complexity of mutating external variables in the implementation, I think they made the right choice.

If you _really_ have to mutate those variables then put them into a final array, and alter the entries in that, the same can be used if you need to do an effective pass by reference.


The Atomic* classes are also good choices.


I like my checked exceptions. I like knowing what might go wrong with a call and I like being forced by compiler to handle it.

True, in the typical chaos that reigns in code written by a disjointed group of people, it ends up being Exceptions and RuntimeExceptions all over the place anyway. But why should disciplined people suffer?

Please leave my checked exceptions alone. I like them very much. I have not dealt with C#, but I did with C++, and I prefer the Java way.

The arguments against checked exceptions that I have seen go along the lines of leaking data types from unrelated code layers. That's probably valid, but I have not experienced it. The rest seems like just whining about being forced to write code to handle errors, instead of letting things just happen.


To me, checked exceptions seem to be a second return value: you need to catch it or declare that you are going to pass it to the caller. In this sense, it is enforced by the compiler even more rigidly than the return values themselves. They can be useful.

OTOH, sometimes we need a "panic"(¹) function, and a main loop that catches everything and either communicates the error to the user or restarts the whole thing. That's why unchecked exceptions can be useful too, especially since they make it harder to ignore errors - who has the patience required to check every return value of printf in C?
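A sketch of that pattern: wrap the checked failure in an unchecked exception at the point it occurs, and let one top-level handler deal with it (the method names here are illustrative):

```java
public class PanicDemo {
    // "Panic": convert a checked failure into an unchecked exception
    // so it propagates freely to a single top-level handler.
    static void doWork(boolean fail) {
        try {
            if (fail) throw new java.io.IOException("disk on fire");
        } catch (java.io.IOException e) {
            throw new RuntimeException(e); // unchecked wrapper
        }
    }

    public static void main(String[] args) {
        try {
            doWork(true);
        } catch (RuntimeException e) {
            // The main loop catches everything and reports it to the user.
            System.out.println("recovered: " + e.getCause().getMessage());
        }
    }
}
```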

(¹) No, I've never programmed in Go.


About events: one could argue that the example code is really made to look bad in the Java example.

Sure, it's easier in C#, but there's no need to invent problems that don't have to be there.


>- Getters and setters, C# style.

In Java, I use Lombok:

@Getter @Setter private Integer foobar;

>Checked exceptions

True, widely regarded as a mistake, but hard to get rid of now.


Woah, that streaming abstraction looks really slick, especially given that it can be automatically parallelized.

The idea of a one-line parallel map-reduce such as

    collection.stream().parallel().map(operation1).reduce(operation2); 
is surprisingly cool imo. And the availability of streams across the collections framework means they can be used in routine programming.
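A self-contained version, assuming the finalized Java 8 API (parallelStream on Collection, and reduce with an identity value so it returns a result directly rather than an Optional):

```java
import java.util.Arrays;
import java.util.List;

public class StreamDemo {
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5);

        // Square each element (possibly in parallel), then sum the results.
        int sum = nums.parallelStream()
                      .map(n -> n * n)
                      .reduce(0, Integer::sum);
        System.out.println(sum); // 55
    }
}
```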


All I ever heard about with Java 8 was lambdas (which is understandably the big-name new feature), but streams, in my opinion, are just as exciting. Unlike lambdas and their new syntax, streams seem like something any developer can quickly pick up and start taking advantage of.


I think it gets even easier than that, since parallelStream is available directly on collections:

  collection.parallelStream().map(operation1).reduce(operation2);


Yep, Java 8 starts to look a lot like Scala


It looks like C# 3 (2007)


Many C# features look a lot like Scala features (2003/4)


Massive issue here for many when it comes to Java 8 - what about Android?

Android's Dalvik is based on Apache Harmony, which doesn't appear to be updating anymore. Does this mean no Java 8 for Android/Dalvik? It seems like it would be a major issue if so.


Google is yet to even add support for the Java 7 language constructs.


The lack of any public direction on this from Google to Android developers has been annoying me for a while now. It seems like Android has decided to be stuck on Java 6 forever.

I'd love to see them really shake things up in Android by, eg, going with NaCL/PNaCL (or even something like seccomp2, don't really care) to manage sandboxing and exposing an API framework (so apps can present common UI widgets) with a C API (regardless of what it is written in underneath) making it relatively easy to consume from any language. Write your Android app in Go, Python, Java, whatever you want, as long as it has API bindings! But I'm not holding my breath on that.


As a daytime Java developer who dabbles in Scala on the side, these changes look great. The inclusion of a Joda Time style time library is long overdue as well.

I just wonder how many years before there's adoption from the general Java community.


As a daytime Scala developer who seldom uses Java anymore these changes are nice, but a far cry from what I get in Scala.


The Scala compiler takes so long in comparison to javac though


Javac also takes longer than Turbo Pascal to compile...


Java 8 looks good enough to make Java a reasonable candidate for Java.Next. Especially given that there doesn't appear to be a clear winner between the alternatives (Scala, Groovy, Clojure, Kotlin etc.).

Java 8 seems to have borrowed a lot from Guava (Optional, Function, Predicate, Supplier etc.).

Extension methods as well as default methods on interfaces would have been nice.


Mixins would be great too.


Default methods in interfaces make them mixins.


Or something similar to Clojure protocols.


These changes are the first since Java5 to make a real difference to how I will program, and thus are pretty nice to see.

The sad thing is that a large part of my Java programming will involve Android, and thus I will continue living in Java5/6 land, as will numerous libraries and large parts of the Java ecosystem.

I wonder when Google will address the future of Java on Android in a substantial way? Eventually it will not be tenable to stay on a 10 year old version of a language. I hope they will seriously consider either updating their supported syntax to be compatible with Java8 or introducing some other language into the Android SDK (Go go!).


I remember some years ago when I was glad Java 5 had got some really practical new features. I didn't program in Java back then, but I wanted the language to evolve (maybe I was going to need it in the future).

Now I do program in Java 5/6 and it would be really sad if the language was still stuck in the 1.4 days.


The lambda and stream additions are great and long overdue, but I've pretty much adapted to using Guava to fill that gap.

So right now the thing I'm most interested in is actually the promise of better type inference. Anyone who has used a rich, type safe DSL like Hamcrest or Guava knows the pain of having to give so many type hints to the compiler, when the context provides all that the compiler should need.


The inclusion of lambdas is great, but not supporting full closures severely hampers their usefulness. Instead of enabling patterns like CPS (continuation-passing style) or alternative object interfaces, lambdas just save you from typing extra characters.

It is always fun to watch other languages continue to implement features that bring them closer to lisp. I wonder how much longer it will be until every language is just a lisp dialect.

They are moving in the right direction but very slowly. It is impressive though that they have been able to still innovate without breaking backwards compatibility.


> It is impressive though that they have been able to still innovate without breaking backwards compatibility.

Indeed. The .NET IL compiler actually supports closures by generating a class with the lambda's method body as a method on that class. That method takes in as parameters whatever outside variables need to be captured.

It also supports iterator continuations (e.g. "yield return") by generating an entire class which inherits off of IEnumerable and wraps your single function with all the necessary trappings to track the continuation state.

You can see this stuff by looking at C# assemblies in a free program called ILSpy[0]. Normally it'll reverse-engineer these compiler patterns, but if you uncheck all the "decompile" checkboxes in the options, it'll just straight-up translate the IL to C# and you can see the dirty tricks.

[0] http://ilspy.net/
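Java's lambdas get a comparable treatment: a lambda capturing a local behaves like an instance of a small generated class that stores the captured value. A rough hand-written sketch of that shape in plain Java (the class and method names here are made up for illustration, not what javac actually emits):

```java
import java.util.function.IntSupplier;

public class Lowering {
    // Hand-written stand-in for the class a compiler might generate
    // for a lambda that captures the local variable x.
    static class CapturedLambda implements IntSupplier {
        private final int captured;
        CapturedLambda(int captured) { this.captured = captured; }
        public int getAsInt() { return captured + 1; }
    }

    public static void main(String[] args) {
        int x = 41;
        IntSupplier viaLambda = () -> x + 1;           // real lambda capture
        IntSupplier desugared = new CapturedLambda(x); // the "generated" form

        System.out.println(viaLambda.getAsInt());  // 42
        System.out.println(desugared.getAsInt());  // 42
    }
}
```

Both forms produce the same result; the captured value is simply frozen into a field at construction time.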


C# is quite impressive, especially in comparison to Java. If it had been released earlier, wasn't owned solely by Microsoft, and supported all major platforms equally, it could have been huge, even larger than Java. If C# had reversed roles with Java a significant portion of the world would have been more productive.


C# 1.0 was very close to being an exact copy of Java. To say that if C# had been released before Java it would have been more popular is nonsensical as it started life as copy-cat Java. Without Java there wouldn't be a C#. Later versions of C# added more features much faster than Java. Many Java developers have since moved on to Scala and other JVM languages which are more expressive than C#.


I'm not sure what you're getting at with CPS. How do Java's lambdas inhibit CPS? And what do you hope to accomplish with CPS, anyway? I could see you wanting tail-call optimization in order to make CPS useful, but that's a VM limitation, not a language limitation, and you could always trampoline anyway.


Can someone please show the Java guys the <Number>.TryParse(String s) from C#?

I'm convinced that every time I have to manually write a try/catch clause to parse, a kitten dies somewhere!


TryParse uses an out parameter. Java doesn't have that. AFAIK, the JVM has no way to represent pointers in any such fashion.

It's sort of ironic that the JVM, with its limited Java-only bytecode, has attracted so many languages, while the CLR, which is designed to handle multiple languages in an efficient manner, has relatively few.


I started to reply that Microsoft could fix that by providing high-performance CLR implementations on all the platforms the JVM runs on.

But it occurred to me that even if they did so, a lot of people wouldn't trust them to continue to support the other platforms indefinitely. The only way around that would be for them to spin off the entire .NET division.

Microsoft being Microsoft, of course, they would never do any of this.


To me it's a huge deal how portable my software is. CLR is very much tied to Windows. I know that there are other implementations but they always lag behind.


As a Java programmer, I agree that the catch blocks around parsing are annoying, but I'm not sure a method that returns an error code is any better. Wasn't that tried back in the '80s?

The way Scala (and probably other functional languages, with which I am not familiar) handle this is with a little bit of polymorphism. Using Java syntax, parsing a string into an integer would return a Validation<Integer, Exception>, an abstract type which could be either a Failure<Exception> or Success<Integer>. This then exposes methods like ifSuccess(Consumer<Integer>) (it's called something less obvious in Scala, because that's how Scala works) and ifFailure(Consumer<Exception>), and various other useful things.

This seems a bit weird at first glance, but it makes it quite easy to deal specifically with either success or failure, or both, or to defer dealing with them until later (you can put a load of Validation objects in a set and worry about whether they're successful or failed later on). It also makes it impossible to ignore failure - there is no error code to forget to check, and no unchecked exception to forget to write a catch block for.
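A minimal sketch of that Validation idea in Java 8 terms (all names here are invented for illustration, not an actual library API):

```java
import java.util.function.Consumer;

// Minimal Success/Failure type: holds either a value or an exception.
abstract class Validation<T> {
    static <T> Validation<T> success(T value) { return new Success<>(value); }
    static <T> Validation<T> failure(Exception e) { return new Failure<>(e); }
    abstract void ifSuccess(Consumer<T> f);
    abstract void ifFailure(Consumer<Exception> f);

    static final class Success<T> extends Validation<T> {
        private final T value;
        Success(T value) { this.value = value; }
        void ifSuccess(Consumer<T> f) { f.accept(value); }
        void ifFailure(Consumer<Exception> f) { /* nothing to do */ }
    }

    static final class Failure<T> extends Validation<T> {
        private final Exception e;
        Failure(Exception e) { this.e = e; }
        void ifSuccess(Consumer<T> f) { /* nothing to do */ }
        void ifFailure(Consumer<Exception> f) { f.accept(e); }
    }
}

public class ValidationDemo {
    // Parsing never throws at the call site; failure is a value.
    static Validation<Integer> parse(String s) {
        try { return Validation.success(Integer.parseInt(s)); }
        catch (NumberFormatException e) { return Validation.failure(e); }
    }

    public static void main(String[] args) {
        parse("7").ifSuccess(n -> System.out.println("ok: " + n));
        parse("x").ifFailure(e -> System.out.println("bad input"));
    }
}
```

As described above, the failure case cannot be silently ignored by forgetting a catch block: it is simply a value you handle whenever you choose to.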


TryParse returns a boolean. So you use it like this:

  int res;
  if (int.TryParse(s, out res)) { /* OK */ } else { /* not OK */ }
You certainly do not need an exception to deal with the simple case of "did this string parse into an int".

Edit: A great alternative signature is to use Maybe/Option, so you get Some int or None.

  match int.TryParse s with
  | None   -> ...
  | Some i -> ...


Ah apparently I remembered it wrong. I thought it was

int res = int.TryParse(s, 0); // 0 as the default if parsing fails

Of course if there is no reasonable default you should bubble the thing up anyway or handle it on the spot. But very often for input parsing there is a sane default you can choose.


is that really more work than:

  int res;
  try { res = Integer.parseInt( s ); } // if part
  catch ( NumberFormatException nfe ) { } // else part
The words are different (try/catch instead of if/else), but still 2 blocks of code with similar syntax...


The main reason for TryParse, AFAIK, is avoiding the massive expense of exceptions on the CLR.

Another reason is composability. If you're checking several values together, returning a Maybe/Option is much more clear and usable.


Well, you could substitute that for

     if(Int.CanParse(s))
     {
         i = Int.Parse(s);
         // OK things.
     }
     else
     {
        // Not ok.
     }
but it would also be weird in Java because Parse throws an exception, so you'd have to either write an empty catch block or declare the method with a throws clause, even though it actually doesn't throw anything.


That is either inefficient (you end up parsing each number twice) or requires the compiler to have deep knowledge of the standard library.

Similarly, C# has TryGetValue where in Java, you need a containsKey/get combo, or have to assume your collections do not store nulls.

I do think C# could do better, though. Neither C# nor Java have the equivalent of "insert ... on duplicate key ..." for collections. You now have to do:

   items.TryGetValue(key, out value);
   items[key] = value + 1;
In C++, that would just be:

   items[key] += 1;


You could have a method returning a nullable Integer.


Haskell works the same way.


Since Java doesn't have pass-by-ref, this would probably be just as awkward as try-catch.


Guava's Ints.tryParse is about as good as it can get in Java... Maybe it could return an Optional instead of a nullable boxed type, but the calling code wouldn't be much different.
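With Java 8's Optional, such a helper is only a few lines. A sketch (the `tryParse` name here is made up; it is roughly what Guava does, but returning Optional instead of a nullable Integer):

```java
import java.util.Optional;

public class Parse {
    // Parse without forcing callers to write a try/catch.
    static Optional<Integer> tryParse(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryParse("42").orElse(-1));  // 42
        System.out.println(tryParse("abc").orElse(-1)); // -1
    }
}
```

The caller chooses a default with orElse, or can chain map/filter, so the "sane default" case above is a one-liner.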


This looks very thorough. I appreciate the detailed coverage of lambdas as well-- this is already helping my understanding.


I recently started developing in Java and the thing I find most irritating is the lack of operator overloading.


Why? Sure, overloading a "+" operator can be elegant looking. But the aesthetic gain vs. maintainability doesn't seem to be worth the tradeoff 99% of the time. I certainly wouldn't want to be the one who has to debug the thing because someone accidentally side effected one of the "operands".


Nonsense. To paraphrase myself from a similar discussion on reddit:

"List.add(otherList) <-- what does this do? Why, compared to operators, would a function name be any better than an operator? Mathematics is part of computer science, and denying basic operators for a reason like "because someone might misuse them" is like denying functions because someone may implement a sum() and call it lcm() (and then we're back to goto land). You're confusing a programmer's error (bad choice of a name/operator function) with a language feature. Just because people can write bad code it does not mean we should disallow them to write great code."

Honestly, it's about time the Java crowd stops with the mantra and starts thinking for itself. Or at least, learns why the reason you hate operator overloading is fallacious.


That is what I keep repeating when people complain about operator overloading.

They are just symbolic names, like doing abstract mathematics with letters instead of numbers.

Except for C, Java and Go, all the remaining mainstream languages allow symbolic names for functions/methods.

I never understood what the big deal was.


It just seems an odd hill to die on. Just a minor little feature that most people would use only occasionally in their objects.


> Just a minor little feature that most people would use only occasionally in their objects.

Yeah, until you have to work with Java's Bignums one day; then you start wanting to choke people to death.


Yeah, I guess it sucks if you work with Bignums. I would be happy if BigInteger and BigDecimal were promoted to String-level language support: still implemented as classes, but with compiler support. They could add indexing with [] for collections, but I can live with .get() and .put().

Some people seem to think that if they don't use it, then it doesn't matter. In many business applications, you don't even need floating point (you just transfer data to the database and back) anyway.

Are floats and doubles "little features"?

In computer graphics, you need lots of floating point calculations. Are ints and longs "little features"?

In embedded software, you usually don't want to use resizable strings (C doesn't even have this feature built-in), because reallocations are expensive or unavailable. Are resizable strings (or containers) "little features"?

Different programs have different needs. If you haven't stumbled upon a good use case, doesn't mean that it's useless.


This same logic can be used to defend any "little feature". Language design is all about where you draw those lines.


That logic is just a starting point (just to avoid the "I don't need it, why can't you be just like me?"), not the whole thing.

Then you have to take into account a few other criteria, such as [1] internal coherence in the language, [2] difficulty in implementation (and explaining to others) [3] what your target audience is, and what they want, [4] what competitors are doing.

I think full operator overloading would hurt [2], but special-casing Bignums would not hurt [1] or [2] at all (as I said, the JSP EL already has +, -, *, /, etc., and String already has + and +=).

[3] This is what this thread is all about, no need to repeat :-D

And [4]... Well in this case Java is really playing catch up.


I disagree wholeheartedly: the verbosity caused by a lack of such an 'irrelevant' feature makes my eyes bleed.

It doesn't really matter, some people see classes/objects as a minor feature too. I agree with him: operator overloading is not a minor feature and it's probably one of the reasons I am not very fond of Java.


It just seems an odd hill to die on.

There's an axiom to be found somewhere there. If it's a hill, someone will be willing to die on it.


Absolutely agree with you. I haven't seen yet a real reason to avoid including operator overloading in any language.


There is exactly one such reason: programmers. Python programmers were given operator overloading, and did a good job with it. C++ and Scala programmers were given operator overloading, and did a bad job with it (eg << in C++'s streams, ^ in Scala's specs2).

I have never heard a convincing theory of why some language communities were careful in their use of operators, whilst others went overboard. In the absence of such a theory, it's a risky feature to add.


Well, it's time to disallow functions, since someone can misuse their names. And exceptions, since someone might use it as goto. Might as well get rid of non-primitive types, there might be a very bad person willing to name a class with a completely meaningless name.


The nice part about Java is the legibility. I can simply start with a new instance of your object and auto-complete to the method that I want. Once operator overloading joins the fray, everyone is off to the races with the crappy DSL of the week: "here's how you do it the XXX lib way".

I'd prefer that option stay off the table. Given that Java users are on IDEs, autocompletion makes an API call a one-character effort anyway and saves us from tons of stupid operators.


Java users are stuck with IDEs largely because it's so hard to design a decent DSL when you're confined to Java method syntax. Autocomplete doesn't make it any easier to read bulky verbose code, just gives you more of it.


It seems to me that it's a cultural problem more than anything. People know that they shouldn't take an existing, well-known method name and repurpose it for something completely different. But somehow a lot of people think it's OK to repurpose an operator to mean something completely different just because it's convenient. C++ being the classic example. << means bitshift. Just because it looks like arrows doesn't mean it's sane to repurpose it into a completely different "output this" operator. You wouldn't override "leftShift" to output stuff, so why do people do it with <<?

I think it depends a lot on the languages. Operator overloading is easy and common in Python, but in my (limited) experience, it's used to create new classes which respond to existing operators with existing semantics, e.g. to make new number classes. For this, it's great. There's no particular reason the types built-in to the language should get special treatment for operators, as long as programmers can resist the temptation to be idiots.


>Operator overloading is easy and common in Python, but in my (limited) experience, it's used to create new classes which respond to existing operators with existing semantics, e.g. to make new number classes.

In Python, it's called "special method attributes" and they can do a lot more than just create classes. The __eq__ attribute returns a straight boolean for example in response to the "==" operator. And yes, by the way, you actually can accidentally alter attributes in the calling and callee objects if you aren't careful.


Sorry if I was unclear, but I meant that this facility lets you create classes whose instances work like built-in numbers, not that the facility itself is used to create classes somehow.


"But somehow a lot of people think it's OK to repurpose an operator to mean something completely different just because it's convenient. C++ being the classic example."

Do you have any data to back up such assertions? A C++ STL operator chosen more than a decade ago is not enough statistical evidence to justify such a narrow-minded vision and such bold statements. Maybe you need to check all the nice C++ libraries that make use of operator overloading.


You seem to have confused the phrase "classic example" with "the worst thing out there, with data to prove it".

It's an example. Do you dispute that? It's classic. That's subjective, and I claim it by raw assertion. That's it.


There's basic language bigotry going on here: an aged internet meme declares that C++ operator overloading is bad because you can indeed, in theory, do idiotic counter-intuitive things with it. Yet the identical capability in Ruby, Scala and Haskell (inter alia) is deemed A Good Thing by the consensus.


For me, and perhaps others, this is because C++ enshrines idiotic counter-intuitive operator overloading in its standard library.


Examples ?


The use of << and >> for streams is the one that always comes to mind. So minor, and yet so awful.


Well this is clearly a matter of taste; the visual implication of movement (http://images.google.com/images?q=diversion+sign+chevrons) is quite elegant IMHO, and the precedence seems natural also. But others' mileage obviously varies ...


I disagree that the precedence is natural. Without looking it up, and without compiling it, what do the following (IMO quite reasonable in general) lines of code do?

    cout << x & y;
    cout << (x & mask) != 0;
    cout << boolVar ? "Yes" : "No";
As for the visual implication of movement, I have no objection whatsoever to using the << operator for stream operations in general. However, I strenuously object to doing this in a language which already defines them as bitshift operators. Repurposing an operator to do something completely different (and especially in this case, where a functional operator suddenly turns into one that's almost entirely about causing a side effect) is bad. Operator overloading can be useful and result in great, comprehensible code, but only if operators mean the same thing everywhere.


Point taken, but small beer IMO; I use parens in boolean expressions because I can never remember the precedence of && vs ||, so would naturally use them here too.

And I don't buy the fact that << is somehow off limits because it's used for bitshift; the domain of usage is usually disjoint, although one would have to squint to grok the outputting of a bitshifted value to a stream - but how frequent is that?

Yes the << notation isn't perfect, but it is much nicer than named functions (and certainly superior to < (which Stroustrup initially considered)!). Adding arbitrary operators to the language would have required mechanisms to define associativity and precedence, which would have added a weight of complexity disproportionate to the reward.


I don't really see the big advantage to << over named functions here. Compare:

    cout << string << " and " << otherstring << endl;

    cout.write(string).write(" and ").write(otherstring).write(endl);
It's about as readable to me. To me, operator overloading is good when the meaning is already obvious because of shared context that everybody has. That's not the case when co-opting << for stream operators.


I think it's an interesting phenomenon in the sociology of programming languages.

A powerful new feature is introduced in a niche language, and novices abuse it. The feature gets a bad reputation. Yet eventually, as new languages adopt it, the community learns how to use it and how not to use it, and people become accustomed to it, and come to agree that it's not such a bad thing.

I'd say that's happened, to some degree, with garbage collection and lambda expressions. We seem to be in the middle of the process with operator overloading, and still in the early stages with Lisp-style macros (I can hope, anyway :-).


It's not an identical capability. Scala doesn't technically have operator overloading, it has symbolic method names and operator syntax for methods. This means you can use any valid name as an operator.

With C++, you can only overload the built-in symbolic operators. This means instead of choosing the best symbolic name to use as an operator you're forced to re-purpose some built-in operator like "<<". In Scala you can pick any name or symbol you want.


Yes I know that, but the point is that the "operator overloading is terrible" complaint typically cites either plainly daft straw men like "what if someone overrides + to mean subtraction", or else homes in on << for streams, which as others have pointed out here is actually perfectly readable and sensible. I have yet to see a single sensible criticism that holds water (OK, MS overloading address-of & on a smart pointer, that one was stupid ..)

Conversely, there was an XML library for Scala posted on Reddit about a year ago that used things like <:<, <<? and <> as method names; clearly the freedom to concoct your own symbolic names is far more open to abuse than the oft-maligned C++ mechanism.


Two big problems with overloading << for streams are operator precedence (the precedence of << makes little sense for a stream operator) and the fact that the chained syntax can make the whole thing ambiguous (which is kind of a subset of the precedence problem), e.g.: cout << x << 3, does that shift x by 3, or output x and 3 separately?


Same with Haskell, which makes it compact but not always very readable (it effectively turns many libraries into their own non-intuitive DSL).


You can't overload operators in Haskell. You can define new operators though. You're not going to change the meaning of + unless you remove the old.


The common example people pull off the hat with C++ is the use of << operator for streams and some UI toolkits.

Which I personally never had a problem with.


Without operator overloading you cannot write an EDSL.


In my opinion good uses of operator overloading are ones that preserve the mathematical properties of the operator, so that the operator is in some sense "the same operation", just for different types. By this standard "+" for string concatenation isn't a great choice since string concatenation isn't commutative, but addition is. The same can be said for using "*" for matrix multiplication. If you define these kinds of operator overloads, you have to be very conservative about writing generic code that uses the operator.

Given these limitations, and the fact that most languages have relatively few built-in operators to choose from, I find that just operator overloading isn't enough. You really want the ability to define new operators as well. That's something Java (and C#, and C++, and many other languages) would really benefit from.


Given these limitations, and the fact that most languages have relatively few built-in operators to choose from, I find that just operator overloading isn't enough. You really want the ability to define new operators as well.

This is getting into blessing vs. curse territory, though. On the one hand, used well, a custom operator might enhance the benefits of allowing operator overloading in the first place. If your code is mathematical, you have an extended character set available, and you have good coding standards and keyboarding skills to match, then matching the operators/notation in the code to any related mathematical documentation has obvious advantages.

On the other hand, there are only so many obvious symbols available on most keyboards. Once those run out, you can resort to those extended characters, but they might not always be so unambiguous: most of us would recognise less-than-or-equal, but was that an empty set or just a slashed zero in the font Bill was using? The alternative is to start combining symbols, which again might make sense if say [[ ... ]] looks like whatever mathematical notation is standard in your field, but isn’t so great if you go all Haskell and define more-or-less every combination of three non-alphanumerical symbols in the universe to have some subtly different meaning... including from the same combination of symbols in the other library you’re using five lines later.


Actually I think Haskell does things pretty well on this one. People often complain about the operator-heavy nature of Haskell, but I find that after a while you're familiar with all of the popular operators and they just fade into the background, actually making the code feel much more clean than it would without them. They contribute greatly to the feeling of concise power that Haskell has.


People often complain about the operator-heavy nature of Haskell, but I find that after a while you're familiar with all of the popular operators and they just fade into the background

With the everyday operators that everyone programming Haskell will learn very early, I agree it’s not a significant problem. Familiarity will surely overcome any lack of intuitive meaning very quickly. The same applies if you’re using “made up” operators but they reflect some intuitive syntax or conventional notation in whatever field you’re working in. In all of these cases, having a concise notation for frequently occurring concepts is surely a benefit, other things being equal.

However, as you start to represent more specialised or esoteric concepts, I wonder whether there are diminishing returns and increasing risks. For example, I think it is good general advice that identifiers in code should be pronounceable, but how do you pronounce $$!? For that matter, did I just write a $$! operator and a question mark, or a $$ operator and an informal confusion/sarcasm mark?

Here’s a Stack Overflow discussion I thought was relevant:

http://stackoverflow.com/questions/7746894/are-there-pronoun...


Quite a few languages allow the use of operators, or special characters if you prefer, as names.

Without any restriction, except for a few basic ones.


You mean like this? (not Java):

  multi infix:<plus>(Int $a, Int $b) {
      return $a + $b;
  }
  multi infix:<plus>(Str $a, Str $b) {
      return $a ~ $b;
  }
  
  say 10 plus 4;
  # 14
  
  say "foo" plus "bar";
  # foobar


what's that? Perl 6?


Yep. Much of the language is implemented in itself (at least in Rakudo), using such structures:

https://github.com/rakudo/rakudo/blob/nom/src/core/Int.pm#L1...

NQP in that link is "Not Quite Perl", a subset of the language which is what actually needs to be implemented to support Perl 6. This is what's almost done being ported to Java. Again, for Rakudo, which is one implementation.


Streams, lambdas and the new time API are wonderful additions to Java.


Java 7 failed to excite me. Java 8, however, has many features that I can use today. Lots of stuff from Guava, cleaner anonymous classes (lambdas), trait-like interfaces, FluentIterable++ (streams)... it's a good time to be a Java developer.


Unless I am mistaken, Java 8 fails to address the major pain-point I have with Java: hash and array literals.

For me, one of the big wins of both Clojure and Scala is being able to handle maps, sequences, etc. conveniently.


Really? new double[] { 1.0, 2.0 }? Arrays.asList("foo", "bar")? Guava's Map builders?

Lambdas are what make operations on data structures convenient in Scala and Clojure, not saving a few characters on the handful of literals in your code.


Java's main principle: preserving backwards compatibility. This is why the iterations are so small and take so long. Java tip: don't look at other languages or you'll get depressed at how easy things could be, or even should be.


Don't forget Groovy


I am not sure I am super excited about Java the language increasing in size. As in, the size of the body of knowledge that needs to be committed to one's head to comprehend and write the code.

I am actually quite opposed to the syntax sugar features like "getters/setters from C#", "events" and what not. Java the language is already large enough. IDEs do a decent job dealing with boilerplate code. Please, no new kludgy additions to the language.

Those people that need functional features could use Scala now. Jython is there for those not liking static typing.

Why don't we just leave Java alone?


The only thing I wish had made it in for 8 was coroutines (the MLVM project).


I'm definitely pushing for this to be added ASAP — probably won't be before 10 though. In the meantime, it would be awesome if they could get it into OpenJDK behind a flag...


There's no real work being done on this at the moment (I was involved with trying to kick start a small team with the original patch authors). We've started discussing it again, but it's looking like a case of "Real Life" getting in the way :-|. Will see if I can get the broader Adopt OpenJDK programme to take it on board.


Incomplete lambda forms? Facepalm.


Incomplete how exactly?


I find Nashorn particularly interesting, as well as Node.jar: http://insin-notes.readthedocs.org/en/latest/JavaOne2012/nas...

Hopefully Nashorn will give http://ringojs.org a boost as well.


One small correction/addition: For Stream.anyMatch(), Stream.findFirst() and Stream.findAny() it is mentioned that these are short-circuiting operations. This is however also true for Stream.allMatch() and Stream.noneMatch().


You're right! Thanks, I'll get that fixed up in the next batch of edits.
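The short-circuiting is easy to demonstrate: allMatch stops at the first counterexample, so it even terminates on an infinite stream:

```java
import java.util.stream.Stream;

public class ShortCircuit {
    public static void main(String[] args) {
        // Infinite stream 0, 1, 2, ... — allMatch bails out at the
        // first odd element, so this terminates immediately.
        boolean allEven = Stream.iterate(0, n -> n + 1)
                                .allMatch(n -> n % 2 == 0);
        System.out.println(allEven); // false
    }
}
```

Replacing allMatch with a non-short-circuiting terminal operation here would loop forever, which is exactly the distinction the correction above is about.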


>Non-final variable capture - If a variable is assigned a new value, it can't be used within a lambda. This code does not compile:

So, it sounds like that means no closures? That's one of my favorite tools in C#.


Everyone gets confused by this it seems. You can mutate fields in the enclosing class. You can read locals. If you really want to, you can box a local, and then mutate it inside the box. You can't mutate locals directly, because that would make lambdas significantly slower for very little benefit.

(You would not be able to do stack allocation of locals in frames that contain lambdas (unless you could prove that the lambda does not escape).)
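The rules described above in a small sketch (the single-element array "box" is a common workaround, not an official API):

```java
import java.util.Arrays;
import java.util.List;

public class Capture {
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3);

        int base = 10;            // effectively final: reading it is fine
        // base = 11;             // uncommenting this breaks the lambda below

        int[] total = new int[1]; // the "box a local" trick: the array
                                  // reference is final, its contents are not
        nums.forEach(n -> total[0] += n + base);

        System.out.println(total[0]); // 36
    }
}
```

So you get closure-like behavior when you need it, at the cost of making the mutation explicit.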


Language and api wise these are nice additions, but all I can think about these days is emscripten. The JVM should support the client side out of the box, why oh why didn't Google buy Java?


I'm not sure what you mean exactly, but as mentioned in the article they've built a new JavaScript engine (to replace Rhino) to be included in Java8: http://openjdk.java.net/jeps/174


Just that an entire revolution is happening in the browser itself; I'd really like to be able to code Java directly with something like GWT. I read recently that GWT has support for the latest HTML5 stuff, but complaints about compile time and lack of performance after compilation turn me off...


You should give GWT a go. Compile time is a valid complaint, though it can be mitigated by only producing a single permutation (browser specific output) while in development, rather than the default five-six. Also note that during development you usually don't compile but run in "dev mode", a browser plugin that runs against your actual Java code.

Performance after compilation is certainly not a problem, on the contrary the GWT output is very tight, especially with full obfuscation (aggressive optimizations, inlining etc.).

(I've written a complete HTML5 game engine in GWT: http://www.webworks.dk/enginetest)


Another valid complaint is that dev mode runs quite slow, particularly if you're developing something like a game. I read an article about how they are addressing this so that dev mode runs almost as fast as the compiled result, but can't find the link now. Maybe it's already live? Been half a year since I've used GWT.

But I second the recommendation overall. One thing I learned the hard way though, is that their widget library is pretty constraining, and a very leaky abstraction. I would consistently run up against walls using a particular widget, bang my head against it for a day or two, then scrap everything and roll my own solution. Eventually, I just used GWT to compile the core business logic of my app (which would have been a nightmare to do in JS), and did UI with the standard HTML/CSS/JS flow. Doing UI in Java is too clunky for me no matter what library I'm using though, so YMMV.


> Doing UI in Java is too clunky for me no matter what library I'm using though, so YMMV.

For sure. C#, Java, C++ are all clunky. Maybe it's the nature of the ecosystem.

This is one thing I hate about GWT: it offers tons of upside (sprites, bundles, i18n, unit testing in JUnit if using MVP), but the downsides are serious pain points.


With GWT it is possible to pretty much build your UI in HTML and simply inject key components, with event handlers etc., into the DOM structure. You would use an HTMLPanel and add(Widget widget, String id). Few people seem to be aware of this.

I used this approach to build TeamPostgreSQL (demo at http://teampostgresql.herokuapp.com/), where the designer built the entire interface in HTML and I just inject GWT widgets in the right places.


With the ability to add static methods and default implementations to an interface, what is now the practical difference between that and an abstract class?


- A class can only inherit from one abstract class, but can implement multiple interfaces.

- An interface cannot have member variables.


Also, an abstract class can have a constructor (and instance initializers, and initialization expressions for its fields), which an interface can't. That means an abstract class can do work when it is created.

Although it's probably a good idea not to do too much work - a factory method would be better for that.


Did you read the article? Try again, search for "Why abstract classes can't be instantiated using a lambda".


Interfaces with the addition of default methods, in contrast to abstract classes, provide the inheritance of behaviour (but not state) from multiple sources.
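A short sketch of "behaviour, but not state, from multiple sources" (all names here are illustrative): a class can pick up default methods from several interfaces at once, but any instance fields must still live in the class itself.

```java
public class DefaultMethodDemo {
    interface Named {
        String name();
        default String greet() { return "Hello, " + name(); }
    }

    interface Loud {
        default String shout(String s) { return s.toUpperCase() + "!"; }
    }

    // Behaviour inherited from two interfaces; state (the field) lives in the class,
    // because interfaces cannot declare instance fields.
    static class Robot implements Named, Loud {
        private final String id;

        Robot(String id) { this.id = id; }

        @Override public String name() { return id; }
    }

    public static void main(String[] args) {
        Robot r = new Robot("R2D2");
        System.out.println(r.greet());     // Hello, R2D2
        System.out.println(r.shout("hi")); // HI!
    }
}
```

An abstract class could additionally hold the field and a constructor, but then Robot could extend only that one class.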


multiple inheritance?


Very interesting read. So, nothing new on the Swing side?


Oracle seems to be moving to JavaFX. JavaFX is still in its infancy, but it shows some promise: iOS + Android (soon), and cross-platform installers on desktop OSes (Windows, Mac App Store) that don't require a JVM (JDK/JRE) to be installed first.

It's ugly in terms of the UI but the infrastructure is there.


Swing is effectively dead. Oracle will no longer invest in the technology, the preferred way forward is definitely JavaFX (2.0+), it shows a lot of promise for those who still need this type of UI tech.


I can't tell if you are serious or not. Swing is dead, from my point of view. In any case, desktop programs isn't really a strength of Java.


Actually, that is a case where Java really shines, as it is cross-platform.


"Interfaces can now define static methods. "

Twenty years for that one ; )

zomg. Another one debated at length through flamewars on Usenet and ##java. And now it's here. Feeling good : )


The number of times I've implemented something by convention due to the lack of this construct... One common use case is that I like "value objects" -- an AccountId class vs. just a String or an Integer. I usually implement them using some sort of interning internally -- WeakHashMap, Guava Interner, etc. When reading from a wire format and using value objects, one must presume the existence of a fromString(String key), fromInt, etc. method. I'd love to state explicitly that all ValueObjects have particular, generified static constructors. Much cleaner. And it avoids the cliché Java programmer shame of implementing AbstractValueObjectFactory, etc. Some static state, if isolated and properly managed, makes life easier.
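A sketch of the pattern the comment describes, using Java 8 static interface methods (the AccountId type and fromString name come from the comment; the rest is illustrative): the conventional factory can now live on the interface itself instead of a separate factory class.

```java
public class ValueObjectDemo {
    interface AccountId {
        String value();

        // Static interface method: callers write AccountId.fromString(...)
        // instead of presuming a factory exists somewhere by convention.
        static AccountId fromString(String key) {
            return () -> key; // AccountId has one abstract method, so a lambda works
        }
    }

    public static void main(String[] args) {
        AccountId id = AccountId.fromString("acct-42");
        System.out.println(id.value()); // acct-42
    }
}
```

A real implementation would likely add equals/hashCode and the interning the commenter mentions; static interface methods still can't be abstract, so a generic contract that *forces* every ValueObject to provide fromString remains out of reach.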



