Charlie Nutter's Response to "Why Not A Bytecode Vm" (groups.google.com)
135 points by johnbender on Dec 23, 2011 | 43 comments



The response is right to correct various errors in the original article. But it also misses the point.

Yes, JRuby is fast in comparison to other Ruby implementations. But this says nothing, because in general "normal" C and C++ code can be converted to run on the JVM with similar speed to the original: The JVM (the standard one) is very good at that sort of thing.

But there are kinds of code that do not run fast on the JVM. One example is self-modifying code. Ruby implementations happen not to rely heavily on that, so running them on the JVM is fast. However, other dynamic languages that are much faster than Ruby do rely on those techniques, critically so.

There has been a lot of effort to bring those advanced techniques to managed runtimes like the JVM, with the goal of running dynamic languages on them quickly. But overall, native implementations are still far faster. For that reason, Microsoft didn't implement a new JS engine on .NET, it wrote a native one, and for the same reason Google is developing a new Dart VM instead of reusing an existing VM (although that isn't completely true, because Dart also compiles to JavaScript, so it can reuse existing VMs, presumably efficiently, since the language design clearly shows signs of being optimized to run fast when compiled to JS).


As much as I respect Nutter's achievements and his JVM chops, his response reads like nitpicking based on his misreading/misunderstanding of what the Dart guys wrote and what they meant by it.

The position of the Dart guys is simple: by compiling an arbitrary language (Dart in this case) to x86 (or whatever native arch the browser is running on) they can get much faster code than by compiling to an abstract VM, like the JVM, .NET, or Parrot.

It's not a controversial statement: the JVM ultimately compiles to x86 too, so whatever optimization tricks the JVM does to generate fast code can be used by a compiler that goes directly from source to x86. The opposite, however, is not true: the JVM does introduce an additional layer and fixes in stone many things other than just the instruction set, which makes compilation of some scenarios from JVM bytecode to efficient x86 impossible. And you can't fix that without introducing incompatibilities, i.e. breaking existing code. One example of this is the class file format: it has known problems, many of which were fixed in Dalvik. Dalvik, not being the JVM, can fix them, but the JVM can't without fragmenting the platform.

Nutter doesn't exactly contradict the fact that targeting x86 directly results in faster code but all his nitpicking is designed to give that impression.

For example, because the Dart paper mentioned in passing that JRuby can't be made as fast as native Ruby, he tears into that because JRuby is actually close to Ruby 1.8 in performance. Which, of course, proves nothing. The best JavaScript implementation on the JVM is on par, performance-wise, with the old, pre-V8 JavaScript interpreters. V8, Nitro, and Mozilla's various Monkeys all beat that 10-20x, easy. Nutter latches onto the unfortunate example the Dart paper gave but completely ignores other examples that do show that targeting x86 is, indeed, a 10-20x win for dynamic languages.

When the paper says that the JVM lacks features to implement at least some possible language features efficiently (like built-in instructions for tail calls) and you can't do anything about it, Nutter interprets it as some literal "JVM does not let you do what Java cannot do", but instead of addressing the point (the lack of certain primitives needed for a fast implementation of certain constructs) he again uses the JRuby-Ruby 1.8 speed parity as somehow showing the Dart guys are wrong.

In "A bytecode VM is more than just bytecode" he uses turing-machine argument: both jvm and x86 is turing complete therefore they're just as good. He tops with really weird statement like "Compiling to x86 works best if your feature set fits x86" (if your feature set doesn't fit x86, there it doesn't fit anything).

He also brings up invokedynamic as a pro-JVM argument, disregarding the fact that it's a recent addition to the JVM that breaks backwards compatibility: you can't run a Java program that uses invokedynamic on older JVMs because they don't understand it.

I could go on. This post is just nuts.

Here's the real kicker: Nutter is the same guy who develops Duby and Mirah, static Ruby look-alikes. Why does he develop Mirah?

From http://www.mirah.org/:

"No performance penalty

Because Mirah directly targets the JVM’s type system and JVM bytecode, it performs exactly as well as Java."


"so whatever optimization tricks JVM does to generate fast code can be used by a compiler that goes directly from source to x86"

Aren't there some optimizations you can do at run time that you can't do at compile time?

"because Dart paper mentioned in passing that JRuby can't be made as fast as native Ruby, he tears into that because JRuby is actually close to Ruby 1.8 in performance. "

JRuby is much much faster than 1.8.

There are a number of other places you are getting details or choice of words wrong.


Fine, JRuby is much faster than 1.8.

V8 is orders of magnitude faster than JRuby on most benchmarks: http://shootout.alioth.debian.org/u32/benchmark.php?test=all...

JRuby is faster on only one.

That's why I call this nitpicking.

JRuby being faster than Ruby 1.8 proves nothing other than Ruby is not especially optimized.

V8 being dramatically faster than any other implementation of a dynamic language on the JVM proves the point of the Dart paper: x86 gives much more opportunity for optimization than the JVM.

This is even more relevant if you consider the number of people who worked on the JVM and the age of the JVM compared to the same numbers for V8.


"That's why I call this nitpicking"

I don't think pointing out numerous inaccuracies from technical experts is nitpicking. Your larger point may be true, I have no idea, but Charles was pointing out real misstatements about the JVM.


As I'm reading it, Nutter is blatantly misstating what the Dart guys say in order to create most of those "inaccuracies". For instance, he summarizes the Dart article as saying "2. JVM does not let you do what Java cannot do" and "3. A bytecode VM can't support all possible language features".

Where do the Dart guys say that? The entire Dart article is clearly about biases and limitations to _optimizations_ and not about what a language can or cannot do in the sense of Turing completeness. Nutter is creating a bunch of straw men here.

He also decides to ignore all but one of the examples the Dart guys give. They talk about unsigned math, tail call elimination, restartable conditions, continuations, and static/dynamic typing (I would add structured value types). He only responds to dynamic typing because that's where he might actually have a point.


Restartable conditions don't require any VM support that isn't also needed for normal exceptions. Even in CL implementations they're simply built on top of the normal non-local control flow construct (throw/catch) and dynamically scoped variables. And the latter are easy to implement in terms of try-finally even if not supported natively.
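(Rough sketch of that claim, with entirely hypothetical names -- this is not code from any actual CL-on-JVM implementation: a thread-local handler stack stands in for the dynamically scoped variable, and an unchecked exception stands in for the non-local throw/catch. A handler can return a restart value, in which case the signaling site simply resumes without unwinding.)

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.function.Function;
    import java.util.function.Supplier;

    // Hypothetical sketch: Lisp-style restartable conditions built only on what the
    // JVM already gives you -- exceptions for non-local exit, try/finally plus a
    // ThreadLocal stack for dynamic scoping of handlers.
    final class Conditions {
        static final class ConditionError extends RuntimeException {
            ConditionError(String msg) { super(msg); }
        }

        // Innermost handlers first; each may return a restart value or null to decline.
        private static final ThreadLocal<Deque<Function<ConditionError, Object>>> HANDLERS =
                ThreadLocal.withInitial(ArrayDeque::new);

        // Dynamically bind a handler for the duration of body (try/finally = dynamic extent).
        static <T> T withHandler(Function<ConditionError, Object> handler, Supplier<T> body) {
            HANDLERS.get().push(handler);
            try {
                return body.get();
            } finally {
                HANDLERS.get().pop();
            }
        }

        // Signal a condition: let handlers pick a restart value; only unwind (throw)
        // if nobody restarts.
        static Object signal(String message) {
            ConditionError condition = new ConditionError(message);
            for (Function<ConditionError, Object> handler : HANDLERS.get()) {
                Object restartValue = handler.apply(condition);
                if (restartValue != null) {
                    return restartValue;        // resume right at the signaling site
                }
            }
            throw condition;                    // degrade to an ordinary exception
        }

        // Usage: parse() offers a "use this value instead" restart on bad input.
        static int parse(String s) {
            try {
                return Integer.parseInt(s);
            } catch (NumberFormatException e) {
                return (Integer) signal("bad number: " + s);
            }
        }

        public static void main(String[] args) {
            int result = withHandler(cond -> 0, () -> parse("oops"));
            System.out.println(result);         // prints 0; no frames between main and parse were unwound
        }
    }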

The tricky bit with restartable conditions is that you need to support them everywhere, or they're pretty useless. So they aren't going to be very interesting in a multi-language environment like the JVM even if they're easy to implement.


"""JRuby being faster than Ruby 1.8 proves nothing other than Ruby is not especially optimized. V8 being dramatically faster than any other implementation of a dynamic language on JVM proves the point of Dart paper: x86 gives much more opportunity for optimization than jvm."""

Reasoning: you're doing it wrong.

1) That JRuby is faster than Ruby 1.8/1.9 is not saying much.

Maybe the x86 Ruby implementation was not good enough from the start, and JRuby made a clean start. Perhaps you could make an x86 Ruby many times faster than JRuby given enough time.

What it does prove, by doing it, is that you can practically implement a language on the JVM to be faster than an x86 language, despite the latter having a much earlier head start and a larger developer count.

2) That V8 is dramatically faster than any other implementation of a dynamic language on the JVM DOES NOT PROVE the point of the Dart paper that "x86 gives much more opportunity for optimization than the JVM".

It could just be that the implementations of dynamic languages on the JVM are toy projects with small teams working on them, while Google spent tons of dollars on a top-notch PhD team to work on V8. I certainly do not know of a JVM dynamic language with that kind of money and team resources. Even stuff sponsored by SpringSource has much, much less in the way of resources than Google's V8 team.

Also, check the benchmarks page you point to. Java is right up there in speed after C/C++/Fortran, and far ahead of the x86-targeting V8. That's proof that nothing precludes a VM language from being DAMN FAST.

And that's on the JVM, a VM almost set in stone, except for a few recent additions, and with a huge burden of backwards compatibility. Even better things could be achieved with a modern, unburdened VM.

And the Dart team could even design it around the needs of the Dart language; most people would be happy with that, if it used a bytecode other languages could target.

Oh, and nothing precludes adding an AOT compilation system to a VM/JIT language.


Room for optimization suggests that when time is taken you get faster code. After all this time and all these optimizations the JVM is still terrible for 3D code, and zero blockbuster games use it. Sure, it's possible for JVM code to be on par with poorly optimized code, but that does not mean there is equal headroom for optimization. Worse yet, for web code you can't use the old standby of having the JVM call C or ASM code for the truly critical parts.

EX: You could probably get a Minecraft clone to run faster on the JVM, but that's because the game engine is crap, not because the JVM has any advantage.


"""Room for optimization suggests when time is taken you get a faster code. After all this time and all these optimizations the JVM is still terrible for 3D code and zero blockbuster games uses it."""

I don't think Sun was particularly good at optimizing for this particular task -- or even interested.

They worked towards their actual customer use, i.e. the server side.

It's not just 3D code; even the desktop UI libs (Swing, JavaFX) and multimedia code were left without much (or any) love from Sun (/Oracle).

That doesn't mean a VM is unsuitable for fast 3D. Don't tons of 3D games use the Unity engine, which utilizes the Mono VM?

And it's not like the critical 3D parts of the implementation cannot be written in plain old C/C++/ASM -- and exposed to web programmers via some VM interface (a special set of 3D-specialized opcodes? an intermediary 3D extension language + lib that other VM languages would have to use? I dunno, but it's totally possible).


Maybe it just means that Mono is much better suited for games and embedded usage. Which arguably is the case.


Only we're not discussing Mono vs JVM here.

We're discussing VM vs non-VM languages.

So if Mono (a VM language) can do fine for 3D game usage, it proves that nice 3D performance is not incompatible with a VM language -- contrary to what the parent suggested by bringing up Java's 3D performance as an example.


The Mono VM just demonstrates my point. You can use it to make games that would have had reasonable graphics 7 years ago, but not so much today. EX: http://www.unearthedgame.com/ which is at around Half-Life 2 graphics, and Half-Life 2 came out in 2004. http://store.steampowered.com/video/220/904

Feel free to look for a better example from: http://unity3d.com/gallery/made-with-unity/game-list

And again, I am talking about headroom; it does not take state-of-the-art graphics to make a great game, but it does take a non-VM language for the graphics subsystem.

PS: Computers are FAST; my cellphone would crush the multimillion-dollar supercomputers from when I was in high school. So, generally, trading speed for niceties like virtual memory is well worth it. However, that does not mean we are avoiding the tradeoffs, just accepting them with open arms.


"""Mono VM just demonstrates my point. You can use it to make games with that would have had reasonable graphics 7 years ago but no so much today."""

Native C/C++ will always be better for games than anything else. We are not discussing that.

For one, I actually think most of those game examples are fine, and better than whatever Dart will attain.

For use inside a web browser those are perfectly fine.

It's not like Dart will magically give you something better than WebGL, which is already hardware accelerated anyway...


"JRuby is much much faster than 1.8."

Why are we even comparing JRuby and Ruby 1.8? Ruby 1.9.3 is the latest version, and Ruby 1.9 is much faster than Ruby 1.8.

In a benchmark from last year JRuby and 1.9 were about equally fast, with 1.9 the winner in most cases. With the recent improvements in JRuby I believe JRuby is now faster, but I have no idea by how much.

http://programmingzen.com/2010/07/19/the-great-ruby-shootout...

EDIT: Note that Ruby 1.9 gained in performance compared to 1.8 due to the introduction of a new Ruby-targeted VM, YARV. Previously I believe it ran directly on the AST.


"Aren't there some optimizations you can do at run time that you can't do at compile time?"

Sure, but that's not the point. Dart would (just like JS) run on some kind of VM with a JIT. If they opened up that VM for others to target, it would have problems just like the JVM has. Whether they should open it up or not is a separate discussion.


"""Sure but that not the point. Dart would (just like JS) run on some kind of VM with a JIT."""

Actually that's the whole point: that's what the Dart guys said they don't want to do, and that it's not fast enough.

"""If the[y] would open the code for that VM it would have problems just like the JVM has."""

Like what? Name one (1) problem the JVM has had because of making bytecode available as a target.

That other languages trying to run on the JVM had some performance problems and needed some opcodes and stuff is not a problem the JVM itself had -- it was an optimization request. And the theoretical Dart VM would not even have to have those problems, because unlike Java/JVM, it can be designed from the start to be dynamic.

And it's not like there weren't totally competent language implementations targeting the JVM before invokedynamic and other "nice-to-haves".


"Actually that's the whole point, and that's what the Dart guys said they don't want to do and it's not fast enough."

I'm 99% sure that the Dart team is building a VM with a JIT for Dart. Their goal is to be faster than JS, and that will not be possible without some kind of JIT. Like I said, the question is whether they have some kind of bytecode (any IR) that can be targeted. How else can they make it fast? They send the source code from the server to the client; there is no time for a fully optimizing compiler.

"Like what? Name one (1) problem the JVM has had because of making bytecode available as a target."

Well, the JVM doesn't have any problems; the languages running on it have problems.

"And the theoretical Dart VM would not even have to have those problems, because unlike Java/JVM, it can be designed from the start to be dynamic."

Dynamic typing is not the only problem of the JVM. Implementing everything that every programming language needs is not possible. Sure, you can design any VM and with time make it better for more and more languages.

This, however, comes with a price, for example VM complexity. Cliff Click, who worked on the original Sun JVM and did a complete rewrite of it for Azul, talks about this. Why do you think it took that many years until the JVM bytecode changed? Complexity. If you really want to evolve the language, your bytecode will have to change, which means the VM must change too. If Dart ran on an open Dart VM, it would be much harder to change the VM and the overall implementation.

My final points: (1) Designing a fast VM (bytecode, JIT, GC, ...) is hard. (2) Making a VM that is good for many languages is even harder. (3) Google wants Dart, not a generic VM (or at least they would rather have a fast Dart short term than a generic VM with a Dart implementation that takes much longer to develop and much more work to maintain). (4) I agree that we want a generic web VM. (5) Somebody (the W3C, for example) should start a process to work on something like this: gathering proposals, encouraging smart people to think about it.


> They send the source code from the server to the client; there is no time for a fully optimizing compiler.

The solution that the Dart VM and other JS VMs take is to compile twice: you do a very fast, simple compile so that you can start executing code quickly and reduce start-up time. Then you find frequently used functions and recompile them using a more advanced optimizing compiler. This is why many modern VMs have a "warm-up" period: code will get faster over time as it gets recompiled more optimally.
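(To illustrate only the warm-up idea -- a toy sketch with made-up names, nothing like what V8 or the Dart VM actually do internally: count invocations and swap in the "optimized" version once a function gets hot.)

    import java.util.function.IntUnaryOperator;

    // Toy sketch of tiered execution (hypothetical names): run cheap "baseline"
    // code first, count invocations, and switch to the "optimized" version once hot.
    final class TieredFunction {
        private static final int HOT_THRESHOLD = 10_000;   // made-up threshold; real VMs tune this

        private final IntUnaryOperator baseline;    // stands in for quickly compiled, slow code
        private final IntUnaryOperator optimized;   // stands in for the optimizing compiler's output
        private IntUnaryOperator current;
        private int invocations;

        TieredFunction(IntUnaryOperator baseline, IntUnaryOperator optimized) {
            this.baseline = baseline;
            this.optimized = optimized;
            this.current = baseline;
        }

        int apply(int x) {
            if (current == baseline && ++invocations >= HOT_THRESHOLD) {
                current = optimized;                // the "recompile": later calls take the fast path
            }
            return current.applyAsInt(x);
        }

        public static void main(String[] args) {
            TieredFunction square = new TieredFunction(x -> x * x, x -> x * x);
            long sum = 0;
            for (int i = 0; i < 20_000; i++) sum += square.apply(i);  // switches tiers after 10_000 calls
            System.out.println(sum);
        }
    }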


I know. That's the whole point of my post! You need a VM. The guy I'm answering seems to believe that Dart will not run on a VM, which is just wrong!


The position of the Dart guys is simple: by compiling an arbitrary language (Dart in this case) to x86 (or whatever native arch the browser is running on) they can get much faster code than by compiling to an abstract VM, like the JVM, .NET, or Parrot.

No, their position is against creating a new VM for the Dart language, and they justify it, in part, by saying it would end up with the same limitations as the JVM. Nutter isn't arguing with this position though, he is just countering their assertions about the JVM.


Except all his arguments are by proxy.

Is it not true that "the JVM assumes you want classes, single dispatch, inheritance, and primitives. It assumes you don't need 32-bit unsigned math." ?

Is it not true that "adding new bytecodes increases the complexity of VM? That to add "all possible languages a VM needs to support a multitude of calling conventions: tail calls, optional arguments, rest arguments, keyed arguments, overloaded methods, and so on" ?

Is it not true that "jvm specifies a class file format, a concurrency model (in the case of the JVM threads with shared state), class initialization, and a bunch of other stuff that nails down semantic choices." ?

Those are the actual "assertions" made by the Dart paper. Which of those facts about the JVM did Nutter actually counter?
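(As a concrete illustration of the unsigned-math point -- my own sketch, not from the paper: with only the JVM's signed 32-bit primitives, unsigned arithmetic has to be emulated by widening to long and masking, which is exactly the kind of per-operation overhead the paper alludes to.)

    // Hedged illustration (my own, not from the Dart paper): the JVM has no unsigned
    // 32-bit arithmetic, so a language that needs it has to emulate it, typically by
    // widening to long and masking.
    final class Unsigned32 {
        private static final long MASK = 0xFFFFFFFFL;

        // unsigned 32-bit division: widen, divide, narrow back
        static int divideUnsigned(int a, int b) {
            return (int) ((a & MASK) / (b & MASK));
        }

        // unsigned comparison: compare the widened values
        static boolean lessThanUnsigned(int a, int b) {
            return (a & MASK) < (b & MASK);
        }

        public static void main(String[] args) {
            int x = 0xFFFFFFFE;                          // 4294967294 as unsigned, -2 as signed
            System.out.println(divideUnsigned(x, 3));    // 1431655764, not the signed result 0
            System.out.println(lessThanUnsigned(3, x));  // true: 3 < 4294967294 unsigned
        }
    }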


> by saying it would end up with the same limitations as the JVM

No, they were saying that it would end up with a different set of limitations, and they give the JVM as an example. E.g. if you exposed the current Dart VM's feature set as a bytecode VM, ML folks would complain it doesn't support tail calls, Haskell folks would complain it doesn't allow them to efficiently encode their TABLES_NEXT_TO_CODE optimization, etc. Java folks would complain its integer semantics are wrong (overflow to bignum rather than wrap-around), that it lacks support for arbitrary class loading, etc.


And, as Nutter points out, the JVM isn't nearly as constrictive to other languages as people seem to think it is, despite being designed only for one language.

So, with everything we've learned about VM design since then, and with the explicit goal of supporting multiple static and dynamic languages, you would think that we could design a pretty good bytecode VM for the browser today. It may not be the ideal VM for every language, but it could probably be ideal for a lot of them, and adequate for the rest. A Dart source interpreter won't be ideal for any of them.

As for their "case for a language VM" i.e. inline code, there's a simple solution to that: support both bytecode and inline code. Source is going to get minified anyway, before it goes into production, so you might as well let them compile it to bytecode. Other languages will support inline code by implementing their compilers for the browser, just like CoffeeScript does.


"No, their position is against creating a new VM for the Dart language"

This is just WRONG. They will implement some kind of VM for Dart anyway (maybe they'll put it into V8, but I doubt it). The question is whether they should open up the bytecode (some IR) so other people can compile to it. Their argument is that this VM for Dart would have limitations just like the JVM has them. Therefore opening up the VM for other people is not worth it.


Google refers to V8 as a VM* despite it taking JavaScript straight to native instructions, with no bytecode or intermediate step involved anywhere. What makes you think they would mean something else when they speak of a Dart VM? It would serve them absolutely no purpose to load Dart (or JS) programs as text, compile that into bytecode, only to then translate that to native instructions. The only advantage would be in the case of having the Dart source pre-compiled into bytecode by the programmer -- but at that point you are exposing it for other languages, and it would mean the complete opposite of what this article is saying.

* They even do so on the homepage of the project: http://code.google.com/p/v8/


> Google refers to V8 as a VM* despite it taking JavaScript straight to native instructions, with no bytecode or intermediate step involved anywhere. What makes you think they would mean something else when they speak of a Dart VM?

They are right to call V8 a VM. You're right too: there is no need to have a bytecode before going to native. It is, however, a pretty common thing to do. I think it's a good idea if a VM builds a good bytecode before running, and I didn't think about the direct way. (Note: V8 sometimes has intermediate steps. Read this: http://wingolog.org/archives/2011/07/05/v8-a-tale-of-two-com...)

> It would serve them absolutely no purpose to load Dart programs as text, compile that into bytecode, only to then translate that to native instructions.

I'm sorry, this is just wrong. Building a bytecode out of all your code first can be a good idea. LuaJIT, for example, always builds bytecode before going to native. It's a pretty common thing to do.


It skipped my mind that Dart is also planned to be a server-side language, in which case a bytecode might be useful to skip part of the compilation process on repeated runs. Either way, there has been no word on any of this from the Dart people or Google as far as I know, so I just find your confidence that they will take this approach a bit surprising nevertheless.


I don't have confidence in that. I just hadn't thought about the direct way when I wrote the first text.


I'm one of the two people who wrote the article in question. Thanks for this response. This is much closer to what we were trying to express than the impression that many people (including Nutter) seem to have gotten out of it. It's reassuring that at least some readers were able to make sense of the general points we wanted to convey.

> The position of the Dart guys is simple: by compiling an arbitrary language (Dart in this case) to x86 (or whatever native arch the browser is running on) they can get much faster code than by compiling to an abstract VM, like the JVM, .NET, or Parrot.

Not just that, but that it's much simpler and easier to do so. It may be possible to get just as fast with a public intermediate bytecode format, but the amount of work required would be much, much greater. Given how much effort it already takes to create a new language and all of the surrounding infrastructure (we're designing the language, iterating on it, writing a spec, building a VM, writing compilers, filling out the core library, writing tools, editors, documenting, evangelizing, etc.), anything we can do to minimize effort is a good idea.


JRuby outperforms most other Ruby implementations today by a large margin in all my testing of non-trivial code. That isn't the point, though. This is still a comparison with either immature JIT implementations or pure interpreters, so I don't think it really means much.

I will say that he is right to be annoyed that anyone would claim JRuby is slower since it isn't in all cases I've tested.


I agree with you. Nobody can argue that compiling to Java bytecode (and running on HotSpot) is just as good as compiling to x86. Just because JRuby beats the native Rubys doesn't mean anything. I would be willing to bet that somebody like Mike Pall or other JIT experts could implement a JIT for Ruby that beats the hell out of JRuby. (Not to say JRuby is bad or something; an effort like this would take more time than implementing JRuby, I'm just saying it could be done.)

Sure, everything is Turing complete, but that says nothing about performance. The JVM does not have tail call optimization, but you can simulate it on the heap if you want. The same goes for all the other assumptions that are built into the JVM.
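(For example, here is a minimal trampoline sketch -- my own illustration, not something JRuby or the Dart VM actually does -- showing how a tail-recursive computation can be moved onto the heap so the Java call stack stays flat.)

    import java.math.BigInteger;
    import java.util.function.Supplier;

    // Hypothetical sketch: simulating tail calls "on the heap" with a trampoline,
    // since JVM bytecode has no tail-call instruction. Each tail call returns a
    // thunk describing the next step, so the Java call stack never grows.
    final class Trampoline<T> {
        private final T result;                      // set when the computation is finished
        private final Supplier<Trampoline<T>> next;  // set when there is another step to run

        private Trampoline(T result, Supplier<Trampoline<T>> next) {
            this.result = result;
            this.next = next;
        }

        static <T> Trampoline<T> done(T value)                      { return new Trampoline<>(value, null); }
        static <T> Trampoline<T> call(Supplier<Trampoline<T>> step) { return new Trampoline<>(null, step); }

        // Drive the computation with a plain loop instead of nested stack frames.
        T run() {
            Trampoline<T> current = this;
            while (current.next != null) {
                current = current.next.get();
            }
            return current.result;
        }

        // Factorial written in tail-recursive style: a depth of 100_000 would blow
        // the stack as ordinary recursion, but runs fine as a trampoline.
        static Trampoline<BigInteger> fact(long n, BigInteger acc) {
            if (n <= 1) return done(acc);
            return call(() -> fact(n - 1, acc.multiply(BigInteger.valueOf(n))));
        }

        public static void main(String[] args) {
            System.out.println(fact(100_000, BigInteger.ONE).run().bitLength());
        }
    }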

From a business perspective Dart is doing the right thing and their arguments are valid. They could just open up the Dart VM bytecode so other people could plug in there, but that would make it harder for them to change the VM.

I would much rather see a standard bytecode for some generic web VM. How such a bytecode would look is kind of a research problem. There are many other problems to solve before something like a web VM model would work. The W3C should maybe start a process that goes in that direction (start gathering proposals, make people think about the problem).


This kind of article is always a minefield and a huge bait for debates, even more when it's part of the advocacy for some new programming language (who doesn't love language fights?). It's important to be very precise, adhere to standard and well-known terminology, be very clear about the scope of each claim, have the necessary disclaimers about benchmarks etc., or you are just asking for trouble. :) Your (kkowalczyk's) reply has some good points ("...if your feature set fits x86" -- indeed Charles must have been tired or something to write that), but it does have several flaws too. You get the concept of "backwards compatibility" completely wrong: adding a new feature like invokedynamic that prevents NEW code from running on OLD runtimes does not break BW compat. Java's track record in BW compat is almost perfect (modulo bugs). JRuby's results in the language shootout are produced with HotSpot Client; that's why they are so bad; JRuby depends massively on HotSpot Server. For benchmarks that critically depend on certain features like fixnums, a VM like YARV will certainly beat JRuby easily; but this doesn't mean "a general-purpose VM cannot compete with a specialized VM", it only means "a VM without special support for feature X may not compete with a VM that has X, in a benchmark that stresses X".

The real discussion in the performance aspect of "bytecode VM" vs. "language VM" is much more subtle. A "language VM" will not convert source code directly to native code either; it will use one or more intermediate representations, from ASTs to classic SSA LIR/HIR. So it's easy to argue that these are equivalent to bytecodes; one can just design some bytecode format that is basically an externalized form of your IR, and then there's no performance disadvantage or limitation to two-stage compilation with a JIT. But now the discussion moves to evolution and flexibility. IR forms can be completely changed in new releases of the compiler/VM, because they are internal. But as soon as you're tied to an external representation, you're stuck evolving it much more conservatively, or with much extra effort, not only for the VM but for the entire third-party toolchain ecosystem that targets the bytecode. This is the major problem that the Dart article should have raised. There's still the possibility of creating some truly universal bytecode - LLVM's bitcode may fill these shoes for unmanaged languages; and something similar could be designed for managed languages, although that's harder because a managed heap and memory model necessarily means some set of restrictions that will get in the way of some languages, at least the very lowest-level ones. Dart is Google's attempt to provide a better managed language for the web (without bytecode), while NaCl is their attempt to provide better support for unmanaged languages (with either native code / NaCl or LLVM bitcode / PNaCl), so they're trying to cover all corners and it will be very interesting to see how each of these developments works out.


"""The position of Dart guys is simple: by compiling an arbitrary language (Dart in this case) to x86 (or whatever native arch the browser is running on) they can get a much faster code than compiling to an abstract VM, like JVM or .NET or parrot."""

And, I believe, the position of the web community at large is: we would much rather have a VM than a faster lame language with small chances of success over JS.

"""It's not a controversial statement: JVM ultimately compiles to x86 too, so whatever optimization tricks JVM does to generate fast code can be used by a compiler that goes directly from source to x86. The opposite, however, is not true: JVM does introduce an additional layer and fixes in stone many things other than just the instruction set that make compilation of some scenarios from jvm to efficient x86 impossible. And you can't fix that without introducing incompatibilities i.e. breaking existing code."""

A problem which a new "web vm" will NOT have. Being new, and able to go any way it likes...

"""For example, because Dart paper mentioned in passing that JRuby can't be made as fast as native Ruby, he tears into that because JRuby is actually close to Ruby 1.8 in performance. Which, of course, proves nothing. The best JavaScript implementation on JVM is on par, performance-wise, with the old, pre-V8 JavaScript interpreters."""

Actually JRuby was already faster than Ruby 1.8/1.9, and with the latest changes to the VM (invokedynamic etc.), it is around 3 (THREE) times faster than plain Ruby. Check the recent HN story on that.

"""Nutter latches to the unfortunate example Dart paper gave but completely ignores other examples that do show that targeting x86 is, indeed, 10-20x win for dynamic languages."""

I seriously doubt the "10-20x win". References?


Sign in and be tracked to read an article? Thank you, no.


I'm not signed in, and the server didn't seem to set any cookies.

Here's a plaintext copy: http://cr6nh1.pen.io (though the formatting might be better suited to pastebin)


To counter what others are saying, I was being asked to sign in to view this article.

(Though as @chc points out < http://news.ycombinator.com/item?id=3384888 >, I am signed into another Google property on my work email account, so perhaps that's why it's asking me.)


I am not signed in, and I still read the article. I keep hearing this complaint all the time with Google Groups. Is it specific to only a few people, or is it based on the assumption that it's a Google group?


It's a weird bug with Google Groups. As far as I can tell, if you aren't signed into any Google service at the time, it will let you in. But if you are signed into another Google service, Groups will demand you re-authenticate.


Groups doesn't track which articles you read, even if you're signed in.


Let's link to the prettier version of Groups instead of the old, out-dated one... https://groups.google.com/a/dartlang.org/forum/#!topic/misc/...


"Let's link to the glam version of Groups instead of the old, usable one..."

Fixed that for you.


"Let's link to a internally consistent version of Groups that doesn't look like the old familiar one..."

Fixed it for you.



