Hacker News
Haxe 2.10 - Now with Java and C# targets (haxe.org)
99 points by stesch on July 16, 2012 | 40 comments



The documentation looks absolutely horrible. I can find, in many ways, how it can compile to multiple targets, but no easy way to find out what the language actually looks like. Is there any "hello world" example that doesn't require reading the language reference? Maybe "a gentle introduction to haxe" before "compiler metadata"?



I didn't say it does not contain a hello world example. However, it really is missing (or at least missing from any obvious place, like the first link) "a gentle introduction to haxe". For example, if you go to python.org -> documentation, there is a clear link "tutorial" with the subtitle "start here". Hell, even php has a relatively easy-to-find "tutorial" as the first part of its docs. You guys should really reorganize this in a bit more friendly way, especially since haxe is not (at least yet) an incredibly popular language with tons of tutorials all over the net.


Why not JVM and CLR targets?


Most Haxe targets are source-to-source, with the exception of SWF. It bootstraps on existing toolchains a bit better (this can make a big difference for debugging), and it affords more flexibility in the build process in those situations where you really need to mix in code native to the target. The extra compilation step to reach the runtime is treated as a UI issue. NME, for example, already has its own build system to deal with asset packaging, and it integrates the cpp target compilation alongside that.

This doesn't preclude the possibility of bytecode, but from the Haxe perspective it's seen as an optimization, not a must-have for practical use.


As a student of compilers myself, it's my understanding that producing bytecode is as easy as--if not vastly easier than--producing code in another language, and that this is particularly true when the target language is high-level. The only exception to this I can envision is when the source language maps easily and completely to every single target language, in which case the source language must be the least common denominator, and the compiler may be a glorified awk script.

Significantly, a major feature of the JVM and CLR bytecodes is that targeting them makes it extremely easy to permit interoperation with other code native to the same VM. I'm not sure generating source would be any improvement on this whatsoever. I also question the value of being able to debug generated code; assuming that HaXe is indeed more than a glorified awk script, the process might be compared to using an assembly debugger on C++-generated code. Not entirely useless, certainly, but neither is it precisely desirable.


VMs can have bugs and nuances that aren't captured in their specification - and a compiler has to evolve awareness of this in an iterative fashion. Rendering to source languages constitutes another way to sanity check by going through a compiler that has already dealt with those implementation concerns. So Haxe is taking on a lot of extra work, as you say, but it also gains more assurances that output is "as good as" target-native code.


> VMs can have bugs and nuances that aren't captured in their specification

So can compilers. Adding the additional translation layer of javac or an equivalent increases the potential of being affected by bugs in third-party code. Let P be the probability of encountering a bug in the JVM, and Q be the probability of encountering a bug in javac. If we multiply the complements, (1-P)(1-Q), we get the probability of the final execution being in keeping with the HaXe implementation's intentions. Assuming P and Q are nonzero, (1-P)(1-Q) < (1-P), therefore the additional translation layer increases the probability of being affected by a bug. Not to mention that the vastly more complex output format increases the possibility of introducing bugs into HaXe itself.
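The complement arithmetic is easy to sanity-check. A minimal Python sketch, with P and Q as purely hypothetical bug probabilities (illustrative values, not measurements of any real JVM or javac):

```python
# Hypothetical bug probabilities -- illustrative values only.
P = 0.01  # chance of hitting a JVM bug
Q = 0.01  # chance of hitting a javac bug

# Probability that the final execution matches the HaXe implementation's intent:
direct = 1 - P               # emit bytecode directly: only the VM can bite you
layered = (1 - P) * (1 - Q)  # go through javac: either layer can bite you

assert layered < direct      # the extra layer always lowers the odds when Q > 0
print(direct, round(layered, 4))  # 0.99 0.9801
```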

> output is "as good as" target-native code.

But it isn't. Translating code between languages is lossy. Unless HaXe is trivial, the HaXe implementation has information that could be used to generate better bytecode that is lost when bytecode generation is performed by a tool that knows nothing about HaXe, e.g. javac.


> Adding the additional translation layer increases the potential of bugs

> the implementation has information that could be used to generate better bytecode

These are theoretical concerns. Let me analogize. Some algorithms have a worst-case big O that is significantly worse than the average case. So even though such an algorithm could perform very poorly in certain situations, it's better for the "real-world" cases it actually gets used in.
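To ground the analogy, here is a hypothetical sketch (not from the thread): a naive quicksort that always picks the first element as its pivot is quadratic on already-sorted input, yet close to n log n on shuffled input — a textbook case of a bad worst case that rarely matters in practice.

```python
import random

def quicksort(xs, counter):
    # Naive quicksort with the first element as pivot; counter tallies comparisons.
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    counter[0] += len(rest)
    lo = [x for x in rest if x < pivot]
    hi = [x for x in rest if x >= pivot]
    return quicksort(lo, counter) + [pivot] + quicksort(hi, counter)

def comparisons(xs):
    counter = [0]
    assert quicksort(xs, counter) == sorted(xs)
    return counter[0]

random.seed(0)
worst = comparisons(list(range(500)))                  # sorted input: ~n^2/2
typical = comparisons(random.sample(range(500), 500))  # shuffled: ~n log n
assert worst > 5 * typical  # the worst case is far costlier, yet rarely hit
```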

Most of the existing Haxe targets have similar external semantics (Algol-derived, GC, dynamic types), so we're in an average-case situation for both reliability and performance; the Q probability is close to zero because the code we emit isn't that drastically different from human-written source, and the input we pass in from Haxe can be a bit more optimized in most respects, because we don't have to make it maintainable. If we were targeting a really bad compiler, it would blow up. But in practice it doesn't, and we gain more than we lose, even in areas where there's an impedance mismatch and we have to, for example, add dynamic types on top.

Bytecode output is more sensible within a short-term view - a single project with known specifications. But it's ultimately the need for flexibility that drives Haxe, and you aren't gaining additional flexibility from bytecode.


> we gain more than we lose

What do you gain? As far as I can tell, there is only loss.

> the need for flexibility that drives Haxe, and you aren't gaining additional flexibility from bytecode.

Exactly what flexibility does this provide?


> it's my understanding that producing bytecode is as easy as--if not vastly easier than--producing code in another language

Maybe true for unoptimized code. However, Java and C# compilers have received years of work to make efficient use of their respective VMs; standing on their shoulders makes some sense.


Really? What kind of optimizations does the C# compiler do when emitting IL? As far as I know, they are fairly straightforward optimizations - there's nothing really that complex going on there. The C# team has said they aim for a straightforward mapping to IL.

You won't see the C# compiler inlining functions (even though the CLR does _way_ better with large functions and doesn't handle inlining well). It won't propagate constant expressions. You won't see it transforming recursive functions into loops. Does it even remove unused variables?

All the optimizations that C# really relies on for performance are handled by the JIT, and any compiler following the same patterns will get the same enhancements, plus the ability to emit better IL than C#.
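For illustration, the "recursive functions into loops" rewrite looks like this — a hypothetical Python sketch, since the same idea applies whether the output is IL or source:

```python
def fact_recursive(n):
    # Direct recursion: each call consumes a stack frame.
    return 1 if n <= 1 else n * fact_recursive(n - 1)

def fact_loop(n):
    # The same function after a recursion-to-loop rewrite -- the kind of
    # transformation the comment says the C# compiler doesn't perform.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc

assert fact_recursive(10) == fact_loop(10) == 3628800
fact_loop(5_000)  # fine; fact_recursive(5_000) would raise RecursionError
```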

That said, I'm still unconvinced it's easier to emit MSIL than C#, and I'm rather well versed with MSIL.


Maybe things are different on Dalvik, but javac for the Oracle JVM is about as braindead as a java compiler can get; all of the optimizing is done during JIT.


Source-to-source solves the problem where the only way to ship for a certain platform is to compile only with the toolchain provided.

The Unity3D engine, for example, uses Mono's AOT (C#) to produce a very big assembly file (.S) which later goes through Apple's own (or is it GNU's) as (assembler).


Once again, there's nothing wrong with jvm output. It's "better" for the reasons you say. And, there may be a jvm target down the road.

That said, there are a number of situations where providing source code in java/c++ is critical for non-technical reasons... say, if you want to get your project accepted into one of the various mobile app stores.


> Once again, there's nothing wrong with jvm output. It's "better" for the reasons you say.

So if my understanding is correct, including that bytecode generation requires a comparable amount of effort, why didn't they do that? I don't intend to critique their decisions; I'm interested in why they made them. Their reasoning might well teach me something.

> providing source code in java/c++ is critical for ... various mobile app stores.

Really? I wasn't aware of that. Are we talking Apple's and Google's? What app store's catalog is predominately C++?


> So if my understanding is correct, including that bytecode generation requires a comparable amount of effort, why didn't they do that?

They might. It's just not what they've chosen to do first.

> Really? I wasn't aware of that. Are we talking Apple's and Google's? What app store's catalog is predominately C++?

Well, we're talking about the iOS app store here. Objective-C is the main language, but Apple also supports C++ Xcode projects, which Haxe can produce. The reason Haxe targets C++ is that it can provide garbage collection through the Boehm libs and, in general, can be made to fit better with the Ecmascript nature of the Haxe language.

Google is much less restrictive in terms of supported languages, but their java toolkit is very polished.

I also think that source code generation lets you understand better what the compiler is doing, and how best to take advantage of it. Right now the java target is very early, but it already has a clever way of handling reflection that is much faster than the standard method. It was interesting to me to read through the generated output, even if it was a little ugly.


Why should it be predominately C++, in order to have an easy time being reviewed?

There's plenty of C++ code in the iOS App Store. I know a couple of people who've developed most of the code for their iOS apps in Visual Studio. They were computer vision apps with just a little bit of GUI toolkit code.


I think this is mostly correct. Afaik, the swf (instead of as3) target was provided because the standard compiler (flash) was closed source. Nicolas Cannasse, the Haxe language author, wrote an open source as3 compiler called mtasc, and then used his experience to make an optimized compiler for Haxe to swf.


mtasc was an AS2 compiler, not an AS3 one.

As of Flex3 (~2008) Adobe's AS3 compiler has been open source.

(But haxe is still better than AS3, IMO).


You're right, it was as2. I should've mentioned this was true "at the time".


[deleted]


I am not unfamiliar with HaXe. The only reason I can imagine for the authors having done this is that they did not know better. I am inclined to give them more credit than that, however, so I posted a question in the hopes that somebody else might have insight.


Cross-platform tools often have a single user interface that tends to look foreign on every platform (sencha touch), or they force you to code a different ui for every targeted platform (titanium?).

How is the ui problem usually tackled with haxe? I tried googling a bit, but didn't come up with a conclusive answer.


The short answer is that Haxe isn't very good here yet.

Many of the companies using Haxe are game studios, and so they just invent their own UI and it doesn't matter if it doesn't feel "native".

In terms of the two approaches you mentioned: both can be used. There are various projects to "reinvent" a basic UI toolkit that works on NME, the cross platform graphics layer. These libraries would do what you mentioned first - have a consistent experience across platforms, but one that is foreign and probably not as polished. No libraries have really gained a huge amount of support for this yet, though it seems there are a few trying.

The other approach - integrating with native UI - is also possible using Externs, which link in with an underlying system. I'm not sure how much use this has had. Some iOS features have externs, and on the desktop wxWidgets externs are around so this approach is possible. If you were to do this I guess you'd try to abstract away as many differences as you can, so that you need as little conditional compilation as possible. Not sure who is trying this at the moment.

As I said though, UI toolkits are definitely not a strong point. My main development with Haxe is for JS / HTML, so I just use standard bootstrap / jQuery elements, which feel "web native" I suppose.


Then why don't you use plain html/js instead and skip the extra 'translation' step?


Cross-platform native look and feel works through waxe: http://lib.haxe.org/p/waxe

The new java target, plus the already-available JNI usage, targets Android.

There is also http://lib.haxe.org/p/hxffi and an established way to call iOS framework code with hxcpp, which is through the CFFI. You should probably have a read (mostly for Cauê Waneck's posts): https://groups.google.com/forum/#!topic/haxelang/9_FHI5cDue0


As someone else mentioned, Haxe is mainly used for games where the UI is always custom anyways. There are frameworks like ASwing which work with Haxe to give a very flexible UI but it's not a native look and feel.


Will updating haxenme update the version of haxe I am running? Or should haxe be updated separately? Thanks.


If you are already on NME 3.3.3 (the latest release), no. It comes with Haxe 2.09. But I am guessing it will not be long before another NME release, and it will probably come with Haxe 2.10. I don't know if something would break if you updated to 2.10 via the Haxe installer, but I am not going to find out. Joshua is probably working on it as we type.


Yes, actually ;)

We're about a half hour away from NME 3.4


Haxe is interesting only because of its targets. No advanced type features at all.


Why does a language have to have advanced whatever features if it does the job, i.e. if it's good enough to write most of a cross-platform app in a maintainable manner while producing efficient code?

Note: I am not a haxe user and would be interested in comments from actual users with respect to the compiler's reliability. My personal conception is that it is quite difficult to define a stable language that behaves exactly the same on all targets.


I'm a haxe user. What exactly are you wanting to know?

If by reliability you are talking about it consistently not breaking between versions, then it's fine. Between versions the compiler does add new features but for the most part backwards compatibility has been kept between major versions.

If you're talking about whether it consistently produces similar code across platforms, then it depends. Plenty of people are using it for 2D games (targeting flash, Android and iOS) and it seems suitable for this. Outside of that, it basically comes down to this: if you want your code to work cross platform, stick to the standard library or to known cross-platform libraries.

As an example, I had a piece of code which took markdown text input, converted it to HTML, and then manipulated the resulting HTML. The code worked on Javascript, Neko and CPP without modifications, because I stuck to existing cross platform libraries.


I think Haxe is unique in the balance it is trying to find between compilation speed, platform consistency/reach, and advanced language features. My personal observation is that Nicolas and the other language authors seem to prioritize things as I've just listed them.

The compilation speed thing is a "big deal" that I don't hear talked about a lot on HN. Haxe's compiler is so fast that it provides autocompletions by itself. So completing fields from "using" mixins, constructor inference, structural types, or macro-generated methods are all very fast, and guaranteed to be correct. So, perhaps while Haxe's features are not as robust as a few other languages', they feel much more tightly integrated with most coding workflows.


What sort of advanced features are you hoping for?

Genuinely interested; I use Haxe and find its type system advanced enough for me - but maybe there are awesome features in other languages that I don't know about yet.


The user is probably referencing concepts which are not "advanced" per se; they are simply recent implementations of type-system ideas. We should keep in mind that these tend to make for less efficient languages, and Haxe is aimed at the other side of that spectrum. The user seems to be some sort of Java API aficionado, so I wouldn't put much thought into what "advanced features" refers to, coming from that background.

The user should read up on Haxe, and might do well to reserve "advanced" for features that are actually novel; it reads less pedantic in many ways.


Or advanced anything else for that matter.

Cannasse is a really bright programmer though; the man built a programming language with an interesting runtime, and manages to use them for his one-man game studio. Not bad.


Nicolas "Cannasse" just left his company, I'm not sure what he's up to next. It should be interesting to see.


Where did you hear/read this?




