This is a fascinating project. Haxl is the brainchild of former Glasgow Haskell Compiler lead* Simon Marlow.
The tl;dr of Haxl: what if you could describe accessing a data store (a la SQL) and have the compiler and library work together to "figure out" the most efficient way to perform queries, including performing multiple queries in parallel? That's what Haxl does: it allows you to specify the "shape" of your query, the type checker verifies its correctness, and the library executes it in parallel for you, without the developer having to know anything about synchronizing access.
It's a great read indeed. If anyone is interested in more concrete applications of Haskell, a read through this should be enough to convince them that we can do some really amazing parallel programming on top of Haskell's RTS.
Is it a cultural reference? Found it in bestcomments, so it looks like many people do get it, but I don't. Genuinely interested, as English is my second language.
I haven't used databases much, but don't most SQL implementations already "figure out" the most efficient way to perform queries? Can't most implementations already perform queries in parallel?
Yes, but this is designed for non-SQL datastores, and for arbitrary application code connecting portions of "data acquisition" and "data operation". Imagine if, instead of an ORM, you had a single system that wove together your application code and the database queries, and ensured that wherever they could execute in parallel, they did.
Yes, but this is for non-SQL systems, and it does quite a bit more. It's more like an ORM in this respect, because it weaves together the query planning and processing part, which handles retrieving data, with the code that operates on that data.
Have you replaced all use of FXL with Haxl? Or are both languages supported? In the latter case, what is the relative proportion of each in the live codebase?
(I appreciate that migrating code that already works to a new language often just introduces bugs for no gain, so please don't take my questions as trying to dig up dirt or anything. I'm genuinely just curious.)
We are still in the process of migrating from FXL to Haxl, and there is too much FXL code to translate manually, so at the moment we are treating FXL as the source of truth, compiling the FXL codebase to Haskell, and running both concurrently to verify correctness.
We previously had a custom DSL and it outgrew its DSL-ness. The DSL was really good at one thing (implicit concurrency and scheduling IO), and bad at everything else (CPU, memory, debugging, tooling). The predecessor was wildly successful and created new problems. Once all those secondary concerns became first order, we didn't want to start building all this ecosystem stuff for our homemade DSL. We needed to go from DSL to, ya know, an L. So the question is which...
If you understand the central idea of Haxl, I don't know of any other language that would let you do what Haxl in Haskell does. The built-in language support for building DSLs (hijacking the operators, including applicative/monadic operations) -really- shines in this case. I would -love- to see Haxl-like implicit concurrency in other languages, done as naturally and concisely. Consider that a challenge. I thought about trying to do it in C++ for edification/pedagogical purposes, but it's an absolutely brutal mess of templates and hackery. There may be a better way, though.
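To make the "hijacking the operators" point concrete, here is a toy sketch of the idea (the names `Fetch`, `dataFetch`, and `firstBatch` are illustrative, not Haxl's real API): by overloading `<*>`, two independent fetches written with ordinary applicative syntax get merged into a single batch behind the user's back.

```haskell
-- Toy sketch (not Haxl's real API) of implicit concurrency via
-- an overloaded Applicative instance: independent fetches composed
-- with <$>/<*> are collected into one batch of requests.
type Request  = String
type Response = String

-- A computation is either finished, or blocked on a batch of
-- requests plus a continuation consuming their responses in order.
data Fetch a
  = Done a
  | Blocked [Request] ([Response] -> Fetch a)

instance Functor Fetch where
  fmap f (Done a)       = Done (f a)
  fmap f (Blocked rs k) = Blocked rs (fmap f . k)

instance Applicative Fetch where
  pure = Done
  Done f         <*> x      = fmap f x
  Blocked rs k   <*> Done x = Blocked rs (\resps -> k resps <*> Done x)
  Blocked rs1 k1 <*> Blocked rs2 k2 =
    -- the key move: both sides are blocked, so merge their requests
    -- into one batch and split the responses back out afterwards
    Blocked (rs1 ++ rs2) $ \resps ->
      let (r1, r2) = splitAt (length rs1) resps
      in  k1 r1 <*> k2 r2

-- Issue a single request (the one-element pattern is safe here
-- because we always hand back exactly as many responses as requests).
dataFetch :: Request -> Fetch Response
dataFetch r = Blocked [r] (\[resp] -> Done resp)

-- Two independent fetches, composed applicatively:
example :: Fetch (Response, Response)
example = (,) <$> dataFetch "friendsOf alice" <*> dataFetch "friendsOf bob"

-- Inspect the batch of requests the first round would perform.
firstBatch :: Fetch a -> [Request]
firstBatch (Done _)       = []
firstBatch (Blocked rs _) = rs

main :: IO ()
main = print (firstBatch example)
-- prints ["friendsOf alice","friendsOf bob"]: one round, two requests
```

The user never mentions batching or concurrency; it falls out of the instance. This is roughly the trick that a library can't pull off in languages where `<*>`-style composition isn't overloadable.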
Did you have a problem with namespace collisions of identical field names in records? Is that much of a problem in Haskell? How did you deal with them? Thanks.
To be completely honest, the namespace/module situation with Haskell could certainly be a _ton_ better, but after 8 years of it I can't remember a time when it was ever at the top of my mind as game-breaking. Occasionally quite annoying? Yes, most definitely. But I'd say there are many more annoying things day to day, and in any case, it is certainly a tradeoff I'll put up with for the returns.
That said, GHC 7.10 will ship with a new extension called `OverloadedRecordFields` that will allow you to have identical record field names for different datatypes. Yay!
I agree with the parent that it is a pain, but not very much of one. My pains in lacking other features when I move away from Haskell are far greater than my pains in name collision in Haskell.
I find records are used most heavily in web development, where you are pretty much just shuffling data from browsers to databases and back. But even there the field name thing doesn't pose much of a problem; I prefer defining my records in the module that handles the functions for them, so there's no issue with conflicting names anyway. I found the fact that 'id' is a standard library function to be a bigger minor annoyance.
As I understand it, this is an easy but still explicit concurrency construction, which is not what Haxl does. It's easy to imagine how something like the toy example of friend-list-intersection could be converted to explicit concurrency in languages with good concurrency support. This is an easy optimization to make by hand in the cases where concurrency is obvious at some layer or within some abstraction. The power of Haxl comes when the two (or however many) requests come from wildly different places in the AST. For example, if you combine the result of friend-list-intersection with some other numeric quantity that's the result of some other fetches. Haxl essentially performs this optimization automatically, over the entire program.
This will travel as far as possible through all paths in the AST collecting all IO to be performed in the first round (which, once fetched, will unblock more of the AST, and the process repeats until we have an answer).
I hang out with the Scalazzi and authors of FP in Scala on IRC. Most of them, including the original creator, would rather write Haskell and some are doing just that.
You would only choose Scala over Haskell if the pros (having the java library and running under the jvm) outweigh the cons (having the java library and running under the jvm) :-)
Are you employing the ideas behind reactive programming? And can you explain which types of monads you used, for what problems, and why? I am writing a paper on Functional Reactive Programming, and Haxl really made me curious. The paper (currently in German, but I'll translate it) proposes a new hypothesis that tries to shred FRP in general, by showing a novel approach that automatically solves some of the problems that naturally occur with FRP.
I am really interested in seeing how you solve problems for distributed systems with Haxl, how query sharding is handled, etc.
I wasted a whole day looking for Haxl online a few weeks ago, only to find out that it hadn't been released yet. The release really makes me happy :)
Query sharding is at the data source layer, which Haxl doesn't delve into. It's up to each data source integration with Haxl to do the appropriate routing/etc.
Are Bryan O'Sullivan and the team from his Haskell-based startup, which Facebook acquired in 2011, still there? I sat in on a class of his a while back and remember him ruefully laughing about having to use PHP now.
Is it like a query engine, where you work with the entire query up-front, apply transforms and build a query plan?
Or is it more like an event loop, where you run as far as you can until the code blocks on IO, batch up and send all the pending IO requests, and run further when the tasks you're blocked on resolve?
Part of the beauty is that the actual way IO (note: in this version, IO here means 'reads from the network', almost always) is scheduled is abstracted away such that we could go with either approach w/o impacting client code.
That said, the way it currently works is more like the first. You can think of the entire Haxl run (program) as an AST that is handed to the executor. It expands as much of the AST as possible (anything that's not IO), and anywhere it needs IO it enqueues those requests to be scheduled. Once it's explored as much as possible, it aggressively schedules the IO (deduping, batching, and overlapping the calls). Once it all comes back, it unblocks the AST where it can, and repeats the process.
This isn't necessarily the optimal scheduling (as you point out, unblocking each part of the tree as each result comes in might be better). It was specifically designed to make it easy to play with this kind of stuff later. Since the concurrency is entirely implicit, the implementation is entirely abstracted away.
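The expand/batch/unblock loop described above can be sketched with a toy free-monad-style `Fetch` type (again, illustrative names and a pure mock store, not Haxl's implementation): dependent fetches written with `>>=` take one round each, while independent fetches written applicatively collapse into a single round.

```haskell
-- Toy round-based runner (not Haxl's implementation): run as far as
-- possible, perform the pending batch of requests, unblock, repeat.
type Request  = String
type Response = String

data Fetch a
  = Done a
  | Blocked [Request] ([Response] -> Fetch a)

instance Functor Fetch where
  fmap f (Done a)       = Done (f a)
  fmap f (Blocked rs k) = Blocked rs (fmap f . k)

instance Applicative Fetch where
  pure = Done
  Done f         <*> x      = fmap f x
  Blocked rs k   <*> Done x = Blocked rs (\resps -> k resps <*> Done x)
  Blocked rs1 k1 <*> Blocked rs2 k2 =
    -- independent sides: merge requests into one batch
    Blocked (rs1 ++ rs2) $ \resps ->
      let (r1, r2) = splitAt (length rs1) resps
      in  k1 r1 <*> k2 r2

instance Monad Fetch where
  Done a       >>= f = f a
  Blocked rs k >>= f = Blocked rs (\resps -> k resps >>= f)

dataFetch :: Request -> Fetch Response
dataFetch r = Blocked [r] (\[resp] -> Done resp)

-- Run to completion against a mock store, counting IO rounds:
-- each Blocked node is one round of (batched) fetching.
run :: (Request -> Response) -> Fetch a -> (Int, a)
run store = go 0
  where
    go n (Done a)       = (n, a)
    go n (Blocked rs k) = go (n + 1) (k (map store rs))

mockStore :: Request -> Response
mockStore = reverse  -- stand-in for a real data source

-- Independent fetches compose applicatively: one round.
parallel :: Fetch (Response, Response)
parallel = (,) <$> dataFetch "ab" <*> dataFetch "cd"

-- A dependent fetch must wait for the first result: two rounds.
sequential :: Fetch Response
sequential = dataFetch "ab" >>= dataFetch

main :: IO ()
main = do
  print (fst (run mockStore parallel))    -- prints 1
  print (fst (run mockStore sequential))  -- prints 2
```

Because the scheduling lives entirely inside `run` and the instances, swapping in a smarter strategy (e.g. unblocking subtrees as individual results arrive) wouldn't change client code at all, which is the point being made above.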
Have a look at the SQLTap service written by the guys from DaWanda.com (https://github.com/paulasmuth/sqltap). It does basically exactly that for SQL queries but is implemented as a standalone Java/Scala SQL proxy server.
Interpreted code was no longer cutting it for perf reasons, and any time you create your own language you end up reinventing the entire tool chain (debuggers, profilers, etc.). Haskell provides so much functionality in the language itself and has mature solutions to the other issues plaguing us in FXL, so it was a natural choice.
Why do Haskell libraries on Hackage not come with even a single example, getting-started guide, how-to, or quick start? Nothing, really, just function declarations. This scares Haskell newbies.
The "documentation" on Hackage is almost universally just the haddock-generated files (which is why it's mostly just function declarations and type signatures).
Most libraries list a "Home Page" that more often than not includes more useful documentation (Haxl's, for example, has the things you've mentioned).
I concur that, most of the time, the documentation on Hackage isn't really sufficient, but I've found that for the most part I just use it to find the homepage, and then go there to read the actual documentation.
I agree that it would be nice if everything was all in one place.
"I agree that it would be nice if everything was all in one place."
I actually find "distilled reference with links to source" a fantastically valuable view. I've no objection to providing some sort of combined view, but let's not lose what we have in a quest for consolidation. I've no idea if that's what you meant or not, and don't mean to put words in your mouth of course, just expressing a concern.
I think fundamentally, Hackage is meant to be a centralized package repository. If you look at other similar projects, there seems to be no real consensus as to whether that should just be a launching-off point to the actual project page, or more inclusive.
When I originally wrote this, I was going to say "It's akin to CPAN", but then to make sure I wasn't misremembering, I looked at a bunch of CPAN packages and saw that they were all actually fully-documented (with examples and whatnot).
I think the advantage to having consolidation is that you can then at least try to enforce documentation standards (whether you should, is arguable). What's super frustrating is going to a Hackage page, finding the link to the project home page (frequently on GitHub), going to the GitHub page and then just seeing a barren directory listing of files.
I feel slightly uncomfortable making statements about how Hackage should be set up however, as it's like going to a soup kitchen and then complaining about the specific soup they've decided to give you. "Oh, you mean this community resource we've set up which allows anyone to contribute a package and have it globally available doesn't provide exactly the functionality you'd like? Please tell me more about how the community can respond to your whims."
Also, for what it's worth, all the Haskell libraries that I make frequent use of tend to have very good documentation (you could argue that's HOW they end up becoming the ones I make frequent use of).
It's definitely suboptimal, especially for newbies. The standard response (which I agree with completely) is that the types make API documentation so much more valuable that it's less commonly a concern for advanced users. So there's a bit of a squeaky wheel problem compared with, for instance, Ruby where docs are completely required to understand most libraries even slightly.
In many cases a few examples and type signatures are enough for the intermediate Haskell user. Virtually all packages for Haskell are free software though, so you can of course contribute documentation if you feel there ain't enough, because obviously if the current Haskell users don't need more docs they won't write it.
How do you guys batch requests in PHP? You don't, right? So this is an intermediate layer, basically, and it sends requests every few milliseconds and waits to batch things in between?
The system described here isn't directly involved with any PHP code.
Like all Facebook services, it is communicated with over the Thrift RPC system; it may have PHP (or any other language) clients, and may talk to other services using Thrift (or occasionally other protocols), some of which may use PHP.
If you want to have similar query parallelization magic for PHP+SQL have a look at the standalone SQLTap service written by the guys from DaWanda.com (https://github.com/paulasmuth/sqltap).
Cool! Are there any alternatives out there, especially ones that deal with fault tolerance when we want to establish connections with hundreds of varied databases?
How does the functionality of Haxl differ from a mature ORM system? I'm thinking about .NET Entity Framework + LINQ in particular, since it not only does the mapping but also assists in query generation and scheduling.
It would be cool if you could transform those tables into HTML tables. They would look prettier (no JPG compression noise) and would also be more accessible.
Well, that project, Q, just doesn't share the same thesis as Haxl. At all. They are two completely different projects; Q couldn't be used for what Haxl is intended.
If you had posted the link asking for a comparison, that would've been different, but your ignorance seemed obvious to me because of the haughty wink at the end of your posted link implying we were being let in on some kind of secret.
Here's a link to their paper (PDF): http://www.haskell.org/wikiupload/c/cf/The_Haxl_Project_at_F...
* - I am not sure if he's still committing, or if he's only doing application development. His accomplishments in Haskell land, though, are many.
Edited: I removed my comment about GitHub issues, seems it's a known problem. :)