Ditching 200K of C++ for 30K of Scheme (racket-lang.org)
124 points by swannodette on Dec 8, 2010 | hide | past | favorite | 52 comments



I wrote a small GUI based HTML scraping client about 5 years ago with PLT Scheme. It was a wonderful experience and building a basic cross-platform GUI was pretty easy given the great documentation those guys write along with the language.

The executable was small, and it was easy for the users to pass around a little 1MB .exe file on windows (also had a separate compiled version for OS X) and they used it for years after I left until the web pages it targeted changed (and I don't work there anymore).

If somebody is open to learning Scheme, you can't go wrong building something in Racket, I was pretty surprised what I accomplished in a month.


For other readers: PLT Scheme has become Racket (Ship of Theseus problem)


According to Wikipedia, the Ship of Theseus problem questions whether an object that has had all of its components replaced is still fundamentally the same object.

<pedantry> If each part is replaced in series, and the identity of the object remains the same after each step, then clearly the object remains the same. For humans, identity is thus based on recognizing what you have seen before; it acknowledges mutable state. In this case, if you consider the brand as a mutable property of the object, then yes, the identities of PLT Scheme and Racket are the same. </pedantry>


Yeah, sorry I didn't make that clear :) PLT Scheme has become Racket... even though it's still Scheme.

It looks to me like a lot of the core PLT guys were involved in the R6RS debacle, and didn't like what came out of the process so decided maybe they aren't quite an official "Scheme" anymore per R6RS. Oh the drama...


I don't think so. PLT Scheme was already moving away from R5RS. I think the renaming was to acknowledge that Racket isn't Scheme.

That said, I believe they have implemented Scheme in Racket. I think PG implemented Arc in it too. It also comes with Algol 60, I think.


"Finally, many Racket tools depend on Racket’s “eventspaces,” which are multiple process-like entities in the same virtual machine, each with its own GUI event loop. Implementing eventspaces on top of modern GUI toolkits turns out to be tricky, because the toolkits insist on a single event-loop per process and they cannot tolerate event-loop actions during certain callbacks."

I cannot wait until this is fixed. I can hardly use toolkits anymore because I keep insisting on programming in languages with sane concurrency stories, and that drives the toolkits insane.

(I fully acknowledge the scope of my request and further fully acknowledge that this will almost certainly require something written from scratch. And that writing a toolkit from scratch is a tall order. And that now's a bad time to start trying because the alternate concurrency stories are in flux and it's not really a great plan to write something that works in Haskell but can't work in Erlang or Racket. And also now's not the best time because the very graphical underpinnings in Unix are now being called into question and, given that this is a multiyear project, it could build on X only for X to be on its way out.

But still, I can't wait.)


It is fixed in Qt.

"Each QThread can have its own event loop. You can start the event loop by calling exec(); you can stop it by calling exit() or quit()." http://doc.qt.nokia.com/stable/qthread.html

I've used this successfully.


It sounds like if you use Racket 5.1, it is fixed! (In that you don't need to deal with that restriction.)

That being said, that certainly is still an obstacle if your preferred lang isn't Scheme/Racket. Perhaps their wrapper code could be factored out as a new abstraction layer for these GUI toolkits?


"We've reimplemented the GUI layer, which meant throwing out about 200,000 lines of C++ code that built on Xt, Win32, and Carbon. We've replaced that C++ code with about 30,000 lines of Racket code that builds on Gtk, Win32, Cocoa, Cairo, and Pango."

That's not the same thing as saying 200K of C++ equals 30K of Scheme.


Right, they could probably have gotten the same effect by staying with C++ and switching to Qt.


But then they'd have 80,000 lines of non-Racket code between the language and the OS's GUI. And they'd still have to build a compatibility layer between the existing GUI code and the FFI out to Qt.

Seems like this way they suffer through the platform specific pain themselves, but the bugs are all in a language they want to program in.


Of course not. The languages are unrelated, so no precise number of lines in one is equal to any precise number of lines in the other. It is a pretty impressive space savings nonetheless.


"Racket, not Scheme! 30k of Scheme wouldn't have been nearly as useful." — Shriram Krishnamurthi


While I don't doubt Racket means fewer lines of code, if I rewrote one of my older C++ projects in C++, I think I could cut down at least half of the code. Maybe more depending on the project. Rewrites have hindsight that you didn't have the first time.


Especially when you throw in going from Xt (!) to Cairo/GTK - you could probably save 50% of the line count right there.


I replaced about 25,000 lines of C++ with about 9,000 lines of Common Lisp. It runs almost as fast, and is incredibly easy to understand and modify.


There is a problem on Windows Vista x64 Professional:

ffi-obj: couldn't get "GetWindowLongPtrW" from "user32.dll" (The specified procedure could not be found.; errno=127)

=== context ===
D:\p\racket\collects\ffi\unsafe.rkt:176:2: get-ffi-obj*
D:\p\racket\collects\mred\private\wx\win32\utils.rkt: [running body]
D:\p\racket\collects\mred\private\wx\win32\sound.rkt: [traversing imports]
D:\p\racket\collects\mred\private\wx\win32\procs.rkt: [traversing imports]
D:\p\racket\collects\mred\private\wx\common\cursor.rkt: [traversing imports]
D:\p\racket\collects\mred\private\kernel.rkt: [traversing imports]
D:\p\racket\collects\mred\private\check.rkt: [traversing imports]
D:\p\racket\collects\mred\mred.rkt: [traversing imports]
D:\p\racket\collects\mred\main.rkt: [traversing imports]
D:\p\racket\collects\racket\gui\base.rkt: [traversing imports]
D:\p\racket\collects\drracket\drracket.rkt: [traversing imports]

[Exited. Close box or Ctrl-C closes the console.]


Please report bugs here: http://bugs.racket-lang.org/

Which version are you using?


The one from the website - http://pre.racket-lang.org/installers/full-5.0.99.4-bin-i386...

I've got it almost working by replacing GetWindowLongPtrW with GetWindowLongW and SetWindowLongPtrW with SetWindowLongW (you have to search in all the .rkt files). I also deleted the precompiled .zo/*.dep files (wasn't really sure whether I needed to).


There is a new build at: http://pre.racket-lang.org/installers


Gah, finally. People would judge Racket because of how DrRacket looked. Not anymore.


Does it look any different?


It morphs into the local GUI appearance.


A before-and-after picture comparison would go a long way; since I'm on OS X, I can't see much difference.


Isn't this more about how good Cairo and Pango are, rather than C++ vs. Scheme?


What I wouldn't give to trade the thousands of lines of C++ code in this legacy app I have to maintain for Racket... can't wait to try the new release. :)


This is THE Lisp I recommend to people who are curious about Lisp.


High-level languages require less boilerplate, news at 11.

You could probably replace it with 10k lines of Python or Ruby too, but it would run like shit compared to the C/C++ version. But because this is HN and they are using Lisp, this is news.


Define "run like shit". Would 1.5x - 3x the C++ speed be acceptable if your code was 70% more compact and proven secure, not to mention a joy to write?

Not everyone trades efficiency for reliability and comfort. Some people really get to have their cake and eat it too.


Let's be realistic here:

http://shootout.alioth.debian.org/u64/benchmark.php?test=all...

I would characterize the differences vs. C++:

1.) substantially slower in all but one case
2.) almost always uses more memory, sometimes drastically more
3.) marginally more compact


Let's be realistic here: Code wrapping graphic toolkits will not be a hotspot. Any heavy graphical computation is likely to be done in the graphic toolkit (or rendering engine, etc.) and called from Racket. That's the whole point.


I think your advice holds in general, but one interesting exception can be implementing a model, in the MVC sense. This can require a lot of communication across the language wrapping, and definitely become a hotspot. This is less of a problem in toolkits that retain state, but then you pay a cost in maintaining duplicate representations.


You linked to the Alioth Programming Language Game — even they admit it's very flawed for determining much beyond how certain programs implementing certain algorithms perform. The game's highly contrived circumstances are considerably less realistic than the parent's anecdotes or the OP's report of huge space savings with little slowdown. In particular, many of Lisp's code size benefits lie in scaling better than other languages. The Alioth game is possibly a pathological case for the things being measured here.


I don't want to get into a language debate here, but... if you put random people's anecdotes above objective measures then you can pretty much make any point you want to make. Alioth may be flawed, but it's not necessarily synthetic, in the sense that the benchmarks are real problems or are very similar to real problems. More importantly, it's actually an objective measure of language performance, compactness, etc. rather than people's "impressions".


I have no interest in a religious war over languages either, so no worry about that. I just don't like to see Alioth's one small data point extrapolated into a trend.

The benchmark tests are toy programs that solve a small set of problems under constraints that create a known bias in favor of languages like C++. They are an objective measure of something, but that something is not language performance and compactness in real-world, non-toy programs that solve problems unlike the ones in the game.

Impressions are admittedly not the best way to gauge such things, but they're better than relying on a test that does not make any attempt to address the question at hand.

My personal heuristic is to assume Alioth is roughly right with a largish margin of error, and then look for anecdotal evidence of specific areas that the game does a particularly poor job of reflecting. For Lisps, code size appears to be a large blind spot based on everything I have seen. Lisp's ability to create mini-languages and very high-level abstractions — a large source of its brevity — is pretty much useless on the scale of the Benchmarks Game.


The explicit bias is that the benchmarks game measures time used by programs. If that bias favors languages like C++ so be it.


I don't know if you're trolling or cocksure, but no, that is not what I was talking about. I said the constraints create a bias, not the measurements themselves. For example, the performance measurements are biased against most garbage collected languages because the rules don't allow any options to fine-tune the GC's behavior (which can make a big difference). Obviously, there are no equivalent rules forbidding people from fine-tuning C++'s manual memory management.



Simply looking at the benchmarks game website shows that your general claim "the rules don't allow any options to fine-tune the GC's behavior" is wrong.

Do you have any other claims that can be checked?


No.

"They" say your thinking is broken:

- if you think it's okay to generalize from particular measurements without understanding what limits performance in other situations;

- if you think it's okay to generalize from particular measurements without showing the measured programs are somehow representative of other situations.


I don't have any experience with Scheme, but the argument was on whether a GUI written in Scheme would be unacceptably slow. A synthetic benchmark measuring AFAIK mostly number crunching suggests nothing about GUI performance.


The shootout is pretty much a worst case for Racket, code size wise.


How do you mean? Can you demonstrate this?

Not attacking you, I'm genuinely curious.


Define "run like shit". Would 1.5x - 3x the C++ speed be acceptable if your code was 70% more compact and proven secure, not to mention a joy to write?

No...? Is there another acceptable answer for this? In which cases is such an "upgrade" slowdown acceptable to the customer?


In the case that your app isn't CPU-bound, which is many cases.

[EDIT: Changed most to many. But I think arguments about whether or not speed matters are silly in the general case. Arguments about tradeoffs are always going to be so specific to your app that a general argument is fairly meaningless.]


Most cases for whom? There are plenty of fields in which being cpu-bound is the norm rather than the exception. There are a whole load of assumptions that go into any general advice like this, consider making fewer of them today!


That's like objecting to the statement "Most people have two arms, two legs and one head" by replying "For whom? There are plenty of demographics, such as amputees, where having a different number of arms or legs is the norm!" The fact that unusual things are normal for some subset defined by those characteristics is tautological. It doesn't make the general statement false.


> In which cases is such an "upgrade" slowdown acceptable to the customer?

In exactly those cases where the customer cares more about the added benefits (e.g. security, maybe more features) than speed.

For example, I use Firefox even though I've noticed that Chrome is snappier, because Firefox comes with add-ons that provide features that I cannot do without. In this case, I have made a tradeoff between speed and features.


Your customer doesn't give a crap if your program takes 0.03 seconds longer to execute, but he does care if you can't get the bug that keeps bringing his system down fixed in a timely fashion because your codebase is overwhelming.

CPU is vastly cheaper than programmer time. There is a point where the tradeoff becomes unprofitable, but to prioritize code speed over all else is a recipe for terrible software.


When we're specifically talking about how fast your typical GUI app can render toolbars and such, and the app in question sits idle waiting on user input 99% of the time, then that's a wonderful tradeoff.

If you're doing number crunching or heavy 3d, then it requires further thought, but if the code in question isn't one of your CPU usage hot points the conciser code is totally worth it.


There is no general useful answer to your question.

It depends on how the slowdowns translate to absolute slowdowns in human perceptible time. Which depends very much on the kind of problem, and whether a piece of code is on the "critical path" for anything.


1) When the customer can't tell the difference
2) When the customer benefits outweigh the perceptible slowdown
2a) When the faster code is broken (security or other bugs)




