I'm the original author of pixie, and yeah, I'm a bit surprised to see this hit HN today.
It should be mentioned that I put about a year of work into this language, and then moved on about a year or so ago. One of the biggest reasons for my doing so is that I accomplished what I was looking for: a fast lisp that favored immutability and was built on the RPython toolchain (same as PyPy). But in the end the lack of supporting libraries and ecosystem became a battle I no longer wanted to fight.
Another goal I had was to see how far I could push immutability into the JIT. I learned a lot along the way, but it turns out that the RPython JITs aren't really that happy with VMs that are 99.99% pure. At one point I had an almost 100% immutable VM running for Pixie...as in each instruction executed created a new instance of the VM. It worked, but the JIT generated by RPython wasn't exactly happy with that execution model. There was so much noise in the maintenance of the immutable structures that the JIT couldn't figure out how to remove them all, and even when it could, the JIT pauses were too high.
So anyways, after pouring 4 hours a day of my spare time into Pixie for a full year, I needed to move on.
Some other developers have commit rights and have pushed it along a bit, but I think it's somewhat a language looking for a use case.
And these days ClojureScript on Node.js could probably be made to handle most people's needs.
As a somewhat different data point, we've been developing Pycket, an implementation of Racket on top of RPython, for the past 3 years, and while it faces many of the same challenges, we've been very happy with the results. The JIT can remove almost all of the intermediate data structures caused by the functional nature of the language, and we support tail calls and first-class continuations. Overall, Pycket is almost as fast as Chez Scheme on average, and faster than every other Scheme system we've compared with.
I hope this continues to be developed. I love coding in Racket. I remember when I first started coding in Python years ago and then ran into Racket (wanting to learn functional programming), and I ended up liking Racket even more.
Pycket has different performance characteristics from many of the AOT systems we compared against. On average Pycket is ~2x faster than the Racket VM, ranging from ~3x slower to ~300x faster depending on the benchmark. Last I checked, Pycket's mean performance was about 10% slower than Chez Scheme. The cost you pay for this performance is a rather significant warmup time for many benchmarks.
As for libraries, the only major feature that Pycket does not support is Racket's FFI; most built-in functions can be implemented pretty easily if missing.
Clojure developer here, was interested in Pixie since you announced it.
The only reason why I have not tried it out yet is
* No mention of installing it with `nix-env -i pixie`, `apt-get install pixie` or `brew install pixie`. Sorry, but I'm that lazy :(. If you make it Linux-only, a mac user can do `docker run -ti -v "$PWD":/src pixie` as well.
* No mention of how to connect from your editor. A small Emacs mode which lets you C-x C-e is all that would have made me use it.
It's lazy and dumb, but that is the truth. So the only reasons I'm not a Pixie developer full time are those two UX things.
> And it's small. Currently the interpreter, JIT, GC, and stdlib clock in at about 10.3MB once compiled down to an executable.
Oh how the definition of "small" has changed. I actually would like to know how they managed to make something like this so big.
To compare, LuaJIT is about 400 KB, and that includes the Lua standard library, a JIT almost certainly more advanced than Pixie's current one, an incremental GC, and a C FFI.
Neither compilers (well, except C++ ones), nor stuff you usually find in standard libraries, nor a GC should require much code to implement, relatively speaking (e.g. compared to a WYSIWYG word processor). These things are usually small. The compilers for almost every language were < 1 MB in size for the longest time.
I am not saying that Pixie being 10 MB in size is a problem. We have a lot more bandwidth and disk space nowadays, 10 MB is nothing. My point is that a "JIT, GC, and stdlib" package weighing this much cannot claim to be "small" for what it does.
I agree with your point completely. I just want to add that throwing out raw numbers like "10.3MB" or "400 KB" is not very precise. Binaries can vary immensely based on whether they have debug info, string tables, etc. or whether these have been stripped away.
I wrote a size profiling tool that can give much more precise measurements (like size(1) on steroids, see: https://github.com/google/bloaty). Here is output for LuaJIT:
In this case, neither binary had debug info. Pixie does appear to have a symbol table though, which LuaJIT has mostly stripped.
In general, I think "VM size" is the best general number to cite when talking about binary size, since it avoids penalizing binaries for keeping around debug info or symbol tables. Symbol tables and debug info are useful; we don't want people to feel pressured to strip them just to avoid looking bad in conversations about binary size.
It's worth noting that the design of the RPython JIT will always result in a large amount of static data in the resulting binary. The RPython translator basically generates a bytecode representation of most of your interpreter and bakes that into the binary. You can probably expect at least a 2x increase in the size of your binary. As a reference point, after stripping, [Pycket](https://github.com/pycket/pycket)'s binaries are 6.1 MiB without the JIT and 16 MiB with the JIT.
> To compare, LuaJIT is about 400 KB, and that includes the Lua standard library, a JIT almost certainly more advanced than Pixie's current one, an incremental GC, and a C FFI.
Back in the day, Smalltalk was criticized as bloated because one ended up with stripped binary+image of about 2MB. Someone around that time got SqueakVM+image down to just under 400k. There were specialized Smalltalks in company R&D that got their image down to 45k.
My involvement with Smalltalk started in the 90's. Then it still had high penetration in the Fortune 500, and it was a niche language for complex financial programs. It was also used in Energy.
Well, for comparison, "Hello World" in Common Lisp using SBCL on x64 Linux creates a ~70 MB binary, but nobody would ever claim it's small :-)
Unfortunately, for an arbitrary CL program it's impossible to tell for sure how much of the CL compiler and standard library it will need at run time, so SBCL takes the easy route and just includes everything. Some of the commercial Lisp compilers are a lot smarter at stripping things out and can create significantly smaller executables.
I usually avoid the issue altogether by not building binaries and running most things from the REPL or using a "#!/usr/bin/lisp --script" shebang line.
SBCL does not create binaries per se, but dumps the program image. While it practically is a binary, it's not equal to an executable that a conventional compiler/linker produces.
That distinction isn't important here, though. It is the reason the resulting file is so big, but for all practical purposes it's a binary that statically links the full CL runtime.
Because no one wrote one. There were some attempts but nothing which would work now. Delivering small applications does not seem to be a focus of SBCL users.
Commercial Lisp implementations like LispWorks and Allegro CL have a treeshaker for delivery, but that's not too surprising, since these products are for developers of applications and the users pay for such features.
In Lisp, eval does not take a string as its input, but a form, i.e. a linked list containing the program to execute. Maybe you're confusing it with JavaScript or Python's eval (or the like), where the function both reads in a string and runs the program it parsed from that string. Different Lisps have various functions to read a string into a form.
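To make the distinction concrete, here is a small Clojure sketch (Clojure behaves the same way as the Lisps described above on this point):

```clojure
;; eval operates on a form (a list), not on a string
(eval (list '+ 1 2 3))            ; => 6

;; to evaluate program text, you first read it into a form
(def form (read-string "(+ 1 2 3)"))
form                              ; => (+ 1 2 3), an ordinary list
(eval form)                       ; => 6

;; passing a string to eval just evaluates the string itself,
;; and strings are self-evaluating
(eval "(+ 1 2 3)")                ; => "(+ 1 2 3)"
```

So the JavaScript/Python-style `eval` corresponds to the composition `(eval (read-string s))`, not to `eval` alone.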
I think the symbol plus gets evaluated to #<function +> and is then called with 1 2 3 - CL will use apply, of course; not sure about Scheme and funcall. I'm totally willing to accept that my mental model is wrong; I don't have an interpreter handy.
You could, of course, look for the handful of ways to bind symbols to functions, maybe do constant folding, and include a minimal set if the compiler can prove what's actually needed.
But in general I think tree shaking is hard when you can load things like that dynamically.
(funcall #'+ ...) does not do a symbol lookup and does not need to retrieve the function from a symbol in Common Lisp. (funcall '+ ...) would retrieve the function from the symbol.
Treeshakers are generally used for application delivery in Lisp. When you do that, you usually limit the amount of runtime dynamism you rely on. Typically you can tell the treeshaker what to remove or what to keep: the compiler, the symbol table, debugging info, etc.
Allegro CL and LispWorks have extensive facilities for this.
More briefly, in a half decent implementation, the + symbol is involved in the processing of (funcall #'+ ...) in the same ways and at the same times as it is involved in the plain call (+ ...).
To me Rebol/Red is more of a Lisp than some other Lisps out there. It doesn't have as many parens, true, but the idea that code is data, and the DSL possibilities, are huge things.
First I've heard of Red. Does it implement 'readable' [1] or is it a [parallel school of thought] (can't think of right phrase here but you get what I mean)?
Also has anyone tried to create a 'readable' [1] flavour of Clojure yet? If not, why is that, do lispers consider all the parens to really not be a barrier?
Red is basically Rebol rewritten with an important distinction. It comes with a sister language called Red/System that is basically a lightweight static systems language like C, but a lot closer to Red. Therefore, you can run the interpreter or drop down to native code when you need it. The 1.0 release should AOT compile what it can, then JIT, then lastly interpret what it has to. It is homoiconic like Lisp, but not a Lisp. There is no install, just a 1 MB interpreter and compiler that can target a lot of architectures. The draw GUI DSL is pretty amazing. There is a Rebol YouTube vid (Red is 98% compatible) where a guy makes a zillion little cool apps with only a few lines of code each.
Nope, it has its own notation, which I find very close to the Smalltalk one. For example:
red>> a: [1]
== [1]
red>> pick head append a 1 + 1 2
== 2
red>> a
== [1 2]
So the second expression is a good example of how things work; this is how it looks if I put in the parens (which are unnecessary in this case):
pick (head (append a (1 + 1))) 2
Here the interpreter/compiler knows how many arguments each `word` (you can think `function`) needs: `append` needs two, a series and an item to append; `head` only one, a series; `pick` needs a series and an index to get an element. But what about `+`?
red>> type? :+
== op!
Operators are infix things of two arguments, and they get priority over function calls; that's why `append` is called with `a` and the result of `1 + 1`, and not with `a` and `1`.
It looks a bit tricky in the beginning, I understand, but it leads to very compact and easy to read code.
This is a brief explanation which should help you start reading and writing Red/Rebol code :)
No, the parens don't constitute a barrier of any kind, really. Reading lisp fluently requires very consistent indentation, though. Then you see where an expr starts and where it ends (with nestings and all) without counting any parens whatsoever.
When writing Lisp your editor should help you at least a bit to make things enjoyable. Editors can help a lot, but don't have to do much; I found for myself that if the editor just highlights matching parens, I'll be very effective.
Since discovering paredit-mode, I have found a whole new love for Lisp syntax, and now I won't write Lisp without it.
I have tried a structure editor for Haskell but I found it pretty counterintuitive. I wonder if structure editing just assumes a language with very little syntax.
They said that the interpreter was native. It is - it's AOT compiled. They didn't say that your program was natively compiled. Your program is interpreted - by the interpreter - the native interpreter - and then JIT compiled by meta-tracing the interpreter.
Their terminology is totally consistent with how the field uses these terms.
This technology operates at multiple levels of meta-implementation, so it is easy to get confused what is tracing what, and what is implemented using what at a given time.
They mention Clojure, and the Clojure interpreter is a Java program, which is not native. So there is nonzero information in mentioning that the interpreter is native.
Sorry, but you have absolutely no idea what you're talking about. Clojure produces Java bytecode, which is interpreted (and JITed) by the JVM, which is a native program.
Well yes any real executable is native. The perl executable is indeed a native program. It's a native interpreter for Perl.
But there are also interpreted interpreters, aren't there? Which aren't real executables, and aren't native. It would be possible to write an interpreted interpreter for Perl, maybe in a language like Ruby. Jython is a real example of an interpreted interpreter (if you ignore that the JVM has a JIT, but it isn't AOT, which you've said you think is an important criterion).
And so it isn't useless information.
The distinction is particularly relevant here because the RPython technology they are using to build their interpreter means they can either interpret their interpreter using the Python interpreter, or they can make a native interpreter by compiling their interpreter to native AOT. That's probably why they used that particular wording.
So, again, not only is their terminology consistent with the rest of the industry, they are also making a specific and interesting point here, and it isn't useless information.
I think it's meant as in comparison to Scala and the like, which run on the JVM, and thus aren't "native" in the same way (right? or am I misinformed about Scala?). There's a lot of languages that sit on top of the JVM at this point, so they might have seen it as a distinguishing characteristic.
Sorry, no, this is nonsense. Scala source code is compiled to Java bytecode, which is interpreted and JITed by the JVM. Pixie source code is compiled to the Pixie bytecode which is interpreted and JITed by the Pixie VM. chrisseaton claims that "native" refers to the interpreter, but this is rubbish ... that's not what the industry means by "native".
Not strange at all. The main language inspiring Pixie is Clojure, whose interpreter runs on the JVM. I might even go so far as to say that the main reason to use Pixie is because you want to write Clojure but can't afford to use a JVM for the task at hand.
If we're going by the traditional idea of "bytecode interpreter" or "threaded interpreter", clojure is emphatically not one of them. It JIT compiles the lisp to JVM bytecode; the equivalent in Pixie would be JIT compiling to native code.
IIRC with RPython/pypy the two techniques are interleaved, but most users I've seen opt for some JIT optimization.
Sorry, but that's nonsense. "native lisp" means that the lisp source is compiled down to machine code. To claim that an interpreter is "native" is to misuse the terminology.
Probably the language that I'm most looking forward to. It'd give you the ease of python with the speed of a native language and the best GUI DSL I've ever seen.
Edit:
Agreed that I don't like the author's use of native either.
Any language claiming to be a fast Lisp should be required to show benchmark comparisons against SBCL and a few commercial Common Lisps.
A native compiler for Clojure would be an interesting project, but completely new languages competing on speed or size are going to have a really tough time beating the existing Common Lisp implementations, not to mention the CL library ecosystem.
Not to say the CL library world is very big compared to Python or Javascript, but it has most of the important bases covered, and it's certainly bigger than a brand new language like Pixie.
Alas, this ambitious project appears to be not currently under active development.
My largely uninformed armchair opinion as to why is that the author is very performance-driven, and in the end it's very difficult to beat the JVM performance-wise. Lesson: if you want high-perf Clojure, you already have it on the JVM.
Personally, I think there's room for a simple small native Clojure implementation where performance is not top-priority. Small footprint, quick startup, access to native C libs. Still holding out hope for that one.
I would love to write a JIT for it, but currently it's a half-done interpreter. You compile Clojure to bytecode (of our own design, see clojit-doc) and run it on a C interpreter (we started in Rust, but the pre-1.0 changes and some other stuff killed that).
All of this is inspired by LuaJIT; specifically, my goal is to work on a tracing compiler. It just takes too much time to get there.
I'm generally in demand of small, quick-startup, easy-to-embed languages that aren't Lua (I can't stand Lua; I feel like every time I've had to deal with Lua code it was far more painful than it reasonably should have been). There's mruby and maybe a couple others. I was hoping Pixie might be another contender in that space, but it doesn't seem to be after all (or if it is, it's very poorly documented to that effect).
I've been tinkering with some ideas for another language in that space (the Crafting Interpreters book is looking to be pretty helpful in getting me there: http://www.craftinginterpreters.com/), but it'd be really nice if there were enough options for me to not feel the need to create my own.
A list that I maintain might help you find one: https://github.com/dbohdan/embedded-scripting-languages. Personally, I am a fan of Jim Tcl (http://jim.tcl.tk), especially for when you need a small interpreter that knows how to interact with its Unixy environment (i.e., the file system, processes, sockets and UTF-8 text data) out of the box.
In the past, I have been very happy with Gambit Scheme for building small pre-compiled executables for command-line utilities, etc. A lot of people use Chicken Scheme for the same purpose.
It is not written in Python, it is written in RPython:
> So this is written in Python?
> It's actually written in RPython, the same language PyPy is written in. make build_with_jit will compile Pixie using the PyPy toolchain. After some time, it will produce an executable called pixie-vm. This executable is a full blown native interpreter with a JIT, GC, etc. So yes, the guts are written in RPython, just like the guts of most lisp interpreters are written in C. At runtime the only thing that is interpreted is the Pixie bytecode, that is until the JIT kicks in...
In a Lispy context, I find brackets confusing. But then again, I don't know Clojure.
(To me, it is a context thing, I think - in languages like C/C#, there are parens, brackets, braces and even angle brackets flying around, and it is not a problem at all.)
What I meant was that, at a glance, [] just means the same as '().
I was thinking about it over the day, and realized maybe having special syntax actually is confusing - maybe "normal" lisp with only parens is better. Thanks for the eye-opener.
The idea behind the Lisp reader approach is that custom syntax is used for reading/printing objects of different types, not to demarcate syntactic elements. For example, double quotes are for strings, #P"" will read pathnames, and so on. In Common Lisp, some characters like [ and { are reserved for the user, meaning that no conforming implementation defines a custom syntax based on those characters. And you can define [a b c] to mean (vector a b c), which will build a vector, when executed, to hold the current values of a, b and c. The existing #(a b c) vector syntax is a literal vector that contains symbols.
From this point of view, for Clojure, I don't think it is a bad idea to have a short syntax for vector, set and map literals. But then, those objects are used when representing code, not because it has real added value but because the visible syntax is a little bit nicer (maybe, maybe not). That IMO complicates tools that work with code and does not really fulfill a practical purpose. If it were for practical reasons, I think bindings would have been better defined as maps: as far as I know, they seem to fit more naturally than vectors for this task. For example:
(let {a 20 b 10} ...)
Somehow when compiling or interpreting the code, you could do "(get symbol env)" where "env" is the surrounding lexical environment associated with the let, which would be computed partly from a parent lexical environment and {a 20 b 10}. The environment and the map could be of the same type and be easy to combine.
But there is no such consideration, and it is actually a good thing that there is no link between how the code is represented and how it is interpreted (also, there can be different interpretations).
Basically, I find Clojure a little bit confused about its use of external data representation for code.
I would have preferred a simpler syntax for code and keeping those notations for data, where they are self-describing without additional context.
Vectors eval to themselves, while lists eval to a function/macro/special-form call.
Vectors also differ from lists in that they are ordered and indexed, so in macros and special forms they tend to be used to represent positional bindings.
This has nothing to do with performance.
Also, there is no such clear distinction in how vectors and lists are used. You still require context to know whether a vector evaluates to itself or not, or whether a list is a call. Without context, you can't say, looking at [x y], which of x and/or y is being evaluated.
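A small sketch of that context-dependence in Clojure (plain clojure.core, nothing assumed): the same vector literal means three different things in three positions.

```clojure
(def x 1)
(def y 2)

;; evaluated position: the vector evaluates to itself,
;; and the x and y inside it are evaluated
[x y]                        ; => [1 2]

;; binding position in let: the vector is not evaluated;
;; x and y here are names being bound, not references
(let [x 10 y 20] (+ x y))    ; => 30

;; quoted: nothing inside is evaluated
'[x y]                       ; => [x y]
```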
I use [] only in certain spots for my scheme code. For binding variables in a let-block and for cond clauses.
(let ([a 5] [b 6]) (display (+ a b)))
I find this easier to parse mentally, but I can see why one wouldn't do it, especially for cond clauses where you might end up having to actually browse parens to add or remove before or after the ].
I use paredit now, so there is really no need for me to do think about parens much at all, but old habits die hard.
I found a useful way to deal with startup times and enable fast command-line-driven scripting. Start up Clojure and create a socket REPL server. Write a bash script that passes a string of Clojure code to netcat, which feeds that into the socket REPL server. All the Clojure code embedded in the string does is load a path as a clj file and bind any command-line arguments to command-line-args. The arguments are pulled by the bash script and inserted in the string of Clojure code. With the bash script in your path, you can now add #!/usr/bin/env name-of-bash-script-in-path at the top of any clj file, make the file executable, and then execute it instantly like any other bash script, but now with the full power of Clojure.
I found it useful to override the print and prompt method so that they are silent, requiring the script to explicitly print to stdout/stderr if desired. I reuse the socket-repl reader to enable exiting via :repl/quit which is echoed in the bash script after the script is loaded.
The obvious downside is you have to start the "server" before any of this works, but I think the main interface limitation was the inability to interact with Clojure from the command line and with other command-line tools.
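For reference, the server half of that setup can be as small as starting the socket server that ships with Clojure 1.8+; the server name and port here are arbitrary examples:

```clojure
(require '[clojure.core.server :as server])

;; start a socket REPL on port 5555; anything piped to that port
;; (e.g. via netcat) is read and evaluated by this already-warm JVM,
;; avoiding a fresh JVM startup per script
(server/start-server {:name   "scripting"
                      :port   5555
                      :accept 'clojure.core.server/repl})
```

The bash wrapper then just becomes a `nc localhost 5555` invocation with the generated load-and-bind string on stdin.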
If this is a major concern for you, there are several ClojureScript projects (Planck and Lumo) that have very fast startup. JVM Clojure is a bit faster than jsc/node ClojureScript (~2x IIRC), but that shouldn't matter much for scripting/automation and lambda functions or equivalents on other clouds.
A big one is persistent data structures. Where 'persistent' may not mean what you think it means.
In Clojure it is impossible to surgically modify a data structure. That is, you can't do something like:
(SETF (CAR (CDR x)) 'foo)
which would alter a data structure.
You can modify a data structure, but the modification returns a new data structure; the old one remains (until nothing references it and it can be GC'd).
All of the common data structures have this property. Sequences (e.g., lists), arrays, maps and sets. If you change the 50-thousandth element of an array, this returns a new array. The old array is unaffected. Yet it gives the performance you expect of an array. (Meaning no apparent cost of copying.)
If you're writing a search procedure, it is trivial to transform one chessboard into a different chessboard, but without concern about the cost of copying (close to zero), or having altered the original value (you haven't). Other variables that have a pointer to that first chessboard don't see any changes.
There are other things such as a great story about concurrency.
The data structures in Clojure are built on shallow 32-way trees. When you "change" a map or a vector, the algorithm only copies from the root node down to the parent of the leaf you're changing. That's log32(N) copied nodes in a tree of N nodes total.
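Here is a minimal sketch of that behavior using only clojure.core (the sizes are arbitrary):

```clojure
(def v (vec (range 100000)))       ; a 100k-element persistent vector

;; "changing" index 50000 returns a new vector...
(def v2 (assoc v 50000 :changed))

(nth v2 50000)   ; => :changed
;; ...while the original is untouched
(nth v 50000)    ; => 50000

;; thanks to structural sharing, v2 reuses almost all of v's tree:
;; only the path from the root down to the changed leaf is copied,
;; so the cost is logarithmic (base 32) in the size of the vector
```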
I'll give you another. Laziness. It's built in. It's trivial to create lazy lists that generate their contents as you walk down the list. Even infinite lazy lists. The list of all prime numbers. The list of all Fibonacci numbers.
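A minimal sketch in plain Clojure, building the infinite Fibonacci sequence mentioned above:

```clojure
;; an infinite lazy sequence of Fibonacci numbers:
;; iterate produces [0 1] [1 1] [1 2] [2 3] ..., and we keep the firsts
(def fibs
  (map first (iterate (fn [[a b]] [b (+ a b)]) [0 1])))

(take 10 fibs)                  ; => (0 1 1 2 3 5 8 13 21 34)

;; elements are only computed as they are demanded,
;; so filtering the infinite sequence is fine
(take 3 (filter even? fibs))    ; => (0 2 8)
```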
You can use map, which, if I remember my CL, is like MAPCAR. But instead of map, you can use pmap, which will do the processing on all of your CPU cores.
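A minimal sketch of the map/pmap swap (plain clojure.core; the 100 ms sleep just stands in for real per-element work):

```clojure
;; a deliberately slow function: ~100 ms per element
(defn slow-inc [n] (Thread/sleep 100) (inc n))

;; map runs sequentially, so 8 elements take roughly 800 ms
(time (doall (map slow-inc (range 8))))

;; pmap farms the calls out to a thread pool sized to your cores,
;; so on a multicore box this finishes in a fraction of that time
(time (doall (pmap slow-inc (range 8))))

;; pmap returns the same values in the same order as map
(pmap slow-inc (range 8))   ; => (1 2 3 4 5 6 7 8)
```

Note that pmap is itself semi-lazy and only worthwhile when the per-element work dwarfs the coordination overhead, which connects to the caveat below.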
On the one hand, when you want to parallelize, you are delegating tasks to workers and you want them to do what they need, independently of you. But on the other hand, laziness introduces a dependency on you, because workers cannot produce a result before they know you really need it. This basically limits parallelization, and that's why I am not sure Clojure's pmap is a good general solution.
As someone who loves both PyPy and Clojure, I wish Pixie was right up my alley. But I don't understand why the choice was made to be ... sort-of like Clojure, but not all the way, as opposed to just-another-Clojure-implementation. Why can't I have a .cljc file that runs on Pixie?
(Not voluntelling halgari to do things! I would just like to understand.)
I tend to agree... but doesn't 'all the way Clojure' imply at least some exposure of an underlying runtime, be it either Java (Clojure) or JavaScript (ClojureScript)?
I'm admittedly a bit behind the curve wrt the latest developments in the Clojure along these lines.
Yes! Being able to call PyPy things natively would be a godsend; PyPy is a great runtime and already has a good Python implementation. It has significantly fewer compatibility issues than, say, Jython.
As always, for all these small languages (MicroPython comes to mind, but even "real-world" languages such as Lua) - unless they grow a debugger, they'll always be silly toy languages nobody can use for serious work.
I'd settle for a gdb backend, really. But printf-debugging is unacceptable.
I like it, will definitely take a look at it. The name is also good; it reminds me of its cousin PicoLisp, and the website is nice. Now we need libraries, many of them, and better docs. Good work!
What's the state of development of this? Did somebody use it on a Raspberry Pi? I want to use this over Clojure (only on the Pi) because I have read Clojure is slow on the Raspberry Pi (even the 3; not sure how true that is).
Also, how fast is this? How does it compare to other Lisps (or Schemes) in terms of speed? Can someone port the benchmarks to https://benchmarksgame.alioth.debian.org?
I posted the link, but am not one of the developers. This got a little attention a couple of years ago, and I thought to look it up today. It has a nice website now at least. I really like the idea behind it (small install, lightweight, fast), but don't know how effective it will be without a larger community. I can honestly do without a lot of libraries as long as there are decent built-ins.
I really wish Pixie just had a REPL and compiled to native code via LLVM so I could give coworkers a .exe.
Edit:
Just looked at the github page. It doesn't look very active to me although I wouldn't say dead. The only small lisps I know of in development are Picolisp and Newlisp.
Do you use it though? I really like the concepts, but the speed could be better (I know Alexander talked about this), and docs could be more beginner friendly. It's a small expert community.
I have used it in commercial embedded hardware shipping tens of thousands of units, in the USD 1000 range. Speed is excellent for the most part - where it's not it's trivial to call out to C libraries. Agree about docs but what would really help at this point would be more QA on Stackoverflow. And a good Windows port.
Agreed on the Windows thing. I tend to stay away from Cygwin. I like the new Picolisp site btw. The code on Rosetta code is terse, and uses slightly different idioms than I'm used to.
Cygwin is great for processing data and generally have a lot of tools one might be used to from Linux. But when targeting Windows, that is, creating a program FOR Windows, maybe for general distribution, cygwin feels very cumbersome to me. PicoLisp is MIT licensed and small, so could be an integral part of a Windows program. PicoLisp has promise on Windows, but someone would have to step up and maintain a Windows package for it. I have been daydreaming about it but I have too many half-finished projects under my belt to fool myself into starting that too.
Come to think of it, some docker and other fancy container magic could also help adoption. And other such things people use nowadays. Maybe a nodejs integration? Stuff like that.
They say WSL apps can not interact directly with Windows apps. So yes and no to your question. I would say it would be similar to running in a virtual machine but more integrated.
I have used Chicken Scheme on Raspberry Pi to good effect. Clojure is likely slow because the JVM runtime requires much more disk access than compiled-to-C systems.
Have you tried ClojureScript on Node.js for the Raspberry Pi? We have used Node.js on the Raspberry Pi with success. I've wanted to try ClojureScript on it, but our shop doesn't use it. I imagine it would work great with its good JS interop features.
I run some very simple Clojure [1] on an RPi 2 as part of a home monitoring system.
Startup is considerably longer than on other platforms (30 sec or so), but I haven't had any issues with runtime performance, and having a Lisp running on what is essentially an embedded platform is very useful.
(It'd be easy to make the argument that my agent code doesn't do much, but it doesn't diminish the fact that Clojure is able to usefully work on an RPi. :-))
Not that I know if it helps, since you seem predisposed to Clojure, but CHICKEN Scheme on a Pi also works quite well. We even have direct bindings to GPIO and the like, since someone has gone ahead and packaged a library that speaks to the GPIO through the wiringPi project (http://wiki.call-cc.org/eggref/4/raspberry-pi-gpio).
Why does the web site refer to "Lisp interpreters?" That phrasing perpetuates mistaken assumptions about Lisp-family languages, both about their implementation and their performance.
In my experience, whether code like summing integers takes 6 instructions or 600 doesn't influence the speed of 95% of code and 95% of systems.
If you could magically port real Python or ruby or JavaScript code to pixie, but keep the same algorithms and architecture, I doubt it would change much.
Slowness is more influenced by things like data structure layout, allocation patterns, serial vs parallel I/O, context switches, and just plain not understanding your code once it reaches a certain size.
There seems to be a fetish for JIT compilation in a lot of new language designs and it confuses me. Julia is probably the language making the best and most appropriate use of it. It actually has good data structures and types which complement it.
What do you mean in your experience? Summing integers can take two orders of magnitude more time and it doesn't matter? I think it goes without saying in discussions of performance, we narrow our focus to performance sensitive applications and not CRUD database web frontends. I can tell you in the vast majority of performance sensitive numerical applications, that would matter. It would have to be massively disk bound for it to be overshadowed completely by I/O. Just saying. :)
I don't follow. Why aren't database web front ends performance sensitive? They seem like one of the slowest things out there, and some of the most widely used. My comments here on Steve Souders' realization is basically what I'm talking about: https://news.ycombinator.com/item?id=13346635
If you want to make systems fast, you work on the bottleneck. I'm saying that people think too often that summing integers is the bottleneck, when it plainly isn't.
Although I have worked in the domain of numerical applications, thus my nod to Julia. Anybody who works in that domain isn't going to be using something like Pixie; it's too impoverished in terms of types and data representation.