Yeah, I was (metaphorically) waving my hand in the air and yelling "Factor! Factor!" for the first half of the essay, but when I got to the end I wasn't so sure. Factor solves a lot of the problems the author describes - it has an extensive standard library, and garbage collection, and all kinds of other useful things - but I think the core complaint about writing Forth code still stands: You still have a data stack, stack-shuffling words like 'dup' and 'over' and 'rot' still make for ugly, hard-to-read code, and re-thinking your expression-graph to be expressible without such shuffling is still very hard.

I still have a local checkout of Factor in my home directory, and I really would like to get around to playing with it some more, someday - but (at least to begin with) playing with Factor sometimes feels very much like hard work.




I honestly don't think that, at the end of the day, Factor suffers from the same problems enumerated here. I used to dally in Forth in the form of Mops and PowerMops, and my problems were basically the same as those enumerated in this essay. I've tried to get into Factor multiple times; this last time I succeeded, and I've been writing an increasingly large amount of code in my spare time that seems to flow okay. I might even start releasing some of it soon. I think the differences can be chalked up to a few major things:

Real locals. Factor's locals are not penalized; they perform the same as ordinary data-stack code. And while another comment correctly notes that Factor programmers prefer to avoid locals, I'd point out that locals are also used in many places in the standard library. Factor's preference is not the same as Forth's near-insistence.
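To be concrete, a word with locals looks roughly like this (a sketch typed from memory, listener-style; lerp is just an example name, and the USING: line may need adjusting):

    USING: locals math ;

    ! linear interpolation between a and b; the inputs get names, but
    ! the compiler turns this back into ordinary stack code
    :: lerp ( a b t -- x )
        b a - t * a + ;

That compile-time rewrite into plain stack code is why there's no performance penalty.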

Higher-level combinators via lambdas. Part of why they have that attitude is that Factor's higher-level combinators are very natural if you're coming from a functional language. A lot of its combinators are things like "run this pile of lambdas against this one object" (kind of a reverse map) or "append this extra operation or datum to all of these lambdas" (think currying). These are VERY different in practice from dup/swap/rot. I still have to pause and think a lot while I'm coding about how and which to use, but it's getting better. The main hurdle is simply not forgetting anything.
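A quick taste, in listener syntax (from memory, so treat it as a sketch rather than gospel):

    ! bi runs two quotations against the same object -- the "reverse map" idea
    5 [ 1 + ] [ 2 * ] bi    ! leaves 6 and 10 on the stack

    ! cleave does the same with a whole sequence of quotations
    5 { [ 1 + ] [ 2 * ] [ 3 - ] } cleave

    ! curry bakes a datum into a quotation before handing it off
    { 1 2 3 } 3 [ + ] curry map    ! => { 4 5 6 }

None of that touches dup/swap/rot, which is exactly the point.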

Rich data types. Being able to have a single element on the stack that is an array, or a class instance, or an expandable vector, or a hashtable, or what have you, GREATLY simplifies things compared to Forth, where I'd have to "just know" that the top two things on the stack entering my function were a pointer to an array and its length, or some such.
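For example (again a sketch; sum-of-squares is a made-up word, the rest is standard library as far as I recall):

    : sum-of-squares ( seq -- n ) [ sq ] map sum ;

    { 3 1 4 1 5 } sum-of-squares .         ! 52 -- the whole array is ONE stack item
    H{ { "a" 1 } { "b" 2 } } values sum .  ! 3  -- same deal for a hashtable

No address/length pairs, no "just knowing" what the caller left behind.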

Much better error reporting. With richer data types come much better error messages. You didn't just read random crap from memory because you read the stack effect in the wrong order; you tried to call + on a hashtable. Combine that with static stack-effect checking, and I find that debugging my Factor code is usually about logic, whereas at least half the time my Forth debugging was about getting the stack effects right.
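Concretely, the failure modes look something like this (paraphrasing the error behavior from memory):

    H{ } 1 +                 ! throws a "no method for + on a hashtable" kind of
                             ! error instead of silently reading garbage

    : oops ( x -- y ) + ;    ! rejected before it ever runs: + needs two inputs,
                             ! but the declared effect only supplies one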

----

I'm not saying Factor's perfect. It's not. I use more locals than the core team does in my code, and I'm not currently convinced that's wrong in any sense. But I also get a lot of mileage out of the higher-level combinators, to the point that I can write concatenative code without feeling, as I did in Forth, that I'm doing the processor's work for it. It feels a lot closer to a high-level functional language, where I'm just composing functions, not jiggling the stack.


I use more locals than the core team does in my code, and I'm not currently convinced that's wrong in any sense.

I think that the big "win" in Factor vs., say, CL is implicit argument passing.

from the article even:

In order to have really small definitions, you do need a stack, I guess - or some other implicit way of passing parameters around;

As far as I'm concerned, as long as locals don't get in the way of that, they're groovy.
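That's how it plays out in Factor, too; e.g. (a toy sketch, made-up word names):

    : scale ( c -- n ) 9 * 5 / ;
    : c>f   ( c -- f ) scale 32 + ;

    100 c>f .    ! 212 -- the value threads through both words without ever being named

and nothing stops you from reaching for locals in the handful of words where naming things genuinely helps.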


You can get the same implicit argument passing in J (where it's called "tacit programming") and in Haskell and ML (via currying, called "point-free style"). It's optional in those languages, though, which strikes me as a good idea: sometimes it's a vast improvement, but sometimes it makes the code hard to follow for no real gain.


What's the largest program you have written in Factor to date, and what does it do?


The largest thing I wrote was a clone of the parts of "curl" that I use, modified slightly so that I can just write JSON to describe what I want to post to a website, rather than a URL-encoded string. This included supporting DELETE, GET, POST, and PUT, and allowing downloading the response only, headers only, or both. I want to add a progress bar and an (admittedly totally unnecessary) GUI, then I'll throw it on GitHub and get feedback on my coding style. I'm enjoying seeing how far I can figure things out without outside help right now, though.

The second largest thing I've written, which I did in a much earlier version of Factor, was to clone enough of Fog Creek Copilot's Reflector that I could use it for testing/developing the Mac versions of the product. That was buggy, disgusting code with piles of stack manipulation, though, and is part of why I quit working in Factor at the time. I wish I still had the code; it'd be interesting to know whether my change of opinion is due to more experience with stack languages, or changes in the language. Probably some of both.


That's pretty cool.

I've gotten interested in stack machines again because of 'brick computing' (re-usable components with a tiny VM in them) and 'fabric computing' (networks of such components, either identical pieces or a variety of them).

It's quite amazing to see how little code you need to get a Forth-like environment going on a piece of hardware.


Factor has local variables (implemented as a library), so this problem doesn't really exist. However, Factor programmers try to avoid using locals as much as possible, so in reality the problem does exist. It's almost like some people have a religious aversion to locals. Locals do make factoring harder, and perhaps Factor code can be optimized better without locals, but I think that not having to write bunches of swaps, dups, rots, nips, and tucks (or some of the even more complex Factor stack-shuffling words -- you have no idea!) makes locals look very attractive in comparison.
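For a sense of the tradeoff, here's the same toy word both ways (sketched from memory; reldiff is just an example name):

    ! pure stack version: correct, but you have to simulate the stack to read it
    : reldiff ( a b -- x ) 2dup - rot rot + / ;

    ! with locals: wordier, but it says what it means
    :: reldiff-l ( a b -- x ) a b - a b + / ;

Both compute ( a - b ) / ( a + b ).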


Well, the original article discusses how Forth's local variables don't really help; I assume the same applies to Factor's local-variable implementation.

Factor's higher-level, functional stack-shuffling words (cleaves and splats and what not) are definitely easier to understand than the low-level Forth shuffling words, but I find myself thinking very hard about how values need to be ordered on the stack.


Forth is for people making little jewels that stand all by themselves, usually (as in the example) to get something (anything) high level running on new hardware.

That's a pretty unique niche. And for that niche it is virtually without competition.

But if you're going to write an accounting system in Forth you're a masochist; it's not meant for that.

I have no idea, really, what Factor's natural habitat is, but possibly over time it will come to supplant Forth in this role.

Forth, Factor, and Lisp (or Scheme) are amongst the languages that are easiest to bootstrap; that alone gives them a right to exist.



