
Hey, thanks everyone for your kind comments on the book. Kinda amazing that we're only a few years away from its 20th anniversary. -Peter


This book got me hooked on Lisp years ago; I can't thank you enough for writing it. It made both Common Lisp and Emacs Lisp accessible to me, and I would not have learned them if the book did not exist. I hope you consider writing another edition someday.


I remember last year there was some mention of a new edition coming out. Is that in the works or planned?


I plan to. Some day.


Dear God man, please do! I'm into the start of chapter 3 and have suddenly discovered that in fact lisp is not obscure or hard or anything like I was told years ago in high school.

I mean, my introduction to Lisp was something to do with concat, but it was so divorced from reality that I found an early book on C programming and semi-taught myself the language, then found a book called C Pointers and Dynamic Memory Management that literally changed my life.

I have a feeling if I'd read this book I'd have actually done something awesome by now. Frankly, this book might allow me to do something that I've been itching to do for months now.

I can't really express my appreciation enough... but I'll try right now: thank you! A million times over, thank you!

Edit: so Arc - is that like a web server based on Lisp?


Lisp is a fantastic language and well worth learning.

But these days I pick a language with strong static types every time if I have the choice. I know you can use them with Lisp too to some extent but it's not the same as a language built from the ground up around types.


Totally. I had been thinking about writing a Haskell book but it seems that space is maybe well covered lately.


Arc was Paul Graham's take on a modern Lisp. There are some cool essays on it, but I don't think the software itself ever moved much past alpha stage.


It's the language in which Hacker News is implemented and currently developed.


Doesn't HN use a proprietary fork of Arc?

I've never been entirely certain what the relationship is between Y Combinator and the Arc community, if any, but one experimental and partially black-boxed forum doesn't signal a mature language, just one adequate to a particular task.

What else have people done in Arc?


Arc has served as material for a lot of speculative blog posts and helped create buzz for the Lisp community.


Arc being a Lisp, I'm not exactly sure what would constitute a proprietary fork. Furthering my uncertainty is that it [Anarki] is built on Racket, and that kinda makes it hard to say one implementation is forked from another.

Anyway, I didn't intend to claim that Arc was a mature language.


Yeah, I just did a quick bit of reading. Ironically, I think I might be grokking Lisp because it appears LDAP filters are Lisp expressions, which I've used quite a few times at work...
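
To make the resemblance concrete: an RFC 4515 LDAP filter like (&(objectClass=person)(|(cn=John*)(sn=Smith))) is prefix notation all the way down, just like Lisp. Here's a rough Common Lisp analogue of that filter as a predicate (the plist layout and the name match-p are made up purely for illustration):

    ;; & and | in the LDAP filter play the role of AND and OR;
    ;; cn=John* is a prefix match, which SEARCH can approximate.
    (defun match-p (entry)
      (and (equal (getf entry :object-class) "person")
           (or (eql 0 (search "John" (getf entry :cn "")))
               (equal (getf entry :sn) "Smith"))))

    ;; (match-p '(:object-class "person" :cn "John Doe" :sn "Jones")) => T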


Hi fellow monkey lisper (or senpai should I say).

What topics are you into these days? Or maybe you were planning a sequel to PCL? In either case, really eager to see the resulting book(s).


Well, these days I'm working at Twitter on our abuse problem. So that keeps me pretty busy. I did write another book, Coders at Work, a book of interviews with notable programmers.

I kind of suspect that if I write another book (as I hope to) that I'll continue along the trajectory from pure technical (PCL) to semi-technical (Coders) to something that might reach an even more general audience.


Maybe a Clojure version with a similar approach?


How useful would that really be? PCL has Peter's take on Lisp, and The Joy of Clojure occupies a similar niche if you specifically want to learn Clojure.

At one point he mentioned writing a book to the effect of Statistics for Programmers, which sounds intriguing to me. AFAIK it hasn't happened yet, though.


> Statistics for Programmers

Why intriguing? The most general family of distributions can only be expressed with a programming language, so there are certainly many connections between both fields.
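
To make that claim concrete, here's a toy sketch (entirely made up; the 0.3 and the uniform range are arbitrary) of a distribution defined purely by its sampling program rather than by any standard named family:

    ;; Sample by flipping a biased coin: recurse with probability 0.3,
    ;; otherwise emit a uniform integer in [0,10). The program itself
    ;; is the distribution's definition.
    (defun sample ()
      (if (< (random 1.0) 0.3)
          (+ 1 (sample))
          (random 10)))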

Incidentally, that's one of my major complaints about Lisp these days. Lisp-Stat started dying in the early 2000s and completely faded away long ago. I read PCL when it was published a decade ago and I fondly remember how enthusiastic I became about Lisp.

I have used Clojure extensively, but the ecosystem for doing math and statistics is quite limited. The same applies to CL and Scheme. I wish I could use one Lisp for most tasks.


> > At one point he mentioned writing a book to the effect of Statistics for Programmers, which sounds intriguing to me.

> Why intriguing? The most general family of distributions can only be expressed with a programming language, so there are certainly many connections between both fields.

That seems to be an argument for why it is intriguing. Were you perhaps arguing that it wasn't surprising?


R is about as close as it gets (it is quite lispy underneath all the C-like syntax).


Julia and Wolfram as well.


I was thinking about something completely new. I know one can easily (more or less) follow along with PCL and do the solutions in Clojure. I just think that the Clojure space needs a book like this, with an emphasis on the practical. I like the practical section at the end of the PCL book, where the author builds useful little apps and guides you through the thought process. I know about The Joy of Clojure, a great book, but it doesn't contain enough toy projects. I think the goal of such a book should be to help you start thinking functionally.


I'm working on a book covering parallel and concurrent programming in Clojure, in the style of building abstractions before actually using them. On the topic of The Joy of Clojure: as far as I could tell, the book is not meant to be an introduction to the language; rather, it's meant to be read once you're already familiar. It's helped me internalize concepts I'd picked up through usage.


Sounds interesting. Where do you plan to release your book?


> I know one can easily (more or less) follow along PCL and do the solutions in Clojure.

Maybe for the first chapter or two, certainly not most of the book. They're very fundamentally different languages.


Maybe you could warm up by releasing a new, updated edition of Practical Common Lisp? ;)


Practical Uncommon Lisp


Or Practical NonLisp, or Nonpractical Common Lisp.


These imply either inconsistency or impracticality of Lisp, quite an offense ;)


As I think I mentioned in the interview, I've found that if I do this, by the time I'm done with my rewrite, I actually understand the original code too. So if I had to, I could throw away my new (better?) code and still benefit from a better understanding of the original code.


Funny, it's a thing I've stopped myself from doing, always wondering whether static analysis (call graphs and such) wouldn't be better.

<sidenote> There should be a site with a substantial piece of code to explore, where people would report the main (3-5) steps they took to roughly understand it, and how long it took.


I believe Mesa did. On the other hand, Stroustrup claims (in The Design and Evolution of C++, I think) that the Cedar system written in Mesa basically never used the ability to restart, and cites that as a reason why C++ doesn't have continuable exceptions. I've never been able to track down more information about that than his claim, though.
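
For anyone who hasn't seen continuable exceptions in action, Common Lisp's condition system (which PCL covers at length) is the usual reference point. A minimal sketch: the handler runs before the stack unwinds, so it can invoke a restart established lower down and let the interrupted computation continue:

    ;; RESTART-CASE establishes a restart around the risky call;
    ;; HANDLER-BIND's handler invokes it instead of unwinding.
    (defun parse-entry (line)
      (restart-case (parse-integer line)
        (use-value (v) v)))   ; continue by substituting a value

    (defun parse-all (lines)
      (handler-bind ((error (lambda (c)
                              (declare (ignore c))
                              (invoke-restart 'use-value 0))))
        (mapcar #'parse-entry lines)))

    ;; (parse-all '("1" "oops" "3")) => (1 0 3)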


You should check out the book The Myth of the Paperless Office. It reports research where they gave folks tasks like writing a summary of several magazine articles; one group did it all on a computer, the other did it on paper, and the researchers watched how people actually worked. There were a lot of subtle physical interactions in the paper group, such as moving different articles closer and farther away on the table, which the computer group tried to do analogues of and failed, because of the limitations of the medium. So it's not just the eyes and ears.


The position of the paper on the table seems like an entirely visual variable. What information about the documents or the content within them did the latter group ascertain through tactile senses?

I'd assume that the amount of information about the content that was acquired tactilely was none. Feeling the paper can only give you information about the paper medium itself, not the ideas encoded in written language on it - conceptual content is non-tangible, by definition.

So the mistake the computer group made was to try to model the human-paper interaction with software, when software isn't made out of paper. They should have attempted to figure out what human-content interaction was being proxied via re-positioning the paper, and modeled that in the software.


Of course in this case Amazon is in the middle, right? Suppose publishers demand to be paid by usage. Amazon can still charge their customers a flat fee--it just moves the risk of mis-setting the flat fee onto Amazon rather than the publishers.

Also, consider the incentives that a flat-fee system sets up for publishers and writers: if you get paid per title rather than per reader, you're motivated to flood the market with books, not to try to write a few really good books that lots of folks want to read.


Your second point is key: the fixed fee plans compensate poor writers as much as excellent ones. With that argument, I think publishers could sell a variable fee.

Maybe there's a middle ground--pay based on the number of peak _simultaneous_ borrows in a period. Just as a library has to pay for each physical copy it lends, is it too Luddite to suggest Amazon do something similar? Not pay for each borrow, but based on the peak simultaneous borrows?
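
If it helps make the proposal concrete, peak simultaneous borrows is cheap to compute from borrow intervals with a sweep over start/end events. A hypothetical sketch (the function name and the (start . end) pair representation are made up):

    ;; Turn each borrow into a +1 event at its start and a -1 event at
    ;; its end, sort events (ends before starts on ties), and track the
    ;; running maximum of concurrent borrows.
    (defun peak-simultaneous (borrows)
      (let ((events '()))
        (dolist (b borrows)
          (push (cons (car b) 1) events)
          (push (cons (cdr b) -1) events))
        (setf events (sort events
                           (lambda (a b)
                             (or (< (car a) (car b))
                                 (and (= (car a) (car b))
                                      (< (cdr a) (cdr b)))))))
        (let ((current 0) (peak 0))
          (dolist (e events peak)
            (incf current (cdr e))
            (setf peak (max peak current))))))

    ;; (peak-simultaneous '((0 . 5) (1 . 3) (4 . 6))) => 2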


By "source materials" presumably he means the raw transcripts of the interviews, etc. Which doesn't seem too likely but it's at least possible.


Yes, that's what I meant. I bet there are recordings too. It would be enormously selfish to keep those for himself, but I agree with you that it's unlikely. However, by asking for it, it seems to increase the likelihood that it will happen, however slightly. The question should be raised. Maybe after Isaacson dies he'll will them to a museum, library or university.


And under what obligation is Isaacson or his publisher supposed to give the raw transcripts of any interviews he did with anyone for the book?


I would expect he is under no legal obligation. But he ought to feel a professional obligation to history, so that others may be able to use the recordings/transcripts to draw their own conclusions. He is a biographer, after all; he never got to interview Einstein, he had to rely on the records of others that were preserved. That doesn't mean the transcripts must be released immediately, but I would expect that others will eventually have access.


How often does this actually happen? I'm not altogether that knowledgeable in this area, as I am neither a biographer nor a historian. But I don't know where the "professional obligation to history" comes from. He's making a commercial product, not doing a research paper for a public university.

And sure, it'd be nice to see his source material eventually. But I'd rather see the coherent package of his research in a narrative book form rather than his notes. I'll trust Isaacson to tell the story of Steve Jobs through his own interpretation, as that is all that a biography can be.


I'm not an expert, either, and I don't know of much precedent for interviews like this with an individual author being publicly released (say, on the web, rather than in a university archive), but private archival materials are used by historians and biographers all the time. I don't know whether Isaacson or his publisher own the rights, but I'm sure others will ask for access in the future.

I'm not sure about characterizing a biography as a "commercial product"; although it is the source of the author's livelihood, and the publisher is a for-profit entity, it is a complex thing as a product. The author depends for his livelihood on having access to materials from others, so it is not unreasonable to think he may reciprocate in some circumstances. In any case, in future years (after the initial sales peak of the bio), releasing the source documents may even cause a revival of interest that helps residual sales.


> How often does this actually happen?

It happens. Example: http://www.nytimes.com/2011/09/12/us/12jackie.html?pagewante...


I would assume they're actually under an obligation not to release them. Most people who sit down for these things would rather every word they've spoken not see the light of day.


Interestingly enough, the reports make it sound like both Jobs and his wife want everything to be told as is.


I would assume none. But I also assume it doesn't hurt to ask. :-)


Indeed, if you are interested in exploring interesting ways of publishing stuff with great design, get in touch with Adam; he's awesome.


Part of the problem may have been that I was coming more from a book-publishing perspective--the original idea was that CQ articles would be more like short books than long blog posts. And for books you are paid quarterly royalties. In fact, my payment scheme was pretty much identical to the Pragmatic Bookshelf model, except that a given piece might appear in multiple places--an issue of CQ, a collection of related articles, etc.

But maybe I didn't make that clear or maybe that made it even harder to find writers since it's more challenging to write long than short. (I did adjust back from my original idea of very long pieces but that didn't make enough difference.)


Good point. As I wrote to someone else on this thread: telling someone 8,000-20,000 words will scare off all but the few people who've already done that level of writing before.


I don't know what, if anything, is going on at Berkeley, but Hal Abelson talked quite a bit about the switch from SICP to a different intro course at MIT when I interviewed him for Code Quarterly http://www.codequarterly.com/2011/hal-abelson/ and it wasn't about abandoning Scheme or "not teaching core computer science". (Well, maybe a bit less core computer science in the intro course but that was basically so that CS people could have more time later for CS without all that annoying EE stuff. And vice versa.)


Yeah, well, Hal has to be diplomatic; he can't authoritatively state that the order came down from on high (perhaps above the department) that Scheme was to be totally purged from the base undergraduate curriculum, which to my knowledge was finished not long ago. That's what's behind his statement "And for random reasons we didn't [do the course in Scheme]."

That said, Hal himself does fully support the change from SICP/6.001 to 6.01 according to my sources.

And you're right that much less EE is required of pure CS majors (6-3s), but I'm told that very few current students do that major; most do the combined EECS one....

To finish, while I don't have time right now to finish reading/skimming your very interesting interview (you do realize you are by far the best interviewer in our field?), as far as I can tell a lot of the deep stuff taught in SICP/6.001 has also been purged from the base undergraduate curriculum. It kinda reminds me of when the Boy Scouts of the USA changed their system in the '70s so that you could become an Eagle Scout without ever having camped outdoors, started a fire, etc.

(My response to that was to drop out of the Boy Scouts. Here, as a scientist by inclination whose narrow interests in CS happen to line up fairly well with SICP, I don't exactly have a dog in this hunt. Amusingly, my other big interest in this general field is pure software engineering, the fruit of starting out on an IBM 1130 and realizing there HAD to be a better way to do things :-). Although I do seriously wonder about the de-emphasis of functional programming at some of the top 4 in the multi-core future, which is today.)

