Ted Nelson on What Modern Programmers Can Learn from the Past (2018) (ieee.org)
110 points by iloverss on Aug 19, 2022 | 89 comments



Little-known fact: Ted Nelson worked at Datapoint in the early 1980s, for a couple of years. How many people have even heard of Datapoint anymore? They were hugely successful in the 1970s with their fancy terminals, arguably had the first personal computer, and viewed Ted as a prestige hire. It didn't work out well, and they faded into obscurity.

He's mentioned in the third person in my book https://www.albertcory.io/the-big-bucks, but he doesn't appear as a character.


Fun fact: The Intel 8008 was intended to be a single-chip CPU for the Datapoint 2200 terminal. It was delayed, so it ultimately wasn't used in the Datapoint 2200, but Intel had an agreement in place to sell the chip to other buyers.

So that means every x86-64 PC sold has an ISA with a compatibility lineage that goes straight back to Datapoint.


More fun facts from "Programming the 80386", Crawford and Gelsinger (chief architect and designer of the 386, respectively), 1987, pp. 1-2:

"The 4004 was quickly enhanced to the 8008. These devices, very trivial by today's standards, were novel but hardly taken seriously as computers of any worth...In 1978 the third generation was introduced--the 8086. This marked the beginning of microprocessors as 'real' computers...The 8088, the little brother of the 8086, was used in the IBM personal Computer...and this launched the personal computer revolution."

I wish someone else would have read that for me.


I think that's a pretty Intel-centric view of things. Well before 1978 there were the 6502 (1975, MOS Technology) and the 6800 (1974, Motorola). Those were very much 'real' computers compared to the 8008.


Yes, I was surprised to see marketing, even criticism of their own previous products, in a programming guide. Once past the "history lesson" the book is actually very good--with concrete examples on building an operating system and a coherent explanation of IEEE 754 floating point format. Hardware floating point for x86 used to be offered as a separate coprocessor chip--the x87 series--otherwise it had to be emulated in software.

I would also argue that the 8008 was a real computer. The ENIAC was the first "real" computer (in the sense of general-purpose, digital), with its ~0.005 MIPS (integer add/sub) vs. the 8008 at ~0.05 MIPS, and it was a bit larger and more difficult to program. I wrote programs on a 6502-based Apple years before the IBM PC appeared. I guess that was a phantom computer.


There's a book on Datapoint that I read, as research for my book. I read these things so you don't have to :)

I'm also in touch with Gordon Peterson, the chief architect of ARCNet, which was a pretty decent LAN, way ahead of everyone else.


A glut of very affordable, yet still relevant, books from days of yore--many scanned, some still retained as physical treasures--marks how young the field of computer science is.

I read and re-read parts of "Principles of Program Design" (1975) and type in little C programs from King's "C Programming: A Modern Approach" (1996).

The new wonders may come from familiarity with bare metal, algorithms, and the desire to build the "virtual machine" to solve each problem.

It's a slow process. I draw out the boxes and stare blearily at COBOL examples. M.A. Jackson is the invisible guide, silent but for his proclamations.

Each piece of (non-UML) schematic logic is accompanied by a single anxious thought:

Am I doing this right?


Two Ted Nelson videos on YouTube, without IEEE's player:

https://www.youtube.com/watch?v=edZgkNoLdAM

https://www.youtube.com/watch?v=i67rQdHuO-8

80 minutes of Ted Nelson at the 2021 Vintage Computer Festival East:

https://www.youtube.com/watch?v=7vb0FwsMqn0


Wish Lex Fridman would interview Ted!


I've thought about this a few times as well. This has to happen!


Instead of Ted Nelson telling the world what everyone can learn from him, I would like to hear him talk about what he learned from the real world.

Xanadu is probably the biggest vaporware in the history of computing.

Maybe some lessons learned about how he had all these “brilliant” ideas but never actually made something that actual people actually use.

But that would require humility, introspection, and self-awareness so it is not likely to happen.


...(Nelson) never actually made something that actual people actually use.

Actually, he did. He wrote and self-published the big book Computer Lib/Dream Machines, which was ubiquitous and influential in the 1970s. More about that book and its influence here:

https://news.ycombinator.com/item?id=22176769


History is full of people who considered themselves geniuses. Some of them really were.

Nelson got his ideas out there and other people used them in various ways, often without giving him credit. Bill Atkinson was one (https://www.youtube.com/watch?v=FquNpWdf9vg) and he wasn't the first.

At 1:35 in that video, Gary Kildall does give him credit, along with Doug Engelbart.


Whenever Ted Nelson shows up, these points are made one way or another. I once even heard Vint Cerf make them at a conference.

1. Xanadu being vaporware
2. His ego

What I really don't get is what Ted Nelson does to trigger this anger in people.

E.g., his book Computer Lib/Dream Machines inspired so many, and I find him super self-aware. His appearance in Werner Herzog's movie about the internet was (IMHO) so refreshing.

I love Ted's work, and it's a huge inspiration to me in my daily work. This video just adds to that.


On one side, the Xanadu implementations over the decades are a complete failure. On the other side, imho some of the ideas will survive and one day make it into successful software projects. Good ideas aren't destroyed by poor execution.


Transclusion is a very good idea from my point of view. I saw and used it in Ivar Jacobson's Objectory tool and eventually also implemented it in my CrossLine and other tools (see https://github.com/rochus-keller/CrossLine, https://github.com/rochus-keller/FlowLine2/, etc.).
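For anyone who hasn't run into the term: a minimal sketch of transclusion in C (hypothetical structures, not how CrossLine, Objectory, or Xanadu actually implement it). Content is stored once and included by reference in several documents, so one edit shows up everywhere it is transcluded.

    #include <stdio.h>

    /* A block of content stored exactly once. */
    typedef struct {
        const char *text;
    } Block;

    /* A document holds references to blocks, not copies of them. */
    typedef struct {
        const char *title;
        Block **blocks;
        int nblocks;
    } Document;

    static void print_doc(const Document *d)
    {
        printf("== %s ==\n", d->title);
        for (int i = 0; i < d->nblocks; i++)
            printf("%s\n", d->blocks[i]->text);
    }

    int main(void)
    {
        Block shared = { "Shared paragraph: stored once, shown in many places." };
        Block intro  = { "Intro text unique to document A." };

        Block *a_blocks[] = { &intro, &shared };
        Block *b_blocks[] = { &shared };
        Document a = { "Document A", a_blocks, 2 };
        Document b = { "Document B", b_blocks, 1 };

        print_doc(&a);
        print_doc(&b);

        /* One edit to the shared block is visible from both documents. */
        shared.text = "Edited once; both documents now show the new text.";
        print_doc(&a);
        print_doc(&b);
        return 0;
    }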


Agreed! I also use it frequently in my wiki (I use TiddlyWiki).


Then again, as Steve Jobs said:

"Ideas are worth nothing unless executed. They are just a multiplier. Execution is worth millions."


In terms of quickly making money, this is true.

But here I'd like to think about likely world-changing tech. As long as the (successful) execution comes one day, it doesn't matter whether it happened 20 years ago or happens 10 years from now.


> ideas are worth nothing unless executed.

Maybe if you only have a business focus. He was apparently not interested in fundamental research.


Even in research you have to "do" something with an idea, at least hang your hat on a specific prediction, or carry out the experiment.


If people hadn't had the ideas, or written down their thoughts about numbers and mathematics, where would science, physics, and electronics be?

We stand on the shoulders of giants


Yes and no. You won't reap any benefits if you fail to create something useful with an idea, but if you were the first person to get the idea out into the world and other people made it useful, you still created value by helping inspire them to pursue it. You just probably won't get any monetary gain FROM that.


At least in the business world, I think the value of getting ideas out is vastly overrated. Someone still has to choose, among a flood of ideas, which one is worth pursuing. And they may have had the same idea themselves but kept it to themselves.

I'm a bit adamant about this (and willing to eat the downvotes -- fair is fair). Every individual contributor (IC) has experienced credit for something they've worked on being given to a colleague or manager who comes in at the last minute and claims to have blurted out an idea at a meeting.

I've gotten plenty of credit for ideas -- 20+ patents. What I'm proud of is the part of a patent called "reduction to practice," where you figure out the details that prove an idea to be workable, which typically results in additional claims. I've informally adopted a standard that an "idea" has the same requirements as a patent, specifically a study of background information and an explanation of how something can be reduced to practice. I'm super generous with credit, especially towards my junior colleagues.

Ideas are a dime a dozen. And as a musician, I got over the difference between "doing" something, and making money, long ago.


There is quite a difference between fundamental and applied research.


I've done both. Even in fundamental research, you have to "do" something in order to make an idea worth considering. Halley didn't just jot down that a comet might appear in 1758 [0]. He didn't just suggest that a comet could be used to test Newtonian mechanics. He had to work his computation from start to finish, find and fix the bugs, show convincingly how his result actually followed from the theory, etc. And computation was hard back then. Maybe 1758 wasn't the first answer he got.

Likewise other theoretical and fundamental work. Note that this is aside from the question of who gets "credit." Fortune tellers take credit for predictions. This is about what it takes to do good work under most circumstances.

[0] Wikipedia


Mathematics, philosophy, theology and science began as ideas. Even if you aren't the person to develop your ideas, they still have value to civilization.


I agree except for that last sentence. If you were to delete it the whole comment would be much better, IMO.


But we now have Roam Research, for example.


The subtitle says Ted was the inventor of hypertext, but in the interview he credits Doug Engelbart. You can hear him say it if you listen from 4:30 to 4:45.

"We thought computing would be artisanal. We did not imagine great monopolies."

Even though hindsight is 20/20, I'm still surprised that the power of network effects didn't occur to him (or whoever the "we" refers to).

"... artistic expression ..."

Reminds me a lot of when Jaron Lanier talks about how he envisioned VR.


Ted coined the term "hypertext"; he built a prototype Xanadu platform [1] which incorporated it. Engelbart created an editing system which used something like hypertext internally. Ted coined the term in 1963, along with "hypermedia" [2]; Engelbart published a paper in 1962 [3]; you can read it and decide for yourself whether he "invented" hypertext (by any name) in that paper.

[1] https://en.wikipedia.org/wiki/Project_Xanadu

[2] https://en.wikipedia.org/wiki/Hypertext

[3] https://dougengelbart.org/pubs/augment-3906-Framework.html

Note that [1] says 1960 for Xanadu and [2] says 1963 for the term hypertext. Go figure.

Edit: fixed typo


From:

> [1] https://en.wikipedia.org/wiki/Project_Xanadu

Wired magazine published an article called "The Curse of Xanadu", calling Project Xanadu "the longest-running vaporware story in the history of the computer industry".[3] The first attempt at implementation began in 1960, but it was not until 1998 that an incomplete implementation was released.

Ouch!


Under the international hypertext accords of 2010, anytime the Wired article is referenced, the rebuttal must be referenced also:

https://web.archive.org/web/20001003011753/http://xanadu.com...


What’s this letter he keeps referring to?


The Letter to the Editor he sent to Wired.

Actually, there have been several kicks at the can for rebuttals. The most recent ones are videos:

Shorter video: https://www.youtube.com/watch?v=-_-5cGEU9S0

Feature length: https://www.youtube.com/watch?v=ASgjSxNdDqI


> Even though hindsight is 20/20, I'm still surprised that the power of network effects didn't occur to him (or whoever the "we" refers to).

To unpack that: the "power" is great efficiencies of scale, but the "weakness" is the lack of adaptability and resilience inherent to monocultures.

The jury is still out on which way the balance swings, especially because the former manifests in "typical events" while the latter manifests in "tail events". After the pandemic's disruption of logistical supply chains, and the supply-chain vulnerabilities we've seen in software, I think folks seem to (or at least ought to) be far more apprehensive about blindly submitting to network effects.

In summary, it's surprising to me that even in hindsight people seem so enamored with network effects that they're blind to the cons.


Right. No one in 1963 was imagining billions of people with a computer. Watch "Mad Men" if you want to time-travel.


Back then the future was difficult to discern. I entered grad school in 1974 and at that time having a 300 baud modem (30 characters/second) at home was a luxury that only one of my friends could afford. I didn't have a personal computer.

I was creative back then, and I remember envisioning VR very much like today's VR. After studying visual displays and how they were projected on screens for pilots in flight simulators, I presented an idea to my computer graphics class: we could put small displays on a helmet, one for each eye, and put motion and orientation sensors on the helmet to simulate what a pilot would see in 3D as he looked around the virtual cockpit. I said we could even develop multiuser experiences in a big gymnasium, with a virtual landscape shown in each participant's helmet displays. My professor dismissed the ideas as fanciful, and I moved on to other ideas.

It was easy to imagine a future where lots of people owned personal computers and had modems that could be used to log into servers over dial-up lines; I remember thinking that player-versus-player games would become a thing because the number of pairs of users online would grow much faster than the number of users [N(N-1) vs. N]. But here, I was imagining a single multiuser server, not an internet.

I was aware of the ARPANET and had even tried out the PLATO timesharing system. I had a copy of Ted Nelson's interesting book of rambling ideas titled Dream Machines, where he talked about hypertext, Xanadu, etc. I was also a teaching assistant for a telecommunications course where we talked about the possibility of fiber to the home replacing analog phone lines and providing bandwidth that could replace video rentals for entertainment.

All of these hints foreshadowed the development of the internet, but I didn't see it and neither did any of my colleagues. The scale and importance of the internet just eluded me.

As late as 1989, when I started a software company, it wasn't clear that the company should have its own domain name!


I have a hypothesis about this phenomenon of software-house monopolies: wherever consistency of human interaction is beneficial, you get large concentrations of development. Human-computer interaction is difficult, and consistency of operation and communication of skills create network effects as a function of how intrinsic human values propagate.

Conceal the interface from direct user manipulation, or remove the need for interaction altogether, and individuals or very small groups of artisans can be effective at extreme scale, as is the case with much internet infrastructure and the Linux kernel.

In the example of concentrated development investment I'm thinking of, Microsoft Office, both kinds of network effects have been used to control and conflate different desires of monopoly: charades of open document formats and email protocol implementations (in this case the documentation is very good; practice, however..) dancing between the lines of simpler competition and initiatives.

Although the argument I'm effectively making will sound familiar to anyone who thinks of the turn-of-the-century software economy, I'm trying to replace economic theory with a suggestion: it's important to think about user applications for critical new infrastructure and protocols, developed simultaneously and at comparable velocity alongside the supporting infrastructure, instead of declaring standards and protocols and leaving them to capital markets to realize as products. Imagine if TBL had written a book-recommendation and bartering spec as a backup for the paperless proposition of hypertext.

Edit: fixed autocomplete error, removed "even" from last sentence.


Academics rarely understand industry; it's just not what they're typically interested in. I was taught a lot of computer science theory in college, but at no point was I taught how commercial build systems work (unless you count make), and forget anything regarding scalability or manufacturing. It was actually quite frustrating for me when senior year rolled around and I still didn't fully understand how any of the theory I'd been shoving into my brain was actually used to create, say, a car or an aircraft or even a calculator. At best I got a cursory series of lectures on Scrum and assignments that required the use of Eclipse, and that was it. Even on the hardware-oriented EE side of the curriculum it was just working with basic prototype boards, nothing about the business side or mass production. It's not that my education was useless (far from it), but I wasn't taught how to "build" anything beyond toy projects until my first job.

Granted, the people who had the knowledge to teach said methods were probably too busy making comparative bank in private industry, or perhaps were teaching business courses. Most academics are researchers, and researchers tend to be explorers, not builders.


If this interests you I recommend "What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry" by John Markoff and "All Watched Over By Machines of Loving Grace" by the BBC and Adam Curtis

https://en.wikipedia.org/wiki/What_the_Dormouse_Said

https://en.wikipedia.org/wiki/All_Watched_Over_by_Machines_o...


Howard Rheingold's book "Tools For Thought" profiles Ted (along with Engelbart, Licklider, and other familiar names). And you can read it for free nowadays from the author's own site. <http://www.rheingold.com/texts/tft/index.html#index>


Thanks, this is fantastic. I see this and raise you "Seattle Mystic Alfred M. Hubbard: Inventor, Bootlegger and Psychedelic Pioneer" by Brad Holden

https://www.goodreads.com/en/book/show/57536256-seattle-myst...

Alfred M. Hubbard had an influence on Silicon Valley that is not well known.

From my copy of the book, page 73: "Meanwhile, as Hubbard crisscrossed the globe, he began hearing murmurs about a group of engineers and scientists in San Francisco who were expressing a strong interest in experiencing one of his guided trips [LSD]. They were located in a stretch of land that would eventually become known as Silicon Valley, and being a fellow inventor himself, the good Captain had every intention of paying them a visit"


> How the Sixties Counterculture Shaped the Personal Computer Industry

.. and how it didn't. Confessing I haven't actually read the book, but I've certainly read reviews & other articles about it.

Hippies did not build the Internet. Or the personal computer industry. They were around and some jumped on it early, but it's clickbait to say they built it. The TV series "Halt and Catch Fire" is actually a better picture of that era (Season One, at least).


There's a difference between the words 'shaped' and 'built'. I'm sure Markoff chose his words carefully.

'shaped': determine the nature of; have a great influence on

"his childhood was shaped by a loving relationship with his elder brother"

'built': past and past participle of build.

'build': construct (something) by putting parts or material together

"the ironworks were built in 1736"

Or, in the context of the topic: in the early '60s, Silicon Valley engineers began experimenting with LSD, Eastern philosophy, and science fiction, and that influence shaped their views of what was possible with microchip tech and information theory.

I've heard this rejoinder before, that 'hippies' didn't build the internet, as if anyone who knew anything had said that. It seems mostly driven by disdain for boomer hippies (who were probably the parents of those saying it).


So you haven't read it either? Let's argue about a book neither of us have read.

Hippies didn't "shape" the Internet, either. Rather, there was a general idea of openness which almost everyone shared, to some degree -- even the faculty and grad students and researchers who actually did build it. The telephone company people and the IBM'ers did not share it, and that's why their vision did not win out.


Of course I read it. What makes you think I have it. I'm pretty sure the problem we are having is that we have different definitions of 'hippie'


> What makes you think I have it

I assume that's "haven't" ?

For sure. "Hippie" to me is not broadly defined. Hippies rejected society and regular jobs (and education). There were hippie posers and used-to-be's, who might have been the users on The Well.


TIL about Douglas Engelbart, such a visionary that even Occam's razor says he ought to be a time traveller.


After watching The Mother of All Demos at <https://www.dougengelbart.org/firsts/dougs-1968-demo.html>, I suggest you also read this:

https://books.google.com/books?id=6f8VqnZaPQwC&pg=PA97

It’s an anonymized account (from 1982) of what happened to Douglas Engelbart (anonymized as “Vision” Stanley).


I like learning from the past. When they were letting people download papers for free due to COVID, I got enough to keep me busy for a long time -- I just need to find the time to do something with them, because working outside tech means I have very little time for such things. I read some here and there, bother industry experts here, fun times.

There are a lot of good ideas buried in the past that never took off because people adopted some other way, due to that being what the textbooks teach or whatever. Most of what I look at isn't applicable to anything I do, unless I were to find a pile of money so I could write a compiler, but I find it interesting and like messing with language grammars and compiler theory. Someday I might even get past the AST stage with my playing around.



I'm sad that we no longer have any Ted Nelsons in the world of tech. There's simply too much money now for smart people to waste their time on "interesting" things.


First off, we still have THE Ted Nelson! He's old but he ain't dead yet.

(He might even read this thread!?)

- - - -

There are some modern day Ted Nelsons:

Sean McDirmid comes to mind. (He's on HN. Dude, didn't you used to have a blog?)

https://www.youtube.com/user/seanmcdirmid

https://confengine.com/user/sean-mcdirmid

Or Jonathan Edwards https://www.subtext-lang.org/AboutMe.htm

There are the folks behind Enso/Luna https://enso.org/ among others. Jef Raskin's son Aza is still doing things https://en.wikipedia.org/wiki/Aza_Raskin


Correct. Engelbart IS dead, RIP.

If you get a chance to see Nelson, do it. You never know if you'll get another chance.


There's a whole subculture of super cool developers who take inspiration from the whole Ted Nelson, Alan Kay/PARC, Douglas Engelbart, Bret Victor branch of software history; i.e., the visionaries.

Two are listed below, but there are many more. Nicky Case is one of the most well known, and Bret Victor himself is very well known and very active. There are also a lot of lesser-known people in the space:

- Patrick Dubroy: wrote Ohm, makes cool stuff

- @azlenelza on Twitter

- @codexeditor on Twitter: He's building by far the most professional, serious and very very inventive digitised text... system? I have no idea what to call it. It's far far beyond what Roam, Obsidian etc can do. I think Ted would love to have a conversation with this guy. It goes deep in all directions.

- Andy Matuschak is worth mentioning. Goes deep into learning and memory research, and building tools around this.

- Tudor Girba (@girba on Twitter) makes moldable software environments. TL;DR: tools to make custom IDEs, in seconds to minutes, that fit your current task perfectly. Very interesting stuff built on Pharo Smalltalk.

- Omar (@rsnous on Twitter) posts thought experiments about weird interactions with computers multiple times per week. Sometimes cool ideas come out of it.

Of course not all of these people are or will be visionaries of the same level as Ted, but these are all people working to experiment with the status quo, or step out of it completely.


Fitting with your list, I would add the Croquet team, who are continuing Engelbart's legacy of collaborative multi-user software. Some of them are on twitter as well.


Oh, we definitely still have Ted Nelsons; what we don't have is a way to find them. Right now people are lost deep in private social media groups, and the surface-level information is all war, disease, and drugs.


Not everyone is working at whichever shop offers the highest salary. There are many smart people who are willing to take a pay cut to work on interesting problems. And some smart people are already financially secure, so they can afford a pay cut.


If I may nominate myself, take a look at my work, why not?


After a cursory look, I found it interesting. Homoiconicity of assembly language isn't something I had thought of--although I have been working on a higher-level, S-expression-based assembly that is lowered to more concrete assembly for a Scheme86 microcoded emulation.

I'm no genius for sure, but even when I do sometimes find myself ahead of the curve, I fall into Hamlet's dilemma:

"...Thus conscience does make cowards of us all, and thus the native hue of resolution is sicklied o'er with the pale cast of Thought, and enterprises of great pith and moment with this regard their currents turn awry and lose the name of action." Wm. Shakespeare, Hamlet, Act III, Scene I

I wish I could explain why the best people and ideas are ignored, or bullied away, but I am at a loss. Best wishes.


I appreciate the praise. The base of much of my work involves removing what we currently consider to be text from computing, on both ends. An assembler with its language is inferior to my machine code development tool with an interactive interface filled with redundant information; on the other end, I don't believe we should store text as characters, and I'm currently making progress on that.

I found no way to contact the author of this, as an example, but it's similar, except I'm making some progress: https://news.ycombinator.com/item?id=32495133

I've largely given up on promoting my novel work here, as it never makes the front page, and none of my work ever gets any interesting comments whenever it does.


> I don't believe we should store text as characters

Could you briefly elaborate?

[edit]

I read the article on your HN link. I didn't quite "get it". Program text gets in the way, let's replace it with [?]. Dr. Strandh at U. of Bordeaux, et al., have argued that we shouldn't store programs as text files in an operating system--is that what you are advocating? An image based system, for example?

> none of my work ever gets any interesting comments whenever it does

Sorry for my part in that, most people don't find me interesting. :) My work doesn't get much attention either, even if someone else happens upon it:

https://www.reddit.com/r/Common_Lisp/comments/ezruuc/tovero_...

----

I don't know why I keep quoting Shakespeare this week, I mostly slept through the Shakespeare module in high school English, but I guess some filtered through and maybe WRT promotion on the Internet:

"Had I so lavish of my presence been, so common-hackneyed in the eyes of men, so stale and cheap to vulgar company, opinion, that did help me to the crown, had still kept loyal to possession and left me in reputeless banishment, a fellow of no mark nor likelihood." Henry IV, Part 1, Act 3, Scene 2

It seems to me like for the most part only the best connected, ruthless, garrulous, overly vocal people get their ideas heard. I've tried business development orgs (including personal connections), directly contacting relevant researchers or sources of funding, on and on. Mostly, the advice has been to throw the spaghetti on the wall (put up a Github project, e.g.) and see what sticks (yell on Twitter, try to get some users). Even one of the founders of this site had difficulty marketing his idea for a new variant of Lisp, you know.


Yes, I can elaborate: http://verisimilitudes.net/2018-06-06

This is my current progress: http://verisimilitudes.net/2022-06-13

However, that was somewhat a waste of time, so I also wrote this reflection on it: http://verisimilitudes.net/2022-06-17

> Sorry for my part in that, most people don't find me interesting.

I didn't intend the remark as an insult.

It's interesting to mention Shakespeare so much. I'm reading The Merchant of Venice lately.

> It seems to me like for the most part only the best connected, ruthless, garrulous, overly vocal people get their ideas heard.

Yes.

> I've tried business development orgs (including personal connections), directly contacting relevant researchers or sources of funding, on and on.

I tried contacting VPRI and similar organizations, but to silence.


The Internet is an easy target for blame, but in fact in the past month I found a number of people whose thinking (at least on some specific topic) was near to mine through Reddit discussions (and now you, it seems):

https://www.github.com/kaveh808

https://gitlab.com/flatwhatson/guile-prescheme

If you are interested in the nature of machine code and assembly language I would recommend at least looking at Scheme86:

https://dspace.mit.edu/handle/1721.1/6042

It's like a Scheme interpreter running on hardware, and the latest successor to Steele/Sussman's Scheme-on-a-chip--I'm working on microcoding it with my inferior S-assembly. :) I didn't think you were being insulting--my last refuge in an increasingly humorless world appears to be self-deprecating humor.

Have you reached out to John Cowan, who is working on the R7RS Large Scheme standard, and is interested in topics like auxiliary human language as well as computer language and their representation? I'm not serious enough, I'm afraid, for the Scheme community (see above)--but they might take you more seriously:

https://en.wikipedia.org/wiki/John_W._Cowan

Another Scheme person, Jonathan Rees, was (or still may be) at a forward-thinking institute called "Ronin Institute"--I'm not sure what their process is for onboarding scholars but they might be interested in your work:

https://ronininstitute.org

Elmer Hanks wrote a forward thinking book many years ago called "Enterprises of Great Pith and Moment: A Proposal for a Universal Human Language" (my Shakespeare quote), about his machine readable, sign-language friendly, auxiliary language, and is worth a look in your domain of study.

I have been meaning to re-install Whitaker's Words which I used frequently in my own study of Latin, but lost when I upgraded my OS. You might have heard of Ido, an auxiliary language designed by Louis Couturat, a French logician, and the successor to Esperanto. It's almost completely regular, and I thought it might be a start for a more human-language neutral Scheme implementation (it is a Eurocentric language, so not completely neutral, unfortunately). My middle-school English teacher in 1981 pointed at the Esperanto booth in the language arts faire we took a field trip to and said, "I don't know why that booth is always so disappointingly unattended." I guess "ain't much changed", right?

I recommend "Asimov's Guide to Shakespeare" if you haven't read it:

https://en.wikipedia.org/wiki/Asimov%27s_Guide_to_Shakespear...

and maybe we should both just continue to choose "to be" rather than "not to be".

[edit]

I was talking this over with my wife, and after reading your links, I still don't completely understand the overall advantage of coding word stems rather than characters. It did remind me of the first computer I owned, a Sinclair ZX81 that I built from a kit. Its BASIC keywords--FOR, GOSUB, RETURN--were each represented as a byte code, and I think you could enter each one with a special key modifier in one key-chord. Is this like what you are saying?


I'm aware of the Scheme chips. I chose machine language because it doesn't need many names for code, and thus was suitable as a starting point for my work on textless programming.

I've not reached out to John Cowan but think I might, despite my dislike of Unicode. I'll also look into this Ronin Institute. I'll keep the book in mind. I appreciate the help.

As I explain in the last linked article, a lot of these artificial languages retard language work, rather than help it. I expect to target real languages with most of my time now.

Think of Elision like MIDI, but for human language. Text is stored uniformly as indices into a dictionary, rather than as a sequence of characters with interspersed control codes.
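To make the analogy concrete, here is a minimal sketch in C (hypothetical layout, not the actual Elision encoding): the stored document is a sequence of indices into a word dictionary rather than a stream of characters with interspersed control codes.

    #include <stdio.h>

    /* Hypothetical dictionary-indexed text storage. */
    static const char *dictionary[] = {
        "the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"
    };

    /* The stored "document": word indices instead of characters. */
    static const unsigned short text[] = { 0, 1, 2, 3, 4, 5, 0, 6, 7 };

    int main(void)
    {
        size_t n = sizeof text / sizeof text[0];
        for (size_t i = 0; i < n; i++)
            printf("%s%s", dictionary[text[i]], i + 1 < n ? " " : "\n");
        return 0;
    }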


Thank you for a civil, intelligent conversation.

I had never thought about the "asymmetry" of putting control codes with the character codes in ASCII--it does seem illogical now that you point it out. :) Languages like Ido aren't completely "artificial", and I certainly think Couturat didn't fail in some sense--my irrational issue with Ido, despite its numerous strengths, is that words with the same part of speech end in the same letter(s), making it monotonous. But as my wife pointed out, that regularity makes it easier for computers to parse.


> my irrational issue with Ido, despite its numerous strengths, is that words with the same part of speech end in the same letter(s), making it monotonous. But as my wife pointed out, that regularity makes it easier for computers to parse.

For people too. And isn't that exactly how many natural languages work? Verbs -- someone doeS something -- all end with an 's'[1] in English: He/she/it writes, reads, encodes, builds, blathers, creates, excels, triumphs, loses, sucks...

___

[1]: Admittedly only in the present tense third-person singular, but that's something. Arguably, more of that would make the language easier to parse and learn.


It's no issue.

> I had never thought about the "asymmetry" of putting control codes with the character codes in ASCII--it does seem illogical now that you point it out.

Yes, this is why I call them insidious.

The issue with constructed languages is that they simply don't have the beauty of something such as Latin, and never can, due to their very nature.

Anyway, feel free to send me an e-mail at some point, if we're to continue our discussion at some later time.


I was listening to a podcast a few years back, I think it was Fortran-related, and in any case the guys talking were having fun putting on their "old farts complaining about kids and their skateboards" hats.

At some point the conversation went a bit like this:

- Do you know what else kids these days don't do?

- What?

- They don't declare all their variables at the top.

- What?

- Yep

- They don't?

- Nope

- W ... where do they declare them then?

- Right before they use them.

- No!

- I shit you not.

- Wtf

I remember laughing hard, and realising I'm also becoming an old fart who complains about kids and their skateboards and shudders at the thought of variable declarations sprinkled throughout code. :p


Being very much a kid (at heart at least), I'll hazard the question: what's the case against declaring variables as you need them? Also, at the top of which scope are you supposed to declare variables: function scope, class scope, or somewhere else?


I won't make the case against, but I will make the case for declaring them at the top. Think of a recipe letting you know that you need eggs and flour before telling you how to mix them. It lets you get all of your ingredients on the counter before you start reading the instructions. Having the variables up front lets you prepare your mind for the state before reasoning about the code.


I'll make a case against. Declaring all your variables up front means that _all_ variables can be altered from _every_ place in the function, making it a lot harder to analyze, find data dependencies, etc.

Minimizing the scope of variables decreases the amount of scanning/grokking you have to do to find out how they are related. (Making them constant helps a lot as well: you know they won't change thereafter.)


That sounds good in theory, but in practice whenever I run into code that declares all of the variables up front, I can't tell what those variables are for until I read the body of the function anyway. And often those variables end up being reused, which makes it even more confusing.


> Being very much a kid (at heart at least), I'll wager the question: what's the case against declaring variables as you need them?

There's not really one, it's just a joke about how some languages used to work.

> Also, at the top of which scope are you supposed to declare variables, function scope, class scope, ... somewhere else?

In C, you used to have to declare all local variables at the top of the function rather than throughout the body.
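A minimal sketch of the difference, in case it helps (hypothetical example; the second form needs C99 or later, while the first compiles under older compilers too):

    #include <stdio.h>

    /* Old style: every local declared at the top of the function. */
    static int sum_old(const int *v, int n)
    {
        int i;
        int total;

        total = 0;
        for (i = 0; i < n; i++)
            total += v[i];
        return total;
    }

    /* C99 and later: declare variables where they are first needed. */
    static int sum_new(const int *v, int n)
    {
        int total = 0;
        for (int i = 0; i < n; i++)   /* loop counter scoped to the loop */
            total += v[i];
        return total;
    }

    int main(void)
    {
        int v[] = { 1, 2, 3, 4 };
        printf("%d %d\n", sum_old(v, 4), sum_new(v, 4));
        return 0;
    }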


What? You want your compiler to backtrack and recalculate all the stack offsets? Just say how many bytes a call needs up front! Kids today, acting like computer time is free.


> In C, you used to have to declare all local variables at the top of the function rather than throughout the body.

This is true, but support for block-scoped (as opposed to function-scoped) variable declarations was added to C a long time ago. Every version of ANSI C supports them, and this Stack Exchange answer, https://softwareengineering.stackexchange.com/a/300274/19196, suggests they have been supported since at least K&R C in 1978.


Hmm, interesting! I might be remembering wrong, but within the last decade I remember being told by a coworker that it was required for proper Windows support; maybe that was just an old version of Visual Studio from the XP era or something that no longer needs to be supported.


And C99 allows you to declare variables anywhere in the function (much like C++).


I remember that the reason my coworker said they couldn't declare local variables anywhere in functions in their Windows code was that they weren't supporting C99, but maybe that was due to some external constraint rather than a Windows-specific one (like some customer needing them to support pre-C99 code or something).


Even C89 supports block-level variable declarations. I think many C programmers are unaware of this, though, and mistakenly think that declarations are function-level only.


Not an answer, but interesting nonetheless: in the nand2tetris course (where you build a computer from logic gates/flip-flops up to the OS and compiler), the high-level language you implement has this as a requirement, mainly to make the compiler easier to write.


You declare variables at the top of the function so the compiler can calculate the space reserved for those variables and emit `sub esp, N`.
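A rough sketch of what that buys a simple one-pass compiler (hypothetical function; the exact prologue is compiler- and ABI-dependent): with every local declared up front, the frame size is known before any of the body is emitted.

    /* Hypothetical example: three locals declared at the top mean a
     * one-pass compiler can reserve the whole frame immediately,
     * emitting something roughly like:
     *
     *     push ebp
     *     mov  ebp, esp
     *     sub  esp, 12      ; space for x, y, z
     */
    int combine(int a, int b, int c)
    {
        int x;
        int y;
        int z;

        x = a + b;
        y = x + c;
        z = y * 2;
        return z;
    }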


[flagged]


Really not nice of you to post this without linking Ted’s response.

To anyone interested, here it is: http://web.archive.org/web/20001003011753/http://xanadu.com....


Wow, that was a full-on riposte.


The quotes you chose seem to suggest you were trying to insinuate Nelson was grooming those kids. Maybe that's what the author was trying to insinuate as well. Facts supporting this insinuation are, unfortunately, absent.


Way too much of a stretch, and not warranted that I know of.


Fortunately.


I was a kid at some WELL parties - disingenuous reporters, Hollywood opportunists and Ivy Elites are not unknown to the artists, troopers and perverts that actually did lots of interesting things, with society dog-piling in.




