
I believe the appropriate term for someone like this is "architecture astronaut"

Edit: Apparently this is where I heard the term: http://www.joelonsoftware.com/articles/fog0000000018.html

Obviously this was written from a software engineering perspective, but it seems at least marginally fitting, as the topic in question is literally computer architecture.




"Researcher" is also an appropriate term that also happens to have the virtue of not being pejorative.

Even though they're not popular in the business world, we do also need people who will change their goal when confronted with new ideas, rather than just the ones who filter ideas based on their service to a goal.

We even gave a Turing Award to someone like that a few years ago.


"Researcher" is appropriate for someone who actually produces academic output. It may not be immediately practical, but it consists of new ideas that have been formalized and tested sufficiently to pass peer review of other researchers.

Has the Mill architecture done that? Are there any actual papers about it? Anything beyond marketing fluff?

There are some ideas behind the Mill that sound interesting when you hear about them, but I haven't seen anything rigorous enough to even be reviewable.

And it doesn't look like that is the intent, either. Rather, from all appearances, it looks like they are trying to drum up interest for investment.


This may just be a semantic quibble, but I'd say that someone who produces academic output (usually in the form of original research) is an "academic", and the academy has its own peculiar rules for deciding which output has merit.

The Mill stuff is just an example of research taking place outside of the academy.


If you are not producing any academic output, any commercial output, or even any patents, and all you are producing is high-level handwavy talks, how do you distinguish that from a charlatan?

Ideas in a vacuum are not worth much. I have some random ideas for systems that I think would work better than current systems that people use all the time; stuff I would like to get to some time to develop into something real. However, without actually either rigorously formalizing those ideas and having them reviewed against the current literature, or producing a shipping implementation that demonstrates their feasibility empirically, they're pretty much worth the cost of the paper they are (not) written on.


(sent from a mobile; please pardon the typos)

Determination of charlatanism and crackpottery is also a function of past performance; no one considers Shinichi Mochizuki’s proof of the ABC Conjecture the product of a crackpot, even though people are still working to understand it nearly 3 years later.

According to his author bio at FT Press:

“Godard has done eleven compilers, four instruction set architectures, an operating system, an object database, and the odd application or two. In previous lives he has been founder, owner, or CEO of companies through several hundred employees, both technical and as diverse as a small chain of Country Music nightclubs and an importer of antique luxury automobiles. Despite having taught Computer Science at the graduate and post-doctorate levels, he has no degrees and has never taken a course in Computer Science.”[1]

Mill may turn out to be another Xanadu. On the other hand, the nice thing about computers is that you can use them to simulate anything, including a better computer (with respect to X), so it's not crazy to think that Mill may have something serious to offer.

Also, Godard sounds a lot like this guy: http://en.wikipedia.org/wiki/Robert_S._Barton, who designed these: http://en.wikipedia.org/wiki/Burroughs_large_systems.

[1] http://www.ftpress.com/authors/bio/de5f140d-e5bf-4e83-b020-1...


That is usually a strong crackpot indicator.


It might not work as well as they hope, but there is nothing crackpot about the Mill so far.

Or, to put it differently, there might be crackpottery in some of the things they haven't revealed yet. Who knows.

(Naturally, this says nothing about whether it will ever actually be built or whether any actual silicon will actually run fast -- there are so many other reasons besides the purely architectural reasons why it might not. Chips are hard.)

A concrete example of something the Mill does better (at least on paper and in a simulator): it does spills and reloads of values differently, in a way that doesn't go through the data caches. The same mechanism also handles local values in many cases. If they don't pollute the data caches, you can get by with smaller and/or higher-latency caches and still get the same (or higher) performance. The loads that are still there on the Mill are those that really have to be there (and are there in the compiler's intermediate representation, before register allocation).
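To make that concrete, here is a toy model of the general idea (my own sketch, not anything from the Mill talks; the names, sizes, and addresses are invented): on a conventional machine a spill is just a store, so it occupies data-cache lines, while a spill to a dedicated scratchpad generates no cache traffic at all.

    /* Toy model of the spill-path idea (my own sketch, not the Mill's
     * actual mechanism). Names, sizes and addresses are invented. */
    #include <stdio.h>
    #include <stdint.h>

    #define DCACHE_LINES  8     /* tiny direct-mapped data cache */
    #define SCRATCH_SLOTS 16    /* dedicated spill buffer        */

    static uint64_t dcache_tag[DCACHE_LINES]; /* address each line currently holds */
    static uint64_t scratch[SCRATCH_SLOTS];   /* spill buffer contents             */
    static int dcache_spill_stores;           /* spill stores routed through cache */

    /* Conventional path: the spill store lands in the data cache and can
     * displace a line that useful program data was occupying. */
    static void spill_via_dcache(uint64_t addr, uint64_t value)
    {
        unsigned line = (addr / 64) % DCACHE_LINES;
        dcache_tag[line] = addr;   /* may displace a line holding useful data */
        dcache_spill_stores++;
        (void)value;
    }

    /* Sketched alternative: the spill bypasses the data cache entirely. */
    static void spill_via_scratchpad(unsigned slot, uint64_t value)
    {
        scratch[slot % SCRATCH_SLOTS] = value;  /* no cache traffic */
    }

    int main(void)
    {
        for (unsigned i = 0; i < 32; i++) {
            spill_via_dcache(0x1000 + 64 * i, i);  /* conventional path */
            spill_via_scratchpad(i, i);            /* separate spill path */
        }
        printf("spill stores routed through the data cache: %d vs 0\n",
               dcache_spill_stores);
        return 0;
    }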

There are also some innovations in the instruction encoding: they can put immediates inline in the instructions, like on the x86 and other Old Skool CISC architectures. They can even do that for quite large immediates. That's better than having to spend several instructions "building" the constants or having to load them at runtime from a table, as the RISC architectures do. They do this with a very regular encoding while keeping the encoding very tight (and variable-width). The way they pack many operations into each instruction means that even if the instructions are byte-aligned, the individual operations don't have to be. If the operations only need 7 bits for their encodings, then that's what you get. (The bit length is fixed per "slot" in the encoding so it's not as variable -- and slow -- as it sounds.)
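As a rough illustration of the density argument (again my own sketch; the opcodes and field widths are invented and are not the real Mill encoding), here is what packing sub-byte operations and a large inline immediate into a byte-aligned bundle looks like:

    /* Sketch of packing sub-byte operations into a byte-aligned bundle.
     * Opcodes and field widths are invented for illustration only. */
    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        uint8_t  bytes[16];  /* the bundle            */
        unsigned bitpos;     /* next free bit position */
    } bundle_t;

    /* Append the `width` low bits of `value` to the bundle. Individual
     * operations do not have to start on byte boundaries; only the
     * bundle itself does. */
    static void emit_bits(bundle_t *b, uint32_t value, unsigned width)
    {
        for (unsigned i = 0; i < width; i++, b->bitpos++) {
            if (value & (1u << i))
                b->bytes[b->bitpos / 8] |= 1u << (b->bitpos % 8);
        }
    }

    int main(void)
    {
        bundle_t b = {0};

        /* One 7-bit ALU op plus one op carrying a 32-bit inline
         * immediate: 7 + 7 + 32 = 46 bits, i.e. 6 bytes once the
         * bundle is rounded up to a byte boundary. A 32-bit RISC
         * typically needs two or three full instructions just to
         * materialize such a constant. */
        emit_bits(&b, 0x2A, 7);         /* small op, 7-bit encoding    */
        emit_bits(&b, 0x15, 7);         /* opcode of the immediate op  */
        emit_bits(&b, 0xDEADBEEF, 32);  /* the inline immediate itself */

        printf("bundle length: %u bytes\n", (b.bitpos + 7) / 8);
        return 0;
    }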

Instruction decoding and execution are pipelined: the first part of the instruction is decoded first and starts executing while the rest of the instruction gets decoded. This ties in with the way call/return/branch is handled, so the last part of the instruction effectively overlaps its execution with the decoding and execution of the first part of the instruction at the new site. This works for loops as well, so the end of the loop executes overlapped with the start of the loop, and the prolog/epilog overlaps with the body. You can think of it as a very strong form of branch delay slots -- or as branch delay slots on acid. This is combined with an innovation that allows vectorized loops with very cheap prologs/epilogs, so you can opportunistically vectorize practically all your loops. If a loop only executes twice (or thereabouts) you break even.
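Here is a back-of-the-envelope cost model for that break-even claim (the cycle counts and vector width are invented, purely to illustrate the shape of the trade-off):

    /* Toy cost model: when the prolog/epilog overhead approaches zero,
     * vectorization breaks even at a trip count of about two. All
     * numbers are invented for illustration. */
    #include <stdio.h>

    static unsigned scalar_cycles(unsigned trips, unsigned body)
    {
        return trips * body;
    }

    static unsigned vector_cycles(unsigned trips, unsigned body,
                                  unsigned width, unsigned overhead)
    {
        unsigned vec_trips = (trips + width - 1) / width; /* ceil(trips/width) */
        return overhead + vec_trips * body;               /* prolog+epilog paid once */
    }

    int main(void)
    {
        const unsigned body = 4, width = 4;

        /* Compare a conventional setup/drain cost with a near-free one. */
        for (unsigned overhead = 0; overhead <= 16; overhead += 16) {
            printf("prolog+epilog = %u cycles:\n", overhead);
            for (unsigned trips = 1; trips <= 8; trips++) {
                printf("  %u trips: scalar %3u vs vector %3u\n", trips,
                       scalar_cycles(trips, body),
                       vector_cycles(trips, body, width, overhead));
            }
        }
        return 0;
    }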


> I believe the appropriate term for someone like this is "architecture astronaut"

This is a much better term for ADHD.


Please. I've struggled with ADHD for most of my life, and while it probably does have an effect on both my work and my code, this is not the effect.

Don't make broad generalizations based on pop-culture definitions of mental illness.


I was speaking from experience. I have it too.

Edit: and even if I wasn't, it wouldn't make a difference - flitting from one grand project to the next without actually completing any of them is a clear diagnostic marker anyone can read about in a book.


Edit 2: Questions 1 and 2 from the Adult ADHD Self-Report Scale[0] (from Harvard, endorsed by the WHO) -

    1. How often do you have trouble wrapping up the final details of a project, once the challenging parts have been done?
    2. How often do you have difficulty getting things in order when you have to do a task that requires organization?


[0]: http://www.hcp.med.harvard.edu/ncs/ftpdir/adhd/18Q_ASRS_English.pdf


Architecture astronaut isn't a term for starting projects and not finishing them. It's a term for people who overengineer or overdesign their system.

Overengineering can lead to not finishing a project, but in the case of someone with ADHD that's unlikely to be the only issue.

Overengineering/overdesigning isn't a problem I have, and it's probably the single biggest thing I use to judge whether someone is a bad programmer, which is why what you said bothered me.



