
Then what is empirically wrong with trio programming? If the pair programming wonks actually have data showing a net benefit, then they surely must have evidence showing that returns peak at two.

To this day, I have never seen strong, reproducible evidence that pair programming is beneficial. And I don't know if I would call the occasional ad-hoc (unstructured) "pairing" anything other than what a normal person would call it: "two people collaborating like normal fucking human beings."

But if there is evidence that 2 is better than 1, why is 3 or 4 not even better?


Have you personally tried it? It's great!


So call it collaborative programming. People tend to work well with someone checking their work, and beyond two people, communication overhead becomes the limiting factor. You're creating a strawman that I'm saying "the more, the merrier." I'm not.


> We think Tree Notation might be the trick to getting the semantic web vision realized.

I'm not sure what Tree Notation fundamentally brings to the table that RDF, microformats, etc. do not.

Have you read: https://people.well.com/user/doctorow/metacrap.htm

How does Tree Notation address anything listed there?

The semantic web's failure has never been about technological limitations or tooling problems. And even if it were, that would be solving the easy part of the problem.


That's a fantastic, fantastic link. Thanks for sharing. Very informative. I may have read it before, but don't see it in my notes.

> How does Tree Notation address anything listed there?

The 2 things that have changed since 2001:

- git
- Tree Notation

A very powerful combination, in two ways: first as a collaborative database system (https://treenotation.org/treeBase/), and second as collaborative grammars (done via GitHub, GitLab, or any git host).
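
To make that concrete, here's a rough sketch (the record name and fields are made up, not taken from the TreeBase docs) of what a single record file in a TreeBase-style database could look like, assuming the one-space-per-level indentation Tree Notation uses for nesting:

  mars
   diameter 6794
   moons
    phobos
    deimos

Since each record is just a plain-text file, git gives you history, blame, and review of the data for free.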

1. "People lie". Complexity can be measured directly in Tree Notation. Complexity is where corruption hides. Tree Notation + git (blaming, etc), makes it much harder to lie.

2. "People are lazy": Tree Notation requires the fewest keystrokes (or pen/pencil strokes--it works great on paper too! very important in clinical settings. for instance, in some countries, 80% of hospitals have no digital medical records at all--I was recently told today!). Tree Notation and our grammar language gives you type checking, autocomplete, autocorrections, and more.

3. "People are stupid": see response to #2.

4. "Mission: Impossible -- know thyself". I'm not sure the problem here. The semantic web shouldn't be about forcing some model of behavior on people.

5. "Schemas aren't neutral". Tree Notation makes this very simple: just fork a grammar! We are carefully designing our Grammar language so you can simply do a file concat of N files to create a new grammar. We are making it as easy as possible to build, fork, and combine new grammars.

6. "Metrics influence results". In our database of 10k notations and computer languages, I quickly realized that you can't bucket things so cleanly. Terms like "a functional language" an "imperative language" are mildly useful, but not so precise. Instead, we now have over 1K columns. Tree Notation/TreeBase/Grammars make this very easy. Amongst other things, this will allow for better precision medicine.

7. "There's more than one way to describe something". We agree! It's so easy to fork a Grammar if you think you can do it better. Let the market decide. We have this of we talk about of the "World Wide Tree". But at least one person thinks we should call it the "World Wide Forest". I think they may be right.

FWIW, I pitched Tree Notation for the semantic web to the W3C in 2017 but never heard back. This is a reminder that I should ping them again.

Thanks again for the link. A very good read and I've long been a fan of CD's work.


A Z80 running an implementation of lisp is not a lisp machine, sorry. Not an apt comparison.

You might want to look up what a real lisp machine is.


Because while XML has types and schemas, the actual implementation and ergonomics are absolute garbage. Those things matter to intelligent people.


No. You can evaluate compiler speed, memory use, quality of error output, generated code quality, i.e. most of what anyone cares about, without knowing what's in it.


On what code? Any code you pick is an arbitrary benchmark. There are an infinite number of programs you could feed the compiler, and for any of them it might make some unknown optimization X that is superior to all other compilers. The number of programs you can write is infinite, therefore you cannot say for certain that it is better or worse for all programs, or in other words, objectively say it is terrible.


The idea that you need to test with infinite inputs to be "objective" is preposterous. You can in fact test with one input and the result is still objective, and you can conclude that it is objectively terrible or not on that one input. In reality, a suite of real world tests within your application domain provides actual useful information -- those aren't arbitrary benchmarks if that's the actual code you care about.
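
For example, here's a minimal black-box comparison sketch in Python; the compiler names, flags, and bench.c are placeholders for whatever compilers and real-world code you actually care about:

  import os, subprocess, time

  compilers = {
      "gcc":   ["gcc", "-O2", "bench.c", "-o", "bench_gcc"],
      "clang": ["clang", "-O2", "bench.c", "-o", "bench_clang"],
  }

  for name, cmd in compilers.items():
      start = time.monotonic()
      result = subprocess.run(cmd, capture_output=True, text=True)
      elapsed = time.monotonic() - start
      size = os.path.getsize(cmd[-1]) if result.returncode == 0 else None
      # compile time, exit status, diagnostic volume, and binary size are
      # all observable without reading a line of the compiler's source
      print(name, round(elapsed, 2), result.returncode, len(result.stderr), size)

Run the resulting binaries on your own workloads and you also get a read on generated-code quality, still without seeing inside the compiler.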


No. All optimizations have tradeoffs; if you don't know what the compiler is doing, then you cannot know the tradeoffs made. Empirical black-box testing works to a degree, past which it only gives false assurance, as it would in this case.


But you can know what it's doing. Just have it output the .s/.asm file.

You can also look at the flags and the different optimizations you can turn on and off.
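
For instance, with a hypothetical foo.cpp (exact flags vary by version, so check your compiler's docs):

  cl /O2 /FA foo.cpp     (MSVC: writes an assembly listing to foo.asm)
  g++ -O2 -S foo.cpp     (GCC/Clang: writes the assembly to foo.s)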

MSVC had some issues with C++ compatibility (a long time ago) but overall it is not bad. A compiler that can compile a whole operating system probably has most of its issues ironed out.


> But you can know what it's doing. Just have it output the .s/.asm file.

That's still black box testing. There ~may be~ almost certainly is hidden state that could mean the optimization you expect is not applied (or an optimization is misapplied) under certain circumstances, and you can't/won't discover this in advance -- only when you hit those conditions.

> A compiler that can compile a whole operating system probably has most of its issues ironed out.

History has shown this to be mostly wrong, given how many critical bugs have been discovered in GCC since it became capable of compiling Linux.


Pedantically, it's WWVB, the LW digital time code version, not WWV, the SW station. WWV/H/B were at risk of defunding, but their funding was renewed.

