Hacker News

> I mean this: a system is deterministic in practice if you can actually predict its outputs based on its inputs with reasonable amount of effort.

You're changing the meaning of determinism and bringing it closer to rationalism. Introducing theory into the mix violates Occam's Razor: we already have philosophy, and a word that covers what you want this one to cover. Theory is a component of rationalism, not of determinism. If you need theory to understand a system, then it already has elements of non-determinism; theory is what you need to make sense of the non-deterministic. Because theory deals with uncertainty, you wouldn't need any validation of your hypotheses if the system were truly deterministic. One observation would be enough to ascertain the whole thing.

A considered study of history would reveal where you're going wrong here. The Greeks invented empiricism, philosophy, and science, while the Egyptians never got there despite being only a short distance away. The Greeks wanted to distance themselves from theological frames. Despite all this, the Egyptians built pyramids. They understood determinism; they could not understand science. Determinism made them good engineers. Engineering is not science.

> A system is deterministic in theory if it's deterministic, but actually predicting its outputs requires absurd amount of computation.

Now you're starting to dip into computational-complexity territory. Predicting outputs is exactly the domain where the halting problem puts a backstop on what can be computed.
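To make the "deterministic in theory, but predicting its outputs requires absurd computation" category concrete, here's a minimal toy of my own (not anything from the parent): the Collatz map is completely deterministic, yet the only known way to predict how many steps an orbit takes to reach 1 is to actually run it, step by step.

```python
def collatz_steps(n: int) -> int:
    """Count the steps the deterministic Collatz map takes to reach 1.

    The rule is fixed and simple (halve if even, else 3n + 1), but no
    known shortcut predicts the step count without simulating the orbit.
    """
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # a famously long orbit for such a small start
```

Whether *every* starting value even reaches 1 is the open Collatz conjecture, which is exactly the flavor of question the halting problem tells us we can't answer mechanically in general.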

To prove to you that a brain is better than a computer, all I have to do is state the obvious: humans make algorithms, not the other way around. Sure, there are programs that will devise algorithms, but humans have to understand the domain before they can make computers do their work for them.

Your examples of Lorenz systems and weather do not change things at all. Humans have a better understanding of weather than computers do. In fact, humans have an entire body of theory, chaos theory, that attempts to make sense of why such things have difficult-to-determine causation. Humans devised it, not computers. And they devised it using the tools of epistemology, working out the justification of knowledge by seeking rigor, not by the scientific method of dreaming up hypotheses from empirical analysis and testing them. Chaos theory is more math than science.
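For what it's worth, the sensitivity both of us are gesturing at is easy to demonstrate. A minimal sketch, using plain forward-Euler integration and my own parameter choices (the classic sigma/rho/beta values, nothing taken from the thread): two Lorenz trajectories that start one part in a million apart end up in completely different places.

```python
import math

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the classic Lorenz system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)  # perturbed by one part in a million

early = None   # separation after 1 time unit: still tiny
max_sep = 0.0  # largest separation seen over the whole run
for i in range(30_000):  # 30 time units at dt = 0.001
    a, b = lorenz_step(a), lorenz_step(b)
    max_sep = max(max_sep, math.dist(a, b))
    if i == 1_000:
        early = math.dist(a, b)

# The microscopic initial gap grows by many orders of magnitude,
# saturating at the size of the attractor itself.
print(early, max_sep)
```

Euler with this step size isn't a faithful trajectory of the true system, but that's beside the point: both simulations use the identical deterministic rule, and the divergence comes from the initial condition, not the integrator.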

In other very real ways, humans outclass the other mammals, even though we largely share the same macro brain structures. We keep monkeys in cages; monkeys do not keep us in cages.

I'm not sure how much more I have to state the obvious here. You seem to be the one seeking out a special domain in which the rules don't apply, one in which computers are wholly analogous to brains. It may, and this is speculation, be true in degree rather than in kind.

But the halting problem itself illustrates a domain that humans are able to reason past, whereas we cannot possibly program a computer to do it. Computers cannot program themselves to find gradations of the halting problem; humans have to write the algo-generating algos. The pace of comp-sci progress at the moment is fully dictated by human ingenuity, and if you think about it, any change in that would mean the singularity is upon us.

I suspect that we'll never be able to get computers to fully take the place of brains. There will always be domains where brains are better than algos. Prove me wrong. Humans are capable of wanting things; even the best machine-learning algos at the moment struggle with finding purpose. Finding purpose is something even the most basic virus can achieve, and we can't even determine whether viruses are alive or not.

And that's super basic. How much more self-awareness do you think algos can find before running into hard physical limits? The computational and memory costs are huge. I predict the hard limit of engineered systems will fall well below full self-awareness. Instead we'll have to create biological systems to carry the progress forward: dogs will get smarter, mice will get smarter, and apes will eventually start doing things that humans do now, once we can fit our ethics around it.



