Ernst Dickmanns, a German scientist who developed self-driving cars in the 1980s (politico.eu)
196 points by omnibrain on July 24, 2018 | 132 comments



A professor of mine who worked on the project as a PhD student once told us about it: according to him, they had to use the heavily armored version of the car, intended to protect presidents, and strip out all the armor, because only this car platform could carry the enormous weight of the computers at the time.


The first version was basically a bus loaded with 486s.


The original CMU Navlab was like that. Three racks of Sun workstations in the back of a van, and a generator to provide power.[1]

By the early 1990s, they had one that more or less worked, a Pontiac Firebird with self-driving comparable to Tesla's today.

[1] https://en.wikipedia.org/wiki/Navlab


Then it couldn't possibly have been in the 1980s. The 80486 didn't exist back then. The 80386 has existed since the mid-80s, but it would have been very expensive.


They must have needed huge batteries - or could the bus engine power a large enough inverter to power all the 486s?


Why an inverter? I don't think DC-AC-DC is more efficient than DC-DC.


The bus alternator produces up to 80 amps at 20 volts, while the computer needs 12, 5, and 3.3 volts in the 1-20 amp range. Using an off-the-shelf inverter and a standard PC power supply is the cheapest way to convert the voltage.
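As a rough sanity check of that power budget (taking the figures above at face value; the conversion efficiencies and per-machine draw below are assumptions of mine, not project numbers):

    # Back-of-envelope power budget for the inverter + PC supply route (Python)
    alternator_watts = 80 * 20          # 80 A at 20 V, as quoted above
    inverter_eff = 0.85                 # assumed DC-AC inverter efficiency
    psu_eff = 0.75                      # assumed efficiency of a PC supply of that era
    usable_watts = alternator_watts * inverter_eff * psu_eff
    per_486_watts = 150                 # assumed draw of a single 486-class machine
    print(round(usable_watts), int(usable_watts // per_486_watts))
    # roughly 1020 W at the DC rails, i.e. on the order of 6-7 machines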


Bus alternator is 24VDC? Plus charging voltage, so a 27.6V output?


My guess would be a big gas generator in the back of the bus.


The Prometheus project is one of the most interesting projects almost nobody seems to know about. It is incredibly impressive, and one of the most memorable and interesting interactions I've ever had on HN was with the user sokoloff, who recounted his time as an intern on the PROMETHEUS project:

https://news.ycombinator.com/item?id=10328687


One of these cars is displayed in a German museum in Munich (Deutsches Museum Verkehrszentrum). I visit it every time I go there. It's a very unspectacular big gray Mercedes loaded with old computers. Back then it was a huge improvement over 8 MHz 68k processors; now we have another huge improvement, from 100 MHz processors to GPUs running in the GHz range. Is that enough to solve this problem? Or will it take another 30 years?


I was recently reading up on GM's Firebird concept/research cars from the 50s: https://en.m.wikipedia.org/wiki/General_Motors_Firebird

One of the features envisioned at the time:

> It also featured a sophisticated guidance system intended for use with "the highway of the future," where an electrical wire embedded in the roadway would send signals that would help guide future cars and avoid accidents.

So the idea has been around for a long time, approached from different directions.


The idea is as old as the car itself: with a horse and cart you could doze off and trust the horse on familiar routes.

I wouldn't be surprised if at some point someone tried to train animals to do the driving and achieved modest success.


> I wouldn't be surprised if at some point someone tried to train animals to do the driving and achieved modest success.

Somewhat relevant (animals, real time) but I think it failed:

https://en.m.wikipedia.org/wiki/Project_Pigeon


Good point. I can't find it right now, but I seem to recall some experiments with a mouse/rat in a ball controlling a robot based on stimuli (LEDs or video).

There's also this: https://www.technologyreview.com/s/401756/rat-brained-robot/ Maybe someday horse neurons will control our cars!


https://www.smithsonianmag.com/smart-news/in-new-zealand-dog...

This is the first link I found... I could have sworn I remembered an article about them teaching sheepdogs to drive their masters home after a late night at the pub.


There were two dogs in New Zealand trained to drive a car a few years ago. (All the training and testing was on a closed track of course.)



Obviously. But lazy buggers won't drive me cheap enough. ;-)


It surprises many non-techies I talk to (and techies too) to learn that AI hasn't really come very far in a few decades. We're a bit faster (by orders of magnitude, to be sure), but AI "summers" are characterized primarily by it being a bit cheaper to put old, limited things in products, and not by some abstract march of tech progress.

Given Moore's law, I'd be surprised if we ever see another AI summer after the current one peters out.


In the 60s, there was a rather famous project where scientists thought that solving image recognition would take them a few months at most. It's 2018 and the best algorithms on the planet will answer with 99% confidence that a sofa in a zebra print is, in fact, a zebra, unless they were very specifically trained against this scenario.

I feel this exact same thing is happening with autonomous cars - yes, it's possible to get them to be extremely good at recognising the road and surroundings - but the last few percent, those crucial few percent that make the technology actually usable, I don't see those happening for another 50 years at least.


My unpopular (to me, even) prediction is that wartime will be the catalyst that makes self-driving cars ubiquitous. In times of peace having a car that kills its occupant even 1/10,000 of the time is unacceptable. In times of war having a car (or more likely, truck/tanker/tank) with no occupant is a huge competitive advantage, because someone is actively trying to kill the occupant. Build something with even a 90% chance of crashing and you still win, because you can take people off the battlefield entirely while your opponent loses precious soldiers with every vehicle that's destroyed.

Then after the war, people's risk tolerances get reset because quibbling over a 1/10,000 chance of death seems ridiculous when people have been actively trying to kill you for the last 5 years and a countable percentage of your friends are now dead.

The way global politics is going we probably don't even have to wait 10 years for this.


You seem to have a war like WW2 in mind. But a war between big powers in the 21st century couldn't drag on for five years while you frantically work on new technology. Even without use of nukes, one side or the other is going to be flattened a lot faster than in the days when bombers used propellers.

Now, a new Cold War, that might push technological competition, although you won't get that "reset" of people's attitudes that you're after.


It sounds more like he has a war like Iraq and Afghanistan in mind. Imagine resupply convoys being driven autonomously, capable of launching drones for defense.

https://www.army-technology.com/features/feature77200/

"[...] every 55,702 barrels of fuel burned in Afghanistan by the US military forces corresponded to one casualty."


> It sounds more like he has a war like Iraq and Afghanistan in mind. Imagine [...]

So... that would make them exactly the wars in Iraq and Afghanistan. If that were the case we wouldn't have to imagine, we'd have the results in the field already.

So that war must be of a different kind. Not the kind where you already have technical superiority and have 0 incentive to develop it because you can already make truckloads of money by supplying current generation equipment to the front lines. It has to be a war where developing the new tech is the difference between your country existing or not a decade or more from now.

That's a special kind of war. It could be a cold war, but it's unlikely one of those would unfold the same way the one from the 20th century did. And if that kind of "hot" war is the only one that can bring these improvements, I'd rather stick to driving my own car and labeling my own photo library :).


I think you are half right. It would take wars exactly like Iraq and Afghanistan. And here we are, testing autonomous vehicles. Perhaps I'm wrong, but my understanding was that the current raft of autonomous technology was supported in its early stages by DARPA, with their priorities set by what was going on in Iraq and Afghanistan.

Someone I know frequently relays an anecdote. They were designing a new military helicopter. One of the requirements was that if the pilot was injured the aircraft should be able to return to base and come to a hover. They were trying to figure out how to cut weight. My friend said, well, if we just remove the pilot and all the equipment needed to support them, we'll lose a lot of weight, and the vehicle will be more aerodynamic. His suggestion wasn't taken seriously. That was before Iraq/Afghanistan.

That project got canceled, my friend retired. I have another friend working at the same company. They are building an autonomous helicopter.

*Edit: ok looks like someone else already noticed this: https://news.ycombinator.com/item?id=17600849


What I mean is that it's not these wars that will bring you autonomous vehicles. It's war in general, the idea of using autonomous machines of any kind for war is very old, and so is the study of it. But Iraq has been a war zone for decades now (with intermezzos). And although some of the tech was there since before the Gulf War we still haven't progressed that far in ~30 years. This kind of war brings slightly accelerated incremental progress, evolution.

Something like a world war or a cold war, where you question whether your city will be the next Hiroshima, brings you a jump: the nuclear bomb, the ICBM, man in space and on the Moon, and so much other sci-fi tech. That's what I meant.

Yes, today we have slightly better autonomous vehicles than a decade ago but this is natural evolution and it relied on progress in so many other (not necessarily war driven) improvements: computers, electronics, etc.

I would rather not see the war that brings you the AI for autonomous machines.


I've seen a number of projects already for self-driving supply trucks, and even walkers from Boston Dynamics. There's a (probably highly scripted) one featured in a later episode of Top Gear, for example.


Well, you probably won't have to wait 10 years for another non-peer war like that.

I think he's envisaging a big peer war with mass mobilization à la WW1/WW2, but he imagines such a war would be like a big Iraq/Afghanistan. It wouldn't.


It seems like it might be feasible for a proxy war (like the one in Syria) to escalate significantly, to the point where multiple major powers are contributing manpower and materiel directly to the front lines; in such a case, it may be possible for an extended war to take place while still incentivizing deployment of autonomous weaponized platforms, which are already in development today [1].

[1] https://en.wikipedia.org/wiki/Lethal_autonomous_weapon


I'm uncertain exactly what the next major war will be like or who the sides will be, but I was thinking of a war like Syria but in a developed country. Perhaps you'd have Jesusland vs. Union of Socialist States of America vs. the Neoconfederacy vs. Ecotopia vs. The Reconquista vs. Sovereign Citizens vs. drug cartels vs. Steel Glory vs. Greater Canada against the backdrop of Central American refugees in North America. Or Cosmopolitan London vs. Hail Britannia vs. Neo-Luddites vs. Scotland vs. Ireland in the British Isles. Or whatever the fissures are in Russia or China.

One observation about the Syrian Civil War (and Iraq and Afghanistan) is that large areas of the country became extremely dangerous, because everyone was fighting over them and you often couldn't see the adversary or know who you were fighting. Another is that supply lines were quite vulnerable; forces could hole up in a military base and be relatively safe, but to continue operations in the field, they needed food/ammo/fuel, all of which needed to be transported at significant risk. A third was that whichever force brought security to a region and stopped the fighting there often won political power, because a majority of people don't care who rules them, they just want to not die.

Drones + self-driving supply lines would allow a belligerent to fortify & disperse their own industrial and operations base, well out of contested zones, and then project power at zero risk to their own lives into disputed areas. If the drones are smart enough (i.e. minimize civilian casualties but can easily detect and eliminate belligerents), they're also likely to win political points for eliminating combatants.


I dearly hope that the Western cultural fractures don't turn into this.

In any case, this is unlikely to spell victory for the belligerent who chooses this route. Humans are cheaper than self driving military trucks. You can afford to lose them. Literally-literally.

It's only in our cushy situation of relative world peace that we find ourselves entertaining the idea that spending a million dollars on a truck is somehow cheaper than losing a ten-thousand-dollar truck with 6 men on it.


I also dearly hope that Western cultural fractures don't turn into this.

And no, humans are not cheaper than self-driving trucks. They appear so at the beginning of a conflict because wars usually start when there's an excess of humans and a shortage of resources for them all. However, it takes 18 years to grow a human to the point where they can fight in a war, and another year or two to train them. It takes 2-3 years to tool up a factory to produce drones & self-driving trucks (and maybe a decade to get the software right), but once you do, you can produce one every couple days. Assuming you can maintain your industrial & technological base long enough to get that factory up, guess which one is going to win?

The limiting factor for the Japanese in WW2 wasn't planes, it was trained pilots - they had no problem crashing the planes into ships with untrained pilots because those were both abundant resources, but were incapable of fighting a sustainable air war.


The military agrees with you. The DARPA Grand Challenge was what really got autonomous ground vehicle research going.

https://en.wikipedia.org/wiki/DARPA_Grand_Challenge

The US Army is still putting funding into a variety of autonomous vehicle programs. For example they want to send a convoy of vehicles to resupply a remote outpost without putting human drivers at risk.


The reasons I don't believe you're correct are: (a) in war, most ground truck transport already involves transporting non-driver humans; boots will be the workhorse of occupation indefinitely; (b) active remote control works great in situations where human drivers create extraordinary risk; (c) it might never even be possible to create self-driving cars that perform better than humans, no matter how much the military wants it or how many years are invested.


Prosthetics, exoskeletons, and brain-computer interfaces too.


Your comment is insightful and hilarious.


...and terrifying, I hope, once you get past the hilarity.


[flagged]


Isn't that mostly Hollywood illusion? That the Americans are the Good Guys, paragons of morality and protectors of all that is alive, while everyone in "backward countries" treats their own as cannon fodder?


No, I'd say from personal experience that Hollywood severely downplays reality here. Typical american movies feature villains and antagonists who are still pretty "americanized" in terms of culture.


Well, it depends. The armies certainly do value their highly trained soldiers and pilots, if only for the fact that they invested a lot of time and money in them. And during normal times they also value the average grunt, because the public does not like casualties. But in a serious war, nobody cares too much about a normal soldier.


To me, a fundamental question is "is this a problem?"

Humans have similar problems too. Our intelligence is trained by experience and evolution to operate within certain parameters. More concerning to me is the fact that a human can conceptualize "couch" from a single example. ML algorithms need to see thousands of couches before they can classify them.


Preface: I didn't intend for this to be so long... I just got on a roll...

I'm not sure it's true that we conceptualize couch from a single example. We've all seen thousands of couches in many different contexts, homes, schools, doctor's offices, on TV. If you had a person who had never seen a couch or a zebra before, and all you could tell them was right or wrong, it would probably take them (a lot?) more than a single try before they could distinguish between couch and zebra without fail.

The only reason it's easy for us is that we have this giant scaffolding built around zebras as living things that look like horses and donkeys, and couches as inanimate objects that look like things people sit on and regularly have some pattern on them.

I think people tend to underestimate just how much "training" in the ML sense goes into a human brain. After all, humans spend the first... decade? of life incapable of all but the simplest tasks. That's a decade in which our brains are consuming petabytes of information and processing it constantly.

Watching my children grow up, the way they learn seems remarkably similar to the way computers learn. If you watch a baby learn to move, it is purely an exercise in going too far in one direction and then too far in the other direction, repeated for basically years until they're coordinated enough to move roughly like an adult by the time they're 3 or so. It's the same with words and concepts too, they're just guessing based on things they already know (and the guesses are often waaaay off, because they don't yet know much), but they're constantly filing things away into their frameworks until their frameworks get big enough that this too resembles the way adults learn.

By the time we're adults, the human brain makes ML algorithms look pathetic, but that doesn't take into account the decades-long head start that our brains got.


> In the 60s, there was a rather famous project where scientists thought that solving image recognition would take them a few months at most.

What project was this?


I thought it might have been a reference to (LISP inventor) McCarthy's famous Dartmouth Conference in 1956, where they thought that 10 people over 2 months could make "significant advances" in making machines "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves".

https://en.wikipedia.org/wiki/Dartmouth_workshop

EDIT: Probably the Summer Vision Project, as Tome said in a sibling comment.

https://dspace.mit.edu/handle/1721.1/6125




Another way of putting it: AI research of the 60s-80s was extremely impressive and is ridiculously underappreciated right now. Many ideas were simply way ahead of their time. It's sad that most software engineers today don't bother learning anything about the past of computing, assuming everything made in those decades is automatically "outdated". (This is not restricted to AI. For example, most OO programmers don't know what Smalltalk is.)

The guy the article is about says as much.

“I’ve stopped giving general advice to other researchers,” said Dickmanns, now 82 years old. “Only this much: One should never completely lose sight of approaches that were once very successful.”

I really feel for those researchers. Seeing how your life's work is ignored by people who brute-force their ways through the problems you've already solved must be rather depressing.


> Given Moore's law, I'd be surprised if we ever see another AI summer after the current one peters out.

On the other hand, the article notes that today's AI Summer gets its funding from a constant stream of profitable applications (though less dramatic than the blue-sky hype). The field is less reliant on animal spirits of academia, and has a more reliable source of funding.

I still think things like self-driving cars are far in the future, but we're at a point where ongoing research into e.g. machine vision can be funded by less safety-critical commercial applications.


Expert systems had industry funding in the 80s. And contrary to what some people here keep implying, they delivered results. Some of them are still used right now.


I completed my PhD on the subject of improving back propagation - in 1992! Many people are startled when I tell them how old the idea of NNs is :-)


It's not widely known, but Alan Turing tinkered with neural nets in 1948, see [1], with his boss at the time calling Turing's work a "schoolboy essay". For a history of the subject, see Section 5.1 of [2]. Note that some parts of [2] are rather controversial.

For a historical account of back-propagation, which itself was 'invented' many times, see [3, 4].

It is also not widely known that regular expressions, one of programmers' favourite tools, were introduced by Kleene in [5]: he was interested in characterising the behaviour of McCulloch-Pitts "nerve nets" (early versions of neural nets) and finite automata.

[1] http://www.alanturing.net/turing_archive/pages/reference%20a...

[2] https://arxiv.org/abs/1404.7828

[3] https://www.math.uni-bielefeld.de/documenta/vol-ismp/52_grie...

[4] http://people.idsia.ch/~juergen/who-invented-backpropagation...

[5] S. Kleene, Representation of events in nerve nets and finite automata, see https://www.rand.org/content/dam/rand/pubs/research_memorand...


Great references (though I can't open [4]).

I sometimes think the contributions of numerical analysis pioneers are undervalued (e.g. Wilkinson, Kahan, Golub). Backpropagation is really just reverse-mode AD, and has been discovered many times, as you point out.

https://en.wikipedia.org/wiki/Automatic_differentiation

EDIT: [4] works now.



Too bad that so many people working in the field or ones just reading about it now assume the technology was just invented these days by Tesla or Waymo. It's about who shouts the loudest.


I like your pessimism. But I'm also optimistic that interesting (mostly non-AI) stuff will happen when Moore's Law isn't a substitute for other innovations anymore.

I am especially looking forward to Ray Kurzweil eating his hat when the singularity is delayed indefinitely.


You'll have a long wait - I think he projected 2045.


Well crap, I remembered something much closer.


He said Turing test 2029. That may have better hat eating possibilities.


Why was it again, that everyone reels in distaste from the church of singularity... Oh, yes...

"Oh, powerful deus ex machina yet to come, the one above me is lost, take him into the loving arms of your personality review and do not judge your faithful followers by his threshold."

The same reason all other powerful organized religion is distasteful: rampant Stasi-style denunciation and a constant attempt to subdue all opposition voices.


We get bombarded with the term at nearly every turn. We see it in movies, television, and video games. So for the most part, what researchers expect AI to become is not what the public thinks AI is. Marketing has a way of taking something that sounds cool, is just too complex to fully understand, but not so complex that they cannot convince you that you need it, and from there the terminology gets corrupted.

Sure, most people dismiss the most over-the-top depictions in TV, movies, and games, but it is surprising how many out there still believe governments and the mega-rich have something "close".


I agree that linear thinking has its faults, but I wonder if certain things can't "add up" to opportunities for breakthroughs, i.e. exascale computing, quantum-scale Boltzmann machines, and faster communication between CPU cores. I'm wondering what new technologies will be possible after modeling at the scales made possible by exascale.


At least to some extent, how you see this relates to what you want/think AI to eventually be.

A while back, I heard a particularly curmudgeonly interview with Noam Chomsky, just commenting on the state of AI and recent AI programs (Watson, Deep Blue...).

Chomsky basically called all these statistical AI programs brute force parlour tricks, nothing to do with computer science or artificial intelligence. He stopped just short of calling the whole thing fraud. Real AI, to him, is things like bird flocking algorithms (did he work on something like this? I can't remember). I think the gist was that to be an AI program, a program must embody a theory of intelligence. The theory-less nature of modern statistical algorithms seemed to really peeve him. Maybe he's right.

Anyway... I don't remember what point (if any) I was driving at. I think there are abstract values at play in how we narrate progress. Since we still don't know what intelligence "is," or even if it is a distinct thing, it's hard to know what the real milestones are. Personally, I like the fully realized, running-at-scale "proofs," but that probably says more about my ability to understand theory than anything else.


> Chomsky basically called all these statistical AI programs brute force parlour tricks, nothing to do with computer science or artificial intelligence.

The counterargument is that our own brains use brute force parlor tricks to solve problems. We learn by a priori knowledge (pre-built models), by experience (reinforcement learning), and by viewing and weighing features that we observe against prior experience (NNs/ANNs).

The entire point of ML is to produce approximately the same result a human does. Whether it's a parlor trick or a perfect simulation of the brain is irrelevant. There's no advantage to something novel and more "organic" unless it produces better results.

This is a common problem I see - people are always trying to compare human reasoning to artificial reasoning rather than looking at the output. The output is all that matters.


> This is a common problem I see - people are always trying to compare human reasoning to artificial reasoning rather than looking at the output. The output is all that matters.

Entire schools of philosophy disagree completely.

There was also a fantastic article here on HN a while ago [1] by Douglas Hofstadter where he ranted against google translate and similar, by showing that the current state-of-the-art statistical brute force word nearness ML cluster doesn't understand the text it's translating, and therefore will always be lacking, always have cases that it simply cannot solve.

He's basically saying that Weak AI can only get to 99% when it comes to translating, but we would need Strong AI to get to 100%, and Strong AI is probably impossible.

[1] https://news.ycombinator.com/item?id=16296738


> Entire schools of philosophy disagree completely.

I'm sure they do; it's very much a matter of debate.

> brute force word nearness ML cluster doesn't understand the text it's translating, and therefore will always be lacking

Humanlike reasoning doesn't solve this problem. Understanding does not prevent this abstract "lacking." Strong AI is not suddenly perfect.

> always have cases that it simply cannot solve.

This is an inherent problem with reasoning, not with strong vs weak AI. Which leads us back to:

> Strong AI is probably impossible.

If we measure "Strong AI" as infallible, then yes. If we measure it by "understanding," then no. Which is why I care more about results than the philosophical debate over understanding/consciousness/humanness. 99.999% is acceptable if 100% is impossible.


Given that we now have superhuman image recognition (in particular, even highly trained experts have a hard time classifying dog breeds with extensive time per sample while modern NNs can classify them with 99+% accuracy), I don't think this is true.

We will probably exceed humans' ability to come up with reliable training data before we reach strong AI.


So you're basically saying that Weak AI is good enough, and who cares if the machine really understands what it is doing? As if the ever-increasing accuracy numbers somehow move us closer to Strong AI?


It's hard to say for certain, but I would say that observed by humans, we'll probably get computers to the point of appearing to be conscious, with minds, passing the most challenging tests we can construct.

It seems like a reasonable extrapolation from modern technology. Think about text-to-speech producing sounds that are hard to distinguish from humans, or deepfakes producing images that look like they were actually filmed. Those are all weak AI. They don't need strong AI - just tons of labelled data - to produce things that can fool inexperienced humans.


This 'common' problem is addressed by the distinction between 'strong' AI and 'weak' AI.


While nice, the general public is not exposed to the details of what "AI" is and how it's designed. It's that uphill battle against expectations that makes comments like "AI is a bust" or "AI hasn't really advanced since the 60s" difficult to counter.


Thinking back, I wonder if that interview mentioned Turing test-like.. tests. I wonder what Chomsky thinks of these, as gauges of progress.


At some level:

Alice: “Computers aren’t doing real intelligence, because they don’t think the way we theorize we think.”

Bob: “Computers are doing real intelligence, because we don’t think the way we theorize we think.”


Given how much the "brains" of AI are built upon fairly old (and very old) mathematics, I find this sentiment to be somewhat myopic. Yes, deep learning has its roots in something 50+ years old. Yes, we still use Bayesian concepts unearthed a century or more ago. Probability theory took hold 80 years ago. So what? Do we throw out the old models and approaches simply because they're old?

But to say that AI "hasn't really come very far" because its technological achievements outweigh its computational side doesn't really make much sense to me.

Progress is slow. Seeing a story like this and saying "wow, and we're only now seeing this in use in practice today" makes an implicit assumption that once something is proved it should become widespread. The original self-driving cars were not practical; it took advancements in computational power and speed to make it so. That's "coming far." We don't need some novel ML approach to advance AI.


I don't know what your perception of my original sentiment is, so let me elaborate.

People generally assume, likely because of the astronomical rate of quality of life expansion due to technology and engineering in the 1900s, that things keep "going forward" at some rate. Often they think this rate is exponential. I see people citing AI implementation engineering as their primary examples of this. It's frustrating because it's not true. AI is old. That's fine and good and wholesome. But there is no real progress in AI theory, it's all practical engineering. Unfortunately, theory and engineering are often conflated with "technological progress" without realizing the former limits the latter with a very hard ceiling.


Another nice example is the Fourier transform. It took quite a while from 1822 until we had great applications for it in microcontrollers and DSPs for sound processing.

However I can also kind of see the argument here that the deep learning ideas don't seem to have such a fundamental idea behind them. We modeled how biology works at a very simple and abstract level. Now we have more computing power (and especially more data) so we could find a few applications.

But to me at least it doesn't look like a solid theory to build upon. It doesn't deliver insights into how things work like Fourier transforms did with signals and Bayes did with probabilities.


> We modeled how biology works

That's the popular story behind neural networks. Like any good popular explanation of a STEM concept, it's as useful as it is inaccurate.


> That's the popular story behind neural networks. Like any good popular explanation of a STEM concept, it's as useful as it is inaccurate.

To be fair, our own understanding of "how it works" in practice is not necessarily set in stone. Approximating our belief in how it works is the best case scenario, and expecting it to perfectly mirror human reasoning is unrealistic and frankly not all that appealing from a problem-solving stance.

Human reasoning has a lot of built-in buffers and fault tolerance, and it is, itself, a fairly narrow AI.


>at a simple and abstract level

It would be nice if you quoted honestly.


Sorry, I wasn't meaning to quote mine. I honestly didn't think that part was relevant to the point I was making.

Neural networks have taken some inspiration from biology, but that doesn't account for the vast bulk of the work. Biology analogies play a much bigger part in popular explanations than they did in the development of the technology.

(edit) Here, for example, take a look at the original paper on perceptrons. Whole lotta math. This having been released during one of the AI flaps, there was also a fair bit of psychology talk. But not much indication that he was just trying to copy a brain and otherwise didn't understand how it works.

https://blogs.umass.edu/brain-wars/files/2016/03/rosenblatt-...

Something similar happens with deep learning. They are grounded in theory, most notably a mathematical proof that deep MLPs are, in principle, capable of learning any mathematical function. You just won't see much of that stuff if you aren't actually reading the papers, because most people aren't that interested in vector calculus. It's much easier to explain things by analogy.
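For reference, the result being alluded to is the universal approximation theorem; strictly speaking it covers continuous functions on compact sets rather than literally any mathematical function. In its classic single-hidden-layer form: for any continuous f on a compact K ⊂ R^n, any ε > 0, and a non-polynomial activation σ, there exist N, α_i, w_i, b_i such that

    \sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma(w_i^\top x + b_i) \right| < \varepsilon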


Fun fact: the FFT (the algorithm that is actually used in applications today) was originally developed around 1805 by Gauss, but he didn't publish the work. Sorta crazy since it precedes Fourier's work.


This is a bit of a stretch. The algorithm used today is due to Cooley and Tukey. Gauss's work was great, but it's not the algorithm used today.

Also, another interesting fact about Fourier transforms: in the old days, computing FTs was too expensive, so physicists used lenses (which perform approximate FTs) to decompose signals into spectra.


From https://www.cis.rit.edu/class/simg716/Gauss_History_FFT.pdf

> Thus, Gauss' algorithm is as general and powerful as the Cooley-Tukey common factor algorithm and is, in fact, equivalent to a decimation in frequency algorithm adapted to a real data sequence.


> Progress is slow.

And incremental. Sometimes you'll hear people say that deep learning is nothing new, because the first deep neural networks were developed many decades ago, or because CNNs and LSTMs were developed in the late 90s, etc. That mindset glosses over a lot of the incremental developments (e.g., getting a decent handle on transfer learning) that were necessary to get from the initial academic demonstration of the idea to its successful commercial application.


People are often surprised that many planes can land completely autonomously at airports that support autoland (I think it's called ILS CAT III). And work on systems like that started in the 60s.


Automating systems to interact with other automatic systems, like planes and ILS, is a lot easier than automating systems that need to interact with humans.


And it's even harder to automate systems that have to derive data from environmental cues. A line-following robot is simple enough that undergrad students can build one, and making it stop if it detects obstacles is a good choice for extra credit. A road-following car that stops when detecting obstacles is way, way, way more complicated than that.
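To make the contrast concrete: the undergrad version is essentially one proportional control loop over a pair of reflectance sensors. A minimal sketch in Python (the two hardware functions are made-up placeholders, not any particular robot's API):

    # Minimal line-follower: steer toward whichever sensor sees the dark line.
    KP = 0.5          # proportional gain, tuned by hand
    BASE_SPEED = 0.3  # fraction of full motor speed

    def read_line_sensors():
        # Placeholder for two downward-facing reflectance sensors (0 = white floor, 1 = dark line).
        return 0.6, 0.4

    def set_motor_speeds(left, right):
        # Placeholder for the motor driver; here it just prints the command.
        print(f"motors: L={left:.2f} R={right:.2f}")

    def follow_line_step():
        left_val, right_val = read_line_sensors()
        error = left_val - right_val      # positive: line is under the left sensor
        correction = KP * error
        # Slow the left wheel, speed up the right, so the robot turns toward the line.
        set_motor_speeds(BASE_SPEED - correction, BASE_SPEED + correction)

    follow_line_step()

Everything hard about the road version lives in replacing read_line_sensors() with something that works on real streets.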


I don't think stopping for obstacles is the hard part of autonomous driving.


It's one surprisingly hard part of it. Note how Teslas enjoy driving into stationary objects with certain properties, say.


There is a road (Norway, Drammen-Svelvik) that I travel once or twice a week in my 2015 Tesla S 70D that often has a car parked on a particular corner. The car is directly ahead of my car and side on; my car reliably interprets this stationary car as being on the road and that we are on a collision course and applies the brakes even though the road actually bends to the left and the stationary car is not in the way. If I didn't know the road I might well have reacted in the same way. I'm not sure that improving the sensors would necessarily change things, the software that interprets the sensor output must be improved too.


It is my understanding that this is due to having the wrong kinds of sensors.


It gets harder when you are:

* Out of the lab

* On roads that aren't level

* When the obstacles move

* etc


It's very complicated. Not bumping into stationary objects on the road is fairly simple with the right kind of sensors, but most obstacles you encounter in real life are neither permanently stationary nor permanently in motion, and are not always on the road.


So outlaw human drivers?


Good luck. Don't forget pedestrians and cyclists.


Doesn’t have to be total - just in designated streets. One way streets work well enough. It wouldn’t solve the total problem, but I bet it’d solve enough to matter in the city.


I don't like the idea of outlawing humans from ever increasing parts of public space. Cars get too much already.


I think it would be more like "take a fraction of the space cars get today but dedicate it 100% to the machines to make it as efficient as possible".

So computers driving cars on a one lane road could probably do it more efficiently than humans do today on a 3 lane one.

The idea isn't to give cars more space but rather to give them less and use it so efficiently that you actually get even better results than you had before.


You are assuming that people will reliably follow the law. That's a bad assumption. I've seen people driving the wrong way on one-way streets multiple times. As long as there are any human drivers at all they're going to do stupid or illegal things, and any autonomous vehicles will have to cope with that reality.


Autoland systems have come a long way, but they're still unable to recognize and cope with certain critical mechanical failures. The human pilots remain actively engaged in flying the airplane, ready to take over in case something goes wrong. In car terms it's like Level 2 autonomous driving.


To me the major difference is the pressure differential of today's wave. It's a lot larger market than 80s AI, so it may provide more resources to overcome the hurdles. Or maybe it will get into another "comatose" decade.


> Given Moore's law, I'd be surprised if we ever see another AI summer after the current one peters out.

I assume you are referring to the often declared demise of Moore's Law. Reports of such demise have been greatly exaggerated: https://ourworldindata.org/grapher/transistors-per-microproc...


> Given Moore's law, I'd be surprised if we ever see another AI summer after the current one peters out.

This reminds me of a remark, possibly made by P. M. S. Blackett, which went something like this:

"We've run out of money, now we'll have to start to think."

It is possible that stagnation in computing performance will be just the stimulus that the field needs.


"hasn't really come very far in a few decades"? How about AlphaGo Zero?


What about AlphaGo Zero? I'm honestly not sure why a system that plays board games is seen as some kind of giant leap in AI. Seems mostly a result of Google's marketing.


You're really going to pretend that cracking Go wasn't a big moment for AI?


In 1996 IBM made a system that was better than any person at playing chess. Chess has roughly 400 moves possible at any point.

In 2016 Google made a system that was better than any person at playing Go. Go has roughly 130,000 moves possible at any point. That's approximately equal to 400 * 2^9.

Moore's law states that the number of transistors (which is a proxy for computing power) in CPUs doubles every year. 2016 - 1996 = 20 years of computing power growth.

In short, after our computing power has increased roughly by a factor of 2 ^ 20 someone made a system that plays a game that's 2 ^ 9 time more complicated than chess. Why is this seen as some giant, surprising leap forward?


Because a chess engine isn't scanning 400 possibilities, and a Go engine isn't scanning 130k (where'd you even get that number?) possible game states.

For example, a brute force search of a 19x19 Go board to a depth of 20 would yield on the order of 361^20 = 1.4E51 game states. With a reduction in search depth and better algorithms, state-of-the-art engines might cut this down by ten orders of magnitude, but can still be beaten by rank amateurs.

Deepmind's approach to board game engines blows all previous approaches out of the water. The claim that their success is incidental to Moore's law is categorically false.
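As a quick check of the 361^20 figure above (Python):

    # Naive brute force: ~361 choices per ply, searched 20 plies deep.
    print(f"{361 ** 20:.2e}")   # 1.41e+51, matching the ~1.4E51 quoted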


Even chess isn't tractable in terms of pure brute force search. And yet a computer won against arguably the best human player in 1996. It was mostly a PR stunt by IBM.

We had 20 years of doubling of computing power before the Lee Sedol match. In those 20 years there were many other AI programs that beat various world champions at other board games (and no one cared). There were other good Go engines before AlphaGo. They would beat most human players in the world.

Why AlphaGo of all other programs is seen not as increment, but as some giant leap forward? It doesn't solve a new class of problems and it doesn't use any fundamentally new algorithms.


Board game playing programs consider much more than every next possible move from a single state. Any decently skilled amateur could easily beat a program that can only reckon one move into the future. I see the argument you're trying to make here but the numbers you're citing do not support it, in my opinion. DeepMind is considering much more than the 130k moves possible from a single state during each move of play.


It was ML, not AI. It literally just runs enormous amounts of games to determine parameters that guide a search tree. That's nice, but was it truly a breakthrough technologically speaking? Hard to say.


"Ever" is a long time. But yeah, I wouldn't be surprised to see it die down again few a few decades at least.


This is truly fascinating. As an engineer working on autonomous driving, I've always been under the impression that self-driving car research exploded after the 2005 DARPA Grand Challenge.

It's embarrassing that I never knew about PROMETHEUS given that it was a €749m pan-European collaboration on autonomous driving.


Does Navlab not predate this work? It started in 1984. https://en.m.wikipedia.org/wiki/Navlab


The Navlab project and Dickmanns' projects (VaMoRs / Prometheus) were mostly concurrent, but, yeah, I think CMU started & published initial work first.


Can somebody who understands what Dickmanns did explain it? Was he actually doing object detection on road elements? How did it stay in the lanes?


It didn't use GPS or any sort of pre-fabbed maps. It drove purely by eye / machine vision using four cameras. No radar or lidar.


I'm asking, specifically, what machine vision technology it used. In 1994 there wasn't much machine vision (that I'm aware of) that would even be able to do lane-keeping, except under incredibly idealized conditions (empty roads, no weather, good lighting). If so, I would request that whoever claimed this guy invented the self-driving car temper their claim.


Here is what someone on the project had to say about it:

>Our vision system relied (heavily, not exclusively) on sensing prominent horizontal features symmetric about a common centerline. (Cars and trucks, especially at the time, have a lot of horizontal lines: bumpers, window top/bottom, valence, etc.)

Thread here: https://news.ycombinator.com/item?id=10328687


Ah! Yes, I also watched the video and I see they are identifying little "cups" (one horizontal line and two vertical).

Bridges also have many horizontal features symmetric about a common centerline. I would expect many false positives driving near bridges.


I do believe they DID use a radar system. An HN user who worked on the project mentioned something about using such a sensor.


The linked video states they could recognize obstacles only years later, so I think there was no reading of signs etc. involved. It must have been only lane-marking recognition that was used to steer.

The van from the video came years after the two Mercedes S-Class cars. I think there was no braking for obstacles, exiting the highway, or even using interchanges. It was simply holding a lane (maybe changing it when told to) and holding a set speed.

Correct me if I am wrong, as I did not read into the experiment.

https://www.youtube.com/watch?v=I39sxwYKlEE


OK, if this is correct, then I wouldn't call this a self-driving car; it was basically L1.

Not interesting, and frankly, takes away from the dramatic results of recent self-driving cars.


This does not seem to be the case. Check out this comment from a guy who worked on the project:

https://news.ycombinator.com/item?id=10333126

They had auto-braking, car detection, and lane merging at the least.


The text you linked to isn't very convincing for any of those. It describes a false positive that nearly caused an injury to the head of the project.

I think it's safe to say that while this was pioneering work, none of the technology it used was acceptable in terms of safety or generalizability.


I said they had those functionalities, never claiming they had either a perfect track record or something comparable to what we have today.

I pointed out that those functionalities existed and presumably sometimes worked, as opposed to the OP, who said they did _not_ have them. At the very least, the false positive proves they had emergency braking.

What I find amazing is what they managed to do with much less powerful hardware and much more primitive camera systems than we have today. To put it into context, the project is older than I am, and for most of my life until pretty recently I would have categorized the things they achieved somewhere between amazing and impossible.


They had emergency braking... which just means applying the brakes. If you false positive emergency brake, that's worse than doing nothing (increased risk of crash, integrated over a wide range of possibilities).

My concern is that this article massively overstates the results that Dickmanns had, in a way that implies his work had any real chance of being used in a production environment. There is a substantial difference from the sorts of probabilistic systems that Thrun and others build today.


Whether today’s approaches are acceptable in these terms also remains to be seen.


Just an odd observation: in one of the photos, the UniBwM car has a Bundeswehr license plate (Y-320624), yet there's no mention of any military involvement in the article. I'm guessing here, but maybe that was just a trick to get around some regulations that only apply to civilian vehicles?


Dickmanns was a professor at a military university. It's mentioned in the beginning.


Ah yes, silly me. "[...] in 1975, still under 40, he secured a position at a new research university of Germany’s armed forces" - must have missed that earlier. Thanks.


Was Thrun already involved with German computer vision?

Sad that the new AI crowd is not doing its due diligence on checking prior art...



