As an existence proof, people drive through fog every day. It is demonstrably possible to take binocular visual data and convert it into a series of control signals that will propel a car through fog. Sure, sometimes accidents happen, but every day thousands upon thousands of people safely navigate fog in e.g. San Francisco or London. I see no reason machines shouldn't be able to perform at least as well one day.
Just as a point of reference here - I'm not taking sides in this debate, but I vividly remember reading about the 1999 Toronto 87-car pile-up.
Fog on a straight road caused an 87-car pile-up after a trailer jackknifed. The point in the article I remember at the time was a mother recounting watching her 14-year-old daughter burn to death. Mom got out, but the daughter had been pinned by her leg.
The quote from the time was: "'Mama Sheila, please don't let me die. I'm only 14,' Marceya McLamore begged."
I suppose my point is that neither humans nor machines are good enough yet, and we shouldn't treat this as an abstract academic topic; it involves real people.
I think olivewell's comment in this subthread is what dooms the idea of camera-only self-driving for a really long time: you basically need general AI for it.
I work in machine learning in a different area (healthcare), so my perspective may be incomplete, but what I see in ML/AI is that models are really good at memorizing and not so good at understanding. What I mean by that is that a human navigates the world by knowing what a car is, what fog is, what a person is, what a plastic bag is, all sorts of things. Object detection can get much of the way there; whether it can get enough of the way there is an open question, but OK, we'll grant that. Whether AI/ML can actually make the step from knowing what an object is to knowing how that object interacts with all the other objects in the world is another question entirely.
In other words, identifying an object is just step one; understanding what that object means in context is the next absolutely required step, and it's way harder. We have a leg up as humans, so we can rely on vision alone, but machines will almost certainly need other sensors that report additional information, because they don't "get" context in the same way. And I'd say it's an open question whether, even with those sensors, they'll be able to get there. I hope so, but the game is far from won.
Machines will be (and already are) held to a much higher standard than human drivers. There are a lot of reasons for this, ranging from the emotional to the pragmatic.
Just because thousands of people roll the dice every day driving in unsafe conditions does not mean that we should tolerate machines doing so.
One note is that perhaps people roll the dice because they want to feel some control and forward momentum. But if you're sitting in the car, being autonomously driven somewhere, will you be so impatient?
I assume the car will give you an ETA, so you'll check the clock and go back to your book/social media. I'm okay with "slower, but smoother".