The tragedy is that GOFAI did all these things as built-ins.
Procedural expert systems have been doing introspection, backtracing,
declaring confidence intervals, etc., since the 1960s. Layering
"assurance" on top of inherently jittery statistical/stochastic and
neural systems seems to misunderstand how these models evolved, where
they came from, and why alternatives exist.
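For the skeptical, here is a minimal sketch of what "built-in" means: a
toy MYCIN-style rule engine with hypothetical rules and certainty
factors (illustrative only, not any particular historical system),
where every conclusion carries a confidence value and a backtrace of
exactly which rules fired from which facts.

```python
RULES = [
    # (name, premises, conclusion, certainty factor) -- hypothetical rules
    ("R1", ["fever", "stiff_neck"], "meningitis", 0.7),
    ("R2", ["meningitis", "rash"], "bacterial_meningitis", 0.8),
]

def infer(facts):
    """Forward-chain over RULES; facts maps assertion -> certainty."""
    trace = []
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion, cf in RULES:
            if conclusion in facts or not all(p in facts for p in premises):
                continue
            # Combine premise certainties (min) with the rule's own CF.
            derived = cf * min(facts[p] for p in premises)
            facts[conclusion] = derived
            trace.append((name, premises, conclusion, derived))
            changed = True
    return facts, trace

facts, trace = infer({"fever": 1.0, "stiff_neck": 0.9, "rash": 0.8})
for name, premises, conclusion, derived in trace:
    # The backtrace *is* the explanation: which rule, from which facts.
    print(f"{name}: {premises} -> {conclusion} (cf={derived:.2f})")
```

The point isn't the twenty-line engine; it's that the explanation and
the confidence value fall out of the architecture for free instead of
being bolted on afterwards.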
While horses don't consume fossil fuels, they also don't solve the problem of transoceanic flight. Yet for some reason every discussion of airplane design ends up dominated by a vocal contingent of buggy whip salesmen.
Carmack or someone said that all of the pieces needed for AGI are already solved; it's just a matter of someone scouring 40+ years of AI research, finding the right papers and techniques, and putting the concepts together.
And while airplanes don't solve the "problem" of human connectedness,
the conversation is inevitably led by advocates of transoceanic flight,
still hankering for flying cars and living in the Jetsons world of the
1950s futurists, long after transoceanic flight has become an actual
problem.
Some things they're attempting:
- Creating explainable outcomes by tracing the inner workings of ML models.
- Probing models for bias by feeding them random inputs and checking for skewed outputs (see the sketch after this list).
- Running training sets through differently weighted models to surface attacks and biases.
etc.
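To make the second item concrete, a minimal sketch of random-input bias
probing. Everything here is a stand-in: `model` is a hypothetical black
box with a deliberately planted bias, not any real system under audit.

```python
import random

def model(features, group):
    # Hypothetical black box; a real audit would wrap an actual model.
    score = sum(features) / len(features)
    return score + (0.1 if group == "A" else 0.0)  # deliberately planted bias

def probe(n=10_000):
    """Feed random inputs, split scores by sensitive attribute, compare."""
    by_group = {"A": [], "B": []}
    for _ in range(n):
        features = [random.random() for _ in range(5)]
        group = random.choice(["A", "B"])
        by_group[group].append(model(features, group))
    means = {g: sum(v) / len(v) for g, v in by_group.items()}
    # A gap near the planted 0.1 flags the bias without seeing the internals.
    print(f"mean score A={means['A']:.3f}, B={means['B']:.3f}, "
          f"gap={means['A'] - means['B']:.3f}")

probe()
```

A real audit would wrap the actual model and run a proper statistical
test on the gap, but the shape of the probe is the same.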