Dealing with a "more magic switch" this week with Claude.
The AI prompting business feels a lot like analog circuit design, where something being "near" something else causes capacitance or inductance without actually being connected.
To me these tools are pointless unless they're 100% reproducible. If they're not reproducible, they create more problems than they solve.
Thankfully local models are a thing, and the proper ones already have a "random" seed; given the same seed and the same prompt they will always give the same picture / answer / etc.
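The seed idea can be sketched with a toy token sampler (this is an illustration, not any real model's API): seeding the generator pins down every "random" choice, so the same seed and prompt reproduce the same output bit-for-bit.

```python
import random

def sample_tokens(seed: int, vocab: list[str], n: int = 5) -> list[str]:
    # A dedicated Random instance seeded up front: every draw is now
    # a pure function of (seed, vocab, n), so runs are reproducible.
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

vocab = ["the", "cat", "sat", "on", "mat"]

# Same seed -> identical output, run after run.
assert sample_tokens(42, vocab) == sample_tokens(42, vocab)

# Different seed -> a different (but equally reproducible) output.
print(sample_tokens(42, vocab))
print(sample_tokens(7, vocab))
```

Real local inference stacks work the same way in spirit: fix the seed (and the sampling parameters) and a regression becomes something you can actually replay and debug.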
If they can't do that, they're the road to impossible-to-fix bugs and impossible-to-diagnose product defects.
Many who rely on non-deterministic models are in for a world of hurt that's going to make today's most insane bugs look like small stakes.
What is old is new again!