I've been getting a similar feeling lately: if a thing has been done before and the knowledge is publicly available, asking the "AI" (the LLMs) about it first is the best place to start. It looks like that's how things are going to be from now on, and the effect is only going to amplify.
But as the AI gets increasingly competent at showing us how to do things, knowing which tasks are worth doing and which are not is still a job for the one who asks, not the AI.
Edit: One type of question I've found interesting is to make it speculate: asking questions it doesn't know the answer to, but can still speculate about, because they involve combining things it does know in novel (though not necessarily valid) ways.