Probably only if P=NP AND if the P algorithms have small polynomials.
It's possible that problems which today we only know how to solve with exponential algorithms are in fact solvable with algorithms in O(n^1000000) or whatever, in which case nothing actually changes - for all practical input sizes (and that's not a very large range), the exponential algorithm is actually faster.
However, if someone found an O(n^2) algorithm that solves, say, SAT, that would be a major change in our daily lives - there are numerous problems that are completely intractable today that would become quite trivial.
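To put rough numbers on the n^1000000 point: here's a quick back-of-the-envelope sketch in Python (the 2^n brute-force baseline is just an assumed stand-in for "exponential", not anything specific from the thread). It compares the two step counts on a log scale:

    import math

    def log2_of_exponential(n):
        """log2 of 2**n: a brute-force exponential algorithm's step count."""
        return n

    def log2_of_polynomial(n, degree=1_000_000):
        """log2 of n**degree: a hypothetical 'galactic' polynomial bound."""
        return degree * math.log2(n)

    for n in (50, 1_000, 1_000_000, 30_000_000):
        e, p = log2_of_exponential(n), log2_of_polynomial(n)
        faster = "2^n" if e < p else "n^1000000"
        print(f"n = {n:>10,}: 2^n = 2^{e:,.0f}, n^1000000 = 2^{p:,.0f} -> {faster} is smaller")

    # The exponential only overtakes n^1000000 somewhere around n ~ 2.5e7,
    # a size at which both step counts are hopeless anyway.

So until the polynomial's degree (and constants) come down to something sane, the "P" label buys you nothing in practice.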
I'm struggling to see how a proof of P=NP really makes any difference, though - other than psychologically, in the minds of people working on solving problems.
It doesn't have any real practical use for people actually building empirically faster implementations of useful algorithms (like SAT solvers, as you mention).
I should have been clearer in stating my premise: it appears that determining that P=NP itself would have no real direct impact on any existing real-life problems; in nearly all cases, "faster algorithms that impact life" come from theoretical computer science discoveries combined with engineering (partly CS engineering, partly hardware engineering).
I'd love to find a real counterexample, i.e. something in materials science or logistics/optimization that would make a difference to the world economy. I see CS folks get really excited about this, but often, as I poke and pull on the facts, I find that the more theoretical computational complexity gets, the less impact it has on daily activities.
I think I see where you're coming from, at least somewhat. Any revolution would come from finding a fast algorithm for an interesting problem, which would also happen to prove P=NP, but that wouldn't be the important part. The algorithm itself would be.
Conversely, an O(N^100) algorithm could in theory be practical if it's ~N^2 in practice except on very rare instances. Remember that big-O is an upper bound.
I should note, though, that big-O is not technically an upper bound on program execution time (the worst-case scenario); you can use big-O to refer to the best case or the average case as well. Big-O is a notation for the upper bound of a function in general (e.g. f(x) = x^2 + 70x + 9 is in O(x^2)).
I think it's reasonable in this kind of case to assume we are speaking about average-case complexity, though it's true that many people think of the worst case, i.e. taking an asymptotic upper bound on the worst-case number of execution steps.
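As a concrete illustration of the worst-case vs. average-case point, here's a small sketch using naive quicksort (the textbook example, my choice rather than anything from the thread): the same code is ~n log n on average but ~n^2 on already-sorted input when you always pick the first element as the pivot.

    import random
    import sys

    sys.setrecursionlimit(5_000)  # the worst case recurses ~n deep

    def quicksort(xs, counter):
        """Naive quicksort, first element as pivot; counter[0] tallies
        how many elements get compared against a pivot."""
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        counter[0] += len(rest)
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return quicksort(left, counter) + [pivot] + quicksort(right, counter)

    n = 1_000
    for name, data in [("random input", random.sample(range(10**6), n)),
                       ("sorted input", list(range(n)))]:
        counter = [0]
        quicksort(data, counter)
        print(f"{name}: {counter[0]:,} comparisons")
    # random input: on the order of 10-15 thousand; sorted input: 499,500 (n*(n-1)/2)

Same algorithm, three different functions you could wrap a big-O around (best, average, worst).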
I think this is what’s often missing when this topic is quickly presented, and it makes the whole thing so much easier to understand. If someone were to prove P = NP by actually exhibiting an efficient algorithm, that would make many computational problems easier to solve - even problems like “guessing a password”, which has obvious implications.
If we just knew P = NP because a trusted oracle said so, or someone proved it without showing how to do the conversion, or the conversion was impossibly laborious (say, a constant 2^100 steps), then that changes much less. If someone proved P ≠ NP, that would change the least, since that’s how we operate today.
It’s not about getting the answer. It’s about what getting the answer suggests.
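Just to put the “impossibly laborious constant” in perspective, a rough calculation (assuming about 10^9 simple operations per second, which is my own assumption):

    STEPS = 2 ** 100                  # the hypothetical constant overhead
    OPS_PER_SECOND = 1e9              # assumed: ~a billion simple ops per second
    SECONDS_PER_YEAR = 3.15e7
    AGE_OF_UNIVERSE_YEARS = 1.38e10

    years = STEPS / OPS_PER_SECOND / SECONDS_PER_YEAR
    print(f"2^100 steps ~ {years:.1e} years")                                 # ~4.0e13
    print(f"that's ~{years / AGE_OF_UNIVERSE_YEARS:,.0f}x the age of the universe")  # ~2,900x

So even a linear-time conversion with that constant in front of it would change nothing in practice.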
If P=NP, a very large class of optimization problems probably become practical to solve, which would be a big win for science and technology generally, to the extent that I think it would be felt in daily life.
Edit to add: and also, as another commenter mentioned, most existing encryption would likely be much easier to break, which would also be felt.
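To tie the password-guessing and encryption-breaking points together: both rest on the same asymmetry that P vs. NP is about - checking a candidate solution is cheap, while finding one (as far as we know) is not - and a constructive P=NP result would collapse that gap. A minimal sketch of the asymmetry (SHA-256 and the toy lowercase 3-letter "password" are just stand-ins I picked):

    import hashlib
    import itertools
    import string

    ALPHABET = string.ascii_lowercase
    STORED_HASH = hashlib.sha256(b"dog").hexdigest()  # toy 3-letter "password"

    def verify(guess):
        """The easy direction: checking one candidate is cheap (polynomial time)."""
        return hashlib.sha256(guess.encode()).hexdigest() == STORED_HASH

    def brute_force(max_len):
        """The hard direction: without a shortcut we may try up to 26^k candidates."""
        for length in range(1, max_len + 1):
            for combo in itertools.product(ALPHABET, repeat=length):
                guess = "".join(combo)
                if verify(guess):
                    return guess
        return None

    print(brute_force(3))  # fine for 3 lowercase letters; hopeless at 12+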