Important distinction for the author: backtracking is the search mechanism you're able to use when your logic comprises only predicates.
"Avoiding a configuration like 1,1,1" isn't enabled by backtracking. Rather, it's a constraint placed on the search space (domain), for which, the generator wont need generating solutions that needlessly gets tested.
Your son would probably really enjoy Prolog (and Logic Programming in general) where these concepts are much more developed.
Backtracking is not only a subject of logic programming. This definition [1] is quite general, so a loop with a single predicate can be called backtracking too.
Though I agree that the word "backtracking" is mostly used in logic programming, and Prolog is a famous example of its implementation and usage.
Of course backtracking isn't only a subject of logic programming; what made you think that when reading my reply?
The use of "when your logic" doesn't limit you to programming languages or imply that one is involved.
I'm not sure why you feel the need to unnecessarily correct me on that.
Also, of course the definition is quite general and, of course, Prolog is famous for excelling at it. Those are precisely the reasons why I mentioned all of the things that I did, in the way that I did.
I really don't understand why you think it's necessary to put emphasis on that a second time.
for x1 in range(1, 10):
    for x2 in range(1, 10):
        if x2 in [x1]:          # prune: x2 must differ from x1
            continue
        for x3 in range(1, 10):
            if x3 in [x1, x2]:  # prune: x3 must differ from both
                continue
            ...                 # extend or test the candidate (x1, x2, x3)
looks like backtracking to me. Specifically, the "continue" statement is the "jump back" that prunes an invalid search sub-tree. The constraints are expressed as predicates here, though.
But even in code where the constraints are built into the loop, it can also be called backtracking, since we have our target predicate "is_valid(square)". The search tree is trivial in this case, though: it's just a root and leaves.
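For concreteness, a sketch of that trivial-tree case (the body of is_valid and the 3x3 layout are illustrative assumptions, not code from the article):

from itertools import permutations

def is_valid(square):
    """Illustrative predicate: all rows, columns and diagonals sum to 15."""
    (a, b, c), (d, e, f), (g, h, i) = square
    lines = [a+b+c, d+e+f, g+h+i,   # rows
             a+d+g, b+e+h, c+f+i,   # columns
             a+e+i, c+e+g]          # diagonals
    return all(s == 15 for s in lines)

# Every complete permutation is a leaf directly under the root;
# the predicate is only checked on finished candidates.
solutions = [p for p in permutations(range(1, 10))
             if is_valid((p[0:3], p[3:6], p[6:9]))]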
In other words, I didn't quite understand the distinction that you were talking about.
I just watched your son's video about the 8-bit CPU he is building. While viewing it my thoughts were "he must have VERY proud parents (and rightfully so)" and "this kid's future will be amazing (again, rightfully so)". I was about to point out his age to the readers here so that they would see and appreciate that. _True_ genius.
Congratulations to you, and very sincere respect and best wishes for your son.
Great article ... I was another 16 yo who was playing with computers, but back then it was assembly language and wire-wrapped circuit boards. Producing magic squares was one of the projects in my first computer class in high school ... it's possible to generate them in O(n) time.
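The comment doesn't say which construction was meant; one classical candidate is the Siamese method for odd orders, which fills each of the n^2 cells in constant time (so linear in the number of entries). A sketch:

def siamese(n):
    """Siamese (de la Loubere) construction of an odd-order magic square."""
    assert n % 2 == 1
    square = [[0] * n for _ in range(n)]
    r, c = 0, n // 2                        # start in the middle of the top row
    for k in range(1, n * n + 1):
        square[r][c] = k
        nr, nc = (r - 1) % n, (c + 1) % n   # move diagonally up-right, wrapping
        if square[nr][nc]:                  # occupied: step down instead
            nr, nc = (r + 1) % n, c
        r, c = nr, nc
    return square

print(siamese(3))   # [[8, 1, 6], [3, 5, 7], [4, 9, 2]]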
He can improve his program even further with more granular validity checking and backtracking.
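For instance (a sketch assuming a 3x3 square with digits 1-9, not the son's actual program), checks can run the moment each row, column, or diagonal is completed, instead of only on full squares:

def solve(cells=()):
    """Backtracking with granular checks; cells is filled row by row."""
    if len(cells) == 9:
        yield (cells[0:3], cells[3:6], cells[6:9])
        return
    for d in range(1, 10):
        if d in cells:
            continue
        new = cells + (d,)
        m = len(new)
        # Prune as soon as a completed line misses the magic sum 15.
        if m % 3 == 0 and sum(new[m-3:]) != 15:        # a row just closed
            continue
        if m >= 7 and sum(new[m-7::3]) != 15:          # a column just closed
            continue
        if m == 9 and (new[0] + new[4] + new[8] != 15
                       or new[2] + new[4] + new[6] != 15):  # diagonals
            continue
        yield from solve(new)

print(next(solve()))    # one of the eight 3x3 magic squares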
You might want to interest him in solving sudoku. First manually and then with a computer program. Then he will be able to abstract what he learned and reapply it to magic squares.
The method basically boils down to:
1. Knowing a few things about your solution (for example, some numbers in a sudoku puzzle, or the placement of some digit in a magic square if, say, you look for a magic square with the number 5 in the top left corner).
2. Developing new knowledge from what you already know by applying the rules (constraints) you know about the solution (for example, if you have just one number missing in a row of a magic square, you know what it must be, because together with all the others already known in that row it has to sum to the magic number).
3. When you can't get new knowledge this way, just assume some fact. (I have no idea what must be in the bottom right corner, but let's assume it's 7 because I don't have a 7 yet.)
4. Try to build up new knowledge on that (exactly like in step 2) until you reach a solution (which means the problem might have multiple solutions), or until you generate a piece of knowledge that violates your constraints. The latter means the assumption was wrong, so you discard it and all the knowledge you generated after it. But now you have a new solid piece of knowledge (the bottom right corner doesn't contain 7), so you can take note of it and continue with step 2.
Continue this until you either reach a solution, hit a contradiction (without an active assumption), or run out of things to assume because all possible assumptions have been ruled out. The first outcome means there's exactly one solution (provided you don't have an active assumption). The latter two mean there's no solution.
If you don't have any knowledge about your solution, just the rules it has to obey (like starting with a blank magic square), then you simply start at step 3. A minimal code sketch of this loop is below.
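Here's one way those four steps could look for sudoku (a hedged sketch: the candidate-set encoding, the single "naked single" propagation rule, and the sample puzzle are my own choices, not anything from the comment):

import copy

def peers(cell):
    """All cells sharing a row, column, or 3x3 box with this cell."""
    r, c = cell
    for i in range(9):
        if i != c: yield (r, i)
        if i != r: yield (i, c)
    br, bc = 3 * (r // 3), 3 * (c // 3)
    for i in range(br, br + 3):
        for j in range(bc, bc + 3):
            if (i, j) != cell:
                yield (i, j)

def propagate(grid):
    """Step 2: derive forced facts. A decided cell removes its digit from
    every peer; returns False when a contradiction appears."""
    changed = True
    while changed:
        changed = False
        for cell, cands in grid.items():
            if not cands:
                return False                    # no possibility left
            if len(cands) == 1:
                (d,) = cands
                for peer in peers(cell):
                    if d in grid[peer]:
                        if len(grid[peer]) == 1:
                            return False        # two peers forced to d
                        grid[peer].discard(d)
                        changed = True
    return True

def solve(grid):
    """Steps 2-4: propagate, then assume a digit and recurse; a failed
    assumption becomes solid knowledge (that digit is ruled out)."""
    if not propagate(grid):
        return None
    undecided = [c for c in grid if len(grid[c]) > 1]
    if not undecided:
        return grid                             # solution reached
    cell = min(undecided, key=lambda c: len(grid[c]))
    for d in sorted(grid[cell]):                # step 3: assume cell = d
        trial = copy.deepcopy(grid)
        trial[cell] = {d}
        result = solve(trial)
        if result:
            return result                       # step 4: assumption held
        grid[cell].discard(d)                   # step 4: note "cell != d"
    return None                                 # all assumptions ruled out

# Candidate sets per cell; "0" marks an empty cell in this sample puzzle.
puzzle = ("530070000600195000098000060800060003400803001"
          "700020006060000280000419005000080079")
grid = {(r, c): {int(puzzle[9*r + c])} if puzzle[9*r + c] != "0"
        else set(range(1, 10)) for r in range(9) for c in range(9)}
solution = solve(grid)
for r in range(9):
    print("".join(str(min(solution[(r, c)])) for c in range(9)))

The grid[cell].discard(d) line is exactly the "new solid piece of knowledge" from step 4: a failed assumption is recorded as its negation before the next one is tried.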
Funny thing is that it's so general that it's how ALL mathematical proofs work (except those that use induction, which have a very specific trick for bundling an infinite number of forward knowledge-derivation steps into one easily applicable step, for convenience).
The whole trick with math is that we encode our knowledge about it in ways that are true for an infinite number of possible substitutions and transformations. We write a^2+b^2=c^2, not 3^2+42=5^2-4^2+42 and all the others, because there are infinitely many of them. So matching all of math's knowledge to your premises, to generate new knowledge about your problem and eventually reach the thesis, is rarely an easy thing. But the structure of each math proof is the same as solving sudoku, just with clever substitutions that match the rules (which can come from the entirety of math) to your problem.