You can develop your own IQ tests if you want, but chances are any IQ test you create will show high concordance with existing IQ tests, and the predictions you make from your new test will match the predictions of the established ones. In other words, it will probably validate the suspected construct behind IQ tests.
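To make "concordance" concrete, here is a minimal sketch of how you might check agreement between two tests. It uses Lin's concordance correlation coefficient, which penalizes both scatter and systematic offset between paired scores; the simulated "latent ability" setup is purely illustrative, not from the original text.

```python
import random

def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient between paired scores.

    Reaches 1.0 only when the two score lists agree exactly; a constant
    offset or extra scatter between them lowers the value.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical scenario: two different tests both measure the same
# latent ability, each with its own independent measurement noise.
random.seed(0)
ability = [random.gauss(100, 15) for _ in range(500)]
test_a = [a + random.gauss(0, 5) for a in ability]
test_b = [a + random.gauss(0, 5) for a in ability]

print(concordance_correlation(test_a, test_b))  # high, close to 0.9
```

With noise small relative to the spread of ability, the two tests agree strongly even though neither measures ability perfectly; that is the sense in which a new test "validates" the old ones.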
The construct behind the IQ test is known as g, or general mental ability, as opposed to domain-specific ability. While it's true that repeated practice with a specific IQ test will boost performance on that test, the boost is modest and doesn't generalize well -- that kind of domain-specific improvement is not that interesting.
One interesting IQ test you might want to peek at is Raven's matrices. Raven's matrices present a 2-dimensional grid of figures that change progressively according to some hidden factors. The goal is to rapidly generate hypotheses about which factors are generating the figures, and to predict what comes next.
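The "hypothesize hidden factors, then predict" loop can be sketched with a toy numeric stand-in for a Raven's matrix. The grid, the additive rule space, and the `predict_missing` solver below are all hypothetical choices for illustration; real Raven's items use visual features, not numbers.

```python
from itertools import product

# Toy Raven-style puzzle: each cell is a shape count generated by hidden
# row and column factors; the bottom-right cell is the one to predict.
grid = [
    [2, 3, 4],
    [3, 4, 5],
    [4, 5, None],
]

def predict_missing(grid):
    """Enumerate simple additive hypotheses of the form
    cell = base + row * row_step + col * col_step, and return the
    prediction of the first hypothesis consistent with every visible cell."""
    for base, row_step, col_step in product(range(-5, 6), repeat=3):
        if all(grid[r][c] == base + r * row_step + c * col_step
               for r in range(3) for c in range(3)
               if grid[r][c] is not None):
            return base + 2 * row_step + 2 * col_step
    return None  # no hypothesis in the search space fits

print(predict_missing(grid))  # -> 6
```

The solver mirrors the task description: it generates candidate hidden factors, rejects those that contradict the visible figures, and uses the surviving hypothesis to predict the missing cell.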
As an aside, I think the golden objective of AI is to develop a general thinking machine. Such a machine, given a goal, would be able to develop its own hardware drivers or write software better than itself. It would be able to tackle a game like Go without fine-tuning by expert Google employees, and that same machine would be able to look at scientific data and develop its own causal theories -- as opposed to a domain-specific Go machine, or a domain-specific Chess machine.