Yes, that may be true. Instead of trying to reproduce it, they attempt to build upon it. Projects that successfully build on previous work become original research articles, and the cycle continues.
tl;dr: A flawed technique can be described, and it can be hard or impossible to detect that the technique is flawed just by using it.
For example, the paper at http://www.jstor.org/stable/222500 describes a method that uses the stationary bootstrap to eliminate data-snooping bias in studies of "technical analysis" in finance.
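For readers who haven't met it, the stationary bootstrap is a simple block-resampling scheme. Here is a minimal sketch of the Politis–Romano version; the block parameter p = 0.1 and the synthetic return series are my own illustrative choices, not anything from the paper:

```python
# Minimal sketch of the stationary bootstrap (Politis & Romano, 1994).
# `p` is the block-restart probability; expected block length is 1/p.
import numpy as np

def stationary_bootstrap(x: np.ndarray, p: float, rng: np.random.Generator) -> np.ndarray:
    """Return one resampled series of the same length as x.

    Blocks start at uniformly random positions, have geometrically
    distributed lengths with mean 1/p, and wrap around circularly.
    """
    n = len(x)
    idx = np.empty(n, dtype=int)
    idx[0] = rng.integers(n)               # first observation: uniform start
    for t in range(1, n):
        if rng.random() < p:               # restart a new block
            idx[t] = rng.integers(n)
        else:                              # continue the current block (circularly)
            idx[t] = (idx[t - 1] + 1) % n
    return x[idx]

# Illustrative use: bootstrap distribution of the mean of a return series.
rng = np.random.default_rng(0)
returns = rng.standard_normal(1000) * 0.01     # synthetic stand-in for daily returns
boot_means = [stationary_bootstrap(returns, p=0.1, rng=rng).mean() for _ in range(2000)]
print(np.percentile(boot_means, [2.5, 97.5]))  # rough 95% interval for the mean
```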
It was published in the Journal of Finance in 1999; by the time I worked with it in 2010 it had been cited over 500 times.
The proof in the paper is inscrutable; I could find no one at my institution who could explain or verify it.
I attempted to reproduce the results in the paper, which is where the problems started. The authors did not give enough information to do this in all cases, but in some cases I was able to reconstruct the algorithms.
They did not perform as described. Of the five algorithms I could reproduce (this is from memory), one worked roughly as described and two others were not completely hopeless.
Looking more closely, I realised that the original authors had completely disregarded an important factor in implementing the techniques described by the algorithms: transaction costs. Once transaction costs were allowed for (a difficult but not impossible task), the effects the authors reported disappeared completely.
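To make the point concrete, here is a minimal sketch of where proportional costs enter a backtest of one simple technical rule. The moving-average crossover rule, the 10 bp cost per position change, and the synthetic price series are my own illustrative assumptions, not the paper's setup:

```python
# Sketch: a long/flat moving-average crossover rule, gross vs. net of
# proportional transaction costs charged whenever the position changes.
import numpy as np
import pandas as pd

def ma_crossover_net_returns(prices: pd.Series, fast: int = 10, slow: int = 50,
                             cost_per_trade: float = 0.001) -> pd.Series:
    """Daily strategy returns, net of a proportional cost on each position change."""
    rets = prices.pct_change().fillna(0.0)
    signal = (prices.rolling(fast).mean() > prices.rolling(slow).mean()).astype(float)
    position = signal.shift(1).fillna(0.0)          # trade on the next bar, no look-ahead
    turnover = position.diff().abs().fillna(0.0)    # 1.0 each time we enter or exit
    gross = position * rets
    return gross - turnover * cost_per_trade

# Illustrative comparison on a synthetic random-walk price series.
rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000))))
net = ma_crossover_net_returns(prices)
gross = ma_crossover_net_returns(prices, cost_per_trade=0.0)
print("gross mean daily return:", gross.mean())
print("net mean daily return:  ", net.mean())
```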
Looking even closer, I examined the assumptions behind "White's Reality Check", which the paper relied on: it is built on the stationary bootstrap of Politis and Romano (1994). But financial returns are not stationary. Not at all.
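A quick, informal way to probe that assumption is to compare summary statistics of the return series across sub-periods; large, persistent shifts (especially in volatility) are a warning sign, though not proof, since a stationary GARCH process also shows volatility clustering. A minimal sketch with a synthetic regime change (real data would be substituted for the synthetic series):

```python
# Compare sub-period means and standard deviations of a return series.
import numpy as np

def subperiod_stats(returns: np.ndarray, n_periods: int = 5) -> None:
    for i, chunk in enumerate(np.array_split(returns, n_periods)):
        print(f"period {i}: mean={chunk.mean():+.5f}  sd={chunk.std():.5f}")

# Synthetic example: calm regime followed by a high-volatility regime.
rng = np.random.default_rng(0)
calm = rng.normal(0, 0.005, 1500)
crisis = rng.normal(0, 0.03, 500)
subperiod_stats(np.concatenate([calm, crisis]))
```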
So: dodgy logic, misuse of statistics, irreproducible experiments, ignoring important aspects of the data, and (I suspect) wishful thinking add up to a paper that is comprehensively false. Cited hundreds of times, and used many times to verify other results.
Which proof exactly are you talking about? I briefly looked at the paper (I've seen it before, but it's been quite a while...), but it seems that they pretty much use a previously known approach, and the only proof in it is simply "replicated for convenience of the reader". Also, this would certainly be neither the first nor the last paper that ignores transaction costs, and their omission does not really invalidate the argument (even if you cannot profitably trade on an anomaly, why is it there in the first place?), so I don't think you can accuse them of bullshit just based on that.
Non-stationarity is a problem though. Still, you need a bit more to call complete bullshit on this imo -- e.g. say something like "after switching to a different bootstrap method that works in the presence of stochastic volatility, the result suddenly disappears". Perhaps that is what you do in your paper :) I should take a look.
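One candidate for such an alternative is the wild bootstrap, which preserves each observation's conditional scale by multiplying it with an external random sign, and so is robust to heteroskedasticity. This is my own illustrative pick, not something from the paper, and the plain version destroys serial dependence (dependent-wild-bootstrap variants exist for that). A minimal sketch:

```python
# Wild bootstrap of a (mean-zero) return series with Rademacher weights.
import numpy as np

def wild_bootstrap(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Resample by flipping signs of the centered series."""
    centered = x - x.mean()
    signs = rng.choice([-1.0, 1.0], size=len(x))
    return centered * signs

rng = np.random.default_rng(0)
returns = rng.standard_normal(1000) * 0.01
boot_means = [wild_bootstrap(returns, rng).mean() for _ in range(2000)]
print(np.percentile(boot_means, [2.5, 97.5]))   # interval under the null of zero mean
```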
All that being said, not many people believe in technical indicators working in equities these days, or anywhere really (FX was a bit of a holdout -- not sure if it still is?), so perhaps science has kinda sorted itself out in this case :) I've seen far worse cases of snooping and non-replicability though, and some are still going strong.
Non-stationarity is enough to call bullshit. Really! How can a stationary bootstrap be used on data (financial returns) that are so prone to non-stationarity?
Irreproducibility is also enough to call it very bad; it should not have been published in that form. They talked a lot about their algorithms without properly describing them.
Ignoring transaction costs is also enough to call bullshit. It is a mistake that should only be made by rank amateurs, and it is the most common mistake made by amateurs in the Technical Analysis field IMO.
It is a very, very bad paper, but because it gives a technique that can be used to show that TA is possible, it is much beloved by researchers in the field.
My own conclusion is that, generally, TA cannot be done profitably at these time scales.
"Non-stationarity" is really an umbrella term; the truth is, there is no single authoritative model of asset returns, and really there will never be one. This should not preclude all statistical analysis though, and it is done by making simplifying assumptions, just like in every other case, and in every other field. Your claim is that they are too strong in this case, but it's a claim that can be fairly easily shown empirically or in simulations, and it really should be IMO, particularly since other bootstrap methods exist.
I agree about algorithms; rather unfortunately, this is true in more than this one paper. There certainly is movement towards requiring people to make their code fully available, but we are not quite there yet. But if you describe your failure to replicate, this is definitely a strong argument that the authors would IMO need to address.
Ignoring transaction costs would be a major problem if the paper's main point were "we found a strategy returning X% above market, it's awesome and people should give us money" -- but this is written for a very different purpose and audience. That being said, today it would not be published without a transaction cost analysis -- but I would have no problem with them saying "with such-and-such costs, the profits are no longer there"; it would not invalidate the paper at all. At the time it was written, though, TC analysis was not as standard in the academic literature as it is now.
I agree with you about TA, and TBH most serious researchers are of the same opinion, and have been for a long time -- even at the time of publication, it was a bit of an outlier, and this is not a particularly popular area of research (how many of those citations are in recent top journal articles?). Forex was a bit of an open question last time I checked, but it's been a few years, not sure if it still is.
> Yes, that may be true. Instead of trying to reproduce it, they attempt to build upon it. Projects that successfully build on previous work become original research articles, and the cycle continues.
Whether this "works" or not depends on how the previous projects enter into it. If the results of previous projects are used as assumptions to justify the methods, data, etc. of the subsequent project, there is no check and we risk the research becoming a house of cards which could collapse due to faultly, untested assumptions that were used.
If the subsequent projects are performed in such a way that they also test the previous results/assumption, this can be avoided. I can't tell from your wording which you are suggesting, though it seems to lean towards the former.