The truth is we started there. But for any reasonably sized, complex codebase this just isn't going to work: the context window isn't big enough, and it becomes harder for the LLM to reason over arbitrary parts of a very long context.
For the time being, indexing and retrieving a good collection of 10-20 code chunks is more effective and performant in practice.
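For what it's worth, here's a minimal sketch of what "index and retrieve 10-20 chunks" can look like. The line-based chunker, toy embedding, and k value are illustrative placeholders, not a description of any particular pipeline; a real setup would use syntax-aware chunking and a code-embedding model.

```python
import numpy as np

def chunk_file(text: str, max_lines: int = 40) -> list[str]:
    # Naive fixed-size chunking by lines; real systems often split on
    # function/class boundaries instead.
    lines = text.splitlines()
    return ["\n".join(lines[i:i + max_lines]) for i in range(0, len(lines), max_lines)]

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a normalized character-frequency vector.
    # Stands in for a call to an actual code-embedding model.
    vec = np.zeros(64)
    for ch in text:
        vec[ord(ch) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def top_k_chunks(query: str, chunks: list[str], k: int = 15) -> list[str]:
    # Score every chunk by cosine similarity to the query and keep the top k,
    # which then get packed into the LLM prompt.
    q = embed(query)
    scores = [float(q @ embed(c)) for c in chunks]
    order = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in order]
```

The point is just that a small, well-chosen set of chunks goes into the prompt, rather than the whole repository.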