No, I think a separate type-level language is a good thing, so long as my data-level language remains Turing complete (and thus undecidable). I would much rather be forced by the syntax -- by the construction of the language itself -- into a guaranteed decidable subset than have to reason things out myself and get frustrated when the compiler doesn't agree with me (or when it used to agree with me and doesn't any longer!).
Now, there's no reason that the type-level language can't be exactly a subset of the data-level language. I'd just rather not play guess-and-check decidability with the compiler the same way I currently have to play guess-and-check optimizability.
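For a concrete flavor of what "type-level language as a subset of the data-level language" can look like, here's a minimal TypeScript sketch (my own illustration, not tied to any particular proposal): the same tuple-concatenation operation written once as an ordinary runtime function and once purely in the type language.

```typescript
// Value-level: concatenate two tuples at runtime.
function concat<A extends unknown[], B extends unknown[]>(
  a: [...A],
  b: [...B]
): [...A, ...B] {
  return [...a, ...b];
}

// Type-level: the same operation, expressed entirely in types.
// This is a restricted sublanguage -- no mutation, no IO -- that
// the compiler evaluates at compile time.
type Concat<A extends unknown[], B extends unknown[]> = [...A, ...B];

// The runtime call and the type-level application agree:
const xs = concat([1, 2], ["a"]); // inferred as [number, number, string]
type Xs = Concat<[1, 2], ["a"]>;  // the tuple type [1, 2, "a"]
```

The guess-and-check problem shows up here too, though: TypeScript's type language permits recursion, and sufficiently deep recursive types are rejected with "Type instantiation is excessively deep and possibly infinite" (TS2589), so you end up probing the compiler's limits rather than working inside a syntactically guaranteed decidable subset.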
Oh, I totally agree. Unfortunately that's far too radical a notion for many, despite how unnecessary Turing-completeness is for 99.95% of programs written, and how desirable termination is. First you have to convince the Ruby/Python/JS/Elixir crowd of the frivolity of code mutability.
I think the author meant that you don't want two separate Turing-complete languages, one for values at runtime, and another for values at compile time.