I think the Legg & Hutter definition, "Intelligence measures an agent's ability to achieve goals in a wide range of environments" [1], is quite relevant here: once we consider not a technique for a specific task but a single system adapted to a variety of tasks, one which (and this is the key part!) we also expect to perform well on a new task handed to it, then we really are trying to build systems that are explicitly "more intelligent" by this definition.
Yeah, the working definition I use, which is a simplification of the academic definition, is "second-order solutions/algorithms": rather than a program that encodes a solution to one problem given some set of arguments, you have a program that takes a problem definition as input and produces a process that can solve many instances of that problem.
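To make that concrete, here's a minimal sketch in Python. The names (`make_sat_solver`, the lambda-as-formula encoding) are illustrative choices on my part, not anyone's actual API; the point is just the shape: the outer function takes the problem definition, and what it returns is a process that handles arbitrary instances of that problem.

```python
from itertools import product

def make_sat_solver(variables):
    """Second-order program: takes a problem definition (the variable
    set) and returns a process that solves many instances (formulas)."""
    def solve(formula):
        # Brute-force search over all 2^n truth assignments.
        for values in product([False, True], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if formula(assignment):
                return assignment
        return None  # unsatisfiable
    return solve

# One solver process, many problem instances:
solve = make_sat_solver(["a", "b", "c"])
print(solve(lambda v: v["a"] and not v["b"]))  # a satisfying assignment
print(solve(lambda v: v["a"] and not v["a"]))  # None: contradiction
```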
That ends up being very different from the "AI is anything a computer can't do yet" pop definition, because it covers everything from SAT solvers, propositional logic engines, and Bayesian network analyzers to the more bleeding-edge deep learning stuff.
[1] Shane Legg and Marcus Hutter, "A Collection of Definitions of Intelligence" (2007). https://arxiv.org/abs/0706.3639