Faster at 32-bit math, but you don't always do 32-bit math, and people in particular didn't do a lot of 32-bit math back then. In fact, 32-bit math is where the 6502 goes to die, because it has nowhere near enough registers.
In terms of real experienced performance in the applications people ran at the time, the 68k was a disappointment.
In terms of real experienced performance: I had both an Apple II+ and an Amiga and you couldn't be more wrong if you tried.
And we didn't actually do much 32-bit math on the 68K except for address calculations; we mostly did 16-bit math. In fact, only the address ALU was 32-bit (two 16-bit ALUs working together, to be precise), and 32-bit ops on data registers had to go through the 16-bit data ALU twice... which means that if you had to do 32-bit arithmetic, you could win some perf if you could express it as address calculations (LEA, I am looking at you!).
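To make the LEA trick concrete, here's a minimal sketch in 68000 assembly; the registers, labels, and the +4 constant are purely illustrative, and the cycle counts are from memory, so check the user's manual:

    ; base + index + 4 through the data ALU (each .L op is two passes through the 16-bit ALU)
        MOVE.L  D1,D2           ; copy the 32-bit base            ~4 cycles
        ADD.L   D0,D2           ; add the 32-bit index            ~8 cycles
        ADDQ.L  #4,D2           ; add the small constant          ~8 cycles

    ; the same sum folded into one address calculation on the 32-bit address ALU
        MOVEA.L D1,A0           ; base into an address register   ~4 cycles
        LEA     4(A0,D0.L),A1   ; A1 = A0 + D0 + 4                ~12 cycles

And if the base was already sitting in an address register, as it usually was, the LEA alone did the whole sum.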
And of course 16-bit math was common enough that the Apple II included a virtual 16-bit machine (Woz's SWEET16) in its ROMs, for code-density purposes, but with an obvious further speed hit [1]
But even comparing 16-bit instructions 1:1 on a 7 MHz 68K (Amiga, Mac) with 8-bit instructions on a 1 MHz Apple II, the 68K is faster, and a 1:1 comparison doesn't really make sense anyway because the 68K has so many more registers, has more powerful addressing modes, and of course does more work per instruction, and and and.
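One hedged example of the "more work per instruction" point (a sketch, not benchmark code; the register choices and labels are mine):

    ; copy D0+1 16-bit words, one instruction per word, both pointers auto-incrementing
    copy:   MOVE.W  (A0)+,(A1)+  ; load a word, store it, bump both pointers
            DBRA    D0,copy      ; decrement the counter and loop

    ; on a 6502 that inner step is two 8-bit loads and stores through zero-page
    ; pointers, plus the index/pointer bookkeeping around them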
So I'm not really sure where this idea of the slow 68K came from.
Maybe it is due to Wirth's law: "Software gets slower more quickly than hardware gets faster". The 68K was so much faster that people were much more ambitious in what they tried.
I don't recall anyone who actually put it into a system regretting the choice. Unlike the 6502, the 68K also had a viable path forward, which only really ended when all the workstation vendors + Apple decided to jump to RISC.