It's worth analysing how much that server costs vs how much it costs to rearchitect the application. Those 64 cores are probably cheaper than 2 months of a senior engineer's salary.
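Back of the envelope (every number here is a made-up assumption, plug in your own prices):

    # All figures below are hypothetical, not quotes.
    server_cost = 15_000            # one-off price for a 64-core box
    engineer_month = 15_000         # fully loaded monthly cost of a senior engineer
    rewrite_months = 2
    print(server_cost, "vs", engineer_month * rewrite_months)   # 15000 vs 30000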
That's assuming it's actually even possible to use all that bare metal. I worked for a company that tried to optimize by buying multiple servers with 64-core processors - one for each customer - when the critical process couldn't use more than one core.
They were mostly kept afloat by using their patent portfolio as a weapon.
The first 64 cores, perhaps. But the 30th set? The 300th set? I recently worked for a company that stuck rigidly to the idea that it was better to buy more cores than to change the code. As a result, over half of their total costs went to their AWS bill.
IME this puts the problem off for six months (which might be enough!), but then the technical-debt interest bill comes calling.
If your algorithm is fundamentally shitty, you can scale it up by brute force for a while, but eventually it outstrips your ability to do so and you may need to apply actual competence to the problem - if you have anyone on hand who knows your systems.
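Rough sketch of the arithmetic (the input sizes are hypothetical; the point is how the gap grows): hardware buys you a constant factor, a better algorithm changes the curve.

    import math

    # Hypothetical input sizes; watch the ratio, not the absolute numbers.
    for n in (10**6, 10**7, 10**8):
        brute_force = n * n                  # O(n^2) operations
        sensible = n * math.log2(n)          # O(n log n) operations
        print(f"n={n:.0e}: brute force needs {brute_force / sensible:,.0f}x the work")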
(I'm a sysadmin. I have full confidence in my job existing for many decades to come. Because even in the future, nothing works.)
The problem is that you'll have to solve the same problem again soon, and finding something faster than SSDs to store your 1TB of data on isn't really feasible. On the other hand, if you'd just added an index to the column you're querying on...
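A throwaway sketch with SQLite to make the contrast concrete - the table, column, and row count are invented for illustration:

    import sqlite3, time

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
    db.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                   [(i % 10_000, i * 0.01) for i in range(500_000)])

    def timed_sum():
        start = time.perf_counter()
        db.execute("SELECT SUM(total) FROM orders WHERE customer_id = ?", (42,)).fetchone()
        return time.perf_counter() - start

    scan = timed_sum()                                            # full table scan
    db.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
    indexed = timed_sum()                                         # index lookup
    print(f"scan: {scan*1000:.1f}ms  indexed: {indexed*1000:.1f}ms")

Same data, same disk; the indexed query just stops reading most of it.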