So regular CDK is basically a program-driven CloudFormation (CFN) template generator.
Pulumi has a similar model where you build a resource graph at runtime, BUT it also has the execution engine built into the tool.
What this means in practice is that you can create resources (like a Kubernetes cluster) and then use them as providers (e.g. provision state-tracked resources via the Kubernetes API), all in the same operation.
You can also (in your infra code or in an importable module) define "dynamic providers", meaning you can easily extend the execution engine to do custom things related to your workload.
As an example, imagine you want to create a cluster, deploy an app, then provision some state-tracked app resources, like an admin user and group, via the app's REST API. You can do that without too much fuss.
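Roughly, a dynamic provider is just an object with create/delete (and optionally diff/update) hooks that the engine calls during `pulumi up`. Here's a minimal TypeScript sketch of that admin-user idea; the REST endpoints and payload shapes are made up for illustration, but the `pulumi.dynamic` API is the real one:

```typescript
import * as pulumi from "@pulumi/pulumi";

// Inputs for our hypothetical app's REST API (URL and payload shape are assumptions).
interface AdminUserInputs {
    apiUrl: pulumi.Input<string>;
    username: pulumi.Input<string>;
}

// A dynamic provider teaches the Pulumi engine how to manage a resource type
// it doesn't know about -- here, an admin user inside the app we just deployed.
const adminUserProvider: pulumi.dynamic.ResourceProvider = {
    async create(inputs: any) {
        const res = await fetch(`${inputs.apiUrl}/users`, {
            method: "POST",
            headers: { "content-type": "application/json" },
            body: JSON.stringify({ name: inputs.username, role: "admin" }),
        });
        const body = await res.json();
        // The id and outputs returned here are what Pulumi tracks in its state.
        return { id: body.id, outs: { ...inputs } };
    },
    async delete(id: string, props: any) {
        await fetch(`${props.apiUrl}/users/${id}`, { method: "DELETE" });
    },
};

// A state-tracked custom resource backed by the provider above.
class AdminUser extends pulumi.dynamic.Resource {
    constructor(name: string, args: AdminUserInputs, opts?: pulumi.CustomResourceOptions) {
        super(adminUserProvider, name, args, opts);
    }
}

// Usage (apiUrl would come from the cluster/app resources created earlier in
// the same program, so this naturally runs after the app is reachable):
// const admin = new AdminUser("admin", { apiUrl: appUrl, username: "admin" });
```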
Neither Terraform nor CDK can really do those things very well. Terraform isn't powerful enough language-wise, and in CDK the execution phase is locked away from you (it happens inside CloudFormation).
You wouldn't see much _practical_ difference between CDK and Pulumi (or Terraform!) for that use case. The workflow would feel almost identical.
Under the hood, though, different things are happening.
Your CDK program would run, declaring all of the resources, and generate CloudFormation template(s) that get submitted to the CloudFormation service for evaluation. The CloudFormation service (running inside AWS) is the "execution engine": it is responsible for creating your resources and tracking their state.
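For concreteness, here's a minimal CDK v2 app in TypeScript (the Lambda/API Gateway combo is just an assumed example workload). Running `cdk synth` on this only emits a CloudFormation template; `cdk deploy` uploads it, and CloudFormation does the actual provisioning:

```typescript
import * as cdk from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as apigateway from "aws-cdk-lib/aws-apigateway";

// Nothing in this program talks to the AWS resource APIs directly;
// it only describes resources that end up in the synthesized template.
const app = new cdk.App();
const stack = new cdk.Stack(app, "ApiStack");

const handler = new lambda.Function(stack, "Handler", {
    runtime: lambda.Runtime.NODEJS_18_X,
    handler: "index.handler",
    code: lambda.Code.fromInline(
        "exports.handler = async () => ({ statusCode: 200, body: 'ok' });"
    ),
});

// Fronts the function with an API Gateway REST API.
new apigateway.LambdaRestApi(stack, "Api", { handler });
```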
Pulumi would run your code, build an object graph, then do the "execution" itself: the CLI would invoke all the required AWS API calls to create your resources (the API Gateway, the Lambda, etc.) from wherever it's running. The CLI would also write out all the resource state to a state backend.
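The Pulumi version of the same thing reads very similarly, which is why the day-to-day workflow feels identical; the difference is what happens on `pulumi up`, when the CLI/engine diffs your declared graph against the stored state and makes the AWS API calls itself. A rough sketch (resource names are just examples):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// Declaring resources looks much like CDK, but `pulumi up` (not a hosted
// service) is what actually calls the AWS APIs to create them.
const role = new aws.iam.Role("handlerRole", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Action: "sts:AssumeRole",
            Effect: "Allow",
            Principal: { Service: "lambda.amazonaws.com" },
        }],
    }),
});

const handler = new aws.lambda.Function("handler", {
    runtime: "nodejs18.x",
    role: role.arn,
    handler: "index.handler",
    code: new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.StringAsset(
            "exports.handler = async () => ({ statusCode: 200, body: 'ok' });"
        ),
    }),
});

// Resource IDs, outputs and the dependency graph get written to whichever
// state backend you're logged into (Pulumi Cloud, S3, a local file, ...).
export const functionName = handler.name;
```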
The tradeoffs are in line with what you might expect. The Pulumi approach is more powerful, but you "own" more of the complexity since it lives on your side of the responsibility boundary.
Some people prefer AWS to be the execution engine; they feel it's more reliable to let AWS provision resources and keep track of state, and they like that AWS is responsible for the engine and will fix its bugs.
Others prefer the increased control of "owning" the execution engine: being able to debug it, or extend it with third-party or custom providers that let you provision new resource types. They're happy that they don't need to wait for AWS to fix things; they can do it themselves if they have to.
This is not the only difference between the two tools, but it is one of the most fundamental ones.
I guess it depends on what you mean by "under the hood". As far as I know it doesn't use Terraform at runtime, but many of its providers (and their language SDKs) are generated from the corresponding Terraform providers. It has a lot of interoperability tooling as well, such as the "Terraform bridge" that does that generation and a tool that converts existing Terraform projects.