There are similarities and differences. The tool I wrote obviously doesn't call out to external services; it runs everything it needs locally. I also didn't use the Envoy xDS APIs, instead opting to statically generate a config file (though with the envoyproxy/go-control-plane library, because I do plan on implementing xDS at some point in the future).
What I have is as follows. Every app in our repository is in its own directory. Every app gets a config file that says how to run each binary the app is composed of (we use grpc-web, so there's usually a webpack-dev-server frontend and a Go backend). Each binary names the ports it wants, and what the Envoy route table should look like to get traffic from the main server to those ports. The directory config also declares dependencies on other directories.
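For concreteness, here's roughly the shape such a per-directory config might decode into in Go. The package and field names are invented for illustration; this isn't the tool's actual format:

```go
// Package devrunner is a hypothetical name for the tool described here.
package devrunner

// AppConfig is one directory's config: how to run each binary the app
// is composed of, plus dependencies on other app directories.
type AppConfig struct {
	Binaries []Binary `json:"binaries"`
	Deps     []string `json:"deps"` // other directories this app depends on
}

// Binary describes one process: its command line, the named ports it
// wants allocated, and the Envoy routes that feed traffic into them.
type Binary struct {
	Name   string   `json:"name"`
	Run    []string `json:"run"`    // argv to start the binary
	Ports  []string `json:"ports"`  // named ports to allocate
	Routes []Route  `json:"routes"` // entries for the global route table
}

// Route maps a path prefix on the main server to one of the named ports.
type Route struct {
	PathPrefix string `json:"path_prefix"` // e.g. "/" or "/api/"
	Port       string `json:"port"`        // which named port receives the traffic
}
```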
We then find free ports for each port declared in a config file, allocating one for the service to listen on (only Envoy talks to it on this port) and one for other services to use when talking to that service. The service listening ports become environment variables named like $PORTNAME_PORT, bound only for that app. The Envoy listener address becomes $APPNAME_PORTNAME_ADDRESS, for other services to use.
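A minimal sketch of that allocation step, assuming the usual trick of letting the kernel pick a port by listening on :0. The helper names and exact plumbing are illustrative, not the tool's actual code:

```go
package devrunner

import (
	"fmt"
	"net"
	"strings"
)

// freePort asks the kernel for an unused TCP port by listening on :0,
// then closes the listener. There's a small window before the service
// rebinds the port, which is acceptable for local development.
func freePort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

// portEnv allocates both ports for one declared port name. The first
// entry ($PORTNAME_PORT) goes only into that app's own environment,
// since only Envoy dials it; the second ($APPNAME_PORTNAME_ADDRESS)
// points at the Envoy listener and is handed to every other service.
func portEnv(app, name string) (own, shared string, err error) {
	backend, err := freePort() // the service itself listens here
	if err != nil {
		return "", "", err
	}
	listener, err := freePort() // Envoy listens here and proxies to backend
	if err != nil {
		return "", "", err
	}
	own = fmt.Sprintf("%s_PORT=%d", strings.ToUpper(name), backend)
	shared = fmt.Sprintf("%s_%s_ADDRESS=127.0.0.1:%d",
		strings.ToUpper(app), strings.ToUpper(name), listener)
	return own, shared, nil
}
```

The own entry would get appended to that one binary's exec.Cmd.Env; the shared one goes into everyone's.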
Once Envoy has started, we start each app. The order they start in no longer matters, because any gRPC clients the apps create can just start talking to Envoy without caring whether the other apps are ready yet. And, because each app can contribute routes to a global route table, you can visit the whole thing in your browser and every request goes to the right backend.
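To make that concrete: grpc.Dial doesn't block by default, so a client can dial the Envoy listener before the target app exists, and RPCs simply wait for the backend to come up. A sketch, assuming the env-var convention above (insecure credentials, since everything is on loopback):

```go
package main

import (
	"log"
	"os"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// MYAPP_GRPC_ADDRESS is a hypothetical Envoy listener address for
	// another app's gRPC port, injected by the runner per the convention above.
	addr := os.Getenv("MYAPP_GRPC_ADDRESS")

	// Dial returns immediately; Envoy holds the route, so it doesn't
	// matter whether the app behind it has started yet.
	conn, err := grpc.Dial(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// ... create generated client stubs against conn as usual.
}
```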
I used Envoy instead of just pointing the apps at each other directly with FailFast turned off because I needed the ability to send / to a webpack frontend and /api/ through a grpc-web-to-gRPC transcoder, and would have used Envoy for that anyway. This strategy makes it feel like you're just running a big monolith, while getting everything you'd expect from microservices: retries via Envoy, statistics for every edge on the service mesh, and so on. And it's fast, unlike rebuilding all your containers and pushing to minikube.
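Something like this, sketched with go-control-plane's v3 route types. The cluster names are made up, and the grpc-web and transcoding HTTP filters that sit in front of the gRPC cluster are omitted:

```go
package main

import (
	"fmt"
	"log"

	route "github.com/envoyproxy/go-control-plane/envoy/config/route/v3"
	"google.golang.org/protobuf/encoding/protojson"
)

// buildRouteConfig sketches the merged route table: /api/ to the app's
// gRPC backend, everything else to webpack-dev-server. Envoy matches
// routes in order, so the more specific prefix comes first.
func buildRouteConfig() *route.RouteConfiguration {
	return &route.RouteConfiguration{
		Name: "local_routes",
		VirtualHosts: []*route.VirtualHost{{
			Name:    "local",
			Domains: []string{"*"},
			Routes: []*route.Route{
				{
					// gRPC-web calls from the browser.
					Match: &route.RouteMatch{
						PathSpecifier: &route.RouteMatch_Prefix{Prefix: "/api/"},
					},
					Action: &route.Route_Route{Route: &route.RouteAction{
						ClusterSpecifier: &route.RouteAction_Cluster{Cluster: "myapp_grpc"},
					}},
				},
				{
					// Everything else: the webpack frontend.
					Match: &route.RouteMatch{
						PathSpecifier: &route.RouteMatch_Prefix{Prefix: "/"},
					},
					Action: &route.Route_Route{Route: &route.RouteAction{
						ClusterSpecifier: &route.RouteAction_Cluster{Cluster: "myapp_webpack"},
					}},
				},
			},
		}},
	}
}

func main() {
	// Dump as JSON; a static Envoy config embeds this under route_config
	// in the HTTP connection manager.
	b, err := protojson.Marshal(buildRouteConfig())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(b))
}
```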
It kind of solves the same problems as docker-compose, but without using Docker.