So, for some companies, failing over between providers is actually viable and planned for in advance. But it is understood up front that it is time-consuming and requires human effort.
The other case is soft failures for multi-region companies. We degrade gracefully, but once that happens, the question becomes what else you can bring back online. For example, this outage did not impact our infrastructure in GCP Frankfurt, but it did prevent internal traffic in GCP from reaching AWS in Virginia, because we peer with GCP there. We also couldn't reach the Google Cloud API to fall back to a VPN over the public internet. In other cases, you might realize that your failover works but your timeouts are tuned poorly for the specific circumstances, or that disabling a single feature brings the rest of the product back online.
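As a rough illustration of that last point, here is a minimal sketch of degrading gracefully around a cross-provider dependency: a tight timeout plus a kill switch so the rest of the product keeps working when the peering link is down. The service URL, the `RECOMMENDATIONS_ENABLED` flag, and the function names are all hypothetical, not anything from the actual setup described above.

```python
import requests

# Hypothetical kill switch: flip this off during an outage so pages render
# without the cross-cloud feature instead of waiting on a dead peering link.
RECOMMENDATIONS_ENABLED = True


def fetch_recommendations(user_id: str) -> list[str]:
    """Call a service that lives on the other provider, with a tight timeout."""
    resp = requests.get(
        f"https://recs.internal.example.com/v1/users/{user_id}",
        # Half a second. A default timeout of tens of seconds would stall
        # every request in the product while the cross-provider link is down.
        timeout=0.5,
    )
    resp.raise_for_status()
    return resp.json()["items"]


def render_home_page(user_id: str) -> dict:
    recs: list[str] = []
    if RECOMMENDATIONS_ENABLED:
        try:
            recs = fetch_recommendations(user_id)
        except requests.RequestException:
            # Degrade gracefully: the rest of the page still works without recs.
            pass
    return {"user": user_id, "recommendations": recs}
```

Whether the flag is flipped by a human or by an automated circuit breaker is a separate decision; the point is that the option has to exist, and be tested, before the outage.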
Additionally, you have people on standby to get everything back in order as soon as possible once the provider recovers. You may also need to bring more of your support team online to handle the increased volume of support calls during the outage.
If you try to be heroic, you still end up back at 100%, just with a bunch of wasted effort and stress on your part, because the outage will be fixed by Google regardless of what you do or don't do.
The time to consider alternatives is after the incident is over.