Sure, but a trained human would probably pick up that something is not quite right, especially if the family situation has been revealed earlier.
My neighbor has an identical twin, and one day his twin came visiting, but I had no idea he was coming. He was standing in front of our intercom (which has been broken for years), looking bewildered, when I got home from work. Since I'd heard about my neighbor's identical twin before, it didn't take me long to piece together what was going on, even though I'd never met this guy and had no idea he would be there.
(I work in telephony, currently on voice recognition, voice ID, and related tech.) The current state of the voice ID tech we were offered to buy is dismal - it can reliably identify a person only if the pool of candidates is around 5K.
Pretty sad if you ask me, and - perhaps - it will never get much better, as it is tougher to tell a person by their voice than by their looks.
And in any case, remember, children: biometric data is a user ID, not a password!
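To make the "user ID, not password" point concrete, here is a minimal sketch of a login flow where a voice match only *identifies* the account and a separate secret still *authenticates* it. All names, scores, the similarity function, and the threshold are illustrative assumptions, not any real bank's system.

```python
# Sketch: biometrics identify, secrets authenticate.
# The similarity function is a toy stand-in for a real
# speaker-verification score in [0, 1].

def voice_similarity(sample, voiceprint):
    # Fraction of matching "features" between sample and enrollment.
    matches = sum(1 for a, b in zip(sample, voiceprint) if a == b)
    return matches / max(len(voiceprint), 1)

def login(sample, pin, enrolled, threshold=0.8):
    # Step 1: identification - pick the best-matching enrolled user.
    user = max(enrolled, key=lambda u: voice_similarity(sample, u["voiceprint"]))
    if voice_similarity(sample, user["voiceprint"]) < threshold:
        return None  # nobody plausibly matches this voice
    # Step 2: authentication - a secret an identical twin does not share.
    return user["name"] if pin == user["pin"] else None

users = [
    {"name": "alice", "voiceprint": [1, 3, 5, 7, 9], "pin": "4821"},
    {"name": "bob",   "voiceprint": [2, 4, 6, 8, 0], "pin": "9174"},
]
print(login([1, 3, 5, 7, 9], "4821", users))  # right voice + right PIN
print(login([1, 3, 5, 7, 9], "0000", users))  # right voice, wrong PIN
```

The point of the design is that a stolen or imitated voiceprint alone gets an attacker no further than knowing a username would.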
The failure here is not an AI failure, it is a security failure, and specifically a rush to deploy 'smart' technology without a sufficiently thorough consideration of the ways a sufficiently resourceful and determined attacker might exploit it. Unfortunately, I can say with some confidence (based on the repeatedly-demonstrated persistence of complacency) that we will see many more examples.
Apple claims the risk that a random person is recognized by Touch ID is 1 in 50,000. So while in theory every person has a different fingerprint, there are technical limitations.
If the HSBC voice ID has a similar failure rate for random voices, then is it just as secure?
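One way to reason about that figure: a 1-in-50,000 false-accept rate erodes quickly under repeated attempts. A rough sketch, assuming independent tries (real systems add lockouts precisely because of this):

```python
# Back-of-the-envelope: chance that at least one of n random
# attempts is falsely accepted, given a per-attempt rate.
# Independence between attempts is an assumption.

def p_false_accept(rate, attempts):
    return 1 - (1 - rate) ** attempts

rate = 1 / 50_000  # Apple's published Touch ID figure for a random finger
for n in (1, 5, 100):
    print(f"{n:>3} attempts: {p_false_accept(rate, n):.6f}")
```

Note this only covers *random* impostors; a twin's voice or a sibling's fingerprint is far more similar than random, so the effective rate against them is much worse than the headline number.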