Did you blow the whistle on this abuse? If this was also in the UK then it looks like exactly the sort of serious breach of protocol that the ICO should investigate. If it involved sensitive personal data about lots of people then it might realistically have enough weight for that to actually happen too.
If you are really serious about privacy, become a CIPP and do contracting work in the field. Be the change, etc. (https://iapp.org/train/)
There isn't a whistle to blow because, when you look at the issue closely, it is really about exploitation of circumstances within the law and established processes, not illegal activity. Bureaucracy works by diffusing accountability so that no individual can be held responsible for a failure or misrepresentation; that is really the art of it. Just because people are ideological or untrustworthy doesn't mean they are doing something illegal.
Instead, I can participate in public discourse about privacy with other technologists and share insights and tools here, while effecting constructive change where I can.
The misrepresentations that occurred in the process were structured to be deniable and to transfer risk to people who could not reason about them, or who could plausibly deny understanding them. This is just how bureaucracy works, and when you look at what people actually blow whistles on, it is much more black-and-white and cut-and-dried than an abuse of process that takes some depth to understand. Success, for me, was getting an executive board to sign a risk statement accepting the risks our security project identified, so that a paper trail of this huge decision was available via access to information laws.
Further, technocratic people tend to hate privacy because it is a limit on their power, so the channels for "blowing a whistle" are really limited to Snowden-level events, and even then, stories that tarnish institutions themselves tend to get spiked by editors. It's rarely worthwhile, when instead you can articulate the dynamics that cause this stuff and make criticism of shitty practices part of the discourse.
The whistleblower use case on this issue lies in the hypotheticals that result from the downstream incentives:

- researchers start using the data sets to look up colleagues;
- health data ends up in a hacked data dump published by some APT group;
- queries like when a married colleague or politically exposed person got their last STD test;
- whether an applicant for a role was vaccine hesitant, inferred from dates;
- whether someone was exposed during a public health incident;
- data attributes being re-used to support a national biometric internal passport system;
- an insurer or "alternative data provider" acquires family-based cancer predictors;
- community health information is passed to NGOs and activist groups as leverage for their policy agendas;
- activists produce an online map of unvaccinated and hesitant people's houses, the way they have for gun owners and political donors;
- the use of personal health information is normalized for policy and governance beyond public health services;
- billing data for mental health services ends up in a policing intelligence database or is shared with border guards.

None of these are known to have happened (yet), but they would be the trigger for scrutiny.
In tech, issues get pushed "down" to us to resolve in ways that may be unethical or illegal, but "good" developers don't ask questions. Compared to what's done at the platform and ad-tech companies, this issue is small potatoes.