Except for government systems doing the same. In the Netherlands we had the benefits affair (the toeslagenaffaire): a system that attempted to predict people committing benefits fraud. It destroyed the lives of more than 25,000 people before anyone stepped in.
Do you think they are going to fine their own initiatives out of existence? I don't think so.
However, they also have a completely extrajudicial approach to fighting organised crime, guaranteed to be using AI approaches on the banned list. But you won't get any freedom of information request granted to investigate anything like that.
For example, any kind of investigation would often involve knowing which person filled a particular role. They won't grant such requests, claiming that because it involves a person, it's personal data. They won't tell you.
Let's have a few more new laws to protect the citizens, please, not SLAPP handles for governments.
Article 59 seems relevant, other two on a quick skim don't seem to relate to the subject.
> 2. For the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security, under the control and responsibility of law enforcement authorities, the processing of personal data in AI regulatory sandboxes shall be based on a specific Union or national law and subject to the same cumulative conditions as referred to in paragraph 1.
EU law largely does not regulate the national security matters of member states, though that justification is limited by international law (such as the ECHR, which is the basis of many anti-government-surveillance rulings by the CJEU). All European Union members are party to the ECHR because that is a prerequisite for EU membership.
But the ECHR is not part of EU law; in particular, it is not binding on the European Commission (in its capacity as a federal or quasi-federal political executive). This creates a catch-22 where member states might be violating the ECHR while being mandated to do so by EU law, though this is a very fringe consequence arising out of legal fiction and failed plans to federalize the EU. Most recently, this legal fiction has become relevant in the Chat Control discourse.
Great Britain and Poland have explicit opt-outs from some European law.
> Article 59 seems relevant, other two on a quick skim don't seem to relate to the subject.
Your original take: "Should have been: AI that attempts to predict people committing crimes"
Recital 42, literally:
--- start quote ---
In line with the presumption of innocence, natural persons in the Union should always be judged on their actual behaviour. Natural persons should never be judged on AI-predicted behaviour based solely on their profiling, personality traits or characteristics, such as nationality, place of birth, place of residence, number of children, level of debt or type of car, without a reasonable suspicion of that person being involved in a criminal activity based on objective verifiable facts and without human assessment thereof.
Therefore, risk assessments carried out with regard to natural persons in order to assess the likelihood of their offending or to predict the occurrence of an actual or potential criminal offence based solely on profiling them or on assessing their personality traits and characteristics should be prohibited.
In any case, that prohibition does not refer to or touch upon risk analytics that are not based on the profiling of individuals or on the personality traits and characteristics of individuals, such as AI systems using risk analytics to assess the likelihood of financial fraud by undertakings on the basis of suspicious transactions or risk analytic tools to predict the likelihood of the localisation of narcotics or illicit goods by customs authorities, for example on the basis of known trafficking routes.
--- end quote ---
> Seems like it allows pretty easily for national states to add in laws that allow them to skirt around
Key missed point: "subject to the same cumulative conditions as referred to in paragraph 1."
Where paragraph 1 is "In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: ... list of conditions ..."
-----
In before "but governments can do whatever they want". Yes, they can, and they will. Does it mean we need to stop any and all legislation and regulation because "government will do what government will do"?
I think the EU has done better following its own rules than most other countries (not that it's perfect in any way).
Should have been
> AI that attempts to predict people committing crimes