> Peter's AI policy work has mostly been on setting sound policies around high-stakes machine learning applications such as recidivism prediction, self-driving vehicles, cybersecurity, and military uses of AI. He also has an interest in measuring progress in the field as a whole. His technical projects have included SafeLife, a benchmark environment for reinforcement learning safety; studying the need for and role of uncertainty in the ethical objectives of powerful optimising systems; and evaluating calibration and overconfidence in large language models.
What utterly valuable work. I did not know of his existence until now, but I remember when I first used LetsEncrypt to get a cert for my website. It was so much easier than it had been before, and it was free.
And as I have thought about so much lately, compassionate, sound policy for the technology we create is so often lacking in our work. https://pde.is/posts/docs/Report-on-Algorithmic-Risk-Assessm...
I am sorry not to have known of him while he was here, and I am grateful for his work.