Privacy and AI
◈ Last edit: June 3, 2021
◈ Privacy+AI Research, Presentations
◈ Privacy+AI Podcast Dataset

Unique data. Hand-curated.
Presentations on the Regulation of Artificial Intelligence
Data Summit Connect, May 12, 2021: The Regulation of AI & Algorithms (Video * Slides), with Jeff Jockisch of PrivacyPlan
Open Security Summit, May 12, 2021: AI ML European Regulation (Video * Slides), with Jeff Jockisch, Federico Marengo, and Adam Leon Smith
New presentation coming June 23rd: a PrivSec panel discussion on ‘What the EU AI Regulation means for businesses’
Machine Learning, Explainability, and the Regulation of AI
When is a hot dog not a hot dog? Why, when the Bayesian classifier finds enough points of differentiation to cross a probabilistic threshold, of course!
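To make the joke a bit more concrete, here is a minimal sketch (in Python, using scikit-learn's GaussianNB; the features and training examples are invented purely for illustration) of a classifier that only calls something a hot dog once its predicted probability crosses a chosen threshold, with no explanation attached to the answer:

```python
# Minimal sketch: a naive Bayes classifier that labels an item a "hot dog"
# only when the predicted probability crosses a threshold.
# All feature values and examples below are invented for illustration.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy features: [length_cm, redness, bun_present]
X = np.array([
    [15.0, 0.8, 1],   # hot dog
    [14.0, 0.9, 1],   # hot dog
    [16.0, 0.7, 1],   # hot dog
    [30.0, 0.1, 0],   # not a hot dog
    [ 5.0, 0.9, 0],   # not a hot dog
    [12.0, 0.2, 1],   # not a hot dog
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = hot dog, 0 = not

model = GaussianNB().fit(X, y)

THRESHOLD = 0.90  # the "probabilistic threshold" from the joke

candidate = np.array([[14.5, 0.85, 1]])
p_hot_dog = model.predict_proba(candidate)[0, 1]

# The label flips purely on which side of the threshold the probability lands;
# nothing in the output says *why* it landed there.
label = "hot dog" if p_hot_dog >= THRESHOLD else "not a hot dog"
print(f"P(hot dog) = {p_hot_dog:.3f} -> {label}")
```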
I have significant concerns that as we approach the regulation of AI and algorithms with the proposed EU AI Act and other forthcoming legislation, we may fail in our attempts to create the kind of transparency required for 1) ethical AI, 2) effective regulation, and 3) consumer trust-building.
This concern centers on the black box of machine learning. By some accounts, the more accurate the prediction, the less explainable the result. This black box problem means that the reasoning behind an AI system’s output will not be readily apparent to the system operators. It is unlikely to be apparent even to the developers. How, then, can it be apparent to regulators?
As a result, the only way to know whether most AI systems are creating disparate outcomes is to do extensive testing; not simply testing during development of the system, but testing as an ongoing cost of operation. Even then, it is unlikely that an advanced AI system in hiring or consumer credit, for instance, will ever really be able to point to a factor or two and say: “This is why you did not qualify.” ML output doesn’t currently come with that kind of factor analysis; the black box is mysterious, though in the longer term it doesn’t necessarily need to be.
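As a rough illustration of what that ongoing testing might look like, here is a sketch in Python. The group labels, decision counts, and the four-fifths cutoff are illustrative assumptions on my part, not anything prescribed by the EU AI Act; the point is simply that disparate outcomes show up in aggregate statistics over a system’s decisions, not in any single prediction:

```python
# Sketch of a recurring disparate-outcome check on a model's decisions.
# Group labels and decision data are invented; the 0.8 cutoff is the common
# "four-fifths" rule of thumb, not a legal standard.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs, where selected is 0 or 1."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        selected[group] += decision
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest-rate group's."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical batch of hiring-model decisions collected in production.
decisions = ([("group_a", 1)] * 48 + [("group_a", 0)] * 52
             + [("group_b", 1)] * 30 + [("group_b", 0)] * 70)

rates = selection_rates(decisions)
for group, ratio in disparate_impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, ratio {ratio:.2f} [{flag}]")
```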
Regulations like the EU AI Act propose that high-risk systems have at least two humans in the loop to review machine output. A human in the loop is an excellent precaution and will help in many ways. But can it help with explainability? I don’t think so. Humans looking at inscrutable results will not add much insight.
TL;DR: “Houston, we have a problem with explainability.”
270 Privacy Podcasts that Cover Artificial Intelligence
The dataset includes the following fields for each episode: Podcast Title, Host 1, Host 2, Episode Title, Pub Date, Length, Guest 1, Guest 2, and AI Notes.
You can view and download the full dataset of Privacy Podcasts that relate to AI at this Google Sheet location: http://bit.ly/PrivacyAICasts
Other Data Sets from PrivacyPlan
ForHumanity: AI Audits

If you have an interest in auditing AI systems, you should check out ForHumanity. They are designing third-party, independent audits for AI systems and developing certification programs for GDPR, Age Appropriate Design, and more.
ForHumanity: Mitigating risk from the perspective of Ethics, Bias, Privacy, Trust, and Cybersecurity in our autonomous systems (GDPR Cert * AI Audits)