AI and EU Regulation – June 26th, 2023

There is a general theme and concern running through discussions about AI, and it comes back to how these programs take in vast swaths of data to generate their output. Obviously, there is some extremely innovative and complex programming behind them, but it's the data source that is drawing the most scrutiny.

I had the pleasure of hosting the Ethics Emporium Symposium for CPA Ontario. One of the speakers, Brittany Kaiser, spoke about Cambridge Analytica and her role as the whistleblower in that scandal. In that case, individuals' personal information was used by Cambridge Analytica for purposes the original owners never intended.

This was Facebook and other social media outlets packaging data and being pretty loose about how it was passed along to third parties. It opened the door to a conversation about where the source information for data analysis, deep learning, machine learning, generative AI, and so on actually comes from. That question is really at the heart of AI regulation.

AI regulation is not necessarily meant to curtail AI, including generative AI development and innovation; that door has already been opened, and it can't be closed now. The question is: can we build a perimeter around what AI can do? That's what the EU is starting to pass into law.

The EU is in the first stages of passing a law to regulate AI. This is complicated because there is a fine line between regulating AI and leaving it room to innovate; the regulation shouldn't cut off the creativity and potential uses of the technology. One of the biggest things the regulation tries to do is ask: Where are you getting the data? How are you being supervised? Do you have the rights to that source data?