Data Privacy in AI

How to protect sensitive data when leveraging AI services in the cloud.

(Plus an example of one solution we built to navigate all of that: SantaAI.)

Santa asked BRIO’s team to build a tool to create his naughty-or-nice list, and the team knew that, given his global audience, Santa’s AI had some data privacy issues to consider.

Every year, Santa has to read through hundreds of millions of Child Behavioral Reports in order to compose his naughty and nice list… and then he has to check it twice! He’s heard a lot of buzz in the North Pole about AI, and he’s wondering if he can leverage it to classify who’s been naughty and who’s been nice.

Using Data De-Identification for Data Privacy Compliance

De-identifying data before using it with a generative AI model can reduce risk, and de-identified data generally falls outside the scope of privacy laws. Methods include deletion, generalization, encryption, data masking, and pseudonymization.
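As a minimal sketch of what this can look like in practice, the snippet below applies several of the methods named above (pseudonymization, generalization, masking, and deletion) to a toy Child Behavioral Report before it would be sent to an AI model. The field names, the `SECRET_KEY`, and the sample record are all illustrative assumptions, not the actual SantaAI implementation.

```python
import hashlib
import hmac

# Hypothetical key for keyed pseudonyms; in practice, load from a secrets manager.
SECRET_KEY = b"north-pole-secret"

def pseudonymize(name: str) -> str:
    """Replace a name with a stable, keyed pseudonym (HMAC-SHA256)."""
    digest = hmac.new(SECRET_KEY, name.encode(), hashlib.sha256).hexdigest()
    return f"child-{digest[:8]}"

def generalize_age(age: int) -> str:
    """Generalize an exact age into a 5-year bucket."""
    lower = (age // 5) * 5
    return f"{lower}-{lower + 4}"

def mask_city(city: str) -> str:
    """Mask all but the first character of a city name."""
    return city[0] + "*" * (len(city) - 1) if city else city

def de_identify(report: dict) -> dict:
    """De-identify a report before sending it to a generative AI model."""
    return {
        "child_id": pseudonymize(report["name"]),    # pseudonymization
        "age_range": generalize_age(report["age"]),  # generalization
        "city": mask_city(report["city"]),           # data masking
        "behavior_notes": report["behavior_notes"],  # non-identifying payload
        # "name" and "address" are dropped entirely (deletion)
    }

report = {
    "name": "Alice Example",
    "age": 8,
    "city": "Philadelphia",
    "address": "123 Elm St",
    "behavior_notes": "Shared toys; helped a sibling with homework.",
}
print(de_identify(report))
```

Because the pseudonym is keyed and deterministic, the same child maps to the same `child_id` across reports, so the AI's naughty-or-nice classifications can still be linked back to real children by whoever holds the key.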

Learn more about how we incorporated privacy controls into the latest AI models in our presentation on Data Privacy in AI. The sample data and an example implementation can be found in our DataPrivacyInAI GitHub repository.

And don’t forget… check out SantaAI to see if you or your children have been naughty or nice this year!

This presentation is from a Philadelphia Cloud Technologies Users Group meetup event. Join the meetup group to participate in future events.

Want to learn more about this topic? Let's talk. Say hello!
