Understanding data privacy and its critical - but often overlooked - role in AI (Part 1/2)

About the episode

Every day a new AI solution is touted. But how secure is the technology? And how quickly could your data, your patients' data, and other critical information become compromised?
Patricia Thaine, CEO & Co-founder of Private AI, joins Vinod Subramanian to dive deep into her pioneering work building privacy into technology that doesn't yet have established standards or regulations. This two-part conversation covers the four pillars of privacy, privacy-first product design, how healthcare professionals can enhance the care they provide, the global AI landscape, and much more.

Skip ahead

[00:45] Introductions
[07:08] The potential for data privacy and compliance as the first tenet in the development of AI solutions
[09:35] Diving into developing a privacy layer for AI
[11:44] Distinguishing between PII and PHI in healthcare data privacy
[13:56] When it comes to data privacy, is it better to trust humans or technology?
[15:40] Pioneering, testing, and building proof points all while driving adoption
[18:50] How data privacy and responsible AI can evolve together
[20:50] The four pillars of AI privacy
[23:51] Defining input data privacy in the era of Generative AI
[25:45] Ensuring encryption during solution development

Stay tuned for part 2 of the conversation between Patricia Thaine and Vinod Subramanian!