Keeping AI privacy front and center to create better patient experiences, products, and standards (Part 2/2)

About the episode

Every day a new AI solution is touted. But how secure is the technology? And how quickly could your data, your patients' data, and other critical information become compromised?
Patricia Thaine, CEO & Co-founder of Private AI, joins Vinod Subramanian to dive deep into her pioneering work to create privacy for tech that doesn’t yet have standards and regulations. This two-part conversation covers the four pillars of privacy, privacy-first product design, how healthcare professionals will be able to enhance the care they provide, the global AI landscape, and so much more.

Skip ahead

[00:30] What industries are moving quickly to apply AI while keeping privacy front and center?
[02:00] Developing a privacy-first mentality
[04:22] Privacy engineering: is this a real thing?
[08:45] Thinking about privacy as a matter of anti-bias, not just an aspect of security
[09:43] Defining the right requirements for product development with a privacy-first mentality
[13:14] Diving into the pros and cons of synthetic data
[16:08] In the world of generative AI, what should the foundational process for privacy be? What are some examples of when privacy wasn’t considered?
[19:57] Looking to the future, will we all be able to select our level of privacy when using AI?
[22:03] Understanding the global view of AI privacy standards and AI utilization
[28:00] Using responsible AI in healthcare to develop algorithms that can help enhance care
[29:27] Will the role of healthcare professionals change as AI algorithm tech becomes more privacy-aware?
[32:25] Advice to product leaders in the world of AI