As high-performance computing and communications technology progresses, a wider range of artificial intelligence applications shows promise. Using collected data, models trained in the cloud and inference models deployed on edge devices are making machines smarter and better suited to different environments and uses. Examples include autonomous vehicles, drones, factory automation, robotic surgery, AI-assisted medical diagnostics, and many more.
These innovations should make our lives much simpler. However, if such systems are compromised by adversaries, all of these advantages instead become risks to our safety. Without security, AI will be a disaster.
In this presentation, I will describe some of the risks of deploying AI without security and present the requirements for secure AI. We will examine threat models and remedies. Most importantly, I will demonstrate how our innovative PUF (Physically Unclonable Function) technology can be used as a chip fingerprint to serve as a root of trust for AI security applications.
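To illustrate the root-of-trust idea, here is a minimal, hypothetical sketch of PUF-based challenge-response authentication. A real PUF derives each response from uncloneable physical variation in the silicon; in this simulation a per-device secret merely stands in for that variation, and all class and function names are illustrative, not part of any actual product or API.

```python
import hashlib
import secrets

class SimulatedPUF:
    """Stand-in for a hardware PUF (illustration only)."""
    def __init__(self):
        # On a real chip this value is not stored anywhere; it is
        # intrinsic to the device's physical variation and cannot
        # be read out or cloned.
        self._intrinsic = secrets.token_bytes(32)

    def response(self, challenge: bytes) -> bytes:
        # The device maps each challenge to a device-unique response.
        return hashlib.sha256(self._intrinsic + challenge).digest()

def enroll(puf: SimulatedPUF, n: int = 4):
    """In a trusted setting, the server records challenge-response pairs."""
    return [(c, puf.response(c)) for c in
            (secrets.token_bytes(16) for _ in range(n))]

def authenticate(puf: SimulatedPUF, crps: list) -> bool:
    """Later, the server spends one unused pair to verify the device."""
    challenge, expected = crps.pop()
    return puf.response(challenge) == expected

device = SimulatedPUF()
store = enroll(device)
print(authenticate(device, store))   # the genuine device passes

clone = SimulatedPUF()               # different physical variation
print(authenticate(clone, store))    # a cloned device fails
```

Because each challenge-response pair is used only once and the responses cannot be predicted without the device itself, this pattern lets an edge AI device prove its identity without ever storing a key in memory, which is the property that makes a PUF attractive as a hardware root of trust.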