Why Confidential AI is on the rise: Intel’s Anand Pashupathy explains

Updated on 02-May-2024

If 2023 was all about being captivated by the rise of Generative AI left, right and centre, then 2024 and beyond is most definitely about securing AI, from the chip to software applications and beyond. In today’s AI-dominated tech landscape, maintaining the confidentiality and privacy of data has become a paramount concern, with digital threats looming as prominently as the innovations themselves. Against this backdrop, I unpacked the concept of Confidential AI with Anand Pashupathy, a key figure at Intel driving secure AI workloads and applications. Our discussion highlighted an emerging paradigm that promises to reshape how we manage and secure data in the age of ubiquitous AI.

What is Confidential AI?

Anand Pashupathy, Vice President & General Manager, Security Software & Services Division at Intel, begins by explaining the inception and necessity of Confidential AI, a term that may still sound like jargon even to seasoned ears. “Intel looks at AI and security as kind of being at this intersection,” Anand remarks. The intersection he refers to is not just about using AI to enhance security measures or about securing AI itself; it’s about a holistic approach to safeguarding data during AI operations.


Confidential AI emerges from the broader concept of confidential computing, which focuses on protecting data while it’s being processed, not just when it’s stored or in transit. In simpler terms, it’s about ensuring sensitive data privacy throughout the AI lifecycle, from training to inference. Anand illustrates, “Confidential computing ensures that data is opaque inside a trusted execution environment, making it secure from external threats even during computation.” This foundation is pivotal for AI systems where data often traverses and transforms through various states and operations.

Delving deeper, Anand explains the mechanics of how Confidential AI works, highlighting its role in securing AI operations across industries, from healthcare to financial services. He describes a scenario that is becoming increasingly common: AI models being tampered with, which could lead to disastrous outcomes. “About 86 percent of enterprises are concerned about the security of their models,” he says, citing a Forrester report, underscoring the widespread data-security anxiety among industries that rely on AI.

Confidential AI addresses these fears head-on by implementing layers of security within the computing processes that host and handle AI models. This involves sophisticated encryption techniques not just when the data is at rest or in transit, but, crucially, when it is in use and being processed. “When data is operated upon, it has to be unencrypted traditionally, which is where the vulnerability creeps in. Confidential AI counters this by maintaining encryption even during computation within a secure environment,” Anand explains.
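To make the data-in-use idea concrete, here is a minimal, purely illustrative Python sketch. The `SimulatedEnclave` class is an invented stand-in for a hardware-backed trusted execution environment, not Intel’s actual API; in a real deployment the decryption key would be released to the enclave only after remote attestation proves what code is running inside it.

```python
# Conceptual sketch only: a stand-in for a trusted execution environment (TEE).
# Real confidential computing relies on hardware-backed enclaves and remote
# attestation; nothing here is Intel's actual API.
from cryptography.fernet import Fernet


class SimulatedEnclave:
    """Pretend TEE: plaintext exists only inside this boundary."""

    def __init__(self, key: bytes):
        # In practice the key is released to the enclave only after it
        # proves its identity and integrity via remote attestation.
        self._cipher = Fernet(key)

    def run_inference(self, encrypted_record: bytes) -> bytes:
        plaintext = self._cipher.decrypt(encrypted_record)  # decrypted in use, inside the boundary
        score = str(len(plaintext)).encode()                 # placeholder for a model's prediction
        return self._cipher.encrypt(score)                   # result leaves the boundary encrypted


key = Fernet.generate_key()                # provisioned to the enclave out of band
client_cipher = Fernet(key)

record = client_cipher.encrypt(b"sensitive patient features")  # encrypted in transit
enclave = SimulatedEnclave(key)
encrypted_result = enclave.run_inference(record)                # processed without exposing plaintext outside

print(client_cipher.decrypt(encrypted_result))                  # only the data owner reads the result
```

The point of the sketch is simply that plaintext never exists outside the enclave boundary, which is what distinguishes data-in-use protection from encryption at rest or in transit.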

Confidential AI: Real-world applications and challenges

The conversation shifts to the real-world applications of Confidential AI, where Anand brings examples from current deployments that highlight its transformative potential. “There are companies like Anonym, Opaque, and Decentric that are pioneering the application of confidential computing to AI,” he notes, pointing out that these efforts are paving the way for broader adoption across sectors.

One of the most critical applications lies in the realm of federated learning, a method where AI models are collaboratively trained and improved using diverse datasets from multiple sources, without the underlying data itself ever being shared. “This model ensures that your data remains within your control, only contributing to the model’s learning without exposing the underlying data,” Anand elaborates. This not only enhances privacy but also amplifies the collective intelligence of AI systems without compromising security.
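The mechanics of federated learning are easy to sketch. Below is a minimal, federated-averaging-style illustration in Python with NumPy, assuming a simple logistic-regression-like model and invented toy datasets: each site trains on its own data locally, and only the resulting model weights, never the raw records, are sent back to be averaged.

```python
# Minimal federated-averaging illustration: raw data never leaves each site,
# only locally computed model weights are shared and averaged.
import numpy as np

rng = np.random.default_rng(0)

# Each participant holds its own private dataset (invented toy data).
local_datasets = [
    (rng.normal(size=(100, 3)), rng.integers(0, 2, size=100)) for _ in range(3)
]

global_weights = np.zeros(3)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training pass (logistic-regression-style gradient steps)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w

for _ in range(10):
    # Each site trains locally; only the resulting weights are sent back.
    site_weights = [local_update(global_weights, X, y) for X, y in local_datasets]
    # The coordinator averages the updates to refresh the shared model.
    global_weights = np.mean(site_weights, axis=0)

print("Aggregated model weights:", global_weights)
```

Confidential computing strengthens this picture further: the aggregation step itself can run inside a trusted execution environment, so even the shared weight updates are shielded from the party hosting the coordinator.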

Despite its promising applications, Confidential AI is not without challenges. The integration of robust security measures within AI processes can lead to performance trade-offs in terms of higher latency. Anand acknowledges this, saying, “There is absolutely a hit on performance when it comes to confidential computing, but it’s a small price to pay for significant gains in security.”

This is where Intel is making significant technology contributions to advance secure and collaborative AI solutions, according to Anand. He points to Intel’s fourth-generation Xeon Scalable processors with Advanced Matrix Extensions (AMX), which accelerate AI training and inference using built-in hardware capabilities. Intel Software Guard Extensions (Intel SGX) provide the trusted execution environment, while Intel Trust Authority delivers the independent attestation that underpins these capabilities. Intel’s partners are leveraging these features to develop solutions like data clean rooms, federated AI, and collaborative AI.


He further elaborates on strategies to mitigate these performance impacts, such as optimising the operations within trusted execution environments to minimise degradation. “Partners like Fortanix are fine-tuning these environments so that the performance hit is less than what you might expect,” he adds, putting the performance loss in the low single-digit percentages. The Fortanix approach focuses on minimising exits from the trusted execution environment (TEE), which is where the degradation occurs. By batching these exits, the transition cost is incurred once per batch rather than per operation, significantly reducing the overall impact, according to Anand.
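The batching idea can be sketched in a few lines. The hypothetical example below simply models the cost trade-off: every exit from the enclave carries a fixed transition cost, so grouping many records into one exit amortises it. The cost figures and function names are invented for illustration and are not Fortanix’s or Intel’s actual interfaces or measurements.

```python
# Illustrative sketch of why batching TEE exits helps: every exit from the
# enclave has a fixed transition cost, so one exit per batch beats one per record.
# All names and numbers are invented for illustration, not a real TEE API.

EXIT_COST_US = 8          # assumed fixed cost of a single enclave transition (microseconds)
PER_RECORD_COST_US = 1    # assumed cost of handling one record outside the enclave

def cost_unbatched(num_records: int) -> int:
    # One enclave exit per record: pay the transition cost every time.
    return num_records * (EXIT_COST_US + PER_RECORD_COST_US)

def cost_batched(num_records: int, batch_size: int) -> int:
    # Buffer records inside the enclave and exit once per full batch.
    num_exits = -(-num_records // batch_size)  # ceiling division
    return num_exits * EXIT_COST_US + num_records * PER_RECORD_COST_US

n = 10_000
print("per-record exits:", cost_unbatched(n), "us")
print("batched exits (256 per batch):", cost_batched(n, 256), "us")
```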

Anand recognises that some industries might be intolerant of even a minor performance decrease. In such cases, he emphasises the importance of focusing on the security benefits. “Would you rather have your model getting lost or stolen or somebody poisoning your model or somebody stealing your IP (which is really your model)? Wouldn’t you take that 1-2 percent performance hit over something as fundamental as an unsecure tech stack that allows bad actors to steal your model and destroy your entire business? I would submit that for a small price to pay for degradation and performance, you get significantly large amounts of security benefit,” Anand argues, recommending that accepting a small performance trade-off to prioritise security is a wise decision for protecting intellectual property.

The future is full of Confidential AI

Anand offers compelling insights into the future trajectory of Confidential AI, projecting a vision that could redefine security paradigms from the cloud all the way to the edge of computing, painting a broad canvas of possibilities for the integration of AI and security technologies.

“Intel is pushing the boundaries on delivering Confidential AI from edge to cloud,” Anand shares, emphasising the extensive work being done to bring this technology into everyday applications. This isn’t just about keeping data secure in a central server or cloud infrastructure; it’s about creating a seamless tapestry of security that blankets every node of the computing network, according to Anand. It signals a shift towards a more robust security framework embedded directly into the hardware and software architectures of products and platforms at scale.

The practical applications of such advancements are far-reaching. As Anand points out, “You don’t always send everything to the hyperscaler cloud. There are going to be edge cloud deployments.” This is particularly crucial as the amount of data processed outside traditional data centres continues to grow, driven by the proliferation of IoT devices and mobile computing platforms. Here, Confidential AI can play a pivotal role in ensuring that data remains secure, whether it’s being processed locally on a PC or at an edge server in a factory.


Confidential AI’s potential to secure data across the entire computing spectrum fundamentally changes how businesses and consumers will interact with technology. Anand is optimistic about the democratisation of this technology, predicting, “Confidential AI will trickle down to PCs and edge nodes from the hyperscaler environments, ensuring that companies like Intel protect your data across the entire spectrum.”

However, the integration of Confidential AI into everyday devices and applications also brings challenges, particularly in terms of user control and consent. Anand reassures that users will have “complete access to what they want to share or not share,” allowing individuals to retain control over their digital footprints. This user-centric approach is vital as it respects individual privacy while harnessing the benefits of AI.

As I conclude my conversation with Intel’s Anand Pashupathy, having navigated the complex terrain of Confidential AI, its critical importance in today’s digital age becomes increasingly clear. As AI continues to permeate every facet of our lives, it is the principles of Confidential AI, in many ways, that will govern how this transformative technology evolves in a manner that is secure, ethical, and beneficial for all.
