Introduction
Organizations worldwide are taking charge of their data to build artificial intelligence systems tailored to their specific needs. This shift brings a critical challenge: how to maintain ownership while ensuring a safe and trusted flow of the high-quality data that reliable insights require. Discussions at the recent MIT Technology Review EmTech AI conference shed light on how AI factories are unlocking new levels of scale, sustainability, and governance. These developments position data sovereignty as a strategic imperative for both governments and enterprises.

The Strategic Imperative of Data Sovereignty
Data sovereignty—the concept that data is subject to the laws and governance structures of the nation where it is collected—has moved from an IT footnote to a boardroom priority. For governments, controlling sensitive citizen data is a matter of national security and regulatory compliance. For enterprises, it protects intellectual property and enables competitive differentiation through custom AI models. The challenge lies in operationalizing this control without sacrificing the data velocity needed for modern machine learning.
AI Factories: Enabling Scale and Governance
The emerging concept of the “AI factory” offers a solution. Much like a traditional factory transforms raw materials into finished goods, an AI factory turns raw data into trained models and actionable insights—at industrial scale. These facilities combine high-performance computing (HPC), massive storage, optimized networking, and robust governance frameworks. They allow organizations to build, train, and deploy AI systems securely within their own infrastructure, ensuring data never leaves controlled environments unless explicitly allowed.
Sustainability at Scale
Modern AI factories also address sustainability. By leveraging liquid cooling, efficient power management, and renewable energy sources, they reduce the carbon footprint of large-scale training. This aligns with corporate ESG goals and regulatory pressures on energy consumption.
Balancing Ownership with Data Flow
Ownership is meaningless if data cannot move freely enough to train robust models. Organizations must strike a balance: they need high-quality, diverse datasets from multiple sources, yet they must apply strict access controls. This requires sophisticated data governance—metadata management, provenance tracking, fine-grained access policies, and secure sharing mechanisms. The result is a trusted data ecosystem where insights flourish without compromising sovereignty.
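To make the governance mechanisms above concrete, here is a minimal sketch of a fine-grained access policy combined with provenance tracking. All names (`DataAsset`, `AccessPolicy`, the roles and regions) are hypothetical illustrations, not part of any product described in this article; real deployments would use a dedicated governance platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataAsset:
    """A dataset with a sensitivity classification and a residency requirement."""
    name: str
    classification: str              # e.g. "public", "internal", "restricted"
    jurisdiction: str                # region where the data must legally reside
    provenance: list = field(default_factory=list)

@dataclass
class AccessPolicy:
    allowed_roles: dict              # classification -> set of permitted roles
    allowed_jurisdictions: set      # regions where processing may occur

    def authorize(self, asset: DataAsset, role: str, region: str) -> bool:
        """Grant access only if the role clears the asset's classification
        and the processing region satisfies the residency rule."""
        granted = (
            role in self.allowed_roles.get(asset.classification, set())
            and region in self.allowed_jurisdictions
            and region == asset.jurisdiction
        )
        # Every decision, granted or denied, lands in the provenance trail.
        asset.provenance.append({
            "role": role,
            "region": region,
            "granted": granted,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return granted

policy = AccessPolicy(
    allowed_roles={
        "restricted": {"ml-engineer"},
        "internal": {"analyst", "ml-engineer"},
    },
    allowed_jurisdictions={"eu-west"},
)
asset = DataAsset("transactions", "restricted", "eu-west")
print(policy.authorize(asset, "ml-engineer", "eu-west"))   # True: role and region clear
print(policy.authorize(asset, "analyst", "us-east"))       # False: wrong role, wrong region
print(len(asset.provenance))                               # 2: both decisions were logged
```

The design choice worth noting is that authorization and provenance are coupled: data never moves without the decision being recorded, which is what makes the resulting ecosystem auditable rather than merely locked down.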
Expert Perspectives from EmTech AI
Two featured speakers at the conference brought deep expertise to these topics.
Chris Davidson, VP of HPC and AI Customer Solutions, HPE
Chris Davidson leads Hewlett Packard Enterprise’s global strategy for AI Factory solutions and Sovereign AI. He works with governments, enterprises, and research institutions to build secure, scalable national- and enterprise-grade AI capabilities. At EmTech AI, he discussed how AI factories unlock scale while preserving data control. Davidson also directs product management and performance engineering across HPE’s HPC and AI portfolio, which includes large-model training platforms and Cray exascale systems. His teams define product strategy, performance architecture, and deployment models that position HPE at the forefront of high-performance and AI computing. Over nine years at HPE, he has led initiatives in performance engineering, AI cloud, and professional services. Previously, he held technical and leadership roles in biotech and medical diagnostics. Davidson holds an MBA in Entrepreneurship and Finance and a BS in Biology from Loyola University Chicago.

Arjun Shankar, Division Director, National Center for Computational Science, ORNL
Mallikarjun (Arjun) Shankar is the Division Director for the National Center for Computational Science at Oak Ridge National Laboratory. His research bridges computer science and large-scale scientific discovery campaigns that rely on scalable computing and data science. He is a joint faculty appointee at the University of Tennessee’s Bredesen Center, a senior member of the IEEE, and a senior member of the ACM. At the conference, he shared insights on how national labs are operationalizing AI for scientific research while maintaining data sovereignty.
Governments Leading the Way
Nations such as France, Japan, and Saudi Arabia have announced national AI strategies that include building sovereign AI factories. These investments ensure that sensitive data remains under local jurisdiction while enabling advanced capabilities in health, defense, and public services. The approach mirrors the early days of cloud computing but with a stronger emphasis on localization and control.
Enterprises: Custom AI, Competitive Edge
Enterprises are following suit. Banks, pharmaceutical firms, and manufacturers are deploying private AI factories to train models on proprietary data. This allows them to create unique insights—for example, a bank detecting fraud using its own transaction patterns without exposing customer data to third-party cloud providers. The strategic advantage lies in speed, customization, and data sovereignty.
Conclusion: The Path Forward
Operationalizing AI for scale and sovereignty is no longer optional—it is a strategic necessity. By adopting AI factory architectures, organizations can achieve the scale needed for advanced AI while maintaining the governance and control that modern regulations demand. The conversations at EmTech AI underscore that the future of AI is not just about algorithms and hardware, but about trust, ownership, and the intelligent flow of data. As more players embrace this model, the balance between openness and security will define the next era of artificial intelligence.