ITU Academy & RealTyme Course Session 2 - Data Ownership Challenge for Governments

In the second session of our course with ITU Academy, we turned our attention from legacy GSM and messaging systems to a rapidly evolving and far more complex frontier: data ownership in the age of AI. Our discussions shifted from what we are leaving behind to what we must urgently design for what lies ahead, because the systems we adopt now will define not only how governments communicate, but who ultimately controls that communication.

The focus was clear: data sovereignty, compliance, and the expanding risk surface introduced by AI-enhanced tools. But what emerged most sharply was the realization that governance can no longer rely solely on regulatory compliance. It must be reinforced with operational control and design foresight.

From Consumer Convenience to Institutional Risk

We began the session by revisiting a growing trend: the adaptation of consumer messaging apps, particularly open-source ones, for public sector use. These platforms are widely available, intuitive, and even free to modify. Across the globe, governments are asking: “Why not just build our own secure messaging app based on something like Signal or Matrix?”

At first glance, this seems like a logical shortcut. But beneath the surface lies a critical misconception: that security is transferable. That the safety of the original platform extends to the forked version. Unfortunately, it doesn’t.

Open-source software, while transparent and community-driven, is not inherently secure, especially at the scale and sensitivity of national government use. When a government forks a consumer messaging app to create a bespoke solution, they also assume full operational responsibility for its security, performance, and compliance posture. Without specialized teams, rigorous threat modeling, and continuous audits, these “quick wins” can easily become long-term liabilities.

We looked at examples where open-source forks were deployed under the assumption of baked-in security, only to later expose user data, metadata, or internal communications due to misconfigurations, outdated dependencies, or lack of clear vendor accountability.

The lesson? Creating a secure tool is not about copying code. It's about mastering operational security end-to-end.

Enterprise Collaboration: Feature-Rich, But Sovereignty-Poor

As our discussion progressed, we turned to the other end of the spectrum: enterprise collaboration platforms. Tools like Zoom, Microsoft Teams, and Google Meet have become essential for government collaboration, especially in hybrid or remote environments. These platforms are AI-enhanced, feature-rich, and often compliant with major regulatory frameworks.

Yet beneath this polished surface lies a set of serious questions: Where is the data going? Who owns the analytics? Can the government truly enforce its own rules on platforms it doesn’t control?

In the early days of these tools, many were designed with some degree of data sovereignty in mind, with on-premises hosting and private deployments available for high-sensitivity use cases. But in the race to innovate and scale globally, most of these platforms have since moved aggressively toward cloud-first architectures, often hosted across multiple jurisdictions.

This creates a paradox: governments may be using tools that are technically “compliant,” but they’ve relinquished sovereign control over their communications. Compliance, we emphasized in this session, is not the same as sovereignty. The former ensures you are following legal guidelines; the latter ensures you own and govern your own digital ecosystem.

A Typical Government Workday and The Invisible Risk Surface

To bring this into focus, we walked through a fictional, yet highly realistic, scenario: a typical day in the life of a government officer.

In a single day, that officer might:

- Make a GSM call to a colleague in another department

- Coordinate internally using a consumer messaging app

- Join a video meeting with AI-driven transcription and behavior analysis features

These actions may seem routine, even harmless. But from a data perspective, each one leaves behind metadata, content, and behavioral insights, scattered across various networks and jurisdictions. Some are stored on-premise. Others go to the cloud. Some remain encrypted. Others might be available to vendors via debug logs or algorithmic “telemetry.”

And once AI enters the picture (scoring productivity, performing silent sentiment analysis, auto-classifying documents), the exposure increases dramatically. The officer’s communication is no longer just content. It becomes interpreted data, shaped and archived by tools whose logic and memory the government may not control.

AI: The New Operating System of Government Communication

AI is not simply an add-on to communication platforms; it is quickly becoming the core operating layer. Today’s systems autocomplete responses, flag anomalies, filter messages, and even route or redact documents based on inferred sensitivity.

In many ways, this offers extraordinary potential for public sector efficiency, but it also introduces profound new risks. As we discussed, an AI that is not sovereign does more than assist: it observes, learns, and potentially reports elsewhere.

Consider the analogy we used during the session: imagine a smart home where the locks, lights, and routines are controlled by a cloud platform. Now imagine that this smart system is sending usage patterns and behavioral insights to a company, or a government, in another country.

Would you trust that smart home? More importantly, would you trust it to run your foreign ministry?

Without sovereign AI (AI trained on approved data, hosted on national infrastructure, governed by auditable frameworks, and shielded from unauthorized access), governments risk handing over operational intelligence about their own internal behavior.

Beyond the User: The Rise of Agentic AI

One of the most forward-looking discussions in Session 2 revolved around Agentic AI — systems that don’t just support users but act on their behalf.

Imagine telling your system:
“Schedule a ministerial meeting, notify all attendees, prepare briefing materials, and push them to the messaging app.”
Agentic platforms can already do this, connecting calendars, documents, and communications into a seamless workflow. But the question that followed was sobering:

Who owns the logic that decides what gets scheduled, who gets notified, and how that content is prioritized?

As governments adopt these tools, they risk embedding foreign operational logic deep into their internal processes. And since most agentic systems rely on public-cloud deployment, their actions can be monitored, and the models behind them trained and improved, outside of sovereign control.
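To make that question concrete, here is a minimal, hypothetical sketch in Python of how such an agentic workflow might be wired together. The function names, field names, and endpoints are invented for illustration; the point is simply where the planning and prioritization logic runs.

```python
# Hypothetical sketch only: names, fields, and endpoints are invented for illustration.

def call_vendor_cloud(endpoint: str, payload: dict) -> dict:
    # In a real agentic platform this would be an HTTPS call to the vendor's
    # public-cloud API; everything in `payload` leaves sovereign infrastructure here.
    print(f"[external] {endpoint} <- {payload}")
    return {"actions": [{"type": "book_slot"}, {"type": "notify_attendees"}, {"type": "push_briefing"}]}

def schedule_ministerial_meeting(attendees: list[str], briefing_docs: list[str]) -> dict:
    # Step 1: the planning step runs on the vendor's hosted model, so the logic that
    # decides who is notified and what is prioritized is not owned locally.
    plan = call_vendor_cloud("/v1/plan", {
        "goal": "schedule ministerial meeting",
        "attendees": attendees,
        "documents": briefing_docs,
    })
    # Step 2: the agent then acts on calendars, documents, and messaging on the user's behalf.
    for action in plan["actions"]:
        call_vendor_cloud("/v1/execute", action)
    return plan

schedule_ministerial_meeting(["minister@example.gov"], ["briefing_q3.pdf"])
```

Even in this toy version, the attendee list, document titles, and scheduling intent all transit infrastructure the institution does not control.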

This makes the concept of governance-by-design essential. We must shift from simply adopting what’s available to specifying what is acceptable, across behavior, architecture, and policy.

As our course director, Francois Rodriguez, noted during the session:

“Governments today are not just choosing tools. They are shaping the logic that will govern their decisions tomorrow. If that logic isn’t sovereign, neither is the outcome.”

In an age where AI systems can make scheduling decisions, flag message tone, and score meeting productivity, even passive governance isn’t enough. You must actively define and enforce your system’s logic or accept that someone else already has.

Sovereignty vs. Compliance: Owning vs. Renting Your Digital Infrastructure

This session challenged us to rethink the foundational difference between compliance and sovereignty. If compliance is your building code, sovereignty is your land title.

Compliance means your digital house has fire exits, locks, and smoke detectors. But it doesn't mean someone else doesn’t hold the key or see through your windows. Sovereignty means you choose where your house is built, who enters, and what happens inside.

Too often, governments conflate compliance with control. But the accelerating complexity of AI, public cloud hosting, and global data regulation demands a more assertive posture: sovereign-by-design communication systems.

[Figure: A world map illustrating that Africa accounts for only 2% of global colocation data centers, underscoring the continent’s limited digital infrastructure and the risks of foreign data control and legal exposure under laws like the U.S. CLOUD Act.]

These systems must be:

- Hosted on local or national cloud infrastructure

- Transparent in vendor contracts and audit logs

- Governed by AI policies that anticipate not just current, but future threats

- Equipped with real-time visibility into data processing, storage, and inference

Designing a Sovereign Architecture — From Policy to Platform

A truly sovereign communication environment must be intentionally designed. That means:

- Hosting data in-country or in trusted regions

- Retaining cryptographic key ownership

- Ensuring all AI models are locally trained and explainable

- Auditing vendor behavior and service logs in real time

This is not just about security. This is about institutional self-determination in the digital age.
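As a purely illustrative exercise, those design requirements can be expressed as policy-as-code. The sketch below assumes a made-up deployment configuration schema (the field names and region identifiers are ours, not any vendor's) and simply flags which sovereignty requirements a proposed deployment fails.

```python
# Illustrative policy-as-code check; the schema and values are assumptions, not a real product's.

APPROVED_REGIONS = {"national-cloud-1", "trusted-region-eu"}  # hypothetical region identifiers

def check_sovereignty(deployment: dict) -> list[str]:
    """Return the sovereignty requirements a proposed deployment configuration fails."""
    violations = []
    if deployment.get("hosting_region") not in APPROVED_REGIONS:
        violations.append("Data is not hosted in-country or in a trusted region.")
    if not deployment.get("customer_holds_keys", False):
        violations.append("Cryptographic keys are held by the vendor, not the institution.")
    if deployment.get("ai_model_training") != "local":
        violations.append("AI models are not trained on sovereign infrastructure.")
    if not deployment.get("realtime_audit_logs", False):
        violations.append("Vendor behavior and service logs are not auditable in real time.")
    return violations

# Example: a typical cloud-first enterprise deployment fails three of the four checks.
print(check_sovereignty({
    "hosting_region": "global-multiregion",
    "customer_holds_keys": False,
    "ai_model_training": "vendor",
    "realtime_audit_logs": True,
}))
```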

The Post-Quantum Shift: Why Action Is Needed Now

We concluded the session with a forward-looking warning: quantum computing is not decades away; early machines already exist. While not yet widespread, the computational breakthroughs in this field pose an existential threat to current encryption standards.

Most government communications today rely on RSA, ECC, or other public-key cryptography methods that will not survive the quantum age. And because sensitive communications must remain protected not just today, but for decades to come, the transition to post-quantum cryptography must begin immediately.

This is not a hypothetical for the distant future: communications intercepted and stored today can be decrypted retroactively once quantum capability matures.
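This urgency is often summarized by what cryptographers call Mosca's inequality: if the time data must remain confidential plus the time needed to migrate exceeds the time until a cryptographically relevant quantum computer exists, then traffic harvested today is already exposed. The sketch below runs that arithmetic with purely illustrative figures, not forecasts.

```python
# Mosca's inequality with illustrative numbers only; the year figures are assumptions.

shelf_life_years = 25   # how long today's sensitive communications must stay confidential
migration_years = 10    # plausible time to migrate national systems to post-quantum cryptography
years_until_crqc = 15   # assumed arrival of a quantum computer able to break RSA/ECC

if shelf_life_years + migration_years > years_until_crqc:
    print("At risk: traffic harvested now can be decrypted while it is still sensitive.")
else:
    print("Within margin: migration can finish before the threat materializes.")
```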

Final Thoughts: Designing for Sovereignty, not Just Function

Session 2 served as a clear reminder that modern government communication is no longer just about speed, convenience, or even regulatory compliance. It’s about designing systems with sovereignty in mind, from the ground up.

Consumer apps, even those derived from open source, must be approached with rigor and caution. Enterprise collaboration tools offer impressive features, but often at the cost of data control. AI enhances everything it touches, but it must be a recruit, not a spy.

Because in the age of agentic AI, behavioral analytics, and quantum decryption, compliance alone is not enough. What matters is who governs the infrastructure and who the infrastructure ultimately serves.

The future will not wait. And neither should we.

Missed Session 1? Catch up on Legacy and Messaging Risks here.  

To request a private briefing or join our future training and course sessions for secure communication, reach out to our team.

Stay tuned for Session 3!
