ITU Academy & RealTyme Course Session 1: Legacy Mobile & Messaging Risks for Governments

Executive Summary: Session 1 of the RealTyme and ITU Academy Course exposed critical vulnerabilities in legacy mobile infrastructure and consumer messaging apps still used by many governments. From protocol-level flaws to AI-driven threats, the session outlined why sovereign, policy-driven communication systems are now essential. This blog captures the highlights and the next steps.

Across the globe, government communications are being silently compromised, not just by attackers, but by the very systems they rely on.  

Session 1 of the RealTyme and ITU Academy Course brought together government stakeholders to confront a hard truth: the mobile communication systems many institutions rely on today are fundamentally insecure. As state actors and cybercriminal groups grow more sophisticated, the need for sovereign, policy-driven communication systems has never been more urgent.

Delivered in collaboration with the International Telecommunication Union (ITU), this first session laid the groundwork for what will be a multi-part journey: unpacking the structural flaws in mobile network infrastructure, the systemic risks of using consumer-grade tools for official purposes, and the rapidly evolving threat landscape shaped by artificial intelligence.

Governments Are Still Relying on Fragile Mobile Infrastructure

Key Focus: The Vulnerabilities in Legacy Mobile Networks

The course opened with an analysis of legacy GSM and 3G mobile networks, still in use across many government and public service environments. While their ubiquity and simplicity have made them a default choice for decades, the session demonstrated that these networks are riddled with structural vulnerabilities.

Among the most critical issues highlighted:

- GSM provides no network authentication: handsets cannot verify base stations, so attackers can easily set up rogue cell towers (IMSI catchers) and capture device traffic, including calls, SMS, and location data.

- Encryption is weak or entirely absent in fallback modes like 2G. Even in 3G or LTE networks, forced downgrade attacks can push devices into vulnerable states.

- Protocol-level flaws, such as unencrypted control messages, enable tracking, impersonation, and man-in-the-middle attacks.

- Metadata — who is communicating with whom, when, and from where — is exposed and can be harvested passively.

These vulnerabilities aren’t hypothetical. Participants were introduced to real-world scenarios in which adversaries can intercept, manipulate, or spoof communications by exploiting these legacy systems, including demonstrations of fake base stations and network downgrade attacks.
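
These attacks also leave traces that defenders can look for. As a rough illustration (not part of the course material), the sketch below scans a hypothetical log of cell-registration events collected from managed devices and flags abrupt drops from 4G/5G to 2G as possible downgrade or rogue-base-station activity; the record format and field names are assumptions.

```python
# Minimal sketch: flag suspicious RAT (radio access technology) downgrades in
# per-device cell logs. The record format is hypothetical; real data would
# come from an MDM or on-device telemetry agent.
from dataclasses import dataclass

SECURE_RATS = {"LTE", "NR"}             # 4G / 5G
LEGACY_RATS = {"GSM", "GPRS", "EDGE"}   # 2G family

@dataclass
class CellEvent:
    device_id: str
    timestamp: str   # ISO 8601, sorts chronologically as text
    rat: str         # e.g. "LTE", "GSM"
    cell_id: str

def find_downgrades(events: list[CellEvent]) -> list[tuple[CellEvent, CellEvent]]:
    """Return (previous, current) pairs where a device fell from 4G/5G to 2G."""
    suspicious = []
    last_seen: dict[str, CellEvent] = {}
    for ev in sorted(events, key=lambda e: (e.device_id, e.timestamp)):
        prev = last_seen.get(ev.device_id)
        if prev and prev.rat in SECURE_RATS and ev.rat in LEGACY_RATS:
            suspicious.append((prev, ev))
        last_seen[ev.device_id] = ev
    return suspicious

if __name__ == "__main__":
    log = [
        CellEvent("phone-17", "2025-03-01T09:00:00Z", "LTE", "310-410-1234"),
        CellEvent("phone-17", "2025-03-01T09:02:00Z", "GSM", "310-410-9999"),
    ]
    for prev, cur in find_downgrades(log):
        print(f"{cur.device_id}: {prev.rat} -> {cur.rat} at {cur.timestamp} "
              f"(new cell {cur.cell_id}) - possible IMSI catcher or forced downgrade")
```

Heuristics like this only reduce exposure; they do not repair the underlying protocol.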

The recommendation was unequivocal:

"Disable 2G and legacy fallbacks for any government-sanctioned communication. They are not salvageable."

Governments that continue to depend on mobile voice and SMS, even with VPNs or secure devices, are simply building on top of a broken foundation.

Consumer Messaging Apps: Convenient, Popular and Dangerous

Rethinking the Role of Consumer Messaging Apps

The session then turned to the systemic misuse of consumer messaging apps like WhatsApp, Signal (including modified clones), Telegram, and others for sensitive internal and cross-border communication.

Despite offering encryption on paper, these platforms are deeply incompatible with sovereign communication needs:

- Phone number–based identity models make it impossible to separate personal from professional use, increasing the risk of impersonation and social engineering.

- Encryption gaps in backups, media, and group communications are common and often misunderstood.

- Lack of centralized governance makes it impossible for institutions to enforce access control, data retention policies, or even revoke access if a device is lost.

- Cloud-based infrastructure controlled by foreign entities introduces sovereignty and compliance risks that cannot be mitigated by settings alone.

One critical point made in the session:

Security is not just about encryption; it's about control. If a government can't fully control who accesses, manages, or audits its communication data, then it's not secure.

Participants were also presented with cautionary case studies, such as:

- The TeleMessage Signal clone breach, where misconfigured or repurposed Signal forks used by governments were exploited, resulting in massive data leaks.

- Instances of WhatsApp group links shared publicly that allowed unintended participants, including journalists and foreign actors, to monitor sensitive communications.

- Legal disputes in which cloud-hosted conversations were subpoenaed from third-party providers, exposing the fallacy of “private” chats.

The takeaway? Consumer messaging apps, no matter how encrypted, were never designed to meet the operational and geopolitical requirements of sovereign institutions.

AI Is Escalating the Threat Environment

A forward-looking segment of the course explored how AI is transforming the capabilities of attackers, from lone operators to nation-state actors.

Among the emerging threats discussed:

- Deepfake impersonations of political leaders and military officials, capable of spreading disinformation or issuing false commands.

- AI-generated phishing and social engineering that mimics writing styles, language preferences, and interpersonal dynamics to target officials with near-perfect lures.

- Automated traffic analysis and metadata harvesting at a scale far beyond human capability, powered by large language models and machine learning.

In this new environment, attackers no longer need to breach the content of communications. They can infer intent, location, decision-making timelines, and internal hierarchies simply from patterns in communication behavior.
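
To make that concrete, the toy sketch below (an illustration, not course material) works only on hypothetical call metadata, who contacted whom and when, and still surfaces likely "hub" officials. Real adversaries apply far more capable models to far larger datasets.

```python
# Minimal sketch: infer likely decision-makers from communication metadata
# alone. The records (caller, callee, timestamp) are hypothetical; no message
# content is used at any point.
from collections import Counter

records = [
    ("aide.a", "minister.x", "2025-03-01T08:55"),
    ("aide.b", "minister.x", "2025-03-01T08:57"),
    ("minister.x", "director.y", "2025-03-01T09:10"),
    ("director.y", "unit.z", "2025-03-01T09:12"),
    ("aide.a", "minister.x", "2025-03-02T08:50"),
]

# Heavily contacted parties tend to sit higher in the decision chain, and
# bursts of contact before known events reveal decision-making timelines.
inbound = Counter(callee for _, callee, _ in records)
outbound = Counter(caller for caller, _, _ in records)

for party, hits in inbound.most_common(3):
    print(f"{party}: {hits} inbound / {outbound[party]} outbound contacts")
# Even this trivial tally starts to expose hierarchy; AI-driven analysis of
# real traffic does the same at national scale, continuously.
```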

This demands a fundamental rethink of what constitutes secure communication.

"Governments must now defend not only the message, but also the metadata, context, and relationships surrounding it."

Session 1 Takeaways: A New Framework for Government Communication

Session 1 closed with a clear set of principles to guide governments in modernizing their communication infrastructure:

1. Ditch Legacy Infrastructure

- Sunset GSM and SS7-reliant services.

- Disable legacy fallback modes on devices.

- Audit mobile fleet configurations for downgrade vulnerabilities, such as devices still permitted to fall back to 2G.

2. Eliminate Consumer Apps from Official Use

- Ban the use of WhatsApp, Signal clones, Telegram, and similar tools for any government or institutional purpose.

- Establish policies and audits to prevent shadow IT communication channels.

3. Adopt Purpose-Built Secure Communication Platforms

A modern, sovereign communication platform must offer:

- End-to-end encryption with post-quantum readiness (a hybrid key-establishment sketch follows below)
- Mutual identity verification without reliance on phone numbers
- Granular access control and auditability
- Metadata protection and traffic obfuscation
- Sovereign infrastructure hosting and compliance alignment

These capabilities are not “nice to have”. They are essential for any government wishing to protect the confidentiality, integrity, and strategic advantage of its internal communications.
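
To make the first requirement concrete: "post-quantum readiness" in practice usually means hybrid key establishment, where a classical exchange is combined with a post-quantum KEM so that an attacker would have to break both to recover the session key. The sketch below illustrates only the combining step. It uses the Python cryptography package for the classical X25519 half, while the post-quantum half is a labeled placeholder, since the concrete KEM (for example ML-KEM) and library are deployment choices beyond the scope of this post.

```python
# Minimal sketch of hybrid key derivation: bind a classical X25519 shared
# secret and a post-quantum KEM secret into one session key, so that breaking
# either mechanism alone is not enough. The PQ secret here is a placeholder;
# a real system would take it from an ML-KEM encapsulation via a vetted library.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half: X25519 Diffie-Hellman between two parties.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
classical_secret = alice_priv.exchange(bob_priv.public_key())

# Post-quantum half (placeholder): stands in for an ML-KEM shared secret.
pq_secret = os.urandom(32)

# Derive one session key from both secrets with HKDF.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-handshake-demo",
).derive(classical_secret + pq_secret)

print(f"Derived a {len(session_key) * 8}-bit hybrid session key")
```

The important property is that both secrets feed a single key-derivation step, so neither mechanism alone carries the session.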

Governance Gaps: Security Is Institutional

While technical vulnerabilities dominated much of Session 1, another layer of risk became increasingly clear: the lack of institutional governance around communication tools.

Governments are often forced to operate in complex, fragmented digital environments, where employees, departments, or even entire ministries adopt tools out of convenience, not policy. This “shadow IT” approach creates dangerous blind spots:

- No unified communication policy across agencies. Many departments use different platforms with inconsistent security postures.

- Lack of training and awareness. Staff often aren't briefed on the risks of using default apps or default settings for sensitive communication.

- Insufficient oversight. Without central audit logs, metadata visibility, or permission controls, it's impossible to track or investigate breaches effectively.

- Cross-border exposure. When officials use apps hosted on foreign servers, sensitive data may be subject to foreign surveillance laws, often without anyone realizing.

This governance failure is not due to negligence, but rather a mismatch between today’s threat landscape and yesterday’s communication norms. Tools like WhatsApp and Signal may be encrypted, but encryption alone doesn’t make a system secure if it can’t be governed, audited, or trusted under sovereign control.

Sovereignty Means Centralized Oversight

A key takeaway from this portion of the course was that true security lies in institutional control. That includes:

- Defining approved tools and enforcing their use through policy (a minimal compliance check is sketched at the end of this section).

- Establishing chain-of-command visibility over how sensitive information moves across teams.

- Building systems that respect jurisdictional sovereignty while maintaining operational agility.

Without these guardrails, even the most encrypted apps can become risks, not protections.
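
To ground the "approved tools" point above, here is a deliberately simplified sketch of what enforcement can look like: it checks a hypothetical inventory of apps installed on managed devices against an allowlist of sanctioned messaging tools. The package names, the inventory format, and the com.example.sovereign.messenger identifier are illustrative assumptions; in a real deployment this data would come from an MDM/UEM export and the findings would feed a formal compliance workflow.

```python
# Minimal sketch: flag managed devices running non-approved messaging apps.
# Package names and the inventory format are illustrative assumptions.
APPROVED_MESSAGING = {"com.example.sovereign.messenger"}   # hypothetical approved app
KNOWN_CONSUMER_APPS = {
    "com.whatsapp": "WhatsApp",
    "org.telegram.messenger": "Telegram",
    "org.thoughtcrime.securesms": "Signal",
}

device_inventory = {
    "tablet-ministry-04": ["com.example.sovereign.messenger", "com.whatsapp"],
    "phone-cabinet-11": ["com.example.sovereign.messenger"],
}

def audit(inventory: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return, per device, the consumer messaging apps that violate policy."""
    findings = {}
    for device, packages in inventory.items():
        violations = [
            name for pkg, name in KNOWN_CONSUMER_APPS.items()
            if pkg in packages and pkg not in APPROVED_MESSAGING
        ]
        if violations:
            findings[device] = violations
    return findings

for device, apps in audit(device_inventory).items():
    print(f"{device}: non-approved messaging apps installed: {', '.join(apps)}")
```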

Looking Ahead

Session 1 closed with a deeper look at AI-linked threats to government communications.

Cyber threats used to be framed as purely technical problems: firewalls, malware, bad links. AI changes that framing, because it doesn't just scale attacks; it personalizes them.

The session presented an onion-layered model of how AI tools can be misused in increasingly sophisticated cyberattacks. Each layer builds on the previous one, creating a complex and dangerous threat landscape:

- Layer 1: Identity & Voice Deepfakes. This layer highlights the misuse of AI to create highly realistic fake identities and voices. Deepfake technology lets attackers convincingly impersonate individuals, making phishing and social engineering far more effective.

- Layer 2: AI-Phishing & Automated Social Engineering. Building on those fake identities, this layer shows how AI can automate and personalize phishing. AI tools can analyze social media profiles and other data to craft highly targeted, believable messages, increasing the likelihood of success.

- Layer 3: Data Exposure via AI Assistants. This layer covers the risks of the growing use of AI assistants, which can inadvertently expose sensitive data if not properly secured, handing attackers valuable information.

- Layer 4: AI-Powered Reconnaissance. AI tools can scan for vulnerabilities and identify patterns in target systems, automating and accelerating the reconnaissance phase of an attack so that weaknesses are found and exploited faster.

- Layer 5: Targeted Influence & Misinformation. The outermost layer is the use of AI to spread misinformation and manipulate public opinion. AI-powered bots can create and disseminate fake news and propaganda, influencing both individuals and organizations.

“As the course director, I urge you to re-evaluate the very core of our cybersecurity models. The familiar 'onion' of defense now conceals new, insidious layers, each powered by the malicious application of AI: from the synthetic deception of deepfakes and the cunning of automated AI-phishing, to the quiet treachery of data exposure via AI assistants, the precise foresight of AI-powered reconnaissance, and the pervasive shadow of targeted influence and misinformation. Understanding these evolving, intelligent layers is paramount to securing our digital future,” says Francois Rodriguez, Course Director.

Session 2 of the RealTyme–ITU Academy Course will dive into data ownership challenges: data sovereignty, AI, and compliance risks. To request a private briefing or to join future training sessions on secure communication, reach out to our team.

Stay tuned!
