Analysis · March 23, 2026 · 13 min read

Detecting Coordinated Influence Operations in Philippine Social Media

From Beijing-trained bloggers to AI-powered cyborg networks: how OSINT techniques can identify, map, and document the disinformation campaigns targeting Filipino public opinion.

By OSINT.PH Team

Key Takeaways

  • China actively recruits Filipino influencers through state-funded training programs
  • The Spamouflage network deployed a deepfake video of President Marcos through accounts active only during Beijing business hours
  • Cyborg operations combine human influencers, troll accounts, and AI-generated content
  • POGO-linked funding, CCP united front groups, and coordinated rally networks form a broader influence ecosystem
  • Five behavioral red flags and technical detection methods for mapping these campaigns

Two Fronts, One Problem

Philippine social media faces coordinated influence operations on two fronts. The first is foreign: China actively works to shape Filipino public opinion on the West Philippine Sea dispute through trained influencers, fake news pages, and proxy firms. The second is domestic: political influence machines deploy cyborg networks, part human and part machine, to manipulate public perception around elections, impeachment proceedings, and accountability.

These operations are not speculation. In 2020, Facebook took down 155 accounts with 130,000 followers, confirming they originated from China. In March 2025, several pro-Duterte vloggers admitted in a congressional hearing to attending a China-funded training seminar in Beijing. A Reuters investigation revealed the Chinese Embassy hired a Manila-based firm to boost messaging using fake accounts. Researchers have also documented AI-powered networks mass-producing deepfakes, flooding comment sections, and deploying pre-written copypasta campaigns within hours of political events.

The question for investigators is how to detect, document, and map these operations systematically.

Part 1: The Pro-China Playbook

How Beijing Shapes Filipino Opinion

China's approach has evolved. Early attempts at direct messaging, like a Chinese Embassy music video during the pandemic, generated backlash. The strategy shifted to using Filipino voices as intermediaries, making propaganda appear organic.

A 2024 AidData report documented how Beijing uses intermediaries to shape narratives "without direct attribution." The model works through layers: state-funded training programs produce sympathetic influencers, who produce content amplified by networks of fake and real accounts, supported by pseudo-news websites that provide a veneer of journalistic credibility.

Case Study: The Spamouflage Deepfake Operation

In July 2024, hours before President Marcos delivered his State of the Nation Address, a deepfake video depicting him using illicit drugs was circulated by Duterte supporters. The video was debunked by Philippine authorities, but not before spreading rapidly across social media.

Research by the Australian Strategic Policy Institute (ASPI) identified at least 80 inauthentic accounts on X and 11 YouTube videos amplifying the deepfake. ASPI assessed them as very likely linked to Spamouflage, a covert social media network operated by China's Ministry of Public Security.

The accounts left clear digital fingerprints:

  • Activity timing: Almost all accounts were active only during Beijing business hours, from 9:19 a.m. to just before 4 p.m., with reduced activity from noon to 2 p.m. consistent with a lunch break
  • Account characteristics: Typically created in 2024, using female Western names like "Susan Jones" or "Leesa Tydeman"
  • Linguistic tells: One account shared the video with text containing a double-byte comma (common in Chinese, Japanese, and Korean fonts) rather than a standard English comma
  • Cross-campaign activity: The same accounts also promoted a book asserting China's South China Sea sovereignty claims and amplified CCP talking points about the Philippines being a "U.S. proxy"
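The activity-timing fingerprint above is straightforward to test programmatically. The sketch below is a minimal illustration, not ASPI's methodology: it assumes you have already exported a suspect account's post timestamps as Unix epoch seconds, and the function names and the 9 a.m. to 4 p.m. window are illustrative choices.

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

BEIJING = timezone(timedelta(hours=8))  # UTC+8, no daylight saving

def hourly_activity(timestamps_utc):
    """Bucket post times by Beijing-local hour of day."""
    return Counter(datetime.fromtimestamp(ts, tz=BEIJING).hour for ts in timestamps_utc)

def business_hours_share(hours, start=9, end=16):
    """Fraction of all posts falling inside a 9 a.m.-4 p.m. Beijing window.
    A share near 1.0 across an account's whole history is a strong signal."""
    total = sum(hours.values())
    inside = sum(count for hour, count in hours.items() if start <= hour < end)
    return inside / total if total else 0.0
```

An organic Filipino account posts across evenings and weekends; an account whose entire history scores near 1.0 on this check, with a dip around the Beijing lunch hour, matches the Spamouflage pattern described above.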

ASPI also identified a novel tactic: the Spamouflage network was seeding articles by unknown freelancers in legitimate Taiwanese and Hong Kong news outlets, then amplifying those articles across social media. This gave CCP narratives the appearance of independent journalism.

The deepfake itself was first shared by a Filipino-American vlogger at a pro-Duterte rally in Los Angeles. Whether the CCP directly supported the video's creation or simply amplified it opportunistically, the coordination between domestic actors and Chinese state-linked accounts demonstrates the layered nature of these operations.

The POGO-CCP Connection

The influence ecosystem extends beyond social media. ASPI's research highlighted links between CCP-connected Philippine Offshore Gaming Operators (POGOs), the pro-Duterte Maisug rally movement, and CCP united front groups.

The Maisug rallies, held globally across Filipino diasporas, seek to mobilize support for the Duterte family. Their funding sources remain opaque. Intelligence community sources have allegedly linked POGO funding to these rallies through networks connected to Michael Yang, a former economic advisor to Duterte and member of various CCP united front work groups.

This pattern of criminal enterprises linked to CCP influence operations is not unique to the Philippines. Similar dynamics have been documented in Cambodia (Prince Holding Group) and Myanmar (Kokang criminal families maintaining close relations with Yunnan province officials). The convergence of organized crime, united front work, and information operations represents a broader CCP strategy across Southeast Asia.

Five Content Red Flags

These behavioral patterns, documented extensively by Filipino journalists and researchers, help distinguish coordinated pro-China messaging from organic opinion.

1. Echoing Beijing's territorial claims. The most direct indicator is repetition of official Chinese positions, particularly defenses of the nine-dash line, which was invalidated by the 2016 international arbitral tribunal ruling. Content that presents already-rebutted arguments without acknowledging the ruling often traces back to coordinated networks.

2. Asymmetric criticism. Legitimate criticism of U.S. foreign policy is not propaganda. The United States colonized the Philippines for nearly half a century. But coordinated networks exhibit a telling asymmetry: aggressive condemnation of American "imperialism" combined with conspicuous silence about China building artificial islands, ramming Coast Guard vessels, deploying water cannons against Filipino sailors, and driving fishermen from traditional grounds. This asymmetry is a behavioral fingerprint.

3. Contradictory threat framing. Coordinated influencers simultaneously minimize China's aggression and warn that any resistance means "nuclear annihilation." The goal is not consistency. It is paralysis. A 2024 study by political economist Alvin Camba documented how influencers cherry-pick outdated academic arguments to downplay Chinese aggression while fearmongering about the consequences of pushback.

4. Weaponizing Sinophobia accusations. Conflating criticism of the Chinese government with prejudice against Chinese people is deliberate deflection. Criticizing water cannon attacks on Filipino sailors is not racism. The Filipino-Chinese community is integral to Philippine society, and many Filipino-Chinese citizens oppose China's actions in the West Philippine Sea.

5. Targeting military credibility. Undermining trust in the Philippine military serves a strategic purpose: if Filipinos do not trust their armed forces, they are less likely to support defense of territorial claims. Influencers systematically target officers who have been most effective at exposing Chinese aggression, particularly those who pioneered the "assertive transparency" approach of documenting incidents with photos and video. As military historian Jose Custodio noted, China "plays the long game." Disinformation is a sustained, long-term campaign.

Part 2: The Domestic Influence Machine

The Cyborg Model

Domestic political influence operations deploy what researchers call a "cyborg" model: networks combining hyper-partisan influencers, coordinated troll accounts, genuine supporters, and AI-generated content into a single amplification machine.

These operations utilize generative AI to mass-produce deepfakes of everything from fictional supporters to fabricated man-on-the-street interviews. Keyboard warriors flood comment sections with hundreds of identical talking points in minutes. Networks of pseudo-analytical Facebook pages produce lengthy essays disguised as independent commentary.

Five Tactical Patterns

Research by Filipino journalists has identified five recurring tactics in coordinated domestic influence campaigns:

1. Dismissal and denial. Networks flatly deny documented facts. When government auditors flag financial anomalies, coordinated accounts flood comment sections with alternative explanations, often verbatim repetitions of the same talking points. Hundreds of comments using identical specific words and phrases within hours of a news event is a clear coordination signal.

2. Distortion. Rather than outright denial, distortion twists legitimate information into misleading conclusions. Networks misrepresent audit findings, selectively quote legal decisions, or present partial information as complete analysis. Hyper-partisan pages publish lengthy "analysis" that consistently omits critical context. The omissions are too systematic to be accidental.

3. Deflection. When faced with specific allegations, coordinated networks redirect attention to other targets. The technique avoids addressing substance while creating false equivalence. Sudden, synchronized topic shifts across multiple accounts and pages, particularly within hours of unfavorable news, reveal the coordination.

4. Emotional manipulation. Casting political figures as persecuted victims transforms constitutional processes into narratives of personal suffering. Coordinated networks simultaneously deploy specific emotional language ("betrayal," "persecution," "abandoned") across supposedly independent accounts. The uniformity reveals shared messaging guidance.

5. Conspiracy amplification. The most damaging tactic links domestic political processes to foreign conspiracies, often importing "deep state" narratives from American far-right discourse. Copypasta campaigns deploy identical pre-packaged narratives across hundreds of accounts, many of them low-quality profiles with minimal friends and little prior activity, within hours of triggering events. The speed and uniformity indicate pre-positioned messaging activated on cue.

Part 3: Technical Detection Methods

Content analysis identifies what is being said. OSINT identifies who is saying it, how they are connected, and where the infrastructure leads.

Account Clustering

Coordinated networks leave fingerprints in account creation patterns:

  • Registration timing: Accounts created in clusters within narrow windows suggest bulk creation
  • Naming conventions: Sequential usernames or auto-generated patterns indicate batch operations
  • Profile characteristics: Low friend counts, recent creation dates, minimal posting history, and generic profile photos. These account-level signals compound when they appear across hundreds of accounts posting identical content
  • Activity windows: As the Spamouflage case demonstrated, accounts operating exclusively during specific timezone business hours reveal their operators' location
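The account-level signals above can be combined into a simple red-flag count. This is an illustrative sketch, not a production classifier: the `Account` fields and every threshold (180 days, 20 friends, 10 posts) are assumptions to be tuned against your own data, and a high score is a prompt for human review, not proof of inauthenticity.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Account:
    username: str
    created: date
    friends: int
    posts: int
    has_default_photo: bool

def suspicion_score(acct, today=date(2026, 3, 23)):
    """Count account-level red flags; thresholds are illustrative, not validated."""
    score = 0
    if (today - acct.created).days < 180:  # recently created
        score += 1
    if acct.friends < 20:                  # low friend count
        score += 1
    if acct.posts < 10:                    # minimal posting history
        score += 1
    if acct.has_default_photo:             # generic or missing profile photo
        score += 1
    return score
```

The signal compounds at scale: one account scoring 4 is unremarkable, but hundreds of high-scoring accounts posting identical content is the clustering pattern this section describes.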

Cross-Platform Correlation

Influence campaigns rarely operate on a single platform. Using username, email, or phone number correlation, OSINT tools can map the same operators working across Facebook, YouTube, TikTok, X, and Telegram. When the same identity surfaces on multiple platforms pushing identical narratives with synchronized timing, coordination is evident.

Domain and Infrastructure Analysis

Pseudo-news websites provide credibility for manufactured narratives. OSINT reveals their infrastructure:

  • WHOIS records: Domains registered in bulk through the same registrar
  • DNS analysis: Multiple "independent" sites resolving to the same hosting
  • Certificate transparency logs: CT data revealing bulk infrastructure deployment
  • Technology fingerprinting: Shared CMS, analytics codes, or ad configurations connecting seemingly unrelated sites
  • Article seeding: As ASPI documented, coordinated networks now plant articles by unknown freelancers in legitimate outlets before amplifying them on social media
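The infrastructure checks above reduce to grouping domains by shared fingerprints. This is a minimal sketch assuming you have already gathered WHOIS, DNS, and page-source data into per-domain attribute dicts; the field names (`registrar`, `ip`, `analytics_id`) are illustrative placeholders for whatever your tooling extracts.

```python
from collections import defaultdict

def infrastructure_clusters(domains):
    """Group domains sharing any infrastructure fingerprint.
    `domains` maps domain -> dict of observed attributes (field names are assumed)."""
    by_fingerprint = defaultdict(set)
    for domain, attrs in domains.items():
        for key in ("registrar", "ip", "analytics_id"):
            value = attrs.get(key)
            if value:
                by_fingerprint[(key, value)].add(domain)
    # keep only fingerprints shared by two or more "independent" sites
    return {fp: sorted(ds) for fp, ds in by_fingerprint.items() if len(ds) >= 2}
```

No single shared attribute is conclusive (many unrelated sites share a registrar), but the same pair of domains surfacing under multiple fingerprints at once is the pattern that connects a pseudo-news network.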

Comment Flooding Detection

Comment flooding is highly visible and highly detectable:

  • Temporal clustering: Plotting timestamps reveals coordinated bursts, such as hundreds of comments within 30-minute windows
  • Text similarity: Organic discussion produces diverse language. Coordinated flooding produces near-identical phrasing
  • Copypasta identification: Exact string matching across accounts reveals the scale of synchronized posting
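Both flooding signals above can be checked with a few lines of code. The sketch assumes comments have been scraped as `(timestamp, text)` pairs; the 30-minute window, the 100-comment burst threshold, and the 5-copy copypasta threshold are illustrative defaults, and the whitespace/case normalization is a deliberately crude stand-in for real text-similarity measures.

```python
from collections import Counter

def copypasta_groups(comments, min_copies=5):
    """Find repeated comments by normalizing whitespace/case and exact-matching."""
    normalized = Counter(" ".join(text.lower().split()) for _, text in comments)
    return {text: n for text, n in normalized.items() if n >= min_copies}

def burst_windows(comments, window_s=1800, min_count=100):
    """Count comments per fixed 30-minute window and flag coordinated bursts.
    Returns {window_start_seconds: comment_count} for windows over threshold."""
    buckets = Counter(int(ts // window_s) for ts, _ in comments)
    return {b * window_s: n for b, n in buckets.items() if n >= min_count}
```

Organic discussion spreads out in time and wording; a comment section that trips both checks at once, hundreds of near-identical strings inside a single window, is the copypasta signature described above.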

AI Content Detection

Generative AI has lowered the barrier for producing fake content at scale:

  • Metadata analysis: AI-generated videos often lack genuine recording signatures
  • Cross-platform tracking: Tracing where AI content migrates from tagged pages to untagged reshares
  • Visual artifacts: Current AI tools leave detectable artifacts in faces, hands, text, and backgrounds

Network Mapping

The most powerful technique is mapping relationships:

  • Page admin overlap: Multiple "independent" pages sharing administrators
  • Amplification patterns: The same accounts consistently promoting content from specific pages
  • Content velocity: Unnaturally fast propagation from origin to amplification accounts
  • Copypasta activation: Pre-written content deployed across hundreds of accounts within hours of a triggering event, with timing that indicates coordinated activation
  • Financial linkages: Tracing connections between influence networks and funding sources such as POGOs or united front organizations
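Relationship mapping boils down to building a graph of entities and finding its connected components. The sketch below is a stdlib-only illustration under the assumption that each documented link has been recorded as an `(entity_a, entity_b, relation)` tuple; dedicated graph tooling would add weights, relation types, and visualization.

```python
from collections import defaultdict

def connected_components(edges):
    """Group entities (accounts, pages, domains, emails) joined by documented links.
    `edges` is a list of (entity_a, entity_b, relation) tuples."""
    graph = defaultdict(set)
    for a, b, _relation in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, components = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:  # iterative depth-first traversal
            current = stack.pop()
            if current in component:
                continue
            component.add(current)
            stack.extend(graph[current] - component)
        seen |= component
        components.append(component)
    return components
```

Each component is a candidate network: two "independent" pages, their shared admin, and the email behind their domains collapse into one cluster the moment the edges are recorded.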

Part 4: From an Operator's Desk

Theory is useful. Workflow is what matters. Here is how an OSINT.PH operator would investigate a suspected coordinated influence network from start to finish.

Step 1: Identify the Starting Point

An investigation begins with a single lead: a Facebook page posting suspiciously uniform political content, a YouTube channel producing high-volume narratives with low organic engagement, or a cluster of accounts flooding a news post's comment section with identical talking points.

The operator takes note of the page name, any visible admin profiles, the email or contact info listed, and the domain of any linked website.

Step 2: Search Across Platforms

The operator enters the page name, admin username, or associated email into OSINT.PH and runs a multi-platform search. The engine queries 20+ platforms simultaneously, returning any matching accounts on social media, messaging apps, developer tools, forums, and more.

This is where coordination becomes visible. A username tied to a political Facebook page also appears on Telegram, X, and a GitHub account. An email listed on a pseudo-news website is registered on multiple social media platforms under different display names. These cross-platform links are difficult for influence-network operators to keep fully compartmentalized.

Step 3: Investigate the Infrastructure

If the suspected network operates pseudo-news websites, the operator runs domain and IP lookups. OSINT.PH pulls WHOIS registration data, DNS records, SSL certificate history from CT logs, and technology fingerprints.

Key questions at this stage:

  • Were multiple domains registered through the same registrar on similar dates?
  • Do the sites share hosting infrastructure or IP addresses?
  • Are the same analytics tracking codes or ad network IDs present across "independent" sites?
  • When were SSL certificates issued, and do the issuance dates cluster together?

Each shared data point strengthens the case that supposedly independent outlets are operated by the same entity.

Step 4: Map the Network

The operator opens an investigation board and begins building the graph. Nodes represent entities: accounts, pages, domains, email addresses, phone numbers. Edges represent connections: "same email," "shared admin," "registered same day," "identical content posted within minutes."

As the graph grows, the structure of the network becomes visible. A cluster of Facebook pages sharing two admin accounts. Those admins linked to an email that registered three pseudo-news domains. Those domains hosted on the same server as a fourth site that was previously flagged by Meta.

The investigation board is end-to-end encrypted. Only the operator holds the key.

Step 5: Document and Export

The operator captures the findings:

  • Screenshot key posts, comments, and account profiles before they are deleted or made private
  • Record timestamps of coordinated activity (comment floods, copypasta deployment, synchronized posts)
  • Note account creation dates and any patterns in naming, profile photos, or activity windows
  • Export the investigation board as a PNG for inclusion in reports
  • Generate CSV or PDF exports of search results for case documentation

Step 6: Report

The documented evidence can be submitted to:

  • Platform trust and safety teams for review and potential takedown
  • Government agencies responsible for election integrity or national security
  • Journalists and researchers who track influence operations
  • International partners and multilateral institutions

The strength of the report depends on the pattern, not individual data points. A single suspicious account is an observation. A mapped network of 50 accounts with shared infrastructure, synchronized posting, and coordinated messaging is evidence of an operation.

Why Detection Matters Now

These operations represent sustained, strategic campaigns, not impulsive reactions. The infrastructure for influence operations is built and maintained well before activation. Networks documented today will be the amplification engines deployed during the 2028 election season.

The Philippines has a vibrant, engaged online public. That engagement is both a strength and a vulnerability. Detection is the first line of defense.

For investigators: map the infrastructure before campaigns peak. Document systematically. Screenshots and archives of coordinated behavior provide evidence that platforms can act on. Follow patterns, not individual posts. Any single post can be dismissed as opinion. Hundreds of accounts posting identical content within an hour is a pattern that demands investigation.

As Custodio noted: "The Philippines, throughout history, has always produced traitors. They are a dime a dozen. We will always be fertile ground for China operations. That is why we need vigilance."


References: This analysis draws on reporting by PCIJ (Regine Cabato, Gian Libot), Reuters, Philstar.com, AidData, the Australian Strategic Policy Institute (Albert Zhang), and academic research by Alvin Camba. Detection methodologies are based on established OSINT practices for identifying coordinated inauthentic behavior.