From Beijing-trained bloggers to AI-powered cyborg networks: how OSINT techniques can identify, map, and document the disinformation campaigns targeting Filipino public opinion.
Philippine social media faces coordinated influence operations on two fronts. The first is foreign: China actively works to shape Filipino public opinion on the West Philippine Sea dispute through trained influencers, fake news pages, and proxy firms. The second is domestic: political influence machines deploy cyborg networks, part human and part machine, to manipulate public perception around elections, impeachment proceedings, and accountability.
These operations are not speculation. In 2020, Facebook took down 155 accounts with 130,000 followers, confirming they originated from China. In March 2025, several pro-Duterte vloggers admitted in a congressional hearing to attending a China-funded training seminar in Beijing. A Reuters investigation revealed the Chinese Embassy hired a Manila-based firm to boost messaging using fake accounts. Researchers have also documented AI-powered networks mass-producing deepfakes, flooding comment sections, and deploying pre-written copypasta campaigns within hours of political events.
The question for investigators is how to detect, document, and map these operations systematically.
China's approach has evolved. Early attempts at direct messaging, like a Chinese Embassy music video during the pandemic, generated backlash. The strategy shifted to using Filipino voices as intermediaries, making propaganda appear organic.
A 2024 AidData report documented how Beijing uses intermediaries to shape narratives "without direct attribution." The model works through layers: state-funded training programs produce sympathetic influencers, who produce content amplified by networks of fake and real accounts, supported by pseudo-news websites that provide a veneer of journalistic credibility.
In July 2024, hours before President Marcos delivered his State of the Nation Address, a deepfake video depicting him using illicit drugs was circulated by Duterte supporters. The video was debunked by Philippine authorities, but not before spreading rapidly across social media.
Research by the Australian Strategic Policy Institute (ASPI) identified at least 80 inauthentic accounts on X and 11 YouTube videos amplifying the deepfake. ASPI assessed them as very likely linked to Spamouflage, a covert social media network operated by China's Ministry of Public Security.
The accounts left clear digital fingerprints:
ASPI also identified a novel tactic: the Spamouflage network was seeding articles by unknown freelancers in legitimate Taiwanese and Hong Kong news outlets, then amplifying those articles across social media. This gave CCP narratives the appearance of independent journalism.
The deepfake itself was first shared by a Filipino-American vlogger at a pro-Duterte rally in Los Angeles. Whether the CCP directly supported the video's creation or simply amplified it opportunistically, the coordination between domestic actors and Chinese state-linked accounts demonstrates the layered nature of these operations.
The influence ecosystem extends beyond social media. ASPI's research highlighted links between CCP-connected Philippine Offshore Gaming Operators (POGOs), the pro-Duterte Maisug rally movement, and CCP united front groups.
The Maisug rallies, held globally across Filipino diasporas, seek to mobilize support for the Duterte family. Their funding sources remain opaque. Sources claiming ties to the intelligence community have linked POGO funding to these rallies through networks connected to Michael Yang, a former economic advisor to Duterte and a member of several CCP united front work groups.
This pattern of criminal enterprises linked to CCP influence operations is not unique to the Philippines. Similar dynamics have been documented in Cambodia (Prince Holding Group) and Myanmar (Kokang criminal families maintaining close relations with Yunnan province officials). The convergence of organized crime, united front work, and information operations represents a broader CCP strategy across Southeast Asia.
These behavioral patterns, documented extensively by Filipino journalists and researchers, help distinguish coordinated pro-China messaging from organic opinion.
1. Echoing Beijing's territorial claims. The most direct indicator is repetition of official Chinese positions, particularly defenses of the nine-dash line, which was invalidated by the 2016 international arbitral tribunal ruling. Content that presents already-rebutted arguments without acknowledging the ruling often traces back to coordinated networks.
2. Asymmetric criticism. Legitimate criticism of U.S. foreign policy is not propaganda. The United States colonized the Philippines for nearly half a century. But coordinated networks exhibit a telling asymmetry: aggressive condemnation of American "imperialism" combined with conspicuous silence about China building artificial islands, ramming Coast Guard vessels, deploying water cannons against Filipino sailors, and driving fishermen from traditional grounds. This asymmetry is a behavioral fingerprint.
3. Contradictory threat framing. Coordinated influencers simultaneously minimize China's aggression and warn that any resistance means "nuclear annihilation." The goal is not consistency. It is paralysis. A 2024 study by political economist Alvin Camba documented how influencers cherry-pick outdated academic arguments to downplay Chinese aggression while fearmongering about the consequences of pushback.
4. Weaponizing Sinophobia accusations. Conflating criticism of the Chinese government with prejudice against Chinese people is deliberate deflection. Criticizing water cannon attacks on Filipino sailors is not racism. The Filipino-Chinese community is integral to Philippine society, and many Filipino-Chinese citizens oppose China's actions in the West Philippine Sea.
5. Targeting military credibility. Undermining trust in the Philippine military serves a strategic purpose: if Filipinos do not trust their armed forces, they are less likely to support defense of territorial claims. Influencers systematically target officers who have been most effective at exposing Chinese aggression, particularly those who pioneered the "assertive transparency" approach of documenting incidents with photos and video. As military historian Jose Custodio noted, China "plays the long game." Disinformation is a sustained, long-term campaign.
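The asymmetry fingerprint in point 2 lends itself to a crude quantitative screen: count how often an account's posts hit critical keywords for each side of the dispute, then measure the imbalance. This is only a first-pass heuristic, a sketch under assumed inputs; the keyword lists are the investigator's own choices, and raw keyword counts are no substitute for proper stance analysis.

```python
def asymmetry_score(posts, topic_a_terms, topic_b_terms):
    """Crude asymmetry measure for one account's critical coverage of two
    topics. Returns 0.0 for balanced coverage, 1.0 when all matched terms
    fall on one side. Keyword matching is a rough proxy only."""
    text = " ".join(posts).lower()
    a = sum(text.count(t) for t in topic_a_terms)
    b = sum(text.count(t) for t in topic_b_terms)
    total = a + b
    if total == 0:
        return 0.0  # no matches at all: nothing to conclude
    return abs(a - b) / total

# Hypothetical posts and term lists for illustration.
one_sided = [
    "US imperialism again!",
    "american imperialism destroys nations",
]
print(asymmetry_score(one_sided,
                      ["imperialism", "american"],
                      ["water cannon", "artificial island"]))
```

A score near 1.0 across many posts flags an account for closer manual review; it proves nothing by itself, since a genuine activist can also be single-issue.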
Domestic political influence operations deploy what researchers call a "cyborg" model: networks combining hyper-partisan influencers, coordinated troll accounts, genuine supporters, and AI-generated content into a single amplification machine.
These operations use generative AI to mass-produce deepfakes of everything from fictional supporters to fabricated man-on-the-street interviews. Keyboard warriors flood comment sections with hundreds of identical talking points in minutes. Networks of pseudo-analytical Facebook pages produce lengthy essays disguised as independent commentary.
Research by Filipino journalists has identified five recurring tactics in coordinated domestic influence campaigns:
1. Dismissal and denial. Networks flatly deny documented facts. When government auditors flag financial anomalies, coordinated accounts flood comment sections with alternative explanations, often verbatim repetitions of the same talking points. When hundreds of comments repeat the same specific words and phrases within hours of a news event, that is a clear coordination signal.
2. Distortion. Rather than outright denial, distortion twists legitimate information into misleading conclusions. Networks misrepresent audit findings, selectively quote legal decisions, or present partial information as complete analysis. Hyper-partisan pages publish lengthy "analysis" that consistently omits critical context. The omissions are too systematic to be accidental.
3. Deflection. When faced with specific allegations, coordinated networks redirect attention to other targets. The technique avoids addressing substance while creating false equivalence. Sudden, synchronized topic shifts across multiple accounts and pages, particularly within hours of unfavorable news, reveal the coordination.
4. Emotional manipulation. Casting political figures as persecuted victims transforms constitutional processes into narratives of personal suffering. Coordinated networks simultaneously deploy specific emotional language ("betrayal," "persecution," "abandoned") across supposedly independent accounts. The uniformity reveals shared messaging guidance.
5. Conspiracy amplification. The most damaging tactic links domestic political processes to foreign conspiracies, often importing "deep state" narratives from American far-right discourse. Copypasta campaigns deploy identical pre-packaged narratives across hundreds of accounts, many of them low-quality profiles with minimal friends and little prior activity, within hours of triggering events. The speed and uniformity indicate pre-positioned messaging activated on cue.
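The copypasta fingerprint described above, identical text from many accounts inside a short window, lends itself to simple automation. Below is a minimal Python sketch; the comment records (author, text, timestamp) are hypothetical inputs an investigator would export from a platform API or scraper.

```python
from collections import defaultdict
from datetime import datetime, timedelta
import hashlib
import re

def normalize(text: str) -> str:
    """Collapse whitespace and case so trivial edits don't break matching."""
    return re.sub(r"\s+", " ", text.strip().lower())

def find_copypasta(comments, min_accounts=5, window=timedelta(hours=3)):
    """Group comments by a fingerprint of their normalized text; flag any
    fingerprint posted by many distinct accounts within a short window."""
    groups = defaultdict(list)
    for c in comments:  # each c: {'author': str, 'text': str, 'ts': datetime}
        fp = hashlib.sha256(normalize(c["text"]).encode()).hexdigest()
        groups[fp].append(c)
    flagged = []
    for fp, items in groups.items():
        items.sort(key=lambda c: c["ts"])
        authors = {c["author"] for c in items}
        span = items[-1]["ts"] - items[0]["ts"]
        if len(authors) >= min_accounts and span <= window:
            flagged.append({
                "fingerprint": fp,
                "accounts": len(authors),
                "span_minutes": span.total_seconds() / 60,
                "sample": items[0]["text"],
            })
    return flagged
```

Exact-match grouping after normalization catches verbatim copypasta; catching lightly edited variants would need fuzzier matching, such as shingling or edit-distance clustering.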
Content analysis identifies what is being said. OSINT identifies who is saying it, how they are connected, and where the infrastructure leads.
Coordinated networks leave fingerprints in account creation patterns:
Influence campaigns rarely confine themselves to a single platform. Using username, email, or phone number correlation, OSINT tools can map the same operators working across Facebook, YouTube, TikTok, X, and Telegram. When the same identity surfaces on multiple platforms pushing identical narratives with synchronized timing, coordination is evident.
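The correlation step reduces to indexing accounts by each selector and looking for selectors that recur across platforms. A minimal sketch; the account records and field names below are hypothetical stand-ins for data gathered from platform lookups.

```python
from collections import defaultdict

def correlate_identities(accounts):
    """Index accounts by each selector (username, email, phone) and return
    the selectors that surface on more than one platform.
    Each account is a dict like:
        {'platform': 'facebook', 'username': '...', 'email': '...'}"""
    index = defaultdict(list)
    for acct in accounts:
        for key in ("username", "email", "phone"):
            value = acct.get(key)
            if value:
                index[(key, value.lower())].append(acct)
    hits = {}
    for selector, matches in index.items():
        platforms = {m["platform"] for m in matches}
        if len(platforms) > 1:  # same selector seen on multiple platforms
            hits[selector] = sorted(platforms)
    return hits

# Hypothetical example: one handle reused across three platforms.
accounts = [
    {"platform": "facebook", "username": "TruthPinoy88", "email": "ops@example.org"},
    {"platform": "x", "username": "truthpinoy88"},
    {"platform": "telegram", "username": "TruthPinoy88"},
    {"platform": "youtube", "username": "DifferentName", "email": "OPS@example.org"},
]
print(correlate_identities(accounts))
```

Lowercasing selectors before indexing matters: the same operator rarely keeps capitalization consistent across platforms, and a case-sensitive index would miss the reuse.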
Pseudo-news websites provide credibility for manufactured narratives. OSINT reveals their infrastructure:
Comment flooding is highly visible and highly detectable:
Generative AI has lowered the barrier for producing fake content at scale:
The most powerful technique is mapping relationships:
Theory is useful. Workflow is what matters. Here is how an OSINT.PH operator would investigate a suspected coordinated influence network from start to finish.
An investigation begins with a single lead: a Facebook page posting suspiciously uniform political content, a YouTube channel producing high-volume narratives with low organic engagement, or a cluster of accounts flooding a news post's comment section with identical talking points.
The operator takes note of the page name, any visible admin profiles, the email or contact info listed, and the domain of any linked website.
The operator enters the page name, admin username, or associated email into OSINT.PH and runs a multi-platform search. The engine queries 20+ platforms simultaneously, returning any matching accounts on social media, messaging apps, developer tools, forums, and more.
This is where coordination becomes visible. A username tied to a political Facebook page also appears on Telegram, X, and a GitHub account. An email listed on a pseudo-news website is registered on multiple social media platforms under different display names. Operators of influence networks find it hard to keep these cross-platform links fully compartmentalized.
If the suspected network operates pseudo-news websites, the operator runs domain and IP lookups. OSINT.PH pulls WHOIS registration data, DNS records, SSL certificate history from CT logs, and technology fingerprints.
Key questions at this stage:
Each shared data point strengthens the case that supposedly independent outlets are operated by the same entity.
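Once registration and hosting data have been collected, grouping domains by shared attributes is mechanical. A minimal sketch; the record fields (registrant email, hosting IP, creation date) and domains are hypothetical, and in practice WHOIS privacy services will redact many registrant fields.

```python
from collections import defaultdict

def shared_infrastructure(records):
    """Group domains by each shared infrastructure attribute.
    records: {domain: {'registrant_email': ..., 'ip': ..., 'created': ...}}
    Returns {(attribute, value): [domains]} for values shared by 2+ domains."""
    by_attr = defaultdict(list)
    for domain, meta in records.items():
        for attr in ("registrant_email", "ip", "created"):
            value = meta.get(attr)
            if value:
                by_attr[(attr, value)].append(domain)
    return {k: sorted(v) for k, v in by_attr.items() if len(v) >= 2}

# Hypothetical pseudo-news domains sharing a registrant and server.
records = {
    "pinoy-news-daily.example": {"registrant_email": "admin@ops.example",
                                 "ip": "203.0.113.10", "created": "2024-01-15"},
    "manila-truth.example": {"registrant_email": "admin@ops.example",
                             "ip": "203.0.113.10", "created": "2024-01-15"},
    "unrelated-blog.example": {"registrant_email": "solo@home.example",
                               "ip": "198.51.100.7", "created": "2021-06-02"},
}
print(shared_infrastructure(records))
```

Each key in the output is one shared data point; two domains matching on registrant email, IP, and creation date simultaneously is far stronger than any single overlap.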
The operator opens an investigation board and begins building the graph. Nodes represent entities: accounts, pages, domains, email addresses, phone numbers. Edges represent connections: "same email," "shared admin," "registered same day," "identical content posted within minutes."
As the graph grows, the structure of the network becomes visible. A cluster of Facebook pages sharing two admin accounts. Those admins linked to an email that registered three pseudo-news domains. Those domains hosted on the same server as a fourth site that was previously flagged by Meta.
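The clustering an operator does visually on the board can be approximated in code as connected components over the edge list: any two entities linked by a chain of "same email" or "shared admin" edges belong to one cluster. A union-find sketch in Python; the entity names and relations are hypothetical.

```python
def connected_components(edges):
    """Union-find over an edge list of (entity_a, entity_b, relation) tuples;
    returns clusters of entities transitively linked by any relation."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b, _relation in edges:
        union(a, b)

    clusters = {}
    for node in parent:
        clusters.setdefault(find(node), set()).add(node)
    return list(clusters.values())

# Hypothetical board: two pages share an admin, who registered a domain.
edges = [
    ("pageA", "admin1", "shared admin"),
    ("pageB", "admin1", "shared admin"),
    ("admin1", "ops@mail.example", "same email"),
    ("site-c.example", "ops@mail.example", "registrant email"),
    ("pageZ", "admin9", "shared admin"),
]
print(connected_components(edges))
```

The relation label is ignored for clustering but worth keeping on each edge: when writing the report, the chain of specific relations is what turns a cluster into an argument.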
The investigation board is end-to-end encrypted. Only the operator holds the key.
The operator captures the findings:
The documented evidence can be submitted to:
The strength of the report depends on the pattern, not individual data points. A single suspicious account is an observation. A mapped network of 50 accounts with shared infrastructure, synchronized posting, and coordinated messaging is evidence of an operation.
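Documented evidence also holds up better when each captured artifact is hashed and timestamped at collection time, so later alteration of a screenshot or archive is detectable against the log. A minimal sketch; the file paths, URLs, and JSONL log format here are illustrative choices, not a prescribed standard.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_evidence(path, source_url, note, log_file="evidence_log.jsonl"):
    """Append a SHA-256 hash and UTC timestamp record for a captured file.
    Re-hashing the file later and comparing against the log shows whether
    the artifact has been modified since capture."""
    data = Path(path).read_bytes()
    record = {
        "file": str(path),
        "sha256": hashlib.sha256(data).hexdigest(),
        "captured_utc": datetime.now(timezone.utc).isoformat(),
        "source_url": source_url,
        "note": note,
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only log of this kind is the lightweight end of a spectrum; investigators who expect legal scrutiny may prefer dedicated archiving services that timestamp captures with a third party.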
These operations represent sustained, strategic campaigns, not impulsive reactions. The infrastructure for influence operations is built and maintained well before activation. Networks documented today will be the amplification engines deployed during the 2028 election season.
The Philippines has a vibrant, engaged online public. That engagement is both a strength and a vulnerability. Detection is the first line of defense.
For investigators: map the infrastructure before campaigns peak. Document systematically. Screenshots and archives of coordinated behavior provide evidence that platforms can act on. Follow patterns, not individual posts. Any single post can be dismissed as opinion. Hundreds of accounts posting identical content within an hour is a pattern that demands investigation.
As Custodio noted: "The Philippines, throughout history, has always produced traitors. They are a dime a dozen. We will always be fertile ground for China operations. That is why we need vigilance."
References: This analysis draws on reporting by PCIJ (Regine Cabato, Gian Libot), Reuters, Philstar.com, AidData, the Australian Strategic Policy Institute (Albert Zhang), and academic research by Alvin Camba. Detection methodologies are based on established OSINT practices for identifying coordinated inauthentic behavior.