Mapping and Disrupting Modern Influence Networks

Nicholas Van Landschoot
November 24, 2025

When I first discovered social media intelligence (SOCMINT), I naturally gravitated towards techniques involving the analysis of individual accounts, deanonymization, and surfacing historical content tied to a particular identity. This remains one of the most interesting spaces in open-source intelligence (OSINT) to work in, thanks to the endless vault of novel techniques that can be used to map out an identity.

While targeted analysis is both near and dear to my heart and incredibly important, today I want to write about a different kind of SOCMINT: network-wide analysis that spans a massive number of accounts, often hosted on entirely different platforms.

Understanding the Dynamics of Influence Networks

These kinds of investigations often aim to understand foreign disinformation networks, Chinese spamouflage targeting Western elections, Iran-linked influence networks, or massive bot farms tied to crypto scams. A far cry from the image of a detective staring at an evidence board trying to get to the bottom of a homicide or fraud case, online informational attacks now represent one of, if not the, most actively exploited attack surfaces in the world.

Over the past year building Intrace, I have had the opportunity to learn about these networks from every angle. Naturally, it makes sense to start with the threat actors. What are their motivations and incentives? What do they stand to gain? What is the profile of someone who typically exploits social networks to spread a message?

When I analyze the actors behind campaigns, a pattern that shows up time and time again is that they are rarely driven by ideology alone, if at all. To be clear, there are grassroots communities that behave like influence networks as a natural byproduct, but they tend to act with less intention. Instead, most are driven by a blend of financial incentive, political direction, and opportunism. Some are innocuous, such as social media contractors looking to hit engagement quotas; others are state-linked teams focused on steering specific narratives; and plenty fall somewhere in between.

Creating the perfect disinformation campaign may be less about crafting a flawless lie and more about shaping the environment around the lie so that no one thinks to question it. Volume, persistence, and timing exploit a fundamental flaw in human thinking: we tend to see patterns everywhere. Once you see how these systems function at scale, the individual accounts feel less like characters in a story and more like replaceable parts in a machine built to spam users.

Disrupting Influence Networks is Hard

The fact that accounts are largely interchangeable clearly presents a major problem. The best analysts in the world can map out a network of accounts pushing a crypto scam onto the elderly. Maybe they can even get these accounts shut down if they are engaging in explicitly criminal behavior. The brutal truth is that it often doesn’t matter because followers are cheap, and if a scam has a good ROI, it will be operating again by the next morning if not the next hour.

This raises the question: why should any analyst go to the trouble of mapping out these digital networks in the first place? The answer is this: even if you can’t stop the churn, understanding the structure of the system gives you a chance to see how the operation actually breathes, where it’s fragile, and where a well-placed intervention can stall the entire engine long enough to matter.

Disrupting Networks with Seed Accounts

One practical example is when a network depends on a few core “seed accounts” that feed content to hundreds of downstream bot accounts. If those seeds are identified and removed, the downstream chatter collapses for a while because the bots have nothing fresh to push.
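To make the seed-account idea concrete, here is a minimal sketch of how an analyst might rank seed candidates by fan-out. The repost edges, account names, and the `min_fanout` threshold are all hypothetical; real data would come from platform APIs or scraped timelines, and real networks need far more signals than raw fan-out.

```python
from collections import defaultdict

# Hypothetical repost edges: (source_account, amplifying_account).
# In practice these would come from platform data (reposts, quotes, replies).
reposts = (
    [("seed_1", f"bot_{i}") for i in range(40)]
    + [("seed_2", f"bot_{i}") for i in range(25, 60)]
    + [("organic_user", "friend_a"), ("organic_user", "friend_b")]
)

def seed_candidates(edges, min_fanout=10):
    """Rank accounts by how many distinct downstream accounts amplify them.

    Accounts whose content fans out to an unusually large set of
    amplifiers are candidates for the network's seed layer.
    """
    fanout = defaultdict(set)
    for source, amplifier in edges:
        fanout[source].add(amplifier)
    ranked = sorted(fanout.items(), key=lambda kv: len(kv[1]), reverse=True)
    return [(acct, len(amps)) for acct, amps in ranked if len(amps) >= min_fanout]

print(seed_candidates(reposts))  # the two seeds surface; the organic user does not
```

A simple fan-out count like this is deliberately crude; on real graphs, centrality measures (e.g. betweenness) catch seeds that hide behind intermediary layers.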

This only tends to matter in short-lived or high-pressure moments where timing is important: elections, viral scandals, sudden policy fights. News cycles move faster every year as attention spans continue to fall, so if a network misses its window, it loses the chance to go viral and shape popular opinion. It matters much less in slow-rolling campaigns where actors have plenty of backups and can rebuild quickly; there, you will waste resources faster than the adversary.

Disrupting Networks Dependent on Amplification Cycles

Another example can be seen in coordinated amplification cycles. Some networks rely on tight timing to get a post trending before platforms can react. If you understand that rhythm, disrupting the first wave can kill the entire campaign before it gains traction. Without administrator access to moderate platforms, the options to disrupt an operation like this are narrower, but it is still possible.
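Understanding that rhythm starts with spotting it. The sketch below flags suspiciously dense posting windows in a stream of timestamps; the timeline data and the window/threshold parameters are hypothetical, and real detection would calibrate against the topic's organic baseline rate.

```python
from datetime import datetime, timedelta

def find_bursts(timestamps, window=timedelta(minutes=5), threshold=10):
    """Slide a time window over sorted timestamps and flag any window
    containing at least `threshold` posts -- a crude signal of a
    coordinated first wave."""
    ts = sorted(timestamps)
    bursts = []
    start = 0
    for end in range(len(ts)):
        # Shrink the window from the left until it spans at most `window`.
        while ts[end] - ts[start] > window:
            start += 1
        count = end - start + 1
        if count >= threshold:
            bursts.append((ts[start], ts[end], count))
    return bursts

# Hypothetical timeline: 12 posts inside two minutes, then sparse chatter.
base = datetime(2025, 11, 1, 12, 0)
posts = [base + timedelta(seconds=10 * i) for i in range(12)]
posts += [base + timedelta(hours=h) for h in (2, 5, 9)]

print(find_bursts(posts))  # only the opening burst is flagged
```

Once the first wave is visible in near-real time, the disruption options below (noise injection, early counter-content) can be timed against it.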

Noise injection works well because it clutters the algorithmic surface the actors are trying to steer. If you get enough unrelated posts, commentary, or counter content into the same tag or topic early, their first wave cannot gain traction. As a disclaimer, it is essential to approach operations like these in an ethical manner.

The best use case here is when the operation depends heavily on reach rather than substance, which applies to the majority of influence operations, from trend hijacking to fake outrage storms. Breaking the first wave often kills the whole plan, but it matters less when the actors don't care about trending topics and are instead attempting to infiltrate a niche area over time.

Disrupting Cross-Platform Networks

A third case involves cross-platform bridges, which can present a major challenge to analysts as they try to map activity between platforms. However, many of these operations lean on a single forum, group, or chat channel as their staging ground. Pulling visibility from that hub forces the actors to rebuild their workflow and slows the spread. I will not dive into all of the potential methods to disrupt a channel like this here as that deserves its own post; however, as a quick note, simple crowding and dilution will work wonders here. 

This has an outsized impact when the bridge is a bottleneck, for example, if a relatively small admin team is coordinating large numbers of low-skill amplifiers. If the bottleneck breaks, the whole system loses coordination. It matters less in decentralized ecosystems where planning happens across many parallel groups. LLMs also present a new challenge here, as bottlenecks can be overcome with synthetic posts.

Disrupting Networks with Figureheads

There are also reputation-based networks where a handful of accounts act as validators to make everything else look real. This is the one case where the fight is less about pure attention and more about perception, as exposing those validators publicly can break the illusion and make the rest of the network less convincing.

This matters when the audience relies on a figurehead to trust the message, such as in fringe political communities, niche financial groups, or conspiracy circles. Once the validators lose credibility, the rest of the network becomes noise. It matters less when the campaign is already operating in chaotic or anonymous environments where no one expects credibility in the first place, and replacement figures can often fill the vacuum quickly.

Disrupting Large Influence Networks

Regardless of the type of attack, a super common weakness is operational laziness. Actors reuse images, phrasing, and infrastructure across campaigns. Even if the phrasing is not exact, modern machine learning can detect similarities, and highlighting those fingerprints can help platforms detect and remove entire batches instead of picking off posts one at a time. 
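Reused phrasing is one of the easiest fingerprints to catch. As a minimal sketch (not the machine-learning pipelines platforms actually run), word-shingle Jaccard similarity flags templated variants of the same post; the sample posts and the shingle size are invented for illustration.

```python
def shingles(text, k=3):
    """Word-level k-shingles of a post."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two posts' shingle sets."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Hypothetical templated spam variants versus an unrelated organic post.
spam_a = "huge returns guaranteed join our crypto signal group today"
spam_b = "huge returns guaranteed join our crypto signal group now"
organic = "the weather in oslo is cold this week"

print(jaccard(spam_a, spam_b))   # high: near-duplicate template
print(jaccard(spam_a, organic))  # zero: no shared shingles
```

At campaign scale you would swap exact Jaccard for MinHash or embedding similarity, but the principle is the same: recycled material clusters, and clusters can be removed in batches.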

This tends to be most effective when the actors rely on scale to perpetuate illegal enterprises, such as mass spam, scam waves, or foreign political influence runs. The more they recycle their own material, the easier it is to wipe out large clusters. It matters far less when the operation is well-resourced and rotates assets quickly enough that no pattern sticks around long enough to exploit. This forces threat actors to trade scale for reliability, and scale is normally the number one thing that makes or breaks any information campaign. 

Pro tip: assuming you do not have access to directly moderate the social media platform in question, you can create infrastructure to automatically report accounts (this may go against ToS so be careful) or you can run a contextualization campaign.

Disrupting Echo Chambers

Speaking of contextualization, when the threat actor hasn't committed a crime but is pushing a narrative, this approach is often the best option. Flagging and framing posts can blunt the impact without removing them outright. X.com's Community Notes is a good example, relying on crowdsourced corrections and clarifications to democratize fact checking. It's promising but not without tradeoffs, since any open system can become an attack surface. YouTube's practice of attaching Wikipedia links is another version of this idea, though it relies on heavier moderation on Wikipedia's end, and rather than users adding context directly, YouTube's algorithm detects when a video might be sensitive.

In our early research, the approach taken by X seems to land better. It’s more community-driven and tailored to each post, with notes that match the actual content. YouTube's method often raises more concerns about censorship or bias, either because it’s less community-driven or because the context is generic and doesn’t change per post. Still, adding context, calling out suspicious activity, or attaching background to repeated claims can shift how the content is received. 

These softer interventions work best against low-effort campaigns where actors rely on users taking everything at face value and may not work well if the issue is highly contentious or well known.

Contextualization does not require platform ownership, and ultimately the best way to shape the narrative is to get more engagement than your adversaries, so an action such as redirecting real posters towards an echo chamber may be the most effective option when possible.

At Intrace we build systems for cross-domain intelligence, from influence mapping to broader network analysis across the open web. If you're facing problems anywhere in this landscape or need deeper visibility into how these operations work, reach out; we'd love to take a look together.
