Digital threat modelling for partner abuse

An overview of how digital threat modelling works in the context of technology-facilitated abuse.

How this threat model works (and why it matters)

Let’s say you’re being followed—but not in the dark alley sort of way. More like your phone always knows where you are, your messages seem strangely public, and your ex-partner is suspiciously well-informed about things you never told them.

Welcome to the digital side of abuse, where everyday technology becomes a toolkit for control.

Threat modelling, in this context, is just a fancy way of saying: “Let’s figure out what could go wrong, who might cause it, how they’d do it, and what the consequences could be—before it happens or gets worse.”

It’s like doing a home safety check, but for your digital life.

Why survivors need a different kind of model

Most tech security models are built for businesses: firewalls, hackers, boardroom panic.
This isn’t that.

Here, the threat isn’t some mysterious figure in a hoodie—it’s someone you might share a bed, a child, or a Netflix account with. They may not be “hacking” in the Hollywood sense, but they often have:

  • Access (to your devices, accounts, or passwords)
  • Knowledge (of your habits, routines, emotional triggers)
  • Motivation (to monitor, control, or harm you)

Which changes the rules entirely.

Why this model helps

Because recognising the pattern is the first step in breaking it.

This isn’t about blaming anyone for being targeted. Quite the opposite. Most tech is designed to be open, convenient, and—unfortunately—abuser-friendly. But once you can name the problem, you can start building strategies around it.

It also helps support workers, legal professionals, and anyone else involved to understand that:

  • This isn’t “just a tech issue”
  • It’s not “paranoia”
  • And no, “just block them” isn’t a solution

We use this model to understand, not to diagnose. Everyone’s situation is different, and there’s no one-size-fits-all solution. But thinking through what’s at risk, who might exploit it, and how they might do it? That’s a solid step toward reclaiming control.

How the model works

We break things down into simple, human-friendly categories:

Who’s causing the harm or doing the surveillance?

Spoiler: it’s not always just the ex. There are also enablers (apps, third parties) and digital opportunists who feed off your compromised data.

What’s worth protecting?

Think: your devices, your messages, your money, your location—even your identity. If it can be seen, stolen, or sabotaged, it goes here.

How do they get in?

Through physical access (“I just needed to check your texts”), sneaky features like cloud syncing, or techy tricks like spying on your Wi-Fi. Some methods are digital. Others are emotional, like pressuring you into sharing a password to “prove you trust me”.

What do these look like in real life?

From stalkerware to impersonation, from gaslighting via smart devices to using your own photos against you—this is the practical bit, where theory meets reality.

What kind of harm can this cause?

Monitoring. Isolation. Reputation damage. Financial sabotage. It’s not just a phone being hacked—it’s your independence being chipped away.

And what can it do to you?

Long-term effects like anxiety, safety risks, legal issues, and loss of trust in technology (or people). It’s not paranoia if they are watching you—especially if they bought the spyware off Amazon.