The Human HARMS: A different approach to threat modelling
By Kieron Ivy Turk, Anna Talas, Prof. Alice Hutchings
When we talk about the importance of cybersecurity, we often imagine hackers breaking into high-security systems to steal data or money, or to launch large-scale attacks. However, technology can also be used for harm in everyday situations. Traditional threat models tend to focus on protecting systems from highly skilled external attackers. While these models are effective at defending against such attackers, they do not adequately address interpersonal threats that often require little technical skill, such as those found in cases of domestic abuse.
The HARMS model (Harassment, Access and infiltration, Restrictions, Manipulation and tampering, and Surveillance) is a new threat modelling framework designed to identify non-technical and human-factors harms that are often missed by popular frameworks such as STRIDE. In developing it, we focused on how everyday technology, such as IoT devices, can be exploited to distress, control or intimidate others.
The five elements of this model are:
1. Harassment – Technology can be used to send harmful messages, play loud sounds to disturb victims, or repeatedly contact someone against their will.
2. Access and Infiltration – Abusers may gain access to devices by learning existing passwords, coercing victims into sharing access, or exploiting shared accounts to gain control.
3. Restrictions – Limiting access to technology, such as locking someone out of an account or preventing them from using certain features, can be a form of coercive control.
4. Manipulation and Tampering – Abusers can change settings, delete important information, or use technology to create fake accusations against a victim.
5. Surveillance – Smart devices with cameras, microphones, or location tracking can be misused to monitor and stalk victims without their knowledge.
The threat model can be used to consider how a device or application might be misused, and to identify ways it could be redesigned to make these harms more difficult to commit. Imagine, for example, a smart speaker in a shared home. An abusive individual could use it to send distressing messages to be read aloud, or to set alarms that go off in the middle of the night. Equally, if the smart speaker is connected to calendars, they could change or remove scheduled events so that other users miss meetings and appointments. Furthermore, connected devices can be controlled remotely or automatically through routines, causing changes that the victim does not understand and making them doubt their memory or even their sanity. An abuser could also monitor conversations through built-in microphones, or use device logs to keep track of the commands others have issued.
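To make this kind of review concrete, the sketch below shows one hypothetical way a design team might encode the HARMS categories as a checklist and run it against a device's capability list. The prompts, capability names, and the Device/harms_review helpers are illustrative assumptions for this post, not part of the published model or any existing tool.

```python
# Illustrative sketch only: a minimal HARMS review checklist.
# Capability names and prompt wording are assumptions chosen for this example.
from dataclasses import dataclass, field

# Review prompts for each HARMS category, keyed by the device capability
# that makes the harm possible.
HARMS_PROMPTS = {
    "Harassment": {
        "speaker": "Can messages or alarms be triggered remotely to distress a co-resident?",
    },
    "Access and infiltration": {
        "shared_account": "Does a shared or legacy account grant control after a relationship ends?",
    },
    "Restrictions": {
        "admin_controls": "Can one user lock another out of features or accounts they rely on?",
    },
    "Manipulation and tampering": {
        "remote_config": "Can settings, calendars, or routines be changed without a visible trace?",
    },
    "Surveillance": {
        "microphone": "Can audio be monitored, or command logs reviewed, by another household member?",
    },
}


@dataclass
class Device:
    name: str
    capabilities: set[str] = field(default_factory=set)


def harms_review(device: Device) -> list[tuple[str, str]]:
    """Return (category, question) pairs relevant to the device's capabilities."""
    findings = []
    for category, prompts in HARMS_PROMPTS.items():
        for capability, question in prompts.items():
            if capability in device.capabilities:
                findings.append((category, question))
    return findings


if __name__ == "__main__":
    smart_speaker = Device(
        name="Smart speaker",
        capabilities={"speaker", "microphone", "shared_account", "remote_config"},
    )
    for category, question in harms_review(smart_speaker):
        print(f"[{category}] {question}")
```

Run against the smart speaker example, a checklist like this would surface review questions under Harassment, Access and infiltration, Manipulation and tampering, and Surveillance, giving designers concrete prompts to consider alongside their usual technical threat modelling.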
As smart home technology, connected devices, and online platforms continue to evolve, it is crucial that we think beyond just technical security. Our HARMS model highlights how technology, even when working as intended, can be used to control and harm individuals. By incorporating human-centred threat modelling into software design and development, alongside traditional threat modelling methods, we can build safer systems that are harder to misuse for abuse.
Paper: Turk, K. I., Talas, A., & Hutchings, A. (2025). Threat Me Right: A Human HARMS Threat Model for Technical Systems. arXiv preprint arXiv:2502.07116. (https://arxiv.org/abs/2502.07116)