April 10th 2025

Stalked by Code: Navigating the Psychological and Legal Landscapes of Online Harassment

By Martine Mussies

Martine Mussies is an artistic researcher and autistic academic based in Utrecht, the Netherlands. She is a PhD candidate at the Centre for Gender and Diversity at Maastricht University, where she is writing her dissertation on The Cyborg Mermaid. Martine is also part of SCANNER, a research consortium aimed at closing the knowledge gap on sex differences in autistic traits. In her #KingAlfred project, she explores the online afterlives of King Alfred the Great, and she is currently working to establish a Centre for Asia Studies in her hometown of Utrecht. Beyond academia, Martine is a musician, budoka, and visual artist. Her interdisciplinary interests include Asia Studies, autism, cyborgs, fan art and fanfiction, gaming, medievalisms, mermaids, music(ology), neuropsychology, karate, King Alfred, and science fiction. More at: www.martinemussies.nl and LinkedIn.

Image by Warren

Abstract: Cyberstalking is a pervasive form of digital violence that exploits technological infrastructures to inflict psychological harm. This article examines the phenomenon through an interdisciplinary lens, combining neuropsychology, legal analysis, and autiethnographic narrative to explore the lived realities of victims—particularly neurodivergent individuals. It analyses the cognitive and emotional impact of prolonged digital harassment, highlighting the role of the hypothalamic-pituitary-adrenal axis and limbic system in chronic stress responses. Additionally, the article addresses the legal complexities of defining and prosecuting cyberstalking, especially in contexts of digital anonymity and evidentiary gaps. Finally, the dual role of artificial intelligence is explored, both as a potential weapon in the hands of stalkers and as a protective mechanism capable of early detection and personalised support for victims. In doing so, the paper calls for nuanced, ethical, and interdisciplinary responses to a rapidly evolving threat landscape.

Keywords: Cyberstalking, digital violence, neuropsychology, autiethnography, artificial intelligence, trauma, legislation, autism, gender, surveillance, social cognition, coping mechanisms

 

Introduction: Understanding Cyberstalking in the Age of AI

 

In our increasingly interconnected world, digital communication has become an integral part of everyday life (Graham & Dutter, 2019; Chayko, 2014). Yet, as our social lives expand into virtual spaces, so too do the risks we encounter (Waldman, 2018). One particularly insidious form of digital harm is cyberstalking—a pattern of abusive or threatening behaviour conducted via electronic means, such as social media platforms, messaging apps, email, or other online communication channels (Sheridan & Grant, 2007). Though often invisible to the public eye, cyberstalking constitutes a serious form of psychological violence (Siemieniecka & Skibińska, 2019) with far-reaching consequences for its victims (Short et al., 2015).

 

The urgency of addressing cyberstalking has grown alongside our deepening reliance on digital platforms (Kaur et al., 2021). Over two decades ago, Spitzberg & Hoobler (2002) already noted that as technology evolves, so too do the tools available to those who seek to manipulate, intimidate, or harass others online. In recent decades, the rise of social media has lowered the threshold for stalkers (Dreßing, 2014). And in the last few years, the emergence and availability of generative artificial intelligence, including technologies such as deepfakes (see for example Hancock & Bailenson, 2021), has opened new and alarming pathways for abuse. A fabricated picture or video clip, convincingly portraying someone in a compromising situation, can be used for blackmail, reputational sabotage, or psychological torment. These developments not only challenge existing legal frameworks but also demand a more nuanced understanding of how online violence affects the human brain and behaviour.

This article aims to explore the neuropsychological impact of cyberstalking, examining how persistent digital harassment alters victims’ cognitive and emotional states. Furthermore, it addresses the legal implications of these evolving threats, considering how legislation is adapting—or failing to adapt—to the changing technological landscape. Finally, it investigates the dual role of artificial intelligence: on one hand, as a tool that may enable new forms of cyberstalking, and on the other, as a potential ally in detecting, preventing, and mitigating online abuse.

Cyberstalking: A New Form of Violence in the Digital Age

While traditional stalking is often rooted in physical proximity—such as following someone home or showing up uninvited—cyberstalking thrives in the invisible architecture of the internet and/or someone’s personal devices (Ogilvie, 2000). It is not simply an online extension of offline behaviour, but a distinct form of violence, shaped by the unique affordances of digital technologies: anonymity, scalability, permanence, and reach. Its digital shadows infiltrate the spaces we use to connect, work, and express ourselves, often without clear boundaries or warning signs.

One prevalent form of cyberstalking is trolling, in which an individual deliberately posts inflammatory or distressing messages in order to provoke, manipulate, or harm the emotional wellbeing of a targeted person (Fichman & Sanfilippo, 2016). Online trolls are often ordinary people rather than stalkers (Cheng et al., 2017). Yet while trolling may be dismissed in popular discourse as mere mischief or “edgy” humour, sustained trolling—especially when personalised—can form part of a deeply traumatising campaign.

Another frequent tactic is doxing, where private or sensitive information is publicly released online with malicious intent (Douglas, 2016). This might include addresses, phone numbers, workplace details (Snyder et al., 2017), or personal (often nude) photos (MacAllister, 2016), often accompanied by explicit or implicit threats. It is not uncommon for doxing victims to become perpetrators of doxing themselves (Chen et al., 2019). Doxing is particularly harmful when it invites offline harassment or violence, or when it targets marginalised or vulnerable individuals.

Then there is impersonation, which involves forging identities—through fake email accounts, cloned social media profiles, or spoofed digital communication—to deceive or harm others (Goga et al. 2015). Impersonation can be used to cheat on an online test (see for example Gathuri et al. 2014), but also to spread misinformation, damage reputations, or lure victims into unsafe or humiliating situations (Simmons et al. 2020). As the examples below show, such acts are rarely abstract: they have tangible, terrifying consequences in the real world.

Lived Experiences of Cyberstalking

My motivation to explore the phenomenon of cyberstalking in an academic context stems not only from scholarly interest, but also from lived experience—both personal and observed within neurodivergent peer communities. In particular, conversations in support groups for autistic adults have repeatedly highlighted the unique vulnerabilities of neurodivergent individuals in digital environments, especially in contexts of trust, communication, and emotional safety (Mussies, 2024).

When I was eighteen, I was in a relationship with a boy who became increasingly jealous of my best friend. Without my knowledge, he created a new email account in her name and began placing sexually explicit advertisements on various websites. These listings included her real address. Men began arriving at her family home—where she lived with her parents and older brother—expecting a paid sexual encounter. The impact on her safety, reputation, and sense of security was profound and deeply traumatising. Suspicion soon fell on me, her best friend, as I was also involved in modelling; only later did the IP addresses reveal that the ads had been posted from the PC of the boy I was dating. It was incredibly painful to be looked at with distrust by my best friend and her family. The fear, shame, and chaos this inflicted on all people involved was incalculable. And all of it was enabled by a few clicks on a keyboard.

Roughly ten years ago, I experienced a second, more technologically sophisticated incident, even more insidious in its psychological complexity. My then-partner’s email account was hacked, and I began receiving hate-filled messages, some of which appeared to come from the woman he was having an affair with. These included graphic death threats. It was later revealed that a spoofing tool had been used—software designed to make an email appear as though it originates from a legitimate, known source. The messages seemed to come from her university email address, giving them a false veneer of credibility and escalating the psychological harm. The emotional turmoil was overwhelming, as I found myself suffering more from the stalking than from the affair itself. I felt betrayed not only by my partner but also by the uncertainty and paranoia that followed. I blamed him, thinking that if he hadn’t cheated, this wouldn’t have happened. Yet, I couldn’t shake the feeling that he, too, held me responsible—for I had spoken about the affair in an autism group, which likely led someone from that group to take it upon themselves to invade my partner’s inbox, further complicating the pain I was already experiencing.

I share these two episodes here as cautionary tales because they exemplify how digital tools can be weaponised to harm. For reasons of confidentiality, I cannot go into details of the many similar stories shared in the support groups for autistic adults I participate in, but it is clear that these are not isolated incidents. Among autistic individuals—particularly women and youngsters—there is a consistent pattern of increased vulnerability to online abuse, often linked to traits such as deep trust, literal communication, and a desire to connect (Macmillan et al., 2020). In the second case I described, however, the likely perpetrator was himself an autistic man. This intersection of neurodivergence and harmful behaviour complicates the narrative and calls for a nuanced, ethically responsible approach: one that neither pathologises nor excuses, but seeks to understand the broader social, psychological, and technological dynamics at play (Baciu & Worthington, 2024). Vulnerability can exist on both sides, and we must build a discourse that acknowledges harm without dehumanising the harmed or the harming.

Impact on Victims

Once dismissed as harmless or “not real,” online abuse is now recognised as a legitimate and deeply harmful form of violence (Lewis et al., 2017). Just as the old saying “sticks and stones may break my bones, but words will never hurt me” has been debunked by decades of psychological research (Sharp, 2014), we now understand that digital aggression can be just as neurologically damaging as physical harm (Nathan et al., 2023). Functional neuroimaging studies have shown that social pain—such as rejection, humiliation, or harassment—activates similar brain regions to those involved in physical pain, including the anterior cingulate cortex (Zhang et al., 2019). In this light, cyberstalking cannot be minimised as mere ‘online drama’. The suffering it causes is real, embodied, and enduring (Laricchiuta et al., 2023).

Victims of cyberstalking frequently report heightened levels of anxiety, depressive symptoms, and emotional dysregulation (Begotti et al. 2020). Intrusive thoughts, sleep disturbances, and feelings of helplessness are common, especially when the perpetrator’s identity is unknown or the harassment is persistent and unpredictable (Nesin et al. 2025). In many cases, these psychological reactions escalate into clinical conditions such as generalised anxiety disorder (GAD) or major depressive disorder (MDD). Post-traumatic stress disorder (PTSD) is also prevalent, particularly when digital threats are accompanied by real-world stalking or perceived risk of physical violence. The digital nature of the abuse often contributes to a sense of inescapability, as victims may feel they are being watched or pursued across multiple platforms, even in the perceived safety of their own homes.

The behavioural aftermath of cyberstalking frequently includes hypervigilance and social withdrawal. Many victims report compulsively checking their digital presence, fearing further intrusion or escalation. Others feel compelled to delete social media accounts, change email addresses, or avoid online interactions altogether—actions which, while protective, can also result in social isolation and the loss of valuable support networks. These responses are not simply behavioural choices but are often neurobiologically driven, reflecting adaptations to perceived threat and trauma. For neurodivergent individuals, the toll may be even more profound. The loss of online safe spaces—often key to social connection, self-expression, and identity formation—can be especially destabilising. Avoidance of digital platforms may also hinder academic, professional, or creative pursuits, further entrenching the impact of the abuse.

Importantly, the impact of cyberstalking does not end when a laptop is closed or a smartphone is switched off. In a society increasingly reliant on digital infrastructure for everything from personal relationships to government access, complete withdrawal from online life is neither practical nor possible. Moreover, the stress generated by ongoing digital harassment becomes embedded in the body, activating chronic physiological responses that affect mental and physical health alike. Understanding the neuropsychological toll of online harassment is essential for shaping effective legal frameworks, support systems, and preventive interventions.

The Role of Technology and Artificial Intelligence in Cyberstalking

Reflecting on my own lived experiences with cyberstalking, it is unsettling to imagine the damage my stalkers could have done had they possessed access to today’s Large Language Models or generative AI tools. The harm they inflicted with relatively limited digital means was already severe; if they had been able to generate convincing fake messages, imitate voices, or manipulate images and videos with the ease now afforded by deep learning technologies, the consequences could have been far more psychologically and socially destructive. This sobering thought underscores the urgent need for nuanced research and regulation surrounding the dual-edged role of AI in online abuse.

At the same time, these very technologies also present a powerful opportunity to combat cyberstalking in innovative ways. Artificial intelligence can be a valuable ally in recognising harmful behavioural patterns before they escalate into sustained harassment. Algorithms trained on large datasets of online interactions can identify subtle yet consistent markers of abusive intent—such as obsessive commenting, unsolicited messaging, or the use of threatening language. Platforms are increasingly experimenting with AI-driven systems that flag suspicious behaviour for human moderation, thus increasing response speed and accuracy.

Machine learning, in particular, enables systems to move beyond keyword detection and towards contextual understanding. Rather than relying on static lists of ‘prohibited’ terms, these models can be trained to recognise when the tone, frequency, and direction of messages suggest a stalking pattern. For instance, repeated contact attempts after being blocked, or persistent mentions of a target across different platforms, can be automatically highlighted for review. Such tools are already being developed in the cybersecurity sector and offer promising applications in both prevention and intervention strategies.
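The kind of contextual signal described above can be illustrated with a minimal sketch. The Python example below is hypothetical and deliberately simplified (the class and method names are my own, not any platform’s API): it tracks a single contextual marker, repeated contact attempts after an explicit block, which no static list of ‘prohibited’ terms could ever catch.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ContactLog:
    """Tracks contact attempts toward a single target account."""
    blocked: set = field(default_factory=set)  # senders the target has blocked
    attempts_after_block: dict = field(default_factory=lambda: defaultdict(int))

    def record_block(self, sender: str) -> None:
        self.blocked.add(sender)

    def record_message(self, sender: str) -> None:
        if sender in self.blocked:
            # Contact persisting after an explicit block is a contextual
            # signal of intent, independent of the message's wording.
            self.attempts_after_block[sender] += 1

    def flag_for_review(self, threshold: int = 3) -> list:
        """Return senders whose post-block persistence meets the threshold."""
        return [s for s, n in self.attempts_after_block.items() if n >= threshold]

log = ContactLog()
log.record_block("account_x")
for _ in range(4):                 # four contact attempts after being blocked
    log.record_message("account_x")
print(log.flag_for_review())       # -> ['account_x']
```

A production system would of course combine many such features (tone, frequency, cross-platform mentions) in a trained model; the point here is only that the signal is behavioural and relational, not lexical.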

However, as we empower AI to distinguish between benign and harmful digital behaviours, we also blur the lines of what constitutes 'normal' online interaction. The Turing Test, originally devised to evaluate a machine’s ability to mimic human conversation, finds a new relevance here. When automated systems judge online communication, the question becomes: who decides what is human, what is acceptable, and what is threatening? This ambiguity invites further philosophical and ethical reflection.

Ethically, the deployment of AI for behavioural monitoring presents its own challenges. While victims of cyberstalking may benefit greatly from earlier intervention, there is an inherent tension between safety and surveillance. Privacy rights must be carefully balanced against the collective need for protection. Transparency in algorithmic decision-making, data governance, and appeal mechanisms is essential to prevent the misuse or overreach of these tools. Furthermore, particular caution must be exercised to ensure that neurodivergent users—whose communication styles may deviate from normative standards—are not unfairly targeted by automated systems.

Ultimately, AI is neither inherently liberating nor inherently dangerous—it is a tool shaped by the values and intentions of those who design and deploy it. In the context of cyberstalking, its potential for both harm and healing demands critical examination, robust dialogue, and ethical foresight.

Legal Implications and Legislation on Cyberstalking

Cyberstalking presents a complex legal challenge due to the evolving nature of technology and the diversity of behaviours it encompasses. In the United States, all 50 states have enacted anti-stalking legislation, alongside federal provisions such as the Computer Fraud and Abuse Act (CFAA). However, these legal frameworks vary significantly in their definitions and scope. While the CFAA primarily targets unauthorised access to computers, it is occasionally invoked in cases involving cyberstalking, particularly where there is evidence of digital intrusion. In Europe, the regulatory landscape differs by country. My home country, the Netherlands, for instance, includes stalking under Article 285b of the Dutch Penal Code, which criminalises persistent following, harassment, or other intrusions upon an individual’s privacy. Yet, despite such provisions, cyberstalking often falls into grey areas where technological nuance outpaces statutory clarity.

Definitional Challenges

A central difficulty in legislating against cyberstalking lies in the lack of a universally accepted definition of stalking itself. Meloy (2001) describes stalking as “a repetitive pattern of unwanted and threatening behaviour”, which may include telephone harassment, unwanted gifts, or other forms of intrusive contact. Scholars such as Malsch (2004) and Baas (2003) have further highlighted the behavioural diversity that complicates a precise legal classification, with consistent themes being obsessive focus on a specific individual, repeated unwanted contact, and a resultant sense of fear or anxiety in the victim.

Verkaik and Pemberton (2001) describe stalking in clinical-psychological terms as behaviour that is (a) repetitive, (b) directed at a specific individual, (c) experienced by the victim as intrusive and unwanted, and (d) causes fear. This conceptualisation spans both physical and digital spaces, emphasising the emotional and psychological impact rather than focusing solely on individual acts. An important related question is where stalking ends and bullying begins (Sheldon et al. 2019). Both involve repeated intrusion, often with the intent to cause fear or distress. Bullying typically implies a group dynamic and can occur in social or institutional settings, while stalking is more often dyadic and rooted in obsession or control. However, one-on-one bullying—especially online—may mimic the patterns and impacts of stalking, and the two may overlap more than legal definitions suggest.

As explained above, cyberstalking lacks a widely accepted definition (Wilson et al., 2022). It rarely occurs in isolation from other forms of stalking. As Spitzberg and Cupach (2007) note, it is part of a broader repertoire of stalking behaviours that include hyperintimacy (e.g., sending excessive or overly romantic gestures), indirect digital contact (emails, messages, social media interactions), surveillance, direct confrontation, coercion, and even physical aggression. In this light, the internet serves less as a distinct realm of harassment than as an extension of offline control, providing new tools for stalkers rather than representing a fundamentally separate category. Ferris and Gardner, cited by Finch (2001), argue that many instances of stalking emerge from the breakdown of ordinary relational processes: forming and dissolving romantic relationships. This observation aligns with findings that the majority of stalking victims are former partners, with stranger stalking accounting for only one in four cases.

The Role of Gender and Power Dynamics

Stalking is a profoundly gendered phenomenon. The National Violence Against Women Survey found that one in 12 women and one in 45 men have been stalked during their lifetimes. Approximately 70–80% of stalkers are men, with women being both the most common victims and the least likely to be taken seriously as perpetrators. Meloy (2001) notes that criminal justice responses are more robust when the accused is male and the victim female, while women accused of stalking are less likely to face prosecution. This reflects broader patterns of gendered assumptions about threat, obsession, and agency (see also Mullen & Pathé 1994).

One notable subtype of stalking—erotomanic stalking—is disproportionately perpetrated by women, and often rooted in a delusional belief in a romantic connection (Orion, 2001). The “Obsessive Love Wheel” (Moore, 2003) offers a conceptual model for understanding such cases, depicting an unending cycle of obsession, idealisation, and intrusive behaviour. The model outlines four stages of obsessive love: attraction, anxious, obsession, and destructive. The first stage is marked by an intense, often one-sided attraction, leading to idealisation and the desire for immediate intimacy. As the relationship progresses, feelings of fear and distrust emerge in the “anxious” and “obsession” stages, characterised by excessive monitoring (Mussies, 2013).

Legal Gaps and Evidentiary Challenges

Digital stalking raises unique evidentiary challenges (Al-Khateeb et al., 2017). The use of anonymous accounts, encryption, and VPNs can make it difficult to trace perpetrators. Even when evidence is available—screenshots, messages, login data—it may be insufficient to meet legal thresholds for prosecution, especially when behaviours are perceived as non-threatening in isolation. Moreover, the distinction between persistent online interaction and criminal stalking is often blurred. According to Sheridan et al. (2003), "stalking is an extraordinary crime, given that it may often consist of no more than the targeted repetition of an ostensibly ordinary or routine behavior." Such acts—liking a social media post, commenting repeatedly, or monitoring a profile—may seem innocuous, yet in context they can form part of a campaign of psychological coercion and fear.

Given the rapid evolution of technology, there is a growing consensus among scholars and policymakers that stalking legislation must be revisited. The current frameworks often lag behind the lived realities of digital abuse, particularly for neurodiverse and marginalised individuals, whose boundaries and experiences of intrusion may differ from normative assumptions. Artificial intelligence and digital forensic tools offer both opportunities and risks. While AI could assist in pattern recognition and early identification of stalking behaviours, it also raises concerns around privacy, profiling, and false positives. The key question remains: how can law balance the need to protect individuals from psychological harm without overreaching into legitimate expression or social interaction?

Neuropsychological Approach to Victims: A Comprehensive Analysis of the Impact

The neuropsychological consequences of cyberstalking are profound and multifaceted, as prolonged exposure to digital aggression induces significant alterations in the brain’s functioning, particularly in relation to chronic stress, social cognition, and coping mechanisms. From a neuroscientific viewpoint, repeated exposure to threatening or harassing stimuli activates the brain’s stress response systems (Bloomfield et al. 2019). The amygdala, a key structure involved in fear processing, becomes hyperresponsive in individuals who experience chronic threat, whether physical or digital. Over time, this heightened amygdalar activity may lead to alterations in the hypothalamic–pituitary–adrenal (HPA) axis, contributing to long-term dysregulation of cortisol and other stress hormones (Mbiydzenyuy & Qulu 2024). These changes have been associated with increased susceptibility to anxiety and mood disorders.

Moreover, the hippocampus—central to memory consolidation and contextual awareness—can be adversely affected by chronic stress (Kim et al. 2015; Kim & Kim 2023). Victims of cyberstalking often report a wide-ranging psychological impact (Jansen van Rensburg, 2017). This can include signs of memory fragmentation and difficulties with attention and executive functioning, which can impair their ability to work, study, or maintain social relationships (Worsley et al. 2017). In neurodivergent individuals, particularly those with heightened sensory sensitivity or difficulties with cognitive flexibility, these impacts may be amplified, leading to intensified emotional and behavioural responses.

Oliver Sacks, in his seminal works, eloquently described the unique and often profound ways in which individuals with neurological differences, such as autism, experience the world. His insights into the neuropsychological dimensions of individual experience underscore the complexity of trauma, especially for those whose cognitive frameworks may differ from the neurotypical population. In this vein, "autiethnography" (Mussies, 2020) offers a reflective, introspective approach to understanding the self through a blend of personal narrative and academic exploration, and can provide valuable insights into the neuropsychological effects of cyberstalking.

I now know, in hindsight, that I dissociated during that time. I couldn’t hold onto reality, couldn’t trust it and only remember fragments of it. I did EMDR therapy, multiple sessions, but it only helped somewhat – it’s as if parts of that time have dissolved into fog. It was too confusing, and I wasn’t stable. My entire sense of what was real shifted (Mussies, personal correspondence, April 2025).

The lived experiences described in this vignette are not drama or attention-seeking; they were an involuntary response to real events that overwhelmed me. The experience of losing grip on reality is a hallmark of victims of stalking, particularly those who endure sustained and targeted psychological harassment (Pathé, 2002). Such states are not merely emotional reactions; they are the result of profound neuropsychological disturbances. In many cases, the victims of cyberstalking, driven by their need to understand and regain control, may become participants in a destructive game of online cat and mouse. In attempting to unmask the identity of the perpetrator, they may find themselves engaging in behaviours that mirror those of the stalker, obsessively searching, interrogating, and sometimes even attempting to manipulate the situation to gather information. This cyclical engagement in pursuit of the perpetrator may blur the lines between victim and perpetrator, creating a complex psychological dynamic where the victim's sense of self is increasingly fractured.

This distortion of reality is framed neurologically by the brain's response to chronic stress. As mentioned earlier, the human brain is exquisitely sensitive to stress, particularly in situations of prolonged psychological harassment. The chronic activation of the hypothalamic-pituitary-adrenal (HPA) axis leads to elevated cortisol levels, which, over time, can alter brain regions like the hippocampus, responsible for memory and emotional regulation. This chronic stress response impairs cognitive functions, including attention, memory, and decision-making, making it more difficult for the victim to process and interpret reality accurately. The limbic system, especially the amygdala, becomes hyperactive, heightening feelings of fear and anxiety. This neurobiological cascade perpetuates a state of hypervigilance, where the victim remains on constant alert, even when the stalking behaviour may have subsided.

The impact of cyberstalking extends beyond the cognitive and emotional realms, affecting social cognition and relationships. The pervasive nature of digital aggression disrupts the ability to engage in healthy social interactions, causing a shift in how the victim perceives both their current and potential relationships. The distorted view of trust and safety, born from repeated negative experiences in the digital space, leads to a phenomenon of "digital estrangement." Victims often experience a profound sense of alienation and emotional detachment from others, unable to trust social cues or interpret the intentions of others. Over time, this erosion of trust undermines the victim’s ability to form meaningful connections, both online and offline, resulting in increased social anxiety, a diminished sense of safety, and a fractured social support network. This dynamic deepens the psychological toll of cyberstalking, as the victim's emotional and cognitive resources are depleted by both the harassment and the constant struggle to make sense of a fragmented and unstable reality.

Psychological Adaptations and the Potential Role of AI

In response to the heightened psychological stressors induced by cyberstalking, victims employ a diverse array of coping mechanisms, often shaped by their individual resilience, personality traits, and the intensity of the harassment. These strategies are frequently aimed at reducing the immediate emotional burden, though their long-term efficacy remains uncertain. Among the more common responses are emotional numbing, social withdrawal, and hypervigilance. These mechanisms can provide temporary relief by reducing emotional engagement with the source of distress, but they often exacerbate the victim’s mental health challenges over time. For instance, emotional numbing may protect the individual from the immediate pain of cyberstalking but can also impede the natural emotional processing needed for recovery. Similarly, social withdrawal, though providing temporary respite, can lead to further isolation, compounding the sense of alienation that many victims experience.

One cognitive-behavioural approach that has shown promise is reframing, in which individuals attempt to alter their perception of the stalking event in a way that lessens its emotional impact. Additionally, seeking social support is another critical strategy that can foster resilience; however, these methods may become progressively less effective as the harassment persists. The sustained nature of cyberstalking often creates an environment where the victim’s coping resources are depleted, making it increasingly difficult to utilise these strategies effectively. In these cases, there is a significant need for external support mechanisms that can bolster the victim’s ability to cope with ongoing trauma.

In recent years, advances in artificial intelligence (AI) have begun to offer promising solutions for victims of cyberstalking, particularly in the form of personalised interventions. AI has the potential to revolutionise both the prevention and support of cyberstalking victims by providing tailored, real-time assistance that adapts to the unique needs of the individual. AI-powered mental health applications are at the forefront of these developments, offering victims the ability to receive personalised psychological support. These apps use sophisticated algorithms to track emotional responses and behavioural patterns, enabling them to provide real-time feedback and recommend appropriate coping strategies based on the user’s ongoing emotional state. For instance, AI-driven platforms can analyse a user’s responses to certain stressors—such as triggers in online interactions—and suggest relaxation techniques, mindfulness exercises, or cognitive restructuring strategies to mitigate anxiety and emotional distress.

One promising application of AI is the development of virtual therapists or chatbots, which can offer immediate, 24/7 support for victims of cyberstalking. These AI-driven tools are designed to simulate therapeutic conversations, providing a safe and confidential space for victims to express their feelings and process the emotional fallout of their experiences (Khawaja & Bélisle-Pipon, 2023). Through natural language processing (NLP) and sentiment analysis, these systems can identify emotional cues in the victim’s communication, offering empathetic responses or prompting coping strategies tailored to their immediate emotional state. Such tools can be particularly valuable for victims who may feel isolated, as they provide instant access to support in times of distress, reducing the emotional burden of waiting for professional help.
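To make this mechanism concrete, the sentiment-cue step can be sketched as follows. This is a deliberately minimal illustration: the distress lexicon, the two-level threshold, and the prompt texts are assumptions made for demonstration, whereas a deployed chatbot would rely on trained NLP models for sentiment analysis rather than keyword matching.

```python
# Toy sketch of sentiment-cue detection in a support chatbot.
# The lexicon, threshold, and prompts below are illustrative
# assumptions, not components of any real system.

DISTRESS_CUES = {"scared", "afraid", "threatened", "hopeless", "followed", "watching"}

COPING_PROMPTS = {
    "high": ("You may be in acute distress. Would you like a grounding "
             "exercise, or help contacting a support line?"),
    "low": "Thank you for sharing. Would it help to talk through what happened?",
}

def assess_message(text: str) -> str:
    """Return a coping prompt keyed to the number of distress cues found."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    hits = words & DISTRESS_CUES
    level = "high" if len(hits) >= 2 else "low"
    return COPING_PROMPTS[level]

print(assess_message("I feel scared, like someone is watching everything I post."))
```

Even this toy version shows the design point made above: the system’s reply is conditioned on cues in the victim’s own wording, so support can be matched to the immediate emotional state rather than delivered generically.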

In addition to supporting victims emotionally, AI also holds great potential for preventing cyberstalking by identifying harmful behaviours in real time (Sinha & Sinha, 2024). By leveraging machine learning algorithms, AI can monitor online platforms for patterns of malicious activity, such as repeated threats, harassment, or manipulative tactics commonly employed by cyberstalkers. These algorithms can be trained on vast datasets of online interactions, allowing them to detect subtle, emerging signs of cyberstalking that may not be immediately obvious to human moderators. For example, AI systems can flag unusual communication patterns, such as a sudden increase in aggressive messaging or the use of specific language associated with manipulation or control, triggering automatic alerts or intervention mechanisms. Early identification of these behaviours can enable both victims and platform administrators to take proactive measures before the situation escalates further, such as blocking the perpetrator, issuing warnings, or providing resources for the victim.
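The escalation signal described here, a sudden increase in aggressive messaging, can be sketched as a sliding-window counter. The keyword markers, one-hour window, and three-message threshold are illustrative assumptions; a production moderation pipeline would use trained classifiers and platform-specific policies in their place.

```python
from collections import deque
from datetime import datetime, timedelta

# Sketch of an escalation detector: count messages flagged as
# aggressive within a sliding time window and signal an alert when
# the count crosses a threshold. Markers, window size, and threshold
# are illustrative assumptions only.

AGGRESSIVE_MARKERS = {"you'll regret", "i know where", "watch yourself"}

class EscalationMonitor:
    def __init__(self, window=timedelta(hours=1), threshold=3):
        self.window = window
        self.threshold = threshold
        self.flagged = deque()  # timestamps of flagged messages

    def observe(self, timestamp: datetime, text: str) -> bool:
        """Record a message; return True if an escalation alert should fire."""
        if any(marker in text.lower() for marker in AGGRESSIVE_MARKERS):
            self.flagged.append(timestamp)
        # Drop flags that have aged out of the sliding window.
        while self.flagged and timestamp - self.flagged[0] > self.window:
            self.flagged.popleft()
        return len(self.flagged) >= self.threshold
```

The sliding window is what captures “sudden increase” rather than mere presence: three hostile messages spread over months would never trigger the alert, while the same three within an hour would, which is the proactive-intervention scenario the paragraph above envisages.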

Moreover, AI has the capacity to enhance collaboration between technology platforms, support organisations, and, where appropriate, law enforcement, by identifying patterns of online harassment and facilitating timely interventions. Predictive algorithms and natural language processing tools could assist in flagging repeated forms of unwanted contact, impersonation, or doxing behaviour. However, such developments must be approached with caution. Surveillance technologies, even when intended for protection, risk being disproportionately deployed against already marginalised populations, including neurodivergent individuals, racialised communities, and gender minorities. Without transparent protocols, robust accountability mechanisms, and input from affected groups, there is a significant risk that AI tools meant to support victims could reinforce systems of control, rather than liberation. Thus, any deployment of AI in this context must be guided by ethical principles of proportionality, privacy, and participatory design.
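A minimal sketch of the repeated-unwanted-contact flag might look as follows. In keeping with the caution above, it does not act automatically: it only tallies contact attempts that occur after the recipient has already rejected contact, and surfaces the case for human review once a threshold is reached. The actor-recipient pairing and the review threshold are assumptions made for illustration.

```python
from collections import defaultdict

# Sketch of a repeated-unwanted-contact ledger. Contact attempts that
# follow an explicit rejection (block or report) accumulate per
# actor-recipient pair; crossing the threshold queues the case for
# human review rather than triggering automated enforcement.

class UnwantedContactLedger:
    def __init__(self, review_threshold=2):
        self.rejected = set()             # (actor, recipient) pairs
        self.attempts = defaultdict(int)  # contacts made after rejection
        self.review_threshold = review_threshold

    def record_rejection(self, actor: str, recipient: str) -> None:
        """Note that the recipient has blocked or reported this actor."""
        self.rejected.add((actor, recipient))

    def record_contact(self, actor: str, recipient: str) -> bool:
        """Log a contact attempt; True means it warrants human review."""
        if (actor, recipient) in self.rejected:
            self.attempts[(actor, recipient)] += 1
        return self.attempts[(actor, recipient)] >= self.review_threshold
```

Keeping a human decision between the flag and any sanction is one concrete way to honour the proportionality and accountability principles argued for above, since the system records a pattern but does not itself punish.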

The integration of AI into the broader ecosystem of cyberstalking prevention and support presents both opportunities and challenges. While AI can undoubtedly improve the detection of harmful behaviours and provide immediate emotional support for victims, it is essential to address issues related to data privacy, ethical concerns, and the need for human oversight. For AI systems to be effective, they must be trained on diverse datasets that reflect the full range of experiences faced by cyberstalking victims, ensuring that interventions are appropriately personalised and effective. Additionally, there must be careful consideration of how AI systems handle sensitive data, ensuring that victims' privacy is protected and that their information is not exploited or misused.

In sum, AI represents a transformative tool in both the prevention of cyberstalking and the support of its victims. By offering personalised, real-time interventions, AI can significantly enhance the coping strategies available to victims, while also providing a proactive approach to identifying and mitigating harmful behaviours online. As technology continues to advance, the potential for AI to become an integral part of the fight against cyberstalking grows, providing new avenues for prevention, support, and recovery. However, as with any technological intervention, its successful implementation will require careful attention to ethical considerations, data privacy, and the integration of human expertise to ensure that the needs of victims are met with the utmost care and precision.

Last but not least, recent scholarship has emphasised the need for interdisciplinary models that account for both social and biological contributors to cyber violence. In this regard, Owen’s (2014/2016) Genetic-Social framework offers a valuable conceptual lens, introducing meta-constructs such as the biological variable, psychobiography, and neuro-agency to highlight how genetic disposition, neurological functioning, and environmental triggers may interact to produce violent online behaviour. Rather than endorsing a reductive form of genetic determinism, this framework advocates a multifactorial and ontologically flexible approach—one that accommodates both embodied cognition and the influence of external structures such as digital environments. When applied to cyberstalking, this perspective enriches our understanding of how neurobiological vulnerabilities, especially under conditions of chronic stress and digital disinhibition, may shape not only the victim’s psychological response but also the perpetrator’s compulsions. As such, predictive models grounded in this framework could enhance both preventative and rehabilitative interventions—particularly if integrated with ethically guided forms of artificial intelligence capable of detecting emergent behavioural patterns and offering early, personalised support.

Conclusion

Cyberstalking is neither a trivial annoyance nor a purely technological challenge—it is a form of violence with deeply embedded psychological, social, and legal consequences. This article has demonstrated how digital aggression is not only neurologically comparable to physical harm but can also reshape an individual’s perception of safety, identity, and reality itself. For neurodivergent individuals in particular, the vulnerability to such attacks is heightened, demanding not only empathy but also tailored, informed interventions. The legal system continues to lag behind technological advances, and definitional ambiguities persist—yet innovation in artificial intelligence offers both promising tools and ethical dilemmas in the detection, prevention, and support of victims.

To move forward, a multidisciplinary approach is vital—one that integrates neuroscience, legal reform, platform accountability, and survivor-informed narratives. The path to justice must include not only stronger laws and technological safeguards, but also recognition of the psychological scars left behind. By acknowledging the complexity of both the harm and those who are harmed, we open the door to more compassionate and effective solutions.

Future research should focus on the development of neurodivergent-inclusive trauma support systems, ethically governed AI models for early detection of digital abuse, and comparative legal studies to identify best practices across jurisdictions. Policymakers must prioritise regulatory frameworks that protect without over-surveilling, and funding should be directed towards cross-sector collaborations between researchers, digital platforms, mental health professionals, and advocacy groups. In the age of AI, the shadows cast by digital violence must not go unnoticed. It is time we illuminate them.

References

Al-Khateeb, H. M., Epiphaniou, G., Alhaboby, Z. A., Barnes, J., & Short, E. (2017). Cyberstalking: Investigating formal intervention and the role of Corporate Social Responsibility. Telematics and Informatics, 34(4), 339-349.

Baciu, A., & Worthington, R. (2024). Is there a link between neurodiversity and stalking? A systematic review. The Journal of Forensic Practice.

Begotti, T., Bollo, M., & Acquadro Maran, D. (2020). Coping strategies and anxiety and depressive symptoms in young adult victims of cyberstalking: A questionnaire survey in an Italian sample. Future Internet, 12(8), 136.

Bloomfield, M. A., McCutcheon, R. A., Kempton, M., Freeman, T. P., & Howes, O. (2019). The effects of psychosocial stress on dopaminergic function and the acute stress response. eLife, 8, e46797.

Chayko, M. (2014). Techno‐social life: The internet, digital technology, and social connectedness. Sociology Compass, 8(7), 976-991.

Chen, M., Cheung, A. S. Y., & Chan, K. L. (2019). Doxing: What adolescents look for and their intentions. International Journal of Environmental Research and Public Health, 16(2), 218.

Cheng, J., Bernstein, M., Danescu-Niculescu-Mizil, C., & Leskovec, J. (2017). Anyone can become a troll: Causes of trolling behavior in online discussions. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (pp. 1217-1230).

Douglas, D. M. (2016). Doxing: A conceptual analysis. Ethics and Information Technology, 18(3), 199-210.

Dreßing, H., Bailer, J., Anders, A., Wagner, H., & Gallas, C. (2014). Cyberstalking in a large sample of social network users: Prevalence, characteristics, and impact upon victims. Cyberpsychology, Behavior, and Social Networking, 17(2), 61-67.

Fichman, P., & Sanfilippo, M. R. (2016). Online trolling and its perpetrators: Under the cyberbridge. Rowman & Littlefield.

Gathuri, J. W., Luvanda, A., Matende, S., & Kamundi, S. (2014). Impersonation challenges associated with e-assessment of university students. Journal of Information Engineering and Applications, 4(7), 60-68.

Goga, O., Venkatadri, G., & Gummadi, K. P. (2015). The doppelgänger bot attack: Exploring identity impersonation in online social networks. 141-153.

Graham, M., & Dutton, W. H. (2019). Society and the internet: How networks of information and communication are changing our lives. Oxford University Press.

Hancock, J. T., & Bailenson, J. N. (2021). The social impact of deepfakes. Cyberpsychology, Behavior, and Social Networking, 24(3), 149-152.

Kaur, P., Dhir, A., Tandon, A., Alzeiby, E. A., & Abohassan, A. A. (2021). A systematic literature review on cyberstalking. An analysis of past achievements and future promises. Technological Forecasting and Social Change, 163, 120426.

Khawaja, Z., & Bélisle-Pipon, J.-C. (2023). Your robot therapist is not your therapist: Understanding the role of AI-powered mental health chatbots. Frontiers in Digital Health, 5, 1278186.

Kim, E. J., & Kim, J. J. (2023). Neurocognitive effects of stress: A metaparadigm perspective. Molecular Psychiatry, 28(7), 2750-2763.

Kim, E. J., Pellman, B., & Kim, J. J. (2015). Stress effects on the hippocampus: A critical review. Learning & Memory, 22(9), 411-416.

Laricchiuta, D., Panuccio, A., Picerni, E., Biondo, D., Genovesi, B., & Petrosini, L. (2023). The body keeps the score: The neurobiological profile of traumatized adolescents. Neuroscience & Biobehavioral Reviews, 145, 105033.

Lewis, R., Rowe, M., & Wiper, C. (2017). Online abuse of feminists as an emerging form of violence against women and girls. British Journal of Criminology, 57(6), 1462-1481.

MacAllister, J. M. (2016). The doxing dilemma: Seeking a remedy for the malicious publication of personal information. Fordham L. Rev., 85, 2451.

Macmillan, K., Berg, T., Just, M., & Stewart, M. (2020). Are autistic children more vulnerable online? Relating autism to online safety, child wellbeing and parental risk management. 1-11.

Mbiydzenyuy, N. E., & Qulu, L.-A. (2024). Stress, hypothalamic-pituitary-adrenal axis, hypothalamic-pituitary-gonadal axis, and aggression. Metabolic Brain Disease, 1-24.

Mullen, P. E., & Pathé, M. (1994). Stalking and the pathologies of love. Australian & New Zealand Journal of Psychiatry, 28(3), 469-477.

Mussies, M. (n.d.). Polifemo (MA thesis, Musicology), Utrecht 2013. Retrieved 9 April 2025, from https://www.academia.edu/35508721/Polifemo_MA_thesis_Musicology_Utrecht_2013

Mussies, M. (2020). Autiethnography. Transformative Works and Cultures, 33. https://doi.org/10.3983/twc.2020.1789

Mussies, M. (2024). Inside the Autside: A Misfit Manifesto [Revised Edition].

Nesin, S. M., Sharma, K., Burghate, K. N., & Anthony, M. (2025). Neurobiology of emotional regulation in cyberbullying victims. Frontiers in Psychology, 16, 1473807.

Ogilvie, E. (2000). Cyberstalking. Trends & Issues in Crime & Criminal Justice, 166.

Orion, D. (2001). Erotomania, stalking, and stalkers: A personal experience with a professional perspective. Stalking crimes and victim protection: Prevention, intervention, threat assessment, and case management, 69-80.

Owen, T. (2014). Criminological theory: A genetic-social approach. Springer.

Owen, T. (2016). Cyber-violence: Towards a predictive model, drawing upon genetics, psychology and neuroscience. International Journal of Criminology and Sociological Theory, 9(1), 1-11.

Pathé, M. (2002). Surviving stalking. Cambridge University Press.

Sharp, W. (2014). Sticks and stones may break my bones, but what about words? International Journal of Group Psychotherapy, 64(3), 280-296.

Sheldon, P., Rauschnabel, P. A., & Honeycutt, J. M. (2019). Cyberstalking and bullying. The dark side of social media, 43, 58.

Sheridan, L. P., Blaauw, E., & Davies, G. M. (2003). Stalking: Knowns and unknowns. Trauma, Violence, & Abuse, 4(2), 148-162.

Sheridan, L. P., & Grant, T. (2007). Is cyberstalking different? Psychology, Crime & Law, 13(6), 627-640.

Short, E., Guppy, A., Hart, J. A., & Barnes, J. (2015). The impact of cyberstalking. Studies in Media and Communication, 3(2), 23-37.

Siemieniecka, D., & Skibińska, M. (2019). Stalking and cyberstalking as a form of violence. 3, 403-413.

Simmons, M., & Lee, J. S. (2020). Catfishing: A look into online dating and impersonation. 349-358.

Sinha, P. C., & Sinha, R. (2024). Cloud Computing and AI for Cyberstalking Prevention: A Comprehensive Approach. 32-38.

Snyder, P., Doerfler, P., Kanich, C., & McCoy, D. (2017). Fifteen minutes of unwanted fame: Detecting and characterizing doxing. 432-444.

Spitzberg, B. H., & Hoobler, G. (2002). Cyberstalking and the technologies of interpersonal terrorism. New Media & Society, 4(1), 71-92.

Waldman, A. E. (2018). Safe social spaces. Wash. UL Rev., 96, 1537.

Wilson, C., Sheridan, L., & Garratt-Reed, D. (2022). What is cyberstalking? A review of measurements. Journal of Interpersonal Violence, 37(11-12), NP9763-NP9783.

Worsley, J. D., Wheatcroft, J. M., Short, E., & Corcoran, R. (2017). Victims’ voices: Understanding the emotional impact of cyberstalking and individuals’ coping responses. Sage Open, 7(2), 2158244017710292.

Zhang, M., Zhang, Y., & Kong, Y. (2019). Interaction between social pain and physical pain. Brain Science Advances, 5(4), 265-273.


Disclaimer: The International Journal for Crime, Law, and AI is committed to fostering academic freedom and open discourse. The views and opinions expressed in published articles are solely those of the authors and do not necessarily reflect the views of the journal, its editorial team, or its affiliates. We encourage diverse perspectives and critical discussions while upholding academic integrity and respect for all viewpoints.
