August 18th 2025
Bridging Law and Algorithms: A Roadmap for AI Assisted Content Moderation to Combat Online Gender-Based Harassment in Namibia
By Juliet Madamombe

Juliet Madamombe is an entrepreneur, business consultant and author based in Windhoek, Namibia. She is a Doctor of Business Administration candidate at Namibia Business School (UNAM), where her dissertation investigates China's Belt and Road Initiative (BRI) for green hydrogen and renewable energy development in Namibia: opportunities for economic growth and energy security. Juliet is the owner and founder of Phoenix Alliance Group (Pty) Ltd, Global Edge Training Institute, and Danllet Real Estate. Beyond academia, Juliet writes business columns for The New Era newspaper in Namibia. She has published an article titled Assessing Innovative Capabilities in the Namibian Road Freight Transport with the International Journal of Research & Innovation in Social Science. Find Juliet Madamombe on LinkedIn.

Gender-based cyber-harassment (GBCH) is eroding the mental health, livelihoods and civic participation of Namibian women and girls, yet the country’s legal and institutional responses remain fragmented. The Electronic Transactions Act 4 of 2019 provides a useful foundation for e-evidence and takedown notices, but the long-awaited Cybercrime Bill, which would criminalize cyber-stalking, non-consensual image sharing and online hate, has stalled in Parliament. In contrast, jurisdictions such as the European Union, India, China and Australia already oblige social-media platforms to deploy artificial-intelligence (AI) tools that automatically detect and remove harmful content. Drawing on recent comparative evidence, this article maps Namibia’s regulatory gap, explains the state of the art in AI-driven content moderation, and proposes a five-pillar policy roadmap that connects smart technology with victim-centred protection, transparency and due-process safeguards. By adapting global best practice to local linguistic and infrastructural realities, Namibia can create safer digital spaces without sacrificing fundamental rights.
The incidence of online abuse targeting women is increasing throughout sub-Saharan Africa. In Namibia, doxing, "revenge porn" leaks, stalking, and organized trolling increasingly pervade WhatsApp groups, Facebook pages, and X (formerly Twitter) threads. Victims seldom receive prompt takedowns or police support, partly because protocols for gathering digital evidence and filing charges remain ambiguous. At the same time, machine-learning techniques adept at identifying toxic language, hateful memes, or intimate-image abuse are advancing swiftly, and some legislators now require platforms to deploy them in the public interest. The central question examined here is straightforward: how can Namibia use AI-assisted content moderation to curb gender-based cyber harassment while upholding free speech and privacy protections?
Namibia's Electronic Transactions Act, enacted in 2019, confers legal validity on electronic documents and establishes notice-and-takedown procedures for service providers. However, it was drafted before the wave of image-based abuse and generative-AI manipulation that has transformed online risk. A dedicated Cybercrime Bill, first proposed in 2013 and amended several times since, would criminalize cyberstalking, phishing, ransomware, and non-consensual pornography, but its progress has been slow. In July 2025, the Minister of Information and Communication Technology attributed the delay to a shortage of domestic cyber-law expertise. Meanwhile, victims must rely on general civil-law remedies such as defamation and harassment statutes, which place the burden on complainants to identify anonymous offenders, preserve complex digital evidence, and pursue costly litigation. Platforms' voluntary policies, often adopted wholesale from global headquarters, are rarely adapted to local languages or cultural contexts. The result is a protection gap precisely where harm is growing fastest.
Numerous jurisdictions have moved decisively to mandate or strongly encourage automated moderation. The European Union's Digital Services Act, in force since February 2024, designates very large online platforms as systemic-risk operators and requires them to demonstrate the efficacy and proportionality of AI tools used to monitor illegal or harmful content, alongside submitting annual risk-assessment audits (Palka, 2024). India's Intermediary Guidelines and Digital Media Ethics Code of 2021 requires significant social media intermediaries to deploy automated systems that proactively identify child sexual abuse imagery and depictions of sexual violence; early assessments indicate that these tools speed content removal but also raise recurring concerns about bias (Prakash & Bhandari, 2022). China's 2022 Regulations on Algorithmic Recommendation Services require providers to register filtering algorithms and ensure that recommendations exclude illegal content, embedding state-supervised AI into every major platform (Zhang, 2024). Australia's Online Safety Act imposes comparable duties on search engines, social media platforms, and AI chatbots, including technical safeguards such as age verification to shield children from pornography and extreme violence (Harkin, 2024). Taken together, these examples show that AI-assisted moderation is no longer experimental; it has become a regulatory baseline across several legal traditions.
Recent design-science research suggests that hate-speech detectors combining large language models with explainability layers, such as SHAP or LIME, can increase moderator confidence and cut workload by one-third (Bunde, 2021). A 2025 study evaluating multimodal models across five social media platforms reported precision above ninety percent when text, image, and user-behavior signals were combined (Manche, Samaah, Tejaswini, & Myakala, 2025). Language coverage nonetheless remains a significant challenge: models trained predominantly on English often perform poorly on Afrikaans, Oshiwambo, or Nama slang, producing false negatives. Precision improves when sentiment shifts, repost frequency, and network structure are included, because organized harassment frequently follows a discernible pattern. Even the most sophisticated algorithms, however, require human oversight for appeals, humor, and political discourse, underscoring that AI assists rather than replaces human judgment.
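To make the explainability idea concrete, here is a minimal, standard-library-only sketch. The vocabulary, weights, and bias below are invented for illustration; real detectors learn weights from labeled data over a neural model and attach SHAP or LIME attributions, but the per-token contribution view a moderator sees follows the same logic.

```python
import math

# Toy linear scorer. WEIGHTS, BIAS, and the example message are hypothetical
# illustration values, not a real moderation model.
WEIGHTS = {"hate": 2.1, "stupid": 1.4, "leave": 0.6, "thanks": -1.2}
BIAS = -1.0

def score_with_attribution(text: str):
    """Return (flag probability, per-token contribution) for a message."""
    tokens = text.lower().split()
    contributions = {t: WEIGHTS.get(t, 0.0) for t in tokens}
    logit = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))  # logistic link
    return probability, contributions

prob, why = score_with_attribution("you stupid hate account leave now")
print(f"flag probability: {prob:.2f}")
# Show the tokens that drove the decision, strongest first.
for token, weight in sorted(why.items(), key=lambda kv: -kv[1]):
    if weight:
        print(f"  {token}: {weight:+.1f}")
```

Surfacing the contributing tokens alongside the score is what lets a human moderator confirm or overturn a flag quickly, which is the mechanism behind the reported reduction in moderator effort.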
Namibia already has several foundations for a credible response. The Electronic Transactions Act recognizes electronic evidence and sets removal standards. The Communications Regulatory Authority of Namibia (CRAN) and the national Computer Security Incident Response Team (NAM CSIRT) issue threat alerts and coordinate incident response. Technologically, 4G and forthcoming 5G networks give consumers high-bandwidth mobile data, while regional Amazon Web Services and Microsoft Azure nodes supply cloud-computing capacity for local startups.
Nonetheless, the gaps are significant. Absent the Cybercrime Bill, there is no criminal offense specifically targeting online gender violence, nor any legal obligation for platforms to provide adequate automated detection. Law enforcement agencies lack specialized cybercrime units, and courts lack standardized digital-forensics protocols. Domestic AI research in indigenous languages is nascent, and the pool of cybersecurity experts is small. Civil society organizations such as Sister Namibia raise awareness of online abuse, but legal aid for digital rights is scarce, and cyber-safety education in schools is inconsistent.
The first pillar is legislative reform. Expediting the Cybercrime Bill is imperative, but the legislation should be revised to create explicit offenses for gender-based cyber harassment and to require service providers to deploy appropriate automated detection technologies. The act should also include due-process protections modeled on the Digital Services Act, including forty-eight-hour appeal windows for disputed takedowns and transparency requirements for algorithmic decision-making.
The second pillar is institutional capability. A specialized Digital Gender Violence Desk should be established within NAM CSIRT, authorized to collect victim evidence, arrange preservation orders with platforms, and liaise with prosecutors. Concurrently, magistrates and law enforcement officials require specialized training in the acquisition and admissibility of digital evidence, drawing on frameworks established under South Africa's Cybercrimes Act of 2021. The third pillar underscores the need for technological localization. Government and donor-funded subsidies can encourage Namibian institutions and startups to create annotated corpora in Afrikaans, Oshiwambo, and Nama. Open model licenses would let global platforms incorporate these classifiers at minimal cost, reducing bias and improving detection precision.
Transparency and accountability are the fourth pillar. Platforms with over fifty thousand Namibian users must issue quarterly safety reports that include flagging rates, false positive ratios, and median removal timeframes, analogous to India's compliance reports. Such disclosures would enable regulators, researchers, and civil society organizations to assess performance and identify systemic issues.
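The metrics such a safety report would contain are simple to compute once platforms log moderation outcomes. The sketch below uses the standard library only; the log schema (flagged, overturned-on-appeal, hours-to-removal) and the sample figures are hypothetical, chosen to mirror the three disclosures the pillar names.

```python
from statistics import median

# Hypothetical moderation log: (was_flagged, overturned_on_appeal, hours_to_removal).
# An overturned flag is counted as a false positive for reporting purposes.
log = [
    (True, False, 4.0),
    (True, True, 12.0),   # removed, then restored on appeal
    (True, False, 2.5),
    (True, False, 30.0),
]

flagged = [entry for entry in log if entry[0]]
false_positives = [entry for entry in flagged if entry[1]]

report = {
    "items_flagged": len(flagged),
    "false_positive_ratio": len(false_positives) / len(flagged),
    "median_removal_hours": median(entry[2] for entry in flagged),
}
print(report)
```

Publishing these three numbers each quarter, broken down by harm category, would give regulators and researchers a comparable baseline across platforms.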
Finally, victim empowerment completes the framework. A comprehensive digital-justice platform should allow survivors to capture forensic screenshots, receive automated legal guidance, and connect with counselors. Zero-rated mobile-data agreements with telecommunications providers would ensure that cost does not impede access.
AI systems carry inherent risks. Algorithmic bias can suppress lawful discourse, particularly when minority dialects are under-represented in training data. Regular bias audits using local test suites, together with transparent appeal mechanisms, help mitigate this. Deep packet inspection or intrusive scanning may compromise privacy and chill expression; restricting analysis to user-generated content and applying data-minimization standards helps safeguard rights. Smaller domestic platforms may struggle with compliance costs, so tiered obligations, such as basic keyword filtering for services with fewer than ten thousand users, are recommended. Because offenders continually adapt through coded language or by embedding text within images, ongoing model retraining and effective community reporting mechanisms are necessary.
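The lightweight tier proposed for small services can be sketched in a few lines. The blocklist terms below are placeholders; a real deployment would source them from local civil-society partners and cover Afrikaans, Oshiwambo, and Nama variants, and normalization would need to go well beyond this to keep pace with coded language.

```python
import re

# Placeholder blocklist for illustration; real terms would come from local
# partners and cover Namibian languages, not just English.
BLOCKLIST = {"revengeporn", "doxx"}

def basic_filter(message: str) -> bool:
    """Return True if the message should be queued for human review.

    Lowercases the text and strips punctuation so trivial obfuscation
    such as 'd.o.x.x' is still caught. This is the basic-keyword tier
    for small services, not a substitute for ML moderation.
    """
    normalized = re.sub(r"[^a-z0-9]", "", message.lower())
    return any(term in normalized for term in BLOCKLIST)

print(basic_filter("I will d.o.x.x you"))    # flagged for review
print(basic_filter("lovely weather today"))  # passes
```

Note that the filter only queues content for human review rather than removing it automatically, which keeps the due-process burden proportionate to a small platform's resources.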
AI-assisted content moderation is neither a cure-all nor a menace to free expression; it is a tool whose effectiveness relies on prudent regulation, institutional capability, and societal acceptance. Experiences from the European Union, India, China, and Australia illustrate that automated filters, when supported by enforced norms and transparency, significantly reduce the opportunity for the proliferation of abusive content. By enacting specific cyber harassment legislation, empowering regulators and judiciary, financing local language AI research, and prioritizing victim care, Namibia can emerge as a regional leader in secure and inclusive digital environments. The proposed roadmap presents a practical integration of technology, legislation, and social policy that realizes the potential of digital innovation while safeguarding the fundamental rights of citizens.
References
Bunde, E. (2021). AI-assisted and explainable hate-speech detection for social-media moderators: A design-science approach. Proceedings of HICSS-54. https://doi.org/10.24251/HICSS.2021.154
Gosztonyi, G., Gyetván, D., & Kovács, A. (2025). Theory and practice of social media’s content moderation by AI in light of the EU AI Act and Digital Services Act. European Journal of Law and Political Science, 4(1). https://doi.org/10.24018/ejpolitics.2025.4.1.165
Harkin, M. (2024). Online safety and social-media regulation in Australia. Australian Journal of Communication Policy, 10(2), 34–51. https://doi.org/10.1080/10383441.2024.2405760
Manche, R., Samaah, F., Tejaswini, T., & Myakala, P. K. (2025). Empowering safe online spaces: AI in gender-violence detection and prevention. Journal of Science and Technology, 10(2), 39–50. https://doi.org/10.2139/ssrn.5176463
Palka, P. (2024). The Digital Services Act’s red line: What the Commission can and cannot do. Journal of European Digital Law, 6(1), 21–40. https://doi.org/10.1080/17577632.2024.2362483
Prakash, A., & Bhandari, K. (2022). Evolving scope of intermediary liability in India after the 2021 IT Rules. International Review of Law, Computers & Technology, 36(3), 291–310. https://doi.org/10.1080/13600869.2022.2164838
Zhang, L. (2024). Opening the black box: China’s regulation of algorithms for content governance. Asia-Pacific Journal of Communication, 34(2), 145–162. https://doi.org/10.1080/22041451.2024.2346415
Disclaimer: The International Platform for Crime, Law, and AI is committed to fostering academic freedom and open discourse. The views and opinions expressed in published articles are solely those of the authors and do not necessarily reflect the views of the journal, its editorial team, or its affiliates. We encourage diverse perspectives and critical discussions while upholding academic integrity and respect for all viewpoints.
