
ALGORITHMIC TYRANNY AND ARTIFICIAL INTELLIGENCE TOTALITARIANISM IN DIGITAL SOCIETY: A CRITICAL PERSPECTIVE

*Tomi Setiawan  -  Universitas Padjadjaran, Indonesia
Jayum Anak Jawan  -  Universiti Putra Malaysia, Malaysia
Shifwah Murran Nashifa  -  Institute for Agrarian Policy and Development Studies, Indonesia

Abstract

The dominance of algorithms in the social, economic, and political life of the 21st century has created unprecedented structural dependencies with complex socio-political implications. This research aims to uncover the mechanisms of algorithmic tyranny in social governance, analyze the transformation of AI into a totalitarian tool, and formulate a democratic oversight framework based on cross-national empirical findings. The study uses a critical-realist paradigm with post-qualitative methods, combining reverse engineering of controversial AI systems with a critical analysis of more than 40 reputable journal articles and books and 14 policy documents (2015–2025). A rhizomatic analysis approach is used to explore the multidimensional nature of algorithmic power beyond hierarchical structures, and validity was established through catalytic validity to ensure epistemological and social impact. The findings reveal regulatory differences across countries: the European Union leads in transparency but constrains innovation; the United States dominates innovation but risks regulatory fragmentation and minimal accountability; China deploys AI for social control; and Singapore adopts a pro-business hybrid model. Algorithmic tyranny emerges in recommendation systems that create filter bubbles and in judicial algorithms that exhibit racial bias. Moreover, the use of AI for mass surveillance and political deepfakes poses a totalitarian threat that meets the criteria for "totalitarianism 2.0". The study concludes by proposing a novel emancipatory concept to challenge algorithmic tyranny through two instruments: an "Algorithm Constitutionalism" and a "Right to Algorithmic Explanation". The policy recommendations emphasize the need for a global alliance to balance innovation with the protection of human rights in the algorithmic realm.
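The filter-bubble mechanism named above can be illustrated with a deliberately minimal toy model (not taken from the article): an engagement-driven recommender that mostly repeats a user's dominant topic quickly concentrates that user's exposure on a single topic. All names (`recommend`, `simulate`, `exploit_prob`, the topic catalog) are illustrative assumptions, not part of the study.

```python
import random

def recommend(history, catalog, exploit_prob=0.9):
    """With high probability serve the user's most-consumed topic
    (exploitation); otherwise explore a random topic from the catalog."""
    if history and random.random() < exploit_prob:
        return max(set(history), key=history.count)
    return random.choice(catalog)

def simulate(steps=500, seed=42):
    """Run the feedback loop and return the share of views captured
    by the single dominant topic (uniform baseline would be 0.2)."""
    random.seed(seed)
    catalog = ["politics", "sports", "science", "music", "film"]
    history = []
    for _ in range(steps):
        history.append(recommend(history, catalog))
    top = max(set(history), key=history.count)
    return history.count(top) / steps

print(simulate())
```

Under this sketch the dominant topic captures the large majority of impressions even though the user never expressed an explicit preference, which is the feedback dynamic the abstract's "filter bubble" critique targets.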

Keywords: algorithmic tyranny; totalitarianism 2.0; artificial intelligence; digital society


  1. Adorno, T. W. (2020). Minima Moralia : Reflections From Damaged Life (E. F. N. Jephcott, Trans.). London: Verso
  2. AI Board. (2025). Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/policies/ai-board
  3. AI Singapore. (2024). AI Ready ASEAN. https://learn.aisingapore.org/ai-ready-asean-learner/
  4. Alba, J. T. (2024). Insights into Algorithmic Decision-Making Systems via a Decolonial-Intersectional Lens: A Cross-Analysis Case Study. Digital Society, 3(3), 58. https://doi.org/10.1007/S44206-024-00144-9
  5. AlgorithmWatch. (2023). New study highlights crucial role of trade unions for algorithmic transparency and accountability in the world of work - AlgorithmWatch. https://algorithmwatch.org/en/study-trade-unions-algorithmic-transparency/
  6. Ali, S., Abuhmed, T., El-Sappagh, S., Muhammad, K., Alonso-Moral, J. M., Confalonieri, R., Guidotti, R., Del Ser, J., Díaz-Rodríguez, N., & Herrera, F. (2023). Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Information Fusion, 99, 101805. https://doi.org/https://doi.org/10.1016/j.inffus.2023.101805
  7. Alter, A. (2017). Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked. Penguin Press. https://www.penguinrandomhouse.com/books/310707/irresistible-by-adam-alter/
  8. Amnesty International. (2024). Human rights in China. https://www.amnesty.org/en/location/asia-and-the-pacific/east-asia/china/report-china/
  9. Amoore, L. (2023). Machine learning political orders. Review of International Studies, 49(1), 20–36. https://doi.org/10.1017/S0260210522000031
  10. Ang, P. S., Teo, D. C. H., Dorajoo, S. R., Prem Kumar, M., Chan, Y. H., Choong, C. T., Phuah, D. S. T., Tan, D. H. M., Tan, F. M., Huang, H., Tan, M. S. H., Ng, M. S. Y., & Poh, J. W. W. (2021). Augmenting Product Defect Surveillance Through Web Crawling and Machine Learning in Singapore. Drug Safety, 44(9), 939. https://doi.org/10.1007/S40264-021-01084-W
  11. Avantika Bhardwaj, S. (2020). List of Chinese Apps Banned in India. https://startuptalky.com/banned-chinese-apps-india/
  12. Avbelj, M. (2024). Reconceptualizing Constitutionalism in the AI Run Algorithmic Society. German Law Journal. https://doi.org/10.1017/GLJ.2024.35
  13. Balodis, R. (2022). Skatījums uz Satversmes konstitucionāliem algoritmiem: to loģika, lietderīgums un pamatotība. 74–100. https://doi.org/10.22364/ISCFLUL.8.1.07
  14. Barad, K. (2006). Meeting the Universe Halfway. Meeting the Universe Halfway. https://doi.org/10.1215/9780822388128
  15. Beaudonnet, L., Belot, C., Gall, C. Le, & Ingelgom, V. van. (2024). The Second-Order Model Revisited: Lessons from the 2024 European Elections in the 27 Member States. Politique Européenne, 86(4), 6–25. https://doi.org/10.34894/VQ1DJA
  16. Beitsch, R. (2025). Democrats demand details from Palantir on federal contracts after Social Security, IRS report. https://thehill.com/policy/technology/5355388-democrats-request-data-from-palantir/
  17. Benaatou, N. (2022). The Genealogy of Power by Michel Foucault, From classical power to modern biological power. 70–81. https://doi.org/10.47832/IJHERCONGRESS4-4
  18. Beniger, J. R. (1986). The Control Revolution: Technological and Economic Origins of the Information Society. Harvard University Press. https://doi.org/10.2307/j.ctv1pncrx
  19. Benkler, Y. (2016). Degrees of Freedom, Dimensions of Power. Daedalus, 145(1), 18–32. https://doi.org/10.1162/DAED_A_00362
  20. Bennett, A., & Checkel, J. T. (Eds.). (2014). Process Tracing: From Metaphor to Analytic Tool. https://doi.org/10.1017/CBO9781139858472
  21. Bhaskar, R. (2016). Enlightened common sense the philosophy of critical realism. Enlightened Common Sense The Philosophy of Critical Realism, 1–225. https://doi.org/10.4324/9781315542942/enlightened-common-sense-roy-bhaskar-mervyn-hartwig/rights-and-permissions
  22. Biber, S. E. (2023). Digital constitutionalism in Europe: reframing rights and powers in the algorithmic society (Cambridge Studies in European Law and Policy). International Review of Law, Computers & Technology, 37(3), 341–344. https://doi.org/10.1080/13600869.2022.2127558
  23. Bloom, N., Bunn, P., Chen, S., Mizen, P., & Smietanka, P. (2022). The Impact of COVID-19 on Productivity. International Productivity Monitor, 43, 1–19. https://doi.org/10.2139/ssrn.3692588
  24. Bloomberg Law. (2023). Comparison Charts: U.S. State vs. EU Data Privacy Laws. https://pro.bloomberglaw.com/insights/privacy/privacy-laws-us-vs-eu-gdpr/#data-protection
  25. BMKG. (2024). Laporan Tahunan Pelayanan Informasi Publik. https://ppid.bmkg.go.id/laporan-ppid
  26. Bocharova, N. V. (2022). DIGITAL CONSTITUTIONALISM IN THE MODERN INFORMATION SOCIETY. Соціальний Калейдоскоп, 2(3–4), 74–84. https://doi.org/10.47567/2709-0906.3-4.2022.74-84
  27. Boehm, F. (2015). A Comparison between US and EU Data Protection Legislation for Law Enforcement Purposes. https://www.europarl.europa.eu/RegData/etudes/STUD/2015/536459/IPOL_STU%282015%29536459_EN.pdf
  28. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198739838.001.0001
  29. Bowker, G. C. (1993). How to be universal: Some cybernetic strategies, 1943–70. Social Studies of Science, 23(1), 107–127. https://doi.org/10.1177/030631293023001005
  30. Bradford, A. (2019). The Brussels Effect: How the European Union Rules the World. https://doi.org/10.1093/OSO/9780190088583.001.0001
  31. Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1426
  32. Brynjolfsson, E. (2022). The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence. Daedalus, 151(2), 272–287. https://doi.org/10.1162/DAED_A_01915
  33. Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media and Society, 14(7), 1164–1180. https://doi.org/10.1177/1461444812440159
  34. Cakmak, M. C., Agarwal, N., & Oni, R. (2024). The bias beneath: analyzing drift in YouTube’s algorithmic recommendations. Social Network Analysis and Mining, 14(1), 1–42. https://doi.org/10.1007/S13278-024-01343-5/FIGURES/21
  35. Capodivacca, S., & Giacomini, G. (2024). Discipline and Power in the Digital Age. Critical Reflections from Foucault’s Thought. Foucault Studies, 227–251. https://doi.org/10.22439/FS.I36.7215
  36. Carbajal-Carrera, B., & Prestigiacomo, R. (2025). Rhizomatic approaches. The Routledge Handbook of Endangered and Minority Languages, 392–406. https://doi.org/10.4324/9781003439493-29
  37. Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial Intelligence and the ‘Good Society’: the US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505–528. https://doi.org/10.1007/S11948-017-9901-7/METRICS
  38. CB Insights. (2024). The AI agent market map - CB Insights Research. https://www.cbinsights.com/research/ai-agent-market-map/
  39. Chen, Y. S., & Zaman, T. (2024). Shaping opinions in social networks with shadow banning. PLOS ONE, 19(3), e0299977. https://doi.org/10.1371/JOURNAL.PONE.0299977
  40. Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications 2023 10:1, 10(1), 1–12. https://doi.org/10.1057/s41599-023-02079-x
  41. Cheong, B. C. (2024). Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics, 6, 1421273. https://doi.org/10.3389/FHUMD.2024.1421273/BIBTEX
  42. Claverie, B., & Cluzel, F. du. (2022). The Cognitive Warfare Concept. https://innovationhub-act.org/wp-content/uploads/2023/12/CW-article-Claverie-du-Cluzel-final_0.pdf
  43. Coglianese, C. (2023). Evaluating Regulatory Performance. University of Pennsylvania Journal of Law and Public Affairs, 8(1), 8. https://doi.org/https://doi.org/10.58112/jlpa.8-1.7
  44. Colabella, A. (2022). Op-ed: Social Media Algorithms & their Effects on American Politics. https://funginstitute.berkeley.edu/news/op-ed-social-media-algorithms-their-effects-on-american-politics/
  45. Corduneanu, R., Winters, S., Michalski, J., & Horton, R. (2024). European trust in gen AI: Deloitte Insights. https://www2.deloitte.com/us/en/insights/topics/digital-transformation/trust-in-generative-ai-in-europe.html#trust
  46. Couldry, N., & Mejias, U. A. (2019). Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject. Television & New Media, 20(4), 336–349. https://doi.org/10.1177/1527476418796632
  47. Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. https://doi.org/10.12987/yale/9780300209570.001.0001
  48. Creemers, R. (2018). China’s social credit system: An evolving practice of control. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3175792
  49. Dana, I. G. N. K. D. I. G. N. K. (2025). Ketika Algoritma Menjadi Mata-Mata. https://kumparan.com/i-gusti-ngurah-krisna-dana/ketika-algoritma-menjadi-mata-mata-25C4yGQ3dz4
  50. De Gregorio, G. (2022). Digital Constitutionalism in Europe. Digital Constitutionalism in Europe. https://doi.org/10.1017/9781009071215
  51. de Minico, G. (2021). Towards an “Algorithm Constitutional by Design.” 21(1), 381–403. https://doi.org/10.15168/2284-4503-757
  52. Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415. https://doi.org/10.1080/21670811.2014.976411
  53. Digital Rights Watch. (2025). All or nothing? The relationship between privacy and safety in addressing online harms. https://digitalrightswatch.org.au/2025/05/07/all-or-nothing-the-relationship-between-privacy-and-safety-in-addressing-online-harms/
  54. Digital Watch Observatory. (2024). The Indian government has revised guidelines for AI developers. https://dig.watch/updates/the-indian-government-has-revised-guidelines-for-ai-developers
  55. Diotaiuti, P., Mancone, S., Corrado, S., De Risio, A., Cavicchiolo, E., Girelli, L., & Chirico, A. (2022). Internet addiction in young adults: The role of impulsivity and codependency. Frontiers in Psychiatry, 13, 893861. https://doi.org/10.3389/FPSYT.2022.893861
  56. Dufva, T., & Dufva, M. (2019). Grasping the future of the digital society. Futures, 107, 17–28. https://doi.org/10.1016/J.FUTURES.2018.11.001
  57. Edwards, P. N. (1996). The Closed World: Computers and the Politics of Discourse in Cold War America. MIT Press. https://mitpress.mit.edu/9780262550284/the-closed-world/
  58. EPRS. (2020). The impact of the General Data Protection Regulation (GDPR) on artificial intelligence. https://doi.org/10.2861/293
  59. Epstein, Z., Hertzmann, A., Akten, M., Farid, H., Fjeld, J., Frank, M. R., Groh, M., Herman, L., Leach, N., Mahari, R., Pentland, A., Russakovsky, O., Schroeder, H., & Smith, A. (2023). Art and the science of generative AI. Science, 380(6650), 1110–1111. https://doi.org/10.1126/SCIENCE.ADH4451
  60. Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press
  61. European Commission. (2023). Regulation - EU - 2024/1689 - EN - EUR-Lex. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
  62. European Commission. (2024). Excellence and trust in artificial intelligence - European Commission. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/excellence-and-trust-artificial-intelligence_en
  63. Fedorchenko, S. (2021). Algorithmization of Power: Digital Metamorphoses of Political Regimes and Sovereignty. Journal of Political Research, 5(2), 3–18. https://doi.org/10.12737/2587-6295-2021-5-2-3-18
  64. Firza, A. D. C., Samudera, K., Saphira, A., & Hidayat, M. S. (2023). Legal Arrangement of Artificial Intelligence In Indonesia: Challenges and Opportunitiesa. Jurnal Peradaban Hukum, 1(2). https://doi.org/10.33019/JPH.V1I2.15
  65. Floridi, L. (2023). AI as Agency Without Intelligence: on ChatGPT, Large Language Models, and Other Generative Models. Philosophy and Technology, 36(1), 1–7. https://doi.org/10.1007/S13347-023-00621-Y/FIGURES/3
  66. Foá, C. (2023). Datification of the wisdom of the crowd: a comparative analysis of innovation strategies in four European crowdfunding platforms. Observatorio (OBS*), 118–153. https://doi.org/10.15847/OBSOBS17520232426
  67. Foucault, M. (1975). Discipline and Punish: The Birth of the Prison. Pantheon Books
  68. Fraser, N. (2018). The Theory of the Public Sphere: The Structural Transformation of the Public Sphere. The Habermas Handbook, 245–255. https://doi.org/10.7312/BRUN16642-029/HTML
  69. Freedom House. (2024). Freedom on the Net 2023 Country Report. https://freedomhouse.org/country/singapore/freedom-net/2023
  70. Fried, I. (2025). China has more trust in AI than the United States. https://www.axios.com/2025/02/13/trust-ai-china-us?utm_source=chatgpt.com
  71. GAO. (2024). Artificial Intelligence: GAO’s Work to Leverage Technology and Ensure Responsible Use. https://www.gao.gov/products/gao-24-107237
  72. Giantini, G. (2023). The sophistry of the neutral tool. Weaponizing artificial intelligence and big data into threats toward social exclusion. AI and Ethics, 3(4), 1049–1061. https://doi.org/10.1007/S43681-023-00311-7
  73. Gomez-Uribe, C. A., & Hunt, N. (2016). The Netflix Recommender System: Algorithms, Business Value, and Innovation. ACM Transactions on Management Information Systems, 6(4), 1–19. https://doi.org/10.1145/2843948
  74. Goode, K., Kim, H. M., & Deng, M. (2023). Examining Singapore’s AI Progress. https://doi.org/10.51593/2021CA014
  75. Grimmelikhuijsen, S. (2023). Explaining Why the Computer Says No: Algorithmic Transparency Affects the Perceived Trustworthiness of Automated Decision-Making. Public Administration Review, 83(2), 241–262. https://doi.org/10.1111/PUAR.13483
  76. Habermas, J. (1998). Between facts and norms : contributions to a discourse theory of law and democracy. 631
  77. Hannah Arendt. (1978). The Origins of Totalitarianism - NEW EDITION. Harcourt Brace jovanovich, Publishers
  78. Helbing, D., Frey, B. S., Gigerenzer, G., Hafen, E., Hagner, M., Hofstetter, Y., & Zicari, R. V. (2019). Will democracy survive big data and artificial intelligence? Scientific American
  79. Hicks, J. (2022). Export of Digital Surveillance Technologies From China to Developing Countries. https://doi.org/10.19088/K4D.2022.123
  80. Horkheimer, Max., & Adorno, T. W. (2002). Dialectic of Enlightenment (E. Jephcott & G. S. Noeri, Eds.). Stanford University Press
  81. IMDA. (2024). Singapore launches Gen AI and AI Governance Playbook for Digital FOSS. https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/factsheets/2024/gen-ai-and-digital-foss-ai-governance-playbook
  82. Jackson, A. Youngblood., & Mazzei, L. A. . (2023). Thinking with theory in qualitative research. Routledge. https://www.routledge.com/Thinking-with-Theory-in-Qualitative-Research/Jackson-Mazzei/p/book/9781138952140
  83. JDIH Kemkomdigi. (2023). Undang-Undang Nomor 27 Tahun 2022. https://jdih.komdigi.go.id/produk_hukum/view/id/832/t/undangundang+nomor+27+tahun+2022
  84. Judijanto, L., Utama, A. S., & Setiyawan, H. (2025). Implementation of Ethical Artificial Intelligence Law to Prevent the Use of AI in Spreading False Information (Deepfake) in Indonesia. The Easta Journal Law and Human Rights , 3(02), 101–109. https://doi.org/10.58812/ESLHR.V3I02.470
  85. Kaminski, M. E. (2022). Regulating the Risks of AI. Boston University Law Review, 103(5), 1347–1411. https://doi.org/10.2139/SSRN.4195066
  86. Karthikeyan, R., Yi, C., & Boudourides, M. (2024). Criminal Justice in the Age of AI: Addressing Bias in Predictive Algorithms Used by Courts. The Ethics Gap in the Engineering of the Future, 27–50. https://doi.org/10.1108/978-1-83797-635-520241003
  87. Kasapoglu, T., Masso, A., & Calzati, S. (2021). Unpacking algorithms as technologies of power: Syrian refugees and data experts on algorithmic governance. Digital Geography and Society, 2. https://doi.org/10.1016/J.DIGGEO.2021.100016
  88. Keller, R. (2005). Analysing Discourse. An Approach From the Sociology of Knowledge. Historical Social Research, 6(3), 223–242. https://doi.org/10.17169/FQS-6.3.19
  89. Keller, R. (2011). The Sociology of Knowledge Approach to Discourse (SKAD). Human Studies, 34(1), 43–65. https://doi.org/10.1007/S10746-011-9175-Z
  90. Keller, R. (2020). Discursive construction: A sociology of knowledge approach to discourse analysis. Discourses in Action, 51–70. https://doi.org/10.4324/9780429356032-3
  91. Khan, L. M. (2017). Amazon’s Antitrust Paradox. Yale Law Journal, 126(3), 564–907. https://www.yalelawjournal.org/pdf/e.710.Khan.805_zuvfyyeh.pdf
  92. Khanal, S., Zhang, H., & Taeihagh, A. (2024). Building an AI ecosystem in a small nation: lessons from Singapore’s journey to the forefront of AI. Humanities and Social Sciences Communications 2024 11:1, 11(1), 1–12. https://doi.org/10.1057/s41599-024-03289-7
  93. Khanyisile, T. (2024). UNVEILING POWER DYNAMICS IN AI-ENABLED EDUCATION: A FOUCAULDIAN PERSPECTIVE. Analele Universității Din Craiova, Seria Psihologie-Pedagogie/Annals of the University of Craiova, Series Psychology- Pedagogy, 46(2), 162–176. https://doi.org/10.52846/AUCPP.2024.2.12
  94. Kim, T. W., & Routledge, B. R. (2022). Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach. Business Ethics Quarterly, 32(1), 75–102. https://doi.org/10.1017/BEQ.2021.3
  95. Kravets, I. A. (2024). Collective Wisdom in Constitutional, Informational and Digital Dimensions: on the Concept and Nature of Popular Deliberative Constitutionalism for Public Law. Juridical Science and Practice, 20(2), 5–29. https://doi.org/10.25205/2542-0410-2024-20-2-5-29
  96. Kreps, S., & Kriner, D. L. (2023). The potential impact of emerging technologies on democratic representation: Evidence from a field experiment. New Media and Society. https://doi.org/10.1177/14614448231160526
  97. Kumar, A. (2020). National Strategy for Artificial Intelligence: India’s Approach. https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf
  98. Lacombe, D. (1996). Reforming Foucault: A Critique of the Social Control Thesis. The British Journal of Sociology, 47(2), 332. https://doi.org/10.2307/591730
  99. Lather, P., & St. Pierre, E. A. (2013). Post-qualitative research. International Journal of Qualitative Studies in Education, 26(6), 629–633. https://doi.org/10.1080/09518398.2013.788752/ASSET//CMS/ASSET/388DE42A-300B-40DE-A560-3BDC27B89780/09518398.2013.788752.FP.PNG
  100. Lazar, S. (2024). Automatic Authorities: Power and AI. In A. Sethumadhavan & M. Lane (Eds.), Forthcoming in Collaborative Intelligence: How Humans and AI are Transforming our World. MIT Press. https://doi.org/10.48550/ARXIV.2404.05990
  101. Lee, T. (2018). Amazon Web Services to invest $1b in Indonesia. https://www.techinasia.com/amazon-web-services-plans-invest-1b-indonesia
  102. Liang, F., Das, V., Kostyuk, N., & Hussain, M. M. (2018). Constructing a Data-Driven Society: China’s Social Credit System as a State Surveillance Infrastructure. Policy & Internet, 10(4), 415–453. https://doi.org/10.1002/POI3.183
  103. Lilkov, D. (2020). Made in China: Tackling Digital Authoritarianism. European View, 19(1), 110–110. https://doi.org/10.1177/1781685820920121
  104. Majchrzak, A. (2023). Russian disinformation and the use of images generated by artificial intelligence (deepfake) in the first year of the invasion of Ukraine. Media Biznes Kultura, 1(14). https://ejournals.eu/czasopismo/media-biznes-kultura/artykul/russian-disinformation-and-the-use-of-images-generated-by-artificial-intelligence-deepfake-in-the-first-year-of-the-invasion-of-ukraine
  105. Mayer-Schönberger, V., & Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work, and Think. Houghton Mifflin Harcourt. https://doi.org/10.1080/08956308.2014.875831
  106. Men, L. R., Ji, Y. G., & Chen, Z. F. (2019). The state of startups and entrepreneurship in China. Strategic Communication for Startups and Entrepreneurs in China, 11–20. https://doi.org/10.4324/9780429274268-2/STATE-STARTUPS-ENTREPRENEURSHIP-CHINA-LINJUAN-RITA-MEN-YI-GRACE-JI-ZIFEI-FAY-CHEN
  107. Mendonça, R. F., Filgueiras, F., & Almeida, V. (2023). Security in Algorithmic Times. Algorithmic Institutionalism, 53–78. https://doi.org/10.1093/OSO/9780192870070.003.0004
  108. Metcalf, J., Moss, E., Watkins, E. A., Singh, R., & Elish, M. C. (2021). Algorithmic impact assessments and accountability: The co-construction of impacts. FAccT 2021 - Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 735–746. https://doi.org/10.1145/3442188.3445935
  109. Miller, S. (2023). Cognitive warfare: an ethical analysis. Ethics and Information Technology, 25(3), 1–10. https://doi.org/10.1007/S10676-023-09717-7/METRICS
  110. Morison, J. (2020). Towards a Democratic Singularity? Algorithmic Governmentality, the Eradication of Politics – and the Possibility of Resistance. SSRN Electronic Journal. https://doi.org/10.2139/SSRN.3662411
  111. Nashif, N., & Fatafta, M. (2017). The Israeli algorithm criminalizing Palestinians for online dissent. https://www.opendemocracy.net/en/north-africa-west-asia/israeli-algorithm-criminalizing-palestinians-for-o/
  112. Neidich, W. (2010). From Noopower to Neuropower: How Mind Becomes Matter (pp. 538–581). 010. https://research.tudelft.nl/en/publications/from-noopower-to-neuropower-how-mind-becomes-matter
  113. OECD. (2017). Algorithms and Collusion: Competition Policy in the Digital Age. 206. https://doi.org/10.1787/258DCB14-EN
  114. OECD. (2021a). Case Studies on the Regulatory Challenges Raised by Innovation and the Regulatory Responses. https://doi.org/10.1787/8FA190B5-EN
  115. OECD. (2021b). State of implementation of the OECD AI Principles. 311. https://doi.org/10.1787/1CD40C44-EN
  116. OECD. (2024). AI principles. https://www.oecd.org/en/topics/sub-issues/ai-principles.html
  117. OECD. (2025). Brazilian Strategy for Artificial Intelligence. https://www.oecd.org/en/publications/access-to-public-research-data-toolkit_a12e8998-en/brazilian-strategy-for-artificial-intelligence_936c5793-en.html
  118. O’Neil, C. (2016a). Weapons of Math Destruction and Threatens Democracy
  119. O’Neil, C. (2016b). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing. https://doi.org/10.1080/23256249.2016.1230831
  120. Papadimitriou, R. (2023). THE RIGHT TO EXPLANATION IN THE PROCESSING OF PERSONAL DATA WITH THE USE OF AI SYSTEMS. International Journal of Law in Changing World, 2(2), 43–55. https://doi.org/10.54934/IJLCW.V2I2.53
  121. Pasquale, F. (2015). The Black Box Society: The Secret Algorithms that Control Money and Information. Harvard University Press. https://doi.org/10.4159/harvard.9780674736061
  122. Pasquale, Frank. (2016). Black box society : the secret algorithms that control money and information. 311. https://www.hup.harvard.edu/books/9780674970847
  123. Perrigo, B. (2023). OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | TIME. https://time.com/6247678/openai-chatgpt-kenya-workers/
  124. Persily, N. (2017). The 2016 U.S. Election: Can Democracy Survive the Internet? Journal of Democracy, 28(2), 63–76. https://doi.org/10.1353/JOD.2017.0025
  125. Pillow, W. S. (2003). Confession, catharsis, or cure? Rethinking the uses of reflexivity as methodological power in qualitative research. International Journal of Qualitative Studies in Education, 16(2), 175–196. https://doi.org/10.1080/0951839032000060635
  126. PricewaterhouseCoopers. (2024). Workers embrace AI and prioritise skills growth amid rising workloads and an accelerating pace of change: PwC 2024 Global Workforce Hopes & Fears Survey. https://www.pwc.com/id/en/media-centre/press-release/2024/english/workers-embrace-ai-and-prioritise-skills-growth-amid-rising-work.html
  127. Purtova, N. (2015). The illusion of personal data as no one’s property. Law, Innovation and Technology, 7(1), 83–111. https://doi.org/10.1080/17579961.2015.1052646
  128. Rehman, I. (2019). Facebook-Cambridge Analytica data harvesting: What you need to know. Library Philosophy and Practice (e-Journal). https://digitalcommons.unl.edu/libphilprac/2497
  129. Rehman, M. A., Ahmed, M., & Sethi, D. S. (2025). AI-Based Credit Scoring Models in Microfinance: Improving Loan Accessibility, Risk Assessment, and Financial Inclusion. The Critical Review of Social Sciences Studies, 3(1), 2997–3033. https://doi.org/10.59075/15HHFS58
  130. Richardson, R., Schultz, J., & Crawford, K. (2019). Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice. https://papers.ssrn.com/abstract=3333423
  131. Rossini, P., Mont’alverne, C., & Kalogeropoulos, A. (2023). Explaining beliefs in electoral misinformation in the 2022 Brazilian election: The role of ideology, political trust, social media, and messaging apps. Harvard Kennedy School Misinformation Review, 4(3). https://doi.org/10.37016/MR-2020-115
  132. Salterio, S. E. (2022). Redressing the fundamental conflict of interest in public company audits. International Journal of Auditing, 26(1), 48–53. https://doi.org/10.1111/IJAU.12269
  133. Sardamov, I. (2012). From “Bio-Power” to “Neuropolitics”: Stepping beyond Foucault. Techné: Research in Philosophy and Technology, 16(2), 123–137. https://doi.org/10.5840/TECHNE201216212
  134. Schartum, D. W. (2016). Law and algorithms in the public domain. Etikk i Praksis, 10(1), 15–26. https://doi.org/10.5324/EIP.V10I1.1973
  135. Schissler, M. (2024). Beyond Hate Speech and Misinformation: Facebook and the Rohingya Genocide in Myanmar. Journal of Genocide Research. https://doi.org/10.1080/14623528.2024.2375122
  136. Schlumberger, O., Edel, M., Maati, A., & Saglam, K. (2023). How Authoritarianism Transforms: A Framework for the Study of Digital Dictatorship. Government and Opposition. https://doi.org/10.1017/GOV.2023.20
  137. Schuler, D. (2022). Computing as Oppression: Authoritarian Technology Poses a Worldwide Threat. Digital Government: Research and Practice, 3(4). https://doi.org/10.1145/3568400
  138. Selbst, A. D., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4), 233–242. https://doi.org/10.1093/IDPL/IPX022
  139. Serpro. (2025). National Intelligence Company in Digital Government and Information Technology. https://www.serpro.gov.br/
  140. Shahriari, K., & Shahriari, M. (2017). IEEE standard review - Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems. IHTC 2017 - IEEE Canada International Humanitarian Technology Conference 2017, 197–201. https://doi.org/10.1109/IHTC.2017.8058187
  141. Shaima, M., Nabi, N., Rana, M. N. U., Islam, M. T., Ahmed, E., Tusher, M. I., Mukti, M. H., & Saad-Ul-Mosaher, Q. (2024). Elon Musk’s Neuralink Brain Chip: A Review on ‘Brain-Reading’ Device. Journal of Computer Science and Technology Studies, 6(1), 200–203. https://doi.org/10.32996/JCSTS.2024.6.1.22
  142. Shams, B. (2025). Comparative Legal Practices in Artificial Intelligence and Data Protection. https://lawgazette.com.sg/feature/comparative-legal-practices-in-artificial-intelligence-and-data-protection/
  143. Silakari, A. (2025). Evolving Privacy Rights: Legal Implications of Biometric Data Collection. IJFMR - International Journal For Multidisciplinary Research, 7(3). https://doi.org/10.36948/IJFMR.2025.V07I03.45034
  144. Silva, J., & Nahur, M. T. M. (2021). Governo Algorítmico e Conexões. Revista de Filosofia Moderna e Contemporânea, 8(3), 155–180. https://doi.org/10.26512/RFMC.V8I3.34300
  145. Siyuan, C. (2024). Regulating online hate speech: the Singapore experiment. International Review of Law, Computers & Technology, 38(2), 119–139. https://doi.org/10.1080/13600869.2023.2295091
  146. Smart Nation. (2023). National Artificial Intelligence Strategy 2 to uplift Singapore’s social and economic potential. https://www.smartnation.gov.sg/media-hub/press-releases/04122023/
  147. Smuha, N. A. (2024). Algorithmic Rule By Law: How Algorithmic Regulation in the Public Sector Erodes the Rule of Law. https://doi.org/10.1017/9781009427500
  148. Som, B., & Chandana, D. (2024). Facial Recognition? An Analysis of Facial Recognition and Criminal Justice in India. Law and Emerging Issues: Proceedings of the International Conference on Law and Emerging Issues (ICLEI 2023), 165–174. https://doi.org/10.4324/9781003428213-17/FACIAL-RECOGNITION-ANALYSIS-FACIAL-RECOGNITION-CRIMINAL-JUSTICE-INDIA-BODHISATTWA-SOM-DELHI-CHANDANA
  149. Sonasri, S., Niranjana, B. P., S, S., Niranjana, B. P., S, S., S, S., & Niranjana, B. P. (2025). REGULATING AI: A COMPARATIVE ANALYSIS OF INDIA AND GLOBAL FRAMEWORKS. IJCRT - International Journal of Creative Research Thoughts (IJCRT), 13(4), a702–a710. http://www.ijcrt.org/viewfull.php?p_id=IJCRT2504090
  150. Spensky, C., Stewart, J., Yerukhimovich, A., Shay, R., Trachtenberg, A., Housley, R., & Cunningham, R. K. (2016). SoK: Privacy on Mobile Devices – It’s Complicated. Proceedings on Privacy Enhancing Technologies, 2016(3), 96–116. https://doi.org/10.1515/POPETS-2016-0018
  151. St. Pierre, E. A. (2021). Post Qualitative Inquiry, the Refusal of Method, and the Risk of the New. Qualitative Inquiry, 27(1), 3–9. https://doi.org/10.1177/1077800419863005
  152. Startup20. (2025). Startup20 Communiqué: Recommendations and Policy Directives. https://www.gov.br/g20/pt-br/g20-social/startup20
  153. Susskind, J. (2022). The Digital Republic: On Freedom and Democracy in the 21st Century.
  154. Tata Consulting Engineers. (2024). Designing the Future: 25th Annual Report 2023–2024. https://www.tataconsultingengineers.com/annualreport2024/
  155. TechPolicy. (2024). Global Digital Policy Roundup 2024. https://www.techpolicy.press/global-digital-policy-roundup-august-2024/
  156. The South Centre. (2023). South Centre Annual Report 2023. https://www.southcentre.int/south-centre-annual-report-2023/
  157. Troisi, E. (2022). Automated Decision Making and right to explanation. The right of access as ex post information. European Journal of Privacy Law and Technologies, 2022(1), 181–202. https://doi.org/10.57230/EJPLT221ET
  158. Turow, J. (2020). The Aisles Have Eyes. Yale University Press. https://doi.org/10.12987/9780300225075
  159. U4SSC. (2019). Crime prediction for more agile policing in cities – Rio de Janeiro, Brazil. https://igarape.org.br/wp-content/uploads/2019/10/460154_Case-study-Crime-prediction-for-more-agile-policing-in-cities.pdf
  160. UK Government. (2021). AI Barometer 2021. https://www.gov.uk/government/publications/ai-barometer-2021
  161. UN. Advisory Body on Artificial Intelligence. (2024). Governing AI for humanity. https://digitallibrary.un.org/record/4062495
  162. UNCTAD. (2023). World Investment Report 2023.
  163. UNCTAD. (2024). Digital Economy Report 2024: Shaping an Environmentally Sustainable and Inclusive Digital Future. https://doi.org/10.18356/9789213589779
  164. UNESCO. (2025). Brazil: readiness assessment report on artificial intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000393091
  165. UN-HABITAT. (2024). Global Assessment of Responsible AI in Cities: Research and Recommendations to Leverage AI for People-Centred Smart Cities. https://unhabitat.org/sites/default/files/2024/08/global_assessment_of_responsible_ai_in_cities_21082024.pdf
  166. UNHRC. (2022). The right to privacy in the digital age: Report of the Office of the United Nations High Commissioner for Human Rights. UN. https://digitallibrary.un.org/record/3985679
  167. Valsiner, J. (2019). From Causality to Catalysis in the Social Sciences (pp. 125–146). https://doi.org/10.1007/978-3-030-33099-6_8
  168. Veale, M., & Zuiderveen Borgesius, F. (2021). Demystifying the Draft EU Artificial Intelligence Act. https://papers.ssrn.com/abstract=3896852
  169. Veitch, S., Christodoulidis, E., & Goldoni, M. (2018). Displacing the juridical. Jurisprudence, 274–281. https://doi.org/10.4324/9781315795997-24
  170. Villar, M. P. (2024). The Role of Generative Artificial Intelligence (GAI) in Electoral Disinformation (What evidence exists on the use of GAI to spread disinformation in elections in Latin America?). Friedrich-Naumann-Stiftung für die Freiheit. https://www.freiheit.org/argentina-brazil-paraguay-and-uruguay/artificial-intelligence-and-electoral-disinformation
  171. Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/IDPL/IPX005
  172. Wang, X., Zhang, Y., & Zhu, R. (2022). A brief review on algorithmic fairness. Management System Engineering, 1(1), 1–13. https://doi.org/10.1007/S44176-022-00006-Z
  173. Wardi, A., & Aditya, G. (2025). Navigating Ethical Dilemmas in Algorithmic Decision-Making: A Case-Based Study of Fintech Platforms. Jurnal Akuntansi Dan Bisnis, 5(1), 29–38. https://doi.org/10.51903/JIAB.V5I1.1044
  174. Wheeler, T. (2023). The three challenges of AI regulation. https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation/
  175. Williamson, B., Gulson, K. N., Perrotta, C., & Witzenberger, K. (2022). Amazon and the New Global Connective Architectures of Education Governance. Harvard Educational Review, 92(2), 231–256. https://doi.org/10.17763/1943-5045-92.2.231
  176. Xinran. (2019). China in their hands. Index on Censorship, 48(2), 74–76. https://doi.org/10.1177/0306422019858298
  177. Yerramsetti, S. (2023). Public sector digitalisation and stealth intrusions upon individual freedoms and democratic accountability. Asia Pacific Journal of Public Administration, 45(1), 54–72. https://doi.org/10.1080/23276665.2022.2110909
  178. Yilmaz, I., & Yang, F. (2023). Digital authoritarianism, religion and future of democracy. Digital Authoritarianism and Its Religious Legitimization: The Cases of Turkey, Indonesia, Malaysia, Pakistan, and India, 151–166. https://doi.org/10.1007/978-981-99-3600-7_7
  179. Yordan, J. (2024). Southeast Asia remains important market for Alibaba. https://www.techinasia.com/southeast-asia-remains-important-market-alibaba-exec
  180. Zhao, Z. (2023). Algorithmic Personalized Pricing with the Right to Explanation. Journal of Competition Law and Economics, 19(3), 367–396. https://doi.org/10.1093/JOCLEC/NHAD008
  181. Zou, C., & Zhang, F. (2022). Algorithm Interpretation Right—The First Step to Algorithmic Governance. Beijing Law Review, 13(02), 227–246. https://doi.org/10.4236/BLR.2022.132015
  182. Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs. https://doi.org/10.2307/j.ctv1kmjnhs
