When Access Isn’t Enough: Rethinking Digital Inclusion


From left to right: Šimon Svoboda, PhD candidate, Department of Constitutional Law and Political Science, Masaryk University, Czech Republic; Arifah Sharifuddin, Institute Director, Tech for Good Institute; Prof Jason Grant Allen, Director, SMU Centre for Digital Law; Dr Gulizar Haciyakupoglu, Senior Associate Fellow, S Rajaratnam School of International Studies; Dr Chew Han Ei, Head (Governance and Economy), NUS Lee Kuan Yew School of Public Policy.

Human Rights Day, observed on 10 December, marks the anniversary of the adoption of the Universal Declaration of Human Rights (UDHR). This year’s Human Rights Day Seminar, organised by the European Union Delegation to Singapore, examined the challenges and opportunities posed by AI through a human rights lens, with a particular focus on ensuring equal access to technologies for all.

Key insights from the panel discussion, Bridging the Digital Divides: New Technologies and Inclusivity, are summarised below.

Moderator and Speakers

  • Arifah Sharifuddin, Institute Director, Tech for Good Institute
  • Dr Chew Han Ei, Head (Governance and Economy), NUS Lee Kuan Yew School of Public Policy
  • Dr Gulizar Haciyakupoglu, Senior Associate Fellow, S Rajaratnam School of International Studies
  • Prof Jason Grant Allen, Director, SMU Centre for Digital Law (moderator)
  • Šimon Svoboda, PhD candidate, Department of Constitutional Law and Political Science, Masaryk University, Czech Republic

Key Takeaways

1. Algorithmic Accountability Is a Human Rights Imperative

Southeast Asia’s accelerated digital adoption continues to face a persistent connectivity gap, driven by uneven access across markets, device affordability constraints, and language barriers. When emerging technologies are deployed within flawed systems, they risk scaling and entrenching existing inequalities.

The Serbian social registry case illustrates this challenge. When automated systems were introduced to determine eligibility for social assistance, the technology inherited existing discrimination against Roma communities and persons with disabilities, systematising bias at scale and potentially stripping thousands of individuals of their benefits.

The discussion elevated a critical yet often overlooked dimension of technology deployment: algorithmic accountability should not be treated as a narrow technical compliance issue, but as a human rights imperative. This requires enforcement mechanisms that match the scale and complexity of automated systems. A recent UN resolution on human rights in digital contexts was highlighted for bridging this gap by affirming that rights must be equally effective offline and online, and by centring the effectiveness of remedies as fundamental rather than supplementary.

2. Gendered Online Harms Undermine Digital Inclusion

Online harms disproportionately affect women and girls, compounding barriers to digital participation that extend far beyond connectivity gaps. A 2025 study involving interviews with 150 women parliamentarians across 33 Asia-Pacific countries found that 60% had experienced targeted online harms, including gender disinformation, hate speech, doxing, and image-based abuse.

The discussion surfaced a critical principle: security is not ancillary to inclusion, but integral to it. Access without safety is not genuine access. When victims of image-based sexual abuse or other online harms are afraid to return to digital spaces, they experience a form of exclusion more insidious than lack of connectivity, as they are pushed out of spaces to which they previously had access.

3. Effective Responses Must Be Locally Grounded

Research by the Tech for Good Institute across six Southeast Asian markets shows that while countries face common challenges—such as online scams, fraud, and algorithmic risks—effective responses must be localised. In some contexts, for example, religious leaders play a critical role as trusted champions in building digital resilience.

Combating online scams requires a “whole-of-society approach” spanning five stages: protect, detect, respond, recover, and adapt. However, implementation must account for local cultural and social contexts. Global best practices may inform responses, but they cannot dictate them; application must flex to local realities.

4. Resilience Requires Redress and Systemic Support

Resilience should not be understood solely as the capacity to bounce back, but as a form of systemic architecture that acknowledges harm as inevitable and centres pathways for recovery. Rather than labelling groups as “vulnerable”—a framing that risks essentialising and stigmatising—the panel advocated examining “vulnerable situations”, recognising that precarity arises from context and circumstances rather than inherent group characteristics. This situational approach enables more precise, dignity-preserving interventions.

Digital resilience therefore requires moving beyond prevention-only frameworks towards systems that recognise technology’s dual nature as both an enabler of opportunity and a source of risk. Achieving this demands institutional capacity to deliver meaningful redress at the scale and speed required by digital harms.

The post When Access Isn’t Enough: Rethinking Digital Inclusion appeared first on Tech For Good Institute.
