ASJSR

American Scholarly Journal for Scientific Research

The Role of Chatbots in Building a More Accessible Society

By Harshita Mohapatra

Recent advances in chatbot technology have raised questions about its role in addressing social challenges. Chatbots, automated systems that simulate human conversation, are now common in health apps, education platforms, public information portals and even mental health services. They process requests quickly and offer a high degree of personalisation, making them useful for solving problems at scale. However, they fall short of human qualities like empathy, ethical judgment and awareness of context. Using them responsibly means recognising both their strengths and their limits. This essay argues that while chatbots expand access to knowledge, services and public awareness, they also require human oversight because they cannot manage complex human experiences on their own.

Chatbots excel at providing accurate, accessible information. Health organisations use them to explain symptoms, medication effects and preventive care in clear, easy-to-understand language. Legal services deploy them to guide people through filing taxes or applying for government benefits. For underserved groups and people with lower literacy, chatbots simplify technical terms and reduce barriers to essential knowledge (Haque et al., 2023).
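The simplification described above can be sketched in miniature as a glossary substitution. This is only an illustrative toy, not how any particular health service works; the glossary entries are invented examples.

```python
# Toy sketch of plain-language rewriting: swap jargon for everyday
# terms using a small glossary. The entries below are illustrative
# assumptions, not a real health-literacy resource.

PLAIN_TERMS = {
    "hypertension": "high blood pressure",
    "analgesic": "pain reliever",
    "contraindicated": "not safe to use",
}

def simplify(text: str) -> str:
    """Replace each glossary term with its plain-language equivalent."""
    for jargon, plain in PLAIN_TERMS.items():
        text = text.replace(jargon, plain)
    return text

print(simplify("This analgesic is contraindicated for hypertension."))
# -> "This pain reliever is not safe to use for high blood pressure."
```

Real systems would of course go well beyond word substitution, but the principle is the same: translate specialist language into terms a lay reader can act on.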

Education shows the same value. Chatbots can act as on-demand tutors, giving feedback, exercises and explanations tailored to a child's specific learning style. They support students in crowded classrooms and those who cannot afford private help. Khan Academy's Khanmigo demonstrates how chatbots deliver interactive learning support across subjects. By widening access to resources, chatbots reduce educational inequality and extend opportunities beyond the four walls of the traditional classroom.

Chatbots also improve how support services scale. Human workers spend much of their time answering repetitive questions, checking application statuses, giving service details and directing people to resources. Chatbots can complete these tasks quickly and consistently, leaving staff free to focus on the complex cases that chatbots are unequipped to handle.
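The division of labour described above, routine questions answered automatically and everything else handed to a person, can be sketched as a simple triage rule. The intents and canned answers here are hypothetical placeholders, not any agency's actual system.

```python
# Minimal sketch of routine-question triage (illustrative only; the
# topics, answers, and substring matching rule are invented examples).

ROUTINE_ANSWERS = {
    "office hours": "We are open Monday to Friday, 9am to 5pm.",
    "application status": "Check your status in the online portal under 'My Applications'.",
    "required documents": "You will need photo ID and proof of address.",
}

def triage(message: str) -> tuple[str, bool]:
    """Return (reply, escalated). Routine questions get an instant
    canned answer; anything unrecognised is flagged for human staff."""
    text = message.lower()
    for topic, answer in ROUTINE_ANSWERS.items():
        if topic in text:
            return answer, False
    return "Let me connect you with a member of staff.", True

reply, escalated = triage("What are your office hours?")
print(escalated)  # False: handled automatically
```

The key design choice is the default: when the bot is unsure, it escalates rather than guesses, which is exactly the human-in-the-loop pattern the essay argues for.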

Government agencies, NGOs and businesses already use this approach. A benefits office, for example, can use chatbots to answer routine questions so that staff can focus on applicants with unusual needs. Humanitarian organisations deploy chatbots during crises to direct people to food or medical sites. By automating routine tasks, chatbots ensure more people get timely support even when resources are stretched thin.

Chatbots also play an important and growing role in mental health support. Tools like Woebot offer mindfulness exercises, stress-reduction techniques and nonjudgmental listening, all easily accessible online. They remove barriers for people who feel anxious about seeking formal therapy and encourage first steps toward professional help.

This matters especially in communities where mental health care is scarce or stigmatised. Imagine a young adult in a rural area who confides in a chatbot before reaching out to a counselor. Chatbots can normalise these conversations and encourage people to seek help early. Still, they cannot replace therapists: they lack the depth, empathy and crisis-management skills that trained professionals provide, so they are a first step toward treatment, not a long-term solution. Using them as stand-ins during emergencies risks serious harm.

Chatbots also improve accessibility. Features like text-to-speech, speech-to-text and navigation support help people with disabilities interact more easily with digital environments. Someone with a visual impairment can have a chatbot read website content aloud, while someone with mobility challenges can use voice commands to complete tasks.

Inclusivity extends further: chatbots provide real-time translation, breaking language barriers for migrants and refugees. They also simplify complex instructions, helping people with limited literacy understand medical advice or fill out unfamiliar forms. By removing these barriers, chatbots open access to vital services and create fairer opportunities for participation in society.

Organisations also use chatbots to run awareness campaigns. During the COVID-19 pandemic, chatbots offered real-time updates on symptoms, safety rules and vaccination sites, reaching millions of people faster than human-staffed call centers could have.

Beyond health services, chatbots boost civic and environmental engagement. Election commissions use them to explain registration and voting procedures. Environmental groups send reminders about recycling or share daily sustainability tips. Individuals get messages tailored to their needs, while organisations reach a wider audience than traditional outreach allows.

Chatbots cannot replace human empathy. Even when programmed to give supportive responses, their interactions often feel flat or insincere. True empathy requires awareness of a person's history, culture, context and unspoken cues, things chatbots cannot accurately capture. For sensitive issues, people tend to trust humans more than chatbots. Chatbots manage cognitive empathy, recognising and summarising a person's concerns, but not affective empathy, the ability to share and respond to emotion. In grief counseling or trauma support, this gap makes them unsuitable. Without genuine connection, trust breaks down and users feel unsupported at the moments they need support most.

Complex cases also expose chatbot limits. Many social issues involve ethical judgment, cultural sensitivity or emotional understanding; dispute mediation, crisis support and moral dilemmas require insight that goes beyond factual responses. For example, a chatbot advising a victim of domestic violence may give generic advice that further endangers the individual rather than helping them. In legal or emergency settings, this lack of depth and adaptability makes reliance on chatbots risky, because human judgment is needed to ensure safety and fairness.

Automation also lets chatbots inherit biases from their training data: they can reproduce stereotypes, mislead users or miss signs of crisis. Left entirely unsupervised, they harm the very people they aim to support. To prevent this, human oversight and safeguards are needed. A mental health chatbot should automatically direct users in crisis to hotlines or professionals instead of trying to handle the situation on its own. Government agencies must review responses for accuracy and fairness. Research shows chatbots can increase participation in surveys because they feel less judgmental than humans, but they fail to probe deeply or adapt their questions. Without monitoring, these gaps leave important needs unmet.
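The escalate-to-a-human safeguard described above can be illustrated as a check that runs before any automated reply is sent. This is a deliberately crude sketch: the keyword list and hotline wording are placeholder assumptions, and real crisis screening requires far more than keyword matching.

```python
# Illustrative safeguard: route crisis messages to a human or hotline
# before any automated reply goes out. The terms and hotline text are
# placeholder assumptions, not a production screening method.

CRISIS_TERMS = ("suicide", "self-harm", "hurt myself", "end my life")

HOTLINE_MESSAGE = (
    "It sounds like you may be in crisis. Please contact a crisis "
    "hotline or emergency services; I am connecting you to a person now."
)

def respond(message: str, bot_reply: str) -> str:
    """Override the bot's generated reply with an escalation message
    whenever a crisis term appears; automation never handles these alone."""
    lowered = message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return HOTLINE_MESSAGE
    return bot_reply
```

The point of the sketch is structural: the safety check sits outside the chatbot's own reply generation, so even a flawed or biased model cannot answer a crisis message unsupervised.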

Chatbots are changing how societies deliver services, share knowledge and include marginalised groups. They open education to more people, support mental health outreach and power public awareness campaigns. Yet they cannot manage empathy, complex judgment or nuanced human needs.

The way forward is to treat chatbots as partners and helpers, not as replacements for human services. Paired with human oversight, they can improve efficiency, widen access and reduce inequality. Without safeguards, they risk oversimplifying problems that require real human care. By recognising both their strengths and their limits, we can use chatbots in ways that maximise their benefits while protecting human dignity and trust.
