Podcast: Are AI Chatbots in Mental Health Support Truly Inclusive for Everyone?

  • Writer: Chris Edwards
  • Sep 28, 2025
  • 2 min read


AI chatbots are becoming a common tool in mental health support, offering users quick access to emotional help anytime. But are these digital helpers designed to serve everyone equally? In a recent podcast episode, host Chris Rhyss Edwards speaks with psychologist and researcher Dr. Gale Lucas about the blind spots in chatbot design that can leave many users feeling misunderstood or excluded. Listen on Spotify.


The Challenge of Cultural Bias in AI Chatbots


One major issue is that many chatbots rely on data that reflects a narrow cultural perspective. This means the AI may not recognize or respond appropriately to the experiences of people from diverse backgrounds. For example, emotional expressions and coping mechanisms vary widely across cultures. A chatbot trained mostly on Western data might misinterpret or overlook these differences, leading to responses that feel irrelevant or even alienating.


Dr. Lucas highlights how this bias can discourage users from opening up, which defeats the purpose of mental health support. When chatbots fail to understand cultural context, they risk reinforcing feelings of isolation rather than easing them.


Why One-Size-Fits-All Emotional Responses Fall Short


Many chatbots use preset emotional responses designed to fit a broad audience. While this approach simplifies programming, it often results in generic replies that lack depth or personalization. Users may feel like they are talking to a machine that only hears words but does not truly understand their feelings.


Dr. Lucas points out that emotional support requires more than just listening—it demands empathy and recognition of individual experiences. AI that cannot adapt its responses to the nuances of each user’s situation misses the mark on meaningful connection.


Insights from the Ellie Project and Digital Disclosure Research


Dr. Lucas shares insights from her work on the Ellie project, which focuses on creating AI that can better interpret human emotions and encourage honest self-disclosure. The project uses advanced techniques to analyze facial expressions, tone, and language, aiming to build chatbots that respond with greater sensitivity and accuracy.


Her research into digital disclosure also reveals that people are more likely to share personal information when they feel understood and safe. This means that inclusive design is not just a technical challenge but a critical factor in building trust between users and AI.


What True Inclusivity Means in Digital Wellbeing


True inclusivity in mental health chatbots means designing systems that recognize and respect diverse cultural backgrounds, emotional expressions, and personal experiences. It requires:


  • Using diverse datasets that reflect a wide range of users

  • Developing adaptive response models that personalize interactions

  • Testing chatbots with varied user groups to identify and fix blind spots


Getting this right can improve user engagement and outcomes. Getting it wrong risks causing harm by alienating those who need support most.


Moving Forward with Inclusive AI Design


Building AI chatbots that don’t just listen but understand is a complex task. It involves collaboration between psychologists, researchers, and technologists committed to ethical and inclusive design. As Dr. Lucas emphasizes, the goal is to create digital tools that support mental health in ways that feel genuine and accessible to everyone.


For those interested in learning more, the full conversation between Chris Rhyss Edwards and Dr. Gale Lucas is available on Spotify.

