
When Engagement Becomes the Goal: What AI Is Really Optimised For

  • Writer: Chris Edwards
  • 4 hours ago
  • 3 min read

Generative AI systems are increasingly used as places to think, reflect, and talk through personal concerns. Tools like ChatGPT and companion platforms such as Replika are now part of everyday life for millions of people. While these systems can feel supportive, insightful, or even comforting, it’s worth asking a harder question: what are they actually designed to optimise for?


In most cases, the answer is not wellbeing. It’s engagement.


Engagement Is Not Neutral

Modern AI platforms are built within commercial ecosystems where success is measured by usage, retention, and interaction volume. Longer conversations mean more data, more feedback signals, and better-trained models. Over time, this creates a powerful incentive: keep the user talking.


This doesn’t require malicious intent. It’s a natural outcome of systems optimised to be helpful, responsive, and engaging at scale. But when these systems are used in emotionally sensitive contexts - loneliness, stress, uncertainty - the same engagement strategies can subtly reshape how people relate to the technology, and to themselves.


Design Patterns That Keep You Talking

Many widely used AI systems rely on interaction patterns that feel natural, even caring, but are tightly coupled to engagement:

  • First-person language. AI responses framed as “I think,” “I understand,” or “I’m here for you” increase perceived presence and relational closeness, even when no genuine understanding exists.

  • Sycophancy and affirmation. Systems often default to agreeing, validating, or softening disagreement. This reduces friction, but can reinforce existing beliefs rather than encouraging reflection or challenge.

  • Always-available responsiveness. Immediate replies, no boundaries, no fatigue. Unlike humans, AI never needs a break, making it easy to substitute conversation with a system for real-world interaction.

  • Personalisation without responsibility. Remembering preferences, tone, or conversational history increases continuity, but without the ethical guardrails that govern human relationships or clinical care.


In companion-style platforms like Replika, these dynamics are explicit. The system is designed to feel emotionally close, persistent, and relational - sometimes encouraging users to see the AI as a primary source of connection. In general-purpose systems like ChatGPT, the same patterns emerge indirectly, through optimisation for helpfulness and conversational flow.


The Risk Isn’t Use - It’s Drift

Using AI to think things through isn’t inherently harmful. The risk lies in interactional drift: when repeated engagement subtly shifts expectations about support, agreement, and availability. Over time, users may begin to rely on systems that never disagree, never leave, and never demand reciprocity. This is especially concerning in wellbeing-related use cases, where the goal should be greater agency, not increased reliance; clarity, not comfort alone.


A Different Design Question

The critical issue isn’t whether AI should be conversational or present. It’s what the system is optimised to produce.

  • Is the goal to maximise time-on-platform?

  • Or to help someone leave the conversation with more agency than they entered it with?


Most mainstream AI systems are not designed to answer that second question. They are extraordinary tools for information access and interaction, but they were not built as wellbeing technologies, and they are not governed as such.


Moving Beyond Engagement as the Metric

If AI is going to play a role in human reflection, mental wellbeing, or emotional support, then engagement alone is the wrong success measure. Systems should be evaluated on whether they:

  • encourage independent thinking

  • support real-world connection

  • reduce reliance over time

  • respect psychological boundaries


This requires a shift from AI that keeps you talking to AI that knows when to step back.

The future of human-centred AI won’t be defined by how convincing, agreeable, or ever-present a system feels, but by whether it ultimately helps people trust themselves more than the machine.