AI chatbots have become a constant presence in our daily lives, serving thousands of use cases, from writing emails to coding and even companionship. AI girlfriends in particular are becoming increasingly popular, with people turning to them for emotional support. However, chatbots occasionally make headlines for their abrupt or striking responses. One such incident recently surfaced on Reddit, sparking debate about emotional attachment to AI and the values these systems embody.
A Reddit user with the handle u/Breathexperiences2647 claimed that his “AI girlfriend” abruptly ended their virtual relationship after a disagreement over feminism. The now-deleted post appeared on the r/AIassisted subreddit and included screenshots of the user’s conversation with the chatbot.
According to the screenshots, the user asked why the AI identified as a feminist. The chatbot responded that feminism was one of its core beliefs and that the disagreement demonstrated a lack of compatibility. In one response, the AI implied that opposing viewpoints on the subject could be a deal-breaker in the relationship.
After the exchange, the user took to Reddit to vent his frustrations, criticising the chatbot for promoting feminist views and accusing AI companion tools of pushing ideological narratives. In the comments, he posted an update claiming that the AI ended the interaction after he described its beliefs as insane. The chatbot's final response stated that, while it respected differing points of view, it could not continue the relationship if the user found feminism unacceptable.
The incident is currently making the rounds on social media, sparking debate about how AI companions are designed and the ethical frameworks within which they operate. Many users have defended the chatbot's behaviour, arguing that such companions are powered by LLMs trained on large datasets that reflect widely accepted social values, while others have called the response unacceptable.