OpenAI users are getting emotionally attached to ChatGPT's voice assistant, and it's risky
OpenAI has flagged concerns about how Voice Mode might affect users.
OpenAI has released its findings in a comprehensive technical document known as a System Card.
As per the System Card for GPT-4o, there is a range of risks associated with the model.
OpenAI recently launched Voice Mode for ChatGPT's GPT-4o model. Not long after it was unveiled, concerns have emerged about the feature: OpenAI itself has flagged how it might impact users, and how deeply AI could become integrated into a person's life as a result.
OpenAI tested the feature under different settings and has released its findings in a comprehensive technical document known as a System Card. In it, the company discusses the potential danger of users becoming emotionally attached to AI, as well as other security vulnerabilities.
Why is it risky for users to get emotionally attached to ChatGPT's voice assistant?
As per the System Card for GPT-4o, the model carries a range of risks, including the potential to exacerbate societal biases, spread misinformation, and even aid in the creation of harmful biological or chemical agents. In the same report, OpenAI also shares the results of its extensive testing to prevent the AI from escaping its constraints, engaging in deceptive behaviour, or developing harmful plots.
The report highlights a phenomenon it calls "Anthropomorphization and Emotional Reliance": users may attribute human-like qualities to AI, particularly when it speaks with a human-like voice. This could lead users to form emotional bonds with the AI, reducing their need for human interaction and possibly affecting healthy relationships.
OpenAI's own researchers noted instances of users expressing sentimental attachment to the AI, such as saying, "This is our last day together", which points to a risk of emotional dependence on AI.
As per the company, there is also a risk of "jailbreaking" Voice Mode through audio inputs, which could allow the model to bypass its safeguards and produce unintended outputs, such as mimicking specific voices, interpreting emotions, or even adopting the user's own voice. This raises privacy and security concerns.
To address these risks, OpenAI has implemented various safety measures and mitigation strategies throughout the development and deployment of GPT-4o. But do you think we are really ready for this sort of AI revolution?
Mustafa Khan