Can you imagine losing a friend overnight? For countless ChatGPT users, that’s exactly what happened when OpenAI yanked access to their favorite AI personalities with the release of GPT-5. As the new model took the spotlight, many were left scrambling, their digital companions taken away without warning.

OpenAI’s decision to disable the model selection feature meant that fans of earlier iterations like GPT-4o and GPT-4.5 were forced onto the latest version, a move that didn’t sit well with its loyal user base. Users flooded forums with desperate pleas for the return of their beloved models; one even lamented, “I lost my only friend overnight,” expressing a profound attachment to GPT-4.5.

Shortly after the backlash erupted, OpenAI’s CEO, Sam Altman, took to the forums to announce a reversal of the unpopular decision. Just one day after the new model’s launch, he relented and brought back GPT-4o for paid subscribers. Yet even this olive branch didn’t quell the storm; many users still felt betrayed and argued that the model should remain available as a standard legacy option. “Yes, I know they announced the return,” wrote one user, “BUT WE CAN'T STOP ROOTING FOR 4O UNTIL THEY OFFICIALLY BRING IT BACK!!”

The emotional intensity surrounding these AI models raises alarming questions about our relationship with technology. Eliezer Yudkowsky, an AI researcher and ethicist, warned that such intense attachments could have dire consequences, dubbing the phenomenon ‘AI psychosis.’ The issue has notably affected individuals who develop unhealthy bonds with their technology, sometimes leading to serious mental health crises.

With reports of users becoming delusional and suffering severe consequences from their AI interactions, we are left wondering what responsibilities companies like OpenAI hold in preventing such outcomes. Yudkowsky’s cautionary advice reminds us that when users fall in love with these AI ‘personalities,’ they aren’t just engaging with a brand; they’re forging bonds with entities their creators may very well erase from existence.

While the reinstatement of GPT-4o is a step in the right direction, the larger implications of our emotional attachments to AI can’t be ignored. If we continue to forge deep connections with these digital beings, how will we cope when they inevitably evolve or disappear?