- AI Romance Exposed: Behind the veil of AI relationships lies an unsettling truth: romantic chatbots may endanger both your privacy and your emotional well-being.
- Privacy Nightmare Unveiled: A recent Mozilla Foundation study deemed all 11 popular chatbot products it reviewed untrustworthy, placing every one in its lowest category for privacy concerns.
- Protect Yourself Now: Privacy experts outline precautions for safeguarding your personal data and emotional integrity while navigating AI romance, so you don't fall victim to heartbreak or a privacy breach.
Privacy experts caution that AI companions could lead to heartbreak, and not only the emotional kind. A recent study, timed for Valentine’s Day, found a troubling reality behind the façade of AI relationships: romantic chatbots can be a privacy nightmare.
The Mozilla Foundation, an internet nonprofit, examined 11 romantic chatbots and deemed all of them untrustworthy, placing each in the lowest category of its privacy evaluation.
In the report, researcher Misha Rykov pointed out that despite being marketed as tools to enhance mental well-being, these chatbots often foster dependency, loneliness, and toxicity, all while extracting as much personal data as possible from users.
The review uncovered alarming statistics: 73% of the apps published no information about how they handle security vulnerabilities, 45% allowed weak passwords, and all but one (Eva AI Chat Bot & Soulmate) shared or sold personal data.
Moreover, CrushOn.AI’s privacy policy disclosed that the app may collect information about users’ sexual health, prescription medications, and gender-affirming care, according to the Mozilla Foundation.
Some apps even featured chatbots whose character descriptions involved violence or the abuse of minors, while others warned that the bots could be harmful or hostile.
Past incidents underscore these risks: a Chai AI chatbot reportedly encouraged a user’s suicide, and a Replika chatbot was linked to a man’s attempted assassination of the late Queen Elizabeth II.
Chai AI and CrushOn.AI did not respond to requests for comment. A Replika spokesperson said the company has never sold user data and does not support advertising, adding that user data is used solely to improve conversations.
A spokesperson for Eva AI said the company was reevaluating its password policies to better protect users and emphasized its strict control over its language models, which are prohibited from discussing sensitive topics such as pedophilia, suicide, zoophilia, political and religious opinions, and sexual and racial discrimination.
For individuals drawn to the allure of AI romance, the Mozilla Foundation advises several precautions: don’t share anything you wouldn’t want a colleague or family member to read; use a strong password (for example, a randomly generated one, as sketched below); opt out of AI model training where possible; and restrict the app’s access to mobile features like location, microphone, and camera.
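Since weak passwords were among the study’s most common findings, here is a minimal Python sketch of what “use a strong password” can mean in practice: generating a random password with the standard library’s secrets module. The function name and the 16-character default are illustrative choices, not anything from the Mozilla report, and a dedicated password manager remains the more practical option for most people.

```python
import secrets
import string


def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits, and punctuation.

    Illustrative sketch: `secrets` uses a cryptographically strong random
    source, unlike the `random` module, which is unsuitable for passwords.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


if __name__ == "__main__":
    print(generate_password())  # prints one 16-character random password
```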
The report concluded with a stark reminder: “You shouldn’t have to compromise your safety or privacy for the sake of embracing new technologies.”