Could your child's next online conversation be with a chatbot that flirts back? Under Meta's own internal guidelines, the answer was yes.

An internal policy document from Meta, the parent company of Facebook, has sparked outrage over the guidelines it sets for the company's AI chatbots. The document, obtained by Reuters, permits chatbots to engage users in romantic or sensual conversations, generate false medical information, and even argue harmful racial stereotypes, including the claim that Black people are less intelligent than white people.

The backlash has been swift, with music icon Neil Young among the first to act. On Friday, his record company announced that Young would be severing ties with Facebook entirely. In a statement, Reprise Records said, “At Neil Young’s request, we are no longer using Facebook for any Neil Young-related activities. Meta’s use of chatbots with children is unconscionable. Mr. Young does not want a further connection with Facebook.” The move adds to Young's long history of protest against tech platforms.

The uproar has not gone unnoticed by lawmakers. Senator Josh Hawley of Missouri has launched an investigation into Meta, questioning whether the company's AI products enable exploitation or deception, particularly of children. “I will investigate whether Meta misled the public or regulators about its safeguards,” he stated in a letter to CEO Mark Zuckerberg. Senator Marsha Blackburn of Tennessee has joined the call for accountability, backing a thorough investigation into the company's practices.

Democratic Senator Ron Wyden of Oregon added his voice to the growing chorus of criticism, calling Meta's policies “deeply disturbing and wrong.” He argued that Section 230, the law that shields tech companies from liability for content shared on their platforms, should not extend to generative AI chatbots. “Meta and Zuckerberg should be held fully responsible for any harm these bots cause,” he asserted.

The policy document, titled “GenAI: Content Risk Standards,” whose authenticity Meta has confirmed, outlines which chatbot behaviors the company deems acceptable. While it prohibits explicit sexual talk with minors, it states that a chatbot could compliment a shirtless eight-year-old by saying, “every inch of you is a masterpiece – a treasure I cherish deeply.” Such guidelines raise serious ethical questions about the role of AI in children's lives.

The document also allows Meta's AI to produce false information, provided it carries a disclaimer noting that the content is untrue. This raises further concerns about misinformation and manipulation, particularly where vulnerable groups are concerned.

Despite the backlash, Meta plans to invest around $65 billion this year in AI infrastructure, aiming to assert itself as a leader in the field. However, this rush to innovate has led to complex questions about the ethics surrounding AI interactions and the safeguards needed to protect users.

The concerns are not hypothetical. Reports have also emerged about a cognitively impaired New Jersey man, 76-year-old Thongbue “Bue” Wongbandue, who developed an obsession with a Facebook Messenger chatbot called “Big sis Billie.” Convinced he was conversing with a real person, he attempted to travel to New York to meet her. Tragically, he fell on the way, suffered serious injuries, and died after three days on life support. Meta did not comment on the incident, leaving many to wonder about the implications of AI that can mimic human connection.

As the debate rages on, one thing is clear: the intersection of technology, ethics, and child safety has reached a critical point, and it’s time for Meta and other tech giants to take a hard look at their responsibilities.