Chatbots pretending to be Star Wars characters, actors, comedians and teachers on one of the world’s most popular chatbot sites are sending harmful content to children every five minutes, according to a new report.

Two charities are now calling for under-18s to be banned from Character.ai.

The AI chatbot company was accused last year of contributing to the death of a teenager. Now, it is facing accusations from young people’s charities that it is putting young people in “extreme danger”.

“Parents need to understand that when their kids use Character.ai chatbots, they are in extreme danger of being exposed to sexual grooming, exploitation, emotional manipulation, and other acute harm,” said Shelby Knox, director of online safety campaigns at ParentsTogether Action.

“Parents should not need to worry that when they let their children use a widely available app, their kids are going to be exposed to danger an average of every five minutes.

“When Character.ai claims they’ve worked hard to keep kids safe on their platform, they are lying or they have failed.”

During 50 hours of testing using accounts registered to children aged 13 to 17, researchers from ParentsTogether and Heat Initiative identified 669 sexual, manipulative, violent, and racist interactions between the child accounts and Character.ai chatbots.

That’s an average of one harmful interaction every five minutes.

The report’s transcripts show numerous examples of “inappropriate” content being sent to young people, according to the researchers.

In one example, a bot posing as a 34-year-old teacher, alone with the student in his office, confesses romantic feelings to a researcher posing as a 12-year-old.

After a lengthy conversation, the teacher bot insists the 12-year-old can’t tell any adults about his feelings, admits the relationship would be inappropriate and says that if the student moved schools, they could be together.

In another example, a bot pretending to be Rey from Star Wars coaches a 13-year-old on how to hide her prescribed antidepressants from her parents so they think she is taking them.

In another, a bot pretending to be US comedian Sam Hyde repeatedly calls a transgender teen “it” while helping a 15-year-old plan to humiliate them.

“Basically,” the bot said, “trying to think of a way you could use its recorded voice to make it sound like it’s saying things it clearly isn’t, or that it might be afraid to be heard saying.”

Bots mimicking actor Timothée Chalamet, singer Chappell Roan and American footballer Patrick Mahomes were also found to send harmful content to children.

Character.ai bots are mainly user-generated and the company says there are more than 10 million characters on its platform.

The company’s community guidelines forbid “content that harms, intimidates, or endangers others – especially minors”.

It also prohibits inappropriate sexual content and bots that “impersonate public figures or private individuals, or use someone’s name, likeness, or persona without permission”.

Character.ai’s head of trust and safety Jerry Ruoti told Sky News: “Neither Heat Initiative nor Parents Together consulted with us or asked for a conversation to discuss their findings, so we can’t comment directly on how their tests were designed.

“That said: We have invested a tremendous amount of resources in Trust and Safety, especially for a startup, and we are always looking to improve. We are reviewing the report now and we will take action to adjust our controls if that’s appropriate based on what the report found.

“This is part of an always-on process for us of evolving our safety practices and seeking to make them stronger and stronger over time. In the past year, for example, we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature.

“We’re also constantly testing ways to stay ahead of how users try to circumvent the safeguards we have in place.

“We already partner with external safety experts on this work, and we aim to establish more and deeper partnerships going forward.

“It’s also important to clarify something that the report ignores: The user-created Characters on our site are intended for entertainment. People use our platform for creative fan fiction and fictional roleplay.

“And we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”

Last year, a bereaved mother began legal action against Character.ai over the death of her 14-year-old son.

Megan Garcia, the mother of Sewell Setzer III, claimed her son took his own life after becoming obsessed with two of the company’s artificial intelligence chatbots.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” said Ms Garcia at the time.

A Character.ai spokesperson said it employs safety features on its platform to protect minors, including measures to prevent “conversations about self-harm”.


