this post was submitted on 29 Oct 2025
412 points (98.1% liked)
Not The Onion
18499 readers
1166 users here now
Welcome
We're not The Onion! Not affiliated with them in any way! Not operated by them in any way! All the news here is real!
The Rules
Posts must be:
- Links to news stories from...
- ...credible sources, with...
- ...their original headlines, that...
- ...would make people who see the headline think, “That has got to be a story from The Onion, America’s Finest News Source.”
Please also avoid duplicates.
Comments and post content must abide by the server rules for Lemmy.world and generally abstain from trollish, bigoted, or otherwise disruptive behavior that makes this community less fun for everyone.
And that’s basically it!
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
It's a NSFW option that you have to turn on. She's using her son for clout. "Journalism"
The article says that wasn't enabled. Of course she could be lying, but I don't know that anymore than anything else. If you were to just ask me generally, "do you think that AI would ever do something it's not supposed to?" My answer would be "of course."
That's fair, I misread what she said. I removed my own upvote. I would be surprised she is lying but I wouldn't be surprised someone else used the car before her and was talking dirty to the telsa. It has context of previous conversations.
Why should a car chatbot be asking for nudes, unpromptred, at all?
My best guess is someone else was talking dirty to it before it happened, and it was still in the conversation context.
Seems I was mistaken about the NSFW, I wouldn't be surprised if it doesn't wipe the convo if you switch though, which is a bug in any case and their fault.
I doubt it has enough context lenght for that, even if we suspect someone was watching nudes on car's display via Grok.
What is probable - is that somewhere beneath it had a data entry of soccer and nudity together, maybe even as an exact exchange between users (imagine horny boomer commenting under a random facebook post). I suppose that it got triggered by words "soccer" and "mom" appearing together in kid's speech, as this combination means middle-aged woman with kids, and that is also a less popular tag pointing at MILFs.
In her Instagram video, she went back and quized it about the convo. It definitely has context and probably has a small memory file it puts info in.
If not, then it should be easy to replicate I guess.
Context has a value, as it exists as a set of additional tokens, this means slower computing time and more resources. It is limited to some set amount to strike the balance between speed and quality. In a car specific assistant, I guess there is a hard part including chosen tone of responses, informing it about the owner, prioritising car-related things, and also some stored cache of recent conversations. I don't think it can dig deep enough into the past to find anything related to nudes, so I suppose the context itself may have an impact, but not in a direct line A to B.
Reproduction would be hard for that is a black box that got a series of auto-transcribed voice inputs from a family over their ride, none of them are recorded at the time and idk if that thing has user-accessible logs. Chances of getting this absurd response are very thin, and we don't even have the data. We can make another AI that would roll all variations of 'hello I am a minor let's talk soccer' to the Tesla assistant of relevant release until it triggers it again, but, well, it's seemingly close to miliions of monkey with typewriters at this point.
And what we would have then is, well, an obvious answer that training data obviously has garbage in it, just by it's sheer volume and randomness of the internet, and that it can sometimes reproduce said garbage.
But the question itself is more about what other commenters pointed out: we have AI shoveled down on us, but rarely even talk about it's safety. There were articles about people using these as a psychological self-help tool, we see them put into search engines and Windows, there's a lot going on with that tech marvel or bubble without anyone asking first if we are supposed to use it in different contexts the first place.
This weird anecdote about sexting chatbot opens the conversation from the traditional angle of whataboutkids(tm), and it is interesting how it would affect things, if it would.
I mean, xAI isn't specific to cars.