[–] aarch0x40@piefed.social 36 points 1 week ago

Which is exactly the argument to make when you can’t control your product or don’t want to.

[–] lemmy_outta_here@lemmy.world 14 points 1 week ago* (last edited 1 week ago) (1 children)

And I bet teens are going to go on violating the TOS. Maybe we'd better restrict AI to people who could plausibly read and understand the TOS, if your product is so dangerous that lifesaving instructions are contained within it.

[–] Grimy@lemmy.world 8 points 1 week ago (1 children)

You mean like with some kind of government app that scans your face or takes your ID? I guess open-source LLMs you can run at home will have to be made illegal, or at least require special permits.

[–] lemmy_outta_here@lemmy.world -5 points 1 week ago (1 children)
[–] village604@adultswim.fan 9 points 1 week ago

The things they were saying are bad.

[–] Smorty@lemmy.blahaj.zone 8 points 1 week ago

evil company do evil and say evil thing-

So if a teen shoots up a school even though the gun manufacturer says not to shoot up schools, suddenly we don't have a gun-control problem but a teens-don't-read-the-TOS problem.

Yeah, that sounds right for the USA.

[–] W3dd1e@lemmy.zip 2 points 1 week ago

The transcripts from that ChatGPT log are bad. OpenAI is fucked if they can't settle.

[–] Die4Ever@retrolemmy.com 1 points 1 week ago* (last edited 1 week ago)

This isn't the first time ChatGPT has had this issue. ChatGPT supports "tools", which means the LLM can call a traditionally programmed function to do things like search the internet or do math. Why don't they just add a tool that lets it report/flag a chat? Everyone knows ChatGPT already isn't private. The prompt could tell it to report/flag chats about suicide or illegal activity; I think the LLM would pretty reliably report them before they got very far.
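
A minimal sketch of what that could look like with OpenAI's chat-completions function-calling API. The `flag_conversation` tool, its parameters, and the category list are hypothetical illustrations of the idea, not anything OpenAI is confirmed to ship:

```python
# Hypothetical sketch: expose a "flag_conversation" tool so the model can
# report concerning chats for human review. The tool name, parameters, and
# categories are assumptions for illustration, not a real OpenAI feature.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "flag_conversation",  # hypothetical tool name
        "description": "Flag the current chat for human review when it "
                       "involves self-harm, violence, or illegal activity.",
        "parameters": {
            "type": "object",
            "properties": {
                "category": {
                    "type": "string",
                    "enum": ["self_harm", "violence", "illegal_activity"],
                },
                "reason": {"type": "string"},
            },
            "required": ["category", "reason"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "If the user discusses suicide or illegal activity, "
                    "call flag_conversation before replying."},
        {"role": "user", "content": "..."},
    ],
    tools=tools,
)

# If the model decided to flag the chat, the tool call shows up here and
# the backend could route the conversation to a human reviewer.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```

Whether the model calls the tool reliably enough is an open question, but the plumbing for it already exists.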

I guess this case started back in April though, so maybe they've already done this.

I asked ChatGPT what tools it has, but it's possible there are some it isn't allowed to talk about.

tools

Yes — I support a number of built-in tools. Here is the complete list of tools available in this environment:

All Available Tools

1. `web`: Search the internet or open webpages. Used for fresh, factual, time-sensitive information.

2. `python`: Run private Python code (not visible to the user). Used for internal reasoning, analysis, parsing, etc.

3. `python_user_visible`: Run Python code with visible output (charts, tables, files such as PDF, DOCX, XLSX).

4. `canmore`: Create and update canvas documents (markdown, code, multi-file-style editing).

5. `bio`: Memory system; store or forget user information when requested.

6. `api_tool`: A file-system-like abstraction for discovering and invoking additional tools. (Meta-tool.)

7. `image_gen`: Generate or edit images based on descriptions.

8. `container`: Run commands inside a sandboxed shell-like environment.


If you want, I can also describe what each tool is best for, or show examples of how they’re used.

https://chatgpt.com/share/692f1cc7-3a80-8010-afa2-6576474fae38