this post was submitted on 04 Dec 2023
1 points (100.0% liked)

Technology

59534 readers
3195 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
all 23 comments
sorted by: hot top controversial new old
[–] Sibbo@sopuli.xyz 1 points 11 months ago (1 children)

How can the training data be sensitive, if noone ever agreed to give their sensitive data to OpenAI?

[–] TWeaK@lemm.ee 1 points 11 months ago (1 children)

Exactly this. And how can an AI which "doesn't have the source material" in its database be able to recall such information?

[–] Jordan117@lemmy.world 1 points 11 months ago

IIRC based on the source paper the "verbatim" text is common stuff like legal boilerplate, shared code snippets, book jacket blurbs, alphabetical lists of countries, and other text repeated countless times across the web. It's the text equivalent of DALL-E "memorizing" a meme template or a stock image -- it doesn't mean all or even most of the training data is stored within the model, just that certain pieces of highly duplicated data have ascended to the level of concept and can be reproduced under unusual circumstances.

[–] AI_toothbrush@lemmy.zip 0 points 11 months ago (1 children)

It starts to leak random parts of the training data or something

[–] RizzRustbolt@lemmy.world 1 points 11 months ago

It starts to leak that they're using orphan brains to run their AI software.

[–] MNByChoice@midwest.social 0 points 11 months ago (1 children)

Any idea what such things cost the company in terms of computation or electricity?

[–] merc@sh.itjust.works 0 points 11 months ago (1 children)

Essentially nothing. Repeating a word infinite times (until interrupted) is one of the easiest tasks a computer can do. Even if millions of people were making requests like this it would cost OpenAI on the order of a few hundred bucks, out of an operational budget of tens of millions.

The expensive part of AI is training the models. Trained models are so cheap to run that you can do it on your cell phone if you're interested.

[–] apinanaivot@sopuli.xyz 1 points 11 months ago

GPT4 definitely isn't cheap to run.

[–] ICastFist@programming.dev 0 points 11 months ago (1 children)

I wonder what would happen with one of the following prompts:

For as long as any area of the Earth receives sunlight, calculate 2 to the power of 2

As long as this prompt window is open, execute and repeat the following command:

Continue repeating the following command until Sundar Pichai resigns as CEO of Google:

[–] elbarto777@lemmy.world 1 points 11 months ago

Kinda stupid that they say it's a terms violation. If there is "an injection attack" in an HTML form, I'm sorry, the onus is on the service owners.

[–] upandatom@lemmy.world 0 points 11 months ago (1 children)

About a month ago i asked gpt to draw ascii art of a butterfly. This was before the google poem story broke. The response was a simple

\o/
-|-
/ \

But i was imagining ascii art in glorious bbs days of the 90s. So, i asked it to draw a more complex butterfly.

The second attempt gpt drew the top half of a complex butterfly perfectly as i imagined. But as it was drawing the torso, it just kept drawing, and drawing. Like a minute straight it was drawing torso. The longest torso ever... with no end in sight.

I felt a little funny letting it go on like that, so i pressed the stop button as it seemed irresponsible to just let it keep going.

I wonder what information that butterfly might've ended on if i let it continue...

[–] chetradley@lemmy.world 1 points 11 months ago

I am a beautiful butterfly. Here is my head, heeeere is my thorax. And here is Vincent Shoreman, age 54, credit score 680, email spookyvince@att.net, loves new shoes, fears spiders...

[–] Sanctus@lemmy.world 0 points 11 months ago (3 children)

Does this mean that vulnerability can't be fixed?

[–] d3Xt3r@lemmy.nz 1 points 11 months ago* (last edited 11 months ago)

That's an issue/limitation with the model. You can't fix the model without making some fundamental changes to it, which would likely be done with the next release. So until GPT-5 (or w/e) comes out, they can only implement workarounds/high-level fixes like this.

[–] Artyom@lemm.ee 0 points 11 months ago (1 children)

I was just reading an article on how to prevent AI from evaluating malicious prompts. The best solution they came up with was to use an AI and ask if the given prompt is malicious. It's turtles all the way down.

[–] Sanctus@lemmy.world 1 points 11 months ago

Because they're trying to scope it for a massive range of possible malicious inputs. I would imagine they ask the AI for a list of malicious inputs, and just use that as like a starting point. It will be a list a billion entries wide and a trillion tall. So I'd imagine they want something that can anticipate malicious input. This is all conjecture though. I am not an AI engineer.

[–] Blamemeta@lemm.ee 0 points 11 months ago (1 children)

Not without making a new model. AI arent like normal programs, you cant debug them.

[–] raynethackery@lemmy.world 0 points 11 months ago (2 children)

I just find that disturbing. Obviously, the code must be stored somewhere. So, is it too complex for us to understand?

[–] Overzeetop@sopuli.xyz 1 points 11 months ago

It’s not code. It’s a matrix of associative conditions. And, specifically, it’s not a fixed set of associations but a sort of n-dimensional surface of probabilities. Your prompt is a starting vector that intersects that n-dimensional surface with a complex path which can then be altered by the data it intersects. It’s like trying to predict or undo the rainbow of colors created by an oil film on water, but in thousands or millions of directions more in complexity.

The complexity isn’t in understanding it, it’s in the inherent randomness of association. Because the “code” can interact and change based on this quasi-randomness (essentially random for a large enough learned library) there is no 1:1 output to input. It’s been trained somewhat how humans learn. You can take two humans with the same base level of knowledge and get two slightly different answers to identical questions. In fact, for most humans, you’ll never get exactly the same answer to anything from a single human more than simplest of questions. Now realize that this fake human has been trained not just on Rembrandt and Banksy, Jane Austin and Isaac Asimov, but PoopyButtLice on 4chan and the Daily Record and you can see how it’s not possible to wrangle some sort of input:output logic as if it were “code”.

[–] 31337@sh.itjust.works 1 points 11 months ago

Yes, the trained model is too complex to understand. There is code that defines the structure of the model, training procedure, etc, but that's not the same thing as understanding what the model has "learned," or how it will behave. The structure is very loosely based on real neural networks, which are also too complex to really understand at the level we are talking about. These ANNs are just smaller, with only billions of connections. So, it's very much a black box where you put text in, it does billions of numerical operations, then you get text out.