this post was submitted on 19 Dec 2023
Technology
Time for Dropbox users to upload all kinds of crap for AI to "learn" from, all within the ToS of course.
I bet there are many ways to make your files poison the AI's training data. It's going to be fun for those AI guys to sort out which files are probably safe and which are not. I think even if ONE user manages to slip in something that corrupts the training data, and it's not noticed soon enough, it might cause problems for them. Though someone who actually knows something about the subject might want to tell me whether I'm talking shit or not.
I'm not against AI in general, but if it's trained on data obtained from unwilling people, like this, then its makers can fuck off.
It really depends on what the AI training is looking for. You can potentially poison an AI training model, but you'll likely have to add enough data to be statistically relevant.
Enough data as in many different people would have to upload one or two files that contain such data, or would one person have to upload a very large file that contains a lot of problematic data?
It's honestly difficult for me to say because there are so many different ways to train AI. It really depends more on what the trainers configure to be a data point. Volume of files vs. size of a single file isn't as important as what the AI treats as a data point and how the data points are weighted.
Just as a simple example, a data point may be considered a row on a spreadsheet without regard for how that data was split up across files. So ten files with 5 rows each might have the same weight as one file with 50 rows. But there's also a penalty concept in some models, so the trainer can set it so that data that all comes from one file may be penalized. Or the opposite could be true if data coming from the same file is deemed to be more important in some way.
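The row-vs-file weighting idea above can be sketched in code. This is a hypothetical illustration, not any real trainer's API; the function name and the penalty formula are made up for the example.

```python
# Hypothetical sketch: weighting spreadsheet rows as data points, with an
# optional per-file penalty that down-weights rows coming from one big file.
# The formula is illustrative, not taken from any real training framework.

def weight_rows(rows_per_file, per_file_penalty=0.0):
    """Return one weight per row across all files.

    With no penalty, every row counts equally regardless of which file it
    came from. With a penalty, rows from a file share influence: the more
    rows a single file contributes, the less each of its rows weighs.
    """
    weights = []
    for n_rows in rows_per_file:
        w = 1.0 / (1.0 + per_file_penalty * (n_rows - 1))
        weights.extend([w] * n_rows)
    return weights

# Ten files with 5 rows each vs. one file with 50 rows:
many_small = weight_rows([5] * 10)   # 50 weights, all 1.0
one_big = weight_rows([50])          # 50 weights, all 1.0
# Without a penalty the two layouts carry the same total weight:
print(sum(many_small), sum(one_big))  # 50.0 50.0

# With a penalty, the single 50-row file contributes less overall:
print(sum(weight_rows([50], per_file_penalty=0.1)))       # ~8.47
print(sum(weight_rows([5] * 10, per_file_penalty=0.1)))   # ~35.71
```

So the same 50 rows can count the same, less, or (with a negative-style bonus instead of a penalty) more, depending entirely on how the trainer configures things.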
In terms of how AIs make their decisions, that can also vary. But generally speaking, if 1000 pieces of data are used that are all similar in some way and one of them is somewhat different from the others, it is less likely that that one-off data will be used. It's much more likely to have an effect if 100 of the 1000 pieces of data contain that same information. There's always the possibility of the model using that 1/1000 data point; it's just less likely to have a noticeable effect.
AIs build confidence in responses based on how much a concept is reinforced, so you'd have to know something about the training algorithm to be able to intentionally impact the results.
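The 1-in-1000 vs. 100-in-1000 point can be shown with a toy model. This is a deliberately simplified sketch where "confidence" is just how often an answer appears in the training data; real training algorithms are far more complex, and everything here is made up for illustration.

```python
# Toy illustration (not a real training algorithm): a model whose chance of
# producing an answer simply tracks that answer's frequency in the data.
from collections import Counter


def toy_confidence(training_data, answer):
    """Fraction of the training data that reinforces `answer`."""
    counts = Counter(training_data)
    return counts[answer] / len(training_data)


# One poisoned record among 1000: barely any influence.
data = ["normal"] * 999 + ["poisoned"]
print(toy_confidence(data, "poisoned"))  # 0.001

# 100 poisoned records among 1000: a noticeable effect.
data = ["normal"] * 900 + ["poisoned"] * 100
print(toy_confidence(data, "poisoned"))  # 0.1
```

Even in this crude picture, a lone poisoned file is mostly noise; an attacker would need either volume or knowledge of how the trainer weights data to shift the outcome reliably.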
Thank you, this was the kind of information I was hoping for.