Your eyes are fine. It's AI that can't be trusted.
But what if your eyes were secretly replaced by AI eyes that replace everything you see with real-time generated imagery?
that's some serious Laughing Man stuff right there
Then it's the guy who replaced your eyes that's untrustworthy.
But what if he was tricked into replacing them by his own AI eyes. It's untrustworthy eyes all the way down
until you get to the Techno-Necromancers of Alpha Centauri!
videos need to be cryptographically signed and able to be verified. all news outlets should do this.
Cryptographic signatures are something we should have been normalizing for awhile now.
I remember during the LTT Linux challenge, at one point they were assigned the task "sign a PDF." Linus interpreted this as PGP sign the document, which apparently Okular can do but he didn't have any credentials set up. Luke used some online tool to photoshop an image of his handwriting into the document.
agreed. having a cryptography mark on the file and relying on chain of trust is the way.
The NFTs tried to solve this problem already and it didn't work. You can change the hash/sig of a video file by just changing one pixel on one frame, meaning you just tricked the computer, not the people who use it.
By changing one pixel it's no longer signed by the original author. What are you trying to say?
Exactly that, if I change a pixel then the cryptographic signature breaks
so try again? also: if a pixel changes then it isn't the original source video, by definition. being able to determine that it has been altered is entirely the point.
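The one-pixel point can be made concrete. Here is a minimal Python sketch using only the standard library, with HMAC standing in for a real public-key signature scheme (a news outlet would actually use something asymmetric like Ed25519 so anyone can verify without holding the secret key):

```python
import hashlib
import hmac

# Stand-in signing key; a real scheme would use an asymmetric key pair.
SIGNING_KEY = b"news-outlet-secret-key"

def sign(data: bytes) -> bytes:
    """Sign the SHA-256 digest of the data (HMAC as a sketch)."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def verify(data: bytes, signature: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign(data), signature)

video = bytearray(b"\x00" * 1024)   # pretend frame buffer
sig = sign(bytes(video))

assert verify(bytes(video), sig)      # original footage verifies
video[500] ^= 0x01                    # flip one bit of "one pixel"
assert not verify(bytes(video), sig)  # altered footage no longer verifies
```

As the thread says, this only proves the file was altered since signing; it says nothing about whether the original footage depicted reality.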
The point was to sign AI footage so you know what's fake. NFTs can be used as a decentralized repository of signatures. You could realistically require the companies to participate, but the idea doesn't work because you can edit footage so it doesn't match the signature. More robust signatures exist, but none is good enough, especially since the repo would have to be public.
Signing real footage makes even less sense. You'd have to trust everybody and their uncle's signature.
The signing keys could be published to DNS, for better or worse.
What would that solve? NFTs don't have to be power-hungry proof of work; that was just for the monkeys. The public-ledger part of this is not the problem.
Videos now basically carry the same weight as words, no longer a "smoking gun". Video becomes like eyewitness testimony; well, slightly better, since it protects against misremembering, or against witnesses with an inadequate lexicon who can't clearly articulate what they saw. The process will become: get the witness to testify that they had possession of the camera, were recording at the time of the incident, and believe the video presented in court is genuine and has not been altered; then it's basically a video version of their eyewitness testimony. The credibility of the video is now tied to the witness's or camera-person's own credibility and should not be evaluated as independent evidence. The jury should treat the video as the witness's own words, meaning they should factor in the possibility that the witness faked it.
A video you see on the internet is now just as good as just a bunch of text, both equally unreliable.
We live in a post-truth world now.
> Videos now basically carry the same weight as words...
>
> We live in a post-truth world now.
It's interesting that you start with a bold statement that is IMHO correct, namely that what was once taken as unquestionable truth now isn't, and that it's not new, just yet another medium, but still conclude that it's different.
Arguably we were already in a post-truth world, always have been; it just now extends to a medium we considered too costly to fake. The principle is still the same.
In the Middle Ages people believed in creatures nobody had ever seen. And the legal systems and the concepts of knowledge were not very good.
And still the latter evolved to become better long before people started recording sounds to wax cylinders and shooting photos.
> In the Middle Ages people believed in creatures nobody had ever seen
FWIW, even centuries later, in Linnaeus's time, people were still actually looking for unicorns.
Some people are still looking for yetis and aliens and mountain lake dragons.
I'm just thinking: people thought Americans were faking the moon landing; we've always had conspiracy theorists. AI just spins them out faster and sloppier. I'd rather go back to humans lying to humans than a computer, taught by humans to lie and advertise, doing the same thing.
And that's perfect, that's the world that made all the due process and similar things evolve.
There's never been such a thing as independent evidence. The medium has always mattered. And when people started believing this is no more true, we've almost gotten ourselves a new planetary fascist empire, I hope we're still in time to stop that.
Someone doesn't know what mockumentary or docufiction is. There were lots of fake videos way before AI. This is just amplification because of better accessibility.
It's the accessibility and scale that's scary now. Anyone will be able to make convincing fakes of anything from their couch during an ad break on TV. The internet will be essentially useless for getting any useful information because the garbage will outnumber everything else by a million to one.
Over the years the commercial web has been slowly transforming from education and news into a video entertainment platform; this is just the next step. I hope AI slop will accelerate the transition towards decentralized, federated trust-ring networks. I also hope it will destroy, or at least seriously damage, the current internet/cloud monopolies: Google, Meta, Amazon, Microsoft. Maybe public knowledge will be harmed, but people will always find a way to pass information without slop.
You see the same panic about 3D-printed guns. It's not that difficult to make a gun at home; 3D printers just make it somewhat easier.
Maybe the NYT's headline writers' eyes weren't that great to begin with?
The tech could represent the end of visual fact (the idea that video could serve as an objective record of reality) as we know it.
We already declared that with the advent of photoshop. I don't want to downplay the possibility of serious harm being a result of misinformation carried through this medium. People can be dumb. I do want to say the sky isn't falling. As the slop tsunami hits us we are not required to stand still, throw our hands in the air, and take it. We will develop tools and sensibilities that will help us not to get duped by model mud. We will find ways and institutions to sieve for the nuggets of human content. Not all at once but we will get there.
This is fear mongering masquerading as balanced reporting. And it doesn't even touch on the precarious financial situations the whole so-called AI bubble economy is in.
To no longer be able to trust video evidence is a big deal. Sure the sky isn't falling, but this is a massive step beyond what Photoshop enabled, and a major powerup for disinformation, which was already winning.
You couldn't "trust" video before sora et al. We had all these sightings of aliens and flying saucers - which stopped conveniently having an impact when everybody started carrying cameras around.
There will be a need to verify authenticity and my prediction is that need will be met.
All those tech CEOs meeting up with Trump makes me think this is a major reason for pouring money into this technology. Any time Trump says "fake news", he can just say it is AI.
What you end up stuck doing is deciding to trust particular sources. That makes it a lot harder to establish a shared reality.
The real danger is the failing trust in traditional news sources and the attack on the truth from the right.
People have been believing what they want regardless of if they see it for a long time and AI will fuel that but is not the root of the problem.
Traditional news sources became aggregators of actual news sources and open source Intel, and have made "embellishing" the norm. Stock/reused visuals, speculating minutes into events, etc etc
It is increasingly faked. The right just pretends that means the lies that feel "good" are the truth.
> The tech could represent the end of visual fact (the idea that video could serve as an objective record of reality) as we know it.
>
> We already declared that with the advent of photoshop.
I think that this is "video" as in "moving images". Photoshop isn't a fantastic tool for fabricating video (though, given enough time and expense, I suppose it'd be theoretically possible to do it frame by frame). In the past, the limitations of software have made it much harder (not impossible, as Hollywood creates imaginary worlds, but much harder, more expensive, and requiring more expertise) to falsify a video of someone than a single still image of them.
I don't think that this is the "end of truth". There was a world before photography and audio recordings. We had ways of dealing with that. Like, we'd have reputable organizations whose role it was to send someone to various events to attest to them, and place their reputation at stake. We can, if need be, return to that.
And it may very well be that we can create new forms of recording that are more difficult to falsify. A while back, to help deal with widespread printing technology making counterfeiting easier, we rolled out holographic images, for example.
I can imagine an Internet-connected camera (as on a cell phone) that sends a hash of the image to a trusted server and obtains a timestamped, cryptographic signature. That doesn't stop before-the-fact forgeries, but it does deal with things that are fabricated after the fact, stuff like this:
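The hash-and-timestamp flow described above could look roughly like this standard-library Python sketch. Everything here is illustrative: `server_timestamp` is a hypothetical trusted-server endpoint, and an HMAC key stands in for the server's real signing key (a real timestamping authority would use asymmetric signatures so anyone can verify with the public key):

```python
import hashlib
import hmac
import json
import time

SERVER_KEY = b"timestamp-authority-key"  # hypothetical server secret

def server_timestamp(image_hash: str) -> dict:
    """Trusted server: bind the submitted hash to the current time."""
    record = {"hash": image_hash, "time": int(time.time())}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SERVER_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def verify_timestamp(image: bytes, record: dict) -> bool:
    """Later, anyone can check the image against the signed receipt."""
    if hashlib.sha256(image).hexdigest() != record["hash"]:
        return False  # image bytes changed after timestamping
    payload = json.dumps({"hash": record["hash"], "time": record["time"]},
                         sort_keys=True).encode()
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

photo = b"raw sensor bytes from the camera"
receipt = server_timestamp(hashlib.sha256(photo).hexdigest())
assert verify_timestamp(photo, receipt)         # untouched since timestamp
assert not verify_timestamp(photo + b"!", receipt)  # edited afterwards
```

As noted above, this only proves the bytes existed unaltered at the timestamped moment; it can't tell you the camera captured reality in the first place.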
Is this marketing from AI companies?
Absolutely.
Is this going to kill Onlyfans?
Or is the market different because OnlyFans is about personal creators, and thus it's more meaningful than generic porn?
But when short AI videos become so good you can't tell if you're being catfished, will it feel the same?
I'm just holding out minor hope that people finally get with the program and realize the value of reputable news organizations and the plain old grapevine again. Leave the internet for the nerds.
Meh we're not there yet. But the day is coming.
"The Running Man" predicted the future!