this post was submitted on 10 Nov 2024
143 points (92.3% liked)

Technology

top 11 comments
[–] wieson@feddit.org 27 points 1 week ago* (last edited 1 week ago) (1 children)

Sorry, big derailment of subject here:

The author described 40cm of rain, which was unusual to me, since we normally describe the rain in millimetres.

Then they translated it to American as 16 inches or 70 gallons per square yard.

The neat thing about 400 mm is that it's also 400 litres per square metre.

And that's a crazy amount of rain; my heart goes out to Valencia.
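For readers who want to see why that identity holds, here is a quick back-of-the-envelope check (ordinary arithmetic, not something from the article itself):

```python
# Rain depth in mm is numerically the same as litres per square metre,
# because 1 mm of rain over 1 m² is 0.001 m³, which is exactly 1 litre.
rain_depth_mm = 400          # the depth reported for Valencia
area_m2 = 1.0                # one square metre of ground

volume_m3 = (rain_depth_mm / 1000) * area_m2   # depth in metres times area
volume_litres = volume_m3 * 1000               # 1 m³ = 1000 L

print(volume_litres)  # 400.0 litres falling on every square metre
```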

[–] AnUnusualRelic@lemmy.world 3 points 1 week ago

The author described 40cm of rain, which was unusual to me, since we normally describe the rain in millimetres

That's the point of sensible units. It's exactly the same thing.

[–] todd_bonzalez@lemm.ee 24 points 1 week ago

We can conclude: that photo isn’t AI-generated. You can’t get an AI system to generate photos of an existing location; it’s just not possible given the current state of the art.

That's a poor conclusion. A similar image could be created using masks and AI inpainting. You could take a photo on a rainy day and add in the disaster components using GenAI.

That's definitely not the case in this scenario, but we shouldn't rely on things like verifying real-world locations to assume that GenAI wasn't involved in making a photo.
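As an illustration of the mask-plus-inpainting workflow described above, here is a minimal, hypothetical sketch using the open-source Hugging Face diffusers library; the file names, prompt, and model choice are assumptions for the example, not anything taken from the article or the photo in question:

```python
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# A real photo of the location, plus a hand-drawn mask marking the regions
# to replace (white = repaint, black = keep). Both file names are made up.
photo = Image.open("valencia_street.jpg").convert("RGB").resize((512, 512))
mask = Image.open("flood_mask.png").convert("L").resize((512, 512))

# Any inpainting-capable model would do; this is one publicly available option.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"
)

# Only the masked area is regenerated; the real background survives intact,
# which is why recognising the real-world location by itself does not rule
# out GenAI involvement.
result = pipe(
    prompt="cars piled up and covered in mud after a flash flood",
    image=photo,
    mask_image=mask,
).images[0]
result.save("fake_flood.png")
```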

[–] cypherpunks@lemmy.ml 21 points 1 week ago* (last edited 1 week ago)

big oof.

We can conclude: that photo isn’t AI-generated. You can’t get an AI system to generate photos of an existing location; it’s just not possible given the current state of the art.

the author of this substack is woefully misinformed about the state of technology 🤦

it has, in fact, been possible for several years already for anyone to quickly generate convincing images (not to mention videos) of fictional scenes in real locations with very little effort.

The photograph—which appeared on the Associated Press feed, I think—was simply taken from a higher vantage point.

Wow, it keeps getting worse. They're going full CSI on this photo, drawing a circle around a building on Google Street View where they think the photographer might have been, but they won't even bother to confirm their vague memory of having seen AP publish it? wtf?

Fwiw, I also thought the image looked a little neural-network-y (something about the slightly less-straight-than-they-used-to-be lines of some of the vehicles), so I spent a few seconds doing a reverse image search and found this Snopes page, which convinced me that that particular pileup of cars really did happen, as it was also photographed by multiple other people.

[–] FaceDeer@fedia.io 15 points 1 week ago

The "how will we know if it's real" question has the same answer as it always has. Check if the source is reputable and find multiple reputable sources to see if they agree.

"Is there a photo of the thing" has never been a particularly great way of judging whether something is accurately described in the news. This is just people finding out something they should have already known.

If the concern is over the verifiability of the photos themselves, there are technical solutions that can be used for that problem.

And it’s gonna get worse, because it’s a very lucrative industry AND it’s highly effective for propaganda.
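One reading of "technical solutions" here is signed provenance along the lines of C2PA: the publisher signs the exact image bytes, and anyone can check that signature against a published key. That reading is an assumption about what the commenter means, not something stated in the thread. A minimal sketch of the verification step with the Python cryptography library, with hypothetical key and file names:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def verify_photo(image_path: str, sig_path: str, pubkey_bytes: bytes) -> bool:
    """Return True if the detached signature matches the exact image bytes."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        # Raises InvalidSignature if the bytes were altered after signing.
        public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False
```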

[–] a4ng3l@lemmy.world 8 points 1 week ago (1 children)

Maybe we could stop giving a platform to the crazies that foster those stories. Both of them: the idiots who see AI artefacts everywhere, but also the fearmongers like the blog here. It reminds me of "be afraid of RPGs" in the '80s and then "video games are going to turn teens into murderers" in the '90s… every new tech has a curve for its maturity and its cultural and societal fit. We just so happen to be at the shitty point of that curve for AI. But eventually the fad will go away, most crazies will move on to something else, and the attention whores will find a new niche.

[–] aesthelete@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

every new tech has a curve for its maturity and its cultural and societal fit.

I'd believe this "nothing to see here" narrative if recent "advances" such as social media didn't have measurable negative impacts. Things can get worse, and technology can assist that.

The voices coming out as skeptical, and the watchdogs telling you early on that these newly introduced things may present a problem, are ultimately part of the apparatus that gets you that "cultural and societal fit". That doesn't happen automatically, and it's called "the bleeding edge" for a reason.

Ultimately, I'm also not so sure about AI being a fad at this point. It sure looks like enough capital is invested in this stuff to make it a thing even if nobody wants it.

[–] FiskFisk33@startrek.website 5 points 1 week ago (2 children)

The photo seems off somehow; I wonder if it was taken with a phone that applies some kind of AI sharpening algorithm.

[–] kate@lemmy.uhhoh.com 2 points 1 week ago

I've seen another trend lately where any edited photo gets labelled as AI, even when traditional editing methods are more likely.

[–] LiPoly@lemmynsfw.com 1 points 1 week ago

I think it’s just because all the stuff has so much sludge from the flood on it that it looks washed up, like most AI content does. There are almost no straight edges, just like with AI, because everything has been roughed up by the water.