this post was submitted on 03 Apr 2024
960 points (99.4% liked)

A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.

[–] guyrocket@kbin.social 116 points 7 months ago (10 children)

I think we need to STOP calling it "Artificial Intelligence". IMHO that is a VERY misleading name. I do not consider guided pattern recognition to be intelligence.

[–] TurtleJoe@lemmy.world 35 points 7 months ago

A term created in order to vacuum up VC funding for spurious use cases.

[–] UsernameIsTooLon@lemmy.world 27 points 7 months ago (2 children)

It's the new "4k". Just buzzwords to get clicks.

[–] lemann@lemmy.dbzer0.com 19 points 7 months ago (1 children)

My disappointment when I realised "4k" was only 2160p 😔

[–] boeman@lemmy.world 9 points 7 months ago

I can't disagree with this... After basing resolution names on the vertical pixel count (720p, 1080p), we're now switching to the horizontal count to describe the resolution.
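
For reference, here's how the marketing names line up with the actual pixel grids; a quick sketch in plain Python (consumer spec values; only cinema's DCI 4K is actually 4096 pixels wide):

```python
# Marketing name vs. actual pixel grid: "p" names count rows, "K" names count columns.
resolutions = {
    "720p (HD)":       (1280, 720),
    "1080p (Full HD)": (1920, 1080),
    "1440p (QHD)":     (2560, 1440),
    "4K (UHD)":        (3840, 2160),  # not even 4000 pixels wide
    "8K (UHD-2)":      (7680, 4320),
}
for name, (w, h) in resolutions.items():
    print(f"{name:17} {w} x {h}")
```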

[–] exocortex@discuss.tchncs.de 13 points 7 months ago* (last edited 7 months ago) (1 children)

On the contrary, it's a very old buzzword!

AI should be called machine learning. Much better. If I had my way, it would be called "fancy curve fitting" henceforth.
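
In the spirit of "fancy curve fitting", a minimal sketch of the fit-then-predict loop that supervised ML shares with ordinary least squares (NumPy only, toy data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)  # noisy "training data"

coeffs = np.polyfit(x, y, deg=5)   # "training": least-squares curve fit
y_hat = np.polyval(coeffs, 1.5)    # "inference": evaluate the fitted curve

print(f"model(1.5) = {y_hat:.3f}  vs  sin(1.5) = {np.sin(1.5):.3f}")
```

Swap the polynomial for a billion-parameter network and the objective stays the same: minimize the gap between the curve and the data.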

[–] Hackerman_uwu@lemmy.world 1 points 7 months ago (1 children)

Technically speaking, AI is any effort on the part of machines to mimic living things. Computer vision, for instance. This is distinct from ML and Deep Learning, which train on historical statistical data and then forecast or simulate.

[–] exocortex@discuss.tchncs.de 6 points 7 months ago (1 children)

"machines mimicking living things" does not mean exclusively AI. Many scientific fields are trying to mimic living things.

AI is a very hazy concept imho as it's difficult to even define when a system is intelligent - or when a human is.

[–] Hackerman_uwu@lemmy.world 3 points 7 months ago* (last edited 7 months ago) (2 children)

That’s not what I said.

What I typed there is not my opinion.

This is the technical, industry distinction between AI and things like ML and neural networks.

“Mimicking living things” is obviously not exclusive to AI. It is exclusive to AI as compared to ML, for instance.

[–] maynarkh@feddit.nl 1 points 7 months ago (1 children)

There is no technical, industry specification for what AI is. It's solely and completely a marketing term. The best thing I've heard is that you know it's ML if the file extension is cpp or py, and you know it's AI if the extension is pdf or ppt.

I don't see how "AI" mimics living things while neural networks don't, given that neural networks are literally based on neurons, the living things in your head.

[–] Hackerman_uwu@lemmy.world 1 points 7 months ago (1 children)

Incorrect. 15 years in the industry here. Good day.

[–] maynarkh@feddit.nl 1 points 7 months ago
[–] Hamartiogonic@sopuli.xyz 19 points 7 months ago* (last edited 7 months ago)

Optical Character Recognition used to be firmly in the realm of AI until it became so common that even the post office uses it. Nowadays, OCR is so common that instead of being proper AI, it's just another mundane application of a neural network. I guess eventually Large Language Models will fall outside the scope of AI too.

[–] rdri@lemmy.world 2 points 7 months ago (2 children)

How is guided pattern recognition different from imagination (and therefore intelligence), though?

[–] Natanael@slrpnk.net 6 points 7 months ago* (last edited 7 months ago) (2 children)

There are a lot of layers in brains that are missing in machine learning. These models don't form world models, don't have an understanding of facts, and have no means of ensuring consistency, to start with.

[–] rdri@lemmy.world 2 points 7 months ago* (last edited 7 months ago)

I mean, if we consider just the reconstruction process used in digital photos, it feels like current AI models are already very accurate and wouldn't be improved by much even if we made them closer to real "intelligence".

The point is that reconstruction itself can't reliably produce missing details, not that a "properly intelligent" mind would be any better at it than current AI.

[–] lightstream@lemmy.ml 2 points 7 months ago (1 children)

They absolutely do contain a model of the universe which their answers must conform to. When an LLM hallucinates, it is creating a new answer which fits its internal model.

[–] Natanael@slrpnk.net 1 points 7 months ago (1 children)

A web of statistical associations is not equivalent to a world model, especially because it's not deterministic and doesn't even try to prevent giving out conflicting answers. It models only the use of language.
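
As a toy illustration of "it models only the use of language" (a tiny bigram sampler, nothing like a production LLM): train it on contradictory sentences and it reproduces either contradiction with equal confidence, because all it stores is which word tends to follow which.

```python
import random
from collections import defaultdict

# Tiny corpus containing two mutually contradictory "facts".
corpus = "the earth is round . the earth is flat .".split()

# "Training": count which words follow which.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

# "Inference": random walk over the learned word associations.
random.seed(42)
word, out = "the", ["the"]
while word != ".":
    word = random.choice(transitions[word])
    out.append(word)

# Prints "the earth is flat ." or "the earth is round ." --
# the model has no machinery for caring which one is true.
print(" ".join(out))
```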

[–] lightstream@lemmy.ml 1 points 7 months ago (1 children)

> It models only the use of language

This phrase, so casually deployed, is doing some seriously heavy lifting. Language is by no means a trivial thing for a computer to meaningfully interpret, and the fact that LLMs do it so well is far more impressive than a casual observer might think.

If you look at earlier procedural attempts to interpret language programmatically, you will see that time and again the developers get stopped in their tracks, because in order to understand a sentence you need to understand the universe, or at least a particular corner of it. For example, given the sentence "The stolen painting was found by a tree", you need to know what a tree is (something that can mark a location but cannot find a painting) in order to resolve the ambiguous "by" correctly.

You can't really use language *unless* you have a model of the universe.

[–] Natanael@slrpnk.net 1 points 7 months ago* (last edited 7 months ago) (1 children)

But it doesn't model the actual universe; it models rumor mills.

Today's LLM is the versificator machine of Orwell's *1984*. It cares not for truth; it cares about distracting you.

[–] lightstream@lemmy.ml 1 points 7 months ago (1 children)

They are remarkably useful. Of course there are dangers relating to how they are used, but sticking your head in the sand and pretending they are useless accomplishes nothing.

[–] Natanael@slrpnk.net 1 points 7 months ago

They are more useful for quick templates than problem solving

[–] Jesus_666@lemmy.world 1 points 7 months ago (1 children)

Your comment is a good reason why these tools have no place in the courtroom: the thing you describe is imagination.

They're image generation tools that will generate a new, unrelated image that happens to look similar to the source image. They don't reconstruct anything, and they have no understanding of what the image contains. All they know is which color the pixels in the output would probably have, given the pixels in the input.

It's no different from giving a description of a scene to an author, asking them to come up with any event that might have happened in such a location and then trying to use the resulting short story to convict someone.

[–] rdri@lemmy.world 4 points 7 months ago (1 children)

> They don't reconstruct anything, and they have no understanding of what the image contains.

With enough training they will, in fact, have some understanding. But that still leaves us with the "enhance!" meme problem, a.k.a. the limited resolution of the original data. There is no way to discover what exactly was hidden between the visible pixels, only to approximate it. So yes, you are correct; I just described it a bit differently.
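
A quick NumPy sketch of that limit (toy 2x2 patches, not a real image pipeline): many different originals collapse to the identical low-res data, so any "enhancement" is a guess among them, not a recovery.

```python
import numpy as np

def downscale(img):
    """2x2 average pooling -- the information loss baked into low-res footage."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Three very different "originals"...
diagonal = np.array([[100.,   0.], [  0., 100.]])
anti     = np.array([[  0., 100.], [100.,   0.]])
flat     = np.array([[ 50.,  50.], [ 50.,  50.]])

# ...become the same single low-res pixel:
for patch in (diagonal, anti, flat):
    print(downscale(patch))  # [[50.]] every time
```

An AI "enhancer" picks one plausible original out of infinitely many; a court needs the actual one.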

[–] lightstream@lemmy.ml 1 points 7 months ago (1 children)

> they will, in fact, have some understanding

These models have spontaneously acquired a concept of things like perspective, scale and lighting, which you can argue is already an understanding of 3D space.

What they do not have (and IMO won't ever have) is consciousness. The fact we have created machines that have understanding of the universe without consciousness is very interesting to me. It's very illuminating on the subject of what consciousness is, by providing a new example of what it is not.

[–] rdri@lemmy.world 0 points 7 months ago

I think AI doesn't need consciousness to be able to say what is in a picture, or to guess what else specific details could contain.

[–] Pilferjinx@lemmy.world 2 points 7 months ago

What is the definition of intelligence? Does it require sentience? Can a data set be intelligently compiled into interesting results without human interaction? Yes, the term AI is stretched a bit thin, but I believe it has enough substance to qualify.

[–] postmateDumbass@lemmy.world 2 points 7 months ago

My Conscious Cognitive Correlator is the real shit.

[–] Gabu@lemmy.world 2 points 7 months ago (2 children)

> I do not consider guided pattern recognition to be intelligence.

That's a you problem. This debate happened 50 years ago, and we decided "intelligence" is the right word.

[–] kibiz0r@midwest.social 0 points 7 months ago (1 children)

Good thing there have been no significant changes to technology, psychology, philosophy, or society in the past 50 years.

[–] Gabu@lemmy.world 0 points 7 months ago

Fallacious reasoning.

[–] NaoPb@eviltoast.org -1 points 7 months ago (2 children)

You forget that we can change these definitions any time we see fit.

[–] ricdeh@lemmy.world 3 points 7 months ago (2 children)

You cannot, because you are not a scientist and judging from your statements, you do not know what you're talking about.

[–] NaoPb@eviltoast.org 1 points 7 months ago

It seems you are sadly stuck in your own thought patterns.

It does not take a scientist to change things. It takes a society to change definitions.

[–] guyrocket@kbin.social 1 points 7 months ago (1 children)
[–] PipedLinkBot@feddit.rocks 1 points 7 months ago

Here is an alternative Piped link(s):

https://www.piped.video/watch?v=mNPh2z3W_WY

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] Gabu@lemmy.world 2 points 7 months ago

We could... if it made any sense to do so, which it doesn't.

[–] CileTheSane@lemmy.ca 1 points 7 months ago (1 children)

> I do not consider guided pattern recognition to be intelligence.

Humanity has entered the chat

Seriously though, what name would you suggest?

[–] guyrocket@kbin.social 0 points 7 months ago (1 children)

Maybe guided pattern recognition (GPR).

Or Bob.

[–] CileTheSane@lemmy.ca 0 points 7 months ago (1 children)

Calling it Bob is not going to help discourage people from attributing intelligence. They'll start wishing "Bob" a happy birthday.

Do not personify the machine.

[–] guyrocket@kbin.social 1 points 7 months ago

Maybe Boob then.

[–] ricdeh@lemmy.world 1 points 7 months ago (2 children)

You, and humans in general, are also just sophisticated pattern recognition and matching machines. If neural networks are not intelligent, then you are not intelligent.

[–] buddascrayon@lemmy.world 3 points 7 months ago

This may be the dumbest statement I have yet seen on this platform. That's like equating a virus with a human by saying both things replicate themselves so they must be similar.

[–] Chakravanti@sh.itjust.works -1 points 7 months ago

You can say what you like, but we have absolutely zero true and full understanding of what human intelligence actually is or how it works.

"AI", or whatever you want to call it, is not at all similar.

[–] 01189998819991197253@infosec.pub 1 points 7 months ago

I agree. It's restricted intelligence (RI), at best, and even that can be argued against.