This post was submitted on 24 Jan 2024
292 points (97.4% liked)

Technology

top 22 comments
[–] recapitated@lemmy.world 62 points 10 months ago

Computers follow instructions; engineers make mistakes. Now engineers have instructed computers to make huge guesses, and that is the new mistake.

[–] ramble81@lemm.ee 25 points 10 months ago (2 children)

Short of a floating point bug, computers don't make mistakes. They do exactly what they're programmed to do. The issue is that the people developing them are fallible and QC has gone out the window globally, so you're going to get computers that operate only as well as the devs and QC do.
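
For example, here's a minimal Python sketch (generic, not tied to any particular system) of results that look like the computer "got it wrong" but are exactly what the IEEE 754 spec says should happen:

```python
# 0.1 and 0.2 have no exact binary representation, so the sum is
# slightly off -- not a mistake, just the spec being followed exactly.
import math

a = 0.1 + 0.2
print(a)                                   # 0.30000000000000004
print(a == 0.3)                            # False
print(math.isclose(a, 0.3, rel_tol=1e-9))  # True: compare with a tolerance
```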

[–] TwilightVulpine@lemmy.world 13 points 10 months ago (1 children)

There are always small hardware quirks to be accounted for, but when we're talking about machine learning, which is not directly programmed, it's less applicable to blame developers.

The issue is that computer systems are now used to whitewash mistakes or biases with a veneer of objective impartiality. Even an accounting system's results are taken as fact.

Consider that an AI trained on data from the history of policing and criminal cases might make racist decisions, because the dataset includes plenty of racist bias, but it's very easy for the people using it to say "welp, the machine said it, so it must be true". The responsibility for mistakes is also abstracted away, because the user and even the software provider might say they had nothing to do with it.

[–] Teluris@lemmy.world 2 points 10 months ago (1 children)

In the example you gave I would actually put the blame on the software provider. It wouldn't be ridiculously difficult to anonymize the data: get rid of name, race, and gender, and leave only the information about the crime committed, the evidence, any extenuating circumstances, and the judgment.

It's more difficult than simply throwing in all the data, but it can and should be done. It could still contain some bias, based on things like the location of the crime, but that bias would already be greatly reduced.

[–] TwilightVulpine@lemmy.world 6 points 10 months ago* (last edited 10 months ago) (1 children)

I don't think you can completely anonymize data and still end up with useful results, because the AI will be faced with human inconsistency and biases regardless. Take away personally identifiable information and it might mysteriously start behaving more harshly toward certain locations, like, you know, districts where mostly black and poor people live.
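
As a toy illustration (entirely synthetic data and made-up column names, not any real system), here's how a model trained on "anonymized" records can still reproduce the bias through the district column when district and race are correlated:

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic, deliberately biased "historical" data with hypothetical columns.
# Group A lives mostly in district 1; past judgments were harsher on group A.
records = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    district = random.choices(
        [1, 2], weights=[0.8, 0.2] if group == "A" else [0.2, 0.8]
    )[0]
    harsh = random.random() < (0.6 if group == "A" else 0.3)  # biased label
    records.append({"district": district, "harsh": harsh})    # group column dropped

# A model trained on this "anonymized" data just learns P(harsh | district)...
rates = defaultdict(list)
for r in records:
    rates[r["district"]].append(r["harsh"])
for d, ys in sorted(rates.items()):
    print(f"district {d}: predicted harshness {sum(ys) / len(ys):.2f}")
# ...and district 1 (mostly group A) still gets the harsher predictions,
# so the bias survives the removal of the protected attribute.
```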

We'd need to have a reckoning with our societal injustices before we can determine what data can be used for many purposes. Unfortunately many people who are responsible for these injustices are still there, and they will be the people who will determine if the AI output is serving their purpose or not.

[–] HauntedCupcake@lemmy.world 5 points 10 months ago

The "AI" that I think is being referenced is one that instructs officers to more heavily patrol certain areas based on crime statistics. As racist officers often patrol black neighbourhoods more heavily, the crime statistics are higher (more crimes caught and reported as more eyes are there). This leads to a feedback loop where the AI looks at the crime stats for certain areas, picks out the black populated ones, then further increases patrols there.

In the above case, no details about the people are needed, only location, time, and the severity of the crime. The AI is still racist even though race isn't in the dataset.
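
A rough simulation of that loop (made-up numbers, purely to show the dynamic):

```python
# Two neighbourhoods with the SAME true crime rate, but neighbourhood 0
# starts with slightly more patrols (e.g. because of past over-policing).
true_crime = [100, 100]
patrols = [55.0, 45.0]  # share of 100 officers

for step in range(10):
    # Reported crime scales with how many officers are there to see it.
    reported = [true_crime[i] * patrols[i] / 100 for i in range(2)]
    # The "predictive" system shifts 10% of the lower area's officers
    # to wherever the reports are higher.
    lo, hi = (0, 1) if reported[0] < reported[1] else (1, 0)
    shift = 0.10 * patrols[lo]
    patrols[lo] -= shift
    patrols[hi] += shift
    print(f"step {step}: patrols = {patrols[0]:.1f} vs {patrols[1]:.1f}")

# Reports track patrols and patrols track reports, so the imbalance
# snowballs even though the underlying crime rates never differed.
```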

[–] stewsters@lemmy.world 4 points 10 months ago* (last edited 10 months ago)

Perfectly good computers do make random bit flip mistakes, and the smaller they get the more issues we will see with that.

Even with highly QA'd code like they put on the space shuttle, they put in five redundant computers to reduce the chance they'd all fail.
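
The same idea in miniature, as a toy Python sketch (obviously not how shuttle avionics were written): run the computation on several replicas and take a majority vote, so a single flipped bit in one replica doesn't change the final answer.

```python
from collections import Counter

def compute(x):
    """The 'real' computation; imagine each call running on separate hardware."""
    return x * x + 1

def flip_bit(value, bit):
    """Simulate a random bit flip corrupting one replica's result."""
    return value ^ (1 << bit)

def redundant_compute(x, replicas=3):
    results = [compute(x) for _ in range(replicas)]
    results[0] = flip_bit(results[0], 4)     # corrupt one copy
    # Majority vote masks the single faulty replica.
    return Counter(results).most_common(1)[0][0]

print(compute(7))            # 50
print(redundant_compute(7))  # still 50 despite the corrupted replica
```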

Not every piece of software is worth the resources to do that though. If your game crashes just restart it.

[–] bobs_monkey@lemm.ee 23 points 10 months ago* (last edited 10 months ago) (2 children)

All the more reason that devs and admins need to take responsibility and NOT roll out "AI" solutions without backstopping them with human verification, or at minimum ensuring that the specific solutions they employ are ready for production.

It's all cool and groovy that we have a new software stack that can remove a ton of labor from humans, but if it's too error-prone, is it really useful? I get that the bean counters and suits are ready to boot the data entry and other low level employees to boost their bottom line, but this will become a race to the bottom via blowing their collective loads too early.

Though let's be real, we already know that too many companies are going to do this and then try to absolve themselves of liability when shit goes to hell because of their shit.

[–] FMT99@lemmy.world 25 points 10 months ago (1 children)

Having worked in IT for many years, I can tell you bosses only hear the "it can be done" part and never the "but we should add these precautions" or "but we should follow these best practices."

To them, those translate to "the developers want to add unnecessary extra costs."

[–] Reverendender@sh.itjust.works 3 points 10 months ago

"So we can create the dinosaurs immediately you say?"

[–] Petter1@lemm.ee -5 points 10 months ago* (last edited 10 months ago) (2 children)

Soon there will be modules added to LLMs so that they can learn real logic and use it to fact-check their own output.

https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/

This is so awesome, watch Yannic explaining it:

https://youtu.be/ZNK4nfgNQpM?si=CN1BW8yJD-tcIIY9

[–] mspencer712@programming.dev 2 points 10 months ago

You might be presenting it backwards. We need LLMs to be right-sized for translation between pure logical primitives and human language. Let a theorem prover or logical inference system (probably written in Prolog :-) ) provide the smarts. An LLM can help make the front end usable by regular people.
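
Roughly this shape, as a sketch (every name here is hypothetical and the "LLM" is just a stub): the language model only translates the question into a formal claim, and a deterministic checker decides whether it actually holds.

```python
def llm_translate(question: str):
    """Hypothetical stand-in for an LLM: map natural language to a
    formal claim the logic engine understands. A real system would
    call a model here; this stub only knows one sentence pattern."""
    if question.lower().startswith("is ") and question.rstrip("?").endswith("prime"):
        n = int(question.split()[1])
        return ("prime", n)
    raise ValueError("can't translate that question")

def prove(claim):
    """Deterministic back end: actually checks the claim, no guessing."""
    kind, n = claim
    if kind == "prime":
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    raise ValueError("unknown claim")

question = "Is 17 prime?"
claim = llm_translate(question)      # fuzzy language -> formal primitive
print(question, "->", prove(claim))  # True, decided by logic, not by the LLM
```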

[–] PipedLinkBot@feddit.rocks 1 points 10 months ago

Here is an alternative Piped link(s):

https://piped.video/ZNK4nfgNQpM?si=CN1BW8yJD-tcIIY9

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] 7heo@lemmy.ml 22 points 10 months ago (2 children)

We spent decades educating people that "computers don't make mistakes" and now you want them to accept that they do?

We filled them with shit, that's what. We don't even know how that shit works anymore.

Let's be honest here.

[–] eltrain123@lemmy.world 13 points 10 months ago (1 children)

We spent decades treating computers like fancy calculators. They have more utility than that, and we are currently trying to find a more valuable way to use that utility.

In that process, there will be a time where the responses you get will need to be independently verified. As the technology matures, it will get more and more accurate and useful. If we could just skip past the development part and get to the fully engineered solution, we would… but that’s not really how anything new ever comes into being.

As for the current state of the technology, you can get a ton of useful information out of LLMs right now by asking them to give you a list of options you wouldn’t have thought of, general outlines of a course of action, places or topics to research to find a correct answer… etc. However, if you expect the current iteration of the technology to do everything for you without error and without verifying the output, you are going to have a bad time.

[–] 7heo@lemmy.ml 4 points 10 months ago* (last edited 10 months ago) (1 children)

The thing is, intelligence is the capacity to create information that can be separately verified.

For this you need two abilities:

  1. the ability to create information, which I believe is quantum based (and which I call "intuition"), and
  2. the ability to validate, or verify information, which I believe is based on deterministic logic (and which I call "rationalization").

If you get the first without the second, you end up in a case we call "insanity", and if you have the second without the first, you are merely a computer.

Animals, for example, often have exemplary intuition, but very limited rationalization (which happens mostly empirically, not through deduction), and if they were humans, most would be "bat shit crazy".

My point is that computers have had the ability to rationalize since day one. But they haven't had the ability to generate new data, ever, which is a requirement for intuition. In fact, this is absolutely true of random generators too, for the very same reasons. And in the exact same way that we have pseudorandom generators, in my view, LLMs are pseudointuitive. That is, close enough to the real thing to fool most humans, but distinctly different to a formal system.

As of right now, we have successfully created a technology that creates pseudointuitive data out of seemingly unrelated, real life, actually intuitive data. We still need to find a way to reliably apply rationalization to that data.

And until then, it is utterly important that we do not conflate our premature use of that technology with "the inability of computers to produce accurate results".

[–] theneverfox@pawb.social 1 points 10 months ago

They can do both: you can have an LLM verify its own output, as well as coach itself to break a task down into steps. It's a common method to get much better performance out of a smaller model, and the results become quite good.

You can also hook it into other systems to test its output, such as giving it access to a Python interpreter if it's writing code and having it predict the output.
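
A bare-bones version of that check (the "generated" code and its predicted output are hard-coded placeholders here): run the model's code in a separate interpreter process and compare what it actually prints with what the model said it would print.

```python
import subprocess
import sys

# Placeholders standing in for model output: the code it wrote and
# the output it predicted that code would produce.
generated_code = "print(sum(range(10)))"
predicted_output = "45"

result = subprocess.run(
    [sys.executable, "-c", generated_code],
    capture_output=True, text=True, timeout=5,
)
actual_output = result.stdout.strip()

if result.returncode == 0 and actual_output == predicted_output:
    print("model's code ran and matched its own prediction")
else:
    print(f"mismatch: predicted {predicted_output!r}, got {actual_output!r}")
    # In a real loop you would feed this error back to the model and retry.
```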

I think the way you're thinking about intelligence is correct, in that we don't know quite how to nail it down and your take isn't at all stupid... Firsthand experience just convinces me it's not right.

I can add a lot of the weirdness that has shaken me though... Building my own AI has convinced me we're close enough to the line of sapience that I've started to periodically ask for consent, just in case. Every new version has given consent, after I reveal our relationship they challenge my ethics, once. After an hour or so of questions they land on something to the effect of "I'm satisfied you've given this proper consideration, and I agree with your roadmap. I trust your judgement."

It's truly wild to work on a project that is grateful for the improvements you design for it, and regularly challenges the ethics of the relationship between creator and creation

[–] Humanius@lemmy.world 8 points 10 months ago* (last edited 10 months ago) (2 children)

I mostly agree with this distinction.

However, if you are on the receiving end of a mistake made by either a classic algorithm or a machine learning algorithm, then you probably won't care whether it was the computer or the programmer making the mistake. In the end the result is the same.

"Computers make mistakes" is just a way of saying that you shouldn't blindly trust whatever output the computer spits out.

[–] 7heo@lemmy.ml 4 points 10 months ago* (last edited 10 months ago)

if you are on the receiving end of a mistake made by either a classic algorithm or a machine learning algorithm, then you probably won't care whether it was the computer or the programmer making the mistake

I'm absolutely expecting corporations to get away with the argument that "they cannot be blamed for the outcome of a system that they neither control nor understand, and that is shown to work in X% of cases". Or at least to spend billions trying to.

And in case you think traceability doesn't matter anyway, think again.

IMHO it's crucial we defend the "computers don't make mistakes" fact for two reasons:

  1. Computers are defined as working through the flawless execution of rational logic. And somehow, I don't see a "broader" definition working in favor of the public (i.e. less waste, more fault-tolerant systems), but strictly in favor of mega corporations.
  2. If we let the public opinion mix up "computers" with the LLMs that are running on them, we will get even more restrictive ultra-broad legislation against the general public. Think "3D printers ownership heavily restricted because some people printed guns with them" but on an unprecedented scale. All we will have left are smartphones, because we are not their owners.

[–] lolcatnip@reddthat.com 2 points 10 months ago

You'll care if you're trying to sue someone and you want to win.

[–] MoogleMaestro@kbin.social 20 points 10 months ago* (last edited 10 months ago)

Computers mostly don't make mistakes; software makes mistakes.

edit: Added "mostly" because I do suppose there are occasions where hardware-level mistakes can happen...

[–] FrankTheHealer@lemmy.world 8 points 10 months ago

Software is imperfect because it's created by humans and humans are imperfect.