naevaTheRat

joined 2 years ago
[–] naevaTheRat@lemmy.dbzer0.com 2 points 2 years ago

9/10 installs malware

[–] naevaTheRat@lemmy.dbzer0.com 1 points 2 years ago

I thought they were saying they didn't mean LLMs will aid science, not that LLMs weren't the topic. It's ambiguous on reread.

AI isn't well defined, which is what I was highlighting with the mentions of computer vision etc. That falls under AI, and it isn't really meaningfully different from other diagnostic tools. If people mean AGI then they should say that, but it hasn't even been established that AGI is likely possible, let alone that we're close.

There are already many other intelligences on the planet, and not many are very useful outside of niches. Even if we make a general intelligence, it's entirely possible we won't be able to surpass fish level, let alone human. And even then it's not clear that intelligence is the primary barrier in anything, which is what I was trying to point out in my post about science being held back.

There are so many ifs that AGI is a "Venus is cloudy, therefore dinosaurs" kind of discussion: you can project anything you like onto it, but it's all just fantasy.

[–] naevaTheRat@lemmy.dbzer0.com 1 points 2 years ago (2 children)

This seems like splitting hairs. AGI doesn't exist, so that can't be what they mean. "AI" applies to everything from pathing algorithms for library robots to computer vision, and none of those seem to apply here.

The context of this post is LLMs and their applications

[–] naevaTheRat@lemmy.dbzer0.com 2 points 2 years ago (1 children)

They uh, still do the same thing fundamentally

Altman isn't gonna let you blow him dude

[–] naevaTheRat@lemmy.dbzer0.com 3 points 2 years ago

Fair enough, I used to be a scientist (a very bad one that never amounted to anything) and my perspective has been that the major barriers to progress are:

  • We've already picked all the low-hanging fruit
  • Science education isn't available to many people, so perspectives are quite limited
  • Power structures are exploitative and ossified, driving away many people
  • Industry has too much influence; there isn't much appetite to fund blue-sky projects without obvious short-term money-earning applications
  • Patents slow progress
  • Publish-or-perish incentivises excessive publication volume, fraud, and splitting discoveries across multiple papers, which increases the burden on researchers trying to stay current
  • Nobody wants to pay scientists, so bright people end up elsewhere

[–] naevaTheRat@lemmy.dbzer0.com 3 points 2 years ago (6 children)

Is it? This seems like a big citation needed moment.

Have LLMs been used to make big strides? I know some trials are going on where they aid doctors in diagnosis and stuff, but computer vision algorithms have been doing that for ages (shit, contrast dyes, PCR, and blood analysis also do that, really), and they come with their own risks. We haven't seen widespread unknown illnesses being discovered or anything. Is the tech actually doing anything useful at the moment, or is it all still hype?

We've had algorithms help find new drugs and plot out synthetic routes for novel compounds; we can run DFT simulations to help decide whether we should try to make a material. These things have been helpful but not revolutionary, so I'm not sure why LLMs would be. I actually worry they'll hamper scientific progress by aiding fraud (unreproducible results are already a fucking massive problem), or by extremely convincingly lying about or omitting something when used to help with a literature review.

Why do you think LLMs will revolutionise science?

[–] naevaTheRat@lemmy.dbzer0.com 2 points 2 years ago* (last edited 2 years ago) (4 children)

I think it's really important to keep in mind the separation between doing a task and producing something which looks like the output of a task when talking about these things. The reason is that their output is tremendously convincing regardless of its accuracy, and since writing text is something we only ever see human minds do, it's very easy to ascribe intent behind the model's emissions that we have no reason to believe is there.

Amazingly, it turns out that merely producing something which looks like the output of a task often accidentally accomplishes the task along the way. I have no idea why merely predicting the next plausible word can mean the model emits something similar to what I would write if I tried to summarise an article! That's fascinating! But because it isn't actually setting out to do that, there's no guarantee it did, and if I don't check, the output will be indistinguishable to me, because that's what these models are built to do above all else.
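To make "predicting the next plausible word" concrete, here's a toy sketch. The corpus and the bigram "model" are completely made up for illustration; a real LLM has billions of learned parameters, but the generation loop is the same idea: pick a plausible next token, append it, repeat.

```python
import random

# Toy bigram "model": for each word, the words that followed it in a tiny corpus.
corpus = "the study found the effect was small and the effect was not significant".split()

bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(start, n_tokens, rng):
    out = [start]
    for _ in range(n_tokens):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break  # nothing ever followed this word in the corpus
        # Sample a plausible continuation. Nothing here checks truth or intent;
        # plausibility is the only criterion.
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the", 5, random.Random(0)))
```

Every emitted sentence is locally plausible by construction, but whether it says anything true is pure accident, which is the whole point.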

So I think that's why we need to keep them in closed loops of person -> model -> person, and explaining why, or intuiting whether a particular application is potentially dangerous, is hard if we don't maintain a clear separation between the different processes driving human vs LLM text output.

[–] naevaTheRat@lemmy.dbzer0.com 6 points 2 years ago

Sure, but it's not like networks get anything from piracy, so they have to content themselves with some revenue rather than infinity. Especially for old content; it's just not worth much individually. There are also a looooot of massively overpaid and wasteful people involved in the major networks.

I know it's not just Netflix, but you know, poetic licence or something. Also, I don't really give a shit about being fair to multibillion dollar corporations that do basically nothing pro-social :p

[–] naevaTheRat@lemmy.dbzer0.com 13 points 2 years ago (8 children)

Umm penicillin? anaesthetic? the Haber process? the transistor? the microscope? steel?

I get it, the models are new and a bit exciting, but GPT won't make it so you can survive surgery, or make rocks take the jobs of computers.

[–] naevaTheRat@lemmy.dbzer0.com 52 points 2 years ago (5 children)

Netflix, buddy, friend, matey. If I have to pop open Google to find where I can watch something, hunt down the best pricing, and figure out how to circumvent ads or get your app running on my devices without installing invasive crap or rooting my phone etc., and it's actually quite expensive on top of all that...

I'll just do one search and not worry about whether I'll have to fight ads, or automatic iffy quality settings, weird compression algorithms, device compatibility etc.

I was happy to hang up the peg leg when I could just VPN to the USA and watch everything for the price of a lunch a month. I like simplicity, and I enjoyed your more arty shows. It was you who changed the deal, Netflix, not I. You decided being insanely profitable wasn't enough and you needed infinite growth.

[–] naevaTheRat@lemmy.dbzer0.com 2 points 2 years ago (6 children)

No, they can summarise articles very convincingly! Big difference.

They have no model of what's important, or of truth. Most of the time they probably do OK, but unless you go read the article you'll never know if they left out something critical, hallucinated details, or inverted the truth or falsity of something.

That's the problem: they're not an intern, they don't have a human mind. They recognise patterns in articles and patterns in summaries, and they non-deterministically nudge the patterns of the article towards the patterns of summaries. Do you see the problem? They produce something that looks very much like an article summary but do not summarise. There is no intent, no guarantee of truth, in fact no concern for truth at all except what incidentally falls out of the statistical probability wells.
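To see why "non-deterministic" matters, here's a toy sketch with completely invented numbers. In a real model the probabilities come from text statistics, not from the facts of the matter, so a factually wrong continuation can still carry real probability mass and will sometimes be sampled:

```python
import random

# Made-up next-token distribution for the blank in:
#   "the trial showed the drug was ___"
next_token_probs = {"effective": 0.7, "ineffective": 0.2, "harmful": 0.1}

def sample_token(probs, rng):
    """Standard categorical sampling: walk the cumulative distribution."""
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding at the top end

rng = random.Random(42)
samples = [sample_token(next_token_probs, rng) for _ in range(1000)]
# Completions that invert or worsen the claim:
wrong = sum(t != "effective" for t in samples)
```

With these invented weights, roughly 30% of sampled completions flatly contradict the most likely one, and every single one of them reads as fluent English.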

[–] naevaTheRat@lemmy.dbzer0.com 3 points 2 years ago (8 children)

That's my point. OP doesn't know the maths, has probably never implemented any sort of ML, and is smugly confident that people pointing out the flaws in a system that generates one token at a time are just parroting some line.

These tools are excellent at manipulating text where the user controls both input and output (factoring in the biases they have; I wouldn't recommend using one for internal communications in a multinational corporation, for example, as they'll clobber non-European-derived culture).

Help me summarise my report, draft an abstract for my paper, remove jargon from my email, rewrite my email in the form of a numbered question list, analyse my tone here, write 5 similar versions of this action scene I drafted to help me refine it. All excellent.

Teach me something I don't know (e.g. summarise an article, answer a question)? Disaster!
