[–] naevaTheRat@lemmy.dbzer0.com 3 points 9 months ago

Fair enough. I used to be a scientist (a very bad one that never amounted to anything), and my perspective has been that the major barriers to progress are:

  • We've already picked all the low-hanging fruit
  • Science education isn't available to many people, so perspectives are quite limited as a result
  • Power structures are exploitative and ossified, driving away many people
  • Industry has too much influence; there isn't much appetite to fund blue-sky projects without obvious short-term money-making applications
  • Patents slow progress
  • Publish-or-perish incentivises excessive publication volume, fraud, and splitting discoveries across multiple papers, which increases the burden on researchers trying to stay current
  • Nobody wants to pay scientists, so bright people end up elsewhere
[–] naevaTheRat@lemmy.dbzer0.com 3 points 9 months ago (6 children)

Is it? This seems like a big citation needed moment.

Have LLMs been used to make big strides? I know some trials are going on aiding doctors with diagnosis and such, but computer vision algorithms have been doing that for ages (shit, contrast dyes, PCR, and blood analysis do that too, really), and they come with their own risks. We haven't seen widespread unknown illnesses being discovered or anything. Is the tech actually doing anything useful at the moment, or is it all still hype?

We've had algorithms help find new drugs and plot out synthetic routes for novel compounds; we can run DFT simulations to help decide whether a material is worth trying to make. These things have been helpful but not revolutionary, and I'm not sure why LLMs would be. I actually worry they'll hamper scientific progress by aiding fraud (unreproducible results are already a fucking massive problem), or by lying extremely convincingly, or by omitting something critical when you try to use one to help with a literature review.

Why do you think LLMs will revolutionise science?

[–] naevaTheRat@lemmy.dbzer0.com 2 points 9 months ago* (last edited 9 months ago) (4 children)

I think it's really important to keep in mind the separation between doing a task and producing something which merely looks like the output of that task when talking about these things. Their output is tremendously convincing regardless of its accuracy, and since writing text is something we only ever see human minds do, it's easy to ascribe an intent to the model's output that we have no reason to believe is there.

Amazingly, it turns out that merely producing something which looks like the output of a task often accidentally accomplishes the task along the way. I have no idea why merely predicting the next plausible word can mean that the model emits something similar to what I would write down if I tried to summarise an article! That's fascinating! But because it isn't actually setting out to do that, there's no guarantee it did, and if I don't check, the output will be indistinguishable to me, because looking right is what these models are built to do above all else.
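Just to make concrete what I mean by "predicting the next plausible word", here's a toy sketch of that loop. The words and probabilities are completely made up and a real LLM works over learned token distributions, but the shape is the same: pick a likely continuation, append, repeat. Nothing in it knows what a summary is or whether anything is true.

```python
import random

# Made-up continuation probabilities, standing in for a learned model.
next_word_probs = {
    "the": {"cat": 0.5, "study": 0.3, "results": 0.2},
    "study": {"found": 0.7, "shows": 0.3},
    "found": {"that": 0.9, "no": 0.1},
}

def generate(start, steps=3):
    text = [start]
    for _ in range(steps):
        options = next_word_probs.get(text[-1])
        if not options:
            break
        words, weights = zip(*options.items())
        # Pick a plausible continuation; nothing here "intends" anything,
        # it just follows the probabilities.
        text.append(random.choices(words, weights=weights)[0])
    return " ".join(text)

print(generate("the"))
```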

So I think that's why we need to keep them in closed loops of person -> model -> person, and why explaining, or even intuiting, whether a particular application is potentially dangerous is hard if we don't maintain a clear separation between the different processes driving human versus LLM text output.

[–] naevaTheRat@lemmy.dbzer0.com 6 points 9 months ago

Sure, but it's not like networks get anything from piracy, so they have to content themselves with some revenue rather than infinity. Especially for old content, which just isn't worth much individually. There are also a looooot of massively overpaid and wasteful people involved at the major networks.

I know it's not just Netflix, but you know, poetic licence or something. Also, I don't really give a shit about being fair to multibillion-dollar corporations that do basically nothing pro-social :p

[–] naevaTheRat@lemmy.dbzer0.com 13 points 9 months ago (8 children)

Umm, penicillin? Anaesthetic? The Haber process? The transistor? The microscope? Steel?

I get it, the models are new and a bit exciting, but GPT won't make it so you can survive surgery, or make rocks take over the jobs of computers.

[–] naevaTheRat@lemmy.dbzer0.com 52 points 9 months ago (5 children)

Netflix, buddy, friend, matey. If I have to pop open Google to find where I can even watch something, hunt for the best pricing, work out how to circumvent ads or whatever, and figure out how to get Netflix running on my devices without installing invasive crap or derooting my phone, and on top of all that it's actually quite expensive...

...then I'll just do one search instead and not worry about fighting ads, automatic iffy quality settings, weird compression algorithms, device compatibility, etc.

I was happy to hang up the peg leg when I could just VPN to the USA and watch everything for the price of a lunch a month. I like simplicity, and I enjoyed your more arty shows. It was you who changed the deal, Netflix, not I. You decided that being insanely profitable wasn't enough and that you needed infinite growth.

[–] naevaTheRat@lemmy.dbzer0.com 2 points 9 months ago (6 children)

No, they can produce something that looks like a summary very convincingly! Big difference.

They have no model of what's important, or of truth. Most of the time they probably do OK, but unless you go and read the article you'll never know whether they left out something critical, hallucinated details, or inverted the truth or falsity of something.

That's the problem: they're not an intern, they don't have a human mind. They recognise patterns in articles and patterns in summaries, and they non-deterministically nudge the patterns in the article towards the patterns found in summaries of articles. Do you see the problem? They produce something that looks very much like an article summary, but they do not summarise; there is no intent, no guarantee of truth, in fact no concern for truth at all except what incidentally falls out of the statistical probability wells.

[–] naevaTheRat@lemmy.dbzer0.com 3 points 9 months ago (8 children)

That's my point. OP doesn't know the maths, has probably never implemented any sort of ML, and is smugly confident that people pointing out the flaws in a system that generates one token at a time are just parroting some line.

These tools are excellent at manipulating text where the user controls both input and output (factoring in the biases they have; I wouldn't recommend using one for internal communications at a multinational corporation, for example, as they'll clobber non-European-derived cultural norms).

"Help me summarise my report", "draft an abstract for my paper", "remove the jargon from my email", "rewrite my email as a numbered list of questions", "analyse my tone here", "write five similar versions of this action scene I drafted to help me refine it": all excellent.

"Teach me something I don't know" (e.g. summarise an article, answer a question, etc.)? Disaster!

[–] naevaTheRat@lemmy.dbzer0.com 0 points 9 months ago (14 children)

So, super-informed OP, tell me how they work. Technically, not in CEO press-release speak. Explain the theory.

[–] naevaTheRat@lemmy.dbzer0.com 2 points 9 months ago* (last edited 9 months ago)

Nah dude, every time I've checked my routing, the RTT is basically up against the speed of light. It's 200 ms, give or take, to LA. As the crow flies that's roughly a 24,000 km round trip, which would be 80 ms RTT at light speed. I don't know the exact route, but we can probably add, say, 30% to the distance because those cables aren't dead straight and there's a bit of waggle around the actual network infrastructure.

Like, yeah, maybe half of it is processing, but that fraction only gets smaller with distance, and LA is about the closest English-speaking hub.
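For what it's worth, here's the back-of-the-envelope I'm doing (the 24,000 km round trip and the 30% slack are my guesses from above, not measured routes):

```python
# Rough sanity check of the numbers above.
C_KM_PER_MS = 300_000 / 1000   # vacuum speed of light ~300,000 km/s -> 300 km/ms
                               # (real fibre is ~2/3 c, so the true floor is even higher)

round_trip_km = 24_000         # assumed great-circle round trip to LA
path_slack = 1.3               # assume ~30% extra distance for non-straight cables
measured_rtt_ms = 200          # the ping actually observed

ideal_rtt_ms = round_trip_km / C_KM_PER_MS      # ~80 ms at vacuum c
realistic_path_ms = ideal_rtt_ms * path_slack   # ~104 ms with wiggly cables

print(f"vacuum-c RTT: {ideal_rtt_ms:.0f} ms")
print(f"with 30% path slack: {realistic_path_ms:.0f} ms")
print(f"share of measured RTT explained by distance alone: "
      f"{realistic_path_ms / measured_rtt_ms:.0%}")
```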

Edit: just ran a test now: https://www.meter.net/ping-test/202404-92320-2f35.html — against a theoretical ~90 ms, so yeah, even if that distance estimate is accurate, it's about 60% light-speed limits.

[–] naevaTheRat@lemmy.dbzer0.com 5 points 9 months ago

You might be interested in this: https://veloren.net/

Imagine breaking so many hearts with one hang-glider trailer that you spawn an open-source MMO.

Castle Story was the other big horror show.

[–] naevaTheRat@lemmy.dbzer0.com 5 points 10 months ago (1 children)

OTOH, depressive realism is a thing.

For example, about a trillion probably-sentient creatures are killed every year, mostly for pleasure, and if you can contextualise numbers at all, that rends your heart.
