sunbeam60

joined 1 year ago
[–] sunbeam60@lemmy.one 2 points 4 months ago (3 children)

I’m not saying humans and LLMs generate language the same way.

I’m not saying humans and LLMs don’t generate language the same way.

I’m saying I don’t know and I haven’t seen clear data/evidence/papers/science to lean one way or the other.

A lot of people seem to believe humans and LLMs don’t generate language the same way. I’m challenging that belief in the absence of data/evidence/papers/science.

[–] sunbeam60@lemmy.one 2 points 4 months ago (2 children)

I mean I have an opinion too; what I’m seeking is evidence.

[–] sunbeam60@lemmy.one 2 points 4 months ago* (last edited 4 months ago) (1 children)

In this case I think it’s the DMA they’re butthurt about.

[–] sunbeam60@lemmy.one 2 points 4 months ago (1 children)

May I introduce you to our friend and saviour, the GDPR?

[–] sunbeam60@lemmy.one -2 points 4 months ago (7 children)

I think I know enough about these concepts to know that there isn’t any conclusive proof, observed in output or in system state, establishing a consensus that human speech is generated differently from how LLMs generate their output. If you have links to any papers that claim otherwise, I’ll be happy to read them.

[–] sunbeam60@lemmy.one 1 points 4 months ago* (last edited 4 months ago) (4 children)

Well, brains are a network of neurons (we can verify this empirically) trained on … eyes, ears, sense of touch, taste, smell and balance (rewarded by endorphins the old brain releases on certain hardcoded stimuli). LLMs are a network of artificial neurons trained on text and images (rewarded for producing text that mimics the training text and for passing some reasoning tests).

It’s not a given that this results in the same way of dealing with language, since a human trains on a much wider set of input data, but it’s not a given that it doesn’t, either.
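
To make “rewarded for producing text that mimics the training text” concrete: the standard training signal is cross-entropy loss on the next token. A toy sketch with made-up numbers, not any real model’s code:

```python
import math

# Toy illustration only: training "rewards" the network by lowering the
# cross-entropy between its predicted next-token distribution and the
# token that actually followed in the training text.

def cross_entropy(predicted: dict[str, float], actual_next: str) -> float:
    """Low when the model gave high probability to the token that
    really came next, i.e. when it mimics the input text."""
    return -math.log(predicted.get(actual_next, 1e-9))

# Hypothetical model output for the context "the cat sat on the"
predicted = {"mat": 0.6, "floor": 0.25, "dog": 0.1, "moon": 0.05}

print(cross_entropy(predicted, "mat"))   # ~0.51: good mimicry, small update
print(cross_entropy(predicted, "moon"))  # ~3.0: big loss drives a big update
```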

[–] sunbeam60@lemmy.one -3 points 4 months ago (15 children)

The article makes the valid argument that LLMs simply predict the next token based on their training and the query.

But is that actually true of the latest models from OpenAI, Anthropic, etc.?

And even if it is true, what solid proof do we have that humans aren’t doing the same? I’ve met endless people who could waffle for hours without seeming to do any reasoning.
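
And to be clear about what “predict the next token” means mechanically, it’s something like this toy loop (`toy_model` is a made-up stand-in, not how OpenAI or Anthropic actually implement anything):

```python
import random

# Toy autoregressive loop, purely illustrative: the observable interface
# of an LLM is "text so far in, probability distribution over the next
# token out", applied repeatedly.

def toy_model(context: list[str]) -> dict[str, float]:
    """Hypothetical stand-in for a trained network."""
    if context and context[-1] == "the":
        return {"cat": 0.5, "dog": 0.3, "browser": 0.2}
    return {"the": 0.6, "sat": 0.3, ".": 0.1}

def generate(prompt: list[str], steps: int) -> list[str]:
    tokens = list(prompt)
    for _ in range(steps):
        dist = toy_model(tokens)
        # Sample the next token in proportion to the model's probabilities
        tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return tokens

print(" ".join(generate(["the"], 5)))  # e.g. "the cat sat the dog"
```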

[–] sunbeam60@lemmy.one 4 points 4 months ago (1 children)

Yes, I get your point. Some software can run on a volunteer basis, without a large income stream.

You’re using that fact to argue that Firefox can too. And if you care to look at my profile, you’ll see I’ve argued time and time again that Mozilla is an overblown organisation and should be slimmed down to a couple of hundred people, working solely on the browser.

I doubt, however, that you can build a modern, up-to-date browser on a volunteer basis.

How many full-time people do you think it takes?

[–] sunbeam60@lemmy.one 6 points 4 months ago

Each to their own; may I suggest our friend and saviour Google Chrome? 🤣

[–] sunbeam60@lemmy.one -2 points 4 months ago (3 children)

What do you want? A Mozilla with no income? Because then there is no libre browser.

[–] sunbeam60@lemmy.one 3 points 4 months ago (1 children)

Which should tell you a lot: if Mozilla weren’t confident in their anonymisation efforts, their lawyers would not have allowed checked-by-default.
