NounsAndWords

joined 1 year ago
[–] NounsAndWords@lemmy.world 15 points 2 months ago

This is what Ilya saw...

[–] NounsAndWords@lemmy.world 11 points 2 months ago

As long as they don't fuck it up in a similar fashion to seemingly every other thing they've tried for the past couple of decades.

[–] NounsAndWords@lemmy.world 1 points 2 months ago

Assuming it takes its answer from search results, and the search results are all affiliate marketing sites that just want you to click on a link and buy something, this makes perfect sense.

[–] NounsAndWords@lemmy.world 0 points 3 months ago (1 children)

Is language conscious?

Are atoms?

I don't know if LLMs of a large enough size can achieve (or sufficiently emulate) consciousness, but I do know that we barely know anything about consciousness, let alone its limits.

[–] NounsAndWords@lemmy.world 13 points 3 months ago

The thing is, LLMs can be used for something like this, but just like if you asked a stranger to write a letter for your loved one and only gave them the vaguest amount of information about them or yourself, you're going to end up with a really generic letter.

...but to give it the amount of info and detail you would need to provide, you would probably end up writing 3/4 of the letter yourself, which defeats the purpose of being able to completely ignore and write off those you care about!

[–] NounsAndWords@lemmy.world 15 points 4 months ago (1 children)

They are also implicitly aggressive animals

You're not wrong, and I prefer cats as well...but cats are violent, homicidal monsters, and if they were big enough they would absolutely murder the fuck out of you (as soon as they were done toying with you).

[–] NounsAndWords@lemmy.world 12 points 4 months ago (1 children)

You don’t need to self-flagellate about a mistake years ago, for the rest of your life.

Cool, I'll just tell my brain about that.

[–] NounsAndWords@lemmy.world 7 points 4 months ago

"Is that 100 snakes in your pants, or are you just happy to see me?"

[–] NounsAndWords@lemmy.world 38 points 5 months ago (3 children)

I keep forgetting that that's an option

[–] NounsAndWords@lemmy.world 3 points 5 months ago

No clue? Somewhere between a few years (assuming some unexpected breakthrough) and many decades? The consensus among experts (of which I am not one) seems to be somewhere in the 2030s/40s for AGI. I'm guessing accuracy will improve more on a topic-by-topic basis; LLMs might never even get there, or only for things they've been heavily trained on. If predictive text doesn't do it, then I would be betting on whatever Yann LeCun is working on.

[–] NounsAndWords@lemmy.world 17 points 5 months ago (1 children)

Perhaps there is some line between assuming infinite growth and declaring that this technology that is not quite good enough right now will therefore never be good enough?

Blindly assuming no further technological advancement seems just as foolish to me as assuming perpetual exponential growth. Ironically, our ability to extrapolate from limited information is a huge part of human intelligence that AI hasn't solved yet.

[–] NounsAndWords@lemmy.world 35 points 6 months ago (24 children)

GPT-2 came out a little more than 5 years ago; it answered 0% of questions accurately and couldn't string a sentence together.

GPT-3 came out a little less than 4 years ago and was kind of a neat party trick, but I'm pretty sure it answered ~0% of programming questions correctly.

GPT-4 came out a little less than 2 years ago and can answer 48% of programming questions accurately.

I'm not talking about morality, or creativity, or good/bad for humanity, but if you don't see a trajectory here, I don't know what to tell you.
