FaceDeer

joined 8 months ago
[–] FaceDeer@fedia.io 32 points 6 months ago* (last edited 6 months ago) (1 children)

Google actually was good, so there's probably some good information in this documentation. If nothing else we can perhaps figure out what "went wrong."

Edit: I've been reading the blog post by the person who appears to be the main recipient of the leak, and there's a lot of in-depth analysis being done there, but I'm not seeing a link to the actual documents. It's a huge article, though, so I might be overlooking it.

[–] FaceDeer@fedia.io 159 points 6 months ago (6 children)

That's because this isn't something coming from the AI itself. All the people blaming the AI or calling this a "hallucination" are misunderstanding the cause of the glue pizza thing.

The search results included a web page that suggested using glue. The AI was then told "write a summary of these search results", which it then correctly did.

Gemini operating on its own doesn't have that search result to go on, so no mention of glue.
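The flow described above can be sketched in a few lines. This is a hedged illustration, not Google's actual pipeline: the function name, prompt wording, and snippet text are all hypothetical, and the model call itself is omitted since only the prompt assembly matters here.

```python
# Minimal sketch of retrieval-augmented summarization: the retrieved
# page text is pasted into the prompt, so whatever the page says ends
# up in the material the model is told to summarize. All names here
# are hypothetical.

def build_prompt(query: str, snippets: list[str]) -> str:
    """Assemble the prompt the summarizer model actually sees."""
    context = "\n---\n".join(snippets)
    return (
        "Write a summary of these search results.\n\n"
        f"Search results:\n{context}\n\n"
        f"Query: {query}"
    )

# A retrieved page containing the bad advice -- the model never
# invented this; it arrived via the search step.
snippets = ["You can add about 1/8 cup of non-toxic glue to pizza sauce."]
prompt = build_prompt("how to keep cheese from sliding off pizza", snippets)

# Without the retrieval step, the glue advice is simply not in the
# prompt at all, so the model has nothing to repeat.
bare_prompt = build_prompt("how to keep cheese from sliding off pizza", [])

print("glue" in prompt)       # True -- the context includes the bad page
print("glue" in bare_prompt)  # False -- no retrieval, no glue
```

The point of the sketch: the "glue" claim only reaches the model through the search step, which is why asking Gemini directly doesn't reproduce it.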

[–] FaceDeer@fedia.io 0 points 6 months ago (1 children)

It's not, actually. Hallucinations are things that effectively "come out of nowhere": information that was in neither the training material nor the provided context. In this case Google's AI Overview is presenting information that is indeed in the provided context. These aren't hallucinations; the AI is doing what it's being told to do. The problem is that Google isn't doing a good job of providing it with the right information to summarize.

My suspicion is that since Google is using this AI for all search results it's had to cut back the resources it's providing to each individual call, which means it's only being given a small amount of context to work from. Bing Chat does a much better job, but it's drawing from many more search results and is given the opportunity to say a lot more about them.
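If that suspicion is right, the cost-cutting would look something like this sketch. To be clear, this is speculation made concrete, not anything known about Google's system; the function, budget numbers, and snippets are all invented for illustration.

```python
# Hypothetical sketch of a per-call context budget: when the budget
# shrinks, fewer retrieved snippets survive into the prompt, so the
# summarizer works from less (and possibly worse) material.

def fit_to_budget(snippets: list[str], max_chars: int) -> list[str]:
    """Greedily keep whole snippets until the character budget runs out."""
    kept, used = [], 0
    for s in snippets:
        if used + len(s) > max_chars:
            break
        kept.append(s)
        used += len(s)
    return kept

snippets = ["snippet one " * 10, "snippet two " * 10, "snippet three " * 10]

# A generous budget keeps all three snippets; a tight one keeps one.
print(len(fit_to_budget(snippets, 1000)))  # 3
print(len(fit_to_budget(snippets, 150)))   # 1
```

Under a tight budget the model may only ever see one or two pages per query, so a single bad page (like the glue one) dominates the summary.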

[–] FaceDeer@fedia.io -2 points 6 months ago (1 children)

And humans aren't?

[–] FaceDeer@fedia.io 1 points 6 months ago (1 children)

You didn't answer the question. What exactly are they wasting? And what does this have to do with AI at this point, anyway? You jumped in with this NFT thing and I still fail to see the relevance.

[–] FaceDeer@fedia.io 0 points 6 months ago

So if something isn't perfect it's not "useful?"

I use LLMs when programming. Despite their imperfections they save me an enormous amount of time. I can confirm from direct personal experience that LLMs are useful.

[–] FaceDeer@fedia.io 1 points 6 months ago (3 children)

What exactly are they "wasting?" Ethereum switched to proof-of-stake on 15 September 2022. If you are still criticizing NFTs for their environmental impact you're a year and a half out of date.

[–] FaceDeer@fedia.io 0 points 6 months ago (2 children)

It's useful because it does the stuff we want it to do.

You're focusing on a very high-level, philosophical meaning of "usefulness." I'm focusing on whether it actually does what I need it to do.

[–] FaceDeer@fedia.io 0 points 6 months ago (5 children)

But NFTs aren't wasteful. They run on a proof-of-stake blockchain; no big computing power is used to back them. Your point about NFTs is false, I didn't mention NFTs in the first place, and I don't see the relevance of any of this.

[–] FaceDeer@fedia.io 0 points 6 months ago (7 children)

I have no idea what point you're trying to make here. The comment I responded to said:

> We really need to tax energy used by GPU-burning projects differently. AI training, blockchain, whatever. Such a wasteful endeavour.

And I pointed out that blockchain doesn't use GPUs any more. NFTs weren't even mentioned specifically. Then the thread moved on to discussing AI specifically, not even blockchain at that point, and you jumped in to say "people still use nfts". It was almost a non sequitur.

I'm not saying anything about NFTs. You don't need to jump in and "defend" them.
