this post was submitted on 30 May 2024
48 points (87.5% liked)

top 14 comments
[–] sentient_loom@sh.itjust.works 33 points 5 months ago (1 children)

Why would anyone think it would work? It's a stupid idea.

[–] palordrolap@kbin.social 27 points 5 months ago* (last edited 5 months ago) (1 children)

It's not about whether it works, it's about proving that they're keeping pace with the trends in technology that they're not directly driving.

They're afraid that if they don't give that impression, their stockholders will pull their money and give it to someone who does; and since every stockholder fears that every other stockholder will do exactly that, that's what will happen.

AI funding is so far up its own backside I'm not sure they'll hear the cry of the small child pointing out that this Emperor has no clothes.

[–] sentient_loom@sh.itjust.works 9 points 5 months ago (3 children)

That sounds right. But it makes no practical sense. Everybody relies on Google search. That's a huge part of what makes them powerful. They shouldn't screw with it, and that's not a moral statement about what they owe to users, it's just about self-interest. Ruining your own base product is idiotic.

[–] Codilingus@sh.itjust.works 6 points 5 months ago

Tell that to Google before they replaced the long-standing head of their search engine with the head of advertising.

[–] palordrolap@kbin.social 5 points 5 months ago

I never said that the way they've gone about it is the best way to have gone about it.

Frankly, I'm not even sure what that would be, only that this ain't it.

[–] hellothere@sh.itjust.works 3 points 5 months ago

You're presuming self interest is inherently rational.

It isn't.

[–] paddirn@lemmy.world 14 points 5 months ago (1 children)

It seems like such a weird thing to marry up with internet searching: a method where the algorithms can and will “hallucinate” and just make shit up, versus finding the very specific information a person is searching for. Why ever trust these LLMs with facts? These things should only ever have been marketed for creative writing and art, not shit like writing legal briefs and school papers.

[–] sudoreboot@slrpnk.net 1 points 5 months ago* (last edited 5 months ago) (1 children)

Maybe I can share some insight into why one might want to.

I hate searching the internet. It's a massive mental drain for me to try to figure out how to put my problem into the same words that others with the same problem will have used before me - mental processing power wasted on purely linguistic overhead instead of on understanding and learning about the problem.

I hate the (dis-/mis-)informational assault I open myself to by skimming through the results, because the majority of them will be so laughably irrelevant, if not actively malicious, that I become a slightly worse person every time I expose myself.

And I hate visiting websites. Not only because of all the reasons modern websites suck, but because even if they are a delight in UX, they are distracting me from what I really want, which is (most of the time) information, not to experience someone's idiosyncratic, artistic ideas for how to organise and present data, or how to keep me 'engaged'.

So yes, I prefer a stupid language model that will lie about facts half the time and bastardise half my prompts if it means I can glean a bit of what the internet has to say about something. I can more easily spot plausible bullshit and discard it, or quickly check its veracity, than I can magic my vague problem into a suitable query only to sift through more ignorance, hostility, and implausible bullshit conjured by internet randos.

And yes, LLMs really do suck even in their domain of speciality (language - because language serves a purpose, and they do not understand it), and they are all kinds of harmful, dangerous, and misused. Given how genuinely ignorant people are of what an LLM really is and what it is really doing, I think it's irresponsible to embed one the way google has.

I think it's probably best to.. uhh.. sort of gatekeep this tech so that it's mostly utilised by people who understand the risks. But capitalism is incompatible with niches and bespoke products, so every piece of tech has to be made with absolutely everyone as a target audience.

[–] CrayonMaster@midwest.social 2 points 5 months ago (1 children)

What are you searching for? I can't remember the last time I googled something and most of the results were malicious.

Also, I don't think it'll be easier to spot bullshit coming from an LLM than from a website.

[–] sudoreboot@slrpnk.net 1 points 5 months ago* (last edited 5 months ago) (1 children)

I don't know about google because I don't use it unless I really can't find what I'm looking for, but I did a quick ddg search with a very unambiguous and specific question, and from sampling only the top 9 results I see 2 that are at all relevant (the 2nd and 5th).

In order to answer my question, I need to first mentally filter out 7/9 of the results visible on my screen, then open both of the relevant ones in new tabs and read through lengthy discussions in order to find out if anyone has shared a proper solution.

Running the same search with perplexity's default model (not pro, which is a lot better at breaking down queries and including relevant references), I don't have to verify all the details, because even if some of it is wrong, it is immediately more useful information to me.

I want to re-emphasise though that using LLMs for this can be incredibly frustrating too, because they will often insist assertively on falsehoods and generally act really dumb, so I'm not saying there aren't pros and cons. Sometimes a simple keyword-based search and manual curation of the results is preferred to the nonsense produced by a stupid language model.

Edit: I didn't answer your question about maliciousness, but I can give some examples of what I consider malicious, and you may agree that it happens frequently enough:

  • AI generated articles
  • irrelevant SEO results
  • ads/sponsored results/commercial products or services
  • blog spam by people who speak out of ignorance
  • flame bait
  • deliberate disinformation
  • low-quality journalism
  • websites designed to exploit people/optimised for purposes other than to contribute to a healthy internet

etc.

[–] Arkive@lemmy.zip 2 points 5 months ago* (last edited 5 months ago)

Thanks for your post. You've actually somewhat brought me around on AI search with your perplexity example. My previous AI search experiences have been with general LLMs like ChatGPT (opaque source data means I have to verify with a traditional web search anyway) and Google's new AI search feature (I'm uncomfortable with Google discouraging traffic to the broader web). Since perplexity actually shows and links its sources, I'm going to give it a try for the next few days alongside my usual DDG searches.

I would be interested if you have an example of a search with mostly malicious results, since your stated experience seems disproportionate to my own. While I do concur that some results/websites are antagonistic towards my goal of useful information, I'm quite surprised to see someone say that they hate visiting websites in general. (Perhaps I'm missing hyperbole?)

A bit of a digression, but it amused me to see you say you struggle to word your queries for search engines, because I've typically had more problems wording my queries for LLMs. I wonder if this could be attributed to communication preferences, or just to my having used search engines for almost 2 decades.

[–] Rolando@lemmy.world 11 points 5 months ago (1 children)

This article was a lot of words with very little information.

[–] KidnappedByKitties@lemm.ee 5 points 5 months ago

Written with ChatGPT, no doubt.

[–] sik0fewl@lemmy.ca 8 points 5 months ago

Because AI doesn't exist.