Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below are allowed; this includes bots using AI responses and summaries. To ask whether your bot can be added, please contact a mod.
- Check for duplicates before posting; duplicates may be removed.
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
That is a fascinating take on the general reaction to LLMs. Thanks for posting this!
I just tested it on Bing too, for shits and giggles
you can't butter the whole world's bread meaning
The phrase "you can't butter the whole world's bread" means that one cannot have everything
Didn't work for me. A lot of these 'gotcha' AI moments seem to only work for a small percentage of users before being noticed and fixed. That's not counting the more frequent examples that are outright lies but get upvoted anyway because 'AI bad'.
It looks like using incognito mode and adding "meaning AI" really gets it to work just about every time for me
However, "the lost dog can't lay shingles meaning" didn't work with or without "AI", and "the lost dog can't lay tiles meaning" only worked when adding "AI" to the end
So I guess it's a gamble on how much gibberish you can get away with
I found that trying "some-nonsense-phrase meaning" won't always trigger the idiom interpretation, but you can often change it to something more saying-like.
I also found that trying in incognito mode had better results, so perhaps it's also affected by your settings. Maybe it's regional as well, or based on your search results. And, as AI is non-deterministic, you can't expect it to always work.
Now I'll never know what people mean when they say "those cupcakes won't fill a sauna"!
Tried it. Afraid this didn't happen, and the AI was very clear the phrase is unknown. Maybe I did it wrong or something?
Honestly, I’m kind of impressed it’s able to analyze seemingly random phrases like that. It means it’s thinking and not just regurgitating facts. Someday such a phrase could actually exist, and AI wouldn’t need to wait for it to become mainstream.
It's not thinking. It's just spicy autocomplete; having ingested most of the web, it "knows" that what follows a question about the meaning of a phrase is usually the definition and etymology of that phrase; there aren't many examples online of anyone asking for the definition of a phrase and being told "that doesn't exist, it's not a real thing." So it does some frequency analysis (actually it's probably more correct to say that it is frequency analysis) and decides what the most likely words to come after your question are, based on everything it's been trained on.
But it doesn't actually know or think anything. It just keeps giving you the next expected word until it meets its parameters.
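The "spicy autocomplete" point above can be caricatured in a few lines. This is a deliberately tiny, hypothetical sketch (a bigram frequency table, nothing like a real neural LLM) just to show the failure mode: a model trained only on texts where "meaning" is always followed by a definition will produce a definition-shaped answer for any phrase, because "that doesn't exist" never appears in its counts. The corpus and the `complete` function are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data": in this invented corpus, asking for a meaning is
# always followed by a definition, never by "that phrase doesn't exist".
corpus = (
    "the phrase meaning is that one should be careful . "
    "the phrase meaning is that one cannot have everything . "
    "the phrase meaning is that one should act early . "
).split()

# Frequency analysis: for each word, count which words follow it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(prompt, max_words=10):
    """Greedily append the most frequent next word, one word at a time."""
    words = prompt.split()
    for _ in range(max_words):
        options = bigrams.get(words[-1])
        if not options:
            break  # never seen this word; a real model always has options
        words.append(options.most_common(1)[0][0])
        if words[-1] == ".":
            break
    return " ".join(words)

# Whatever gibberish precedes "meaning", the continuation after "meaning"
# is a confident definition, because that's what the counts say comes next.
print(complete("meaning"))
```

Running it, the model happily emits "meaning is that one should ..." with no notion of whether the phrase it is "defining" exists; it is pattern frequency all the way down, which is the commenter's point.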
It didn't work for me. Why not?
Worked for me, but I couldn’t include any names or swearing.
One arm hair in the hand is better than two in the bush
I for one will not be putting any gibberish into Google's AI for any reason. I don't find it fun. I find it annoying and have deliberately taken steps to avoid it completely. I don't understand these articles that want to throw shade at LLMs by suggesting their readers go use the LLMs, which only helps the companies that own them.
Like. Yes. We have established that LLMs will give misinformation and create slop because all their data sets are tainted. Do we need to keep furthering this nonsense?