Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech-related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts; OK to post as comments.
- Only approved bots from the list below are allowed; this includes using AI responses and summaries. To ask whether your bot can be added, please contact a mod.
- Check for duplicates before posting; duplicates may be removed.
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
Another realization might be that the humans whose output ChatGPT was trained on were probably already 40% wrong about everything. But let's not think about that either. AI Bad!
This is a salient point that's well worth discussing. We should not be training large language models on any supposedly factual information that people put out. It's super easy to call out a bad research study and have it retracted, but you can't just explain to an AI that the study was wrong; you have to completely retrain it every time. Exacerbating this issue is the way people tend to view large language models as somehow objective describers of reality, because they're synthetic and emotionless. In truth, an AI holds exactly the same biases as the people who put together the data it was trained on.
I'll bite. Let's think:
- there are three humans who are 98% right about what they say, and where they know they might be wrong, they indicate it
- now there is an llm (fuck capitalization, that's how much I hate the way they're shoved everywhere) trained on their output
- now the llm is asked about the topic and computes the answer string

By definition that answer string can contain all the probably-wrong things without proper indicators ("might", "under such and such circumstances", etc.).
If you want to say a 40%-wrong llm implies 40%-wrong sources, prove me wrong.
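A toy way to see the compounding point (a sketch with my own numbers, since the thread gives none, and assuming errors are independent per claim): sources that are individually 98% right can still yield mostly-wrong answers once enough unhedged claims are chained together.

```python
# Toy sketch (assumed numbers, not from the thread): every individual
# claim is 98% reliable and errors are independent. An answer that
# chains n claims with no hedging is only 0.98**n likely to be fully
# correct.
reliability = 0.98

for n_claims in (1, 5, 10, 25, 50):
    p_correct = reliability ** n_claims
    print(f"{n_claims:>2} chained claims: {p_correct:.0%} fully correct")
```

Under these assumptions, 25 chained claims come out fully correct only about 60% of the time, i.e. roughly 40% wrong somewhere, from sources that were individually 98% right.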
It's rather on you to prove that a hypothetical edge case you dreamed up is more likely than what happens in a normal bell curve. Given the size of typical LLM data, this seems futile, but if that's how you want to spend your time, hey, knock yourself out.
What the fuck is vibe coding... Whatever it is, I hate it already.
Andrej Karpathy (one of the founders of OpenAI, who left OpenAI, worked for Tesla from 2017 to 2022, worked for OpenAI a bit more, and is now working on his startup "Eureka Labs - we are building a new kind of school that is AI native") made a tweet defining the term:
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
People ignore the "It's not too bad for throwaway weekend projects" part and try to use this style of coding to create "production-grade" code... Let's just say it's not going well.
source (xcancel link)
Using AI to hack together code without truly understanding what you're doing.
people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationships with AI
Preying on the vulnerable is a feature, not a bug.
That was clear from GPT-3, day 1.
I read a Reddit post about a woman who used GPT-3 to effectively replace her husband, who had passed on not long before that. She used it as a way to grieve, I suppose? She ended up noticing that she was getting too attached to it, and had to leave him behind a second time...
Ugh, that hit me hard. Poor lady. I hope it helped in some way.
I kind of see it more as a sign of utter desperation on the human's part. They lack connection with others to such a degree that anything similar can serve as a replacement. Kind of reminiscent of Harlow's experiment with baby monkeys. The videos from that study are interesting but make me feel pretty bad about what we do to nature. Anywho, there you have it.
And the number of connections and friends the average person has has been in free fall for decades...
I dunno. I connected with more people on reddit and Twitter than irl tbh.
A different kind of connection, but real and valid nonetheless.
I'm thinking places like r/stopdrinking, r/petioles, r/bipolar; shit's been therapy for me tbh.
At least you're not using chatgpt to figure out the best way to talk to people, like my brother in finance tech does now.
These same people would be dating a body pillow or trying to marry a video game character.
The issue here isn’t AI, it’s losers using it to replace human contact that they can’t get themselves.
You labeling all lonely people losers is part of the problem.
If you are dating a body pillow, I think that's a pretty good sign that you have taken a wrong turn in life.
TIL becoming dependent on a tool you frequently use is "something bizarre" - not the ordinary, unsurprising result you would expect with common sense.
If you actually read the article, I'm pretty sure the bizarre thing is really these people using a 'tool', forming a toxic parasocial relationship with it, becoming addicted, and beginning to see it as a 'friend'.
That is peak clickbait, bravo.
But how? The thing is utterly dumb. How do you even have a conversation without quitting in frustration from its obviously robotic answers?
But then there's people who have romantic and sexual relationships with inanimate objects, so I guess nothing new.
If you're also dumb, chatgpt seems like a super genius.
chatbots and ai are just dumber 1990s search engines.
I remember 90s search engines. AltaVista was pretty ok at searching the small web that existed, but I'm pretty sure I can get better answers from the LLMs tied to Kagi search.
AltaVista also got blown out of the water by Google (back when it was just a search engine), and that was in the 00s, not the 90s. 25 to 35 years ago is a long time; search is so, so much better these days (or worse if you use a "search" engine like Google now).
Don't be the product.
It's too bad that some people seem to not comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI... we used to call OCR AI; now we know better.
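For what it's worth, here's a minimal sketch of what "which next word fits best based on the words before it" means, using a toy bigram counter (real LLMs condition on much longer contexts with a neural network, but the predict-the-next-word objective is the same idea; the corpus here is made up):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word most often follows each
# word in a (made-up) training corpus, then predict the top follower.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```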
LLM is a subset of ML, which is a subset of AI.
I mean, I stopped in the middle of the grocery store and used it to choose the best frozen chicken tender brand to put in my air fryer. …I am ok though. Yeah.
At the store it calculated which peanuts were cheaper: 3 pounds of shelled peanuts on sale, or 1 pound of no-shell peanuts at full price.
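That's just a unit-price comparison; here's a sketch with made-up prices and weights, since the comment doesn't give any:

```python
# Hypothetical numbers only; the comment doesn't say what the prices were.
options = {
    "3 lb shelled peanuts (on sale)": (4.50, 3.0),      # (price in $, pounds)
    "1 lb no-shell peanuts (full price)": (2.00, 1.0),
}

for name, (price, pounds) in options.items():
    print(f"{name}: ${price / pounds:.2f}/lb")
# The lower $/lb wins by weight (ignoring that shells add inedible weight).
```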
Do you guys remember when the internet was the new thing and everybody was like, "Look at those dumb fucks just putting everything online," and now it's, "Look at this weird motherfucker that doesn't post anything online"?
I remember when the internet was a place
Remember when people used to say and believe "Don't believe everything you read on the internet"?
I miss those days.