this post was submitted on 20 Jun 2024
471 points (89.7% liked)


How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can't manage this consistently with CRUD apps and people think that this number isn't laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?

....

I don't believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.

top 50 comments
[–] IHeartBadCode@kbin.run 126 points 5 months ago (13 children)

I had my fun with Copilot before I decided that it was making me stupider - it's impressive, but not actually suitable for anything more than churning out boilerplate.

This. Many of these tools are good at incredibly basic boilerplate that's just a hint outside of, say, a wizard. But to hear some of these AI grifters talk, this stuff is going to render programmers obsolete.

There's a reality to these tools. That reality is they're helpful at times, but they are hardly transformative at the levels the grifters go on about.

[–] 0x0@programming.dev 45 points 5 months ago (2 children)

I use them like Wikipedia: it's a good starting point and that's it (and this comparison is a disservice to Wikipedia).

[–] SandbagTiara2816@lemmy.dbzer0.com 11 points 5 months ago (3 children)

Yep! It’s a good way to get over the fear of a blank page, but I don’t trust it for more than outlines or summaries

[–] sugar_in_your_tea@sh.itjust.works 44 points 5 months ago (9 children)

I interviewed a candidate for a senior role, and they asked if they could use AI tools. I told them to use whatever they normally would, I only care that they get a working answer and that they can explain the code to me.

The problem was fairly basic, something like randomly generate two points and find the distance between them, and we had given them the details (e.g. the distance is a straight line). They used AI, which went well until it generated the Manhattan distance instead of the Euclidean distance (Pythagorean theorem). They didn't correct it, so we pointed it out and gave them the equation (totally fine, most people forget it under pressure). Anyway, they refactored the code, used AI again, and it made the same mistake; they didn't catch it, and we ended up pointing it out again.
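For anyone who wants to see how far apart those two answers are, here's a rough sketch of the exercise (hypothetical code, not what the candidate wrote; the function names are mine):

```python
import math
import random

def random_point(low=-10.0, high=10.0):
    """Pick a random 2D point as an (x, y) tuple."""
    return (random.uniform(low, high), random.uniform(low, high))

def euclidean_distance(a, b):
    """Straight-line distance (Pythagorean theorem) -- what the exercise asked for."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def manhattan_distance(a, b):
    """|dx| + |dy| -- the grid-walking distance the AI kept producing instead."""
    return abs(b[0] - a[0]) + abs(b[1] - a[1])

if __name__ == "__main__":
    p, q = random_point(), random_point()
    print(f"points: {p}, {q}")
    print(f"euclidean: {euclidean_distance(p, q):.3f}")
    print(f"manhattan: {manhattan_distance(p, q):.3f}")
```

Same pair of points, two different numbers, and the tool happily hands you the wrong one twice.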

Anyway, at the end of the challenge, we asked them how confident they felt about the code and what they'd need to do to feel more confident (nudge toward unit testing). They said their code was 100% correct and they'd be ready to ship it.

They didn't pass the interview.

And that's generally my opinion about AI: it's probably making you stupider.

[–] deweydecibel@lemmy.world 29 points 5 months ago* (last edited 5 months ago) (3 children)

I've seen people defend using AI this way by comparing it to using a calculator in a math class, i.e. if the technology knows it, I don't need to.

And I feel like, for the kind of people whose grasp of technology, knowledge, and education are so juvenile that they would believe such a thing, AI isn't making them dumber. They were already dumb. What the AI does is make code they don't understand more accessible, which is to say, it's just enabling dumb people to be more dangerous while instilling them with an unearned confidence that only compounds the danger.

[–] AdamBomb@lemmy.sdf.org 10 points 5 months ago

Spot on description

Yup. And I'm unwilling to be the QC in a coding assembly line; I want competent peers who catch things before I do.

But my point isn't that AI actively makes individuals dumber, it's that it's making people in general dumber. I believe that to be true about a lot of technology. In the 80s, people were familiar with command-line interfaces, and jumping to some coding wasn't a huge leap, but today people can't figure out how to do a thing unless there's an app for it. AI is just the next step along that path; soon, even traditionally competent industries will be little more than QC, and nobody will remember how the sausage is made.

If they can demonstrate that they know how the sausage is made and how to inspect a sausage of packages, I'm fine with it. But if they struggle to even open the sausage package, we're going to have problems.

[–] Zikeji@programming.dev 30 points 5 months ago (2 children)

Copilot / LLM code completion feels like having a somewhat intelligent helper who can think faster than I can; however, they have no understanding of how to actually code, they're just good at mimicry.

So it's helpful for saving time typing some stuff, and sometimes the absolutely weird suggestions make me think of other scenarios I should consider, but it's not going to do the job itself.

[–] deweydecibel@lemmy.world 16 points 5 months ago* (last edited 5 months ago)

So it's helpful for saving time typing some stuff

Legitimately, this is the only use I found for it. If I need something extremely simple and I'm feeling too lazy to type it all out, it'll do the bulk of it, and then I just go through and edit out all the little mistakes.

And what gets me is that anytime I read all of the AI wank about how people are using these things, it kind of just feels like they're leaving out the part where they have to edit the output too.

At the end of the day, we've had this technology for a while; it's just been in the form of predictive suggestions on a keyboard app or code editor. You still had to steer it in the right direction. Now it's just smart enough to make it from start to finish without going off a cliff, but you still have to go back and fix it, the same way you had to steer it before.

[–] Spesknight@lemmy.world 77 points 5 months ago (2 children)

I don't fear Artificial Intelligence, I fear Administrative Idiocy. The managers are the problem.

[–] bionicjoey@lemmy.ca 42 points 5 months ago (1 children)

I know AI can't replace me. But my boss's boss's boss doesn't know that.

[–] sugar_in_your_tea@sh.itjust.works 22 points 5 months ago (7 children)

Fortunately, it's my job as your boss to convince my boss and my boss's boss that AI can't replace you.

We had a candidate spectacularly fail an interview when they used AI and didn't catch the incredibly obvious errors it made. I keep a few examples of that handy to defend my peeps in case my boss or boss's boss decide AI is the way to go.

I hope your actual boss would do that for you.

[–] CosmoNova@lemmy.world 9 points 5 months ago

The worst part is some of them aren't even idiots, just selfish and reckless. They don't care if the company still exists in a year, as long as they can make millions driving it into the ground.

[–] deweydecibel@lemmy.world 59 points 5 months ago* (last edited 5 months ago) (1 children)

Another friend of mine was reviewing software intended for emergency services, and the salespeople were not expecting someone handling purchasing in emergency services to be a hardcore programmer. It was this false sense of security that led them to accidentally reveal that the service was ultimately just some dude in India. Listen, I would just be some random dude in India if I swapped places with some of my cousins, so I'm going to choose to take that personally and point out that using the word AI as some roundabout way to sell the labor of people that look like me to foreign governments is fucked up, you're an unethical monster, and that if you continue to try { thisBullshit(); } you are going to catch (theseHands)

This aspect of it isn't getting talked about enough. These companies are presenting these things as fully-formed AI, while completely neglecting the people behind the scenes constantly cleaning it up so it doesn't devolve into chaos. All of the shortcomings and failures of this technology are being masked by the fact that there are actual people working round the clock pruning and curating it.

You know, humans, with actual human intelligence, without which these miraculous "artificial intelligence" tools would not work as they seem to.

If the "AI' needs a human support team to keep it "intelligent", it's less AI and more a really fancy kind of puppet.

[–] 0x0@programming.dev 17 points 5 months ago

I don't think the author was referring to people pruning AI data, but rather to mechanical turk instances like what recently happened with Amazon.

[–] KingThrillgore@lemmy.ml 59 points 5 months ago* (last edited 5 months ago) (11 children)

Hacker News was silencing this article outright. That's typically a sign that it's factual enough to strike a nerve with the potential CxO libertarian [slur removed] crowd.

If this is satire, I don't see it. Because I've seen enough of the GenAI crowd openly undermine society/the environment/the culture and be brazen about it; violence is a perfectly normal response.

[–] xavier666@lemm.ee 12 points 5 months ago (2 children)

What happened to HN? I have now heard of HN silencing certain posts multiple times. Is this enshittification?

[–] KingThrillgore@lemmy.ml 34 points 5 months ago (2 children)

HN is run by a VC firm, Y Combinator. One of its largest supporters is OpenAI CEO Sam Altman. Do the math.

[–] Alphane_Moon@lemmy.world 11 points 5 months ago

Fascinating, I am not surprised at all.

Even beyond AI, some of the implicit messaging has got to strike a nerve with that kind of crowd.

I don't think this is satire either, more like a playful rant (as opposed to a formal critique).

[–] rimu@piefed.social 58 points 5 months ago

I will learn enough judo to throw you into the sun

best line

[–] EnderMB@lemmy.world 57 points 5 months ago* (last edited 5 months ago) (2 children)

I work in AI as a software engineer. Many of my peers have PhDs and have sunk a lot of research into their field. I know probably more than the average techie, but in the grand scheme of things I know fuck all. Hell, if you were to ask the scientists I work with if they "know AI", they'll probably just say "yeah, a little".

Working in AI has exposed me to so much bullshit, whether it's job offers for obvious scams that'll never work, or for "visionaries" that work for consultancies that know as little about AI as the next person, but market themselves as AI experts. One guy had the fucking cheek to send me a message on LinkedIn to say "I see you work in AI, I'm hosting a webinar, maybe you'll learn something".

Don't get me wrong, there's a lot of cool stuff out there, and some companies are doing some legitimately cool stuff, but the number of actual use-cases where these tools are more than just productivity enhancers is low at best. I fully support this guy's efforts to piledrive people, and will gladly lend him my sword.

[–] alphacyberranger@sh.itjust.works 18 points 5 months ago

Same here. I have seen and worked in entire projects based on AI bullshit.

[–] Rumbelows@lemmy.world 50 points 5 months ago (2 children)

I feel like some people in this thread are overlooking the tongue-in-cheek nature of this humour post and taking it weirdly personally

[–] Eccitaze@yiffit.net 44 points 5 months ago

Yeah, that's what happens when the LLM they use to summarize these articles strips all nuance and comedy.

[–] amio@kbin.run 14 points 5 months ago

Even for the internet, this place is truly extremely fond of doing that.

[–] madsen@lemmy.world 43 points 5 months ago (13 children)

This is such a fun and insightful piece. Unfortunately, the people who really need to read it never will.

[–] tron@midwest.social 41 points 5 months ago (9 children)

Oh my god this whole post is amazing, thought I'd share my favorite excerpt:

This entire class of person is, to put it simply, abhorrent to right-thinking people. They're an embarrassment to people that are actually making advances in the field, a disgrace to people that know how to sensibly use technology to improve the world, and are also a bunch of tedious know-nothing bastards that should be thrown into Thought Leader Jail until they've learned their lesson, a prison I'm fundraising for. Every morning, a figure in a dark hood, whose voice rasps like the etching of a tombstone, spends sixty minutes giving a TedX talk to the jailed managers about how the institution is revolutionizing corporal punishment, and then reveals that the innovation is, as it has been every day, kicking you in the stomach very hard.

Where the fuck do I donate???????

[–] dumples@midwest.social 37 points 5 months ago (2 children)

I've been a professional data scientist for 5+ years and I'm okay at my job. Good enough to get 3 different jobs at non-FAANG companies, and I've already seen 3 or so hype trains and name changes in what words we use for the same tools and techniques. This AI hype is going to be another one of these, with a few niche cases.

Most of my job is insisting on doing something correctly and then being told that it doesn't give the "correct" response based on leadership expectations. I just change what I do until I get the results that people want to see. I'll just ride this hype wave out for a few years, learning nothing new again. Then I'll find another job based on my experience and requirements gathering to start the cycle again. Maybe I'll get more data engineering skills, which are actually valid.

[–] funkless_eck@sh.itjust.works 20 points 5 months ago* (last edited 5 months ago) (1 children)

Similarly, at my current job (now ending, as they want to end remote work and I don't want to move to a desert in a very red/religious area) I guided them out of "blockchain for supply chain" (lmao, it's cringe to even say that now) and into "AI for productivity automation".

I give it 3 years max before all mentions of AI are scrubbed from the home page

[–] morbidcactus@lemmy.ca 12 points 5 months ago (1 children)

I'm a data engineer/architect and it's the same over here. I constantly get asked "how can we stuff AI into this solution?", never "should we consider using AI here? Is there value?". In my view, people don't understand their data and don't want to put in the effort to understand it, and they think AI will magically pull actionable insights from their dataswamp. Nothing new; that's been a constant for as long as I can recall.

Like I totally understand the draw of new and exciting, but there's so much you can do with traditional analytics, and in my view you really need to have a good foundation before doing anything else.

[–] droopy4096@lemmy.ca 30 points 5 months ago

Best AI rant hands-down. I can agree with every word there.

[–] Buffalox@lemmy.world 20 points 5 months ago (2 children)

We need AI because it's convenient to blame for any problems.

[–] widw@ani.social 12 points 5 months ago (1 children)

This is actually more terrifying than you might have intended.

I've long thought that the greatest danger AI poses is going to be the "man behind the curtain" effect. If people can blame everything on AI then AI can be a blanket covering deliberate harm.

Imagine if government starts using AI for decision making. You could easily end up with a "man behind the curtain" who's actually calling all the shots and just pretending it's the AI doing it. Then you'd effectively have a dictatorship where nobody knows/believes they're in a dictatorship.

[–] Spesknight@lemmy.world 20 points 5 months ago

Hey, we can always say: how can you check if an AI is working when it doesn't come to the office? 🤔

[–] Shadywack@lemmy.world 18 points 5 months ago (1 children)

It's using satire to convey a known truth that some already understand implicitly, some don't want to acknowledge, and some refuse outright, but when you think about it, we've always known how true it is. It's tongue-in-cheek, but it's necessary in order to convince all these AI-washing fuckheads what a gimmick it really is to be making sweeping statements about a chatbot that still can't spell lollipop backwards.

[–] ripcord@lemmy.world 24 points 5 months ago (1 children)

I'm not sure this is satire. A lot of hyperbole, but not satire.

[–] elias_griffin@lemmy.world 12 points 5 months ago* (last edited 5 months ago)

This gets a vote from me for "Best of the Internet 2024", brilliant pacing, super braced, and with precision bluntness. I'm going to pretend the Monero remark is not even there, that's how good it was.

[–] mPony@lemmy.world 11 points 5 months ago

you know what, yes, I love this energy and I want more of it. This is how brave people should talk to management, but it's how everyone should talk to AI hucksters.

[–] demonsword@lemmy.world 9 points 5 months ago

very interesting read, thank you
