this post was submitted on 20 Jun 2024
471 points (89.7% liked)


How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can't manage this consistently with CRUD apps, and people think that number isn't laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?

....

I don't believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.

[–] IHeartBadCode@kbin.run 126 points 5 months ago (7 children)

I had my fun with Copilot before I decided that it was making me stupider - it's impressive, but not actually suitable for anything more than churning out boilerplate.

This. Many of these tools are good at incredibly basic boilerplate that's just a hint outside of what, say, a wizard would generate. But to hear some of these AI grifters talk, this stuff is going to render programmers obsolete.

There's a reality to these tools. That reality is they're helpful at times, but they are hardly transformative at the levels the grifters go on about.

[–] 0x0@programming.dev 45 points 5 months ago (2 children)

I use them like Wikipedia: it's a good starting point and that's it (and this comparison is a disservice to Wikipedia).

[–] SandbagTiara2816@lemmy.dbzer0.com 11 points 5 months ago (2 children)

Yep! It’s a good way to get over the fear of a blank page, but I don’t trust it for more than outlines or summaries

[–] deweydecibel@lemmy.world 4 points 5 months ago

I wouldn't even trust it for summaries beyond extremely basic stuff.

[–] ripcord@lemmy.world 4 points 5 months ago (1 children)

Man, I need to build some new shit.

I can't remember the last time I looked at a blank page.

[–] mPony@lemmy.world 2 points 5 months ago

Blank pages are for the young

[–] grrgyle@slrpnk.net 7 points 5 months ago

I agree with your parenthetical, but Wikipedia actually agrees with your main point: Wikipedia itself is not a source of truth.

[–] sugar_in_your_tea@sh.itjust.works 44 points 5 months ago (4 children)

I interviewed a candidate for a senior role, and they asked if they could use AI tools. I told them to use whatever they normally would; I only care that they get a working answer and that they can explain the code to me.

The problem was fairly basic: something like randomly generate two points and find the distance between them, and we had given them the details (e.g., that distance means a straight line). They used AI, which went well until it generated the Manhattan distance instead of the Euclidean distance (Pythagorean theorem). They didn't correct it, so we pointed it out and gave them the equation (totally fine, most people forget it under pressure). Anyway, they refactored the code, used AI again, got the same mistake, didn't catch it, and we ended up pointing it out again.
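
To make the mix-up concrete, here's a rough sketch of the two metrics (a minimal illustration in Python; the names are mine, not the candidate's actual code):

    import math

    # What the AI kept producing: Manhattan distance.
    def manhattan(x1, y1, x2, y2):
        return abs(x2 - x1) + abs(y2 - y1)

    # What we asked for: straight-line (Euclidean) distance.
    def euclidean(x1, y1, x2, y2):
        return math.hypot(x2 - x1, y2 - y1)

    # A 3-4-5 triangle makes the difference obvious:
    print(manhattan(0, 0, 3, 4))  # 7
    print(euclidean(0, 0, 3, 4))  # 5.0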

Anyway, at the end of the challenge, we asked them how confident they felt about the code and what they'd need to do to feel more confident (nudge toward unit testing). They said their code was 100% correct and they'd be ready to ship it.

They didn't pass the interview.

And that's my opinion about AI in general: it's probably making you stupider.

[–] deweydecibel@lemmy.world 29 points 5 months ago* (last edited 5 months ago) (3 children)

I've seen people defend using AI this way by comparing it to using a calculator in a math class, i.e., if the technology knows it, I don't need to know it.

And I feel like, for the kind of people whose grasp of technology, knowledge, and education are so juvenile that they would believe such a thing, AI isn't making them dumber. They were already dumb. What the AI does is make code they don't understand more accessible, which is to say, it's just enabling dumb people to be more dangerous while instilling them with an unearned confidence that only compounds the danger.

[–] AdamBomb@lemmy.sdf.org 10 points 5 months ago

Spot-on description.

Yup. And I'm unwilling to be the QC in a coding assembly line; I want competent peers who catch things before I do.

But my point isn't that AI actively makes individuals dumber; it's that it's making people in general dumber. I believe that to be true about a lot of technology. In the 80s, people were familiar with command-line interfaces, and jumping to some coding wasn't a huge leap, but today, people can't figure out how to do a thing unless there's an app for it. AI is just the next step along that path; soon, even traditionally competent industries will be little more than QC, and nobody will remember how the sausage is made.

If they can demonstrate that they know how the sausage is made and how to inspect a package of sausages, I'm fine with it. But if they struggle to even open the sausage package, we're going to have problems.

[–] conciselyverbose@sh.itjust.works 8 points 5 months ago

Yeah, I honestly don't have any real issue with using it to accelerate your workflow. I think it's hit or miss how much it does, but it's probably a slight step up from code completion without "AI".

But if you don't understand every line of code "you" write completely, you're being grossly negligent and begging for a shitshow.

[–] Excrubulent@slrpnk.net 8 points 5 months ago* (last edited 5 months ago) (1 children)

Wait wait wait so... this person forgot the pythagorean theorem?

Like that is the most basic task. It's d = sqrt((x1 - x2)^2 + (y1 - y2)^2), right?

That was off the top of my head, this person didn't understand that? Do I get a job now?

I have seen a lot of programmers talk about how much time it saves them. It's entirely possible it makes them very fast at making garbage code. One thing I've known for a long time is that understanding code is much harder than writing it, and so asking an LLM to generate your code sounds like it's just creating harder work for you, unless you don't care about getting it right.

[–] sugar_in_your_tea@sh.itjust.works 11 points 5 months ago (1 children)

Yup, you're hired as whatever position you want. :)

Our instructions were basically:

  1. randomly place N coordinates on a 2D grid, and a random target point
  2. report the closest of those N coordinates to the target point

It was technically different (we phrased it as a top-down game, but same gist). The AI generated the Manhattan distance (abs(x2 - x1) + abs(y2 - y1)), probably due to other clues in the text, but the instructions were clear. The candidate didn't notice what it was doing, we pointed it out, then they asked for the algorithm, which we provided.
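
For reference, a correct solution could be as small as this (a sketch in my own words, not the actual exercise text or anyone's submission):

    import math
    import random

    def closest_point(n=10, size=100):
        # Randomly place N coordinates on a 2D grid, plus a random target.
        points = [(random.randint(0, size), random.randint(0, size))
                  for _ in range(n)]
        target = (random.randint(0, size), random.randint(0, size))
        # Report the coordinate closest to the target by straight-line
        # (Euclidean) distance, not Manhattan distance.
        return min(points, key=lambda p: math.hypot(p[0] - target[0],
                                                    p[1] - target[1]))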

Our better candidates remember the equation like you did. But we don't require it, since not all applicants finished college (this one did). We're more concerned about code structure, asking proper questions, and software design process, but math knowledge is cool too (we do a bit of that).

[–] frezik@midwest.social 7 points 5 months ago (1 children)

College? Pythagorean Theorem is mid-level high school math.

I did once talk to a high school math teacher about a graphics program I was hacking away on at the time, and she was surprised that I actually use the stuff she teaches. Which is to say that I wouldn't expect most programmers to know it exactly off the top of their head, but I would expect they've been exposed to it and can look it up if needed. I happen to have it pretty well ingrained in my brain.

[–] sugar_in_your_tea@sh.itjust.works 5 points 5 months ago (1 children)

Yes, you learn it in the context of finding the hypotenuse of a triangle, but:

  • a lot of people are "bad" at math (more accurately, unconfident), but good with logic
  • geometry, trig, etc require a lot of memorization, so it's easy to forget things
  • interviews are stressful, and good applicants will space on basic things

So when I'm interviewing, I try to provide things like algorithms that they probably know but are likely to space on, and focus on the part I care about: can they reason their way through a problem, produce working code, and then turn around and review their own code? Programming is mostly googling stuff (APIs, algorithms, etc.); I want to know if they can google the right stuff.

And yeah, we let applicants look stuff up; we just short-circuit the less important stuff so they have time to show us the important parts. We dedicate 20-30 min to coding (up to an hour if they rocked the questions but are struggling on code), and we expect a working solution and for them to ask questions about vague requirements. It's a software engineering test, not a math test.

[–] Excrubulent@slrpnk.net 2 points 5 months ago

Yeah, that's absolutely fair, and it's a bit snobby of me to get all up in arms about forgetting a formula - although it is high school level where I live. But to be handed the formula, informed that there's an issue and still not fix it is the really hard part to wrap my head around, given it's such a basic formula.

I guess I'm also remembering someone I knew who got a programming job off the back of someone else's portfolio, who absolutely couldn't program to save their life and revealed that to me in a glaring way when I was trying to help them out. It just makes me think of that study suggesting there might be a "programmer brain" that you either have or you don't. They ended up costing that company a lot, to my knowledge.

[–] xavier666@lemm.ee 4 points 5 months ago (1 children)

I don't want to believe that coders like these exist and are this confident in an AI's ability to code.

My co-worker told me another story.

His friend was in a programming class and made it nearly to the end, when he asked my co-worker for help. Basically, he had already written the solution, but it wasn't working and he needed help debugging it. My co-worker looked at the code, and it looked AI-generated because there were obvious mistakes throughout, so he asked his friend to walk him through the code; that's when his friend admitted to AI-generating the whole thing. My co-worker refused to help.

They do exist, but this candidate wasn't that. I think they were just under pressure and didn't know the issue. The red flag for me wasn't AI or not catching the AI issues, it was that when I asked how confident they were about the code (after us catching the same bug twice), they said 100% and they didn't need any extra assurance (I would've wanted to write tests).
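
For what it's worth, the extra assurance I had in mind could have been a single unit test, something like this (hypothetical names; a Manhattan implementation would fail it):

    import math

    def distance(x1, y1, x2, y2):
        return math.hypot(x2 - x1, y2 - y1)

    def test_distance_is_straight_line():
        # Manhattan would give 2 here; straight-line gives sqrt(2).
        assert math.isclose(distance(0, 0, 1, 1), math.sqrt(2))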

[–] Zikeji@programming.dev 30 points 5 months ago (2 children)

Copilot / LLM code completion feels like having a somewhat intelligent helper who can think faster than I can; however, they have no understanding of how to actually code, but are good at mimicry.

So it's helpful for saving time typing some stuff, and sometimes the absolutely weird suggestions make me think of other scenarios I should consider, but it's not going to do the job itself.

[–] deweydecibel@lemmy.world 16 points 5 months ago* (last edited 5 months ago)

So it's helpful for saving time typing some stuff

Legitimately, this is the only use I've found for it. If I need something extremely simple and I'm feeling too lazy to type it all out, it'll do the bulk of it, and then I just go through and edit out all the little mistakes.

And what gets me is that anytime I read all of the AI wank about how people are using these things, it kind of just feels like they're leaving out the part where they have to edit the output too.

At the end of the day, we've had this technology for a while, it's just been in the form of predictive suggestions on a keyboard app or code editor. You still had to steer in the right direction. Now it's just smart enough to make it from start to finish without going off a cliff, but you still have to go back and fix it, the same way you had to steer it before.

[–] afraid_of_zombies@lemmy.world 4 points 5 months ago

but are good at mimicry.

I know engineers who make over double what I make solely because of that skill.

[–] grrgyle@slrpnk.net 8 points 5 months ago

I think we all had that first moment where Copilot generated a good snippet and we were blown away. But having used it for a while now, I find most of what it suggests feels like a joke.

Like it does save some typing / time spent checking docs, but you have to be very careful to check its work.

I've definitely seen a lot more impressively voluminous, yet flawed, pull requests since my employer started pushing for everyone to use it.

I foresee a real reckoning of unmaintainable codebases in a couple years.

[–] Shadywack@lemmy.world 5 points 5 months ago

Looks like two people suckered by the grifters downvoted your comment (as of this writing). Should they read this: it's a grift, get over it.