[–] admin@lemmy.my-box.dev 2 points 6 months ago (1 children)

They answered this further down - they never tried it themselves.

[–] admin@lemmy.my-box.dev 0 points 6 months ago

I agree it's being overused, just for the sake of it. On the other hand, I think right now we're in the discovery phase - we'll find out pretty soon what it's good at, and what it isn't, and correct for that. The things that it IS good at will all benefit from it.

Articles like these, with cherry-picked examples where it gives terribly wrong answers, are great for entertainment, and as a reminder that generated content should not be relied on without critical thinking. But they're not the whole picture, and shouldn't be used to write off the technology itself.

(as a side note, I do have issues with how training data is gathered without consent of its creators, but that's a separate concern from its application)

[–] admin@lemmy.my-box.dev 2 points 6 months ago

That's what I meant by saying you shouldn't use it to replace programmers, but to complement them. You should still have code reviews, but if it can pick up issues before they get to that stage, it will save time for all involved.

[–] admin@lemmy.my-box.dev 18 points 6 months ago* (last edited 6 months ago) (2 children)

Yeah, I saw. But when I'm stuck on a programming issue, I have a couple of options:

  • ask an LLM that I can explain the issue to, correct my prompt a couple of times when it gets things wrong, and then press retry a couple of times to get something useful.
  • ask online and wait, hoping that some day somebody will come along who has the knowledge and the time to answer.

Sure, LLMs may not be perfect, but not having them as an option is worse, and way slower.

In my experience, even when the code it generates is wrong, it will still point you in the right direction regarding the approach. And if it keeps spewing out nonsense, that's usually an indication that what you want is not possible.

[–] admin@lemmy.my-box.dev 17 points 6 months ago (6 children)

It should not be used to replace programmers. But it can be very useful when used by programmers who know what they're doing. ("do you see any flaws in this code?" / "what could be useful approaches to tackle X, given constraints A, B and C?"). At worst, it can be used as rubber duck debugging that sometimes gives useful advice or when no coworker is available.

[–] admin@lemmy.my-box.dev 2 points 6 months ago (2 children)

It also works great for book or movie recommendations, and I think a lot of gpu resources are spent on text roleplay.

Or you could, you know, ask it if gasoline is useful for food recipes and then make a clickbait article about how useless LLMs are.

[–] admin@lemmy.my-box.dev 10 points 6 months ago (1 children)

Not before llamas though. They be the most og.

[–] admin@lemmy.my-box.dev 2 points 6 months ago

This guy gets it.

[–] admin@lemmy.my-box.dev 1 points 6 months ago

ChatGPT is not good enough to use as a substitute for (whatever), but it can be a useful tool for someone who can evaluate its output.

[–] admin@lemmy.my-box.dev 8 points 6 months ago

Thanks, that's a relief!

[–] admin@lemmy.my-box.dev 8 points 6 months ago (2 children)

I think they provide a very reasonable reality check / a bit of reflection. And it sounds like you could use one, if you're surprised that Facebook still exists.

[–] admin@lemmy.my-box.dev 6 points 6 months ago

Same. While Linus is part of the problem for using practices he claims to disagree with, I'd rather be part of the solution by not rewarding it with attention.
