Games

Welcome to the largest gaming community on Lemmy! Discussion for all kinds of games. Video games, tabletop games, card games etc.
Rules
1. Submissions have to be related to games
Video games, tabletop, or otherwise. Posts not related to games will be deleted.
This community is focused on games, of all kinds. Any news item or discussion should be related to gaming in some way.
2. No bigotry or harassment, be civil
No bigotry, hardline stance. Try not to get too heated when entering into a discussion or debate.
We are here to talk and discuss about one of our passions, not fight or be exposed to hate. Posts or responses that are hateful will be deleted to keep the atmosphere good. If repeatedly violated, not only will the comment be deleted but a ban will be handed out as well. We judge each case individually.
3. No excessive self-promotion
Try to keep it to 10% self-promotion / 90% other stuff in your post history.
This is to prevent people from posting for the sole purpose of promoting their own website or social media account.
4. Stay on-topic; no memes, funny videos, giveaways, reposts, or low-effort posts
This community is mostly for discussion and news. Remember to search for the thing you're submitting before posting to see if it's already been posted.
We want to keep the quality of posts high. Therefore, memes, funny videos, low-effort posts and reposts are not allowed. We prohibit giveaways because we cannot be sure that the person holding the giveaway will actually do what they promise.
5. Mark Spoilers and NSFW
Make sure to mark your stuff or it may be removed.
No one wants to be spoiled. Therefore, always mark spoilers. Similarly mark NSFW, in case anyone is browsing in a public space or at work.
6. No linking to piracy
Don't share it here, there are other places to find it. Discussion of piracy is fine.
We don't want us moderators or the admins of lemmy.world to get in trouble for linking to piracy. Therefore, any link to piracy will be removed. Discussion of it is of course allowed.
Related communities
PM a mod to add your own
Video games
Generic
- !gaming@Lemmy.world: Our sister community, focused on PC and console gaming. Memes are allowed.
- !photomode@feddit.uk: For all your screenshot needs, to share your love of game graphics.
- !vgmusic@lemmy.world: A community to share your love of video game music.
By type
- !AutomationGames@lemmy.zip
- !Incremental_Games@incremental.social
- !LifeSimulation@lemmy.world
- !CityBuilders@sh.itjust.works
- !CozyGames@Lemmy.world
- !CRPG@lemmy.world
- !horror_games@piefed.world
- !OtomeGames@ani.social
- !Shmups@lemmus.org
- !space_games@piefed.world
- !strategy_games@piefed.world
- !turnbasedstrategy@piefed.world
- !tycoon@lemmy.world
- !VisualNovels@ani.social
By games
- !Baldurs_Gate_3@lemmy.world
- !Cities_Skylines@lemmy.world
- !CassetteBeasts@Lemmy.world
- !Fallout@lemmy.world
- !FinalFantasyXIV@lemmy.world
- !Minecraft@Lemmy.world
- !NoMansSky@lemmy.world
- !Palia@Lemmy.world
- !Pokemon@lemm.ee
- !Silksong@indie-ver.se
- !Skyrim@lemmy.world
- !StardewValley@lemm.ee
- !Subnautica2@Lemmy.world
- !WorkersAndResources@lemmy.world
Language specific
- !JeuxVideo@jlai.lu: French
If he'd just forgone that last paragraph...
I'm not against the usage of AI in general. The problem only comes up when the human relies on it completely; if you're using it for learning, quickly scanning documentation, or writing code with a critical eye and years of normal programming experience behind you, that's fine. Bro has 30 years of development experience, so I guess he knows what good code looks like.
Even then, it feels dishonest to hide when such a historically unreliable tool is being used.
It is unreliable if unsupervised, of course. Microsoft and all those big corpos are vibecoding the whole thing; that's why AI has gotten a bad reputation in the community despite being objectively useful. Using AI to code ≠ vibecoding.
Yeah, this is actually one of the good things a technology like this can do.
He's dead right. In terms of slop, if it's someone with training and experience using a tool, it doesn't matter whether that tool is vim or Claude. It ain't slop if it's built right.
It ain't slop if it's built right.
Yeah, but the problem is: is it? They absolutely insist that we use AI at work, which is an insane concept in and of itself, but the bigger problem is that if I have to nanny it to make sure it doesn't make a mistake, then how is it a useful product?
He says it helps him get work done he wouldn't otherwise do, but how is that possible? How can he be giving every line of code the same scrutiny he would if he wrote it himself, when he himself admits he would never have gotten around to writing that code had the AI not done it? The math ain't matching on this one.
the problem is that if I have to nanny it to make sure it doesn’t make a mistake then how is it a useful product?
When was the last time you coded something perfectly? "If I have to nanny you to make sure you don't make a mistake, then how are you a useful employee?" See how that doesn't make sense? There's a reason good development shops live on the backs of their code reviews and review practices.
The math ain’t matching on this one.
The math is just fine. Code reviews, even audit-level thorough ones, cost far less time than doing the actual coding.
There's also something to be said about the value in being able to tell an LLM to go chew on some code and tests for 10 minutes while I go make a sandwich. I get to make my sandwich, and come back, and there's code there. I still have to review it, point out some mistakes, and then go back and refill my drink.
And there's so much you can customize with personal rules. Don't like its coding style? Write Markdown rules that reflect your own style. Have issues with it tripping over certain bugs? Write rules or memories that remind it to be more aware of those bugs. Are you explaining a complex workflow to it over and over again? Explain it once, and tell it to write the rules file for you.
All of that saves more and more time. The more rules you have for a specific project, the more knowledge it retains about how to code for that project, and the more experience you gain in communicating with an entity that can understand your ideas. You wouldn't believe how many people can't rubber-duck and explain concepts properly to other people, much less to LLMs.
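As a sketch, a project rules file like the ones described might look something like this. The file name, location, and exact format depend on the tool (Cursor, Claude Code, and others each have their own conventions); everything below is an invented example, not any tool's documented schema:

```markdown
## Coding style
- Use the project's existing logging wrapper; do not add stray print calls.
- Extend existing implementations instead of generating parallel ones.

## Known pitfalls
- The config loader caches values; invalidate the cache in tests
  before asserting on settings.

## Workflow
- Summarize the plan before editing files.
- Run the test suite before proposing a commit.
```

The point of keeping rules in a plain file is that they accumulate: every correction you'd otherwise repeat in chat gets written down once and applied to every future session.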
LLMs are patient. They don't give a shit if you keep demanding more and more tweaks and fixes, or if you have to spend a bit of time trying to explain a concept. Human developers would get tired of your demands after a while, and tell you to fuck off.
Well, I'm not a code monkey, between dyslexia and an aging brain. But if it's anything like the tiny bit of coding I used to be able to do (back in the days of BASIC and Pascal), you don't really have to pore over every single line. The only time that's needed is when something is broken. Otherwise, you're scanning to keep oversight, which is no different from reviewing a human's code that you didn't write.
Look at it like this: we automated the assembly of machines a long time ago. It had flaws early on that required intense supervision. The only difference here, on a practical level, is how the damn things learned in the first place. Automating code generation is far more similar to that than to LLMs generating text or images, which aren't logical by nature.
If the code used to train the models was good, what they output will be no worse, at scale, than some high school kid in an AP class stepping into their first serious challenges. It will need review, but if the output is going to be open source to begin with, it'll get that review even if the project maintainers slip up.
And being real, Lutris has been very smooth across the board while using the generated code so far. So if he gets lazy, it could go downhill; but that could happen if he got lazy with his own code, too.
Another concept that I am more familiar with, that does relate. Writing fiction can take months. Editing fiction usually takes days, and you can still miss stuff (my first book has typos and errors to this day because of the aforementioned dyslexia and me not having a copy editor).
My first project back in the eighties, in BASIC, took me three days to crank out during the summer program I was in. The professor running the program took an hour to scan and correct that code.
Maybe I'm too far behind the various languages, but I really can't see it being a massively harder proposition to scan and edit the output of an llm.
If he's using it like an IDE tool and not vibe coding, then I don't have much issue with this. His comment indicates that he has a brain and uses it. So many people just turn off their brain when they use AI and couldn't even write this comment I just wrote without asking AI for assistance.
Yeah, that's my biggest worry. I always have to hold colleagues to the basics of programming standards as soon as they start using AI for a task, since it's easier to generate a second implementation of something we already have in the codebase than to extend the existing implementation.
But that was pretty much always true. We still didn't slap another implementation onto the side, because it's horrible for maintenance: you then need to adjust two (or more) implementations whenever requirements change.
And it's horrible for debugging problems, because parts of the codebase will then behave subtly different from other parts. This also means usability is worse, as users expect consistency.
And the worst part is that they don't even have an answer to those concerns. They know it's going to bite us in the ass in the near future. They're on a sugar high, because adding features is quick, while they look away from the codebase getting incredibly fat just as quickly.
And when it comes to actually maintaining that generated code, they'll be the hardest to motivate, because that isn't as fun as slapping another feature onto the side, nor do they feel responsible for the code, because they don't really know how it works. Never mind that they're also less sharp in general, because they've outsourced their thinking.
Hell, most people turn off their brains when the word gets mentioned at all. There's plenty of basic shit an AI can do exactly as well as a human. But people hear AI and instantly become the equivalent of a shit-eating insect.
As long as you're educated and experienced enough to know the limitations of your tools and use them accurately and correctly, AI is literally a non-factor and about as likely to make an error as the dev themselves.
The problem with AI slop code comes from executives in high up positions forcing the use of it beyond the scope it can handle and in use cases it's not fit for.
Lutris doesn't have that problem.
So unless the guy suddenly goes full stupid and starts letting AI write everything, the quality is not going to change. If anything, it's likely to improve as he offloads tedious small things to his more efficient tools.
Somehow hiding the code feels worse than using the code. This whole thing is yuck.
Well, when you have a massive problem of harassment, death threats, and shit stains screaming at every single dev who is even theorized to use AI, regardless of whether it's true or not,
I blame fucking no one for hiding the fact.
This is on the users, not the dev. The users are fucking animals and created this very problem.
Blaming the wrong people and attacking them is the yuck.
Scream at the executives and giant corpos who created the problem not some random indie dev using a tool.
Yeah, management wants us to use AI at $DAYJOB, and one of the strategies we've considered for lessening its negative impact on productivity is to always put generated code into an entirely separate commit.
Because it will guess design decisions at random while generating, and you want to know afterwards whether a design decision was made by the randomizer or by something with intelligence. Much like you want to know whether a decision was made by the senior (in which case you should think twice about overriding it) or by the intern who knows none of the project context.
We haven't actually started doing these separate commits, because it's cumbersome in other ways, but yeah, deliberately obfuscating whether the randomizer was involved, that robs you of that information even more.
I think the simple fact that some of the people in this thread don't understand is that the people they're asking to vet the code don't know how.
They may mean that the people who can vet code should do so before making a fuss about the AI written portions of it, but I don't know that most of the people in opposition to their comments understand that context.
I haven't coded anything since the 90's. I know HTML and basic CSS and that's it. I wouldn't have known where to start without guides to explain what commands in Linux do and how they work together. Growing up with various versions of Windows and DOS, I'd still consider myself a novice computer user. I absolutely do know how to go into the command line and make things happen. But I wouldn't know where to start to make a program. It's not part of my skill set.
Most users are like that. They engage with only parts of a thing. It's why so many people these days are computer illiterate due to the rise of smartphone usage and apps for everything.
It'd be like me asking a frequent flyer to inspect a plane engine for damage or figure out why the landing gear doesn't retract. A lot of people wouldn't know where to start.
I fully agree that other coders on the internet who frequent places like GitHub and make it a point to vet the code of other devs who provide their code for free probably should vet the code before they make assumptions about its quality. And I fully agree that deliberately stirring shit without actually contributing anything meaningful to the community or the project is really just messed up behavior.
But the way I see it, there are two different groups, and they have very different views of this situation.
The people who can't code are consumers. Their contribution is to use the software if they want, and if it works for them to spread by word of mouth what they like about it. Maybe to donate if they can and the dev accepts donations.
If those people choose to boycott, it'll be on the basis of their moral feelings about the use of AI or at the recommendation of the second group due to quality.
The second group are the peer reviewers so to speak and they can and should both vet the code and sound the alarm if there's something wrong.
I suppose there's a third subset of people in the case of FOSS work who can and often do help with projects and I wonder if that is better or worse for the reasons listed in the thread like poorly human written code and simple mistakes.
Humans certainly aren't infallible. But at least they can tell you how they got the output they got or the reason why they did x. You can have a rational conversation with a human being and for the most part they aren't going to make something up unless they have an ulterior motive.
Perhaps breaking things down into tiny chunks makes AI better, or its outputs more usable. Maybe there's a "sweet spot".
But I think people also get worried that what happens a lot is people who use AI often start to offload their own thinking onto it and that's dangerous for many reasons.
This person also admits to having depression. Depression can affect how you respond to information, how well you actually understand the information in front of you. It can make you forget things you know, or make things that much harder to recall.
I know that from experience. So in this case does the AI have more potential to help or do harm?
There's a lot to this. I have not personally used Lutris, but before this happened I wouldn't have thought twice about saying that I've heard good things about it if someone asked me for a Heroic launcher style software for Linux.
But just like the Ladybird fork of Firefox I don't know that I feel comfortable suggesting it if this is the state of things. For the same reason I don't currently feel comfortable recommending Windows 11 or Chrome.
There are so many sensitive things that OSes and web browsers handle that people take for granted. If nobody was sounding the alarm about those, I feel like nothing would get better. By contrast, Lutris isn't swimming in a big pond of sensitive information, but it is running on people's hardware, and users should have both the right to be informed and the right to choose.
Every extra person using all these AI tools is only adding to the issue.
No, literally the opposite. They are going to keep doing this until it is no longer financially viable. The more frugal and conscientious people are with their AI use, the longer it stays financially viable. If you want to pop the bubble, go set up a bot to hammer their free systems with bogus prompts. Run up their bills until they can't afford to be speculative any more.
You can criticise them, but ultimately they are an unpaid developer making their work freely available for the benefit of us all. At the very least, don't harass the developer.
AI is immeasurably shitty, both in terms of code quality and of morality. The fact that this developer is hiding his use of it from his community is despicable. I will never use Lutris again, nor will I allow PRs from this developer on any repos of mine. Fuck AI, and fuck strycore (deceitful bastard and Lutris "developer").