When you see AI-related stories, just remember: we're currently living through what, in another 10 or 20 years, will be remembered as the takeoff of AI. Wherever it goes, whether heavily regulated or widespread, AI is only going to get exponentially better, and it won't just be artists complaining about losing their jobs to it.
Not necessarily. Generative AI hasn't been advancing as much as people claim, and we're getting into the "diminishing returns" phase of AI advancement. And if it isn't hitting diminishing returns, we need to switch gears in our anti-AI activism.
It's all about the models and training, though. People who think ChatGPT 3.5/4 can write their legal papers get tripped up because it confabulates ("hallucinates") when it isn't thoroughly trained on a subject. If you fed every legal case from the past 150 years into a model, it would be very effective.
We don't know that it would be effective.
It would write legalese well, and it would recall important cases, but we don't know that more data equates to being good at the task.
As an example, ChatGPT 4 can't alphabetize an arbitrary string of text.
It doesn't understand the task. It mathematically cannot do this task, and no amount of training will let it perform the task with the current LLM architecture.
We can't assume it has real intelligence, we can't assume that all tasks can be performed or internally represented, and we can't assume that more data equals clearly better results.
That’s a matter of working on the prompt interpreter.
As for what I was saying, there's no assumption: models trained on more data, and more specific data, can definitely do the usual information-summary tasks more accurately. This is already being used to create specialized models for legal, programming, and accounting work.
You're right about information summary, and the models are getting better at that.
I guess my point is just to be careful. We assume a lot about AI's abilities, and it's objectively very impressive, but some fundamental things will always be hard or impossible for it until we discover new architectures.
I agree that while it's powerful and the capabilities are novel, it's more limited than many think. Some people believe current "AI" systems/models can do just about anything, like write legal briefs or produce entire working programs in any language. The flaws in truth and accuracy necessitate some serious rethinking. There are, as in your example above, major flaws when you try something like simple arithmetic, since the system isn't really thinking about it.