this post was submitted on 05 Jul 2024
1091 points (95.5% liked)

Memes

45727 readers
827 users here now

Rules:

  1. Be civil and nice.
  2. Try not to excessively repost; as a rule of thumb, wait at least 2 months if you must.

founded 5 years ago
you are viewing a single comment's thread
[–] daniskarma@lemmy.dbzer0.com 112 points 4 months ago (7 children)

So the problem isn't the technology. The problem is unethical big corporations.

[–] NuraShiny@hexbear.net 18 points 4 months ago (2 children)

Disagree. The technology will never yield AGI as all it does is remix a huge field of data without even knowing what that data functionally says.

All it can do now and ever will do is destroy the environment by using oodles of energy, just so some fucker can generate a boring big titty goth pinup with weird hands and weirder feet. Feeding it exponentially more energy will do what? Reduce the number of fingers and the foot weirdness? Great. That is so worth squandering our dwindling resources on.

[–] PM_ME_VINTAGE_30S@lemmy.sdf.org 15 points 4 months ago (1 children)

> Disagree. The technology will never yield AGI as all it does is remix a huge field of data without even knowing what that data functionally says.

We definitely don't need AGI for AI technologies to be useful. AI, particularly reinforcement learning, is great for teaching robots to do complex tasks, for example. LLMs have a shocking ability, relative to other approaches (if limited compared to humans), to generalize to "nearby but different enough" tasks. And once they're trained (and possibly quantized), they (LLMs and reinforcement learning policies) don't require that much more power to run than traditional algorithms. So IMO, the question should be "is it worthwhile to spend the energy to train X thing?" Unfortunately, the capitalists have been the ones answering that question, because they can do so at our expense.

For a person without access to big computing resources (me, lol), there's also the fact that transfer learning is possible for both LLMs and reinforcement learning. The easiest way to explain transfer learning is this: imagine I want to learn Engineering, Physics, Chemistry, and Computer Science. What should I learn first so that each subject is easy to pick up? My answer would be Math. So in AI speak: if we spend a ton of energy training an AI to do math and then fine-tune agents to do Physics, Engineering, etc., we avoid training every agent from scratch. Fine-tuning can typically be done on "normal" computers with FOSS tools.
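The idea above can be sketched in a few lines of plain Python. This is a toy illustration of my assumptions, not a real LLM pipeline: the "frozen backbone" is just a handcrafted feature map standing in for expensively pretrained weights, and the fine-tuned "head" is a tiny linear layer fitted to a made-up task.

```python
import random

random.seed(0)

# Hypothetical pretrained "math" backbone: a fixed feature map we
# do NOT retrain. In a real setup this would be a large model's
# frozen layers; here it is just a handcrafted feature extractor.
def frozen_backbone(x):
    return [x, x * x, 1.0]  # features: x, x^2, and a bias term

# Task-specific head: the only parameters we fine-tune.
weights = [0.0, 0.0, 0.0]

def predict(x):
    return sum(w * f for w, f in zip(weights, frozen_backbone(x)))

# Toy "physics" task: learn y = 2*x^2 + 3 from a few samples.
data = [(x, 2 * x * x + 3) for x in [-2, -1, 0, 1, 2]]

# Plain gradient descent on squared error, updating only the head;
# the backbone's "knowledge" is reused for free.
lr = 0.01
for _ in range(5000):
    for x, y in data:
        feats = frozen_backbone(x)
        err = predict(x) - y
        for i in range(len(weights)):
            weights[i] -= lr * err * feats[i]

print(round(predict(3), 2))  # close to 2*9 + 3 = 21
```

Because only the three head parameters are updated, "training" here is cheap; that asymmetry is the whole point of fine-tuning on consumer hardware.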

> all it does is remix a huge field of data without even knowing what that data functionally says.

IMO that can be an incredibly useful approach for solving problems whose dynamics are too complex to reasonably model, with the understanding that the obtained solution is a crude approximation to the underlying dynamics.

IMO I'm waiting for the bubble to burst so that AI can be just another tool in my engineering toolkit instead of the capitalists' newest plaything.

Sorry about the essay, but I really think AI tools have a huge potential to make life better for us all, and an obviously much greater potential for capitalists to destroy us all, so long as we don't understand these tools and use them against the powerful.

[–] NuraShiny@hexbear.net 1 points 4 months ago (3 children)

Since I don't feel like arguing, I will grant you that you are correct about what you say AI can do. I'm not really granting it, but whatever, say it can:

How will these reasonable AI tools emerge out of this under capitalism? And how is it not all still just theft with extra steps that is immoral to use?

[–] daniskarma@lemmy.dbzer0.com 3 points 4 months ago (7 children)

Idk. I find it a great coding help. IMO AI tech has legitimate good uses.

Image generation also has great uses without falling into porn. It enables people who don't know how to paint to make some art.

[–] pyre@lemmy.world 10 points 4 months ago (4 children)

depends. for "AI" "art" the problem is both terms are lies. there is no intelligence and there is no art.

[–] lauha@lemmy.one 7 points 4 months ago (2 children)
[–] oatscoop@midwest.social 4 points 4 months ago* (last edited 4 months ago) (7 children)

Any work made to convey a concept and/or emotion can be art. I'd throw in "intent", having "deeper meaning", and the context of its creation to distinguish between an accounting spreadsheet and art.

The problem with AI "art" is it's produced by something that isn't sentient and is incapable of original thought. AI doesn't understand intent, context, emotion, or even the most basic concepts behind the prompt or the end result. Its "art" is merely a mashup of ideas stolen from countless works of actual, original art run through an esoteric logic network.

AI can serve as a tool to create art of course, but the further removed from the process a human is the less the end result can truly be considered "art".

[–] pyre@lemmy.world 3 points 4 months ago (9 children)

i won't, but art has intent. AI doesn't.

Pollock's paintings are art. a bunch of paint buckets falling on a canvas in an earthquake wouldn't make art, even if it resembled Pollock's paintings. there's no intent behind it. no artist.

[–] AdrianTheFrog@lemmy.world 4 points 4 months ago (1 children)

The intent comes from the person who writes the prompt and selects/refines the most fitting image it makes.

[–] PolandIsAStateOfMind@lemmy.ml 3 points 4 months ago* (last edited 4 months ago) (1 children)

> there is no intelligence and there is no art.

People said the exact same thing about CGI, and about photography before that. I wouldn't be surprised if somebody screamed "IT'S NOT ART" at Michelangelo, or at the people carving the walls of temples in ancient Egypt.

[–] pyre@lemmy.world 2 points 4 months ago (3 children)

the "people" you're talking about were talking about tools. I'm talking about intent. Just because you compare two arguments that use similar words doesn't mean the arguments are similar.

[–] daniskarma@lemmy.dbzer0.com 3 points 4 months ago* (last edited 4 months ago) (13 children)

AI is a tool used by a human. The human using the tools has an intention, wants to create something with it.

It's exactly the same as painting digital art. But instead of moving the mouse around, or copying other images into a collage, you use the AI tool, which can be pretty complex to use, to create something beautiful.

Do you know what generative art is? It existed before AI. Surely, with your gatekeeping, you think that's also not art.

[–] Umbrias@beehaw.org 6 points 4 months ago (2 children)

Technology is a cultural creation, not a magic box outside of its circumstances. "The problem isn't the technology, it's the creators, users, and perpetuators" is tautological.

And, importantly, the purpose of a system is what it does.

[–] daniskarma@lemmy.dbzer0.com 3 points 4 months ago (7 children)

But not all users of AI are malignant or causing environmental damage.

Claiming otherwise would be an overgeneralization.

I have LLM models running on an N100 chip that consume less power than the Lemmy servers we are writing on right now.

[–] areyouevenreal@lemm.ee 1 points 4 months ago* (last edited 4 months ago) (7 children)

Technology is a product of science. The facts science seeks to uncover are fundamental universal truths that aren't subject to human folly. Only how we use that knowledge is subject to human folly. I don't think open source or open weights models are a bad usage of that knowledge. Some of the things corporations do are bad or exploitative uses of that knowledge.

[–] HawlSera@lemm.ee 4 points 4 months ago

Always has been

[–] kibiz0r@midwest.social 3 points 4 months ago (1 children)

Considering most new technology these days is merely a distillation of the ethos of the big corporations, how do you distinguish?

[–] daniskarma@lemmy.dbzer0.com 4 points 4 months ago

Not true, though.

Current generative AI has its basis in the work of Frank Rosenblatt and other scientists working mostly in universities.

Big corporations built implementations, but the science behind it already existed. It was not created by those corporations.

[–] explodicle@sh.itjust.works 2 points 4 months ago

This has been going on since big oil popularized the "carbon footprint". They want us arguing with each other about how useful crypto/AI/whatever are, instead of agreeing about Pigouvian energy taxes and socialized control of the (already monopolized) grid.