kibiz0r

joined 1 year ago
[–] kibiz0r@midwest.social 2 points 2 months ago

Fake lawyers, fake reviews, and several pyramid schemes. Solid takedowns, FTC!

[–] kibiz0r@midwest.social 24 points 2 months ago

I’ll believe it when GN says it.

[–] kibiz0r@midwest.social 15 points 2 months ago (1 children)

There’s this podcast I used to enjoy (I still enjoy it, but they stopped making new episodes) called Build For Tomorrow (previously known as The Pessimists Archive).

It’s all about times in the past when people freaked out about something changing, only for it all to turn out okay.

After having listened to every single episode — some multiple times — I’ve got this sinking feeling that just mocking the worries of the past misses a few important things.

  1. The paradox of risk management. If you have a valid concern, and we collectively do something to respond to it and prevent the damage, it ends up looking as if you were worried over nothing.
  2. Even for inventions that are, overall, beneficial, they can still bring new bad things with them. You can acknowledge both parts at once. When you invent trains, you also invent train crashes. When you invent electricity, you also invent electrocution. That doesn’t mean you need to reject the whole idea, but you need to respond to the new problems.
  3. There are plenty of cases where we have unleashed horrors onto the world while mocking the objections of the pessimists. Lead, PFAS, CFCs, radium paint, etc.

I’m not so sure that the concerns about AI “killing culture” actually are as overblown as the worry about cursive, or record players, or whatever. The closest comparison we have is probably the printing press. And things got so weird with that so quickly that the government claimed a monopoly on it. This could actually be a problem.

[–] kibiz0r@midwest.social 8 points 2 months ago

If we’ve learned any lesson from the internet, it’s that once something exists it never goes away.

Sure, people shouldn’t believe the output of their prompt. But if you’re generating that output, a site can use the API to generate a similar output for a similar request. A bot can generate it and post it to social media.

Yeah, don’t trust the first source you see. But if the search results are slowly being colonized by AI slop, it gets to a point where the signal-to-noise ratio is so poor that it stops making sense to blame only the discernment of those trying to find the signal.

[–] kibiz0r@midwest.social 12 points 2 months ago (1 children)

I recommend listening to the episode. The crash is the overarching story, but there are smaller stories woven in which are specifically about AI, and it covers multiple areas of concern.

The theme that I would highlight here though:

More automation means fewer opportunities to practice the basics. When automation fails, humans may be unprepared to take over even the basic tasks.

But it compounds. Because the better the automation gets, the rarer manual intervention becomes. At some point, a human only needs to handle the absolute most unusual and difficult scenarios.

How will you be ready for that if you don’t get practice along the way?

[–] kibiz0r@midwest.social 21 points 2 months ago (2 children)

> Nor is losing your night vision to the glare of a car (it's always a pickup) behind you with too-bright lights that fill your mirrors.

It really fucking is. Nothing is a bigger red flag to me than a pickup. 98% of pickup drivers are assholes.

[–] kibiz0r@midwest.social 10 points 2 months ago (3 children)

Basically this: Flying Too High: AI and Air France Flight 447

Description

Panic has erupted in the cockpit of Air France Flight 447. The pilots are convinced they’ve lost control of the plane. It’s lurching violently. Then, it begins plummeting from the sky at breakneck speed, careening towards catastrophe. The pilots are sure they’re done for.

Only, they haven’t lost control of the aircraft at all: one simple manoeuvre could avoid disaster…

In the age of artificial intelligence, we often compare humans and computers, asking ourselves which is “better”. But is this even the right question? The case of Air France Flight 447 suggests it isn't, and that the consequences of asking the wrong question are disastrous.

[–] kibiz0r@midwest.social 21 points 2 months ago (1 children)

Surprising number of people taking this seriously.

[–] kibiz0r@midwest.social 53 points 2 months ago

Don’t worry. Someone will soon come by to remind us that it’s pointless to regulate AI, and also harmful to do it, and it’s actually a good thing for everyone, and also we’ll be shoveling shit until we die if we don’t get on board, and please oh please just let me get off to one more deepfake of my classmate before you take away my toy it’s not faiiiiir.

[–] kibiz0r@midwest.social 2 points 2 months ago

Yeah… What a mess. A horrible, horrible idea.

[–] kibiz0r@midwest.social 1 point 2 months ago (2 children)

Mass producing disguised explosives is risky business.

Obviously they wanna price them low, to attract buyers in the target market. But if you price them too low, they become an opportunity for middlemen to resell to another market.

And now you’ve spread several batches of explosives to who-knows-where.

Hopefully they thought of that and restricted the detonation trigger to specific country codes. But that doesn’t erase the fact that there are explosives in the device.

[–] kibiz0r@midwest.social 66 points 2 months ago

Arguably one of the most important groups to hear from if we’re gonna find the right balance between freedom to create and freedom from harm.
