this post was submitted on 22 Jul 2024
232 points (97.9% liked)

top 21 comments
[–] Asafum@feddit.nl 98 points 4 months ago (3 children)

I swear the only legitimate response to a corporation saying "I'm going to self-regulate"

Is:

"HAHAHAHAHAHAHAHA HAHAHAHAHAHAHA!!! Ok anyway, here are the regulations we're going to enforce."

[–] Alphane_Moon@lemmy.world 27 points 4 months ago

The fact that this is not taken as a given says a lot about how deeply ingrained corruption is in our society.

I would almost argue that even "neutral" newswires like AP/Reuters should use language like "Companies A, B, and C have created a common PR organisation that will be focused on self-regulation polemics ...".

[–] pyre@lemmy.world 6 points 4 months ago

yeah how is this even allowed, promise or no promise? why can't individuals do this? "hey, no need to write any laws on poisoning your boss, promise i will self-regulate."

[–] vithigar@lemmy.ca 3 points 4 months ago

Any entity or group that can be trusted to self-regulate wouldn't raise the question about whether they need to be regulated in the first place.

[–] Alwaysnownevernotme@lemmy.world 38 points 4 months ago (1 children)

Awww they promised?

That's fucking adorable.

The government is supposed to be the triggerman, not a supplicant.

[–] homesweethomeMrL@lemmy.world 5 points 4 months ago

But but but . . . They have the money

Also, and more probably, no one in a decision-making capacity in The Government knows how AI works and really doesn’t want to find out.

[–] henfredemars@infosec.pub 29 points 4 months ago (1 children)

Self-regulation? Yeah, we've got it, right next to trickle-down economics.

[–] 0x0@programming.dev 2 points 4 months ago

I prefer piñata economics myself.

[–] umbrella@lemmy.ml 28 points 4 months ago

wait are we still falling for this age-old con?

[–] hoshikarakitaridia@lemmy.world 11 points 4 months ago

It's like a 6-year-old saying "I can stop eating candy! I'll do it!" And after 5 seconds they eat it anyway.

[–] laxe@lemmy.world 11 points 4 months ago (2 children)

Boeing also self-regulated their safety.

[–] explodicle@sh.itjust.works 4 points 4 months ago

Does it count as self-regulation if they just buy regulations to keep out competition?

[–] FiniteBanjo@lemmy.today 3 points 4 months ago

Yes, you're correct, but technically the FAA regulated Boeing. The problem was that the FAA was desperate for qualified workers, and all of its hires had worked for Boeing.

[–] randon31415@lemmy.world 9 points 4 months ago

We will self-regulate if you give us a guaranteed monopoly!

White House: But open source exists.

Well then, no regulation.

[–] Paragone@lemmy.world 5 points 4 months ago

Conflict-of-interest is the root of all corruption.

[–] mox@lemmy.sdf.org 5 points 4 months ago
[–] Sp00kyB00k@lemmy.world 2 points 4 months ago

Like other SROs (self-regulatory organizations) in the financial system. That worked out fine.

[–] autotldr@lemmings.world 2 points 4 months ago

This is the best summary I could come up with:


“We’re grateful for the progress leading companies have made toward fulfilling their voluntary commitments in addition to what is required by the executive order,” says Robyn Patterson, a spokesperson for the White House.

Without comprehensive federal legislation, the best the US can do right now is to demand that companies follow through on these voluntary commitments, says Brandie Nonnecke, the director of the CITRIS Policy Lab at UC Berkeley.

After they signed the commitments, Anthropic, Google, Microsoft, and OpenAI founded the Frontier Model Forum, a nonprofit that aims to facilitate discussions and actions on AI safety and responsibility.

“The natural question is: Does [the technical fix] meaningfully make progress and address the underlying social concerns that motivate why we want to know whether content is machine generated or not?” he adds.

In the past year, the company has pushed out research on deception, jailbreaking, strategies to mitigate discrimination, and emergent capabilities such as models’ ability to tamper with their own code or engage in persuasion.

Meanwhile, Microsoft has used satellite imagery and AI to improve responses to wildfires in Maui and map climate-vulnerable populations, which helps researchers expose risks such as food insecurity, forced migration, and disease.


The original article contains 3,300 words, the summary contains 197 words. Saved 94%. I'm a bot and I'm open source!

[–] j4k3@lemmy.world -4 points 4 months ago (1 children)

Any authoritarian government intervention in AI is a joke. The US has a greatly diluted sense of relevance. AI research is international, and restrictions will only make the USA irrelevant. Even the names you think of as American are actually funding advancements coming from other countries, most of them in Germany and Asia. I've read several white papers on current AI research recently, and none were from US-based schools or academics. The USA is simply not as central as the imperial nonsense narrative assumes. AI is a vital military technology. Limiting it through Luddite isolationism will massively reduce the experience pool and push the research further into places where fresh economic growth is happening without the regressive stagnation and corruption of toxic wealth consolidation.

[–] 0x0@programming.dev 3 points 4 months ago (1 children)

You're kinda missing the point... the big American names are claiming AI should be limited because it's so dangerous.

Who should control AI? Them.

[–] j4k3@lemmy.world 1 points 4 months ago

Only proprietary AI pushes for regulatory measures in an attempt at monopoly. That boils down to one man's corruption.

Control is impossible. The technology is open source. Transformers are to the present what Apache is to the internet. There were several proprietary attempts to monopolize web servers too, all of them failed because of the open source alternative.

When one of the old engineers from Bell Labs starts pushing some new open source thing, I pay close attention. The digital age is built on technology with those credentials.

Yann LeCun is the head of Meta AI and a primary driver of open-source AI. He operates independently of Zuck. Meta AI is not trying to monopolize AI; they are attempting to lead it, not monopolize or control it.

The authoritarian idea of control is a fallacy. This is not something that can be controlled like that. People don't seem to realize the ultimate scope yet. AI is on par with the entire internet in how it will change society long term.

In a lot of ways, present AI is like the early days of the microprocessor and personal computers. Most people couldn't really see the potential uses of computers with an Apple II running a 6502 variant. It was barely more than a tech novelty, and the chip itself was pretty useless until all the peripherals were developed around it.

At present, AI is kinda messy in the public sphere. All of the tools available publicly are rough examples only; there are a lot more capabilities that are not present in the libraries most people are using in code. This is the disconnect between publicly facing tools and why corporations appear to make foolish decisions: they are being approached by the AI companies directly for integrated solutions. Tangent aside, the current usefulness of AI may look limited to people that cannot see the bigger picture, or smaller if you will.

At its core, AI can reliably add a new logic element to Boolean math operations: a contextual logic element with flexibility. No matter the issues at present, this is new math, and there are countless unexplored places to make use of it. Asking who should control this is like telling the world about division and then asking who should control division. Any attempt to do so is draconian nonsense; it is a fundamentally bad argument. Division is a phenomenon of the universe.

The large language model is an enormous statistical math problem involving word vectors and ranked tensors. It has packaged human language and culture into a deterministic math problem. Some humans struggle to understand basic division. Most humans struggle to understand vectors and ranked tensors, and of those that understand the latter, very few understand the math behind large language models. None of this changes the fundamental issue: this is just a math problem that has been publicly shared.

The large corporate models do not mean very much. They are fighting to be the best generalist, and they all strive to add the same types of safety alignment, but this means nothing. Anyone can do the math and make their own model. The real danger is not generalist models, it's specialists, and specialists do not need the enormous datasets.
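To make the "enormous statistical math problem" claim concrete, here is a toy sketch in PyTorch. The vocabulary, dimensions, and random vectors are all invented for illustration; this is not any real model's code, just the core operation stripped down:

```python
import torch

# Toy illustration only: a made-up five-token vocabulary, not a real model.
vocab = ["the", "machines", "self", "regulate", "math"]
d_model = 8

torch.manual_seed(0)
embeddings = torch.randn(len(vocab), d_model)  # one word vector per token

# Stand-in for the transformer's output state after reading the context.
hidden_state = torch.randn(d_model)

# Logits: similarity between the context state and every word vector.
logits = embeddings @ hidden_state

# Softmax turns the logits into a probability distribution; "generation"
# is nothing more than drawing the next token from this distribution.
probs = torch.softmax(logits, dim=-1)
for token, p in zip(vocab, probs):
    print(f"{token}: {p.item():.3f}")
```

Everything a large model does reduces to operations of this shape, repeated at enormous scale.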

The fantasy danger of the machines is nothing more than a modern emergence of a Greek pantheon mythos. At present, "safety" is more about populist stupidity in politics, public ignorance, and creating a way for the average person to interact with a tool they would otherwise write off as too hard to understand if they saw the real complexity. It has been massively dumbed down as a compromise. For instance, I can split up a model and talk to the various underlying entities that underpin alignment and create the various patterns you see in prompt replies. I'm working on understanding the sampler techniques that control this behavior and more. This is done with PyTorch in the model loader code and is not related to softmax settings like temperature and token cutoffs.
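For context, the "softmax settings" mentioned here are the knobs most users actually touch. Below is a minimal, illustrative sketch of how temperature and a top-k token cutoff act on a model's raw logits; the function name, defaults, and fake vocabulary size are assumptions for the example, not the loader code described above:

```python
import torch

def sample_next_token(logits: torch.Tensor, temperature: float = 0.8, top_k: int = 50) -> int:
    """Illustrative sampler: temperature scaling plus a top-k cutoff.
    `logits` holds the model's raw score for every vocabulary token."""
    # Temperature < 1 sharpens the distribution; > 1 flattens it.
    scaled = logits / temperature

    # Top-k cutoff: mask out everything below the k highest-scoring tokens.
    topk_vals, topk_idx = torch.topk(scaled, k=min(top_k, scaled.numel()))
    masked = torch.full_like(scaled, float("-inf"))
    masked[topk_idx] = topk_vals

    # Softmax over the surviving tokens, then draw one sample.
    probs = torch.softmax(masked, dim=-1)
    return int(torch.multinomial(probs, num_samples=1))

# Example with a fake 10-token vocabulary.
next_id = sample_next_token(torch.randn(10))
```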

Anyway, if AI were taken down by someone essentially turning off the internet, I would not be affected and neither would millions of other people. I can run, code, and train AI completely independently. It is just a complex math problem. Controlling math is as draconian as thought policing.

In the absolute sense, AGI is still a good way off. Present AI is not persistent; it has no memory or ability to change dynamically yet. All of those apparent features are done in regular code and fed back into the model with each new prompt.
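To illustrate that last point, here is a hypothetical chat wrapper; the `generate` function is a stand-in for any stateless local model call, not a real API. The apparent memory is plain code replaying the transcript each turn:

```python
# The model itself is stateless; chat "memory" is ordinary code that
# rebuilds the whole transcript and feeds it back with every new prompt.

def generate(prompt: str) -> str:
    # Hypothetical stand-in: imagine a stateless LLM completing `prompt`.
    return "(model output)"

history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The entire "memory" is this string: the conversation so far.
    prompt = "\n".join(history) + "\nAssistant:"
    reply = generate(prompt)
    history.append(f"Assistant: {reply}")
    return reply

chat("Do you remember what I said earlier?")  # only if it's in `history`
```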

In the end, AI requires risk mitigation policies. It is not controllable like that.