this post was submitted on 25 Feb 2024
186 points (90.8% liked)

Technology


Google to pause Gemini AI image generation after refusing to show White people

Google will pause the image generation feature of its artificial intelligence model, Gemini, after the model refused to show images of White people when prompted.

all 36 comments
[–] JoMiran@lemmy.ml 74 points 9 months ago (2 children)

The black and Asian Third Reich soldiers were pretty funny though.

[–] PullUpCircuit@iusearchlinux.fyi 6 points 8 months ago

I didn't see that. I got Oops All Asians when generating gay furry Nazis on Bing.

To answer further questions, it was in response to a post about how Russians are said to view Ukrainians.

[–] philodendron@lemdro.id 3 points 8 months ago

Agreed lol. I'm surprised they didn't put more effort into stopping people from drawing Nazis.

[–] j4k3@lemmy.world 37 points 9 months ago (2 children)

So what. It means they overtrained, deployed, and had to choose between reverting to a model with known issues or training a new model. They probably tried a temporary fix with a LoRA and it failed, so they have to wait on the next big version to finish training, and those runs can take weeks even on massive data-center-class hardware.

People don't seem to have any fundamental understanding of AI here. It is all static tensor math. There is no persistence or learning inside the model. Any illusion of persistence is due to the loader code that turns your text into math tokens. That is just standard code.
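As a rough illustration of the "static tensor math" point, here is a minimal sketch of a stateless generation loop using the Hugging Face transformers API (the checkpoint name and settings are placeholders, not anything Google runs): the weights are loaded once and never change, and the only thing that varies between requests is the token sequence the loader code builds from your text.

```python
# Hedged sketch: a frozen model plus a stateless loader loop.
# Assumes the Hugging Face transformers / torch APIs; "gpt2" is just a
# stand-in checkpoint, not the model discussed in the thread.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # weights are fixed; nothing here learns at inference time

def reply(prompt: str) -> str:
    # The "loader code": turn text into token IDs (the "math tokens").
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():  # no gradients, no weight updates, no persistence
        out = model.generate(ids, max_new_tokens=40)
    return tokenizer.decode(out[0], skip_special_tokens=True)

# Any apparent "memory" between turns exists only because the caller
# re-sends the whole conversation inside the prompt each time.
print(reply("The chessboard was set up with"))
```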

There is no fundamental difference between an offline AI and a proprietary one like Gemini; one's loader code is just data mining while the other's is not. Training has a sweet spot. If too much John Oliver is added, everything will generate as John Oliver, like absolutely everything.

[–] Virulent@reddthat.com 45 points 9 months ago (1 children)

No, the problem is that they filter prompts and inject new parameters into prompts specifically to avoid creating white subjects. It's so bad that, when asked to generate a chessboard, Gemini would only make one with black pieces.

[–] j4k3@lemmy.world 5 points 9 months ago

That would not have caused them to go offline. Modifying a hash table takes zero downtime; likewise, a LoRA layer takes no downtime. The only reason to go completely offline is that they need to filter the base dataset and retrain from scratch. It means the error is so intertwined across so many neural layers that a simple extra filter layer is unable to address it.

The neural network is like a giant multi-dimensional cloud, something like a 3D point cloud but with far more than three dimensions. Everything in the cloud is a vector relationship. If the undesirable behavior follows some easily traversed path that the neural connections gravitate toward, a simple modification, like a slice across that cloud, can alter that path ever so slightly to make it less easily traversed. This is roughly what a LoRA layer does when it is tacked onto the model's math.

However, if the undesirable behavior is due to something like all roads leading to the center of a giant metropolis, no slice across that cloud can subtly alter all of the neural paths without impacting adjacent data. It is all approximated floating-point math where every concept and generation parameter is interrelated. Things like bunny rabbit and Playboy playmate are stored in the same tables. If you try to make all bunny rabbits black, you are also altering all playmates, simply because there is a minor relationship between these concepts and they therefore share a vector space inside some tensor tables. There is a very big difference between how the initial table values are created across all layers and how a modified layer works. When things go really bad, the only option is to retrain the whole thing from scratch.
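For readers unfamiliar with the LoRA technique referenced above, here is a minimal NumPy sketch of the general idea (sizes and names are illustrative, not Gemini's internals): the base weight matrix stays frozen, and a small low-rank product is added on top, which is why such a layer can be bolted on or removed without retraining the base model.

```python
# Hedged sketch of a LoRA-style low-rank adapter in plain NumPy.
# Names and sizes are illustrative, not Gemini's actual internals.
import numpy as np

d, r = 8, 2                            # layer width and (small) adapter rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))        # frozen base weights: never touched
A = rng.standard_normal((r, d)) * 0.1  # low-rank adapter factors; in real LoRA
B = rng.standard_normal((d, r)) * 0.1  # training, B starts at zero (a no-op)

def layer(x, scale=1.0):
    # Base behaviour plus a small low-rank correction: W @ x + scale * B @ (A @ x)
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d)
print(np.allclose(layer(x, scale=0.0), W @ x))  # adapter off -> exact base model
print(np.abs(layer(x) - W @ x).max())           # adapter on  -> only a small nudge
```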

[–] slacktoid@lemmy.ml 18 points 9 months ago

There's no such thing as too much John Oliver. This guy doesn't know what they're talking about.

[–] andrewrgross@slrpnk.net 18 points 9 months ago* (last edited 9 months ago) (3 children)

I think the interesting thing about this is that these LLMs are essentially like children: they don't have the benefit of years and years of social training to learn our complex set of unspoken rules and exceptions.

Race consciousness is such an ever-present element of our social interactions, and many of us have been habituated not to really notice it. So it's totally understandable to me that LLMs reproduce our highly contradictory set of rules imperfectly.

To be honest, I think that if we can set aside our tendency to understandably avoid these discussions because they're usually instigated by racist trolls, there are some weird and often unexamined social tendencies we can interrogate.

I think it's helpful to remind ourselves frequently that race is real like gender, but not like sex. Race exists because when people encountered new cultures, they invented a pseudoscience to create the concept of whiteness.

Whiteness makes no sense. Who is white is highly subjective, and it's always been associated with the dominant mainstream culture, of which whiteness claims ownership. This means that you either buy into the racist falsehood that white culture is interchangeable with the default culture, or you conclude that it has no culture at all. Whiteness really exists only in opposition to perceived racial inferiority. Fundamentally, that's all "white" means. It's a weird, anachronistic euphemism for "not racially inferior".

There are plenty of issues with our racial construction of blackness and the quality of being Asian and East Asian and Desi and Indigenous and Latin, but none are quite as fucked up, imo, as the fact that we as a culture continue to try to use the concept of "Whiteness" as a non-racist construction. In my thinking, it can be a useful tool for studying the past and studying an unhealthy set of attitudes we're still learning to unlearn. But it's not possible to reform the concept, because it's fundamentally constructed upon beliefs we're trying to discard. If you replace every use of "white" with "not one of the lesser races", then I think you get a better understanding of why it's never going to stop causing problems as long as we try to use it in a non-racist way.

Today, people who were told growing up to view themselves as "white" now feel a frankly understandable sense of grievance and cultural alienation, because we've begun acting more consistently and recognizing that there's really no benign version of white pride, but we never bothered to teach people to stop thinking of anyone as "white", or taught the people who identify as white to find pride in an actual culture. Midwestern is a culture. Irish is a culture. New Englander is a culture. White has never been a culture. But if we don't ever acknowledge that the entire concept's only value is as a tool for understanding racism, it's inevitable that a computer repeating our own attitudes back to us is going to look dumb, inconsistent, and either racially biased for or against white people.

[–] Prunebutt@slrpnk.net 14 points 9 months ago (2 children)

I think the interesting thing about this is that these LLMs are essentially like children

Naw, dog. LLMs are nothing like children. A child has an inaccurate model of the world in their head. I can explain things to them and they'll update their beliefs and understanding.

LLMs don't understand. Period.

[–] andrewrgross@slrpnk.net -1 points 9 months ago (4 children)

I think this rigid thinking is unhelpful.

I think this presentation -- which at 10 months old is already quite dated! -- does a good job examining these questions in a credible and credulous manner:

Sparks of AGI: Early Experiments with GPT4 (presentation) (text)

I fully recognize that there is a great deal of pseudomystical chicanery that a lot of people are applying to LLMs' ability to perform cognition. But I think there is also a great deal of pseudomystical chicanery underlying mainstream attitudes towards human cognition.

People point to these and say, 'They're not thinking! They're just making up words, and they're good enough at relating words to symbolic concepts that they credibly imitate understanding concepts! It's just a trick.' And I wonder: why are they so sure that we're not just doing the same trick?

[–] huginn@feddit.it 6 points 9 months ago (1 children)

I can't take that guy seriously. 16 minutes in he's saying the model is learning while also saying it's entirely frozen.

It's not learning, it's outputting different data that was always encoded in the model because of different inputs.

If you taught a human how to make a cake, and they recited the recipe back to you and then went and made a cake, that human demonstrably learned how to make a cake.

If the LLM recited it back to you it's because it either contained enough context in its window to still have the entire recipe and then ran it through the equivalent of "summarize this - layers" OR it had the entire cake recipe encoded already.

No learning, no growth, no understanding.

The argument about reasoning is also absurd. LLMs have not been shown to have any emergent properties. Capabilities progress linearly with parameter count. This is great in the sense that scaling model size means scaling functionality, but it is also directly indicative that "reason" is nothing more than having sufficient coverage of concepts to create models.

And of course LLMs have models: the entire point of an LLM is to be an encoding of language. Pattern matching the inputs to the correct model improves as model coverage improves; that's not unexpected, novel, or even interesting.

What happens as an LLM grows in size is that decreasingly credulous humans are taken in by anthropomorphic bias and fooled by very elaborate statistics.

I want to point out that the entire talk there is self-described as non-quantitative. Quantitative analysis of GPT4 shows it abjectly failing at comparatively simple abstract reasoning tests, one of the things he claims it does well. Getting 33% on a test that the average human scores above 90% on is a damn bad showing, barely above random chance.

LLMs are not intelligent, they're complex.

But even in their greatest complexity they entirely fail to come within striking distance of even animal intelligence, much less human.

Do you comprehend how complex your mind is?

There are hundreds of neurotransmitters in your brain, 20 billion neocortical neurons, and an average of 7,000 connections per neuron. A naive complexity of 2.8e16 combinations. Each thought tweaks those ~7,000 connections as it passes from neuron to neuron. The same thought can bounce between neurons, and each time the signal reaches the same neuron it is changed by the previous path, by how long it has been since that neuron last fired, and by connections strengthened or weakened by other firings.

If you compare parameter complexity to neural complexity, that puts the average, humdrum human mind at 20,000x the complexity of a model that cost billions to train and make... which is also static, only changed manually when they get into trouble or find better optimizations.

And it's still deeply flawed and incapable of most tasks. It's just very good at convincing you with generalizations.

[–] andrewrgross@slrpnk.net 2 points 9 months ago* (last edited 9 months ago) (1 children)

I agree with your factual assessments.

The points on which I think it makes sense to remain open minded are these:

  1. The question we're examining is not whether current LLMs, or any LLM by itself, is sentient, but whether they're a step towards it. I think we need to be humble, because the end point of AGI is not something we can claim to understand at this stage. We can make very reasonable assessments, like the ones you're making, about what these models specifically can't do by themselves. But could an LLM constitute a potential module within an AGI, for instance? If a future system combined an LLM with a mechanism for self-examination and self-guided retraining, what might be the product? I think these are reasonable ideas to consider.

  2. I really think we need to recognize the subjectivity at play here and formulate our inquiry around what functions it can perform, without getting sidetracked into its internal state. We can never know if any machine can experience love. But we can assess whether a machine can convince a human that it loves them. If a machine were to create a work of art that humans found beautiful and innovative, we can't know if the machine is able to appreciate beauty, but we can infer that it's achieved a certain level of capability which we associate with artistry when demonstrated by humans. This is an issue that arises when discussing art made by elephants. Are elephant painters truly creative, or just experimenting with the tools? I think that's an unproductive question to ask. I think we need to benchmark primarily based on overall performance regardless of internal states, because of point three:

  3. I think we're comparing these systems to humans based on misconceptions of how sentient humans really are. Humans do many things which appear more intentional or motivated than we know them truly to be, based on cognitive neuroscience. What we know about humans is based on our individual experiences within our own minds and observations of the performance of others. And this is remarkably biased toward overestimating the depth of our own faculties. We grossly overestimate how much we think before we talk, for instance. And we cannot measure or prove a human's ability to feel love any more than we can for a machine. We know these things exist because we can experience them, and others have the persuasive ability to convince us that they experience them as well. But epistemologically, how do we define our experience of pain as essentially different from a machine which reports a diagnostic that it is damaged?

Ultimately, I agree with you on the broad strokes. I agree about the state of the current technology. I disagree with some of your certainty of the future of this technology, and the ways in which we assess it.

[–] huginn@feddit.it 3 points 9 months ago

Working through a response on mobile so it's a bit chunked. I'll answer each point in series but it may take a bit.

  1. That's not really what the video above claims. The presenter explicitly states that he believes GPT4 is intelligent, and that increasing the size of the LLM will make it true AGI. My entire premise here is not that an LLM is useless but that AGI is still entirely fantastical. Could an LLM conceivably be some building block of AGI? Sure, just like it could conceivably not be a building block of AGI. The "humble" position here is keeping AGI out of the picture, because we have no idea what that is or how to get there, while we do know exactly what an LLM is and how it works. At its core an LLM is a complex dictionary. It is a queryable encoding of all the data that was passed through it.

Can that model be tweaked and tuned and updated? Sure. But there's no reason to think that it demonstrates any capability out of the ordinary for "queryable encoded data", and plenty of questions as to why natural language would be the queryable encoding of choice for an artificial intelligence. Your brain doesn't encode your thoughts in English, or whatever language your internal thoughts use if you're ESL+, language is a specific function of the brain. That's why damage to language centers in the brain can render people illiterate or mute without affecting any other capacities.

I firmly believe that LLMs as a component of broader AGI are certainly worth exploring, just like any of the other hundreds of forms of generative models or specialized "AI" tools: but that's not the language used to talk about it. The overwhelming majority of online discourse is AI maximalist, delusional claims about the impending singularity or endless claims of job loss and full replacement of customer support with ChatGPT.

Having professionally worked with GitHub Copilot for months now, I can confidently say that it's useful for the tasks that any competent programmer can do, as long as you babysit it. Beyond that, any programmer who can do the more complex work that an LLM can't will need to understand the basics that an LLM generates in order to grasp the advanced. Generally it's faster for me to just write things myself than it is for Copilot to generate responses. The use cases I've found where it actually saves any time are:

  1. Generating documentation (has at least 1 error in every javadoc comment that you have to fix but is mostly correct). Trying documentation first and code generated from it never worked well enough to be worth doing.

  2. Filling out else cases or other branches of unit test code. Once you've written a pattern for one test it stamps out the permutations fairly well. Still usually has issues.

  3. Inserting logging statements. I basically never have to tweak these, except prompting for more detail by writing a ,

This all is expected behavior for a model that has been trained on all examples of code patterns that have ever been uploaded online. It has general patterns and does a good job taking the input and adapting it to look like the training data.

But that's all it does. Fed more training data, it does a better job of distinguishing patterns, but it doesn't change its core role or competencies: it takes an input and tries to make its pattern match other examples of similar text.

[–] Prunebutt@slrpnk.net 6 points 9 months ago

This way of thinking is accurate. And hyping LLMs to be a precursor to AGI is actually the unhelpful thing, IMHO.

I recommend you look a bit at the work Emily M. Bender is doing. She's a computational linguist and doesn't have much good to say about the "Sparks of AGI" paper.

why are they so sure that we’re not just doing the same trick?

Because: even if we don't know what makes up consciousness, we DO know a fair bit about how language works. And LLMs can mimic form, but lack any semblance of intentionality. Again, Emily M. Bender can summarize this better than I could.

[–] PipedLinkBot@feddit.rocks 2 points 9 months ago

Here is an alternative Piped link(s):

presentation

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] DarkThoughts@fedia.io 1 points 9 months ago

Sorry, but you're giving LLMs way too much credit. All they do is very crude guesswork based on patterns and pattern recognition, with a bunch of "randomness" added in to at least make it feel somewhat natural. But if you spend any length of time chatting with one, the magic wears off very quickly.

But at least you didn't call them "AI".

[–] slacktoid@lemmy.ml -5 points 9 months ago (1 children)

Do you know what word2vec is and how those vectors are generated?

[–] Prunebutt@slrpnk.net 3 points 9 months ago (1 children)
[–] slacktoid@lemmy.ml -4 points 9 months ago (1 children)
[–] Prunebutt@slrpnk.net 1 points 8 months ago (1 children)
[–] slacktoid@lemmy.ml 0 points 8 months ago (1 children)

Well, you don't know how these models work then. Bye.

[–] Prunebutt@slrpnk.net 0 points 8 months ago (1 children)

Because I refuse to answer your rude question? k, lol ^^

[–] slacktoid@lemmy.ml 1 points 8 months ago* (last edited 8 months ago) (1 children)

I assumed your answer was no, you do not know what word2vec is or how those vectors are generated. And since word2vec is a fundamental building block of LLMs, you don't know how those models work.
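For context, word2vec is a technique that learns a dense vector for each word from the contexts that word appears in, so that words used in similar contexts end up with nearby vectors. A toy sketch using the gensim library (tiny corpus and illustrative parameters only; real models train on billions of tokens) might look like this:

```python
# Hedged sketch of word2vec-style embeddings with gensim.
# The corpus and parameters are toy-sized and purely illustrative.
from gensim.models import Word2Vec

corpus = [
    ["the", "white", "king", "moved", "the", "pawn"],
    ["the", "black", "queen", "took", "the", "knight"],
    ["a", "rabbit", "hopped", "across", "the", "board"],
]

model = Word2Vec(sentences=corpus, vector_size=16, window=2, min_count=1, epochs=50)

vec = model.wv["king"]                # each word becomes a dense float vector
print(vec.shape)                      # (16,)
print(model.wv.most_similar("king"))  # nearby vectors = similar usage contexts
```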

[–] Prunebutt@slrpnk.net 1 points 8 months ago* (last edited 8 months ago) (1 children)

"No" as in: I wont answer your question which is trying to put me in some kind of "gotcha" situation, regardless of how fluent I am in AI concepts.

Edit (addendum): We both know that regardless of what my answer was, you had already made up your mind and thought I was full of crap. Why ask that question at all, if not to dunk on me whether I said yes or no?

[–] slacktoid@lemmy.ml 2 points 8 months ago (1 children)

It's only a gotcha question if you don't know what it is. I mean, you're talking about something so authoritatively.

I was ready for a genuine conversation until "What's your point", which told me you weren't going to be open to one. And I'm too tired for this shit.

I see we are on the same side on a lot of things overall, and I don't want some fringe thing to be a point of contention. Hope to see you around. Keep it sleazy.

[–] Prunebutt@slrpnk.net 1 points 8 months ago (1 children)

You have to be very aware that it's hard to gauge your intent when posting. That's why emojis and tone tags were invented. The way you asked that rather specific question, which I couldn't immediately connect to anything that was said (although it was obviously a technical term), made me cautious of someone trying to dunk on me. So I chose not to engage until you'd made your intentions clear. Hence: "What's your point?"

I still think this is a legitimate reaction to a question whose connection to the conversation isn't straightforward.

[–] slacktoid@lemmy.ml 1 points 8 months ago

I see no value in dunking on people, but I wasn't memeing either. I had a direct question to ask, and I don't know what the appropriate emoji would have been. Additionally, emojis get read differently by different people, recreating the same issues as text. Communication is hard; I just choose not to assume people are here to dunk on me (at least on Lemmy). I want people (especially more labor-minded people) not to talk about AI in a callous "plagiarism machine" way. There are issues with it, and it's here to stay.

I mean, you can check people's post and comment history fairly easily and gauge how they interact.

[–] fah_Q@lemmy.ca 12 points 9 months ago (1 children)

None as fucked up? *Laughs in Indian "caste system"*

[–] andrewrgross@slrpnk.net 9 points 9 months ago

That's LEGIT.

I'm new to learning about caste discrimination, and every time I see it come up in the news I'm just gobsmacked. It seems very messed up.

[–] SkyNTP@lemmy.ml 4 points 9 months ago (3 children)

Here's an idea: what if the intent of the prompt had nothing to do with race, and it was simply prompting artistic expression, no different than prompting hair, shirt, or sky colour?

Whiteness makes no sense. Who is white is highly subjective.

Skin tone can be measured pretty objectively. We have colour standards for describing and reproducing colours with a degree of accuracy that is sufficient for practical purposes. The label "white" itself is quite non-specific. But the entire point of the AI is to fill in the blanks anyway, to generate content from non-specific prompts. I don't agree that trainers can't generate some consensus about the typical colour values for "white" skin tone. "I know it when I see it."

Society has an absurd and unhealthy obsession with race and all that baggage.

[–] andrewrgross@slrpnk.net 1 points 9 months ago* (last edited 9 months ago)

I think you're wildly missing the point.

When someone asks to see a "white family", they are not asking for a family with skin of a certain shade. They're asking for an image in which our pattern recognition identifies, from their clothes, posture, hairstyle, and facial features, that they look like people who could appear in a soap ad in the 1950s. That they look like people who feel totally welcome in their society. That they live a certain lifestyle. Simply changing color is the crux of the problem. Koreans look pretty white in skin color, but they have other facial features that communicate that their parents or ancestors farther back left the land of their birth and traveled to the US, likely after 1900. Additionally, based on their dress, some people might look at an image of a family with a Korean dad and say, "Great, that's a white family", while others would say, "Why did the model generate this? I asked for a white family."

There's a world of context that our current racial terminology can't capture because it's not suited to our modern understanding of culture.

[–] Prunebutt@slrpnk.net 0 points 8 months ago* (last edited 8 months ago)

Skin tone can be measured pretty objectively.

Yeah, so can your skull shape.

So-called "races" are social constructs, which have only tangential overlap with measurable reality.

"White people" is the European social construct of the "default" human being. The "absence" of race.

That's why "racism against white people" doesn't exist.

[–] KingThrillgore@lemmy.ml 13 points 9 months ago

I can see the reddit data is working out

[–] Luisp@lemmy.dbzer0.com 9 points 9 months ago

Turns out racism is systemic, who could have known?

[–] dtjones@lemmy.world 3 points 8 months ago

The guy who leads this group is extremely vocal (almost weirdly so) about white privilege and systemic racism. He is also white. It's true that many AI models have a white bias. The reasons for this are multi-faceted. Our datasets are grossly imbalanced against racial minorities. I also think I understand that for some darker-skinned people, it is more difficult for the model to extract relevant features from the shitty Flickr photos they scrape for these models.

That said, injecting words into the user's prompt to force the model to generate minorities more often is an extremely naive approach. Kind of like if Google added "reddit" to all searches just because it worked for some specific test cases, while ignoring that you'd now no longer get any site except Reddit. The solution here probably looks like paying a lot of money for high-quality datasets, as well as investing in user education and more AI explainability for these tools.
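To make the prompt-injection point concrete, here is a deliberately naive sketch of the kind of context-blind rewrite being described (hypothetical names; nothing here reflects Google's actual pipeline). The problem is that a blanket rule fires even when the prompt has nothing to do with people, which is how a chessboard prompt ends up mangled:

```python
# Deliberately naive sketch of blanket prompt rewriting.
# Hypothetical names; not Google's actual pipeline.
DIVERSITY_SUFFIX = ", depicting a diverse range of ethnicities"

def rewrite_prompt(user_prompt: str) -> str:
    # Applied unconditionally, with no check for whether the subject is human.
    return user_prompt + DIVERSITY_SUFFIX

print(rewrite_prompt("a photo of a chessboard set up for a new game"))
# The image model now has to reconcile constraints that make no sense for
# the subject, which is roughly how the failures in the article happen.
```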