LibertyLizard

joined 1 year ago
[–] LibertyLizard@slrpnk.net 12 points 8 months ago* (last edited 8 months ago) (2 children)

There are a lot of potential explanations. In essence, they built a model to classify brain features as male or female, then tested its predictions against their own label of male or female for each brain. So this result could stem from problems with the model’s predictions, or just as easily from their “correct” labeling of each brain as male or female.
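(For the curious, here’s a minimal sketch of that kind of pipeline, using scikit-learn with made-up stand-in data. The article doesn’t say what features, labels, or model the study actually used, so everything below is illustrative.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))    # 1,000 hypothetical "brains", 50 features each
y = rng.integers(0, 2, size=1000)  # 0/1 sex labels, however the study defined them

# Hold out a test set so accuracy measures generalization, not memorization.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
# A "miss" here can mean the model misread the brain, or that the label
# itself failed to capture the person's sex or gender.
```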

So a big question is: how did they define male and female? By genetics? By reproductive anatomy? By self-reported identity? This information was not in the article. All of these things are very likely correlated with things happening in the brain, but probably not perfectly. It’s worth noting that many definitions of sex do not consider gender identity at all; if such a definition were used, then a trans man might be labeled female in their data, whether they have reckoned with their identity or not.

[–] LibertyLizard@slrpnk.net 36 points 8 months ago* (last edited 8 months ago) (5 children)

I have a suspicion that this is exactly what’s going on here, and it may be why past studies found no differences. AI is much better than humans at quickly synthesizing complex patterns into coherent categories.

Also, 90% accuracy is not that good, all things considered. The brain is almost certainly a complex mix of features that defy black-and-white categorization.
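To put a rough number on that (my own illustrative arithmetic, not figures from the study):

```python
# What 90% accuracy implies at scale; n is a made-up sample size.
n = 1000
acc = 0.90
print(round(n * (1 - acc)))  # -> 100 brains sorted into the "wrong" category

# Accuracy alone can also flatter a model: on a 90/10 imbalanced sample,
# always guessing the majority class scores 90% too.
```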

Hopefully we will be wise enough not to require trans people to prove their trans-ness scientifically. People have a right to do what they wish with their bodies and to express their gender in a way that feels right to them, and they should not be required to match some artificial physical diagnosis of what it means to be trans. Even if it turns out that most trans people do share certain brain structures or patterns, there will always be exceptions, and that doesn’t mean we get to label someone’s identity as inauthentic.

[–] LibertyLizard@slrpnk.net 2 points 9 months ago

Agreed. Also, while it’s impossible to say in any individual case, I suspect people might be more likely to drive while inebriated if they believe the autopilot will be driving for them.

[–] LibertyLizard@slrpnk.net 2 points 9 months ago* (last edited 9 months ago) (1 children)

So I hear what you’re saying: what we really want to measure is deaths avoided versus deaths caused. But it’s difficult to measure how many people the technology has saved, so while I’m cognizant of this issue, I’m not sure how to get around it. That said, the article mentions that Tesla drivers are experiencing much higher collision rates than drivers of other manufacturers’ cars. There could be multiple factors at play here, but I suspect the autopilot (and especially Tesla’s misleading claims around it) is among them.

Also, while there may be an unmeasured benefit in reduced collisions, there may also be an unmeasured cost in induced driving: when driving gets easier, people tend to drive more. This hasn’t been widely discussed in the debate, but I think it’s a big problem with self-driving technology, and one that only gets worse as the technology improves.

[–] LibertyLizard@slrpnk.net 45 points 9 months ago (1 children)

Won’t someone please think of the shareholders?

[–] LibertyLizard@slrpnk.net 15 points 9 months ago

You’re right; I was conflating the two. However, I suspect there are more cases than just this one, given Tesla’s dishonesty and secrecy.

[–] LibertyLizard@slrpnk.net 9 points 9 months ago (3 children)

Tesla’s secrecy around its safety data makes it hard to do a robust analysis but here’s a decent overview: https://www.washingtonpost.com/technology/2023/06/10/tesla-autopilot-crashes-elon-musk/

[–] LibertyLizard@slrpnk.net 4 points 9 months ago

Sure, that’s what I was referring to. But I’m realizing not everyone is as aware of the whole story here.

[–] LibertyLizard@slrpnk.net 62 points 9 months ago* (last edited 9 months ago) (14 children)

Teslas are already directly dangerous to Musk’s customers, but our society is numb to traffic violence, so people don’t care as much as they should. And “full self-driving” has already killed people.

Edit: removed “a lot” because while I suspect it is true, it remains unproven.

[–] LibertyLizard@slrpnk.net 2 points 9 months ago

Haha, fair. I meant the second one.

[–] LibertyLizard@slrpnk.net 4 points 9 months ago (2 children)

Don’t be evil more than 90% of the time.

[–] LibertyLizard@slrpnk.net 8 points 9 months ago (1 children)

Considering that OpenAI was originally a non-profit with a stated goal of making benevolent and safe AI, I think it’s worth noting how far they’ve fallen from that mission. They were supposed to chart a different course from purely for-profit orgs, but of course the for-profit arm has taken over like a tumor.
