Natanael

joined 2 years ago
[–] Natanael@slrpnk.net 5 points 1 year ago

It's becoming more of a thing but a lot of projects are so old that they haven't been able to fix their entire build process yet

[–] Natanael@slrpnk.net 4 points 1 year ago* (last edited 1 year ago)

If anybody's wondering whether there are more modern medicines and treatments...

Yes, but leeches are cheap and do the job just fine

[–] Natanael@slrpnk.net 9 points 1 year ago

Lots of local police departments in the US too frequently arrest people on bullshit charges when called out

[–] Natanael@slrpnk.net 5 points 1 year ago (1 children)

It can't do that silently; the user has to approve installation of root certs. This only works silently with apps that have broken (insecure) cert validation
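To illustrate what "broken cert validation" looks like in practice, here's a minimal Python sketch (my own example, not from the thread) using the standard `ssl` module. The default context verifies the certificate chain and hostname; the "broken" pattern seen in careless apps disables both, so a connection would accept any certificate, including one signed by a silently injected root:

```python
import ssl

# Proper validation: the default context verifies the cert chain
# against trusted roots AND checks the hostname matches
secure = ssl.create_default_context()
assert secure.verify_mode == ssl.CERT_REQUIRED
assert secure.check_hostname is True

# The "broken" pattern: verification disabled entirely, so ANY cert
# (including one chained to an attacker-installed root) is accepted
broken = ssl.create_default_context()
broken.check_hostname = False
broken.verify_mode = ssl.CERT_NONE
```

With the secure context, a man-in-the-middle cert fails the handshake and the user would notice; with the broken one, interception goes unnoticed, which is the silent case described above.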

[–] Natanael@slrpnk.net 32 points 1 year ago

Rookie numbers, I have 307336924 cloned repos

[–] Natanael@slrpnk.net 3 points 1 year ago (1 children)

The law says human creative expression

[–] Natanael@slrpnk.net 3 points 1 year ago (3 children)

The existing legal precedent in most places is that most use of ML doesn't count as human expression and doesn't get copyright protection. You have to have significant control over the creation of the output to hold copyright (the easiest workaround is simply manually modifying the ML output and then only releasing the modified version)

[–] Natanael@slrpnk.net 19 points 1 year ago (3 children)

Training from scratch and retraining are expensive. Also, they want to avoid training on ML outputs as samples; they want primarily human-made works, and since the initial public release of LLMs it has become harder to create large datasets without ML-generated content in them

[–] Natanael@slrpnk.net 11 points 1 year ago* (last edited 1 year ago) (1 children)

Unironically yes, sometimes. A lot of the best works its training samples are based on cite the original poster's qualifications, and this filters into the model, where asking for the right qualifications directly can influence it to rely more on high-quality input samples when generating its response.

But it's still not perfect, obviously. It doesn't make it stop hallucinating.

[–] Natanael@slrpnk.net 2 points 1 year ago

  • uses the same type of armor for females as certain other games, except with realistic defense stats instead

[–] Natanael@slrpnk.net 1 points 1 year ago* (last edited 1 year ago)

No LLM has the right architecture to implement that kind of math. They are built specifically to find patterns, even obscure ones, that nobody knows of. They could start flagging random shit indirectly associated with gender, like relative timing between jobs or rate of promotions, etc, and you wouldn't even notice it's doing it
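The proxy-variable effect described here is easy to demonstrate with a toy simulation (my own sketch; the feature names and numbers are made up for illustration). Even when the protected attribute is never shown to a model, a seemingly neutral feature that correlates with it carries most of the signal:

```python
import random
random.seed(0)

# Hypothetical toy data: the protected attribute is hidden from the model,
# but a "neutral" feature (months between jobs) correlates with it,
# e.g. one group tends to have longer career gaps
protected = [random.choice([0, 1]) for _ in range(1000)]
gap_months = [random.gauss(6 + 6 * p, 2) for p in protected]

def corr(xs, ys):
    """Pearson correlation, stdlib-only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Strong correlation: any model trained on gap_months effectively
# "sees" the protected attribute without ever being given it
r = corr(protected, gap_months)
assert r > 0.7
```

A pattern-finding model given `gap_months` as an input will pick up this correlation silently, which is exactly why you wouldn't notice it discriminating.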

[–] Natanael@slrpnk.net 1 points 1 year ago

They've just released support for running and subscribing to 3rd party labeling services, I'm sure somebody's going to make a filter for that you can subscribe to
