keegomatic

joined 1 year ago
[–] keegomatic@lemmy.world 2 points 1 day ago

That’s a very interesting idea. It might also incentivize creators, because it gives them a more stable audience that’s at least a little insulated from viewership swings on any single platform caused by changes outside their control.

[–] keegomatic@lemmy.world 2 points 2 weeks ago

Right there with you on that

[–] keegomatic@lemmy.world 1 point 2 weeks ago (2 children)

It’s even worse on Threads, believe it or not.

[–] keegomatic@lemmy.world 9 points 2 weeks ago (4 children)

X sucks, but Threads is even worse. 99% of everything I have ever seen on Threads is pure distilled engagement bait, and half the time expanding replies gets stuck loading. I wish I were exaggerating, but I’m not.

[–] keegomatic@lemmy.world 7 points 1 month ago

Keep up the good work

[–] keegomatic@lemmy.world 5 points 1 month ago (2 children)

That’s great. The history communities on the other site were such high quality on average, and I miss them. How do you have time to do all that?

[–] keegomatic@lemmy.world 6 points 1 month ago (5 children)

Oh shit, you mean like AskHistorians? Is there enough density now for that?

[–] keegomatic@lemmy.world 1 points 1 month ago

Personally, I’ve found that LLMs are best as discussion partners, to put it in the broadest terms possible. They do well for things you would use a human discussion partner for IRL.

  • “I’ve written this thing. Criticize it as if you were the recipient/judge of that thing. How could it be improved?” (Then address its criticisms in your thing… it’s surprisingly good at revealing ways to make your “thing” better, in my experience)
  • “I have this personal problem.” (Tell it to keep responses short. Have a natural conversation with it. This is best done spoken out loud if you are using ChatGPT; it prevents you from overthinking responses and forces you to keep the conversation moving. It takes fifteen minutes or more, but you will end up with some good advice related to your situation nearly every time. I’ve used this to work out several things internally much better than I could by just thinking on my own. A therapist would be better, but this is surprisingly good.)
  • I’ve also found it useful, for various reasons, to tell it to play a character as I describe, and then speak to the character in a pretend scenario to work something out. Use your imagination for how this might be helpful to you. In this case, tell it not to ask you so many questions, and to only ask a question when the character would truly want to. That helps keep it more normal; otherwise (in the case of ChatGPT, which I’m most familiar with) it will always end every response with a question. Often that’s useful, like in the previous example, but here it is not. (There’s a rough sketch of this setup right after this list.)
  • etc.
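
To make that third one concrete, here’s a rough sketch of the “play a character, limit the questions” setup written against the OpenAI Python SDK rather than the ChatGPT app; the persona text, model name, and helper function are just placeholders I made up for illustration, not anything specific I use.

```python
# Rough sketch only: the "play a character" pattern scripted against the
# OpenAI Python SDK (v1.x). Persona text and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

persona = (
    "You are playing a character: a skeptical hiring manager hearing my pitch. "
    "Stay in character and keep responses short. Do not end every response "
    "with a question; only ask one when the character genuinely would."
)

messages = [{"role": "system", "content": persona}]

def turn(user_text: str) -> str:
    """Send one conversational turn and return the character's reply."""
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(turn("Here's my pitch: we should rewrite the billing service in Rust."))
```

The system prompt is the important part: without the line about questions, every reply tends to end in one.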

For anything but criticism of something written, I find that the “spoken conversation” features are most useful. I use it a lot in the car during my commute.

For what it’s worth, in case this makes it sound like I’m a writer and my examples are all writing-related: I’m not a writer, I’m a software engineer. The first example can apply to writing an application or a proposal or whatever. The second is basically just therapy. The third is more abstract, and often about indirect self-improvement. There are plenty more things a discussion partner is good for, though. I’m sure anyone reading can come up with a few themselves.

[–] keegomatic@lemmy.world 1 point 1 month ago (2 children)

May I ask how you’ve used LLMs so far? I ask because I hear that type of complaint from a lot of people who have tried to use them mainly to get answers to things, or more broadly to replace their search engine, which in my opinion is not what they’re best suited for.

[–] keegomatic@lemmy.world 1 point 2 months ago

I never said I thought training AI with the copyrighted work of others causes harm to others. If anything, I think training is analogous enough to human learning that it’s a gray area. However, I think there are different ethical concerns with AI training data than there are with piracy, and those concerns mostly arise from the profit being made from the models.

[–] keegomatic@lemmy.world 4 points 2 months ago (2 children)

It’s not hypocritical if you believe that theft is wrong because it hurts another person, rather than wrong because you don’t deserve the thing or because it gives you an unfair advantage. Your argument leans heavily on the latter, but mine leans on the former.

[–] keegomatic@lemmy.world 7 points 2 months ago (4 children)

That’s not quite true, though, is it?

$50 earned is yours to spend on anything. A $50 discount is offered by a vendor to entice you to spend enough of your money on them to make the discount worthwhile.

Pirates don’t pirate because they’re trying to save money on something they would have bought otherwise… typically they pirate because the amount they consume would bankrupt them if they purchased it through legitimate means, so they would never have been a paying customer in the first place.

So, if they wouldn’t have bought it anyway, and they’re not reselling it, did they really harm the vendor? The vendor ends up with the same revenue whether they pirated it or not.

That’s not really the same thing, in my opinion.

If you were able to pay for everything handily but pirated anyway, or if you resold pirated content, then yeah you have something similar to theft going on. But that’s not really the norm; those people are doing something bad irrespective of the piracy itself, aren’t they?
