this post was submitted on 27 Jan 2024
89 points (91.6% liked)

Fediverse

28490 readers
485 users here now

A community to talk about the Fediverse and all it's related services using ActivityPub (Mastodon, Lemmy, KBin, etc).

If you wanted to get help with moderating your own community then head over to !moderators@lemmy.world!

Rules

Learn more at these websites: Join The Fediverse Wiki, Fediverse.info, Wikipedia Page, The Federation Info (Stats), FediDB (Stats), Sub Rehab (Reddit Migration), Search Lemmy

founded 2 years ago
MODERATORS
 

I have an idea. I can't tell if it's good or bad. Let me know what you guys think.

I think when someone posts "clone credit cards HMU for my telegram I know you're just here sitting here waiting like gee I wish someone would post me criminal scammy get rich quick schemes, I can't want to have a felony on my record" type spam, there should be a bot the mods can activate that will start sending messages to the person's telegram or whatever, pretending to be interested in cloned credit cards.

It wouldn't be that hard to make one that would send a little "probe" message to make sure it was a for-real scammer, and then if they respond positively, then absolutely flood them with thousands of interested responses. Make it more or less impossible for them to sort the genuine responses from the counter-spam, waste their time, make it not worth their while to come and fuck up our community. And if they lose their temper it can save some of the messages and post them to some sort of wall of victory.

What do people think?

you are viewing a single comment's thread
view the rest of the comments
[–] CapillaryUpgrade@lemmy.sdf.org 16 points 10 months ago (18 children)

Hook it up with ChatGPT and you are golden!

[–] haui_lemmy@lemmy.giftedmc.com 8 points 10 months ago (17 children)

I just immediately thought the same. No way would they be able to distinguish that from a real person.

[–] 0x4E4F@lemmy.dbzer0.com 5 points 10 months ago (3 children)

You sure? If it's another bot at the other end, yeah, but a real person, you recognize ChatGPT in 2 sentences.

[–] CrayonRosary@lemmy.world 11 points 10 months ago* (last edited 10 months ago) (1 children)

You can preface a ChatGPT session with instructions on what length and verbosity you want as replies. Tell it to roleplay or speak in short text message like replies. Or hell, speak in haikus. It's pretty clever for an LLM.

And if someone's writing code to make a bot, they can privately coach the LLM before they start forwarding any replies between the real person.

[–] 0x4E4F@lemmy.dbzer0.com 3 points 10 months ago (1 children)

If you train and condition the LLM, yeah. Out of the box, no.

[–] Deebster@programming.dev 5 points 10 months ago* (last edited 10 months ago) (1 children)

No, you don't need to train it, it's just about the prompt you feed it. You can (and should) add quite a lot of instructions and context to your questions (prompts) to get the best out of it.

"Prompt engineer" is a job/skill for this reason.

[–] intensely_human@lemm.ee 2 points 9 months ago (1 children)

My default instruction that seems to get just about the right tone includes:

Speak to me like you’re my executive assistant, and we’re in a brief meeting we’ve had daily for many years

So instead of me saying

Is there any way to get mayonnaise out of a jar without using my hands

Instead of

It’s fun and rewarding to get mayonnaise out of jar without using your hands. [blah blah blah blog post article sales pitch blah blah 400 words blah]

Instead I get:

  • Kick the jar
  • Use your long proboscis-like tongue
  • Hire someone
[–] Deebster@programming.dev 2 points 9 months ago (1 children)

It's weird how well making it roleplay works. A lot of the "breaks" of the system have been just by telling it to act in a different way, and the newest, best versions have various experts simulated that combine to give the best answer.

[–] intensely_human@lemm.ee 2 points 9 months ago

My favorite psychology professor is always harping on how theatrical representation is a really important step in the development of consciousness. Makes me think of that. He says that stories allow the mind to organize large amounts of information because they inherently contain the most valuable pieces of information, so they’re more efficient than like dictionaries or arrays. He didn’t use the data structure terminology but that’s what it reminded me of when he mentioned it. The story is the most efficient data structure for the human brain. Something like that.

[–] poweruser@lemmy.sdf.org 4 points 10 months ago (2 children)

I was going to disagree with you by using AI to generate my response, but the generated response was easily recognizable as non-human. You may be onto something lol

[–] Mirodir@discuss.tchncs.de 1 points 10 months ago (1 children)

Yeah, I've noticed that too—there's a distinct 'AI vibe' that comes through in the generated responses, even if it's subtle.

[–] Mirodir@discuss.tchncs.de 1 points 10 months ago

That was a response I got from ChatGPT with the following prompt:

Please write a one sentence answer someone would write on a forum in a response to the following two posts:
post 1: "You sure? If it’s another bot at the other end, yeah, but a real person, you recognize ChatGPT in 2 sentences."
post 2: "I was going to disagree with you by using AI to generate my response, but the generated response was easily recognizable as non-human. You may be onto something lol"

It's does indeed have an AI vibe, but I've seen scammers fall for more obvious pranks than this one, so I think it'd be good enough. I hope it fooled at least a minority of people for a second or made them do a double take.

[–] 0x4E4F@lemmy.dbzer0.com 1 points 10 months ago* (last edited 10 months ago)

Short replies and sentences is the way to go with LLMs. They get too polite if you leave them to their devices. It's in their "nature", they're designed to please.

[–] kakes@sh.itjust.works 2 points 10 months ago (1 children)

Nah, not really! I've chatted with people using ChatGPT, and most couldn't tell. It's pretty slick, blends in well with natural conversation.

[–] 0x4E4F@lemmy.dbzer0.com 3 points 10 months ago* (last edited 10 months ago) (1 children)

Most... you're talking about the average Joe. People that write spam bots are not your average Joe.

Plus, if you're talking about a chat with multiple people, yes, it might stay under the radar. But 1 on 1, probably not.

[–] kakes@sh.itjust.works 2 points 10 months ago (1 children)

Well, fair point about the spam bot creators, but in my experience, even in one-on-one chats, it holds up. I've had some pretty smooth conversations without anyone suspecting it's AI.

[–] 0x4E4F@lemmy.dbzer0.com 2 points 10 months ago (1 children)

Do you have some logs? Would like to have a look at that.

[–] kakes@sh.itjust.works 3 points 10 months ago (1 children)

This conversation is a small example. My previous messages in this comment chain were generated by ChatGPT.

I'm too lazy to keep that up indefinitely, but at this point you can decide for yourself whether it was convincing enough.

[–] 0x4E4F@lemmy.dbzer0.com 2 points 10 months ago

OK, fair enough, I gues it can be used with proper prompting for answers.

load more comments (13 replies)
load more comments (13 replies)