this post was submitted on 05 May 2026
682 points (99.3% liked)

Technology

[–] codexarcanum@lemmy.dbzer0.com 38 points 2 days ago (2 children)

I used to work for one of the nation's largest survey marketplaces. Y'all have no idea how deep this hole goes.

Surveys/polls are largely requested by political polling groups, research teams, and ad agencies. They put those up on an auction block just like ads, and then we would route traffic into them from various places. Mostly the survey takers come from mobile games (take this 3-question survey for 20 Blorp Points kind of stuff) or survey-taker apps that give you points for gift cards and such.

So even before bots, most polls are taken by "professional" survey takers who use banks of phones to maximize their point earnings. We spent a lot of energy on "proving" to the survey provider side that real humans were answering, and not using scripts or bots to just rapid finish them (answer B to everything kind of stuff). Using sophisticated bots to randomly answer was super common.
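A toy version of the rapid-finish checks described above might look something like this (the thresholds and field names are made up for illustration, not how any real survey platform does it):

```python
# Toy heuristic for flagging low-effort survey submissions, in the spirit of
# the fraud checks described above. Thresholds are illustrative only.

def flag_suspicious(answers, seconds_taken, min_seconds_per_question=2.0):
    """Return a list of reasons this submission looks scripted, if any."""
    reasons = []
    # "Answer B to everything": every response identical (straight-lining).
    if len(answers) > 2 and len(set(answers)) == 1:
        reasons.append("straight-lining")
    # Rapid finish: completed faster than a human could plausibly read.
    if seconds_taken < min_seconds_per_question * len(answers):
        reasons.append("speeding")
    return reasons

# A bot answering "B" to ten questions in five seconds trips both checks.
print(flag_suspicious(["B"] * 10, seconds_taken=5))
```

Of course, the "sophisticated" bots answered randomly at human-ish speeds, which is exactly why checks this simple stopped being enough.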

They were super ready for AI. We talked about it every day, game-planned how it would work, designed systems around it. "Synthetic survey" was the buzzword. Why ask humans for answers if the statistics machine can convincingly predict the answer for you? We proposed ideas like generating the prediction fast and early, then using actual polls to adjust the result towards reality over time. We had tools to track people and connect their spending to poll questions so we could ask follow-up questions on purchases, to provide "lift" metrics to agencies on whether their ads were working. We were working on the "verification can" tech, only it would have been "Answer this 10-question survey to continue watching your movie."

I was so glad to leave that place. They got bought and consolidated into the world's largest survey company a year later, and they fired everyone who was left. All they wanted was the tech and the customers.

[–] StopTech@lemmy.today 11 points 2 days ago

Thanks for the inside view. I never knew a lot of the survey takers came from mobile games or that spending could be tracked.

Survey Monkey?

Now just get AI pollsters and AI readers and leave us alone.

[–] Darkard@lemmy.world 54 points 2 days ago
[–] Flower@sh.itjust.works 169 points 2 days ago (29 children)

Just one more thing losing contact with reality.

[–] scarabic@lemmy.world 12 points 1 day ago* (last edited 1 day ago) (1 children)

Ah yes, “synthetic users.” This is being pushed at my job as well. We’re supposed to use AI to design the next feature for our website, then ask AI “users” what they think of it.

That’s not our entire vetting process - it’s supposed to replace someone just writing down an idea and saying “I think this is good.” And I agree that just firing from the hip like that is dumb. We want our product managers to do more research into their ideas before they get greenlit to be built.

The question is whether AI “synthetic users” add anything of value. The team that put this tool into service noted it has a “positivity bias,” aka “you’re absolutely right!” So we feed it an idea we think is good, and it says oh yes it’s very good.

It’s read every customer email we’ve ever received and every user research report ever conducted by our human UX researchers. But it’s still just not that useful. I think AI is very useful for summarization, searching, and collation of information, but this goes beyond that, asking AI to imagine it is a person and then come up with things to say about an entirely novel concept. And AI is not good at that.

[–] Buddahriffic@lemmy.world 1 points 1 day ago

You might as well just put all those emails into a hat and pull out random ones. Or maybe categorize them first and pick from the hats your feature falls under.

Try this: ask the AI how useful it is to ask an AI for "synthetic user feedback," and it will probably tell you itself why this particular task is a bad fit for an LLM. I tried it with Haiku; you might need a follow-up question pointing out that experience and implementation specifics matter but won't be in the context window. After that it gave an in-depth explanation of why this approach is a waste of resources. Using an AI to summarize the problem areas users want addressed can work; it just can't tell you how you did.

[–] sudo@programming.dev 74 points 2 days ago (3 children)

“The idea behind silicon sampling is simple and tantalizing,” they write. “Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use AI agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.”

Somebody invested money into this company. And there are at least hundreds, maybe thousands, of other businesses with these asinine ideas about how to use AI. They're all getting capital from someone who's supposed to be smart because they have capital. Remember that when LLM providers cost-correct their token prices.
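For what it's worth, the "silicon sampling" being quoted reduces to something about this trivial. `ask_model` here is a hypothetical stand-in for whatever chat-completion API a vendor wires in, and the personas are made up:

```python
# Minimal sketch of "silicon sampling": prompt an LLM with a demographic
# persona and record its answer as if it were a survey respondent.
# ask_model is a hypothetical stand-in for any chat-completion API call.

def build_prompt(persona, question, options):
    return (
        f"You are answering a survey as: {persona}.\n"
        f"Question: {question}\n"
        f"Options: {', '.join(options)}\n"
        "Reply with exactly one option."
    )

def silicon_sample(personas, question, options, ask_model):
    """Collect one simulated answer per persona."""
    return [ask_model(build_prompt(p, question, options)) for p in personas]

# With a stub model, every "respondent" parrots the same canned answer,
# which is roughly the critique: you get the model's priors back, not data.
answers = silicon_sample(
    ["35-year-old teacher in Ohio", "retired farmer in Iowa"],
    "Do you approve of the new policy?",
    ["Approve", "Disapprove"],
    ask_model=lambda prompt: "Approve",
)
print(answers)
```

However many personas you loop over, the "sample" never contains more information than the model already had.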

[–] Maggoty@lemmy.world 16 points 2 days ago (1 children)

We really need to kick this idea that rich people are smarter. The vast majority were born on third base and think they hit a home run.

[–] BarneyPiccolo@lemmy.cafe 9 points 2 days ago* (last edited 2 days ago)

I inherited my grandfather's robber baron fortune, I'm a GENIUS!

[–] jtrek@startrek.website 30 points 2 days ago

The whole venture capitalism thing is bullshit. It's just vibes and rich man hubris.

[–] StopTech@lemmy.today 9 points 2 days ago
[–] AmbitiousProcess@piefed.social 37 points 2 days ago (1 children)

"I should base my surveys about human behavior solely on responses from non-human machines" said... someone, apparently? Damn. 💀

[–] deft@lemmy.wtf 20 points 2 days ago

It's very funny they even bother to do that. Why not just lie from the beginning? Why bother building a bullshit pyramid everyone can smell??

[–] nondescripthandle@lemmy.dbzer0.com 67 points 2 days ago* (last edited 2 days ago)

Humans barely answer surveys as if they were human; this is some real derivative shit.

[–] SnarkoPolo@lemmy.world 8 points 1 day ago

"According to the latest polls, Americans favor Republican policies UNANIMOUSLY!"

[–] DarrinBrunner@lemmy.world 9 points 2 days ago (1 children)

Just as useless as product reviews on Amazon.

[–] betanumerus@lemmy.ca 5 points 1 day ago

Sounds like something the GOP and O&G industry would do, with all their bots commenting horse dumpings all the time.

[–] AnUnusualRelic@lemmy.world 15 points 2 days ago (1 children)

But people could have replied that!

[–] Tja@programming.dev 3 points 2 days ago

Every single answer was indistinguishable from that of a human!

[–] wampus@lemmy.ca 24 points 2 days ago (1 children)

Seeing as the vast majority of "polls" I've seen in the last 5-6 years have been "this was a poll done online, so we can't assign any certainty or margin of error, because we have no idea who actually responded and it could've been just, like, two dickheads with bots spamming nonsense, but the results were clickbaity enough for us to run a story," I don't see how them cutting out the two-dickhead middlemen and just using their own bots is really that much different.

[–] BarneyPiccolo@lemmy.cafe 9 points 2 days ago (2 children)

The only math class I ever enjoyed was a college statistics class, which actually made sense to me. So I spent my life reading polls, always checking the sample size and the margin of error, because I knew how important those are to accuracy. But that same knowledge also served to let me know that modern polls are becoming horseshit.
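For anyone who skipped that stats class, the margin-of-error number being checked is just this (a back-of-envelope version, assuming a simple random sample, worst-case p = 0.5, and 95% confidence):

```python
import math

# Back-of-envelope margin of error for a simple random sample.
# z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the worst case.

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# A classic n=1000 poll gives roughly +/- 3.1 percentage points.
print(round(margin_of_error(1000) * 100, 1))  # 3.1
```

The catch, and the point of the complaint above, is that the formula is only meaningful for a random sample. An online opt-in poll full of bots has no defensible n, so the number is decoration.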

I remember hearing Rush Limbaugh telling his listeners to either refuse to take polls, or to lie on them and say the opposite. He, and others, taught MAGAs to disrespect polls (cuz polls are the enemy of predatory politics).

Also, many of the "pollsters" are MAGA operatives in disguise. Add unreliable pollsters to unreliable respondents, and you end up with a weird poll that doesn't reflect reality at all.

I no longer enjoy tracking polls. Too many of them have been gamed.

[–] mrmisses@lemmy.world 39 points 2 days ago (1 children)

Republicans are going to love this

[–] givesomefucks@lemmy.world 21 points 2 days ago (4 children)

First off, I got a chuckle from the bot check.

The story quoted new poll findings by a company called Aaru, representing them as research based on the feedback of American adults. But according to an editor’s note, the piece had to be “updated to note that Aaru is an AI simulation research firm.”

In other words, Axios had failed to disclose that it was citing alleged "polling data" that wasn't drawn from human respondents at all. Instead, it was dreamed up by a large language model, just the latest sign of every imaginable industry trying to leverage AI, even when doing so makes absolutely no sense.

This was/is a problem, but giving up on stats because bad stats exist is like refusing to ever eat food again because someone got you to try a sardine and spinach chocolate cupcake one time.

In fact, the first, last, and most often brought up topic in graduate-level statistical analysis isn't getting numbers; that's easy. The hard part is finding the flaws in the numbers, even the flaws in your own that prove you wrong.

The vast majority of people never learn that, or never learn that bad stats have been a problem as long as stats have existed. Even making it through peer review doesn't always mean anything.

Like, every single time an article links to a study, do the due diligence and click through: see what's going on, what the numbers really say, and search who funds them.

It's not like you'll even know what to look for at first, but if you never try you'll never improve.


Every LLM is a fascist propaganda machine

[–] Corkyskog@sh.itjust.works 10 points 2 days ago (1 children)

This is hilarious. And when it inevitably doesn't work, they will have a human tweak the statistics of the "AI". The irony of it all.

[–] whaleross@lemmy.world 12 points 2 days ago

This is so stupid it is hilarious. I guess the first question to the customer is what they want to know, and the second question is what they want as a result.

[–] 9point6@lemmy.world 9 points 2 days ago (2 children)

Polling has been getting steadily worse for decades now.

Their accuracy died with the monoculture, and no amount of multilevel regression and post-stratification seems to compensate for that.
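For context, the post-stratification half of MRP boils down to reweighting group-level estimates by known population shares instead of trusting the raw sample mix. A toy version, with made-up groups and numbers:

```python
# Post-stratification, reduced to its core: reweight per-group estimates
# by the groups' true shares of the population. Numbers are illustrative.

def poststratify(estimates, population_shares):
    """Weighted average of group-level estimates by true population shares."""
    assert abs(sum(population_shares.values()) - 1.0) < 1e-9
    return sum(estimates[g] * population_shares[g] for g in estimates)

# The sample over-represents group "a"; weighting corrects the topline.
estimates = {"a": 0.60, "b": 0.40}        # measured support within each group
population_shares = {"a": 0.3, "b": 0.7}  # true shares in the population
print(round(poststratify(estimates, population_shares), 2))  # 0.46
```

The weakness is that the reweighting can only fix who you counted, not what they told you; if the within-group estimates themselves are junk (bots, liars, non-responders), no amount of weighting recovers the truth.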
