company called CiviClick, which bills itself as “the first and best AI-powered grassroots advocacy platform.”
I have been saying this since 2016, when we were dealing with both Cambridge Analytica and Correct the Record flooding the internet with paid political speech masquerading as real people with real opinions who weren't being paid to spout nonsense.
Paid political speech online, whether posted by a human or a bot, should legally be required to disclose that it is paid promotion. There should be hefty penalties: large fines for single instances (one person, one message) up to prison time for an organized group (something akin to RICO). The fines and prison time should be even more severe when AI-generated messages are fraudulently promoted as coming from real humans, simply because of the industrial speed and scale AI generation allows.
Paid political advertising on television and radio has long been required to disclose that it is paid. This should have been priority number one for the Democrats when Biden took office and they held slim majorities in both houses.
Sure, there's nothing we can do about foreign bot farms, but that's not what this article is about. This is about a US company based in our nation's capital whose business is abusing public comment processes: flooding an agency's open comment period and killing a proposition with messages that come not from real people at all, but from AI.
The fact that getting this under control, at the very least within our own borders, is not a priority for any politicians is a fucking travesty and makes our entire democratic apparatus an outright farce.

This was happening before AI, with less sophisticated tools often called "persona management" software, which let one person control numerous bots using pre-written scripts that could be called up as the situation demanded. The only difference AI has made is the speed and scale at which the same can be done, and that the output is more convincingly not all culled from the same script.
https://www.axios.com/2017/12/15/bots-flooded-the-fcc-with-comments-about-net-neutrality-1513307159
Here's an article about a flood of bot comments submitted during an FCC open comment period on net neutrality in 2017, five years before OpenAI released ChatGPT. So this was definitely going on before the AI tools as they now exist were available. It's a quantitative difference, not a qualitative one; in other words, it's the same thing at a larger scale due to the speed of AI.