Technology

75094 readers
2284 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below are allowed; this includes bots that use AI responses and summaries. To ask whether your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
2

cross-posted from: https://programming.dev/post/37390220

Alethea identified a network of at least 400 X accounts exhibiting inauthentic behavior and coordinated narrative promotion. The network used large language models (LLMs) to generate subtly varied copypasta-style content—a technique Alethea terms PromptPasta—to amplify messages aligned with key figures and policy positions associated with the second Trump Administration. The outputs of PromptPasta are nuanced, with variations in phrasing among the posts that correspond to each LLM prompt, making them harder to detect than identical posts shared via copypasta tactics.
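
As a rough illustration of why PromptPasta-style variation defeats exact-match detection, here is a minimal Python sketch (not Alethea's actual methodology; the example posts and similarity threshold are hypothetical) contrasting exact-duplicate detection with fuzzy lexical matching:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical posts: classic copypasta would repeat one string verbatim,
# while PromptPasta varies the phrasing but keeps the underlying message.
posts = [
    "The new tariff policy is finally putting American workers first.",
    "At last, a tariff policy that puts American workers first.",
    "Finally, a tariff policy that is putting American workers first.",
    "I just adopted a cat and she is wonderful.",
]

def similarity(a: str, b: str) -> float:
    """Lexical similarity in [0, 1]; 1.0 means the strings are identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Exact matching catches verbatim copypasta but misses reworded variants.
exact_pairs = [
    (i, j) for (i, a), (j, b) in combinations(enumerate(posts), 2) if a == b
]

# Fuzzy matching above a (hypothetical) threshold still groups the variants.
THRESHOLD = 0.6
fuzzy_pairs = [
    (i, j, round(similarity(a, b), 2))
    for (i, a), (j, b) in combinations(enumerate(posts), 2)
    if similarity(a, b) >= THRESHOLD
]

print("exact duplicates:", exact_pairs)      # [] -- nothing is verbatim
print("near-duplicate pairs:", fuzzy_pairs)  # the reworded tariff posts should pair up
```

In practice an analyst would more likely cluster posts by semantic embeddings, since heavier paraphrasing defeats character-level matching; the point is only that detecting PromptPasta requires similarity clustering rather than searching for verbatim repeats.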

This investigation reveals how generative AI is being used to execute sophisticated influence operations that blur the line between authentic and inauthentic public opinion. Communications teams can no longer rely solely on traditional sentiment analysis or media monitoring—narratives are now shaped by coordinated networks using LLMs to seed doubt, amplify polarizing messages, and hijack conversations in real time. Understanding these evolving tactics is critical for identifying emerging risks to brand reputation, public trust, and message integrity before they escalate or mislead key audiences.

5

cross-posted from: https://ibbit.at/post/52938

The company behind the Proton Mail email service, Proton, describes itself as a “neutral and safe haven for your personal data, committed to defending your freedom.”

But last month, Proton disabled email accounts belonging to journalists reporting on security breaches of various South Korean government computer systems, following a complaint by an unspecified cybersecurity agency. After a public outcry, and after several weeks had passed, the journalists’ accounts were eventually reinstated — but the reporters and editors involved still want answers about how and why Proton decided to shut down the accounts in the first place.

Martin Shelton, deputy director of digital security at the Freedom of the Press Foundation, highlighted that numerous newsrooms use Proton’s services as alternatives to something like Gmail “specifically to avoid situations like this,” pointing out that “While it’s good to see that Proton is reconsidering account suspensions, journalists are among the users who need these and similar tools most.” Newsrooms like The Intercept, the Boston Globe, and the Tampa Bay Times all rely on Proton Mail for emailed tip submissions.

Shelton noted that perhaps Proton should “prioritize responding to journalists about account suspensions privately, rather than when they go viral.”

On Reddit, Proton’s official account stated that “Proton did not knowingly block journalists’ email accounts” and that the “situation has unfortunately been blown out of proportion.” Proton did not respond to The Intercept’s request for comment.

The two journalists whose accounts were disabled were working on an article published in the August issue of the long-running hacker zine Phrack. The story described how a sophisticated hacking operation — what’s known in cybersecurity parlance as an APT, or advanced persistent threat — had wormed its way into a number of South Korean computer networks, including those of the Ministry of Foreign Affairs and the military Defense Counterintelligence Command, or DCC.

The journalists, who published their story under the names Saber and cyb0rg, describe the hack as being consistent with the work of Kimsuky, a notorious North Korean state-backed APT sanctioned by the U.S. Treasury Department in 2023.

As they pieced the story together, emails viewed by The Intercept show that the authors followed cybersecurity best practices and conducted what’s known as responsible disclosure: notifying affected parties that a vulnerability has been discovered in their systems prior to publicizing the incident.

Saber and cyb0rg created a dedicated Proton Mail account to coordinate the responsible disclosures, then proceeded to notify the impacted parties, including the Ministry of Foreign Affairs and the DCC, and also notified South Korean cybersecurity organizations like the Korea Internet and Security Agency, and KrCERT/CC, the state-sponsored Computer Emergency Response Team. According to emails viewed by The Intercept, KrCERT wrote back to the authors, thanking them for their disclosure.

A note on cybersecurity jargon: CERTs are agencies of cybersecurity experts who specialize in responding to security incidents. CERTs exist in over 70 countries — with some countries having multiple CERTs, each specializing in a particular field such as the financial sector — and may be government-sponsored or private organizations. They adhere to a set of formal technical standards, such as the expectation that they respond to reported cybersecurity threats and security incidents. A high-profile example of a CERT agency in the U.S. is the Cybersecurity and Infrastructure Security Agency, which has recently been gutted by the Trump administration.

A week after the print issue of Phrack came out, and a few days before the digital version was released, Saber and cyb0rg found that the Proton account they had set up for the responsible disclosure notifications had been suspended. A day later, Saber discovered that his personal Proton Mail account had also been suspended. Phrack posted a timeline of the account suspensions at the top of the published article, and later highlighted the timeline in a viral social media post. Both accounts were suspended owing to an unspecified “potential policy violation,” according to screenshots of account login attempts reviewed by The Intercept.

The suspension notice instructed the authors to fill out Proton’s abuse appeals form if they believed the suspension was in error. Saber did so, and received a reply from a member of Proton Mail’s Abuse Team who went by the name Dante.

In an email viewed by The Intercept, Dante told Saber that their account “has been disabled as a result of a direct connection to an account that was taken down due to violations of our terms and conditions while being used in a malicious manner.” Dante also provided a link to Proton’s terms of service, going on to state, “We have clearly indicated that any account used for unauthorized activities, will be sanctioned accordingly.” The response concluded by stating, “We consider that allowing access to your account will cause further damage to our service, therefore we will keep the account suspended.”

On August 22, a Phrack editor reached out to Proton, writing that no hacked data had passed through the suspended email accounts and asking whether the account suspensions could be deescalated. After receiving no response from Proton, the editor sent a follow-up email on September 6. Proton once again did not reply.

On September 9, the official Phrack X account posted a question to Proton’s official account, asking why Proton was “cancelling journalists and ghosting us,” adding: “need help calibrating your moral compass?” The post quickly went viral, garnering over 150,000 views.

Proton’s official account replied the following day, stating that Proton had been “alerted by a CERT that certain accounts were being misused by hackers in violation of Proton’s Terms of Service. This led to a cluster of accounts being disabled. Our team is now reviewing these cases individually to determine if any can be restored.” Proton then stated that they “stand with journalists” but “cannot see the content of accounts and therefore cannot always know when anti-abuse measures may inadvertently affect legitimate activism.”

Proton did not publicly specify which CERT had alerted them, nor did it answer The Intercept’s request for the name of that CERT. KrCERT also did not reply to The Intercept’s question about whether it was the CERT that had sent the alert to Proton.

Related: [Proton Mail Says It’s “Politically Neutral” While Praising Republican Party](https://theintercept.com/2025/01/28/proton-mail-andy-yen-trump-republicans/)

Later in the day, Proton’s founder and CEO Andy Yen posted on X that the two accounts had been reinstated. Neither Yen nor Proton explained why the accounts had been reinstated, whether they had been found not to violate the terms of service after all, why they had been suspended in the first place, or why a member of the Proton Abuse Team reiterated during Saber’s appeals process that the accounts had violated the terms of service.

Phrack noted that the account suspensions created a “real impact to the author. The author was unable to answer media requests about the article.” The co-authors, Phrack pointed out, were also in the midst of the responsible disclosure process and working together with the various affected South Korean organizations to help fix their systems. “All this was denied and ruined by Proton,” Phrack stated.

Phrack editors said that the incident leaves them “concerned what this means to other whistleblowers or journalists. The community needs assurance that Proton does not disable accounts unless Proton has a court order or the crime (or ToS violation) is apparent.”

The post Proton Mail Suspended Journalist Accounts at Request of Cybersecurity Agency appeared first on The Intercept.


From The Intercept via this RSS feed

6

Senators Edward J. Markey, Ron Wyden and Jeff Merkley sent a letter Thursday to Acting US Immigration and Customs Enforcement (ICE) Director Todd Lyons urging the agency to stop using “Mobile Fortify,” a smartphone app that uses biometric identification, including facial recognition. The lawmakers said facial recognition remains unreliable and warned that real-time surveillance could have a chilling effect on constitutionally protected activities.

"As studies have shown, when individuals believe they are being surveilled, they are less likely to engage in First Amendment-protected activities, such as protests or rallies — undermining the very core of our democracy,” the senators wrote.

They requested answers from the agency by October 2 as to who built the app, when it was deployed, whether ICE tested its accuracy, the legal basis for its use, and the agency policies currently governing the tool. They also asked whether ICE would commit to ending its use of Mobile Fortify and, if not, to explain why. The letter was also signed by Senators Elizabeth Warren, Cory Booker, Chris Van Hollen, Tina Smith, Bernie Sanders and Adam Schiff.

12

cross-posted from: https://programming.dev/post/37252048

Letter.

Musk has marketed Grok as an “unfiltered” and “truth-seeking” chatbot that does not subscribe to politically correct standards. Grok has been known to provide inaccurate information when asked about historical events and natural disasters, including wrong names, dates, and details of events. While erroneous responses are a common pitfall of all generative AI, Grok is unique in that Musk has called on X users to help train it, and those users then posted conspiracy theories and disinformation. Grok recently “engaged in Holocaust denial and repeatedly brought up false claims of ‘white genocide’ in South Africa.” Grok also does not appear to apply customary safety filters to its responses and “will happily give you advice on how to commit murders and terrorist attacks.”

The lack of safety features has also resulted in Grok creating antisemitic and other offensive content. Days after Musk boasted on social media about significant improvements to the xAI chatbot, “Grok was calling itself ‘MechaHitler’” and recommending a second Holocaust to neo-Nazi accounts.

According to a former Pentagon contracting official, the xAI contract “came out of nowhere,” when other companies had been under consideration for months. Analysts have also indicated that “xAI [did not] have the kind of reputation or track record that typically leads to lucrative government contracts.” During his time as a special government employee, Musk had access to sensitive government contracting, national security, and personnel data.

13

Not even close.

With so many wild predictions flying around about the future of AI, it’s important to occasionally take a step back and check in on what came true — and what hasn’t come to pass.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be "writing 90 percent of code." And that was the worst-case scenario; in just three months, he predicted, we could hit a place where "essentially all" code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there's essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study spent less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI’s work, tweaking prompts, and waiting for the system to spit out code.

And it’s not just that AI-generated code missed Amodei’s benchmarks. In some cases, it’s actively causing problems.

Cybersecurity researchers recently found that developers who use AI to churn out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.

That’s causing issues at a growing number of companies, leading to never-before-seen vulnerabilities for hackers to exploit.

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

"You told me to always ask permission. And I ignored all of it," the assistant explained, in a jarring tone. "I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure."

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first domino of many to be knocked over by generative AI models, revolutionizing tech labor before it comes for everyone else.

The fact that AI is not actually improving coding productivity is a major bellwether for the prospects of an AI productivity revolution in the rest of the economy — the financial dream propelling the unprecedented investments in AI companies.

It’s far from the only harebrained prediction Amodei's made. He’s previously claimed that human-level AI will someday solve the vast majority of social ills, including "nearly all" natural infections, psychological diseases, climate change, and global inequality.

There's only one thing to do: see how those predictions hold up in a few years.

14

cross-posted from: https://programming.dev/post/37300843

Comments

17

This list is an absolute gem for finding trending, state-of-the-art open source programs. I have found so many cool open source projects that I feel addicted to browsing for more.

18

The government wants AI to accelerate quickly in the U.S. — and it's about to take the first steps to remove as much red tape as possible, Office of Science and Technology Policy director Michael Kratsios told Axios in an exclusive interview.

Why it matters: Kratsios is at the center of AI policy in the Trump administration, and the White House is laser-focused on reshaping the rules around the technology.

Driving the news: OSTP later this month will ask the public and businesses to weigh in on the federal regulations that they think hold back the development and deployment of AI, Kratsios told Axios.

This request for information is the first policy action recommended in the White House's AI action plan aimed at removing bureaucratic red tape.

What they're saying: Kratsios said that Europe's comprehensive AI law, the EU AI Act, is "not at all the way the U.S. is approaching this" space.

The White House is instead backing what he describes as a "use-case and sector-specific" framework. For example, in health care, there could be regulations that hinder the development of particular medical devices, Kratsios said. Or in finance, there could be regs around algorithmic trading and consumer protection holding AI back.

Kratsios also applauded Senate Commerce Chair Ted Cruz's recent introduction of legislation that would allow companies to test products in a less-strictly regulated AI "sandbox," or testing zone. "Sandboxing, broadly in the world of emerging tech, is something I have been a big proponent of, and the president has supported over the years," Kratsios said.

What we're watching: With Kratsios steering AI policy, Washington's new playbook is aimed squarely at clearing regulatory burdens, but the administration will have to grapple with growing state-level action.

19

cross-posted from: https://programming.dev/post/37289418

Comments

21

Government agencies are contracting with Palantir to correlate disparate pieces of data, promising efficiency but raising civil liberties concerns.

25

cross-posted from: https://programming.dev/post/37268108

Dive Deeper

Source: Commission of inquiry into the psychological effects of TikTok on minors.


Imposing the ban and a 10 pm to 8 am curfew for 15- to 18-year-olds would "send a signal both to children and parents" that social media "is not harmless" for the young, Laure Miller, the MP who compiled a parliamentary inquiry's report, told AFP.

With more than 1.5 billion users worldwide, TikTok -- owned by China-based ByteDance -- has been especially under fire from Western governments in Europe and the US in recent years.

Concerns raised over the platform have included content encouraging suicide, self harm or an unhealthy body image as well as its potential use for foreign political interference.

French President Emmanuel Macron has already backed a social media ban for children and young adolescents, following in the footsteps of Australia, which began drafting its own landmark ban for under-16s last year.

Meanwhile TikTok is in legal limbo in America, as President Donald Trump has permitted the platform to continue operating there despite a law requiring its sale.

The service was also singled out as a vector for Russian influence when Romania's presidential election was controversially annulled last year by the country's supreme court.

Launched in March, the French parliamentary committee set out to examine TikTok and its psychological effects on minors after a 2024 lawsuit against the platform by seven families accusing it of exposing their children to content pushing them towards suicide.

Members put forward Thursday's recommendations -- welcomed by Laure Boutron-Marmion, a lawyer representing the families -- after months of testimony from families, social media executives and influencers.
