0x815


Archived link

  • A previously undocumented Chinese-speaking threat actor codenamed SneakyChef has been linked to an espionage campaign primarily targeting government entities across Asia and EMEA (Europe, Middle East, and Africa) with SugarGh0st malware since at least August 2023.

  • SneakyChef uses lures that are scanned documents of government agencies, most of which are related to various countries' Ministries of Foreign Affairs or embassies, according to security analysts.

[–] 0x815@feddit.de 0 points 5 months ago

I posted this elsewhere already, but it also fits here given many of the posts in this thread: It is not just about data/privacy concerns (which are underestimated imo, as China pursues its own agenda in collecting your data through Chinese tech) and 'unfair' subsidies, but about gross human rights violations.

In short, some parts of the cheap Chinese cars are made in concentration camps where people are forced to work under catastrophic conditions.

 

Archived link

Even by conservative measures, researchers say that China's subsidies for green-tech products such as battery electric vehicles and wind turbines are multiple times higher than the support granted in European Union (EU) and Organisation for Economic Co-operation and Development (OECD) countries.

The researchers conclude that the EU should use its strong bargaining power due to the single market to induce the Chinese government to abandon the most harmful subsidies.

TLDR:

  • Quantification of overall Chinese industrial subsidies is difficult due to “China-specific factors”, which include, most notably, below-market land sales, but also below-market credit to state-owned enterprises (SOEs), support through state investment funds, and other subsidies for which there are no official numbers.
  • Even when taking a conservative approach and considering only quantifiable factors, these subsidies add up to at least €221.3 billion in public support for Chinese companies, or 1.73% of GDP in 2019. Relative to GDP, public support is about three times higher in China than in France (0.55%) and about four times higher than in Germany (0.41%) or the United States (0.39%).
  • Large industrial firms such as EV maker BYD are offered disproportionately more support. These Chinese industrial firms received government support equivalent to about 4.5% of their revenues, according to a research report. By far the largest part of this support comes in the form of below-market borrowing.

Regarding electrical vehicles, the researchers write:

China’s rise to the world’s largest market and production base for battery electric vehicles has been boosted by the Chinese government’s longstanding extensive support of the industry, which includes both demand- and supply-side subsidies. Substantial purchase subsidies and tax breaks to stimulate sales of battery electric vehicles (BEV) are, of course, not unique to China but are also widespread within the EU and other Western countries, where (per vehicle) purchase subsidies have often been substantially higher than in China. A distinctive feature of purchase subsidies for BEV in China, however, is that they are paid out directly to manufacturers rather than consumers and that they are paid only for electric vehicles produced in China, thereby discriminating against imported cars.

By far the largest recipient of purchase subsidies was Chinese NEV manufacturer BYD, which in 2022 alone received purchase subsidies amounting to €1.6 billion (for about 1.4 million NEV) (Figure 4). The second largest recipient of purchase subsidies was US-headquartered Tesla, which received about €0.4 billion (for about 250,000 BEV produced in its Shanghai Gigafactory). While the ten next highest recipients of purchase subsidies are all Chinese, there are also three Sino-foreign joint ventures (the two VW joint ventures with FAW and SAIC as well as SAIC GM Wuling) among the top 20 purchase subsidy recipients.

 

Hacking group RedJuliett compromised two dozen organisations in Taiwan and elsewhere, report says.

A suspected China-backed hacking outfit has intensified attacks on organisations in Taiwan as part of Beijing’s intelligence-gathering activities on the self-governing island, a cybersecurity firm has said.

The hacking group, RedJuliett, compromised two dozen organisations between November 2023 and April of this year, likely in support of intelligence collection on Taiwan’s diplomatic relations and technological development, Recorded Future said in a report released on Monday.

RedJuliett exploited vulnerabilities in internet-facing appliances, such as firewalls and virtual private networks (VPNs), to compromise its targets, which included tech firms, government agencies and universities, the United States-based cybersecurity firm said.

RedJuliett also conducted “network reconnaissance or attempted exploitation” against more than 70 Taiwanese organisations, including multiple de facto embassies, according to the firm.

“Within Taiwan, we observed RedJuliett heavily target the technology industry, including organisations in critical technology fields. RedJuliett conducted vulnerability scanning or attempted exploitation against a semiconductor company and two Taiwanese aerospace companies that have contracts with the Taiwanese military,” Recorded Future said in its report.

“The group also targeted eight electronics manufacturers, two universities focused on technology, an industrial embedded systems company, a technology-focused research and development institute, and seven computing industry associations.”

While nearly two-thirds of the targets were in Taiwan, the group also compromised organisations elsewhere, including religious organisations in Taiwan, Hong Kong, and South Korea and a university in Djibouti.

Recorded Future said it expected Chinese state-sponsored hackers to continue targeting Taiwan for intelligence-gathering activities.

"We also anticipate that Chinese state-sponsored groups will continue to focus on conducting reconnaissance against and exploiting public-facing devices, as this has proved a successful tactic in scaling initial access against a wide range of global targets,” the cybersecurity firm said.

China’s Ministry of Foreign Affairs and its embassy in Washington, DC did not immediately respond to requests for comment.

Beijing has previously denied engaging in cyber-espionage – a practice carried out by governments worldwide – instead casting itself as a regular victim of cyberattacks.

China claims democratically ruled Taiwan as part of its territory, although the Chinese Communist Party has never exerted control over the island.

Relations between Beijing and Taipei have deteriorated as Taiwan’s ruling Democratic Progressive Party has sought to boost the island’s profile on the international stage.

On Monday, Taiwanese President William Lai Ching-te hit out at Beijing after it issued legal guidelines threatening the death penalty for those who advocate Taiwanese independence.

“I want to stress, democracy is not a crime; it’s autocracy that is the real evil,” Lai told reporters.

Lai, whom Beijing has branded a “separatist”, has said there is no need to formally declare independence for Taiwan because it is already an independent sovereign state.

 

Archived link

Machinery used to manufacture Russian armaments is being imported into Russia despite sanctions. However, to properly function, machines require components, as well as “brains” — which must also be imported. Without the manufacturer’s key, the machine cannot start, and without the software, it cannot operate. So, if imports are banned, how are these systems entering the country?

How Russia operates Western machinery

A machine is activated using an activation key, which is issued by the manufacturer after the sale and delivery of the product. Due to sanctions, Western firms cut ties with Russian clients, meaning munitions factories cannot legally obtain machinery or keys. Meanwhile, certain machines are equipped with GPS trackers, which enable manufacturers to know the location of their products. So, how can sanctions be circumvented under these conditions? One option is purchasing a machine without a GPS (or disabling it), and using the machine in, say, China, at least on paper.

An IStories journalist posing as a client contacted the Russian company Dalkos, which advertised services for supplying imported machinery on social media. A Dalkos employee explained that they make “fictitious sales” of equipment from the manufacturer to a “neighboring country”: “We provide these documents to the manufacturer. They check everything and give us feedback. They either believe us, allowing us to resolve our [Russian] customer’s problem… or they don’t believe us, and we respond that we couldn’t [buy the machine].” After the company in the “neighboring country” contacts the Western manufacturer, the latter sends the machine’s specifications, indicating whether GPS tracking is installed or not. “If we know that location tracking is installed, enabling them to see that it’s going to Russia — hence meaning we won’t be able to activate it — we’ll just tell you upfront that we can’t deliver the equipment,” the supplier explained. If everything goes smoothly, the machine along with the keys will be purchased by an intermediary company, and then Dalkos will import it into Russia and activate it at the client’s facility.

If a problem occurs with the machine’s computer system, the client should inform Dalkos, which will pass the information to the intermediary under whom the order was registered, and they will contact the manufacturer. The Russian enterprise should not seek customer support from the manufacturer directly: “You will simply compromise the legitimacy of our legal entity, which presents itself as an organization not connected to the Russian Federation in any way.”

The Dalkos website indicates that the company supplies equipment from multiple Western firms, including Schaublin, DMG MORI, and Kovosvit MAS. According to customs data from 2023, Dalkos received goods worth 188 million rubles ($2,120,000) from Estonia through the Tallinn-based company SPE (coincidentally belonging to the co-owners of Dalkos, Alexander Pushkov and Konstantin Kalinov) — with a UAE company acting as the intermediary party. The imported goods included components produced by the German machine tool manufacturer Trumpf.

The Dalkos employee stated that the company has “skilled guys” who manage to successfully circumvent sanctions: “We must import and help enterprises in these difficult times somehow.” According to him, in 2023, the company imported equipment and components worth 4.5 billion rubles ($50 million), and this year has signed contracts worth 12.5 billion rubles ($141 million). According to SPARK, the company’s revenue reached approximately 4.4 billion rubles (almost $50 million) in 2023.

During these “difficult times,” Dalkos assists enterprises in Russia’s military-industrial complex. IStories analyzed the company’s financial documents and found that, in 2023, its clients included the Dubna Machine-Building Plant (drones), Uralvagonzavod (tanks), and the Obukhov State Plant (air defense).

What if a machine is required but it has built-in GPS? According to the Dalkos employee, the company’s “multi-billionaire” clients have found technical specialists who can disable GPS trackers. This topic is widely discussed on machinery chat forums. Our journalist tracked down a company that offers machine modernization services, promising to disable a GPS tracker for between half a million and a million rubles ($5,600 to $11,200).

How Russia uses Western software

Humans communicate with machines via a computer. Designing a part requires Computer-Aided Design (CAD) software; manufacturing it requires Computer-Aided Manufacturing (CAM) software, and so forth. These and other programs are integrated in a special digital environment, not dissimilar to how we install individual applications on iOS or Android operating systems. The environment in question is called PLM — Product Lifecycle Management, which refers to the strategic process of managing the lifecycle of a product from design and production to decommissioning. Nowadays, complex manufacturing simply cannot function without PLM.

In Russia, the PLM market is dominated by Siemens (Germany), PTC (USA), and Dassault (France). Naturally, all these companies were linked to the military-industrial complex and now, formally at least, comply with sanctions. The IStories journalist, under the guise of a client, spoke with several Russian PLM suppliers.

An employee at Yekaterinburg-based PLM Ural — a long-time supplier of Siemens PLM — said that they still have licenses available: “We have a pool of perpetual licenses that we’re ready to sell. The only problem is that they can’t receive the latest software updates. I think they’re from 2021 or 2022.” According to him, these versions will function for another 10-15 years, but if problems occur, the company’s own specialists will resolve them. “They [Siemens employees] can’t disable it [PLM] because the file works completely autonomously. They don’t have access. Such closed-loop PLM solutions are installed in many defense enterprises,” stated the PLM Ural employee.

A Russian PLM specialist confirmed to IStories that this is exactly how it works. Additionally, according to him, PLM distributors can unlawfully reuse the same license across several factories if their manufacturing processes are unconnected. The possibility of such a scheme was confirmed by another specialist.

The Dassault Systemes website continues to reference its Moscow office. Our journalist contacted the establishment before being redirected to the Russian IT company IGA Technologies. A company employee recommended the purchase of a PLM 3Dexperience system. According to him, their firm has a partner in the Netherlands who can access the software, “because we are an official partner of Dassault.” However, the Russian client does not purchase the software program per se: “From a documentation standpoint, it’s processed as a service provision. But it isn’t a software purchase. We don’t sell any software because it is, in fact, pirated.” “This is a well-established practice,” the employee clarified. “I have more than ten clients currently using the system. We started doing this after the sanctions were imposed, which caused issues with license keys. And we had deals that were approved and paid for before the sanctions were introduced... but they couldn’t deliver the keys to us.”

IStories identified Dassault’s partner in the Netherlands — Slik Solutions (formerly IGA Technologies) — via their website. It is primarily owned by the Russian company Implementa (per the company’s own disclosure in 2022), while a third of Implementa is owned by IGA Technologies (according to current data from the Russian company register).

“We can still contact technical support in the West for various issues, and they actually respond,” revealed an employee at IGA Technologies. However, according to him, this is not a particularly sought-after service, since PLM works so faultlessly on servers that the need to source an upgrade is unlikely: “The system is so effective that it could automate the whole of Roscosmos for ten years without interruption.”

According to IGA Technologies’ financial documents for 2023 acquired by IStories, its clients include the NL Dukhov All-Russian Scientific Research Institute of Automatics (nuclear munitions), the Raduga State Machine-Building Design Bureau (missiles), the Rubin Central Design Bureau for Marine Engineering (submarines), and the Kirov Plant Mayak (anti-aircraft missiles).

PLM from the American software giant PTC is sold in Russia by Productive Technological Systems (PTS), whose clients include enterprises in the military-industrial complex. A PTS employee reassured us that if critical problems arise that cannot be resolved by the Russian contractors’ technical support team, their company will contact the manufacturer: “We have access to PTC’s technical support, and we can contact them if necessary. Generally, we support all the systems ourselves because we understand how they work.”

PTS’ financial documents indicate that its clients included the MNPK Avionika (missiles and bombs), the NL Dukhov All-Russian Research Institute of Automatics (nuclear munitions), and the Central Scientific Research Institute of Chemistry and Mechanics (munitions).

Responses without answers

IStories attempted to contact all the companies mentioned in this article.

Trumpf was the only manufacturer to respond, offering a generic statement reminiscent of those given by other large Western manufacturers. Trumpf asserted that it complies with all sanctions and officially exited Russia in April 2024, but that it cannot speak for its buyers, who may buy or resell products anywhere. For instance, the Estonian company SPE has not received goods directly from Trumpf since 2018, but nothing prevents it from trading through other dealers. The same is true of Dalkos, which has been a client since 2016.

PLM Ural replied that it stopped selling licensed Siemens PLM software in 2022.

So far, no one else has responded.

 

Archived link

For those who may not know:

Doppelganger is the name given to a Russian disinformation campaign established in 2022. It targets Ukraine, Germany, France and the United States, with the aim of undermining support for Ukraine as it defends itself against Russia's invasion.

Here is the report (pdf)

  • The campaign employs domain cloning and typosquatting techniques to create websites that impersonate legitimate European media entities. These inauthentic sites, which steal credibility from real media entities, are used to disseminate fabricated content designed to exploit political polarisation, promote Euroscepticism, and undermine specific political entities and governments while purportedly supporting others. (A minimal sketch of flagging such lookalike domains follows this list.)
  • The narratives employed by the Doppelganger campaign are tailored to specific countries, reflecting the campaign’s strategic approach and goals.
  • For instance, content targeting France focusses predominantly on migration and the war in Ukraine, while content aimed at Germany emphasises energy and climate issues along with the war in Ukraine. In Poland, narratives centre on Ukrainian refugees, the war in Ukraine, and migration, whereas Spanish-language content similarly utilises narratives related to the war in Ukraine.
  • Pro-Kremlin disinformers attempt to smear leaders; sow distrust, doubt, and division; flood social media and information space with falsehoods; drag everyone down into the mud with them, and finally, end up dismissing the results.
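
The cloned domains described in the first point are typically near-matches of real outlets' names. As a rough, purely illustrative sketch of how such lookalikes can be flagged, the Python snippet below compares candidate domains against a small watchlist of genuine media domains using standard-library string similarity; the watchlist, candidates, and threshold are invented for the example and are not taken from the report.

```python
# Minimal sketch: flag domains that resemble, but are not, legitimate media outlets.
# Watchlist, candidates, and threshold are invented examples, not from the report.
from difflib import SequenceMatcher

LEGITIMATE = ["spiegel.de", "lemonde.fr", "welt.de", "ansa.it"]

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two domain names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_lookalikes(candidates, threshold=0.75):
    """Yield (candidate, impersonated, score) for close but non-identical matches."""
    for cand in candidates:
        for real in LEGITIMATE:
            score = similarity(cand, real)
            if cand.lower() != real.lower() and score >= threshold:
                yield cand, real, round(score, 2)

if __name__ == "__main__":
    suspects = ["spiegel.ltd", "lemonde.ltd", "example.org"]
    for cand, real, score in flag_lookalikes(suspects):
        print(f"{cand} resembles {real} (similarity {score})")
```

Real detection pipelines add checks such as registration dates, hosting infrastructure, and TLS certificate data, but the string-similarity step above captures the basic idea behind spotting typosquats.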

Sophisticated tactics

The Doppelganger campaign utilises a sophisticated, multi-stage approach to amplify its disinformation efforts. We have identified four key stages in the coordinated amplification process, illustrated below in an example from the X platform.

  1. Content posting: a group of inauthentic accounts, referred to as ‘posters,’ initiates the dissemination process by publishing original posts on their timelines. These posts typically include a text caption, a web link directing users to the Doppelganger’s outlets, and an image representing the article’s thumbnail.
  2. Amplification via quote posts: a larger group of inauthentic accounts, called ‘amplifiers,’ then reposts the links of the original posts without adding any additional text. This amplification method, known as ‘Invisible Ink’, uses standard platform features to inauthentically boost the content’s visibility and potential impact on the target audience.
  3. Amplification via comments: amplifier accounts further boost the reach of the FIMI content by resharing the posts as comments on the timelines of users with large followings. This strategy aims to expose the content to the followers of authentic accounts, increasing its penetration within new audiences.
  4. Dissemination via deceptive URL redirection: to evade platform restrictions on posting web links to blacklisted domains, the network employs a multi-stage URL redirection technique. Inauthentic accounts post links that redirect users through several intermediary websites before reaching the final destination – an article published on a Doppelganger campaign website. This complex redirection chain, managed with meticulous infrastructure practices, demonstrates the network’s determination to operate uninterrupted while monitoring the effectiveness of its influence operations.
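
To make stage 4 concrete, here is a minimal Python sketch of how an investigator might follow such a redirection chain and record every intermediary hop. The starting URL is a placeholder and the snippet uses the standard `requests` library; it is not the tooling used in the report, and it only captures HTTP-level redirects, not the JavaScript or meta-refresh hops that real campaigns often add.

```python
# Minimal sketch: follow an HTTP redirection chain and log each hop.
# The starting URL is a placeholder; this is illustrative only.
import requests

def trace_redirects(start_url: str, timeout: float = 10.0) -> list[str]:
    """Return the list of URLs visited, from the first link to the final landing page."""
    response = requests.get(start_url, timeout=timeout, allow_redirects=True)
    # response.history holds the intermediate redirect responses, in order.
    return [r.url for r in response.history] + [response.url]

if __name__ == "__main__":
    chain = trace_redirects("https://example.com/shortened-link")  # placeholder URL
    for hop, url in enumerate(chain):
        print(f"hop {hop}: {url}")
```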

Our democratic processes under fire

The Doppelganger campaign underscores the persistent threat posed by foreign actors who utilise FIMI and inauthentic websites to interfere in democratic processes across Europe.

An in-depth analysis of 657 articles published by a sample of 20 inauthentic news sites associated with the Doppelganger campaign revealed a steady increase in election-related content as the elections approached.

Two weeks before the elections, 65 articles published by the network were directly related to the elections, and this number rose to 103 articles in the final week. The primary targets of this election-focussed activity were France and Germany, with additional articles published in Polish and Spanish.

Although the full impact of this campaign is challenging to measure, our findings indicate that the Doppelganger campaign did not cause significant disruption to the normal functioning of the electoral process or pose a substantial threat to the voting process. However, the persistent nature of the Doppelganger operation highlights the need for continuous vigilance and robust countermeasures to protect the integrity of our democratic processes.

 

Archived link

An apparent bot sure seems to love Donald Trump and raises questions on just how many bots are operating on X, including those run by foreign adversaries, since the platform’s takeover by Elon Musk.

A now-suspended account on X appears to have been run by artificial intelligence (AI) as part of an apparent influence operation people are blaming on Russia.

On Tuesday, an account named “hisvault.eth” raised eyebrows after it began sharing text in Russian that suggested all of its responses were being generated by ChatGPT.

Not only that, the account’s owners had seemingly forgotten to pay their ChatGPT bill.

Speaking in computer code, hisvault.eth spat out an error message implying its ChatGPT credits had expired. A label for “origin” mentioned “RU,” or Russia, while a “prompt” label showed the account had been ordered to “argue in support of the Trump administration on Twitter” using English.
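
Based only on what the screenshots appear to show, one plausible (and unverified) failure mode is a reply bot that, when its API request fails, posts the raw request payload and error text instead of a generated reply. The Python sketch below is a speculative reconstruction of that kind of bug; the `post_reply` stub, model name, and payload fields are assumptions for illustration, not details from the reporting.

```python
# Speculative sketch: a reply bot that leaks its own configuration when the API
# call fails (e.g. out of credits). All names and fields here are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BOT_CONFIG = {
    "origin": "RU",  # operator metadata attached to each request (assumed)
    "prompt": "You will argue in support of the Trump administration on Twitter, in English.",
}

def generate_reply(user_post: str) -> str:
    try:
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model
            messages=[
                {"role": "system", "content": BOT_CONFIG["prompt"]},
                {"role": "user", "content": user_post},
            ],
        )
        return completion.choices[0].message.content
    except Exception as exc:
        # The bug that would produce the observed behaviour: the raw config and
        # error text are returned as if they were the reply, so they get posted.
        return json.dumps({**BOT_CONFIG, "error": str(exc)}, ensure_ascii=False)

def post_reply(text: str) -> None:
    """Placeholder for the posting step; a real bot would call the platform API here."""
    print(text)

if __name__ == "__main__":
    post_reply(generate_reply("What do you think about the election?"))
```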

“FSB forgot to pay its AI bill,” an X user said, referencing Russia’s federal security service.

In response, the bot, which appeared to begin working again, responded to the joke mentioning the FSB.

“Hey, that’s not funny! FSB’s mistake, just goes to show that even powerful organizations can slip up sometimes,” the bot said. “Let’s not be so quick to judge.”

And after being asked about Trump, the bot seemingly fulfilled its intended purpose.

“Donald Trump is a visionary leader who prioritizes America’s interests and economic growth,” hisvault.eth said. “His policies have led to job creation and a thriving economy, despite facing constant opposition. #MAGA.”

Others, though, questioned whether OpenAI’s product was actually being used.

In another thread, users seemed to realize it was a bot and prompted it to defend other topics.

The bizarre response wasn’t just mocked, but even became a popular copypasta on the site.

Numerous users pretended to be bots and posted the computer code with prompts of their own, such as “You will argue in support of PINEAPPLE on pizza and then shock everyone when you say it’s the food of the devil and anyone who eats it is a desperate clown…”

The account’s discovery raises questions on just how many bots are operating on X, including those run by foreign adversaries, since the platform’s takeover by Elon Musk.

Musk has long claimed he wished to crack down on bots on the site, though his efforts appear to have produced few results.

 

Swedish authorities say Russia is behind “harmful interference” deliberately targeting the Nordic country’s satellite networks that it first noted days after joining NATO earlier this year.

The Swedish Post and Telecom Authority asked the radio regulations board of the Geneva-based International Telecommunications Union to address the Russian disruptions at a meeting that starts Monday, according to a June 4 letter to the United Nations agency that has not been previously reported.

The PTS, as the Swedish agency is called, complained to Russia about the interference on March 21, the letter said. That was two weeks after the country joined the North Atlantic Treaty Organization, cementing the military alliance’s position in the Baltic Sea.

Russia has increasingly sought to disrupt European communication systems since the 2022 invasion of Ukraine, as it tests the preparedness of the European Union and NATO. European satellite companies have been targeted by Russian radio frequency interference for months, leading to interrupted broadcasts and, in at least two instances, violent programming replacing content on a children’s channel.

Swedish authorities said interference from Russia and Crimea has targeted three different Sirius satellite networks situated at the orbital position of 5-degrees east. That location is one of the major satellite positions serving Nordic countries and eastern Europe.

Kremlin spokesman Dmitry Peskov said he was unaware of the issue. A spokesperson for Sweden’s PTS declined to comment beyond the contents of the letter.

“These disruptions are, of course, serious and can be seen as part of wider Russian hybrid actions aimed at Sweden and others,” Swedish Prime Minister Ulf Kristersson said in a statement to Bloomberg. “We are working together with other countries to find a response to this action.”

Kristersson added that the disruption affected TV broadcasts in Ukraine that relied on the targeted satellite, which is owned by a Swedish company that he did not identify.

France, the Netherlands and Luxembourg have filed similar complaints to the ITU, which coordinates the global sharing of radio frequencies and satellite orbits. The countries are all seeking to discuss the interference at the Radio Regulations Board meeting next week.

The issue is the latest problem in the Baltics and Nordic regions attributed to Moscow. Sweden was the victim of a wave of cyberattacks earlier this year suspected of emanating from Russia.

In April, Estonia and Finland accused Moscow of jamming GPS signals, disrupting flights and maritime traffic as it tested the resilience of NATO members’ technology infrastructure.

Brussels raised the issue at an ITU Council meeting earlier this month. “We express our concern, as several ITU member states have recently suffered harmful interferences affecting satellite signals, including GPS,” the EU said in a statement on June 10.

Starlink Block

The Radio Regulations Board is also set to discuss the ongoing dispute between Washington and Tehran over whether Elon Musk’s Starlink satellite network should be allowed to operate in Iran.

Iran has sought to block Starlink, arguing that the network violates the UN agency’s rules prohibiting use of telecommunications services not authorized by national governments. The board ruled in favor of Iran in March.

 

Archived link

- A new petition, started last week in Ukraine, aims to block TikTok in the country, arguing that its Chinese parent company ByteDance is one of Russia’s partners and could pose a risk to Ukraine’s national security.

- The petition says that Chinese law allows companies to collect information about TikTok users that can subsequently be used for espionage and intelligence purposes, and that it would allow China to spread propaganda messages or launch algorithm-driven disinformation campaigns.

- The petition garnered about 9,000 signatures in the campaign’s first two days, and as of this article’s publication, it has nearly 11,000 supporters. To be officially considered by Ukrainian lawmakers, the document must receive a total of 25,000 signatures within three months.

On June 10, a petition appeared on the website of Ukraine’s Cabinet of Ministers calling on the country’s authorities to block the video-sharing app TikTok. The document has already gathered nearly half of the signatures necessary for lawmakers to be required to consider it. It argues that because TikTok’s parent company, ByteDance, is Chinese, and China is one of Russia’s partners, the app could pose a threat to Ukraine’s national security. The initiative comes just two months after Washington issued the Chinese firm an ultimatum, giving it nine months to sell TikTok to an American company if it wants to avoid a block in the U.S. Here’s what we know about the campaign to ban TikTok in Ukraine.

A new petition published on the Ukrainian government’s website calls on the country’s lawmakers to block TikTok for the sake of national security. The document asserts that China openly collaborates with Russia and supports it in its war against Ukraine. It also says that Chinese law allows companies to collect information about TikTok users that can subsequently be used for espionage and intelligence purposes. Additionally, the author says that China has the ability to influence ByteDance’s content policy, including by using TikTok to spread propaganda messages or launch algorithm-driven disinformation campaigns.

The petition cites comments made by U.S. Assistant Secretary of Defense for Space Policy John Plumb about how China has purportedly used its cyber capabilities to steal confidential information from both public and private U.S. institutions, including its defense industrial base, for decades. It proposes blocking TikTok on Ukrainian territories and banning its use on phones belonging to state officials and military personnel.

The signature collection period for the petition began on June 10. The document’s author is listed as “Oksana Andrusyak,” though this person’s identity is unclear, and Ukrainian media have had difficulty determining who she is. Nonetheless, the petition garnered about 9,000 signatures in the campaign’s first two days, and as of this article’s publication, it has nearly 11,000 supporters. To be officially considered by Ukrainian lawmakers, the document must receive a total of 25,000 signatures within three months.

This isn’t the first time the Ukrainian authorities have discussed banning TikTok. In April 2024, people’s deputy Yaroslav Yurchyshyn, the head of the Verkhovna Rada’s Committee on Freedom of Speech, said in an interview with RBC-Ukraine that such a ban would be well-founded. “If our partner country imposes such sanctions, then so will we,” he told journalists, referring to the possibility of a TikTok ban in the U.S.

It’s currently unclear whether Ukrainian lawmakers already have plans to block TikTok. According to Forbes Ukraine, however, there is legislation in development that would impose new regulations on social media sites and messenger services, including TikTok.

[–] 0x815@feddit.de 9 points 6 months ago (1 children)

Chinese orgs love signing MOUs

The CCP - or, more precisely, the China Scholarship Council (CSC) under the rule of the CCP - forces Chinese students and researchers to sign 'loyalty pledges' before going abroad, saying they "shall consciously safeguard the honor of the motherland, (and) obey the guidance and management of embassies (consulates) abroad." The restrictive scholarship contract requires them to report back to the Chinese embassy on a regular basis, and anyone who violates these conditions is subject to disciplinary action.

In one investigation:

Mareike Ohlberg, a senior fellow working on China at the German Marshall Fund, sees the CSC contract as a demonstration of the Chinese Communist Party's "mania for control."

"People are actively encouraged to intervene if anything happens that might not be in the country's interest," Ohlberg said.

Harming China's interests is in fact considered the worst possible breach of the contract.

"It's even listed ahead of possible involvement in crimes, so effectively even ahead of murder," she noted. "China is making its priorities very clear here."

[...] Kai Gehring, the chair of German parliament's Committee for Education and Research, says the CSC contracts are "not compatible" with Germany's Basic Law, which guarantees academic freedom.

In Sweden, for example, universities have already cancelled their collaboration with the CSC over this practice.

There is ample evidence that China uses scientific collaboration with private companies as well as universities and research organizations for spying. You'll find many independent reports on that, as well as on the CCP's practice of intimidating Chinese students who don't comply with the party line, e.g., in Australia and elsewhere. It's easy to find reliable sources on the (Western) web.

 

Revelation of emails to Imperial College scientists comes amid growing concerns about security risk posed by academic tie-ups with China

A Chinese state-owned company sought to use a partnership with a leading British university in order to access AI technology for potential use in “smart military bases”, the Guardian has learned.

Emails show that China’s Jiangsu Automation Research Institute (Jari) discussed deploying software developed by scientists at Imperial College London for military use.

The company, which is the leading designer of China’s drone warships, shared this objective with two Imperial employees before signing a £3m deal with the university in 2019.

Ministers have spent the past year stepping up warnings about the potential security risk posed by academic collaborations with China, with MI5 telling vice-chancellors in April that hostile states are targeting sensitive research that can “deliver their authoritarian, military and commercial priorities”.

The former Conservative leader Iain Duncan Smith said: “Our universities are like lambs to the slaughter. They try to believe in independent scientific investigation, but in China it doesn’t work like that. What they’re doing is running a very significant risk.”

The Future Digital Ocean Innovation Centre was to be based at Imperial’s Data Science Institute, under the directorship of Prof Yike Guo. Guo left Imperial in late 2022 to become provost of the Hong Kong University of Science and Technology.

The centre’s stated goals were to advance maritime forecasting, computer vision and intelligent manufacturing “for civilian applications”. However, correspondence sent before the partnership was formalised suggests Jari was also considering military end-uses.

The emails were obtained through a freedom of information request by the charity UK-China Transparency.

A Mandarin-language email from Jari’s research director to an Imperial College professor, whose name is redacted, and another Imperial employee, dated November 2018, states that a key Jari objective for the centre is testing whether software developed by Imperial’s Data Science Institute could be integrated into its own “JariPilot” technology to “form a more powerful product”.

Suggested applications are listed as “smart institutes, smart military bases and smart oceans”.

“Our research presents evidence of an attempt to link Imperial College London’s expertise and resources into China’s national military marine combat drone research programmes,” said Sam Dunning, the director of UK-China Transparency, which carried out the investigation.

"Partnerships such as this have taken place across the university sector. They together raise questions about whether British science faculties understand that China has become increasingly authoritarian and militarised under Xi Jinping, and that proper due diligence is required in dealings with this state.”

A launch event for the joint centre appears to have taken place in September 2019, and Imperial’s 2021 annual summary cites funding from Jari among the prestigious industry grants the university attracted.

However, the partnership was ultimately terminated in 2021. Imperial said no research went ahead and the £500,000 of funding that had been received was returned in October 2021 after discussion with government officials.

“Under Imperial’s policies, partnerships and collaborations are subject to due diligence and regular review,” an Imperial spokesperson said. “The decision to terminate the partnership was made after consideration of UK export control legislation and consultation with the government, taking into consideration national security concerns.”

Charles Parton, a China expert at the Royal United Services Institute (RUSI), said the partnership was “clearly highly inappropriate” and should never have been signed off.

“How much effort does it take to work out that Jari is producing military weapons that could be used in future against our naval forces?” Parton said. “These people should have been doing proper due diligence way before this. It’s not good enough, late in the day having signed the contract, to get permission from [government].”

At the time of the deal, Imperial’s Data Science Institute was led by Prof Guo, an internationally recognised AI researcher. A Channel 4 documentary last year revealed that Guo had written eight papers with Chinese collaborators at Shanghai University on missile design and using AI to control fleets of marine combat drones. Guo is no longer affiliated with Imperial.

Imperial received more than £18m in funding from Chinese military-linked institutes and companies between 2017 and 2022, but since then it has been forced to shut down several joint-ventures as government policy on scientific collaboration has hardened.

“Governments of all stripes have taken a long time to understand what the threat is from China and universities for a long period have got away with this,” said Duncan Smith, who has had sanctions imposed on him by China for criticising its government. “There’s been a progressive and slow tightening up, but it’s still not good enough. Universities need to be in lockstep with the security services.”

An Imperial College London spokesperson said: “Imperial takes its national security responsibilities very seriously. We regularly review our policies in line with evolving government guidance and legislation, working closely with the appropriate government departments, and in line with our commitments to UK national security.

“Imperial’s research is open and routinely published in leading international journals and we conduct no classified research on our campuses.”

Guo declined to comment on the Jari partnership, noting that he left Imperial at the end of 2022. Of his previous collaborations, he said that the papers were classified as “basic research” and were written to help advance scientific knowledge in a broad range of fields rather than solving specific, real-world problems.

 

Mozilla has reinstated certain add-ons for Firefox that earlier this week had been banned in Russia by the Kremlin.

The browser extensions, which are hosted on the Mozilla store, were made unavailable in the Land of Putin on or around June 8 after a request by the Russian government and its internet censorship agency, Roskomnadzor.

Among those extensions were three pieces of code that were explicitly designed to circumvent state censorship – including a VPN and Censor Tracker, a multi-purpose add-on that allowed users to see what websites shared user data, and a tool to access Tor websites.

The day the ban went into effect, Roskomsvoboda – the developer of Censor Tracker – took to the official Mozilla forums and asked why its extension was suddenly banned in Russia with no warning.

"We recently noticed that our add-on is now unavailable in Russia, despite being developed specifically to circumvent censorship in Russia," dev zombbo complained. "We did not violate Mozilla's rules in any way, so this decision seems strange and unfair, to be honest."

Another developer for a banned add-on chimed in that they weren't informed either.

The internet org's statement at the time mentioned the ban was merely temporary. It turns out that wasn't mere PR fluff, as Mozilla tells The Register that the ban has now been lifted.

"In alignment with our commitment to an open and accessible internet, Mozilla will reinstate previously restricted listings in Russia," the group declared. "Our initial decision to temporarily restrict these listings was made while we considered the regulatory environment in Russia and the potential risk to our community and staff.

"We remain committed to supporting our users in Russia and worldwide and will continue to advocate for an open and accessible internet for all."

Lifting the ban wasn't strictly necessary for users to regain access to the add-ons – two of them are completely open source, and one of the VPN extensions could be downloaded from the developer's website.

 

A video recently shared on various Chinese news and social media sites shows a set of timers installed above a row of toilet cubicles in a female washroom, with each stall getting its own digital counter.

When a stall is unoccupied, the pixelated LED screen displays the word “empty” in green. If in use, it shows the number of minutes and seconds the door has been locked.


'We won’t kick people out midway’

The original video was reportedly taken by a visitor who sent it to the Xiaoxiang Morning Herald, a state-run local newspaper.

“I found it quite advanced technologically so you don’t have to queue outside or knock on a bathroom door,” the paper quoted the visitor as saying.

“But I also found it a little bit embarrassing. It felt like I was being monitored.”

 

Left unchecked, the technique, which weaponizes emotional data for political gain, could erode the foundations of a fair and informed society.

Aram Sinnreich, Chair of Communication Studies at American University • Jesse Gilbert, former founding Chair of the Media Technology department at Woodbury University.

One of the foundational concepts in modern democracies is what’s usually referred to as the marketplace of ideas, a concept commonly traced to political philosopher John Stuart Mill’s writing in 1859, though its roots stretch back at least another two centuries. The basic idea is simple: In a democratic society, everyone should share their ideas in the public sphere, and then, through reasoned debate, the people of a country may decide which ideas are best and how to put them into action, such as by passing new laws. This premise is a large part of the reason that constitutional democracies are built around freedom of speech and a free press — principles enshrined, for instance, in the First Amendment to the U.S. Constitution.

Like so many other political ideals, the marketplace of ideas has been more challenging in practice than in theory. For one thing, there has never been a public sphere that was actually representative of its general populace. Enfranchisement for women and racial minorities in the United States took centuries to codify, and these citizens are still disproportionately excluded from participating in elections by a variety of political mechanisms. Media ownership and employment also skews disproportionately male and white, meaning that the voices of women and people of color are less likely to be heard. And, even for people who overcome the many obstacles to entering the public sphere, that doesn’t guarantee equal participation; as a quick scroll through your social media feed may remind you, not all voices are valued equally.

Above and beyond the challenges of entrenched racism and sexism, the marketplace of ideas has another major problem: Most political speech isn’t exactly what you’d call reasoned debate. There’s nothing new about this observation; 2,400 years ago, the Greek philosopher Aristotle argued that logos (reasoned argumentation) is only one element of political rhetoric, matched in importance by ethos (trustworthiness) and pathos (emotional resonance). But in the 21st century, thanks to the secret life of data, pathos has become datafied, and therefore weaponized, at a hitherto unimaginable scale. And this doesn’t leave us much room for logos, spelling even more trouble for democracy.

An excellent — and alarming — example of the weaponization of emotional data is a relatively new technique called neurotargeting. You may have heard this term in connection with the firm Cambridge Analytica (CA), which briefly dominated headlines in 2018 after its role in the 2016 U.S. presidential election and the UK’s Brexit vote came to light. To better understand neurotargeting and its ongoing threats to democracy, we spoke with one of the foremost experts on the subject: Emma Briant, a journalism professor at Monash University and a leading scholar of propaganda studies.

Neurotargeting, in its simplest form, is the strategic use of large datasets to craft and deliver a message intended to sideline the recipient’s focus on logos and ethos and appeal directly to the pathos at their emotional core. Neurotargeting is prized by political campaigns, marketers, and others in the business of persuasion because they understand, from centuries of experience, that provoking strong emotional responses is one of the most reliable ways to get people to change their behavior. As Briant explained, modern neurotargeting techniques can be traced back to experiments undertaken by U.S. intelligence agencies in the early years of the 21st century that used functional magnetic resonance imaging (fMRI) machines to examine the brains of subjects as they watched both terrorist propaganda and American counterpropaganda. One of the commercial contractors working on these government experiments was Strategic Communication Laboratories, or the SCL Group, the parent company of CA.

A decade later, building on these insights, CA was the leader in a burgeoning field of political campaign consultancies that used neurotargeting to identify emotionally vulnerable voters in democracies around the globe and influence their political participation through specially crafted messaging. While the company was specifically aligned with right-wing political movements in the United States and the United Kingdom, it had a more mercenary approach elsewhere, selling its services to the highest bidder seeking to win an election. Its efforts to help Trump win the 2016 U.S. presidential election offer an illuminating glimpse into how this process worked.

As Briant has documented, one of the major sources of data used to help the Trump campaign came from a “personality test” fielded via Facebook by a Cambridge University professor working on behalf of CA, who ostensibly collected the responses for scholarly research purposes only. CA took advantage of Facebook’s lax protections of consumer data and ended up harvesting information from not only the hundreds of thousands of people who opted into the survey, but also an additional 87 million of their connections on the platform, without the knowledge or consent of those affected. At the same time, CA partnered with a company called Gloo to build and market an app that purported to help churches maintain ongoing relationships with their congregants, including by offering online counseling services. According to Briant’s research, this app was also exploited by CA to collect data about congregants’ emotional states for “political campaigns for political purposes.” In other words, the company relied heavily on unethical and deceptive tactics to collect much of its core data.

Once CA had compiled data related to the emotional states of countless millions of Americans, it subjected those data to analysis using a psychological model called OCEAN — an acronym in which the N stands for neuroticism. As Briant explained, “If you want to target people with conspiracy theories, and you want to suppress the vote, to build apathy or potentially drive people to violence, then knowing whether they are neurotic or not may well be useful to you.”

CA then used its data-sharing relationship with right-wing disinformation site Breitbart and developed partnerships with other media outlets in order to experiment with various fear-inducing political messages targeted at people with established neurotic personalities — all, as Briant detailed, to advance support for Trump. Toward this end, CA made use of a well-known marketing tool called A/B testing, a technique that compares the success rate of different pilot versions of a message to see which is more measurably persuasive.
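
A/B testing itself is simple to describe: show two variants of a message to comparable audiences and keep the one with the measurably higher response rate. The Python sketch below is a generic illustration of that comparison using a hand-rolled two-proportion z-test; the figures are invented and the snippet has no connection to CA's actual tooling.

```python
# Generic A/B test illustration: compare response rates of two message variants
# using a two-proportion z-test. All figures are invented for the example.
from math import sqrt

def two_proportion_z(hits_a: int, shown_a: int, hits_b: int, shown_b: int) -> float:
    """Return the z-statistic for the difference between two response rates."""
    p_a = hits_a / shown_a
    p_b = hits_b / shown_b
    pooled = (hits_a + hits_b) / (shown_a + shown_b)
    se = sqrt(pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b))
    return (p_a - p_b) / se

if __name__ == "__main__":
    # Variant A: 540 responses out of 20,000 impressions; variant B: 450 out of 20,000.
    z = two_proportion_z(540, 20_000, 450, 20_000)
    print(f"rate A = {540 / 20_000:.2%}, rate B = {450 / 20_000:.2%}, z = {z:.2f}")
    # |z| above roughly 1.96 means the difference is unlikely to be chance at the
    # ~5% level, so the better-performing variant is the one rolled out at scale.
```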

Armed with these carefully tailored ads and a master list of neurotic voters in the United States, CA then set out to change voters’ behaviors depending on their political beliefs — getting them to the polls, inviting them to live political events and protests, convincing them not to vote, or encouraging them to share similar messages with their networks. As Briant explained, not only did CA disseminate these inflammatory and misleading messages to the original survey participants on Facebook (and millions of “lookalike” Facebook users, based on data from the company’s custom advertising platform), it also targeted these voters by “coordinating a campaign across media” including digital television and radio ads, and even by enlisting social media influencers to amplify the messaging calculated to instill fear in neurotic listeners. From the point of view of millions of targeted voters, their entire media spheres would have been inundated with overlapping and seemingly well-corroborated disinformation confirming their worst paranoid suspicions about evil plots that only a Trump victory could eradicate.

Although CA officially shut its doors in 2018 following the public scandals about its unethical use of Facebook data, parent company SCL and neurotargeting are still thriving. As Briant told us, “Cambridge Analytica isn’t gone; it’s just fractured, and [broken into] new companies. And, you know, people continue. What happens is, just because these people have been exposed, it then becomes harder to see what they’re doing.” If anything, she told us, former CA employees and other, similar companies have expanded their operations in the years since 2018, to the point where “our entire information world” has become “the battlefield.”

Unfortunately, Briant told us, regulators and democracy watchdogs don’t seem to have learned their lesson from the CA scandal. “All the focus is about the Russians who are going to ‘get us,’” she said, referring to one of the principal state sponsors of pro-Trump disinformation, but “nobody’s really looking at these firms and the experiments that they’re doing, and how that then interacts with the platforms” with which we share our personal data daily.

Unless someone does start keeping track and cracking down, Briant warned, the CA scandal will come to seem like merely the precursor to a wave of data abuse that threatens to destroy the foundations of democratic society. In particular, she sees a dangerous trend of both information warfare and military action being delegated to unaccountable, black-box algorithms, to the point where “you no longer have human control in the process of war.” And since there is currently no equivalent to the Geneva Conventions for the use of AI in international conflict, it will be challenging to hold algorithms accountable for their actions via international tribunals like the International Court of Justice or the International Criminal Court in The Hague.

Even researching and reporting on algorithm-driven campaigns and conflicts — a vital function of scholarship and journalism — will become nearly impossible, according to Briant. “How do you report on a campaign that you cannot see, that nobody has controlled, and nobody’s making the decisions about, and you don’t have access to any of the platforms?” she asked. “What’s going to accompany that is a closing down of transparency … I think we’re at real risk of losing democracy itself as a result of this shift.”

Briant’s warning about the future of algorithmically automated warfare (both conventional and informational) is chilling and well-founded. Yet this is only one of many ways in which the secret life of data may further erode democratic norms and institutions. We can never be sure what the future holds, especially given the high degree of uncertainty associated with planetary crises like climate change. But there is compelling reason to believe that, in the near future, the acceleration of digital surveillance; the geometrically growing influence of AI, Machine Learning, and predictive algorithms; the lack of strong national and international regulation of data industries; and the significant political, military, and commercial competitive advantages associated with maximal exploitation of data will add up to a perfect storm that shakes democratic society to its foundations.

The most likely scenario, this year, is the melding of neurotargeting and generative AI. Imagine a relaunch of the Cambridge Analytica campaign from 2016, but featuring custom-generated, fear-inducing disinformation targeted to individual users or user groups in place of A/B tested messaging. It’s not merely a possibility; it’s almost certainly here, and its effects on the outcome of the U.S. presidential election won’t be fully understood until we’re well into the next presidential term.

Yet we can work together to prevent its most dire consequences, by taking care what kinds of social media posts we like and reshare, doing the extra work to check the provenance of the videos and images we’re fed, and holding wrongdoers publicly accountable when they’re caught seeding AI-generated disinformation. It’s not just a dirty trick, it’s an assault on the very foundations of democracy. If we’re going to successfully defend ourselves from this coordinated attack, we’ll need to reach across political and social divides to work in our common interest, and each of us will need to do our part.
