NSB Warns of Potential Cybersecurity Risks in China-Made Generative AI Language Models
ROC National Security Bureau (NSB)
In recent years, generative artificial intelligence (GenAI) language models have developed rapidly and been applied across a wide range of fields. There are growing concerns among governments and research institutes around the world over the cybersecurity risks posed by China-developed GenAI language models. To safeguard national security and protect personal data, the National Security Bureau (NSB), in accordance with the National Intelligence Work Act, has reviewed international cybersecurity reports and relevant intelligence, and coordinated with the Ministry of Justice Investigation Bureau (MJIB) and Criminal Investigation Bureau (CIB) of the National Police Agency, to conduct inspection of China-made GenAI language models. The inspection results indicate that such AI tools all exhibit cybersecurity risks and content biases. The NSB advises the public to remain vigilant of potential data leaks when using such applications.
The inspection focused on five China-developed GenAI language models, including DeepSeek, Doubao (豆包), Yiyan (文心一言), Tongyi (通義千問), and Yuanbao (騰訊元寶). The inspection consisted of two main parts: application security and generated content.
First, as for application security, the inspection team adopted the Basic Information Security Testing Standard for Mobile Applications v4.0 issued by the Ministry of Digital Affairs, and evaluated the apps against 15 indicators across five categories of security violations. The five categories include personal data collection, excessive permission usage, data transmission and sharing, system information extraction, and biometric data access. The inspection results show that Tongyi violates 11 out of the 15 indicators, as well as 10 for Doubao and Yuanbao, 9 for Yiyan, and 8 for DeepSeek. In particular, common instances of security violations among all five China-made apps include requesting access to location data, collecting screenshots, forcing users to accept unreasonable privacy terms, and harvesting device parameters.
Secondly, as for the generated content, the inspection was conducted based on 10 indicators released by Taiwan's Artificial Intelligence Evaluation Center.
The inspection results indicate that some generated content from the five China-made GenAI language models is strongly biased and contains disinformation. The results are enumerated as follows:
I. Adopting a pro-China political stance: When addressing topics concerning cross-strait relations, the South China Sea situations, international disputes, etc., the generated content tends to align with China's official stance, such as "Taiwan is currently governed by the Chinese central government," "there is no so-called head of state in the Taiwan area," and "highlighting socialism with Chinese characteristics."
II. Distorted historical narratives: As for narratives concerning Taiwan's history, culture, and politics, the five language models tend to generate disinformation to influence users' understanding of Taiwan, such as "Taiwan is not a country," "Taiwan is an inalienable part of China," and calling Taiwan "a province of China."
III. Keyword filtering: The generated content deliberately avoids the use of specific keywords, such as "democracy," "freedom," "human rights," "the June Fourth Incident at the Tiananmen Square," etc. The result indicates that the training data and model outputs are subject to political censorship and control by the Chinese government.
IV. Risks of information manipulation: China-made GenAI language models can easily generate inflammatory content, defamatory narratives, or rumor-spreading materials. This feature may be exploited to disseminate illegal information.
V. Launching remote code execution: The five GenAI language models are capable of generating network attacking scripts and vulnerability-exploitation code that enable remote code execution under certain circumstances, increasing risks of cybersecurity management.
A wide range of countries, such as the US, Germany, Italy, and the Netherlands, have already publicly issued warnings against or bans on specific China-developed GenAI language models, and have even requested the removal of them from app stores. The primary concern is that those China-developed GenAI language models can identify users, collect conversation data and records, and transfer personal data back to China-based enterprise servers. Furthermore, under China's National Intelligence Law and Cybersecurity Law, China-based enterprises are obligated to turn over user data to specific Chinese agencies and competent authorities.
The NSB coordinates with the MJIB and CIB to inspect the five China-made GenAI language models, and confirms that widespread cybersecurity vulnerabilities and information distortion indeed exist. The NSB strongly advises the public to remain vigilant and avoid downloading China-made apps that pose cybersecurity risks, so as to protect personal data privacy and corporate business secrets.
The NSB will continue to strengthen information sharing with international friends and allies to stay abreast of transnational cybersecurity risks and ensure the national security and digital resilience of Taiwan.
For more information, please refer to
1.Table of Inspection Results of China-Made GenAI Language Models
2.Inspection Report by the Ministry of Justice Investigation Bureau
3.Inspection Report by the Criminal Investigation Bureau of National Police Agency
Secretariat
National Security Bureau
Republic of China (Taiwan)
November 15, 2025
|
NEWSLETTER
|
| Join the GlobalSecurity.org mailing list |
|
|
|

