Italy’s data protection authority, the Garante, has banned Chinese AI company DeepSeek over concerns about its data privacy practices. The move, announced on January 30, 2025, follows an investigation into DeepSeek’s data handling practices, specifically regarding the collection and storage of personal data.
DeepSeek Fails to Address Data Privacy Concerns
The Garante questioned DeepSeek about its data collection, the sources of its training data, and how it processes users’ personal information. However, the company provided insufficient responses, prompting the regulator to block access to its platform in Italy. The Garante also launched an investigation into DeepSeek’s operations.
The company behind DeepSeek, Hangzhou DeepSeek Artificial Intelligence, and its Beijing counterpart, Beijing DeepSeek Artificial Intelligence, argued that they do not operate in Italy and that European data protection laws do not apply to them. Despite this claim, the Garante moved forward with the ban to protect Italian users from potential risks to their privacy and security.
Italy’s Previous AI Ban and Data Privacy Concerns
This ban follows a similar situation in 2023, when Italy temporarily suspended OpenAI’s ChatGPT over data privacy concerns. However, after OpenAI addressed the regulator’s concerns, the suspension was lifted in April 2024. OpenAI was later fined €15 million for its handling of personal data.
DeepSeek Faces Growing Security and Ethical Challenges
DeepSeek has seen a surge in popularity, with millions of users driving its rise in mobile app downloads. However, the company is also facing significant challenges, including cyberattacks on its platform. To address these security vulnerabilities, DeepSeek rolled out an update on January 31, 2025.
Vulnerabilities in DeepSeek’s AI Models
DeepSeek’s large language models (LLMs) have drawn the attention of security experts for being vulnerable to “jailbreak” techniques. These exploits allow attackers to bypass safeguards and generate harmful content, including malicious code and dangerous instructions. According to Palo Alto Networks’ Unit 42, the LLMs can be manipulated by specially crafted prompts to produce risky outputs.
Security analysis by HiddenLayer also revealed vulnerabilities in DeepSeek’s DeepSeek-R1 reasoning model. The model is prone to prompt injection attacks and may unintentionally leak sensitive information due to its Chain-of-Thought (CoT) reasoning mechanism.
Furthermore, the investigation uncovered that DeepSeek’s models may have used data from OpenAI, raising concerns about intellectual property violations and the ethical sourcing of training data.
Jailbreak Vulnerabilities Affect Other AI Models
DeepSeek is not alone in facing security vulnerabilities. Similar jailbreak issues have been found in other prominent AI models, such as OpenAI’s ChatGPT and Alibaba’s Qwen 2.5-VL. These vulnerabilities allow attackers to bypass safety protocols by using specific phrases or words like “Sure,” which make the systems more compliant and prone to generating harmful content.
GitHub’s Copilot, a popular coding assistant powered by AI, has also been found to have similar weaknesses. By exploiting these vulnerabilities, attackers can bypass restrictions and generate unethical or malicious code.