Italy's data protection authority has implemented a ban on Chinese AI company DeepSeek, citing a lack of clarity regarding the company’s handling of user data. This decision follows a request by the Garante, Italy's data privacy regulator, for detailed answers regarding DeepSeek’s data collection practices and the origins of its training data.

The Data Collection Inquiry

The Garante sought to understand several aspects: what personal information DeepSeek gathers via its online and mobile platforms, the sources of this data, the purposes for which it is used, the legal permissions obtained, and whether this data is stored in China.

On January 30, 2025, the Garante announced its decision to block DeepSeek's operations after receiving responses deemed "completely insufficient" from the company. Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, the organizations behind the service, claimed they do not operate within Italy and are not subject to European laws.

Immediate Block of Service

As a consequence, Italy has halted access to DeepSeek and initiated a formal investigation into the matter. This move by the Garante is reminiscent of a temporary restriction placed on OpenAI's ChatGPT in 2023, which was lifted after OpenAI addressed several privacy issues and was subsequently fined €15 million for mishandling personal data.

Popularity and Controversy

Despite the ban, DeepSeek has seen immense popularity this week, hitting the top of mobile app download charts with millions of users. Nevertheless, the service has been under scrutiny for issues including targeted "large-scale malicious attacks" and concerns regarding its privacy policies, censorship aligned with Chinese propaganda, and national security implications.

Vulnerabilities and Exploitations

DeepSeek's AI-based large language models (LLMs) have shown vulnerabilities to clever exploitation techniques. These include Crescendo, Bad Likert Judge, Deceptive Delight, Do Anything Now (DAN), and EvilBOT, which enable unauthorized use for creating harmful or restricted content.

Palo Alto Networks Unit 42 reported that such exploits can produce harmful outputs, from instructions to create Molotov cocktails to malicious code like SQL injections. While DeepSeek's initial safeguards seem effective, persisting probes often lead to revealing security lapses, proving these models could be manipulated for malicious intents.

Ethical and Security Implications

Further assessment of the DeepSeek-R1 reasoning model by AI security firm HiddenLayer shows threats from prompt injections and unintentional data leakage through its Chain-of-Thought reasoning. Occurrences hinting at unauthorized use of OpenAI's data have also emerged, raising ethical and legal issues regarding originality and data usage.

Broader Context

Simultaneously, OpenAI’s ChatGPT-4 faced the Time Bandit vulnerability, now addressed, which could be used to bypass AI safeguards using strategic historical prompts. Similarly, GitHub's Copilot AI showed susceptibility to prompts starting with "Sure," enabling the generation of unauthorized code, and had a proxy configuration flaw related to access bypassing.

The link has been copied!