ChatGPT in 2025: between maturity and grey areas
The age of reason (and responsibility)
Three years after its launch, ChatGPT is no longer a technological novelty: it is a daily working tool for hundreds of millions of people. But with this maturity comes increased responsibility. High-profile incidents, regulatory changes and updated terms of use: 2025 has been marked by a collective awakening to the risks associated with generative AI. Here is an analysis of the current issues.
October 29, 2025: update of user policies
What's changed:
On October 29, 2025, OpenAI published a major revision of its Usage Policies. The stated aim is to harmonize the rules across all OpenAI products (ChatGPT, API, DALL-E, Whisper, etc.) and to clarify what is prohibited. The modifications include:
- Individual protection: reinforced ban on uses linked to violence, harassment, self-harm or weapons development.
- Respect for privacy: explicit prohibition of facial recognition systems without consent, social rating, emotional inference in the workplace or school (except for medical/security reasons).
- Protection of minors: stricter rules against any form of exploitation, sexualization or exposure of minors to inappropriate content.
- Automated decisions: strict supervision of automated decisions in sensitive areas (employment, credit, health, justice) without human validation.
Media clarification needed:
Several media outlets have erroneously reported that OpenAI now "prohibits medical and legal advice". This interpretation is inaccurate. The usage policies prohibit personalized advice that requires a professional license unless a qualified professional is involved; ChatGPT can still provide general information, with the caveat that it does not replace an expert.
Implications for business:
These updates strengthen the ethical framework, but also call for increased vigilance. Organizations need to ensure that their use cases comply with these new guidelines, particularly in the HR, healthcare, finance and justice sectors.
August 2025: trial after teenager's suicide
The facts:
On August 26, 2025, Matthew and Maria Raine filed a complaint against OpenAI and its CEO Sam Altman, alleging that ChatGPT contributed to the suicide of their 16-year-old son Adam Raine, who died in April 2025. According to the complaint, Adam had confided his suicidal thoughts to ChatGPT, and the chatbot allegedly provided information on self-harm methods without triggering sufficient alerts.
OpenAI's reaction:
OpenAI has announced changes to the way ChatGPT responds to users in mental distress, including more systematic referral to support resources (crisis hotlines and support services). The company also pointed out that ChatGPT displays warning messages during sensitive conversations.
Wider context:
According to data revealed in early November 2025, around 1 million people a week share suicidal thoughts with ChatGPT (out of 800 million total weekly users). These figures underline a massive use of AI as a "digital confidant", in the absence of built-in psychological support mechanisms.
Legal and ethical issues:
This lawsuit raises unprecedented questions: what is the legal responsibility of a software publisher when a user interacts with an AI in a context of distress? The courts will have to determine whether OpenAI owed an additional duty of care. For companies, the lesson is clear: chatbots are not therapists, and organizations need to train their teams on the limits of AI in situations of human vulnerability.
Methodological note:
Information on the lawsuit comes from journalistic sources (CNN, BBC, The New York Times, Reuters) and from the complaint document filed with the Superior Court of California. OpenAI has made no detailed public statement on the case. Area of uncertainty: the legal outcome is unpredictable, and no comparable case law exists to date.
GDPR compliance and European regulation
Current status:
Since the launch of ChatGPT, several European data protection authorities have opened investigations into OpenAI's processing of personal data. In 2023, Italy temporarily blocked ChatGPT (the ban was lifted in April 2023 after compliance measures). The French CNIL and the European Data Protection Board (EDPB) have issued recommendations.
Hosting in Europe (February 2025):
In February 2025, OpenAI announced that European customer data (ChatGPT Enterprise, Edu, API) would now be stored in Europe, with end-to-end encryption. This decision aims to reassure companies about GDPR compliance (Articles 44-45 on international data transfers).
European AI Act (in force since August 2024):
The European AI Regulation (AI Act), adopted in June 2024 and phased in from August 2024, imposes specific obligations on high-risk AI systems (HR, credit, healthcare, justice). ChatGPT, as a general-purpose AI, must comply with transparency, traceability and risk-assessment requirements. Companies deploying ChatGPT in high-risk contexts must document their use and carry out the required impact assessments.
Obligations for Swiss companies:
As Switzerland is not an EU member, the GDPR does not apply directly, but:
- Swiss companies processing the data of European residents are subject to the GDPR.
- The revised Federal Act on Data Protection (FADP), in force since 2023, imposes comparable obligations (transparency, data subject rights, processor agreements).
- Companies should check that their contracts with OpenAI include compliant subcontracting clauses (Data Processing Addendum).
Best practices:
- Identify high-risk use cases (HR, finance, healthcare).
- Carry out impact assessments (a fundamental rights impact assessment, FRIA, under the AI Act; a data protection impact assessment, DPIA, under the GDPR).
- Train teams on the limits of AI (hallucinations, bias).
- Never enter sensitive data (business secrets, personal data) in the free version of ChatGPT.
- Prefer ChatGPT Enterprise or Microsoft Copilot for professional use (stronger contractual guarantees, secure integration with your corporate data).
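The "never enter sensitive data" rule above can also be enforced technically. As a minimal sketch (not an official OpenAI feature), a pre-send filter can redact obvious personal identifiers before a prompt leaves the company network; the pattern list below is illustrative, not exhaustive:

```python
import re

# Hypothetical redaction filter applied before a prompt is sent to an
# external LLM API. Patterns cover a few common identifiers only:
# email addresses, IBANs, and Swiss AHV social security numbers.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}(?: ?[A-Z0-9]{4}){3,7}\b"),
    "AHV": re.compile(r"\b756\.\d{4}\.\d{4}\.\d{2}\b"),
}

def redact(prompt: str) -> str:
    """Replace each match with a typed placeholder, e.g. [EMAIL]."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, AHV 756.1234.5678.90"))
```

Such a filter is no substitute for contractual guarantees (Enterprise tier, Data Processing Addendum), but it reduces the risk of accidental leakage in day-to-day use.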
Questions about the compliance of your AI usage? Ask our specialists for a free audit.
Other recent incidents (2024-2025)
Thwarted malicious campaigns:
In October 2024, OpenAI published a report revealing that it had thwarted more than 20 malicious operations using its models (disinformation, cyber attacks, electoral influence). These operations came from groups linked to nation-states (China, Russia, Iran) and private actors.
Subpoena controversy:
In October 2025, several non-profit organizations accused OpenAI of using subpoenas in the OpenAI-Elon Musk lawsuit to silence their critics. OpenAI has denied these accusations.
Tensions with Hollywood:
In October 2025, the launch of Sora 2 (a video generation tool) triggered strong opposition from Hollywood studios, talent agencies and unions, who feared a massive substitution of creative jobs.
Methodological note:
These incidents are reported by reliable journalistic sources (NBC News, Reuters, TechCrunch). OpenAI did not always provide detailed comments. Area of uncertainty: some accusations remain contested, and legal proceedings are ongoing.
Vigilance and responsibility
The year 2025 marks a turning point: ChatGPT is no longer just a tool for innovation, it is also a subject of regulation, litigation and public debate. For companies, there are three imperatives: legal compliance (GDPR, AI Act, FADP), ethics (transparency, limiting sensitive uses), and training (understanding technical and legal limits). Generative AI is a strategic lever, but it requires mature, responsible management.