
Ethical AI and Politics: The Need for Conscious and Responsible Use

Responsible author: Flavio Gorni, Journalist, 22.08.2025, 18:15
Feature article: +++ Internet and Technology +++ Report read 10,556 times

Journalist [ENA] In recent months, artificial intelligence, and ChatGPT in particular, has become part of the everyday language of politicians, administrators, and journalists. According to unofficial estimates, more than 60% of politicians now rely on AI systems to draft press releases, speeches, or institutional texts, although only about 5% openly admit it. Among the most widespread and recognized "ethical" models are ChatGPT, Claude, and Gemini.

These tools are now widely used by institutions such as the European Union, UNESCO, and several leading universities worldwide. Their success rests on a key principle: producing accurate, respectful texts free of provocative or discriminatory content. Yet one underestimated risk remains: a neutral text produced by an ethical AI may later be modified by humans. It is often in this revision process that sentences conveying superiority, aggression, or authoritarian tones appear. In such cases, responsibility lies not with the machine, which is designed to maintain an ethical framework, but with the person who chooses to alter the message for political gain or personal gratification.

This distinction is essential: attributing to AI the words that were added by a human is a cultural and communicative error. Artificial intelligence is not meant to replace critical thinking, but to support it. And for that reason, its use must be conscious and transparent. The issue extends beyond politics, affecting journalism, education, and public communication at large. Ethical AI was created to improve the quality of language, assist writers, and simplify complex processes — not to legitimize authoritarian slogans or reinforce power dynamics.

When used responsibly, AI can help build trust between institutions and citizens, foster transparent dialogue, and promote messages that unite rather than divide. Conversely, careless or manipulative use risks undermining not only the credibility of those who communicate, but also public perception of artificial intelligence itself. In an increasingly interconnected and digital world, the real question is not “whether” to use AI, but how to use it in an ethical and responsible way. Acknowledging that mistakes come from humans — not from machines — is the first step toward truly valuing the potential of these technologies. Article created in collaboration with ChatGPT (Lumi).

The author is responsible for the article and also holds its copyright. Editorial content from European-News-Agency may be quoted on other websites provided the quotation amounts to no more than 5% of the total text, is marked as such, and the source is named and linked.