We distinguish three main themes: ethical issues in training AI models, risks of AI usage, and major societal questions.
1. Ethical issues in training AI models
Data collection and usage
AI models are often trained on data collected without the explicit consent of the owner or creator. This raises questions about intellectual property: Which texts, images, and videos may be used? Should creators be compensated when their work is used? The New York Times, for example, sued OpenAI for using its journalistic work to train AI models without permission.
Representation and bias
The data on which AI is trained reflects societal biases from the past and present. For instance, facial recognition systems make more errors when identifying people with darker skin tones, potentially leading to incorrect categorization or labeling, with all the consequences that entails.
Environmental impact
Training large AI models requires enormous computing power and energy, resulting in a significant CO₂ footprint. The production of computer chips also has a considerable environmental impact. Weigh the benefits of AI against its ecological costs.
Want to know more about how to consciously use generative AI? Read the article about AI's environmental impact here.
Transparency among providers
AI is often a ‘black box’: even developers do not always understand how AI models arrive at certain outcomes. The AI Act imposes a transparency requirement on providers. They must provide information about an AI model and make clear when the public is interacting with AI. The consequences of the ‘black box’ nature of AI can be severe, such as when an AI model recommends the wrong medical treatment and no one can explain why.
Exploitation of workers
Many workers spend their days training and improving AI models. These are often poorly paid laborers working under harsh conditions. In 2023, it was revealed that Kenyan employees were paid less than $2 per hour to review extremely harmful content to make language models safer.
2. Risks of using AI
Hallucinations and unreliable outcomes
AI can generate answers that are factually incorrect, also known as hallucinations. Language models are trained to produce an answer to any question, even when they lack the information to answer correctly. This happens less frequently now, but when it does, the consequences can be problematic. For example, an American lawyer thought he could save time by using ChatGPT to prepare his case. He ended up citing non-existent legal precedents, for which the judge reprimanded him.
Replacing human labor
AI systems are increasingly automating tasks previously performed by humans. Getty Images has filed lawsuits against AI companies for using its stock photos, and illustrators are already noticing a decline in assignments due to the rise of AI-generated images. Similarly, (voice) actors are sometimes replaced by AI-generated voices and digital clones.
Privacy
AI technologies enable new forms of surveillance. Festivals and other large events use smart cameras to analyze and optimize visitor flows. While this enhances safety, it can also be perceived as an invasion of privacy.
What about the information you share with an AI tool? It is stored in a database, even if it involves personal data or sensitive business information.
Want to know what it takes to work in compliance with the GDPR? Read this article to learn more about what data you can collect and how to use it safely to reach your audience.
Dependency and loss of skills
When AI takes over tasks previously performed by humans, certain knowledge, skills, and expertise diminish. For instance, experienced editors can create a compelling newspaper and craft engaging headlines due to years of practice. If these tasks are outsourced to AI, the results might be comparable, but no one develops those skills anymore.
Intentional spread of disinformation and manipulation
Generative AI tools can be used to create convincing fake news. For example, a deepfake video of Ukrainian President Zelensky appeared to show him calling on his troops to surrender to Russia. Similarly, Americans were misled by an AI-generated voice resembling Joe Biden urging them not to vote. Read more about deepfakes and the AI Act in this article.
Copyright and portrait rights
Who owns content generated by AI? Legally, this is a gray area, creating a complex situation for creative professionals. They must take action to ensure their work is not included in language models. Learn how to work with AI tools while retaining ownership of your (artistic) work in this article.
Transparency about AI usage
The European AI Act requires transparency about AI usage. Before this legislation came into effect, photographer Boris Eldagsen won a Sony World Photography Award with an AI-generated image. He refused the prize, revealing that the image had been created with AI, in order to make a statement and spark a discussion about AI's impact on photography.
3. Major societal questions
Responsibility and liability
Who is responsible for damage caused by AI? When a self-driving Uber car was involved in a fatal accident in 2018, it was unclear who was liable: the safety driver, Uber, or the developers of the AI. Channel 1 aims to be the first news channel run entirely by AI. Who is liable if its news is inaccurate?
Concentration of power
Developing language models for AI tools requires significant investments. As a result, power lies mainly with a few large, often American, tech companies such as OpenAI, Google, Microsoft, and xAI, which dominate the European market. What are the implications if a limited number of organizations dictate AI developments?
Human autonomy
AI systems are making more and more decisions on our behalf. What does this mean for our human autonomy? Which decisions should AI be allowed to make, and which should it not? The Cambridge Analytica scandal demonstrated how social media data and algorithms can influence our voting behavior.
How does your organization handle AI?
Get started right away with practical tools. Use a step-by-step guide to create an AI policy that aligns with your organization's vision and values.










