ETHICS AND AI FOR IMPACT: FAQ

Forget the computer science: have you ever tried untangling the ethics of Artificial Intelligence?

We pooled all our ethics questions, researched them – and here’s the result:

an epic FAQ on the ethics of AI!

This article covers general ethical questions about AI, oriented specifically towards our community of nonprofits, NGOs, charities and social enterprises. If you have a question that’s not answered here, please add it in the comments. We want to make this list as useful as possible for everyone. Thanks!

SUBJECTS COVERED BELOW:

Privacy and data handling
Bias
Imagery
Safeguarding vulnerable community members
Taking people’s jobs
Stealing human intellectual property
Inaccuracy & false information
AI in education
Wasting our time and attention
Decision making for organisations

Privacy and data handling

What happens to the information I put into an AI chatbot?

You should assume that whatever you feed into an AI platform goes to that company’s servers and may be used to train its models further. This means that if you feed in a text you’ve written, it can be used to generate answers to other people’s questions. Do not put sensitive information, intellectual property, or personal contact details into an AI platform.

Could I accidentally infringe the GDPR? 

Yes. For example, if you paste people’s contact details or personal data into an AI chatbot, you could be infringing the GDPR.
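
One practical mitigation, if you do need AI help with text that contains personal details, is to redact those details first. Here is a minimal sketch in Python (standard library only) of what such a redaction step could look like; the regex patterns and the redact helper are illustrative assumptions, not a complete anonymisation solution:

    import re

    # Illustrative patterns only; real anonymisation needs far more
    # coverage (names, addresses, ID numbers, and so on).
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact(text: str) -> str:
        """Replace obvious personal contact details with placeholders
        before the text is sent to a third-party AI service."""
        text = EMAIL.sub("[EMAIL]", text)
        text = PHONE.sub("[PHONE]", text)
        return text

    print(redact("Contact Jane at jane.doe@example.org or +44 7700 900123."))
    # -> Contact Jane at [EMAIL] or [PHONE].

Note that this only catches the easy cases: a person’s name, for example, passes straight through, which is why the safest rule remains not to paste personal data in at all.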

Bias

What do concerns about AI bias actually mean?

Bias happens when an AI system produces results that are systematically prejudiced or inaccurate. Bias comes from various factors connected with how the AI is built and trained – including assumptions made during the development process, prejudices in the training data (e.g. millions of unsifted online articles or internet posts, which may not accurately represent reality or public sentiment), and design errors.

Am I at risk of being fed biased information by an AI chatbot? How can I know what to trust?

Yes: answers from chatbots like ChatGPT can be biased. This article from Brookings explains some of the political bias found in ChatGPT (May 2023). Always check the information served by an AI chatbot before you build beliefs or arguments around it.

However, AI tools could also help you improve your or your organisation’s decision making by removing or reducing human bias. As this 2019 article in Harvard Business Review explains: “Machine learning systems disregard variables that do not accurately predict outcomes (in the data available to them). This is in contrast to humans, who may lie about or not even realize the factors that led them to, say, hire or disregard a particular job candidate. It can also be easier to probe algorithms for bias, potentially revealing human biases that had gone unnoticed or unproven.”
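
To illustrate what “probing algorithms for bias” can look like in practice, here is a minimal sketch in Python, using invented data: it compares the rate at which a hypothetical automated screening tool approves applicants from two groups. The 80% “four-fifths” threshold below comes from US employment-audit practice; everything else is assumed for illustration:

    # Invented records from a hypothetical screening tool.
    records = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]

    def approval_rate(group: str) -> float:
        """Share of applicants in a group that the tool approved."""
        members = [r for r in records if r["group"] == group]
        return sum(r["approved"] for r in members) / len(members)

    for g in ("A", "B"):
        print(f"Group {g}: {approval_rate(g):.0%} approved")

    # The "four-fifths rule" flags a concern when one group's rate
    # falls below 80% of another's.
    ratio = approval_rate("B") / approval_rate("A")
    print(f"Disparity ratio: {ratio:.2f}")  # 0.50 here: worth investigating

Checks like this don’t prove discrimination on their own, but they make a system’s behaviour visible in a way that human decision-making rarely is.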

Am I at risk of contributing to the bias within AI systems, based on what prompts I feed it?

Your prompts may be used by AI companies to refine their models. So potentially, yes: whatever you feed into an AI system contributes – in a very small way – to how the machines analyse future data. But even a heavy AI user contributes only a tiny proportion of the overall language an AI system learns from, so the impact is likely to be negligible.

One way you can help reduce bias in commonly used chat AIs is to fill in the feedback surveys and flag any instances of inaccuracy or bias that you spot. This information is also – according to the companies that own them – used to refine and improve the chat systems.

Imagery

Where do AI images come from? Are they ‘real’ faces?

AI image generators don’t paste real faces into their designs. They generate images based on patterns learned from millions of images that were (generally) scraped off the internet.

But there are still ethical concerns over copyright. The founder of Midjourney, for example, admitted to the media that the millions of images fed into its models were used without the consent of the original human creators. A team of artists has brought lawsuits against the AI image generators Midjourney and Stable Diffusion on this basis.

Is it ethical to use AI-generated images to illustrate my organisation’s work?

Image-generating platforms can create images so realistic that they look like photographs. This means you can imagine almost any scene and, with some careful prompting, get an image that makes it look as though that thing actually happened. This can be unethical: it can mislead people into thinking your organisation has done something it hasn’t, by showing what look like real photos of events that never happened.

If you are satisfied that the copyright and intellectual property issues (see above) don’t apply to your use, you could consider using AI for illustrations rather than photorealistic images.

Safeguarding vulnerable community members

Are there controls in place to prevent AI chat / image generators from serving up shocking, inappropriate content for our communities of vulnerable users / children?

The mainstream AI companies all have policies in place to block inappropriate content such as violence, pornography, and abusive or threatening language or images. Most AI users have never been served inappropriate content, and there are mechanisms you can use to report it if you are.

However, the risk remains that a platform’s filters might not catch something you feel is inappropriate. So it is wise to select your platform carefully and, where possible, to supervise children or vulnerable users when they are using live AI systems.

Despite these protections, it’s also important to know that AI tools are being used inappropriately outside of the mainstream – for example, this Washington Post article (June 2023) reveals that AI-generated child sexual abuse imagery is rife on the dark web. It cites US Justice Department officials who combat child exploitation, saying "such images still are illegal even if the child shown is AI-generated." This position has yet to be tested in court.

Should children be allowed to access AI platforms?

A growing number of education technology (edtech) companies are using AI systems to power chatbots and other personalised education software that children are already using in many countries around the world. Parents and teachers may wish to ask their schools which AI systems are in use and decide how far they trust and feel comfortable with them.

UNICEF’s AI for Children project has published policy guidance outlining recommendations for building AI policies and systems that uphold child rights. This rich guidance offers nine requirements for child-centred AI:

  • Support children’s development and well-being
  • Ensure inclusion of and for children
  • Prioritize fairness and non-discrimination for children
  • Protect children’s data and privacy
  • Ensure safety for children
  • Provide transparency, explainability, and accountability for children
  • Empower governments and businesses with knowledge of AI and children’s rights
  • Prepare children for present and future developments in AI
  • Create an enabling environment

The policy guidance is complemented by guides for parents, teens, and organisational leadership.

You may also want to check out the privacy policy of AI platforms you’re considering using with children. For example, here’s a quote from the OpenAI privacy policy: “OpenAI does not knowingly collect Personal Information from children under the age of 13. […] If you are 13 or older, but under 18, you must have consent from your parent or guardian to use our Services.”

Taking people’s jobs

Should I consider boycotting AI in order to safeguard real human jobs?

The World Economic Forum said in April 2023 that it expects AI and other technologies to drive a net loss of 14 million jobs by 2027. (Cited, for example, in this CNN article.) While companies will need to create jobs to develop and run AI tools (e.g. data analysts and scientists, machine learning specialists and cybersecurity experts), they will also eliminate jobs in record-keeping and administrative areas, such as data entry clerks and executive secretaries.

Boycotting AI on these grounds is unlikely to be enough to reverse the economically driven trend towards job automation. For many of us, the ability to use AI tools will be crucial to our future employment. It may make sense to build these skills into your or your organisation’s professional development frameworks, in order to future-proof your community for employment.

Other ways to safeguard human workers’ wellbeing in an AI world could include paying a living wage, reducing work hours, and connecting salaries to impact achieved, rather than hours worked.

Stealing human intellectual property

Do AI image generators steal human creatives’ work?

As explained under ‘Imagery’ above, AI image generators don’t paste original artworks into the images they make; they generate images based on patterns learned from millions of images that were (generally) scraped off the internet, often without the consent of the original creators. A team of artists has brought lawsuits against the AI image generators Midjourney and Stable Diffusion on this basis.

Who is the ‘artist’ of an AI-created image or text? 

This philosophical and legal question remains unresolved, both in law and in culture. Prompt engineers could be considered artists, using considerable skill to instruct AI systems to create a story or image to their specifications. Does that mean the human owns the intellectual property rights? It’s difficult to find a clear and credible answer at the time of writing, but this analysis by the University of Portsmouth School of Law covers many of the questions and possible answers (from a UK legal perspective).

Inaccuracy & false information

What ethical and safety concerns arise from having AI-driven algorithms within social media or other content platforms?

UNICEF’s AI policy guidance summarises this really well:

“AI-driven recommendations for news stories, online community groups, friends and more are based on profiling – they feed people content based on their preferences, creating thought filter bubbles. AI can also be used to amplify disinformation and bias, endangering [people’s] ability to develop and to express themselves freely.”

AI in education

Is it ethical for school or university students to use AI tools to help them with schoolwork?

Plagiarism is always unacceptable, and teachers are using more tools to identify whether students’ work has been copied from online or other sources.

Whether AI can play a role in students’ research, grammar checking and other tasks is a judgement call for individuals and schools. One way of looking at the question is to say that using AI tools like ChatGPT, Bard and Bing for research is part of a 21st-century skill set that benefits students.

However, students in this podcast from The Daily conclude that AI-generated answers held them back from learning the very skills they went to university to acquire, especially critical thinking.

Clearly, students must follow the rules of their institution or education system: if AI-generated text is banned in coursework, for example, they should respect that and explore AI in other contexts.

How can I integrate AI into my work in education, ethically and safely?

Check out the European Union’s guidelines, published in September 2022, to help school leaders and teachers decide how to use AI in schools and universities.

They consider that safe AI must cover the following components:

  • Human agency and oversight
  • Transparency
  • Diversity, non-discrimination, and fairness
  • Societal and environmental wellbeing
  • Privacy and data governance

What does AI mean for the education system as a whole?

Market intelligence firm HolonIQ concludes from a 2023 survey of the AI/edtech industry that “AI is expected to have most impact on Testing and Assessment, followed by Language Learning, Corporate Training/ Upskilling and Higher Education.”

While there is enormous potential for AI to drive efficiency and quality improvements throughout the education sector, such tools require careful policy and planning to avoid some of the negative effects explored in this article.

Wasting our time and attention

Am I going to get real value out of AI tools, or am I just generating training material for tech companies while not actually getting more efficient or higher-quality work done?

Our friend Bilal Ghalib, prolific AI tinkerer, instructor and co-founder of Bloom, concludes that “the cost of using AI to make an impact often equals the cost of carrying out the work by ourselves.”

“It seems that AI for Impact is more a tool for enhancing quality, boosting creativity, and overcoming challenges rather than creating content to publish. It’s a valuable asset to augment human creativity, not replace it.”

Decision making for organisations

Are there any frameworks we can use to assess how and when we should use AI?

Try the RAFT Framework for Ethical AI.

… that’s a wrap!

That concludes our epic AI Ethics FAQ!

Please add any other questions in the comments below, and we’ll be pleased to include them.

Would you like purpose-driven, ethically aligned support with your AI strategy, impact goals, communications, or leadership skills? AMS is a communications and strategy studio for impact-driven organisations, like charities, NGOs, nonprofits, social enterprises, and more.

We keep costs low and impact high. Reach out if you’d like to chat anytime.