
The AI Delusion: An Unbiased General Purpose Chatbot

Can AI ever be unbiased? 


As AI systems become more integrated into our daily lives, it's crucial that we understand the complexities of bias and how it impacts these technologies. From chatbots to hiring algorithms, the potential for AI to perpetuate and even amplify existing biases is a genuine concern. 


Bias refers to a prejudice or inclination towards a particular perspective, often at the expense of alternative viewpoints. It's a concept that is deeply ingrained in the human experience.


Humans are full of bias. 


Our world is shaped by a combination of objective and subjective truths, each influencing our perceptions and decision-making processes. Bias can manifest in various forms, from the explicit and intentional to the implicit and unconscious. 


Explicit biases are those that are overtly expressed, such as a preference for a particular political party or a belief in the superiority of one race over another. Implicit biases, on the other hand, are those that we may not even be aware of, but that nonetheless shape our attitudes and behaviours. These biases can be influenced by a wide range of factors, including our upbringing, cultural background, and personal experiences.


To understand the nature of bias, it's important to distinguish between objective and subjective truths. Objective truths are those that can be verified independently of personal opinions. There are four main types of objective truths:


1. Mathematical and logical truths (e.g., 2 + 2 = 4)

2. Scientific facts (e.g., water boils at 100 degrees Celsius at sea level)

3. Empirical observations (e.g., the sky is blue)

4. Tautologies (e.g., a bachelor is an unmarried man)


While these truths are considered objective, it's important to note that even some of them can be subject to debate and interpretation. Court trials often turn on contested empirical evidence, and philosophers such as Descartes have questioned the reliability of sensory observation. Scientific facts, while grounded in empirical evidence, can also be revised as new evidence emerges or as our understanding of the world evolves.


Subjective truths are those that are based on personal opinions, beliefs, and experiences. These truths are shaped by factors such as culture, upbringing, and education, making them inherently biased. For example, the statement "chocolate is the best flavour of ice cream" is a subjective truth, as it is based on individual taste preferences. Similarly, political ideologies, religious beliefs, and moral values are all examples of subjective truths that vary across individuals and cultures.


Given the pervasiveness of subjectivity in human experience, it's clear that the quest for unbiased AI is a complex and likely unattainable goal. As long as AI systems are created and trained by humans, they will inevitably inherit some of our biases, whether conscious or unconscious.


This raises the question: whose bias will AI represent? 


Totalitarian and authoritarian governments are likely to create AI systems that align with their ideological biases, while even democratic countries may seek to control the bias of AI, for example to protect certain groups or limit hate speech. Completely eliminating bias in AI seems to be an impossible task.


The data used to train AI systems can itself be biased, reflecting historical and societal inequalities. For example, if an AI system is trained on job application data from a company with a history of discriminatory hiring practices, the AI may learn to perpetuate those biases in its own decision-making.
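To make that mechanism concrete, here is a minimal sketch in Python using entirely synthetic data and scikit-learn (not any real hiring system): a model trained on historically skewed decisions ends up penalising group membership itself.

```python
# A minimal sketch (synthetic data, not a real hiring system) of how a model
# trained on skewed historical decisions reproduces that skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Feature 0: a genuinely job-relevant score; feature 1: group membership (0/1).
score = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Historical labels: hiring depended on the score, but group-1 candidates were
# systematically penalised - the bias we do NOT want the model to learn.
logits = 1.5 * score - 2.0 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on the group feature comes out strongly negative:
# the model has absorbed the historical discrimination.
print("coefficients [score, group]:", model.coef_[0])
```

Nothing in the code tells the model to discriminate; the bias arrives entirely through the training labels, which is exactly why "the data is neutral" is never a safe assumption.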


What can we do in the face of this inevitability? 


AI companies have work to do. 


Developing and deploying AI systems should be accompanied by robust ethical frameworks and guidelines. These frameworks should prioritise transparency, accountability, and fairness, ensuring that the biases within AI systems are identified, acknowledged, and mitigated as far as possible. That means involving diverse and inclusive teams in the development process, rigorously testing and auditing AI systems for bias, and monitoring and adjusting these systems as they are deployed in real-world contexts.
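One concrete form such auditing can take is measuring outcome gaps between groups. The sketch below, using hypothetical predictions and group labels, computes per-group selection rates (a simple demographic-parity check); a large gap is a signal to investigate, not proof of unfairness on its own.

```python
# A minimal sketch of one common bias audit: comparing a model's selection
# rates across groups. The predictions and group labels are hypothetical.
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Fraction of positive predictions (e.g. shortlistings) for each group."""
    return {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}

# Hypothetical outputs from a screening model (1 = shortlist, 0 = reject).
preds = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])
grps = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print(selection_rates(preds, grps))  # {'A': 0.8, 'B': 0.2} - a gap worth investigating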


What can we users do?


The answer lies in what we ask of AI systems and how we choose to integrate them into our decision-making processes. By being aware of the potential biases embedded within AI, we can formulate our questions and requests in a manner that challenges and counteracts these biases. This requires a critical and reflective approach to engaging with AI, one that acknowledges the limitations of both human and artificial intelligence in attaining true objectivity.


When prompting an image generation tool, be specific about the image you need. A generic prompt for an image of a doctor might well produce a white man. If that isn't what you want, say so explicitly in your prompt. Check out my PREPARE guidelines for writing specific prompts.
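Here is a minimal sketch of the same request made vague versus specific, assuming the OpenAI Python SDK (the idea carries over to any image-generation tool).

```python
# A sketch of a vague vs. specific image prompt, assuming the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

vague_prompt = "a doctor"  # leaves every detail to the model's defaults
specific_prompt = (
    "a middle-aged Black female doctor on a hospital ward, "
    "wearing a stethoscope, talking with an elderly patient"
)

result = client.images.generate(
    model="dall-e-3",
    prompt=specific_prompt,  # the specificity is doing the de-biasing work
    size="1024x1024",
    n=1,
)
print(result.data[0].url)
```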


When talking to a tool such as ChatGPT, you might consider including something like: "Make an effort to consider the question from a wide range of perspectives and lived experiences. Be as objective and unbiased as possible. If the question touches on issues where there are differing views, acknowledge those views even-handedly and discuss them from a neutral stance. Note any limitations or gaps in available information that may impact the comprehensiveness of the response."
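If you talk to these tools through code rather than the chat interface, the same instruction can be baked in as a system message. A sketch, again assuming the OpenAI Python SDK; the model name is just an example.

```python
# A sketch of wrapping the de-biasing instruction around every question,
# assuming the OpenAI Python SDK; the pattern works with any chat-style model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

debias_instruction = (
    "Make an effort to consider the question from a wide range of perspectives "
    "and lived experiences. Be as objective and unbiased as possible. If the "
    "question touches on issues where there are differing views, acknowledge "
    "those views even-handedly and discuss them from a neutral stance. Note any "
    "limitations or gaps in available information that may impact the "
    "comprehensiveness of the response."
)

response = client.chat.completions.create(
    model="gpt-4o",  # example choice; use whichever model you have access to
    messages=[
        {"role": "system", "content": debias_instruction},
        {"role": "user", "content": "Is remote work better than office work?"},
    ],
)
print(response.choices[0].message.content)
```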


The aim isn't a response that is 100% free from bias, which may be impossible, but one whose remaining bias you are better placed to spot and evaluate.


Addressing the issue of bias in AI requires a broader societal conversation about the values and principles we want these systems to embody. This conversation should involve stakeholders from all sectors, including government, industry, academia, and civil society. By engaging in open and transparent dialogue, we can work towards developing AI systems that align with our shared values and promote the common good.


This dialogue should also extend to the individuals and communities most affected by AI bias. Those who have been historically marginalised and discriminated against should have a seat at the table in shaping the future of AI. Their experiences and perspectives are crucial in identifying and addressing the ways AI can perpetuate and amplify existing inequalities.


We must invest in education and training to help individuals develop the critical thinking skills necessary to navigate a world increasingly shaped by AI. This includes teaching people how to identify and question the biases present in AI systems, and how to advocate for more equitable and inclusive technologies.


The quest for unbiased AI is, then, a complex and perhaps unattainable goal. However, by recognising the interplay of objective and subjective truths in our world, and by actively engaging with AI in a manner that challenges its biases, we can strive towards more balanced and equitable outcomes.


The future of AI lies not in the elimination of bias, but in our ability to navigate and mitigate its impact through critical thinking, responsible engagement, and ongoing dialogue. As we continue to develop and integrate AI systems into our lives, it's crucial that we remain vigilant and proactive in addressing the ethical implications of these technologies.


By working together and staying committed to the principles of transparency, accountability, and fairness, we can create a future in which the benefits of AI are shared by all. This will require hard work, difficult conversations, and a willingness to challenge the status quo. But if we approach this challenge with empathy, humility, and a commitment to justice, we can build an AI-powered world that truly works for everyone.


Summary: 5 Actionable Steps

  1. Be specific in your AI prompts to counteract potential biases.

  2. When asking an AI a question, prompt it to consider diverse perspectives, be objective, acknowledge differing views, and note limitations.

  3. Develop your own critical thinking skills to identify and question biases in AI systems.

  4. Help your students to develop their critical thinking skills to identify and question biases in AI systems.

  5. Approach AI with the understanding that mitigating bias is an ongoing process requiring responsible engagement from all stakeholders.
