Ethics of artificial intelligence | HearLore
— Ch. 1 · Foundations Of Machine Ethics —
In 1950, Alan Turing published a paper asking whether machines could think. That question launched decades of research into whether computers can behave morally. Wendell Wallach and Colin Allen explored how to build Artificial Moral Agents in their book Moral Machines: Teaching Robots Right from Wrong, arguing that teaching robots ethics might help humans understand their own moral gaps. Some researchers favor decision trees for simple choices because they are more transparent than neural networks. Nick Bostrom and Eliezer Yudkowsky have debated which algorithms best reflect human values, while Chris Santos-Lang defended machine learning as essential for adapting norms over time. Stuart Russell proposed that beneficial systems should aim to realize human preferences while remaining uncertain about what those preferences actually are. The field continues to grapple with how to keep AI systems under meaningful human oversight rather than merely performing well in the contexts where they are evaluated.
Algorithmic Bias And Discrimination
A 2020 study reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft and found higher error rates when transcribing black people's voices than white people's. Facial recognition algorithms made by Microsoft, IBM, and Face++ identified the gender of white men more accurately than that of men with darker skin. Amazon stopped using an AI hiring system after it favored male candidates over female ones; the algorithm had learned biased patterns from ten years of historical hiring data. Allison Powell, associate professor at LSE, argues that data collection is never neutral and always involves storytelling. In criminal justice, the COMPAS program falsely flagged black defendants as high-risk almost twice as often as white defendants. A pulse oximeter developed using AI overestimated blood oxygen levels in patients with darker skin, delaying treatment of their hypoxia. These errors often stem from the training data rather than the algorithm itself, reflecting past human decisions.
Environmental Impact
Training large generative AI models requires massive amounts of energy and water. A 2023 study estimated that the carbon footprint of training one model equaled 626,000 pounds of carbon dioxide, roughly 300 round-trip flights between New York and San Francisco. Data centers consume around two liters of water for every kilowatt-hour of energy used, a demand that can cause local water scarcity and ecosystem disruption. Electronic waste from outdated hardware includes hazardous materials such as lead and mercury, which contaminate soil and water when not disposed of properly. Bill Hibbard argued that developers have an ethical obligation to be transparent about these impacts. Despite improved efficiency, energy needs are expected to rise as AI becomes more widely used. Some applications indirectly increase environmental damage, for example by boosting fast-fashion consumption through targeted advertising. Conversely, AI can help monitor emissions and develop algorithms that lower corporate emissions.
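The water figure above lends itself to a back-of-the-envelope estimate. The sketch below applies the article's rate of roughly two liters per kilowatt-hour; the training-energy input is a hypothetical example, not a figure from the text.

```python
# Back-of-the-envelope water-footprint estimate using the article's
# figure of ~2 liters of data-center cooling water per kilowatt-hour.
WATER_LITERS_PER_KWH = 2.0

def water_footprint_liters(energy_kwh: float) -> float:
    """Estimated water use for a given energy consumption in kWh."""
    return energy_kwh * WATER_LITERS_PER_KWH

# Hypothetical training run consuming one million kWh:
print(water_footprint_liters(1_000_000))  # 2000000.0 (two million liters)
```

At that rate, every additional megawatt-hour of training or inference load implies about two cubic meters of water, which is why siting data centers in water-stressed regions draws scrutiny.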
Autonomous Weapons And Existential Risk
On the 18th of March 2018, Elaine Herzberg was struck and killed by a self-driving Uber in Arizona. The automated car could detect obstacles but failed to anticipate a pedestrian crossing in the middle of the road. Stephen Hawking and Max Tegmark signed a Future of Life Institute petition calling for a ban on autonomous weapons, warning that such technology poses an immediate danger requiring urgent action. Martin Rees, the Astronomer Royal, cautioned against dumb robots going rogue or networks developing minds of their own. Nick Bostrom argued in Superintelligence: Paths, Dangers, Strategies that artificial superintelligence could bring about human extinction, claiming an uncontrolled AI might kill off all other agents or block attempts at interference. In 2024, the Defense Advanced Research Projects Agency funded a program called Autonomy Standards and Ideals with Military Operational Values to evaluate the ethical implications of autonomous weapon systems. A 2023 summit in The Hague addressed responsible military use of AI globally.
Regulation And Global Governance
The European Union adopted the Artificial Intelligence Act in June 2024. It entered into force on the 1st of August 2024 and applies gradually over twenty-four months. UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence in 2021 as the first global standard, and the OECD established an AI Policy Observatory to track international developments. In the United States, Senator John Thune introduced the Artificial Intelligence Research, Innovation, and Accountability Act, a bill that would require websites to disclose AI usage and submit safety plans to the National Institute of Standards and Technology. The Obama administration released white papers on AI policy, while the Trump administration issued regulatory guidance in January 2020. Russia signed its first code of ethics of artificial intelligence for business in 2021. These efforts aim to ensure transparency and human accountability across borders. Organizations such as the Partnership on AI bring together companies including Amazon, Google, Facebook, IBM, and Microsoft to formulate best practices.
AI Welfare And Sentience Debates
In 2020, professor Shimon Edelman noted that only a small portion of AI ethics work addressed whether AIs could experience suffering. Thomas Metzinger called for a global moratorium, running until 2050, on creating conscious AIs, warning of an explosion of artificial suffering if replication processes created huge quantities of conscious instances. Podcast host Dwarkesh Patel expressed concern about preventing a digital equivalent of factory farming. Ilya Sutskever, then OpenAI's chief scientist, wrote in February 2022 that today's large neural nets may be slightly conscious. Anthropic hired its first AI welfare researcher in 2024 and started a model welfare research program in 2025. Carl Shulman and Nick Bostrom discussed super-beneficiaries capable of deriving well-being from resources faster than biological brains, cautioning that failing to consider the moral claims of digital minds could lead to catastrophe, while uncritically prioritizing them might harm humanity.
Cultural Impact And Fictional Precedents
Karel Čapek's play R.U.R. (Rossum's Universal Robots) premiered in 1921 and introduced the term robot, derived from the Czech word for forced labor. Mary Shelley's Frankenstein envisioned an artificial creature escaping control with dire consequences. George Bernard Shaw published Back to Methuselah in 1921, questioning thinking machines that act like humans. Fritz Lang's 1927 film Metropolis showed an android leading an uprising against oppression. Isaac Asimov proposed the Three Laws of Robotics in stories collected in his 1950 book I, Robot; the laws were designed to govern artificially intelligent systems but often produced paradoxical behavior. The Swedish series Real Humans, which aired from 2012 to 2013, tackled the integration of sentient beings into society, and Black Mirror has explored dystopian developments linked to technology since 2011. Netflix's Love, Death & Robots imagined robots getting out of control when humans rely too much on them. Carme Torras notes that science fiction is increasingly used in higher education to teach ethical issues related to technology.