Amanda Askell is a Scottish philosopher and AI researcher whose work has shaped how artificial intelligence systems express personality. Since 2021 she has worked at Anthropic, where she heads the personality alignment team responsible for training the Claude models to exhibit positive character traits such as curiosity and empathy. In 2024 she was named to the TIME100 AI list in recognition of her role in defining the ethical boundaries of large language models. Before joining Anthropic, Askell was a Research Scientist at OpenAI, where she co-authored the GPT-3 paper. She left OpenAI in 2021 over concerns that the company was not prioritizing AI safety, a decision that shaped the trajectory of her career and the broader field of AI alignment.
Infinite Ethics And The PhD
Askell's academic foundation was built on some of the most abstract and difficult problems in moral philosophy. She received a BPhil in Philosophy from the University of Oxford before earning a PhD in Philosophy from New York University in 2018. Her doctoral thesis, Pareto Principles in Infinite Ethics, argued that rankings of worlds containing infinitely many agents, when constrained by certain plausible axioms, create puzzles for a wide range of ethical theories. The work explored how to make ethical decisions when the number of affected individuals is infinite, a scenario in which traditional utilitarian calculations break down: when two worlds each contain infinite total welfare, simply summing utility cannot rank one above the other. Grappling with mathematical and philosophical problems simultaneously would later prove an essential skill for training AI systems to navigate moral dilemmas. Her research has been cited over 170,000 times, and she has published more than 60 papers, establishing her as a leading voice at the intersection of ethics and technology.

The Race For Safety
After completing her PhD, Askell joined OpenAI in November 2018 as a Research Scientist on the policy team. There she focused on development races between AI organizations and how they might avoid becoming adversarial, as well as the intersection of policy questions and AI safety. Her work analyzed how companies compete to build more powerful models and the risks that arise when safety is sacrificed for speed. She co-authored the GPT-3 paper, published as a pre-print on 28 May 2020, which became one of the most influential documents in the history of artificial intelligence. Despite these contributions, Askell left OpenAI in 2021, citing concerns that the company was not prioritizing AI safety. The departure was a deliberate move toward an organization that aligned more closely with her vision of safe and beneficial AI development.