Questions about Fairness (machine learning)

Short answers, drawn from the article.

When did the U.S. Civil Rights Act pass into law and how did it impact algorithmic fairness research?

The U.S. Civil Rights Act was passed into law in 1964, sparking intense debate in the scientific community about how to measure fairness and bias in decision-making. Researchers spent roughly the next decade trying to define what it would mean for an algorithm, or a human judge, to be fair, before these discussions largely vanished from mainstream academic discourse by the end of the 1970s.

What event triggered the resurgence of algorithmic fairness debates after 2016?

A sharp resurgence occurred only after 2016, when ProPublica published a report claiming that the COMPAS software used in US courts was racially biased. The report reignited decades-old questions about whether automated systems can ever be truly impartial, and it prompted tech companies to release tools for detecting and reducing bias, such as IBM's Python libraries and Google's guidelines.

How does English-centric data affect large language model responses regarding political ideologies?

Because training data are English-centric, sampling is systematically skewed: current large language models tend to present Anglo-American views as the default truth while downplaying non-English perspectives as irrelevant or as noise. Luo et al. demonstrated that this distorts the true coverage of topics and viewpoints available in the underlying repositories, so political perspectives embedded in other languages can disappear entirely from the generated text.

Which specific years saw major AI discrimination incidents involving image recognition and credit algorithms?

Google apologized in 2015 after its Photos app mistakenly labeled a black couple as gorillas, and in 2016 an international beauty contest judged by AI favored contestants with lighter skin because of biased training data. In 2018, three commercial gender-classification algorithms were found to be most accurate for light-skinned males and least accurate for dark-skinned females, while Twitter's image-cropping tool, released in 2020, preferred lighter-skinned faces over darker-skinned ones.

What are the three main criteria required for total fairness in machine learning classifiers?

Independence requires the classifier's prediction to be statistically independent of the sensitive characteristic, so the rate of positive classifications must be equal across the groups defined by that characteristic. Separation requires independence from the sensitive characteristic given the target value itself: under this definition, the true positive rate and false positive rate must be equal for every value of the sensitive characteristic. Sufficiency requires that the probability of actually belonging to each target class is equal for two individuals with different sensitive characteristics, given that they were predicted to belong to the same class.
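As a quick illustration (not from the article), the three criteria can be checked empirically for a binary classifier by comparing per-group rates; the function name `fairness_report` and the two-group encoding are hypothetical choices for this sketch:

```python
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Per-group rates for the three fairness criteria of a binary
    classifier. `group` is a boolean array splitting the data into
    two groups by the sensitive characteristic."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {}
    for name, g in (("group_a", group), ("group_b", ~group)):
        report[name] = {
            # Independence: rate of positive predictions in the group
            "pos_rate": y_pred[g].mean(),
            # Separation: true positive and false positive rates
            "tpr": y_pred[g & (y_true == 1)].mean(),
            "fpr": y_pred[g & (y_true == 0)].mean(),
            # Sufficiency: P(actually positive | predicted positive)
            "ppv": y_true[g & (y_pred == 1)].mean(),
        }
    return report
```

Equal `pos_rate` across groups indicates independence, equal `tpr` and `fpr` indicate separation, and equal `ppv` (together with its negative-prediction counterpart) indicates sufficiency; in practice the three generally cannot all hold at once except in degenerate cases.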