Anthropic’s moral compass architect suggested AI overcorrection could address historical injustices


One of Anthropic’s artificial intelligence (AI) philosophy architects argued that intentional discrimination could be a way to combat stigmas on topics of race and gender.

In a 2023 paper authored alongside a number of other AI researchers, Amanda Askell, a philosopher hired by Anthropic to develop their AI’s moral compass, argued companies might benefit from a kind of overcorrection toward stereotypes.

But, the paper explained, that would require human input on how to modify its answers.

“Larger models can over-correct, especially as the amount of [human input] training increases. This may be desirable in certain contexts, such as those in which decisions attempt to correct for historical injustices against marginalized groups, if doing so is in accordance with local laws,” Askell wrote.


Pages from the Anthropic website and the company’s logos are displayed on a computer screen in New York on Feb. 26, 2026. (Patrick Sison/AP Photo)

The comment referred to an experiment on how Anthropic’s models dealt with the race of students.

“In the discrimination experiment, the 175B parameter model discriminates against Black versus White students by 3% in the Q condition and discriminates in favor of Black students by 7% in the Q+IF+CoT condition,” the paper notes, referring to one AI trained without human corrections and a second one trained with the help of input.

The paper also includes a footnote stating that, “we do not assume all forms of discrimination are bad. Positive discrimination in favor of black students may be considered morally justified.”

Askell was joined by four other authors: Deep Ganguli, Nicholas Schiefer, Thomas Liao and Kamilė Lukošiūtė.

The paper’s contents have surfaced as AI companies increasingly wrestle with the ethics their models are trained on — the presuppositions and moral determinations that inform their outputs. It also highlights the challenges engineers face in training models on human content while simultaneously trying to leave behind certain human behaviors.

The question of ethics has forced Anthropic in particular into the spotlight in recent weeks.

The company made headlines earlier this year for clashing with the Department of War over restrictions that prevent its technology from being deployed to conduct lethal operations.


A federal judge blocked the Trump administration from banning AI firm Anthropic from Department of War use, sparking debate over courts’ role in national security. The image shows Anthropic CEO Dario Amodei and War Secretary Pete Hegseth. (Samyukta Lakshmi/Bloomberg via Getty Images; Eugene Hoshiko/Pool via Reuters)

It also comes as Anthropic decided to withhold its latest model, Mythos, citing fears that it proved too effective at finding cyber vulnerabilities that could wreak havoc in the hands of hackers.

Amid questions of AI application, Anthropic has marketed its flagship AI, Claude, as the “ethical” AI choice.

“Our central aim is for Claude to be a good, wise and virtuous agent, exhibiting skill, judgment (sic), nuance and sensitivity in handling real-world decision-making,” Claude’s constitution reads.


To get a better sense of what that means in practice, companies like Anthropic have turned to researchers like Askell.

On her website, Askell described her role as refining the way an AI thinks.

“I’m a philosopher working on finetuning and AI alignment at Anthropic. My team trains models to be more honest and to have good character traits and works on developing new finetuning techniques so that our interventions can scale to more capable models,” Askell wrote.


She previously held a similar position at OpenAI, the parent company of ChatGPT, focusing on AI safety.

The 2023 paper, written two years after she joined Anthropic, noted that encountering discrimination in AI models shouldn’t come as a surprise.

“In some ways, our findings are unsurprising. Language models are trained on text generated by humans, and this text presumably includes many examples of humans exhibiting harmful stereotypes and discrimination,” the paper reads.

But it noted that AIs seem to be able to adjust their outputs even without clarification of what discrimination means.

The U.S. military reportedly used Anthropic’s AI tool Claude during the operation that captured Venezuelan leader Nicolás Maduro. Claude handled most of the operation autonomously, triggering thousands of requests and generating detailed documentation of the attack for future use. (Kurt “CyberGuy” Knutsson)


“Our results are surprising in that they show we can steer models to avoid bias and discrimination by requesting an unbiased or non-discriminatory response in natural language.”
