AI as an attorney? Student uses ChatGPT, Gemini to sue UW over alleged racial discrimination


About this article

A California man is using AI in place of a team of lawyers, claiming that the colleges that rejected his son's application, including the University of Washington, did so based on racial discrimination.

By Monica Nickelsburg · April 15, 2026

Stanley Zhong took a job as an AI engineer at Google after 16 of the 18 colleges he applied to rejected him. (Photo courtesy of Nan Zhong)

Stanley Zhong graduated from his Bay Area high school with a 4.42 GPA, a 1590 SAT score, and high rankings in several international coding competitions, according to his father. Nan Zhong liked his son's chances at the 18 colleges he applied to, including several University of California schools and the University of Washington.

"With all the credentials, everybody thought it's going to be a strong case," Nan Zhong said. "But in a fairly disappointing way, he was rejected by almost all of them. He was rejected by 16 out of the 18 programs."

His son decided not to go to college when Google offered him an AI engineering job that typically requires a doctorate.

"The contrast is a little bit hard to comprehend," Zhong said. "You have on one hand saying, this guy's as good as somebody with a Ph.D. degree... and on the other hand, there are all these colleges saying he's not qualified enough for undergrad admission."

RELATED: Some colleges scrap diversity questions from admissions essays. Will it change how students talk about themselves?

After speaking with other Asian American families with similar experiences, Zhong concluded discrimination was at work. He tried to enlist a law ...

Originally published on April 16, 2026. Curated by AI News.

