Workday is having a rough month. The HR software giant just confirmed hackers breached a third-party database, though they claim no customer data was involved. More damaging is the federal lawsuit that's about to expose every company using their AI hiring tool.
Derek Mobley applied to over 100 jobs through Workday portals. Black, over 40, living with anxiety and depression. Rejected repeatedly—sometimes within minutes, sometimes at 2 a.m. when no human was awake to review applications.
The culprit? HiredScore, an AI tool that ranks and filters candidates before recruiters see them. Mobley sued for discrimination. The court said the case can proceed and expanded it to cover thousands of other applicants processed by the same algorithm.
Now a federal judge has ordered Workday to hand over a list of every employer that used HiredScore. That list is about to become a litigation roadmap.
It's worth looking at this in a global context, because if this can of worms opens in the US, it will spread. The EU AI Act, which becomes fully applicable by August 2026, classifies AI hiring systems as "high-risk" and mandates similar bias mitigation, human oversight, and accountability measures. While the EU distributes responsibility more evenly between vendors and employers, the core message is the same: you can't hide behind your software vendor anymore.
Companies betting they're shielded because they don't control the algorithm are wrong. The court doesn't care if you outsourced the bias—you're still liable for discriminatory hiring. California's rules kick in October 1, 2025, requiring bias audits and human oversight. The EU's framework demands "meaningful human oversight" and continuous risk management, with fines up to €35 million or 7% of global turnover.
For companies now exposed to lawsuits they never signed up for when they inked that SLA with Workday, this is a genuinely awkward position. Landing on that litigation roadmap means brand damage, potential C-level changes, and the very real risk of the stock price heading south.
Map every AI system touching your hiring process. Most companies can't even list what they're using, let alone audit it. The "we didn't know" defence won't work when the pattern is obvious—rejections within minutes, middle-of-the-night automated decisions.
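To make "map" concrete: the output can be as simple as a machine-readable register of every tool, who supplies it, what it decides, and when it was last audited. Here's a minimal sketch in Python; every tool name, vendor, and field is illustrative, not an actual Workday or HiredScore integration:

```python
# A minimal AI-system register for a hiring pipeline.
# All names and values are illustrative; the point is that each tool,
# its vendor, its decision power, and its last bias audit are written
# down somewhere auditable.
HIRING_AI_REGISTER = [
    {
        "tool": "resume_screener",          # hypothetical tool name
        "vendor": "ExampleVendor Inc.",
        "decides": "ranks and filters applicants before human review",
        "auto_rejects": True,               # rejections with no human in the loop
        "last_bias_audit": "2025-03-01",
        "human_override": "recruiter can pull any filtered candidate back",
    },
    {
        "tool": "screening_chatbot",
        "vendor": "AnotherVendor Ltd.",
        "decides": "scores candidate responses to screening questions",
        "auto_rejects": True,
        "last_bias_audit": None,            # None here is exactly the problem
        "human_override": "none documented",
    },
]

# Flag the two patterns the Mobley case turns on: automated rejections
# and tools nobody has ever audited.
for entry in HIRING_AI_REGISTER:
    if entry["auto_rejects"]:
        print(f"auto-rejects candidates: {entry['tool']} ({entry['vendor']})")
    if entry["last_bias_audit"] is None:
        print(f"never audited: {entry['tool']} ({entry['vendor']})")
```

A spreadsheet works just as well; what matters is that the list exists before a subpoena asks for it.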
Test your tools for bias against protected groups. Document everything. The EU requires continuous monitoring throughout the AI system's lifecycle, including testing for impacts on vulnerable groups. California wants bias audits. Both want paper trails.
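One concrete starting point is the "four-fifths rule" that US regulators have long used as an adverse-impact screen: compare each protected group's selection rate to the highest group's rate and flag any ratio below 0.8. A minimal sketch, assuming you can export per-applicant screening outcomes from your tools; the group labels and sample numbers are invented:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Share of applicants in each group who advanced past AI screening.
    `outcomes` is an iterable of (group, advanced) pairs."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, advanced in outcomes:
        totals[group] += 1
        passed[group] += 1 if advanced else 0
    return {g: passed[g] / totals[g] for g in totals}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate, the conventional adverse-impact red line."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Invented sample: half of under-40 applicants advance, a quarter of 40+ do.
sample = (
    [("under_40", True)] * 50 + [("under_40", False)] * 50 +
    [("40_and_over", True)] * 25 + [("40_and_over", False)] * 75
)
print(four_fifths_flags(sample))  # {'40_and_over': 0.5}, well under 0.8
```

A ratio below 0.8 doesn't prove discrimination, and one above it doesn't disprove it; it's a screening heuristic. Run it on every release of every tool and keep the outputs, and you've built exactly the paper trail both regulators want.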
Establish real human oversight. Not rubber-stamping AI recommendations, but understanding how these systems make decisions and when to override them. The EU effectively prohibits fully automated hiring decisions and requires that AI assist rather than replace human judgement.
We've sleepwalked into a world where algorithms screen out qualified people before any human sees their application. The Workday case isn't about stopping innovation; it's about accountability. Companies that audit their AI, document their processes, and maintain human control will benefit from these technologies. Those that don't are headed for expensive days in court.
Time to find out what your AI is actually doing—before the lawyers do.