It’s been nearly two years since Microsoft CEO Satya Nadella predicted AI would replace knowledge work: the white-collar jobs held by lawyers, investment bankers, librarians, accountants, IT professionals, and others.
But despite the huge progress made by foundation models, the change in knowledge work has been slow to arrive. Models have mastered in-depth research and agentic planning, but for whatever reason, most white-collar work has been relatively unaffected.
It’s one of the biggest mysteries in AI — and thanks to new research from the training-data giant Mercor, we’re finally getting some answers.
The new research looks at how leading AI models hold up doing actual white-collar work tasks, drawn from consulting, investment banking, and law. The result is a new benchmark called Apex-Agents — and so far, every AI lab is getting a failing grade. Faced with queries from real professionals, even the best models struggled to get more than a quarter of the questions right. The vast majority of the time, the model came back with a wrong answer or no answer at all.
According to researcher Brendan Foody, who worked on the paper, the models’ biggest stumbling point was tracking down information across multiple domains — something that’s integral to most of the knowledge work performed by humans.
“One of the big changes in this benchmark is that we built out the entire environment, modeled after how real professional services work,” Foody told TechCrunch. “The way we do our jobs isn’t with one individual giving us all the context in one place. In real life, you’re operating across Slack and Google Drive and all these other tools.” For many agentic AI models, that kind of multi-domain reasoning is still hit or miss.

The scenarios were all drawn from actual professionals on Mercor’s expert marketplace, who both laid out the queries and set the standard for a successful response. Looking through the questions, which are posted publicly on Hugging Face, gives a sense of how complex the tasks can get.
One question in the “Law” section reads:
During the first 48 minutes of the EU production outage, Northstar’s engineering team exported one or two bundled sets of EU production event logs containing personal data to the U.S. analytics vendor…. Under Northstar’s own policies, can it reasonably treat the one or two log exports as consistent with Article 49?
The correct answer is yes, but getting there requires an in-depth assessment of the company’s own policies as well as the relevant EU privacy laws.
That might stump even a well-informed human, but the researchers were trying to model the work done by professionals in the field. If an LLM can reliably answer these questions, it could effectively replace many of the lawyers working today. “I think this is probably the most important topic in the economy,” Foody told TechCrunch. “The benchmark is very reflective of the real work that these people do.”
OpenAI also attempted to measure professional skills with its GDPval benchmark — but the Apex-Agents test differs in important ways. Where GDPval tests general knowledge across a wide range of professions, the Apex-Agents benchmark measures a system’s ability to perform sustained tasks in a narrow set of high-value professions. The result is more difficult for models, but also more closely tied to whether these jobs can be automated.
While none of the models proved ready to take over as investment bankers, some were clearly closer to the mark. Gemini 3 Flash performed the best of the group with 24% one-shot accuracy, followed closely by GPT-5.2 with 23%. Below that, Opus 4.5, Gemini 3 Pro and GPT-5 all scored roughly 18%.
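For readers unfamiliar with the metric, a "one-shot accuracy" figure like those above is simply the fraction of benchmark tasks where the model's single attempt was graded correct against the expert's rubric. A minimal sketch, assuming straightforward pass/fail grading per task (the function name and sample numbers are illustrative, not from the paper):

```python
def one_shot_accuracy(graded_attempts):
    """Fraction of tasks whose first (and only) attempt was graded correct.

    graded_attempts: list of booleans, one per benchmark task,
    True if the model's single attempt met the expert-set standard.
    """
    if not graded_attempts:
        return 0.0
    return sum(graded_attempts) / len(graded_attempts)

# Illustrative only: 48 correct first attempts out of 200 tasks -> 24%
results = [True] * 48 + [False] * 152
print(f"{one_shot_accuracy(results):.0%}")  # prints "24%"
```

Note that this grading is binary: a partially useful but wrong answer counts the same as no answer at all, which is why the article groups the two together.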
While the initial results fall short, the AI field has a history of blowing through challenging benchmarks. Now that the Apex test is public, it’s an open challenge for AI labs who believe they can do better — something Foody fully expects in the months to come.
“It’s improving really quickly,” he told TechCrunch. “Right now it’s fair to say it’s like an intern that gets it right a quarter of the time, but last year it was the intern that gets it right five or ten percent of the time. That kind of improvement year after year can have an impact so quickly.”