Your coworkers are fighting again.
It’s like a scene from The Office: people tattling, allegations of process violations, someone insisting they followed the checklist, and someone else insisting the checklist itself is wrong. This sort of complaint would usually go to Toby in HR, but in this case it’s the CIO who gets the call. Why?
One of the coworkers is an LLM-based agent. It flags its human counterpart’s work as sloppy, insists that steps were skipped, and highlights grammar issues and tone problems with confidence. And uncomfortably, it seems to flag certain people more than others, particularly non-native English speakers. Now the argument is about fairness, rather than correctness.
So who adjudicates the fight? Do you give your LLM unconscious bias training? Do you override it and risk undermining the system entirely? And what happens when the agent is right and the human really didn’t follow the process? This is the moment that most organizations haven’t prepared for.
For decades, CIOs have lived in a largely deterministic, binary world where systems either worked or didn’t. Logs were auditable, rules were explicit, and outputs could be confidently traced back to inputs. Many gravitated to the world of technology because it felt predictable and solid.
Now, the systems embedded in core workflows are probabilistic. They are black-box neural networks guided by reward functions rather than explicit rules, refined through feedback rather than reprogramming. You can influence them, but you can’t reliably explain, audit, or reproduce why they behaved the way they did.
This isn’t another infrastructure shift like the transition from on-prem to the cloud, which changed where work happened. This is a fundamental re-imagining of how work is executed and governed.
When AI agents start working alongside humans, the CIO inherits a new role: steward of a mixed workforce, part human and part non-human, operating under fundamentally different assumptions.
The old playbook breaks down, and a series of uncomfortable problems is likely to manifest. Processes that were once informal suddenly matter a great deal, ambiguity becomes dangerous, and undocumented assumptions stop working. If humans and agents don’t share a precise definition of success, disputes can’t be resolved cleanly.
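One way to make that shared definition concrete is to write success criteria in a form both humans and agents can check. The sketch below is purely illustrative: the `TaskContract` class, its field names, and the invoice-review example are hypothetical assumptions, not a reference to any real system or standard.

```python
from dataclasses import dataclass


@dataclass
class TaskContract:
    """A hypothetical, machine-checkable definition of 'done' for one workflow step,
    readable by humans and evaluable by agents alike."""
    task: str                        # what the step is
    required_steps: list[str]        # steps that must be completed, in order
    acceptance_criteria: list[str]   # objective checks a reviewer (human or agent) applies
    escalation_path: str             # who adjudicates when human and agent disagree


# Example: an invoice-review step where the analyst and the reviewing agent
# work from the same explicit checklist instead of undocumented assumptions.
invoice_review = TaskContract(
    task="Review vendor invoice",
    required_steps=[
        "Match invoice to purchase order",
        "Verify amounts against contract terms",
        "Record approval in the finance system",
    ],
    acceptance_criteria=[
        "All three steps are recorded with timestamps",
        "Discrepancies over $500 are flagged to the controller",
    ],
    escalation_path="finance-ops lead, then CIO office",
)
```

The point is not the particular fields but that the definition of success lives somewhere explicit, so a dispute between a human and an agent can be settled against the contract rather than against memory.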
Here are some suggestions for navigating this new reality:
The job description of the future CIO is going to look very different from that of the past. Keeping the infrastructure up will always be critical, but doing so in a world of probabilistic systems requires clearer process definition, stronger auditability, and far more transparency than most organizations operate with today.
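As one illustration of what “stronger auditability” could mean in practice, the sketch below logs each agent decision along with the inputs, model version, and rationale behind it. The function and its field names are assumptions made for illustration, not an established schema.

```python
import json
from datetime import datetime, timezone


def log_agent_decision(agent_id: str, task: str, inputs: dict,
                       decision: str, rationale: str, model_version: str) -> str:
    """Serialize one agent decision as an append-only audit record.

    Probabilistic systems can't be replayed deterministically, so the next best
    thing is capturing what the agent saw, what it decided, and which model
    produced the output. Field names here are illustrative assumptions.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "model_version": model_version,
        "task": task,
        "inputs": inputs,
        "decision": decision,
        "rationale": rationale,
    }
    return json.dumps(record)


# Example: record the agent's flag on a colleague's work so a human reviewer
# can later see exactly what was flagged and why.
print(log_agent_decision(
    agent_id="review-agent-01",
    task="process-compliance check",
    inputs={"document": "Q3 expense report", "checklist": "expense-v2"},
    decision="flagged: step 4 (manager sign-off) missing",
    rationale="No sign-off field present in submitted form",
    model_version="assistant-2025-06",
))
```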