When the Ground Shifts Faster Than People Can Stand

A companion piece to The Human Side of the Machine Shift — exploring why AI disruption hits different minds differently, and how leadership can turn discomfort into team strength.

Xiomara Doerga recently wrote something that needed saying. In The Human Side of the Machine Shift, she laid out what most AI transformation narratives skip entirely: the organizational discomfort. The power inversions. The identity crises. The well-meaning but directionless leadership.

I want to pick up where she left off — not to repeat her argument, but to go one layer deeper. Because the discomfort she describes isn’t uniform. It doesn’t hit everyone the same way, and it doesn’t produce the same reactions. And until leadership understands why people react differently to the same disruption, the playbooks will keep failing.

We’ve Seen This Movie Before

The speed of AI adoption has a disorienting quality to it. But it’s not unprecedented. Other domains have lived through this exact pattern — rapid capability shift, role displacement, identity disruption — and they left us a blueprint we keep ignoring.

The auto industry is midway through one right now. Mechanical engineers who spent careers mastering combustion dynamics — the tolerances, the thermodynamics, the feel of a well-tuned drivetrain — are watching their organizations pivot to software-defined vehicles. The expertise didn’t become wrong. It became peripheral. The engineers who knew an engine by its sound are now in meetings about over-the-air update architecture. Some adapted. Many didn’t. Not because they couldn’t — but because no one acknowledged what they were being asked to give up.

Journalism went through it a decade earlier. The seasoned investigative reporter who spent six months on a single piece suddenly worked alongside someone publishing eight posts a day and getting ten times the engagement. Speed looked like competence. Depth looked like inefficiency. Newsrooms that rewarded the fast producers and sidelined the meticulous ones didn’t just lose talent — they lost the very thing that made their output trustworthy.

The DevOps shift did it to infrastructure teams. System administrators who could diagnose a failing server from the pattern of its blinking lights watched their craft dissolve into YAML files and Terraform scripts. “Infrastructure as code” sounds elegant in a conference talk. It sounds like a eulogy when it’s your twenty years of expertise being abstracted away.

The pattern is always the same: a new capability emerges, the people who adopt it fastest gain outsized visibility, and the people whose deep expertise should anchor the transition get quietly marginalized instead.

AI is doing this at double speed.

Two Kinds of Minds in the Same Storm

Here’s what I keep seeing in teams going through this shift. Not in everyone — people are more complex than categories — but two broad patterns keep showing up with enough consistency to be worth naming.

Research on cognitive diversity suggests that people process change, novelty, and uncertainty through genuinely different neurological wiring. This isn’t a personality preference; it’s how brains are built. And under the pressure of rapid technological change, it produces two recognizable patterns.

The deep systematizer. This person built their value through accumulated, structured expertise. They know the domain not as a collection of facts but as a system — the edge cases, the dependencies, the reasons behind the reasons. They’re the person who says “we tried that in 2019 and here’s why it didn’t work.” Their confidence is rooted in depth. Their identity is inseparable from their mastery.

The rapid adapter. This person thrives on novelty. New tools energize them. They’ll have three AI workflows running before the team has finished discussing whether to adopt one. They move fast, they experiment freely, and they’re often the first to produce visible output with new technology. Their confidence is rooted in velocity.

Both are valuable. Both are incomplete. And under the pressure of rapid AI change, both develop reactions that — left unaddressed — will quietly corrode your team from the inside.

The Negative Patterns

When the deep systematizer feels threatened, they don’t usually say “I’m afraid my expertise is becoming irrelevant.” Instead, the fear shows up as:

Rigorous skepticism that becomes obstruction. Every AI output gets scrutinized to an impossible standard. Valid concerns about reliability become a wall that nothing can pass through. The skepticism is real — and often technically correct — but the intensity is disproportionate to the actual risk.

Withdrawal from the conversation. They stop engaging with the new tools entirely. Not loudly, not in protest — they just quietly return to doing things the way they’ve always done them. On the surface it looks like passive resistance. Underneath, it’s self-preservation. If I don’t engage, I can’t be shown to be inadequate at this new thing.

Gatekeeping through complexity. Framing every task as too nuanced for AI, too context-dependent, too domain-specific. Sometimes that’s true. But when it’s the response to everything, it’s armor, not analysis.

When the rapid adapter feels empowered, the failure mode is different but equally damaging:

Confusing output with outcomes. Producing more doesn’t mean producing better. The rapid adapter can generate an impressive volume of AI-assisted work while skipping the validation that makes it trustworthy. They ship fast and move on, leaving a trail of technically functional but subtly wrong outputs behind them.

Dismissing concern as resistance. Anyone who questions the speed or quality gets labeled as “not getting it.” This creates a toxic dynamic where raising legitimate issues becomes socially expensive, so people stop doing it.

Shallow integration masquerading as transformation. Using AI to do the same things slightly faster, rather than rethinking what should be done at all. The appearance of transformation without the substance of it.

The Collision

Here’s where it gets really corrosive. These two patterns don’t just coexist — they amplify each other.

The systematizer sees the adapter shipping unvalidated work and thinks: This proves AI is dangerous and these people are reckless. Their skepticism hardens.

The adapter sees the systematizer blocking every initiative and thinks: This proves the old guard can’t keep up. Their dismissiveness grows.

The manager, caught in the middle, typically does one of two things. They either side with the visible output — rewarding the adapter because at least something is shipping — or they default to vague encouragement that lands differently for each type. “Just experiment with it!” sounds like freedom to the adapter and like chaos to the systematizer.

This is the dynamic Xiomara identified as “directionless support.” But it’s worse than directionless. It’s asymmetric. The same words from the same leader create opposite effects depending on who’s hearing them.

The journalism parallel is instructive here. The newsrooms that survived the digital transition weren’t the ones that let the bloggers run free or the ones that protected the old guard. They were the ones that built new structures — editorial workflows that paired speed with verification, roles that valued both reach and rigor, metrics that measured impact rather than just volume.

The auto industry is learning the same lesson. The companies navigating the transition best aren’t sidelining their mechanical engineers — they’re pairing them with software teams, because it turns out that knowing how a vehicle behaves at the physical level matters enormously when you’re writing the software that controls it.

Turning It Around

The reactions I’ve described aren’t character flaws. They’re predictable human responses to perceived threat, and they contain the seed of exactly what your team needs. The work of leadership is to redirect them.

For the deep systematizer — redirect the skepticism into evaluation

Their instinct to scrutinize AI output isn’t the problem. It’s the scope that’s wrong. Instead of letting them be the gatekeeper who blocks adoption, make them the evaluator who defines what “good enough” means.

Concretely: put them in charge of building evaluation criteria. What does a reliable AI-generated output look like in your domain? What are the failure modes? What needs human review and what doesn’t? This redirects the same critical thinking from “why this won’t work” to “how we’ll know when it does.”
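
If “evaluation criteria” sounds abstract, here is a minimal sketch of what the artifact might look like, written as Python purely for illustration. Every criterion, threshold, and name below is hypothetical; the point is the shape, not the specifics: a list of domain checks, each marked by whether failing it should block a release.

    # Hypothetical evaluation criteria for AI-generated output.
    # Every check and threshold here is invented for illustration;
    # the real list comes from the systematizer's domain knowledge.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Criterion:
        name: str                      # what is being checked
        check: Callable[[str], bool]   # True means the output passes
        blocks_release: bool           # is failure severe enough to stop shipping?

    CRITERIA = [
        Criterion("cites a source for every factual claim",
                  lambda text: "[source:" in text, blocks_release=True),
        Criterion("stays under the length budget",
                  lambda text: len(text.split()) <= 500, blocks_release=False),
        Criterion("contains no placeholder text",
                  lambda text: "TODO" not in text and "TBD" not in text,
                  blocks_release=True),
    ]

    def evaluate(ai_output: str) -> dict:
        """Run every criterion and report what needs human review."""
        failures = [c for c in CRITERIA if not c.check(ai_output)]
        return {
            "passed": not any(f.blocks_release for f in failures),
            "needs_human_review": [f.name for f in failures],
        }

    if __name__ == "__main__":
        draft = "The market grew 40% last year. TODO: verify the figure."
        print(evaluate(draft))   # fails two blocking checks, so passed=False

Whatever form it takes in your domain, the value is that the skepticism now lives in a shared, inspectable artifact rather than in one person’s objections.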

This is what the best DevOps transitions did. The sysadmins who knew every failure mode became the ones writing the monitoring. Their deep knowledge didn’t become irrelevant — it became the safety net under the new abstraction layer.

For the rapid adapter — redirect the velocity into discovery

Their instinct to move fast and try everything is genuinely valuable — at the right stage. Instead of letting speed become the measure of contribution, redirect it into structured exploration.

Concretely: make them the scout. Task them with evaluating new tools, building proof-of-concepts, mapping what’s possible. But — and this is critical — pair their exploration with the systematizer’s evaluation criteria. The adapter finds the possibilities. The systematizer stress-tests them. Neither role works without the other.

For the manager — name the dynamic, don’t manage around it

The single most effective thing a leader can do is make this pattern visible. Not in clinical terms, not as a diagnosis, but as a team reality:

“We have people on this team whose strength is deep domain knowledge and rigorous analysis, and people whose strength is rapid experimentation and adaptation. Both of these are assets. But under the pressure of this transition, they can work against each other if we’re not deliberate about how we combine them.”

Then build structures that make the combination explicit:

  • Pair them intentionally. Exploration sprints where adapters generate possibilities and systematizers evaluate them. Not sequentially — together.
  • Reward different things. Publicly recognize both the person who found a new capability and the person who identified where it fails. Make both visible.
  • Define “good speed.” Not the fastest possible, not the most cautious possible, but the fastest pace at which your team can produce trustworthy output. That pace is a negotiation between depth and velocity. Make the negotiation explicit rather than letting it play out as interpersonal conflict.
  • Update the language. If your team’s vocabulary still frames skepticism as resistance and speed as innovation, the words themselves are creating the problem. Skepticism in the right structure is quality assurance. Speed in the right structure is discovery. Name them accordingly.

The Real Work

Every industry that has navigated rapid technological disruption successfully has eventually arrived at the same conclusion: the technology is the easy part.

The hard part is creating an environment where people with fundamentally different relationships to change can do their best work together. Where depth isn’t threatened by speed. Where speed isn’t undermined by depth. Where leadership is specific enough to be useful and honest enough to be trusted.

Xiomara was right — organizational readiness is what’s missing. But organizational readiness isn’t a state you achieve. It’s a practice. And it starts with recognizing that the discomfort in your team isn’t a bug in the transition. It’s information about what your people need from you.

The question isn’t whether your team can adapt to AI. They can. The question is whether you can build a structure that lets all of them adapt in the way their minds actually work — rather than demanding they all adapt the same way.

The ground is shifting fast. The least you can do is help people find their footing.


This piece builds on ideas from Xiomara Doerga’s The Human Side of the Machine Shift. If you’re leading a team through AI adoption and recognizing these patterns, start with her piece for the structural framework, then come back here for the human dynamics underneath.
