
Agent Adoption Frictions — Three Psychological Barriers

TL;DR: AI agents differ from chatbots in that they don’t just suggest; they act. Wharton’s 2026 AI Agent Adoption Blueprint (Science Says × Wharton, drawing on surveys of a combined workforce of 700,000+ employees across Google, ServiceNow, Workato, Zapier, Wolters Kluwer, and Concentrix) finds that the barriers to adoption are not technical. They are three psychological frictions: perceived competence, trust, and delegation of control. The framework names what is actually blocking the rollout.

Why this framework matters

Most adoption discussions frame the question as “is the model good enough yet?”, which is a technology question. Wharton’s claim is that the technology is already adequate; the bottleneck is whether users will let the agent act on their behalf. This reframes the rollout problem from capability to consent.

The blueprint’s headline framing comes from McKinlay: “The problem with AI agents is not tech, it’s psychology.” For organizations deploying agents (and for the product designers building them), this means the practical levers for adoption are UX and communication, not capability benchmarks.

What makes an agent different from a chatbot

The framework’s first move is to clarify that agents are not just chatbots with tools attached:

| Dimension | Conversational AI (ChatGPT, Claude, Perplexity) | AI Agents |
| --- | --- | --- |
| Capability | Research and suggest | Research, decide, and act |
| Example | “Here are 3 hotels in Rome” | Books the hotel for you |
| User autonomy | High (the user decides) | Low (the agent executes) |
| Trust required | Low | High |

The shift from suggestion to action is what activates the three frictions. With a chatbot, the user remains the executor; mistakes are recoverable. With an agent, the user is delegating execution; mistakes are committed before they can be noticed.

The three frictions

1. Perceived Competence — “Do I believe it can actually do this?”

Users are reluctant to delegate to agents they perceive as incompetent, even when the agent is actually capable. The friction is the perception, not the capability.

Counterintuitive finding from the blueprint: agents with a friendly or warm tone are perceived as less competent. The cordial-assistant UX pattern that worked for chatbots actively damages adoption for agents. Users reading “Hi! I’d love to help with that!” interpret the persona as a fig leaf for limited capability.

The fix: agents should explain their reasoning and cite the criteria they used to reach decisions. Clarity builds confidence in a way that personality cannot. This is a UX prescription: structured reasoning visibility beats personality polish.
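
To make “structured reasoning visibility” concrete, here is a minimal sketch of what a reasoning-visible proposal could look like in code. The schema (action, criteria, reasoning, confidence) and the Python shape are illustrative assumptions, not a format prescribed by the blueprint.

```python
from dataclasses import dataclass, field


@dataclass
class AgentProposal:
    """Illustrative reasoning-visible proposal; field names are assumptions, not a standard."""
    action: str                    # what the agent intends to do
    criteria: list[str] = field(default_factory=list)  # decision criteria it applied
    reasoning: str = ""            # short, plain explanation of why this action
    confidence: float = 0.0        # self-reported confidence in [0, 1]

    def render(self) -> str:
        """Render in a structured, non-chatty format: criteria and reasoning, no persona."""
        lines = [f"Proposed action: {self.action}", "Criteria used:"]
        lines += [f"  - {c}" for c in self.criteria]
        lines += [f"Reasoning: {self.reasoning}", f"Confidence: {self.confidence:.0%}"]
        return "\n".join(lines)


proposal = AgentProposal(
    action="Book Hotel Artemide, Rome, 2 nights",
    criteria=["under the stated budget", "walking distance to the venue", "refundable rate"],
    reasoning="Cheapest refundable option that meets both the location and budget constraints.",
    confidence=0.85,
)
print(proposal.render())
```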

This connects to a well-known social-psychology finding called the Pratfall Effect — competence and likability operate as different signals, and overemphasis on warmth in professional contexts can undermine the competence signal. In agent UX, the design lesson is “structured > friendly” by default, particularly for high-stakes domains.

2. Trust — “Should I trust it with this specific task?”

Trust is task-specific, not agent-specific. A user might trust an agent to schedule a meeting but not to send an email; to book a hotel but not to make a purchase above a threshold. Vague agents that hide their limitations erode trust globally.

The fix: agents should be explicitly transparent about what they can and cannot do. Users have measurably higher trust when limitations are stated upfront — and lower trust when limitations are discovered after a mistake.
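
As an illustration of stating limitations upfront, here is a hedged sketch of a capability manifest the agent could surface before its first task. The structure and the wording of the items are assumptions for this example; the blueprint prescribes the transparency, not a specific format.

```python
# Sketch of an upfront capability/limitation declaration. The structure and the
# wording of the items are illustrative assumptions, not the blueprint's format.
CAPABILITY_MANIFEST: dict[str, list[str]] = {
    "can": [
        "search and compare hotel rates",
        "hold a reservation pending your approval",
    ],
    "cannot": [
        "charge a payment method without explicit approval",
        "modify or cancel existing bookings",
        "guarantee quoted prices will not change before approval",
    ],
}


def introduce_agent(manifest: dict[str, list[str]]) -> str:
    """Surface limitations before the first task, not after the first mistake."""
    lines = ["What this agent can do:"]
    lines += [f"  - {item}" for item in manifest["can"]]
    lines += ["What this agent cannot do:"]
    lines += [f"  - {item}" for item in manifest["cannot"]]
    return "\n".join(lines)


print(introduce_agent(CAPABILITY_MANIFEST))
```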

This aligns with the broader research literature on calibrated uncertainty: humans trust systems more when those systems accurately represent their own confidence levels. Overconfident agent outputs are a major driver of automation distrust. The phenomenon is well-documented in medical AI (where an overconfident wrong recommendation collapses physician trust) and in autonomous vehicles (where a single high-confidence false alarm degrades operator engagement for months).

The trust friction also has a recovery asymmetry: it takes many correct actions to build, and a single incorrect action to break. Agents that operate in high-frequency low-stakes domains (calendar management, file organization) can absorb occasional errors; agents that operate in low-frequency high-stakes domains (purchases, communications, financial decisions) cannot.
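
The recovery asymmetry can be pictured with a toy model in which each correct action adds a small, fixed amount of trust while a single error wipes out a large fraction of whatever has accumulated. The numbers below are arbitrary assumptions chosen only to show the shape of the asymmetry; they are not measurements from the blueprint.

```python
def update_trust(trust: float, correct: bool,
                 gain: float = 0.02, penalty: float = 0.5) -> float:
    """Toy model of asymmetric trust dynamics; the numbers are illustrative only."""
    if correct:
        return min(1.0, trust + gain)   # correct actions add a small fixed increment
    return trust * (1.0 - penalty)      # a single error removes half of accumulated trust


trust = 0.0
for _ in range(30):                     # thirty correct actions in a row...
    trust = update_trust(trust, True)
trust = update_trust(trust, False)      # ...then one visible mistake
print(f"Trust after 30 successes and 1 failure: {trust:.2f}")  # ~0.30
```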

3. Delegation of Control — “How much autonomy should I give it?”

The third friction is about the amount of autonomy, not just whether to grant any. The blueprint identifies a “Goldilocks Zone” — too much autonomy produces anxiety and resistance; too little produces micromanagement fatigue and the agent stops being useful.

The fix: agents should operate with moderate autonomy — proposing actions and leaving the final decision to the user. The agent reasons, drafts, prepares the action; the user approves. This is the operating mode that maximizes both trust and useful throughput.

This concept has deep roots in human-factors engineering: Levels of Automation (LOA) theory by Sheridan & Verplank (1978) describes the spectrum of human–machine control, and the consistent finding is that a middle level — where the AI executes but the human approves — outperforms both fully manual and fully automated approaches on trust and performance. Modern UX patterns like Human-in-the-Loop (HITL) and Human-on-the-Loop (HOTL) operationalize this insight.

The practical UX implication for agent design: the default state should be “propose, don’t execute” — with an option for the user to authorize patterns of action (e.g., “always book economy flights under $300”) rather than blanket “go ahead and do whatever.” Pattern-based authorization is the design space where adoption happens.
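
Here is a minimal sketch of the “propose, don’t execute” default with pattern-based authorization, assuming a hypothetical ProposedAction type and a single user-granted rule that mirrors the economy-flights example above. The names and thresholds are illustrative, not part of the blueprint.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ProposedAction:
    kind: str            # e.g. "book_flight"; hypothetical type for this sketch
    cabin: str = ""
    price_usd: float = 0.0


# Pattern-based authorizations the user has explicitly granted. The single rule
# below mirrors the "always book economy flights under $300" example; anything
# it does not cover falls back to propose-and-approve.
AUTHORIZED_PATTERNS: list[Callable[[ProposedAction], bool]] = [
    lambda a: a.kind == "book_flight" and a.cabin == "economy" and a.price_usd < 300,
]


def handle(action: ProposedAction, ask_user: Callable[[ProposedAction], bool]) -> str:
    """Default to 'propose, do not execute'; auto-execute only pre-authorized patterns."""
    if any(rule(action) for rule in AUTHORIZED_PATTERNS):
        return f"executed without prompting: {action}"
    if ask_user(action):                 # human-in-the-loop approval step
        return f"executed after approval: {action}"
    return f"declined by user: {action}"


# A business-class ticket falls outside the authorized pattern, so it is proposed
# (and here declined); the cheap economy ticket executes without a prompt.
print(handle(ProposedAction("book_flight", "business", 900.0), ask_user=lambda a: False))
print(handle(ProposedAction("book_flight", "economy", 240.0), ask_user=lambda a: False))
```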

How the frictions interact

The three frictions are not independent. They form a chain:

  1. Perceived competence must be established first. If the user thinks the agent can’t do the task, the other two frictions never get tested.
  2. Trust is then built (or destroyed) through task-specific evidence. Trust accumulates slowly and breaks fast.
  3. Delegation of control is the lever the user adjusts in response. As trust accumulates, users grant more autonomy; as trust breaks, they pull autonomy back.

This means the adoption curve for an agent is not “ship it and wait.” It’s a managed sequence: demonstrate competence through reasoning visibility → build trust through honest limitations and reliable execution → earn higher autonomy levels through demonstrated track record.
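
One way to picture the managed sequence is as track-record-gated autonomy tiers: suggest-only while competence is being demonstrated, propose-and-approve while trust accumulates, pattern-authorized execution once a track record exists, with any recent error pulling autonomy back. The tier names and thresholds below are illustrative assumptions, not figures from the blueprint.

```python
# Sketch of track-record-gated autonomy tiers. Tier names and thresholds are
# illustrative assumptions; the blueprint prescribes the sequencing, not numbers.
AUTONOMY_TIERS = [
    ("suggest_only", 0),           # demonstrate competence: visible reasoning, no execution
    ("propose_and_approve", 10),   # build trust: every action approved by the user
    ("pattern_authorized", 50),    # earned autonomy: pre-approved low-stakes patterns
]


def current_tier(approved_successes: int, recent_errors: int) -> str:
    """Grant autonomy as the track record grows; pull it back as soon as trust breaks."""
    if recent_errors > 0:
        return "propose_and_approve"   # any recent error drops back to explicit approval
    tier = AUTONOMY_TIERS[0][0]
    for name, threshold in AUTONOMY_TIERS:
        if approved_successes >= threshold:
            tier = name
    return tier


print(current_tier(approved_successes=60, recent_errors=0))  # pattern_authorized
print(current_tier(approved_successes=60, recent_errors=1))  # propose_and_approve
```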

Connection to the academic foundations

The frictions framework is the user-side counterpart to the capability-side findings the wiki tracks under glossary/jagged-frontier and glossary/recognition-primed-decision.

  • glossary/jagged-frontier: AI capability is asymmetric — strongly positive inside the frontier, negatively biased outside it, and the frontier is invisible from a task description. The “perceived competence” friction is partly a user-side response to this asymmetry: users can’t see the frontier either, but they’ve felt the negative-bias side, and they’ve calibrated their willingness to delegate accordingly. The blueprint’s prescription to “explain reasoning and cite criteria” is partly a way to make the frontier visible to the user so they can predict which side of it the agent is operating on.
  • glossary/recognition-primed-decision: Pattern-matching judgment is reliable only in high-validity environments with rapid feedback. Trust accumulates fastest in tasks that meet both conditions (the agent’s correctness is observable, and the user gets quick feedback on mistakes). It accumulates slowest, or not at all, in tasks where the user can’t verify correctness or where feedback is delayed (the agent might be wrong silently, and the user never finds out).
  • glossary/ai-agent-behavior: Where the Columbia/Yale study identifies agent-side biases that affect what agents choose, this framework identifies user-side biases that affect whether agents get chosen.

Implications for organizations deploying agents

The framework suggests a deployment sequence:

  1. Don’t prioritize personality over clarity. Structured, reasoning-visible agents outperform “friendly” ones in adoption. Trim the warm-tone UX in favor of decision-criteria visibility.
  2. State limitations explicitly and upfront. A page-one “What this agent cannot do” section builds more trust than a polished success-case demo. Overpromise-then-fail is the dominant adoption failure mode.
  3. Default to moderate autonomy. Propose-and-approve flows beat both pure-suggestion and pure-execution flows. Provide pattern-based authorization for users who want higher throughput once trust is established.
  4. Treat adoption as a managed sequence, not a launch event. Plan a trust-building phase where the agent operates in low-stakes domains long enough to establish a track record before being granted high-stakes authority.

Honest limits of the framework

  • The blueprint draws on executive-level interviews and large-N workforce surveys, not randomized adoption experiments. The frictions are well-documented as reported barriers; the causal weight of each is less precisely measured than the framework suggests.
  • The Pratfall-Effect / warm-tone finding (“friendly agents perceived as less competent”) needs replication outside the surveyed corporate contexts. It may be culturally specific or context-specific.
  • The “Goldilocks Zone” for autonomy is empirically supported in adjacent domains (LOA literature, HITL deployments) but the optimal point depends on task type — there is no single moderate-autonomy setting that works for all agent classes.
  • The framework focuses on adoption; it doesn’t address what happens after adoption when trust patterns calcify. Long-running agent deployments accumulate their own dynamics (automation bias, complacency, skill atrophy) that the three-frictions frame doesn’t capture.

Key Takeaways

  • AI agent adoption is blocked by psychology, not technology. The three frictions are: perceived competence, trust, delegation of control.
  • Perceived competence: friendly-tone agents are perceived as less competent. Fix: explain reasoning and cite criteria.
  • Trust: built slowly through task-specific evidence; broken fast by overconfident wrong actions. Fix: state limitations explicitly upfront (calibrated uncertainty).
  • Delegation of control: there’s a Goldilocks Zone. Fix: default to propose-and-approve, with pattern-based authorization for higher autonomy once trust is established.
  • The frictions form a chain: competence → trust → autonomy. Plan agent rollouts as a managed sequence, not a launch event.
  • This is the user-side counterpart to glossary/jagged-frontier and glossary/recognition-primed-decision: the user can’t see the frontier either, and calibrates delegation in response.

Sources

  • Science Says × Wharton School, University of Pennsylvania (2026). The Wharton Blueprint for AI Agent Adoption. Authored by Thomas McKinlay. Draws on a workforce of 700,000+ employees surveyed across Google, ServiceNow, Wolters Kluwer, Workato, Concentrix, and Zapier, plus 15 leading minds in AI. Available as a free download via the Science Says newsletter (April 21, 2026 edition).
  • Sheridan, T. B., & Verplank, W. L. (1978). Human and Computer Control of Undersea Teleoperators. MIT Man-Machine Systems Laboratory. — Foundational Levels of Automation (LOA) framework cited for the moderate-autonomy finding.
  • McKinlay, T. (2026). Live session at AI Horizons, The Wharton School, April 22, 2026. — Companion to the blueprint.
  • Previous in series: Science Says × Wharton (2025) — How to Design Highly Effective AI Chatbots. The 2026 blueprint extends the same methodology to agents.