
From Blind Clicks to Calculated Risks: Why "Trust" in AI is an Illusion

We’ve been looking at the user experience of AI agency all wrong.

For years, software design has relied on the concept of Trust. We ask users to trust the application, trust the permissions, and trust the automation. But as agentic workflows—like those in Cursor or Docker—become part of our daily stack, "trust" is starting to feel like an outdated, almost sentimental term.

In reality, what we are seeing is a fundamental reframing: Trust is being replaced by an Appetite for Risk.


The "Cookie Banner" Trap

We’ve all been there. You’re deep in a flow state, and a pop-up appears asking for permission to execute a command. At first, you read it. By the tenth time, you’ve developed banner blindness.

Much like the ubiquitous cookie banners or mobile OS permissions, these prompts often transition from "security features" to "workflow nuisances." When an agent asks for permission for the 50th time in an hour, we don't click "Allow" because we trust the system; we click it because we want the friction to disappear.

Agency via Appetite

The shift happens when platforms recognize this fatigue. Take Docker’s recent approach to approving agentic instructions:

  • The "Allow in Future" bypass: You can whitelist certain behaviors, effectively saying, "I accept the risk of this specific category of action to gain velocity."

  • The Friction Pivot: When a truly high-risk operation appears, the UI doesn't just send another standard prompt. It "doubles down," changes the visual language, warns of the specific consequence, and forces you to slow down.

This isn't about building a warm, fuzzy feeling of trust. It’s about calibrating risk. The system is essentially asking: "How much damage are you willing to let me do in exchange for this productivity boost?"
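
To make this concrete, here is a minimal sketch of what a tiered approval flow like Docker’s could look like. This is illustrative TypeScript under assumed names (RiskTier, AgentAction, approve), not Docker’s actual implementation or API:

  // A minimal sketch of a tiered approval flow. All names here
  // (RiskTier, AgentAction, approve) are hypothetical, not Docker's API.
  type RiskTier = "low" | "medium" | "high";

  interface AgentAction {
    id: string;          // category of action, e.g. "fs.read" or "terminal.exec"
    tier: RiskTier;      // how much damage this category can do
    description: string; // what the user sees in the prompt
  }

  // Categories the user has already waved through with "Allow in Future".
  const allowedInFuture = new Set<string>();

  function approve(
    action: AgentAction,
    promptUser: (message: string) => boolean
  ): boolean {
    // Whitelisted categories skip the prompt entirely: velocity over friction.
    if (action.tier !== "high" && allowedInFuture.has(action.id)) {
      return true;
    }

    if (action.tier === "high") {
      // The friction pivot: different wording, a named consequence,
      // and deliberately no "allow in future" escape hatch.
      return promptUser(
        `HIGH RISK: ${action.description}. This cannot be whitelisted. Proceed?`
      );
    }

    // Standard prompt with the "Allow in Future" bypass.
    const ok = promptUser(`Allow "${action.description}"?`);
    if (ok) {
      allowedInFuture.add(action.id);
    }
    return ok;
  }

The design choice worth noticing: the "Allow in Future" bypass only exists for the lower tiers. The high-risk path has no escape hatch, so its friction survives banner blindness.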


Understanding vs. Blind Faith

Fundamentally, humans struggle to trust what they don't understand. In the world of Large Language Models and autonomous agents, where "hallucinations" are an inherent property rather than an occasional bug, trust is broken almost as often as it is built.

If we keep trying to build "trusting" relationships between users and agents, we will keep failing. Instead, we should be designing for Risk Management:

  1. Granularity: Don't ask for "Full Access." Ask separately for "Edit Files" and for "Execute Terminal Commands."

  2. Translucency: If a user can see why an agent wants to take a risky step, their "appetite" for that risk becomes an informed decision rather than a shot in the dark.

  3. Velocity Control: Allow users to "speed up" in safe zones and "slow down" in danger zones (see the sketch after this list).
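
As a sketch of how these three principles might combine, here is some hypothetical TypeScript; the scope names, fields, and decide function are assumptions for illustration, not any real platform's schema:

  // Hypothetical permission model combining all three principles.
  type Scope = "files.read" | "files.edit" | "terminal.exec" | "network.fetch";

  interface PermissionRequest {
    scope: Scope;      // granularity: one narrow capability per request
    rationale: string; // translucency: why the agent wants this step
  }

  interface VelocityPolicy {
    autoApprove: Scope[];  // safe zones: no prompt, full speed
    alwaysPrompt: Scope[]; // danger zones: slow down every time
  }

  function decide(
    request: PermissionRequest,
    policy: VelocityPolicy,
    ask: (request: PermissionRequest) => boolean
  ): boolean {
    // Danger zones always slow down, no matter what has been whitelisted.
    if (policy.alwaysPrompt.includes(request.scope)) {
      return ask(request);
    }
    // Safe zones never interrupt the flow.
    if (policy.autoApprove.includes(request.scope)) {
      return true;
    }
    // Anything unclassified surfaces the rationale, so the user's risk
    // appetite is an informed decision rather than a shot in the dark.
    return ask(request);
  }

  // Example: file edits run at full speed; terminal commands always pause.
  const policy: VelocityPolicy = {
    autoApprove: ["files.read", "files.edit"],
    alwaysPrompt: ["terminal.exec", "network.fetch"],
  };

Note that the danger-zone check comes first: a scope in alwaysPrompt slows the user down every time, even if they have previously tried to whitelist it.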

The Bottom Line

We need to stop treating AI agency like a friendship and start treating it like a high-stakes investment. We shouldn't be asking our users "Do you trust this AI?"

We should be asking: "What is your appetite for risk today?"


Do you think the "Trust" model is actually dead, or are we just finding better ways to quantify it?



This article was written with the help of Gemini AI
Original prompt:
turn this into a blog post: Some musings of mine: Of late, and I've started seeing examples in Cursor and Docker, that Trust is reframed as Appetite for Risk. I think perhaps we haven't been looking at like a user would. The examples from their platforms are nice because they are adding in layers of agency into the system. For example in Docker most agentic instructions you can 'allow in future' ie it won't stop to prompt you. In these enviornments you develop a banner blindness and start accepting them all, they really just become a nuisance just like cookie banners, just like apple and android permissions. But when the most risky options appear for approval Docker doubles down, warns you of the specific risk and asks you to slow down. Fundamentally people don't trust what they can't understand, at least at the start. It takes a long time to build trust and when it started (and is) in a place where it regularly gets broken we should perhaps think about how we broach the agency of an agent in terms of risk over trust.

Random Quote

Assumption is the mother of all fuck ups

Marcus Penn, a bad guy from Under Siege 2

Good Reads

Evil by Design