When Machines Become the Primary User
The most important user of your software may no longer be a person. When agents become your primary operators, everything changes: API design, permissions, UX, and what it means for a product to be usable.
Software has always been designed around a single assumption: a human is operating it.
Every design decision flows from that premise. Dashboards exist because humans need visual representations of data. Navigation menus exist because humans need to find things. Forms exist because humans need to input structured data. Permission models assume an authenticated person making requests. Error messages are written in English. Loading spinners exist because humans get anxious when nothing appears to happen.
This assumption is breaking.
In AI-native systems, the most frequent and important "user" is increasingly a non-human agent: an LLM-powered system that issues commands, calls APIs, reads and writes records, processes responses, and coordinates with humans only when it hits an approval gate or an edge case it can't resolve.
This isn't a hypothetical future. OpenAI's Agents SDK is designed for applications where a model uses tools, hands off tasks to specialised agents, streams partial results, and maintains a trace of everything it did. Anthropic's tool-use model formalises the separation between client-side and server-side tool execution. Microsoft's Copilot Studio connects agents to enterprise data sources and external services through callable connectors.
The infrastructure exists. The patterns are documented. The shift is already happening in production systems. The question for product teams is whether their architecture is ready for it.
What changes when the user is an agent
When a human uses your software, they bring enormous contextual intelligence to the interaction. They can interpret ambiguous labels. They can navigate inconsistent UIs. They can figure out workarounds when things break. They can look at a dashboard and intuit what matters.
Agents have none of this. They are precise, literal, and fast, but they need explicit, structured interfaces to act safely. This changes nearly every layer of how software is designed.
APIs become tool surfaces
Human developers read API documentation, understand context, handle edge cases, and write code that calls endpoints. When an agent calls an API, it needs something different: a structured tool schema that describes exactly what the tool does, what inputs it accepts, what outputs it returns, and what side effects it causes.
OpenAI's function calling pattern makes this concrete. The model receives a set of tool definitions, each with a name, description, and JSON schema for parameters. The model decides which tool to call, constructs the arguments, and returns them as structured output. Your system executes the tool and feeds the result back. The model then either calls another tool or completes its response.
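That loop can be sketched without any SDK at all. The sketch below assumes a hypothetical `get_invoice` tool backed by an in-memory store standing in for your API; the tool definition follows the JSON-schema shape that function calling expects, and the final hard-coded call stands in for the model's own tool selection.

```python
import json

# A tool definition in the JSON-schema style used by function calling.
# "get_invoice" is a hypothetical tool; the description and schema tell the
# model exactly what the tool does and what arguments it accepts.
GET_INVOICE_TOOL = {
    "name": "get_invoice",
    "description": "Fetch a single invoice by its ID. Returns status and amount.",
    "parameters": {
        "type": "object",
        "properties": {
            "invoice_id": {
                "type": "string",
                "description": "Invoice identifier, e.g. INV-1042",
            },
        },
        "required": ["invoice_id"],
    },
}

# Fake backing store standing in for your real API.
_INVOICES = {"INV-1042": {"status": "overdue", "amount_gbp": 250.0}}

def execute_tool(name: str, arguments: str) -> str:
    """Run the named tool with JSON-encoded arguments; return a JSON result."""
    args = json.loads(arguments)
    if name == "get_invoice":
        invoice = _INVOICES.get(args["invoice_id"])
        if invoice is None:
            return json.dumps({"error": "not_found", "invoice_id": args["invoice_id"]})
        return json.dumps(invoice)
    return json.dumps({"error": "unknown_tool", "tool": name})

# In a real system the model chooses the tool and constructs the arguments;
# here one call is hard-coded to show the execute-and-feed-back step.
result = execute_tool("get_invoice", json.dumps({"invoice_id": "INV-1042"}))
print(result)
```

The point of the structured result is that it can be fed straight back to the model as the tool's output, with no scraping or interpretation layer in between.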
This is fundamentally different from a REST API designed for developer consumption. The tool description needs to be unambiguous enough for a model to select correctly from a set of options. The parameters need strict schemas, not loose conventions. The response format needs to be structured and parseable, not a blob of HTML or a status page designed for browser rendering.
Companies that redesign their API surfaces as agent-facing tool catalogues gain a structural advantage: their product becomes a natural integration target in agent workflows. Companies that don't will find agents working around them, using computer-use capabilities to operate their human-facing UI, which is slower, more brittle, and harder to govern.
Permissions need a new model
Traditional permission systems assume an identity model based on human users: a person authenticates, receives a session or token, and that identity carries permissions through every request.
When agents act on behalf of users, or act autonomously within defined boundaries, the permission model needs to expand. Questions that didn't previously exist become critical:
- What actions can this agent take without human approval?
- Does the agent inherit the full permissions of the user it represents, or a scoped subset?
- How do you audit actions taken by an agent versus actions taken directly by a human?
- When an agent chains multiple tool calls, does each call need separate authorisation, or does the initial delegation carry through?
OWASP's AI Agent Security guidance identifies "excessive agency" as a primary risk: agents with more permissions than they need, executing actions that weren't intended, without adequate oversight. The mitigation isn't just better prompt engineering. It's architectural. Least-privilege design for agent actors, explicit approval gates for high-impact actions, and audit trails that distinguish human from agent activity.
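A minimal sketch of that least-privilege idea, assuming hypothetical action names and a two-tier risk model: the agent's delegation carries an explicit scope set rather than inheriting the user's full permissions, and high-impact actions route to an approval gate instead of executing directly.

```python
from dataclasses import dataclass, field

# Hypothetical action names and risk tiers; a real system would derive
# scopes from the delegating user's permissions, not define them ad hoc.
HIGH_IMPACT = {"issue_refund", "delete_record"}

@dataclass
class AgentGrant:
    acting_for: str                            # the human the agent represents
    scopes: set = field(default_factory=set)   # a scoped subset, not full inheritance

def authorise(grant: AgentGrant, action: str) -> str:
    """Return 'allow', 'needs_approval', or 'deny' for an agent action."""
    if action not in grant.scopes:
        return "deny"             # least privilege: not granted, not possible
    if action in HIGH_IMPACT:
        return "needs_approval"   # explicit human approval gate
    return "allow"

grant = AgentGrant(acting_for="alice", scopes={"read_invoice", "issue_refund"})
print(authorise(grant, "read_invoice"))
print(authorise(grant, "issue_refund"))
print(authorise(grant, "delete_record"))
```

Because the grant names who the agent acts for, every decision can also be logged with both identities, which is what makes the human-versus-agent audit trail possible.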
This is a systems design challenge, not an AI challenge. And most products haven't started thinking about it.
UX becomes supervision, not operation
When the primary user is an agent, the human's role shifts from operator to supervisor. Instead of clicking through workflows step by step, the human reviews plans, approves actions, monitors progress, and intervenes when something goes wrong.
This changes what "good UX" means. The interface is no longer optimised for efficient direct manipulation. It's optimised for oversight:
- Plan previews showing what the agent intends to do before it does it, so the human can approve, modify, or reject.
- Progress transparency streaming intermediate results and status updates, because agent workflows can run for minutes or hours.
- Confidence signals indicating how certain the agent is about its decisions, so humans know when to trust and when to inspect.
- Audit trails providing comprehensive logs of what the agent did, what tools it called, what data it accessed, and what decisions it made, so the human can reconstruct and verify after the fact.
- Graceful escalation with clear pathways for the agent to hand back to a human when it encounters uncertainty or risk beyond its capability.
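The plan-preview and escalation points above can be sketched as a simple supervision loop. Everything here is illustrative: the plan steps are invented, and the `approve` callback stands in for a real review interface.

```python
from typing import Callable

# A proposed plan the agent surfaces before acting. Step names are
# hypothetical; a real agent would generate these itself.
plan = [
    {"step": "fetch_overdue_invoices", "risk": "low"},
    {"step": "draft_reminder_emails", "risk": "low"},
    {"step": "send_emails", "risk": "high"},
]

def run_with_supervision(plan, approve: Callable[[dict], bool]):
    """Execute low-risk steps directly; pause high-risk steps for approval."""
    log = []  # audit trail: what ran, what was rejected, and why
    for step in plan:
        if step["risk"] == "high" and not approve(step):
            log.append((step["step"], "rejected_by_human"))
            continue
        log.append((step["step"], "executed"))
    return log

# A stand-in reviewer that rejects anything that sends email.
log = run_with_supervision(plan, approve=lambda s: s["step"] != "send_emails")
for entry in log:
    print(entry)
```

Note that the human touches only the high-risk step; the supervision model is efficient precisely because approval is the exception, not the default.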
Microsoft's Copilot Actions and Anthropic's computer-use capabilities both push UX design toward this supervision model. The product interface becomes a control panel for supervising autonomous execution (approvals, confirmations, audit trails) rather than a screen where the human performs each step manually.
Discovery flips from visual to semantic
In a traditional product, users discover capabilities by exploring the interface: browsing menus, clicking through screens, reading labels. The product's discoverability depends on visual design and information architecture.
Agents don't browse. They discover capabilities through tool schemas, descriptions, and metadata. A tool's discoverability depends on how clearly its schema describes what it does, when to use it, and what it expects.
This creates a new design discipline: writing tool descriptions that are precise enough for a model to select correctly from a potentially large set. Both OpenAI and Anthropic have introduced tool search mechanisms with deferred loading, which let agents discover relevant tools at runtime rather than receiving the full catalogue upfront. The implication is that tool metadata (names, descriptions, parameter schemas) becomes as important as the implementation behind it.
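To make the dependence on description quality concrete, here is a toy discovery function over a hypothetical catalogue. Real platforms use semantic (embedding-based) search; naive keyword overlap is a stand-in, but it shows the same property: a tool is only as discoverable as its description.

```python
# A toy tool catalogue. Discovery operates over metadata alone: the agent
# never sees the implementation, only names and descriptions.
CATALOGUE = [
    {"name": "create_invoice", "description": "Create a new invoice for a customer"},
    {"name": "refund_payment", "description": "Refund a payment to the original card"},
    {"name": "list_customers", "description": "List customers matching a search query"},
]

def discover_tools(query: str, catalogue, top_k: int = 2):
    """Rank tools by keyword overlap between the query and each description.

    A deliberately crude stand-in for embedding-based tool search; the point
    is that ranking quality depends entirely on description quality.
    """
    words = set(query.lower().split())
    scored = [
        (len(words & set(tool["description"].lower().split())), tool["name"])
        for tool in catalogue
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_k] if score > 0]

print(discover_tools("refund a customer payment", CATALOGUE))
```

A vague description ("Handle payments") would score poorly against almost any concrete query, which is exactly how a poorly described tool disappears from an agent's view.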
Think of it like the difference between a store with well-organised shelves (designed for human browsers) and a store with a precise, structured inventory system (designed for a logistics bot). Both need to be good, but they're good at different things.
Error handling becomes machine-readable
When a human encounters an error, they read a message, interpret the context, and figure out what to do. Error pages can be vague ("Something went wrong") because humans bring enough context to recover.
Agents need structured error responses with machine-readable codes, specific descriptions of what failed, and actionable guidance on what to do differently. A 500 error with a stack trace in HTML is useless to a model. A structured JSON response with an error code, a description, and a suggested retry strategy is actionable.
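The contrast might look like this in practice. The error codes and field names below are illustrative, not a standard; the point is that each field maps to a decision the agent can actually make.

```python
# Human-facing error: fine for a person with context, opaque to a model.
human_error = "<html><body><h1>500</h1><p>Something went wrong</p></body></html>"

# Agent-facing error: a code the model can branch on, a message it can
# reason about, and a retry hint it can act on. Field names are illustrative.
agent_error = {
    "error": {
        "code": "rate_limited",
        "message": "Too many requests for this API key",
        "retryable": True,
        "retry_after_seconds": 30,
    }
}

def next_action(response: dict) -> str:
    """Decide what an agent should do with a structured (or absent) error."""
    err = response.get("error")
    if err is None:
        return "continue"
    if err.get("retryable"):
        return f"retry_after_{err['retry_after_seconds']}s"
    return "escalate_to_human"

print(next_action(agent_error))
```

No equivalent function can be written against `human_error`: there is nothing in it to branch on, which is the whole problem.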
This sounds like a minor point, but it compounds across every interaction. Systems designed for human error handling silently degrade when agents use them. The agent either retries blindly, gives up, or hallucinates a workaround. None of these are good outcomes.
The platform implications
The shift from human-first to agent-first design has compounding platform effects.
Integration becomes the default mode. When agents can discover and call tools autonomously, your product's value increasingly depends on how well it works as a component in someone else's workflow. A product that only works through its own dashboard is an island. A product with well-designed tool surfaces is a node in an expanding agent ecosystem, a pattern we explore in depth in our Plugin Architecture and Plugin Ecosystem Growth playbooks.
Stickiness shifts from UI to tool surface. Traditional SaaS retains users through interface familiarity, data gravity, and workflow lock-in. When agents mediate the interaction, UI familiarity matters less. What matters is whether your tool surface is reliable, well-described, and easy for agents to use. The moat moves from "users know how to use our dashboard" to "agents can reliably operate through our tools."
Standards become strategic. MCP (the Model Context Protocol) is positioned as an open standard for connecting agents to external systems hosting tools and data. Products that adopt interoperability standards early will be easier for agents to integrate with. Products that require proprietary connectors or bespoke integration work will be less accessible as agent ecosystems scale.
What to do about it
This shift doesn't require you to abandon human users. Most products will serve both human and agent users for a long time. But designing exclusively for humans and treating agent access as an afterthought is a losing strategy.
Audit your API surface through an agent lens. Look at every endpoint and ask: could a model understand when to use this, construct the right arguments, and interpret the response? If the answer requires reading human documentation, watching a tutorial, or understanding implicit conventions, the tool surface needs work.
Design tool schemas as first-class artefacts. Give each tool a clear name, a precise description, explicit parameter schemas with types and constraints, and structured output formats. These aren't documentation supplements. They're the primary interface for your fastest-growing user base.
Build a permission model for non-human actors. Define scoped roles for agents, explicit approval gates for high-impact actions, and audit trails that distinguish agent from human activity. Don't assume the human permission model maps cleanly.
Instrument for agent usage. Track how agents use your tools. Monitor which tools are called, how often, in what sequences, and where failures occur. This data is as important as your human analytics dashboard, arguably more so, because agent usage patterns will reveal integration opportunities and reliability gaps you'd never see from human usage alone.
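A minimal instrumentation sketch, assuming hypothetical tool names and in-process counters standing in for a real metrics pipeline: count calls and failures per tool so that usage concentration and reliability gaps become visible.

```python
from collections import Counter

# In-process counters; a production system would emit these events to a
# metrics pipeline instead. Tool names are hypothetical.
calls = Counter()
failures = Counter()

def record_tool_call(tool: str, ok: bool) -> None:
    """Record one agent tool call and whether it succeeded."""
    calls[tool] += 1
    if not ok:
        failures[tool] += 1

# Simulated agent traffic.
for tool, ok in [("get_invoice", True), ("get_invoice", True),
                 ("issue_refund", False), ("get_invoice", False)]:
    record_tool_call(tool, ok)

def failure_rate(tool: str) -> float:
    """Fraction of calls to this tool that failed."""
    return failures[tool] / calls[tool] if calls[tool] else 0.0

print(calls.most_common(1))               # which tool agents rely on most
print(round(failure_rate("get_invoice"), 2))
```

Even this crude view answers the questions from the paragraph above: which tools are called, how often, and where the failures cluster.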
The most important user of your software might not be a person anymore. The products that recognise this early and design for it deliberately will have a structural advantage that compounds as agent ecosystems grow. For a deeper look at how to structure your product architecture for this shift, see our piece on the AI-native software stack.
The ones that keep building for human eyes only will discover, gradually and then suddenly, that the world moved on.