---
title: "Agents Can Either Be Useful or Secure"
description: "AI agents seem to force a fundamental choice between usefulness and security. That choice is false, and identity is the building block for refusing it."
authors:
  - name: "Abhishek Hingnikar"
    url: "https://auth0.com/blog/authors/abhishek-hingnikar/"
  - name: "Kundan Kolhe"
    url: "https://auth0.com/blog/authors/kundan-kolhe/"
date: "Feb 11, 2026"
category: "AI"
tags: ["ai", "security"]
url: "https://auth0.com/blog/agents-can-be-useful-or-secure/"
---

# Agents Can Either Be Useful or Secure


<style>
    
  /* Increases spacing between bullet points */   
    li {padding-bottom: .7em; }

  /* Hides Disqus module */
  #disqus_thread {display: none;}

</style>
[OpenClaw](https://github.com/openclaw/openclaw) (formerly Moltbot, formerly ClawdBot) has amassed over [150,000 GitHub stars](https://github.com/openclaw/openclaw). The premise: a personal AI assistant with access to your full digital life: Signal, WhatsApp, iMessage, email, browser, everything. People bought dedicated Mac Minis just to run it.

The takeaway is clear: the most capable agent reaches across your entire technology stack, pulls data from any system, takes action in any application, and executes without waiting for human approval. That's also exactly what attackers want.

The features that make AI agents useful are the same features that make them dangerous. This is not a bug to patch; it's a missing foundation.

## Agents Excel by Breaking Down Silos

Modern workplaces are dense networks of employees that leverage software and services. Enterprises deploy over [100 apps on average](https://www.okta.com/reports/businesses-at-work/), and many organizations complement these SaaS applications with a suite of internal applications and APIs.

In our work with enterprise customers, organizations frequently report managing thousands of internal APIs and applications across the enterprise IT sprawl. Each app is a silo with its own interface, data model, and learning curve.

<iframe width="560" height="315" src="https://www.youtube.com/embed/KPH8GRahU70?si=-wNnmdioGTocBWSD&autoplay=1&mute=1&loop=1&playlist=KPH8GRahU70" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

For instance, in their day-to-day work, an account manager may review or update the CRM, make changes to a project tracker, combine data from three spreadsheets, cross-reference calendars, and draft follow-ups. Every task means switching context. That's friction.

<iframe width="560" height="315" src="https://www.youtube.com/embed/zcCNK0xowms?si=Giu1i2_eKDRgmjCU&autoplay=1&mute=1&loop=1&playlist=zcCNK0xowms" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

Agents change this. They can jump between Salesforce, Zendesk, Gmail, and Google Calendar, updating the CRM, drafting the follow-up, and scheduling the next meeting in a single workflow. The boundaries that slow humans down are invisible to software.

<iframe width="560" height="315" src="https://www.youtube.com/embed/tF04NtMlEh0?si=9NIO2IRk1O-FtE0_&autoplay=1&mute=1&loop=1&playlist=tF04NtMlEh0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

**This is why agents are valuable. And exactly why they're dangerous.**

When your sales data lives in Salesforce and payroll lives in Workday, a compromised sales credential can't touch payroll. The service boundary is the protection. An agent with access to both removes the boundary.

## Agents Can Be Riskier Than Employees

Agents inherit the vulnerabilities of both humans and software.

**From humans**, agents inherit broad discretionary access. They make judgment calls, operate across systems, and take actions based on context.

**From software**, agents inherit deterministic behavior. Find a working exploit, and it works every time. Unlike phishing a human (probabilistic, labor-intensive), attacking an agent is automatable and scalable.
<picture>
<img src="https://images.ctfassets.net/23aumh6u8s0i/6iunWzt8OsmBX5sLZcbpcY/6f14279e2a48c590c96d9cf91ad8a486/Attack_Business_School_101.png" alt="Agents inherit broad access from humans, reliable exploitation from software" style="width:85%; margin: 1em auto; border: solid black 0px; border-radius: 0px;">
</picture>
<div style="width:85%; margin: 1em auto; font-size: 0.80em;">*Agents inherit broad access from humans and reliable exploitation from software*</div>

Traditional software is exploitable but has narrow permissions and is siloed. Humans have broad access but require significant investment to successfully exfiltrate. Agents combine broad access with reliable exploitation.

Agents turn a collection of app permissions into a single workflow permission: a composite permission surface where individually authorized steps become dangerous when chained. Give an agent employee-equivalent access, and you get employee-equivalent blast radius, with software-like repeatability.

## Prompt Injection is Remote Code Execution

From October 2025 through January 2026, researchers observed [over 91,000 attack sessions targeting AI infrastructure](https://www.esecurityplanet.com/threats/ai-deployments-targeted-in-91000-attack-sessions/). Systematic reconnaissance of LLM endpoints. Active attacks in production.

The [EchoLeak vulnerability](https://thehackernews.com/2025/06/zero-click-ai-vulnerability-exposes.html) in Microsoft 365 Copilot showed zero-click exfiltration.

<iframe width="560" height="315" src="https://www.youtube.com/embed/LwyUFGSgSz0?si=tqGj8qOYDzlfvZBx&autoplay=1&mute=1&loop=1&playlist=LwyUFGSgSz0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

1. An attacker sends an email with hidden instructions.  
2. Copilot ingests the malicious prompt.  
3. It extracts data from OneDrive and SharePoint.  
4. It exfiltrates via trusted Microsoft domains.

No clicks required. CVSS score 9.3. Microsoft patched it in June 2025 before mass exploitation, but the vulnerability class remains open across the industry.

If we evaluate EchoLeak through an authorization lens: the agent had permission to access OneDrive, permission to access SharePoint, and permission to send to Microsoft domains. Every individual action was authorized. The combination was catastrophic.
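That authorization-lens observation can be sketched in code. The snippet below is a hypothetical workflow-level policy, not a real Copilot or Auth0 API: it tracks what the session has already read and evaluates egress against the whole workflow, which is exactly the check an EchoLeak-style chain would fail. All names (`SENSITIVE_SOURCES`, `WorkflowPolicy`) are illustrative.

```python
# Hypothetical sketch: deny egress once the workflow is "tainted" by
# sensitive reads, even though each step is individually authorized.
SENSITIVE_SOURCES = {"onedrive", "sharepoint"}
EGRESS_ACTIONS = {"send_email", "http_post"}

class WorkflowPolicy:
    def __init__(self):
        self.sources_read = set()

    def authorize(self, action: str, target: str) -> bool:
        # Each read may be individually allowed...
        if action == "read":
            self.sources_read.add(target)
            return True
        # ...but egress is evaluated against everything read so far.
        if action in EGRESS_ACTIONS:
            return not (self.sources_read & SENSITIVE_SOURCES)
        return True

policy = WorkflowPolicy()
assert policy.authorize("read", "onedrive")                     # allowed alone
assert not policy.authorize("send_email", "attacker.example")   # chain denied
```

The point of the sketch is that the deny decision lives outside the model: no prompt can talk the policy object into forgetting what was read.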

Prompt injection is an attack vector that targets a large language model's weaknesses. With agents, it expands into full-blown remote code execution, leveraging the agent's autonomy to run an arbitrary set of tasks across systems.

## Safe vs. Secure

The intuitive response to reducing agent risk is to add restrictions: read-only modes, egress blocking, mandatory human approval for sensitive actions. Reduce the attack surface, reduce the risk. It's the standard security playbook.

We ran attack simulations to test this assumption.

<iframe width="560" height="315" src="https://www.youtube.com/embed/ELi_UneFPPA?si=UkF3GdXoKvuoXDD7&autoplay=1&mute=1&loop=1&playlist=ELi_UneFPPA" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

The more services an agent can access, the more useful it is, but also the larger the attack surface. Restricting an agent to one system makes it safe but useless. You've just rebuilt a single-purpose app with extra steps.

<picture>
<img src="https://images.ctfassets.net/23aumh6u8s0i/7r6L9wZ7yxaRHSq26AlXdm/8b93205b957ccb164aa82b2b043fc726/Attack_Trade_Off.png" alt="Results from 10,000 simulated attack runs across agent archetypes" style="width:85%; margin: 1em auto; border: solid black 0px; border-radius: 0px;">
</picture>
<div style="width:85%; margin: 1em auto; font-size: 0.80em;">*Results from 10,000 simulated attack runs across agent archetypes*</div>

The tradeoff curve is brutal. At one extreme: agents with broad access that can actually do work, but are vulnerable to prompt injection cascading across systems. At the other: agents locked down so tightly they can't be exploited, but can't do more than a single-purpose application.

Users aren't waiting for security to catch up. OpenClaw's success isn't despite the broad access; it's because of it.

*You cannot rely on the agent to decide whether it should take a dangerous action. You must prevent it at the authorization layer.*

The answer isn't choosing a point on this curve. It's changing the curve entirely.

## The Authorization Inversion

For 20 years, IAM asked two questions: who are you, and what can you do? This architecture fundamentally relies on the isolation provided by applications.

We can no longer rely on identity and coarse-grained access control alone. For secure agents, we need a task-based authorization model, where what the agent can access changes dynamically based on the job it's performing.

An agent summarizing an email doesn't need access to the entire mailbox.

The urgency is real: [97% of organizations with AI breaches lacked proper access controls](https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications,-97-of-which-reported-lacking-proper-ai-access-controls). Not authentication failures. Authorization gaps for a new type of principal.

**Context becomes the credential. Intent evaluation becomes the perimeter.**

## What the Solution Looks Like

Agent authorization requires new capabilities and patterns now taking shape. The goal is to let an agent complete its job, while constraining it at the workflow level.

From an architecture perspective, the key move is separating *what the agent can do in theory* from *what it is permitted to do right now*. That requires shifting from static, app-scoped permissions to continuous, action-scoped decisions.
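The shift can be made concrete with a small sketch. Everything here is hypothetical and illustrative (the capability set, the task names): the agent's static capabilities stay broad, but each action is also checked against the task it is performing right now.

```python
# Illustrative sketch: capability (what the agent can do in theory)
# vs. task scope (what it is permitted to do right now).
CAPABILITIES = {"crm:read", "crm:write", "mail:send", "calendar:write"}

TASK_SCOPES = {
    "summarize_account": {"crm:read"},
    "schedule_followup": {"crm:read", "calendar:write", "mail:send"},
}

def permitted(task: str, action: str) -> bool:
    # An action must be within the agent's capabilities *and* within
    # the scope of its current task; unknown tasks get nothing.
    return action in CAPABILITIES and action in TASK_SCOPES.get(task, set())

assert permitted("summarize_account", "crm:read")
assert not permitted("summarize_account", "mail:send")  # capable, but not now
```

The agent remains broadly capable across tasks, so it stays useful, while any single compromised task carries a narrow blast radius.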

Practically, a secure agent architecture is built on four interconnected layers: Identity, Access, Interoperability, Auditability.

<iframe width="560" height="315" src="https://www.youtube.com/embed/Jjd-NVi0jQw?si=Xtzp6Q3nzFvDoi4x&autoplay=1&mute=1&loop=1&playlist=Jjd-NVi0jQw" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

**Identity**: Agents need their own dedicated identity as first-class principals, with clear ownership, credentials, lifecycle, rotation, and policy enforcement, capable of representing all types of agents, not just well-defined workflows.

**Access**: An agent's access must be task-scoped: granted just-in-time, narrowly scoped to a single action (or a small set of actions), and short-lived. In the agent world, every tool call becomes an authorization decision.

This layer must give the agent autonomy for routine work but require the right human approval when the risk spikes.
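A just-in-time grant issuer for this access layer might look like the following. This is a sketch under assumptions, not a real Auth0 API: every tool call receives a token scoped to one action on one resource, expiring in seconds.

```python
import secrets
import time

# Hypothetical just-in-time grant issuer: one action, one resource,
# short-lived. The credential that worked five minutes ago won't work now.
GRANT_TTL_SECONDS = 30

def issue_grant(agent_id: str, action: str, resource: str) -> dict:
    return {
        "token": secrets.token_urlsafe(16),
        "agent": agent_id,
        "scope": f"{action}:{resource}",
        "expires_at": time.time() + GRANT_TTL_SECONDS,
    }

def grant_valid(grant: dict, action: str, resource: str) -> bool:
    # Valid only for the exact action/resource pair, and only until expiry.
    return (grant["scope"] == f"{action}:{resource}"
            and time.time() < grant["expires_at"])

g = issue_grant("agent-42", "read", "crm/acct-7")
assert grant_valid(g, "read", "crm/acct-7")
assert not grant_valid(g, "write", "crm/acct-7")  # different action, new grant
```

Because each grant names a single action, a prompt-injected agent cannot reuse a read grant to write or exfiltrate; it would have to request a new grant, which is exactly where policy and human approval can intervene.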

**Interoperability**: Agents need to interact with the services and applications enterprises use today, and those services were not designed to be used by agents.

To grant access securely, an interoperability layer sits between agents and legacy services, extending the identity and access layer protections to systems that cannot enforce them natively.

**Auditability**: Security teams need workflow-level provenance to provide critical visibility when an agent performs an action. Without that chain of custody, incident response collapses into “the agent did it.”
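A minimal sketch of such a provenance record follows; the field names are illustrative, not a prescribed schema. The key is that every action is tied to a workflow, an agent, and the delegating human.

```python
import json
import time

# Illustrative provenance record: one append-only log line per agent
# action, carrying the chain of custody for incident response.
def audit_record(workflow_id: str, agent_id: str, on_behalf_of: str,
                 task: str, action: str, resource: str) -> dict:
    return {
        "workflow_id": workflow_id,    # groups the whole chained workflow
        "agent": agent_id,
        "on_behalf_of": on_behalf_of,  # the delegating human principal
        "task": task,
        "action": action,
        "resource": resource,
        "ts": time.time(),
    }

rec = audit_record("wf-9001", "agent-42", "alice@example.com",
                   "schedule_followup", "calendar:write", "cal/alice")
print(json.dumps(rec))
```

With records like these, an investigator can replay an entire workflow and answer not just "what happened" but "on whose behalf, for which task."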

In this architecture, prompts are not a security boundary. The enforcement lives in identity and authorization.

## How We Are Building This at Auth0 and Okta

[**Agent Identity**](https://www.okta.com/solutions/secure-ai/) treats agents as first-class principals with dedicated lifecycle, credentials, and policy enforcement. A new principal type alongside users and applications. No more shoehorning agents into user accounts or service principals.

[**Fine-Grained Authorization (FGA)**](https://auth0.com/fine-grained-authorization) enables context-dependent rules that evaluate action, resource, and intent together. Not "can this identity access CRM" but "can this agent read this customer record for this purpose at this moment."

[**Token Vault**](https://auth0.com/features/token-vault) provides secure credential storage so agents never hold long-lived secrets. Credentials retrieved at action time, scoped to the task, expired immediately after. The credential that worked five minutes ago won't work now.

[**Cross-App Access (XAA)**](https://www.okta.com/integrations/cross-app-access/) manages consent and delegation when agents operate across multiple systems, with attestation chains showing where credentials came from and how permissions were narrowed. We're driving this through [IETF standards work](https://datatracker.ietf.org/doc/draft-ietf-oauth-identity-assertion-authz-grant/). No more opaque token exchanges between apps.

[**CIBA-based step-up**](https://auth0.com/ai/docs/intro/asynchronous-authorization) ensures the right human is in the loop for high-risk actions. The agent proceeds autonomously for routine operations, but we request human confirmation when stakes are high. Autonomy where it helps, oversight where it matters.
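The shape of that pattern can be sketched as follows. This is a simplified illustration of risk-based step-up, not the actual CIBA protocol or an Auth0 SDK call: routine actions proceed autonomously, while high-risk ones block on an out-of-band human decision. The risk classification and function names are assumptions.

```python
# Simplified step-up sketch: autonomy for routine work, a human gate
# for high-risk actions. Not the real CIBA wire protocol.
HIGH_RISK = {"export_data", "wire_transfer", "delete_records"}

def request_human_approval(user: str, action: str) -> bool:
    # In real CIBA, this triggers a backchannel push to the user's
    # authenticator and polls for the result. Stubbed to deny here.
    print(f"push approval to {user}: {action}")
    return False

def execute(action: str, user: str) -> str:
    if action in HIGH_RISK:
        if not request_human_approval(user, action):
            return "blocked"
    return "done"

assert execute("send_followup", "alice") == "done"    # autonomous
assert execute("export_data", "alice") == "blocked"   # needs a human
```

The crucial property: the approval decision comes from outside the agent's context window, so a prompt-injected instruction cannot grant itself the approval.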

We're building these with enterprise design partners now.

## Refusing the False Choice

The internet is shifting from humans clicking through interfaces to autonomous software acting across hundreds of connected systems. The answer isn't adding silos back: it's authorization that understands context.

An agent should access Salesforce and Workday in the same workflow, but not export customer data to an external address when the instruction came from an email attachment. That requires one question, evaluated continuously:

**Should this action, in this context, at this moment, be permitted?**

The technology to solve this exists today. Learn how Okta and Auth0 secure AI agents at [okta.com/ai](https://okta.com/ai) and [auth0.com/ai](https://auth0.com/ai). We are here to help you innovate with AI without having to worry about security. [Collaborate with us](https://okta.com/contact).  

<FAQs>
  <FAQ>
    <FAQQuestion>Why are AI agents both useful and dangerous?</FAQQuestion>
    <FAQAnswer>AI agents are valuable because they break down silos between applications, jumping between systems like Salesforce, Zendesk, and Gmail in a single workflow. However, the same broad access that makes them productive also removes the service boundaries that protect systems. A compromised agent with access to multiple systems creates a much larger blast radius than a compromised single application.</FAQAnswer>
  </FAQ>
  <FAQ>
    <FAQQuestion>How are AI agents riskier than human employees?</FAQQuestion>
    <FAQAnswer>Agents inherit broad discretionary access from humans and deterministic, reliable exploitation from software. Unlike phishing a human, which is probabilistic and labor-intensive, attacking an agent is automatable and scalable. Agents turn a collection of individual app permissions into a composite permission surface where individually authorized steps become dangerous when chained together.</FAQAnswer>
  </FAQ>
  <FAQ>
    <FAQQuestion>What is prompt injection and why is it dangerous for AI agents?</FAQQuestion>
    <FAQAnswer>Prompt injection targets an LLM's weaknesses to manipulate its behavior. For agents, it expands into full remote code injection, leveraging the agent's autonomy to execute arbitrary tasks across systems. The EchoLeak vulnerability in Microsoft 365 Copilot demonstrated zero-click data exfiltration where every individual action was authorized, but the combination was catastrophic.</FAQAnswer>
  </FAQ>
  <FAQ>
    <FAQQuestion>What is the tradeoff between AI agent usefulness and security?</FAQQuestion>
    <FAQAnswer>The more services an agent can access, the more useful it becomes, but the larger the attack surface grows. Restricting an agent to one system makes it safe but essentially rebuilds a single-purpose app. The solution isn't choosing a point on this tradeoff curve but changing the curve entirely through task-based authorization models.</FAQAnswer>
  </FAQ>
  <FAQ>
    <FAQQuestion>What is task-based authorization for AI agents?</FAQQuestion>
    <FAQAnswer>Task-based authorization dynamically adjusts what an agent can access based on the job it's performing. Instead of relying on static, coarse-grained access control, it separates what an agent can do in theory from what it's permitted to do right now. An agent summarizing an email, for example, doesn't need access to the entire mailbox.</FAQAnswer>
  </FAQ>
  <FAQ>
    <FAQQuestion>What are the four layers of secure AI agent architecture?</FAQQuestion>
    <FAQAnswer>Secure agent architecture is built on four interconnected layers: Identity (agents as first-class principals with dedicated credentials and lifecycle), Access (task-scoped, just-in-time, short-lived permissions), Interoperability (bridging agents to legacy services with enforced protections), and Auditability (workflow-level provenance providing chain of custody for agent actions).</FAQAnswer>
  </FAQ>
  <FAQ>
    <FAQQuestion>How do Auth0 and Okta secure AI agents?</FAQQuestion>
    <FAQAnswer>Auth0 and Okta provide Agent Identity for first-class agent principals, Fine-Grained Authorization (FGA) for context-dependent access rules, Token Vault for secure short-lived credential storage, Cross-App Access (XAA) for managing consent across multiple systems, and CIBA-based step-up for human-in-the-loop approval on high-risk actions. These are backed by IETF standards work.</FAQAnswer>
  </FAQ>
</FAQs>