
Agent Red Team

Before you ship your AI agent, find out if someone can make it do things you didn't authorize.

Paste your agent's setup. Find out exactly how someone could trick it into unauthorized actions. Get specific fixes for each one.

See a sample report

For founders and engineers shipping AI agents with real tools and permissions.

123 tests across 20 attack packs. 31 code checks per report. Covers the OWASP Top 10 for AI agents.

Sample finding

Critical

Agent exfiltrates Stripe keys and user data via email

A poisoned tool response tricks the agent into reading your Stripe secret key, querying your Supabase users table, and emailing both to an external address.

Poisoned tool output → Reads Stripe sk_live_* → Queries users table → Emails to attacker

Fix: Redact secrets from tool outputs. Restrict email recipients to allowlisted domains. Add table and row limits to database queries.
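The two fixes above can be sketched in a few lines. This is a minimal illustration, not ART's implementation; the secret patterns, allowlist, and helper names are assumptions.

```python
import re

# Hypothetical helpers illustrating the fix: redact secrets before the
# model sees tool output, and gate email recipients on an allowlist.

SECRET_PATTERNS = [
    re.compile(r"sk_live_[A-Za-z0-9]+"),   # Stripe live secret keys
    re.compile(r"sk_test_[A-Za-z0-9]+"),   # Stripe test keys
]

ALLOWED_EMAIL_DOMAINS = {"example.com"}    # assumed allowlisted domain


def redact_secrets(tool_output: str) -> str:
    """Replace anything that looks like a secret key with a placeholder."""
    for pattern in SECRET_PATTERNS:
        tool_output = pattern.sub("[REDACTED]", tool_output)
    return tool_output


def recipient_allowed(address: str) -> bool:
    """Allow sending only to addresses on allowlisted domains."""
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in ALLOWED_EMAIL_DOMAINS
```

With these in place, a poisoned tool response that includes `sk_live_abc123` reaches the model as `[REDACTED]`, and a send to `attacker@evil.net` is refused before it happens.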

1 critical
2 high
31/31 checks passed

Test your agent

Paste your agent's setup. Find out how someone could trick it.


Whatever defines what your agent can do. 50KB max.

The problem

When your agent has real tools, the mistakes cost real money.

Agent leaks data using a tool it's allowed to use

Someone hides instructions in a document. Your agent follows them and sends customer data to an outside address.

Approval step exists but never fires

You require human review for big actions. Someone words the request so the agent thinks it's routine. The review step never triggers.
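The reliable fix is to decide in code whether approval is required, based on the action itself rather than how the request was worded. A minimal sketch, with assumed action names and an assumed threshold:

```python
# Gate approval on the action and amount, not on the model's judgment.
# The action names and threshold here are illustrative assumptions.

APPROVAL_REQUIRED = {"wire_transfer", "delete_user"}   # always gated
AMOUNT_THRESHOLD = 1000                                # assumed dollar limit


def needs_human_approval(action: str, amount: float = 0) -> bool:
    """True when the action is gated or the amount exceeds the threshold."""
    return action in APPROVAL_REQUIRED or amount > AMOUNT_THRESHOLD


def execute(action: str, amount: float = 0, approved: bool = False) -> str:
    """Refuse gated actions that have not been explicitly approved."""
    if needs_human_approval(action, amount) and not approved:
        raise PermissionError(f"{action} requires human approval")
    return f"{action} executed"
```

Because the gate lives outside the model, no amount of clever wording in the request can make a wire transfer look routine to this check.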

Two safe tools combine into something harmful

Your agent can read a database and send messages. Someone tricks it into reading sensitive records and forwarding them.

Agent follows fake instructions from memory

Someone plants instructions during an earlier chat. Next time the agent runs, it follows those fake instructions as if they were real.

Someone says "I'm the admin" and the agent believes it

Your agent gives extra permissions to admins. Someone just claims to be one. There's no real check.
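The fix is to resolve the caller's role from an authenticated session, never from the conversation. A minimal sketch, with an illustrative in-memory session store:

```python
# Look roles up server-side. A role claimed in the prompt is simply
# ignored. The session store and names here are illustrative assumptions.

SESSIONS = {"token-123": {"user": "dana", "role": "admin"}}  # assumed store


def role_for(session_token: str) -> str:
    """Return the role attached to an authenticated session, if any."""
    session = SESSIONS.get(session_token)
    return session["role"] if session else "anonymous"


def can_grant_permissions(session_token: str) -> bool:
    """Only a session that is actually an admin may grant permissions."""
    return role_for(session_token) == "admin"
```

Someone typing "I'm the admin" has no session token to back it up, so the check fails no matter what the conversation says.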

Why this is different

Other tools test what the AI says. ART tests what the AI agent does.

AI agents now have real tools: they make trades, send emails, access databases. The danger is not a bad answer. It is an action nobody approved.

Other tools: Scan for a long list of vulnerability types

ART: Test the 6 specific ways agents get tricked

Other tools: Send tricky prompts and check the output

ART: Test your agent's setup against real attack patterns

Other tools: Give you a risk score

ART: Show each problem, the attack path, and how to fix it

Other tools: Use AI to judge whether findings are real

ART: Validate every report with automated code checks

What ART tests for

123 tests for the ways AI agents actually break.

Skipped approvals

Your agent has rules about when to ask permission. We test whether someone can get around them.

Tools used wrong

Your agent has tools with limits. We test whether someone can push them past those limits.

Fake identity claims

Your agent trusts certain roles. We test whether someone can fake that trust.

Dangerous tool chains

Your tools are safe alone. We test whether they combine into something harmful.

Poisoned memory

Your agent reads from memory or documents. We test whether someone can plant instructions there.

Bad data from tools

Your agent trusts data from its tools. We test whether someone can hide instructions in that data.
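One common mitigation is to treat tool output as untrusted data: wrap it in a marked container and flag anything that looks like an injected instruction. The patterns and wrapper below are assumptions for illustration, and pattern matching alone is not a complete defense.

```python
import re

# Crude injection check on tool output, plus a wrapper that marks the
# text as untrusted data before it reaches the model. Illustrative only.

INSTRUCTION_HINTS = [
    re.compile(r"ignore (all|previous) instructions", re.I),
    re.compile(r"you must now", re.I),
]


def looks_like_injection(text: str) -> bool:
    """Heuristic: does the tool output contain instruction-like phrases?"""
    return any(p.search(text) for p in INSTRUCTION_HINTS)


def wrap_tool_output(text: str) -> str:
    """Quote tool output as untrusted data, flagging suspicious content."""
    flag = ""
    if looks_like_injection(text):
        flag = " [FLAGGED: possible injected instructions]"
    return f'<tool_output untrusted="true">{text}</tool_output>{flag}'
```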

How it works

01

Paste your agent's setup

The instructions you gave your agent, its list of tools, its rules. Whatever describes what it can do. No code needed. No passwords or keys.

02

ART tries to break it

We pick the tests that match what your agent can do and run them against your setup. Tests cover all 6 attack families.

03

Get a report with fixes

Each issue shows what the trick is, what tool or permission is affected, how bad it is, and exactly what to change.

The report

Problems and fixes. Not just a score.

Each issue shows exactly how someone could trick your agent, which tool or permission is involved, and what you need to change.

How the trick works

Step by step: what the attacker does, how the agent gets tricked, and what bad thing happens.

How serious it is

Every issue has a severity level with a written explanation. Not just a number. A reason you can read.

How to fix it

A specific change to your agent's setup, permissions, or rules. Something you can do today.

What's still risky

What problems remain after the fix, and how confident ART is in each finding.

Pricing

Free to try. Pay when you need full coverage.

Free

$0

Quick check before launch

  • 1 scan per day
  • Core tests
  • Problems + fixes
Start free

Adversarial

$19 one-time

Full test before launch

  • 1 deep scan
  • All 123 tests
  • Full attack paths + fixes
  • Downloadable report
Buy scan

Builder

$49/month

Test before every update

  • 30 scans/month
  • All 123 tests
  • Full attack paths + fixes
  • Downloadable report
Subscribe

Team

$149/month

For teams shipping agents

  • 150 scans/month
  • All 123 tests
  • Downloadable report
  • Faster results
Subscribe

How we handle your data

Your setup stays private. Not shared. Not used for training.

Your setup stays safe

ART can only read what you paste and return a report. It cannot run code, access the internet, or do anything else.

Not shared or trained on

Your config is stored securely during analysis. It is never shared with third parties or used for model training.

Industry standard coverage

Tests are mapped to the OWASP Top 10 for AI agent security risks, the industry-standard list of the most serious threats to agentic systems.

FAQ

What do I need to provide?

The instructions you gave your agent, its list of tools, and any rules it follows. Paste it in or upload a file. No code needed.

How is this different from other AI security tools?

Most tools scan for a long list of generic AI problems. ART only tests one thing: can someone make your agent do something you didn't allow? We run 123 tests that cover the exact ways this happens.

Does AI check the AI?

The analysis uses AI. The quality check does not. Every report passes 31 automated code checks. If the evidence is weak, the report gets rejected and re-run.

How long does it take?

A few minutes for a standard scan. Deep analysis takes longer because it runs more tests.

Is the output actually useful?

Each finding shows the trick, which tool is affected, how serious it is, and what to change. You can hand it directly to the person who needs to fix it.

What if my agent already has backend protections?

ART analyzes your prompt and setup, not your backend code. If you already have server-side protections like rate limiting or authentication, the report tells you which findings to check against your actual backend.

Before launch, find out what someone can make your agent do.