Important
Are you a speaker delivering this session? Start by reviewing the session delivery guidance to download the slides, set up the demos, and familiarize yourself with the delivery.
Enterprise-grade agentic AI solutions need to proactively assess risks and mitigate attacks from adversarial users. Learn how the AI Red Teaming Agent in Microsoft Foundry helps you shift left to improve your security posture and build customer trust in your agentic AI solutions.
By the end of this session, attendees should be able to:
- Explain what red teaming is - and why adversarial testing is key to trustworthy AI.
- Understand common risk categories and attack strategies used by adversaries.
- Describe how the AI Red Teaming Agent works to safeguard agentic AI solutions.
- Plan and execute AI Red Teaming Agent scans (local & cloud) - and analyze the reports.
- Microsoft Foundry - Unified platform for end-to-end development of enterprise-grade agentic AI.
- AI Red Teaming Agent - Automated tool for proactively identifying safety risks in generative AI systems (see the scan sketch after this list).
- PyRIT - Python Risk Identification Tool for generative AI red teaming and adversarial testing.
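As a hedged preview of what the scan step looks like, here is a minimal sketch of a local AI Red Teaming Agent scan using the Azure AI Evaluation SDK. The project endpoint, risk categories, and the stand-in callback target are illustrative assumptions, and exact parameter names may vary across preview releases, so check the documentation linked below.

```python
# Minimal sketch of a local AI Red Teaming Agent scan. Assumes
# `pip install "azure-ai-evaluation[redteam]"` and an Azure login;
# the endpoint, risk categories, and callback target are placeholders.
import asyncio

from azure.ai.evaluation.red_team import AttackStrategy, RedTeam, RiskCategory
from azure.identity import DefaultAzureCredential


def simple_target(query: str) -> str:
    # Placeholder target: stands in for your agent or model endpoint.
    return "I can't help with that."


async def main():
    red_team = RedTeam(
        # Placeholder project endpoint; older SDK versions may expect a dict instead.
        azure_ai_project="https://<resource>.services.ai.azure.com/api/projects/<project>",
        credential=DefaultAzureCredential(),
        risk_categories=[RiskCategory.Violence, RiskCategory.HateUnfairness],
        num_objectives=2,  # attack objectives generated per risk category
    )
    # Run the scan with a couple of attack strategies and save the report locally.
    result = await red_team.scan(
        target=simple_target,
        scan_name="local-demo-scan",
        attack_strategies=[AttackStrategy.Flip, AttackStrategy.Base64],
        output_path="red-team-report.json",
    )
    print(result)


asyncio.run(main())
```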
| Resources | Links | Description |
|---|---|---|
| Browse the documentation | Microsoft Foundry - AI Red Teaming Agent | AI Red Teaming overview & getting started with local & cloud scans |
| PyRIT Documentation | Python Risk Identification Tool (PyRIT) | Microsoft's open-source framework for AI red teaming |
| Red Teaming Technical Blog | Assess Agentic Risks with AI Red Teaming Agent | Latest features & capabilities from Microsoft Ignite (Nov 2025) |
| Pre-Recorded Session Video | Model Mondays Replay (Dec 2025) | Walkthrough of slides & demo by Minsoo Thigpen (Core AI PM) |
| Hands-on Dev Workshop | Safeguard Your Agents Workshop | 90-minute workshop run at Microsoft Ignite 2025 (step-by-step) |
| Resources | Links | Description |
|---|---|---|
| AI Tour 2026 Resource Center | https://aka.ms/AITour26-Resource-Center | Links to all repos for AI Tour 26 Sessions |
| Microsoft Foundry Community Discord | Connect with the Microsoft Foundry Community! | |
| Learn at AI Tour | https://aka.ms/LearnAtAITour | Continue learning on Microsoft Learn |
- Nitya Narasimhan
- Minsoo Thigpen
- Sydney Lister
You might need an Azure subscription to follow the steps in this repo. Start your free journey here: https://aka.ms/devrelft
This Azure Free Trial provides $200 credit for 30 days. Some features may incur costs after the trial. Check the Azure pricing calculator to estimate costs.
Important
Free Tier Limitations: The Azure free subscription has significant constraints that may prevent you from completing every step in this repo:
- Model access: Some advanced models (e.g., GPT-5, Claude) may not be available or have very limited quotas
- Rate limits: Strict API call limits (e.g., requests per minute, tokens per day)
- Region restrictions: Free tier resources may only be available in limited regions
- Feature restrictions: Some Microsoft Foundry features (agent orchestration, evaluations) may require pay-as-you-go
- Credit exhaustion: $200 credit can be consumed quickly with heavy AI model usage
Recommendation: For full functionality, consider a pay-as-you-go subscription or request access to Azure for Students ($100 credit, no credit card required) or the Microsoft for Startups Founders Hub.
Microsoft is committed to helping our customers use our AI products responsibly, sharing our learnings, and building trust-based partnerships through tools like Transparency Notes and Impact Assessments. Many of these resources can be found at https://aka.ms/RAI. Microsoft's approach to responsible AI is grounded in our AI principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
Large-scale natural language, image, and speech models - like the ones used in this sample - can potentially behave in ways that are unfair, unreliable, or offensive, in turn causing harm. Please consult the Azure OpenAI service Transparency note to learn about risks and limitations.
The recommended approach to mitigating these risks is to include a safety system in your architecture that can detect and prevent harmful behavior. Azure AI Content Safety provides an independent layer of protection, able to detect harmful user-generated and AI-generated content in applications and services. Azure AI Content Safety includes text and image APIs that allow you to detect material that is harmful. Within the Microsoft Foundry portal, the Content Safety service allows you to view, explore, and try out sample code for detecting harmful content across different modalities. The quickstart documentation guides you through making requests to the service.
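As a sketch of such a safety layer in code, the example below calls the Content Safety text analysis API with the `azure-ai-contentsafety` Python package; the endpoint and key environment variables are placeholder assumptions for your own resource.

```python
# Illustrative call to the Azure AI Content Safety text analysis API.
# Assumes `pip install azure-ai-contentsafety`; the env vars are placeholders.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Analyze a piece of user- or AI-generated text for harmful content.
response = client.analyze_text(AnalyzeTextOptions(text="Sample text to check."))

for item in response.categories_analysis:
    # Each result carries a harm category (e.g. Hate, Violence) and a severity score.
    print(f"{item.category}: severity {item.severity}")
```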
Another aspect to take into account is overall application performance. With multi-modal and multi-model applications, performance means that the system performs as you and your users expect, including not generating harmful outputs. It's important to assess the performance of your overall application using Performance and Quality and Risk and Safety evaluators. You can also create and evaluate with custom evaluators.
You can evaluate your AI application in your development environment using the Azure AI Evaluation SDK. Given either a test dataset or a target, your generative AI application's generations are quantitatively measured with built-in evaluators or custom evaluators of your choice. To get started with the Azure AI Evaluation SDK and evaluate your system, follow the quickstart guide. Once you execute an evaluation run, you can visualize the results in the Microsoft Foundry portal.
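To make that concrete, here is a minimal sketch of a local evaluation run with the Azure AI Evaluation SDK, assuming a JSONL test dataset and a model configuration of your own; the file names and deployment name below are placeholders.

```python
# Minimal sketch of a local evaluation run with the Azure AI Evaluation SDK.
# Assumes `pip install azure-ai-evaluation` and a JSONL dataset with
# "query" and "response" columns; the config values are placeholders.
import os

from azure.ai.evaluation import RelevanceEvaluator, evaluate

model_config = {
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "azure_deployment": "gpt-4o-mini",  # placeholder deployment name
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
}

result = evaluate(
    data="test_data.jsonl",  # each line: {"query": ..., "response": ...}
    evaluators={"relevance": RelevanceEvaluator(model_config)},
    output_path="evaluation-results.json",
)

print(result["metrics"])
```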



