The proliferation of generative AI and agentic systems is fundamentally reshaping the threat landscape. As organizations race to deploy these technologies, they simultaneously introduce a new, complex attack surface. For security professionals, AI red teamers, and LLM security specialists, understanding how to defend this evolving frontier is no longer optional; it is a critical imperative. Microsoft Ignite 2025, taking place in San Francisco November 17–21, 2025, and online November 18–20, 2025, is poised to be a pivotal event for addressing these challenges head-on.
This year’s conference is centered on an AI-first, end-to-end security platform. The agenda moves beyond traditional domains like identities, devices, and clouds to place a significant emphasis on securing the AI systems and autonomous agents that are becoming integral to modern infrastructure. Below is a technical breakdown of the core security themes that will be explored, providing a strategic overview for practitioners on the front lines of AI defense.
AI-Powered Security Operations: Augmenting the SOC with Agentic Defense
The modern Security Operations Center (SOC) is grappling with an explosion in signal volume and threat complexity, much of it driven by AI-powered attacks. This theme focuses on leveraging AI not just as a tool for analysis, but as a core component of a unified, predictive defense strategy. The sessions will explore how to integrate foundational security tools into a cohesive, AI-driven platform that automates response and reshapes SOC workflows.
Breakout Sessions
These deep-dive sessions will dissect the integration of Microsoft Sentinel, Microsoft Defender, and Microsoft Entra into the AI stack. Key areas of focus include:
- Agentic Workflows for Threat Response: Architectural analysis of how Microsoft Security Copilot agents can autonomously execute investigation and remediation tasks, reducing mean time to respond (MTTR).
- Predictive SOC Strategies: Moving beyond reactive alerting to implement AI-powered models that anticipate threat actor movements and pre-emptively harden defenses.
- Extending Zero Trust to AI Agents: Technical strategies for applying Zero Trust principles to both human and non-human identities, ensuring least-privilege access for AI agents interacting with critical systems.
- Integrated Security Foundations: Exploring how to build a unified control plane for security data and operations, consolidating signals from disparate tools for more effective AI-driven analysis.
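The least-privilege idea above reduces, at its core, to a deny-by-default scope check on every action a non-human identity attempts. The following is a minimal sketch of that pattern; the names (`AgentIdentity`, `REQUIRED_SCOPES`, the scope strings) are illustrative assumptions, not any Microsoft Entra API:

```python
# Minimal sketch of a deny-by-default, least-privilege gate for agent actions.
# All names here are hypothetical illustrations, not a real Entra API surface.
from dataclasses import dataclass, field

# Hypothetical mapping: the minimal scopes each action requires.
REQUIRED_SCOPES = {
    "read_alert": {"alerts.read"},
    "isolate_host": {"alerts.read", "hosts.isolate"},
}

@dataclass
class AgentIdentity:
    name: str
    granted_scopes: set = field(default_factory=set)

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Allow an action only if every required scope was granted; deny by default."""
    required = REQUIRED_SCOPES.get(action)
    if required is None:  # unknown action: fail closed
        return False
    return required <= agent.granted_scopes

triage_bot = AgentIdentity("triage-bot", {"alerts.read"})
print(authorize(triage_bot, "read_alert"))    # True
print(authorize(triage_bot, "isolate_host"))  # False: lacks hosts.isolate
```

The key design choice is failing closed: an agent asking for an action the policy has never seen is denied, which is exactly the posture Zero Trust prescribes for autonomous identities.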
Theater Sessions
These fast-paced, demo-heavy sessions will showcase practical applications and advanced techniques:
- Building and deploying custom Security Copilot agents to address unique organizational threats and workflows.
- Advanced threat hunting and automation techniques within Microsoft Sentinel, tailored for detecting sophisticated, multi-stage attacks.
- Implementing phishing-resistant authentication with passkeys to eliminate a primary vector for credential theft, which is often a precursor to more advanced attacks on AI infrastructure.
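Why passkeys resist phishing comes down to origin binding: the authenticator signs the server's challenge together with the origin the browser actually connected to, so a response produced on a look-alike domain never verifies. The sketch below illustrates the concept only; it uses HMAC as a stand-in for WebAuthn's real asymmetric signature, and is not the actual protocol:

```python
# Conceptual illustration of passkey phishing resistance via origin binding.
# HMAC stands in for the real WebAuthn asymmetric signature; this is a
# teaching sketch, not the FIDO2/WebAuthn protocol.
import hashlib
import hmac
import secrets

def sign_assertion(key: bytes, challenge: bytes, origin: str) -> bytes:
    # The authenticator signs the challenge *plus* the browser-reported origin,
    # so the user cannot be tricked into producing a valid response for the
    # legitimate site while visiting a look-alike domain.
    return hmac.new(key, challenge + origin.encode(), hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, expected_origin: str, sig: bytes) -> bool:
    expected = sign_assertion(key, challenge, expected_origin)
    return hmac.compare_digest(expected, sig)

key = secrets.token_bytes(32)
challenge = secrets.token_bytes(16)

good = sign_assertion(key, challenge, "https://login.example.com")
phished = sign_assertion(key, challenge, "https://1ogin.example.com")  # look-alike

print(verify(key, challenge, "https://login.example.com", good))     # True
print(verify(key, challenge, "https://login.example.com", phished))  # False
```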
Hands-On Labs
These instructor-led labs offer practitioners the opportunity to move from theory to implementation:
- Simulating real-world attack scenarios and using Microsoft Defender XDR to orchestrate a cross-domain response.
- Implementing and validating Zero Trust policies across identities and endpoints in a controlled environment.
- Integrating Microsoft Purview with Microsoft Defender to achieve comprehensive visibility into data-centric threats.
Securing the AI Lifecycle: From Code to Cloud to Agent
As AI models and agentic systems move from development to production, securing them across their entire lifecycle is paramount. This track focuses on the unique security challenges of cloud-native and AI workloads, emphasizing proactive posture management and the governance of autonomous agents.
Breakout Sessions
Discussions will center on architecting secure AI systems in alignment with initiatives like the Microsoft Secure Future Initiative.
- Cloud-Native AI Workload Security: Leveraging Microsoft Defender for Cloud for cloud security posture management (CSPM) and automated threat response in AI/ML environments.
- Secure Design for Agentic AI Systems: Threat modeling and security design patterns for autonomous agents, covering the full lifecycle from development and training to deployment and runtime.
- Agent Governance and Visibility: Exploring new platform capabilities for monitoring agent behavior, enforcing policies, and establishing least-privilege access to prevent unauthorized actions or capability escalation.
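The governance-and-visibility pattern described above can be reduced to a simple invariant: every tool call an agent attempts is checked against policy and written to an audit log before execution. A minimal sketch, with all names and policy fields assumed for illustration:

```python
# Illustrative runtime guardrail for an autonomous agent: each tool call is
# policy-checked and audited before it runs. Names are hypothetical, not any
# specific Microsoft platform capability.
from datetime import datetime, timezone

POLICY = {
    "allowed_tools": {"search_tickets", "summarize"},  # explicit allowlist
    "max_calls_per_session": 20,                       # budget caps runaway loops
}

class AgentMonitor:
    def __init__(self, policy: dict):
        self.policy = policy
        self.audit_log: list[dict] = []

    def check(self, tool: str) -> bool:
        allowed = (tool in self.policy["allowed_tools"]
                   and len(self.audit_log) < self.policy["max_calls_per_session"])
        # Record every attempt, allowed or not, for later investigation.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "allowed": allowed,
        })
        return allowed

mon = AgentMonitor(POLICY)
print(mon.check("summarize"))       # True
print(mon.check("delete_mailbox"))  # False: not on the allowlist
```

Logging denied attempts, not just permitted ones, is what gives defenders visibility into capability-escalation attempts by a compromised or misaligned agent.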
Theater Sessions
These sessions provide actionable guidance on specific implementation challenges:
- Practical steps for hardening the Microsoft Azure security posture specifically for AI/ML workloads.
- Aligning AI innovation with complex regulatory and compliance requirements using Microsoft Purview.
- Securing access to critical backend systems, such as SAP, for both human users and AI agents via Microsoft Entra ID Governance.
Hands-On Labs
Gain practical experience in mitigating the new generation of AI-centric threats:
- Threat mitigation exercises using Defender for Cloud in simulated AI environments.
- Maximizing CSPM capabilities to identify and remediate vulnerabilities in AI infrastructure.
- A dedicated lab on safeguarding AI agents, focusing on implementing controls for visibility, access, and runtime behavior monitoring.
Comprehensive Data Security and Insider Risk in the Age of AI
Generative AI and copilots introduce novel data exfiltration and insider risk vectors. This theme addresses the critical need to protect sensitive data as it flows through AI applications and agents across multi-cloud and hybrid environments.
Breakout Sessions
Explore how to build a multi-layered data defense strategy that accounts for AI-specific risks.
- Layered Data Protection with Microsoft Purview: Implementing robust controls for data classification, labeling, and Data Loss Prevention (DLP) to prevent exfiltration through AI prompts or outputs.
- AI-Powered Data Security Investigations: Using AI to scale and accelerate investigations into complex data security incidents, correlating signals across endpoints, clouds, and applications.
- Secure Copilot Adoption: Best practices for deploying Microsoft Copilot with embedded safeguards to prevent inadvertent data leakage and mitigate insider risks associated with AI-assisted workflows.
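The embedded-safeguard idea above often takes the shape of a label-aware gate in front of the AI tool: content carrying a sensitivity label at or above a threshold is blocked from leaving for external destinations. A minimal sketch, assuming an illustrative label taxonomy rather than Purview's actual one:

```python
# Sketch of a label-aware DLP check applied before content reaches an AI prompt.
# Label names, ranks, and the threshold are illustrative assumptions.
LABEL_RANK = {"Public": 0, "General": 1, "Confidential": 2, "Highly Confidential": 3}

def allow_in_prompt(label: str, destination: str,
                    threshold: str = "Confidential") -> bool:
    """Block content labeled at or above the threshold from external AI tools."""
    if destination == "internal":
        return True  # internal, governed destinations pass through
    return LABEL_RANK[label] < LABEL_RANK[threshold]

print(allow_in_prompt("General", "external_ai"))       # True
print(allow_in_prompt("Confidential", "external_ai"))  # False: at threshold
```

Keeping the threshold a parameter rather than a constant mirrors how real DLP policies are tuned per destination and per user-risk level.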
Theater Sessions
Focused demonstrations on unifying data governance and security posture.
- Using Microsoft Purview Compliance Manager to unify security, compliance, and AI readiness into a single framework.
- Leveraging Microsoft Purview Data Security Posture Management to gain actionable intelligence and proactively strengthen data protection controls.
Hands-On Labs
Acquire the skills to manage and protect data in AI-driven environments.
- Creating and managing sensitive information types and labels to ensure accurate data classification.
- Implementing and fine-tuning insider risk management policies and adaptive protection for dynamic threat environments.
- Configuring and testing DLP policies across the Microsoft 365 ecosystem to prevent data loss via collaboration and AI tools.
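Accurate classification, as the first lab item notes, hinges on sensitive information types that pair a pattern match with corroborating evidence so that random digit runs are not flagged. The sketch below shows the idea for card numbers, combining a regex with the standard Luhn checksum; the regex and function names are this example's own, not Purview's definitions:

```python
# Minimal sketch of a sensitive information type: a regex candidate match is
# confirmed with the Luhn checksum to cut false positives on card numbers.
import re

# 13-16 digits, optionally separated by spaces or hyphens (illustrative pattern).
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn mod-10 check over the digits of a candidate number."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    return [m.group() for m in CARD_RE.finditer(text) if luhn_valid(m.group())]

sample = "order 4111 1111 1111 1111 ref 1234 5678 9012 3456"
print(find_card_numbers(sample))  # -> ['4111 1111 1111 1111']
```

Only the first candidate survives: it passes the checksum, while the second digit run matches the pattern but fails Luhn and is correctly discarded.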
For professionals tasked with defending against emerging AI threats, Microsoft Ignite offers a critical opportunity to gain hands-on experience and strategic insights. Conference passes are limited. Use RSVP code ATXTJ77W by October 20 to secure your registration before capacity is reached!