Preventing Threats to LLMs: Detecting Prompt Injections & Jailbreak Attacks