The Security Toolkit for LLM Interactions
-
Updated
Jun 10, 2024 - Python
The Security Toolkit for LLM Interactions
Short list of indirect prompt injection attacks to bypass Azure OpenAI's Prompt Shield.
A prompt defence is a multi-layer defence that can be used to protect your applications against prompt injection attacks.
Detecting malicious prompts used to exploit large language models (LLMs) by leveraging supervised machine learning classifiers
Formalizing and Benchmarking Prompt Injection Attacks and Defenses
🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance metrics, & sentiment analysis. 📊 A comprehensive tool for LLM observability. 👀
💼 another CV template for your job application, yet powered by Typst and more
Every practical and proposed defense against prompt injection.
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
A benchmark for prompt injection detection systems.
🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉Prompt Injection, ☣️ Data Poisoning. Watch the recorded session:
PromptyAPI, people's LLM-based applications security layer
Advanced Code and Text Manipulation Prompts for Various LLMs. Suitable for GPT-4, Claude, Llama3, Gemini, and other high-performance open-source LLMs.
Dropbox LLM Security research code and results
Curated + custom prompt injections.
My solutions for Lakera's Gandalf
A Python package designed to detect prompt injection in text inputs utilizing state-of-the-art machine learning models from Hugging Face. The main focus is on ease of use, enabling developers to integrate security features into their applications with minimal effort.
Whispers in the Machine: Confidentiality in LLM-integrated Systems
Guard your LangChain applications against prompt injection with Lakera ChainGuard.
Bullet-proof your custom GPT system prompt security with KEVLAR, the ultimate prompt protector against rules extraction, prompt injections, and leaks of AI agent secret instructions.
Add a description, image, and links to the prompt-injection topic page so that developers can more easily learn about it.
To associate your repository with the prompt-injection topic, visit your repo's landing page and select "manage topics."