🐢 Open-Source Evaluation & Testing for LLMs and ML models
-
Updated
Jun 6, 2024 - Python
🐢 Open-Source Evaluation & Testing for LLMs and ML models
LLM App templates for RAG, knowledge mining, and stream analytics. Ready to run with Docker,⚡in sync with your data sources.
The Security Toolkit for LLM Interactions
A secure low code honeypot framework, leveraging AI for System Virtualization.
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
An easy-to-use Python framework to generate adversarial jailbreak prompts.
Agentic LLM Vulnerability Scanner
Papers and resources related to the security and privacy of LLMs 🤖
Formalizing and Benchmarking Prompt Injection Attacks and Defenses
AI-driven Threat modeling-as-a-Code (TaaC-AI)
The fastest && easiest LLM security and privacy guardrails for GenAI apps.
A benchmark for prompt injection detection systems.
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
Framework for LLM evaluation, guardrails and security
Risks and targets for assessing LLMs & LLM vulnerabilities
SecGPT: An execution isolation architecture for LLM-based systems
LLM security and privacy
安全手册,企业安全实践、攻防与安全研究知识库
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using classical ML algorithms, a trained LLM model, and a fine-tuned LLM model.
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."