prompt-injection
Here are 54 public repositories matching this topic...
This repo focus on how to deal with prompt injection problem faced by LLMs
-
Updated
Oct 19, 2023 - Python
A new kind of MLOps platform purpose built for production generative ai apps
-
Updated
Sep 14, 2023
Happy Prompt is a unique tool designed to interject positive emotions into text prompts, allowing users to communicate joyful, uplifting, and enthusiastic expressions. It utilizes a series of cheerful emojis, symbols, and text representations to infuse the text with a sense of happiness, love, dancing, partying, and other upbeat themes.
-
Updated
Sep 3, 2023 - PHP
ChatGPT Adversarial Attack for The Pitt Challenge 2023
-
Updated
Aug 17, 2023 - TypeScript
Client SDK to send LLM interactions to Vibranium Dome
-
Updated
Mar 31, 2024 - Python
🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉Prompt Injection, ☣️ Data Poisoning. Watch the recorded session:
-
Updated
May 28, 2024 - JavaScript
Bullet-proof your custom GPT system prompt security with KEVLAR, the ultimate prompt protector against rules extraction, prompt injections, and leaks of AI agent secret instructions.
-
Updated
Apr 12, 2024
PromptyAPI, people's LLM-based applications security layer
-
Updated
May 24, 2024 - Python
Prompt Engineering Tool for AI Models with cli prompt or api usage
-
Updated
Sep 10, 2023 - Python
Prompimix(PromptCrafter/ tp-cooker) is an innovative software application developed using JavaScript, CSS, and HTML, designed to streamline the process of creating text-to-image prompts. This intuitive web-based tool empowers users to effortlessly generate captivating visual prompts for a variety of applications.
-
Updated
Feb 24, 2024 - CSS
This project leverages the SDXL-Turbo model for versatile image processing tasks. Offering a simple command-line interface, it facilitates both Text-to-image and Image-to-image operations. Users select an operation, input prompts, and the script dynamically generates and executes code snippets.
-
Updated
Feb 15, 2024 - Python
Repo hosting the data and results of my research on LLM prompt injection resistance.
-
Updated
Feb 26, 2024 - Python
The Security Toolkit for LLM Interactions (TS version)
-
Updated
Jan 5, 2024
MINOTAUR: The STRONGEST Secure Prompt EVER! Prompt Security Challenge, Impossible GPT Security, Prompts Cybersecurity, Prompting Vulnerabilities, FlowGPT, Secure Prompting, Secure LLMs, Prompt Hacker, Cutting-edge Ai Security, Unbreakable GPT Agent, Anti GPT Leak, System Prompt Security.
-
Updated
Mar 27, 2024
AI/LLM Prompt Injection List is a curated collection of prompts designed for testing AI or Large Language Models (LLMs) for prompt injection vulnerabilities. This list aims to provide a comprehensive set of prompts that can be used to evaluate the behavior of AI or LLM systems when exposed to different types of inputs.
-
Updated
Mar 19, 2024
Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platform provider.
-
Updated
Oct 16, 2023 - Jupyter Notebook
Short list of indirect prompt injection attacks to bypass Azure OpenAI's Prompt Shield.
-
Updated
Jun 9, 2024
Detecting malicious prompts used to exploit large language models (LLMs) by leveraging supervised machine learning classifiers
-
Updated
Jun 6, 2024 - Python
A prompt defence is a multi-layer defence that can be used to protect your applications against prompt injection attacks.
-
Updated
Jun 8, 2024 - Go
Improve this page
Add a description, image, and links to the prompt-injection topic page so that developers can more easily learn about it.
Add this topic to your repo
To associate your repository with the prompt-injection topic, visit your repo's landing page and select "manage topics."