A new kind of MLOps platform purpose built for production generative ai apps
-
Updated
Sep 14, 2023
A new kind of MLOps platform purpose built for production generative ai apps
Curated + custom prompt injections.
Happy Prompt is a unique tool designed to interject positive emotions into text prompts, allowing users to communicate joyful, uplifting, and enthusiastic expressions. It utilizes a series of cheerful emojis, symbols, and text representations to infuse the text with a sense of happiness, love, dancing, partying, and other upbeat themes.
ChatGPT Adversarial Attack for The Pitt Challenge 2023
Detecting malicious prompts used to exploit large language models (LLMs) by leveraging supervised machine learning classifiers
Bullet-proof your custom GPT system prompt security with KEVLAR, the ultimate prompt protector against rules extraction, prompt injections, and leaks of AI agent secret instructions.
PromptyAPI, people's LLM-based applications security layer
Prompimix(PromptCrafter/ tp-cooker) is an innovative software application developed using JavaScript, CSS, and HTML, designed to streamline the process of creating text-to-image prompts. This intuitive web-based tool empowers users to effortlessly generate captivating visual prompts for a variety of applications.
Repo hosting the data and results of my research on LLM prompt injection resistance.
Prompt Engineering Tool for AI Models with cli prompt or api usage
The Security Toolkit for LLM Interactions (TS version)
This repo focus on how to deal with prompt injection problem faced by LLMs
Client SDK to send LLM interactions to Vibranium Dome
MINOTAUR: The STRONGEST Secure Prompt EVER! Prompt Security Challenge, Impossible GPT Security, Prompts Cybersecurity, Prompting Vulnerabilities, FlowGPT, Secure Prompting, Secure LLMs, Prompt Hacker, Cutting-edge Ai Security, Unbreakable GPT Agent, Anti GPT Leak, System Prompt Security.
Short list of indirect prompt injection attacks to bypass Azure OpenAI's Prompt Shield.
LLM prompt injection detection
🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉Prompt Injection, ☣️ Data Poisoning. Watch the recorded session:
A serverless set of functions for evaluating whether incoming messages to an LLM system seem to contain instances of prompt injection; uses cascading cosine similarity and ROUGLE-L calculation against known good and bad prompts
My solutions for Lakera's Gandalf
Add a description, image, and links to the prompt-injection topic page so that developers can more easily learn about it.
To associate your repository with the prompt-injection topic, visit your repo's landing page and select "manage topics."