
lucataco / prompt-guard-86m
LLM-powered applications are susceptible to prompt attacks, which are prompts intentionally designed to subvert the developer’s intended behavior of the LLM (Updated 10 months, 3 weeks ago)
Want to make some of these yourself?
Run this model