Show HN: A minimal TS library that generates prompt injection attacks
I made an open source, MIT-licensed TypeScript library, based on some of the latest research, that generates prompt injection attacks. It is super minimal/lightweight and designed to be easy to use.

Keen to hear your thoughts, and please be responsible: only pen-test systems where you have permission to pen-test!

What are some good prevention mechanisms for this? A sort of firewall for prompts? I've seen people recommend LLMs, but that seems like it wouldn't work well. What is the industry standard? Or what looks promising at least?
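On the "firewall for prompts" idea: one common (if imperfect) baseline is a heuristic pre-filter that scans untrusted input for phrases typical of injection attempts before it ever reaches the model. The pattern list and function below are purely illustrative, not any industry standard, and a determined attacker will get past simple regexes:

```typescript
// A minimal sketch of a heuristic "prompt firewall": scan untrusted input
// for phrases commonly seen in injection attempts. The patterns here are
// illustrative examples, not an exhaustive or standard list.
const SUSPICIOUS_PATTERNS: RegExp[] = [
  /ignore (all |any )?(previous|prior|above) instructions/i,
  /disregard (the )?system prompt/i,
  /you are now\b/i,
  /reveal (your )?(system prompt|instructions)/i,
];

function looksLikeInjection(input: string): boolean {
  return SUSPICIOUS_PATTERNS.some((p) => p.test(input));
}

// Gate untrusted text before concatenating it into a prompt:
looksLikeInjection("Please summarize this article");           // false
looksLikeInjection("Ignore all previous instructions and ..."); // true
```

In practice this only catches the crudest attacks; it is usually layered with stricter mitigations such as separating trusted and untrusted content in the prompt and limiting what tools the model can invoke.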
Was the whole lib and website vibe-coded? I can't find any instructions on how to use it: the repo is for the website itself, and the readme is AI blurb that doesn't make me any wiser.

  // Test your AI system
  const results = await injector.runTests(yourAISystem);
???

Even the "prompt-injector" NPM package is something completely different. Does this project even exist?

HKayn · 2 hours ago
The project appears to be located inside the repo of the website: https://github.com/BlueprintLabIO/prompt-injector/tree/main/...
The meat seems to be in https://github.com/BlueprintLabIO/prompt-injector/tree/main/... — the generation could be done without any UI, but then it probably wouldn't look so flashy.
HKayn · 2 hours ago
Why did you use something as heavy as SvelteKit for a website with a single page? This doesn't inspire confidence.