Sources & Attribution

Explore the research foundation behind our prompt engineering taxonomy, which draws on 20+ academic papers, recent LLM security research, and the original compilation by Reddit user u/Background-Zombie689.

Original Reddit Post

Reddit post, April 2025

I Distilled 17 Research Papers into a Taxonomy of 100+ Prompt Engineering Techniques – Here's the List.

by u/Background-Zombie689

A comprehensive list of over 100 prompt engineering techniques, organized alphabetically with a source citation for each technique.

Research Papers Referenced

General Prompt Engineering Papers

  • Schulhoff et al. - "A Survey of Prompt Engineering": Comprehensive survey of prompt engineering techniques across domains.
  • Vatsal & Dubey - "Comprehensive Review of Prompt Engineering": Review focusing on state-of-the-art prompt engineering methods.
  • Ramnath et al. - "Automatic Prompt Optimization": Explores methods for automating prompt engineering.
  • Li et al. - "Optimization Survey": Survey of prompt optimization methods.

Specialized Technique Papers

  • Wei et al. - "Chain-of-Thought Prompting": Original paper introducing CoT prompting.
  • Wang et al. - "Self-Consistency": Extends CoT by sampling multiple reasoning paths and taking a majority vote over the answers.
  • Zhou et al. - "APE (Automatic Prompt Engineer)": Framework for automatically engineering prompts.
  • Ning et al. - "Skeleton-of-Thought": Generates an answer outline first, then expands each point.
  • Yao et al. - "Tree-of-Thoughts": Framework for exploring and evaluating multiple reasoning paths over a tree of intermediate steps.
  • Lewis et al. - "Retrieval-Augmented Generation": Original RAG paper combining retrieval with generation.
  • Li et al. - "SCoT (Structured Chain-of-Thought)": Adds structure to CoT for code generation.
  • Liu et al. - "LogiCoT": Enhances CoT with logical reasoning.
  • Ridnik et al. - "AlphaCodium": Test-based iterative flow for code generation.
  • Lee et al. - "Syntactic Prevalence Analysis": Method for analyzing prompt effects on syntax.

Domain-Specific Papers

  • Wang et al. - "Healthcare Survey": Survey of prompt engineering in healthcare.
  • Ding et al. - "Cross-File Code Completion": Prompting techniques for cross-file code completion.
  • Brown et al. - "Few-Shot Learning": Original work on few-shot in-context learning (GPT-3).
  • Ye et al. - "Prompt-Tuning": Methods for tuning prompts with continuous vectors.
  • Honovich et al. - "Instruction Induction": Automating the inference of instructions from examples.

LLM Security & Safety Papers

  • Beurer-Kellner et al. - "Design Patterns for Securing LLM Agents against Prompt Injections": Foundational paper introducing architectural security patterns for LLM agents.
  • Simon Willison - "Prompt Injection Security Research": Comprehensive analysis of prompt injection vulnerabilities and defense strategies.
  • Google DeepMind - "CaMeL Framework Research": Code-then-execute patterns for secure LLM workflows with taint tracking.
  • Wallace et al. - "Universal Adversarial Triggers for Attacking and Analyzing NLP": Seminal work on adversarial attacks against language models.
  • Anthropic - "Constitutional AI Research": Research on training models to self-correct harmful outputs using constitutional principles.
  • Various Authors - "Adversarial Training & Robustness Research": Collection of papers on training techniques to improve model robustness against attacks.

Our Contributions

Extending the Original Research

This website extends the original Reddit post by organizing the prompt engineering techniques into a structured taxonomy, with the following enhancements:

  • Organized techniques into logical categories based on function and purpose
  • Added detailed descriptions, examples, and use cases for each technique
  • New: Added a comprehensive "Secure Agent Architectures" category with 12 security-focused techniques
  • Created visualizations to show relationships between techniques
  • Developed an interactive prompt builder to create effective prompts by combining multiple techniques
  • Built an interactive interface for exploring the taxonomy
  • Made all data available in structured JSON format for further research (see the usage sketch after this list)
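For readers who want to work with the exported data, the short Python sketch below shows one way it might be consumed. It is illustrative only: the file name (techniques.json) and field names (name, category, sources) are assumptions made for the example, not the site's actual schema.

    import json
    from collections import Counter

    # Load the exported taxonomy.
    # Hypothetical file name and fields; the real export may differ.
    with open("techniques.json", encoding="utf-8") as f:
        techniques = json.load(f)

    # Count how many techniques fall into each category, e.g. to compare
    # "Secure Agent Architectures" against the other categories.
    by_category = Counter(t["category"] for t in techniques)
    for category, count in by_category.most_common():
        print(f"{category}: {count}")

    # Look up the source papers cited for a single technique.
    cot = next(t for t in techniques if t["name"] == "Chain-of-Thought Prompting")
    print(cot["sources"])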