Reference index
History and base-model era
- How I Became OpenAI's First Prompt Engineer
- The Evolution of Prompts: From Completion to Systems
- Base Models vs Post-Training: What Each Layer Does
- Model Identity and Statelessness: Why Explicit Context Matters
- The Stateless AI Guessing Game: A Prompting Lesson in Memory
- GPT Demo Set List: Early Prompt Patterns That Still Hold Up
- Early Sentence-to-Email Prompts: A Foundational Transformation Pattern
- GPT-3 Emoji Story Demo: Narrative Compression in Tokens
- Separating Instruction from Content: A Core Prompt Reliability Pattern
- Magic Words in Prompting: Domain Terms That Steer Behavior
- Invoking Experts in Prompts: When Persona Framing Improves Results
- GPT-3 Grammar and Style Editing in Practice
- Magic Phrases for Moderation: Prompt Patterns That Improve Safety Calls
- GPT-3 for Regex, Bucket Policies, and Solidity Tasks
- Rethinking best_of in GPT-3: Why It Misleads
- The Fifth-Grade Summary Moment: Audience-Aware Compression
Scaffolding and long-form coherence
- Model Identity and Statelessness: Why Explicit Context Matters
- The Fifth-Grade Summary Moment: Audience-Aware Compression
- Prompt Repetition and Rephrasing: A Reliability Tactic That Lasts
- Context as an AI Lever: The Compounding Effect of Longer Windows
- Small Capabilities, Big Ramifications in Prompt Design
- Scaffolding Long-Form Content: Prompt Patterns for Coherence
- Radio Play Scaffolds: A Better Prompt Pattern for Story Generation
- Building AI Choose-Your-Own Adventures with Prompt Scaffolding
- Character-Threaded Summarization for Long Documents
- Memory in Conversational AI: Why Context Persistence Matters
- Outcome-Oriented Prompting: Define Success, Then Generate
- Style Guides for AI Writing: Getting a Specific Voice
- Crystallized vs Fluid Intelligence in Language Models
- Personal AI Evaluation Methods for Real-World Quality
- Context vs Retrieval: A Practical Decision Framework
- The Prompt Context Flywheel for Continuous Improvement
Reliability / hallucinations / self-checks
- Separating Instruction from Content: A Core Prompt Reliability Pattern
- Mini Prompts for Trick Questions and Nonsense Inputs
- Prompts to Reduce Hallucinations: Practical Control Patterns
- Cross-Temperature Hallucination Testing for Sanity Checks
- Temperature in LLMs Explained: What It Actually Controls
- Seeded Creativity for LLMs: Controlled Randomness That Helps
- Creating Better Quiz Distractors with LLMs
- Bracketing Letters for Wordle: Token-Level Prompt Control
- The Missing Bracket: How Tiny Formatting Errors Break Outputs
- Prompt Repetition and Rephrasing: A Reliability Tactic That Lasts
Formatting and “interface fixes” (schemas, separators, brackets)
- Early Sentence-to-Email Prompts: A Foundational Transformation Pattern
- GPT-3 Grammar and Style Editing in Practice
- GPT-3 for Regex, Bucket Policies, and Solidity Tasks
- Bracketing Letters for Wordle: Token-Level Prompt Control
- The Missing Bracket: How Tiny Formatting Errors Break Outputs
- Style Guides for AI Writing: Getting a Specific Voice
Cost / small models / throughput
- Prompt Size Reduction Checklist: Cut Tokens Without Losing Quality
- Small Model Advantages: When Smaller LLMs Outperform Bigger Ones
- Small Models, Big Knowledge: Prompting Past the First Guess
- Using Small Models for Complex Natural-Language Tasks
- Large Text Pattern Analysis with Prompted Models
- Prompt Maker: How to Teach Prompt Patterns by Example
- Compute at Scale: Growth, Limits, and AI Demand
- How Small Can AI Be? Practical Limits and Opportunities
- Cost Savings via Fine-Tuning Smaller Models
- Big and Small Models in Robotics: A Hybrid Architecture
Retrieval / embeddings / grounding
- Model Identity and Statelessness: Why Explicit Context Matters
- Context as an AI Lever: The Compounding Effect of Longer Windows
- Memory in Conversational AI: Why Context Persistence Matters
- Understanding Embeddings for Better Prompting and Retrieval
- Embedding-Based Retrieval Strategies That Actually Work
- Context vs Retrieval: A Practical Decision Framework
- Grounding Prompts with Wikidata and SPARQL
- The Prompt Context Flywheel for Continuous Improvement
Tools / agents / product workflows
- Building AI Choose-Your-Own Adventures with Prompt Scaffolding
- Fine-Tuning Fundamentals: When to Use It and When Not To
- Fine-Tuning Methods Guide: SFT, DPO, and Beyond
- Cost Savings via Fine-Tuning Smaller Models
- Model-Assisted Data Preprocessing for Better Fine-Tuning
- GPT Tools: Fast Prototypes, Real Constraints, and Shipping
- Tool Makers vs Tool Users: Where Product Value Actually Lives
- Lessons from an Ambitious AI Build
- Localization Techniques for Vision Models in Real Workflows
- Discovering Useful Libraries with AI Coding Prompts
- Code Refactoring with GPT-3: Practical Prompt Patterns That Work
- Why I Didn't Launch AI Channels
- Hackathons and Model Capabilities: What Fast Experiments Reveal
Vision / multimodal / robotics
- GPT-4 Vision Refrigerator Demo: A Practical Multimodal Moment
- Localization Techniques for Vision Models in Real Workflows
- Vision Models at the Frontier: What Changed and Why
- Big and Small Models in Robotics: A Hybrid Architecture
- The Uneven AI Frontier: Why Capabilities Arrive Jagged
- The Frontier Is Wider Than It Looks
- Challenging AI Paper Claims with Practical Replication