“Prompt Engineering” is a book that makes it clear from the very beginning who it is written for. And that’s good — because this is not a handbook for a casual chat user who just wants to “talk better with AI.” It’s primarily a book for developers and prompt engineers who actually build on top of models and need to understand how they work, what their limitations are, and what good practices look like so they can consistently produce better prompts. A typical user will quickly feel overwhelmed — and understandably so, because a big part of the material requires solid familiarity with LLMs, including API-level concepts.
Let’s start with the less pleasant aspects. Some sections feel overcomplicated or unnecessarily lengthy, and occasionally you stumble upon strange phrases like “the common sense of the LLM,” which don’t quite fit.
But once you get past these weaker parts, you’ll notice that the authors have a thoughtful and transparent approach. They discuss different techniques, presenting both their strengths and weaknesses. This helps the reader understand exactly where certain recommendations come from. It builds real awareness and skill — not just “apply this rule,” but why you should apply it.
A big plus goes to the visual examples — many of the discussed concepts are illustrated, and it works extremely well. Ideas that initially sound abstract suddenly become intuitive. There are also exercises that force you to think and help you organize the knowledge.
I also appreciate the authors’ approach: before you start crafting prompts, you must first understand how the LLM works, why it behaves the way it does, where incorrect answers come from, and how the tokenizer influences prompt precision. It sounds academic, but in practice it makes prompt work significantly easier. At first it may feel overwhelming, especially since the introduction is quite long, but I know this pain well; the same thing happens during the training sessions I run.
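To make the tokenizer point concrete (this is my own toy illustration, not an example from the book, and it assumes the `tiktoken` package is installed): even trivially different wordings of the same instruction can tokenize differently, which is exactly the kind of detail that chapter helps you start noticing.

```python
# Minimal sketch: how small wording changes alter token counts and boundaries.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

variants = [
    "Summarize the text below:",
    "summarize the text below :",   # lowercase + stray space before the colon
    "SUMMARIZE THE TEXT BELOW:",
]

for v in variants:
    tokens = enc.encode(v)
    # Decode each token id back to its text fragment to see where the boundaries fall.
    pieces = [enc.decode([t]) for t in tokens]
    print(f"{v!r} -> {len(tokens)} tokens: {pieces}")
```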
Some elements are genuinely fresh, for example, the dynamic construction of system prompts. Honestly, I haven’t seen this discussed in other publications yet.
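To give a flavor of what I mean (my own minimal sketch rather than the authors’ code; the product name and profile fields are made up), a dynamically constructed system prompt is assembled at request time from whatever context is available, instead of being a single hard-coded string:

```python
# Hedged sketch of dynamic system prompt construction; all names are hypothetical.
from datetime import date

def build_system_prompt(user_profile: dict, tools_enabled: bool) -> str:
    parts = [
        "You are a support assistant for ExampleCorp.",       # hypothetical product
        f"Today's date is {date.today().isoformat()}.",
    ]
    if user_profile.get("language") and user_profile["language"] != "en":
        parts.append(f"Always answer in {user_profile['language']}.")
    if user_profile.get("plan") == "enterprise":
        parts.append("The user is on the enterprise plan; you may reference SLA details.")
    if tools_enabled:
        parts.append("Prefer calling the available tools over guessing.")
    return "\n".join(parts)

print(build_system_prompt({"language": "pl", "plan": "enterprise"}, tools_enabled=True))
```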
A big plus for the references to research papers. They make it easy to trace the sources and explore topics more deeply, which is especially valuable in such a rapidly evolving field.
Unfortunately, the book is partially outdated. You can see a strong focus on GPT-3, which makes some details obsolete today, and some suggestions simply no longer apply (e.g., regarding “echo”). On the other hand, this also highlights which concepts turned out to be timeless. The section on tool calling, for example, doesn’t mention MCP, yet it essentially describes the mechanism that MCP implements. So even though the book discusses older technologies, that doesn’t diminish its practical value.
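For readers who haven’t seen it, the loop the book describes looks roughly like this. This is a simplified sketch of my own, with a hypothetical `get_weather` tool and the actual model round-trip stubbed out; it is neither the book’s example nor the MCP protocol itself, just the underlying idea: the model is given tool schemas, replies with a structured call instead of plain text, and the application executes it and feeds the result back.

```python
# Illustrative tool-calling loop; tool name and dispatch logic are hypothetical.
import json

TOOL_SCHEMAS = [{
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def get_weather(city: str) -> dict:
    # Stubbed result; a real implementation would call a weather API.
    return {"city": city, "temp_c": 7, "conditions": "overcast"}

def handle_model_reply(reply: dict) -> str:
    # The model either answers directly or asks for a tool call.
    if reply.get("tool_call"):
        call = reply["tool_call"]
        args = json.loads(call["arguments"])
        result = {"get_weather": get_weather}[call["name"]](**args)
        # In a full loop, this result would be sent back to the model in a follow-up turn.
        return json.dumps(result)
    return reply["content"]

# Simulated model output asking for a tool call:
print(handle_model_reply({"tool_call": {"name": "get_weather",
                                        "arguments": '{"city": "Warsaw"}'}}))
```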
Summary:
This is an uneven book — at times too broad, sometimes overly verbose — but still packed with concrete, practical, and well-explained guidance. If you build prompts professionally, it will give you a solid foundation. If you use LLMs casually, this isn’t the book for you.
But for an engineer: absolutely worth it.