Which symptom are you actually fixing?
A common mistake we see is opening the flashiest tool first. Start from the failure you can describe in one sentence—then pick the page that matches.
- “The model ran out of room / forgot the top of the thread.” Start with the Token Counter, then Text Chunk Splitter if you are feeding long sources.
- “The answer shape is wrong—JSON vs. prose, bullets vs. essay.” Prompt Formatter plus an explicit output contract you write yourself (we will not guess it for you).
- “Two phrasings behave differently and I cannot see why.” Prompt Comparison to diff wording side by side before you burn another run.
- “Same briefing every Monday for different clients.” Template Builder + Variable Injector.
- “Blank page before the first draft exists.” Content Outline Generator, then edit the outline by hand before you lean on the model for prose.
A grounded week: tokens, templates, and a forked prompt
In practice, teams that ship reliably treat prompts like versioned config. Here is a composite workflow stitched from what we see when people use several of these pages in one sitting; your mileage will vary by model and policy.
- Monday: Marketing drops a 12-page PDF of product claims. You run the text through the Token Counter and Chunk Splitter so each section fits the window you target for summarization.
- Tuesday: You lock a reusable brief in the Template Builder, inject account name and tone with the Variable Injector, and store v1 in your notes.
- Wednesday: Output is readable but mushy. You duplicate the prompt, tighten constraints in the Prompt Improver, and line both up in Prompt Comparison so the team sees exactly what changed before anyone re-runs the expensive model.
- Thursday: A contractor needs a starter—point them at the Prompt Library or Role Prompt Generator so they inherit your house style instead of improvising.
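The Tuesday step, a locked brief with injected variables, can be sketched with nothing but the standard library. This is a minimal stand-in for the Template Builder + Variable Injector pattern, assuming illustrative field names (`account`, `tone`) that are not a fixed schema:

```python
from string import Template

# Versioned, reusable brief. Keeping it as a named constant (v1) makes
# it easy to fork into v2 later instead of mutating in place.
BRIEF_V1 = Template(
    "You are writing for $account.\n"
    "Tone: $tone.\n"
    "Summarize the attached claims in five bullets."
)

def render_brief(account: str, tone: str) -> str:
    # substitute() raises KeyError if a variable is missing,
    # which is better than silently shipping a half-filled prompt.
    return BRIEF_V1.substitute(account=account, tone=tone)
```

Rendering `render_brief("Acme Corp", "plainspoken")` gives you Monday's brief with the client swapped in, and the template itself never changes between accounts.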
Tokens, context windows, and the invisible cut
Every model ships with a maximum context size. Past that point, earlier instructions, examples, or pasted code may be truncated or compressed in ways you do not see in the UI. That is the root cause of “it ignored my rules” after someone pastes a long brief and still expects the opening system message to stay intact.
Before you send a wall of text to ChatGPT or wire an API call, run it through the Token Counter and the Text Chunk Splitter. In most real cases, chunk boundaries matter as much as token count—splitting on headings preserves more meaning than blind fixed-size slices.
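A heading-aware splitter is easy to sketch. The token estimate below is a deliberate simplification (roughly four characters per token for English prose); real tokenizers such as tiktoken or SentencePiece will count differently, so treat it as a budgeting heuristic, not an exact number:

```python
import re

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # Use a real tokenizer when you need exact counts.
    return max(1, len(text) // 4)

def split_on_headings(text: str, max_tokens: int = 800) -> list[str]:
    # Split before each markdown-style heading so a chunk keeps a
    # coherent section together, merging adjacent sections while they
    # still fit the token budget.
    sections = re.split(r"(?m)^(?=#{1,6} )", text)
    chunks, current = [], ""
    for section in sections:
        if not section.strip():
            continue
        if estimate_tokens(current + section) <= max_tokens:
            current += section
        else:
            if current:
                chunks.append(current)
            current = section
    if current:
        chunks.append(current)
    return chunks
```

Because the split happens at headings rather than at a fixed byte offset, each chunk carries its own section title, which is exactly the context a summarization prompt needs.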
Failure modes worth designing around
- Ambiguous success criteria. If you never define format (bullets vs. table vs. JSON), the model will pick one—and it may not match your downstream parser.
- Conflicting constraints. “Be exhaustive but under 80 words” is a trap. Pick the constraint that actually matters for the consumer of the output.
- Stale or thin grounding. Models can sound confident with wrong specifics. When facts matter, paste the source excerpt; when they do not, say so explicitly.
- Role confusion. Jumping between “senior editor” and “junior intern” tone in the same thread invites inconsistent voice. Lock a persona when you care about style.
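The first failure mode, ambiguous success criteria, is the cheapest to fix: write the contract down and validate against it before the reply reaches your parser. A minimal sketch, assuming a hypothetical two-field contract (`summary`, `bullets`) that you would adapt to your own downstream consumer:

```python
import json

# Hypothetical output contract, stated verbatim in the prompt so the
# model and the validator agree on the shape.
CONTRACT = 'Respond with JSON only: {"summary": <string>, "bullets": [<string>, ...]}'

def validate_reply(raw: str) -> dict:
    # json.loads raises ValueError (JSONDecodeError) if the model
    # answered in prose instead of JSON.
    data = json.loads(raw)
    if not isinstance(data.get("summary"), str):
        raise ValueError("missing or non-string 'summary'")
    bullets = data.get("bullets")
    if not isinstance(bullets, list) or not all(isinstance(b, str) for b in bullets):
        raise ValueError("'bullets' must be a list of strings")
    return data
```

Rejecting a malformed reply at this boundary turns "the answer shape is wrong" from a silent downstream bug into an immediate, retryable error.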
Prompt writing that survives real projects
Opinionated stack: outcome and audience first, constraints second, examples third—not the other way around. One high-quality example beats five vague adjectives. If you reuse the same shape of task weekly, promote it to a template so you stop retyping boilerplate.
When you are iterating, fork prompts instead of mutating in place. The Prompt Comparison tool makes differences visible; the Prompt Improver is for tightening structure after you already know what went wrong in v1.
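The fork-then-diff habit also works outside any dedicated tool. A stdlib sketch that stands in for a side-by-side comparison view, showing exactly which lines changed between two prompt versions:

```python
import difflib

def diff_prompts(v1: str, v2: str) -> str:
    # Unified diff of two prompt versions, line by line, so wording
    # changes are visible before anyone re-runs the expensive model.
    return "\n".join(
        difflib.unified_diff(
            v1.splitlines(), v2.splitlines(),
            fromfile="prompt_v1", tofile="prompt_v2", lineterm="",
        )
    )
```

Paste the diff into the pull request or team channel alongside the new output; the reviewer sees the wording change and its effect together, which is the whole point of forking instead of mutating.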
Outside the model: counting and diffing
When you need raw stats or diffing outside the model, use Word Counter and Text Compare from Text Tools.