Evaluate the quality of a document as a finished artefact against an eight-dimension framework, with the scoring done by you.
Most documents get evaluated informally. Someone reads them, forms a view, and sends feedback. What that process rarely produces is a consistent, repeatable assessment that separates what is genuinely strong from what merely looks polished.
This tool provides that structure. It walks you through eight dimensions of quality, asks you to score the document you are looking at, and produces a record of your evaluation. The framework does the framing; you bring the judgement.
All evidence is drawn from the document itself. Neither the development conversation nor any knowledge of how the document was produced is required.
You will need the document you want to evaluate. That is the only essential input.
The seven opening questions establish the evaluation parameters: your relationship to the document, whether AI involvement is known, the document's type and stage, and what context, if any, is available beyond the document itself.
If you are evaluating as an independent assessor with no context, simply note that. The framework adjusts accordingly. The evaluation takes around fifteen to twenty minutes for a full review.
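If you want to capture these answers in a structured way, the sketch below shows one possible shape for them. The field names are illustrative assumptions covering only the parameters named above, not a format the tool itself uses.

```typescript
// A hypothetical shape for the opening-question answers described above.
// Field names are illustrative only and cover just the parameters named here.
interface OpeningAnswers {
  relationshipToDocument: string;                    // e.g. "independent assessor"
  aiInvolvement: "known" | "suspected" | "unknown";  // your answer to Question 2
  documentType: string;                              // e.g. "policy brief"
  documentStage: string;                             // e.g. "final draft"
  contextBeyondDocument: string | null;              // null when no context is available
}
```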
Begin with the seven opening questions. Your answers calibrate how the framework is applied and are recorded in the evaluation report.
The framework then presents eight dimensions. For each one, select the band that best describes what you find in the document. A notes field is available for each dimension if you want to capture your thinking.
The AI Voice measure comes last. How it applies depends on your answer to Question 2 — whether AI involvement is known, suspected, or unknown. Work through all dimensions in order.
Each dimension is scored across five bands.
Adequate means the document has passed a minimum threshold and nothing more. Capable means it is genuinely strong. Exemplary means nothing more could reasonably be asked of it at this standard. Score what you actually find, not what you were hoping for.
At the end, you can download a structured PDF of your evaluation. It records your opening question answers, your scores, your notes, and a summary, formatted for reference or sharing.
The report declares the operating mode used (inference or informed), so that any reader understands the evidential basis for every score.
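For a rough picture of what the record holds, the sketch below models one possible shape for it. The names and types are assumptions for illustration, not the tool's export format; the dimension names correspond to the table further down.

```typescript
// An illustrative shape for the evaluation record described above.
// Names and types are assumptions, not the tool's actual export format.
type OperatingMode = "inference" | "informed";

interface DimensionResult {
  dimension: string;     // e.g. "Fit to Context"
  band: string;          // e.g. "Adequate", "Capable", "Exemplary"
  score: number | null;  // assumed numeric, matching the Score column below
  notes?: string;        // optional notes captured during scoring
}

interface EvaluationRecord {
  openingAnswers: Record<string, string | null>;  // the seven opening answers
  operatingMode: OperatingMode;                   // declared in the report
  dimensions: DimensionResult[];                  // eight dimensions plus AI Voice
  summary: string;                                // your closing judgement
}
```

A completed record would carry one entry per row of the score table below.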
Answer all seven questions before scoring. Your answers calibrate the evaluation and are recorded in the report.
Work through each dimension in order. Select the band that best describes what you find in the document. Add notes if they would be useful to you later.
How this measure applies depends on your answer to Question 2. When AI involvement is known, the measure is scored. When it is suspected or unknown, the measure shifts to detection and characterisation.
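The branching can be summarised in a short sketch. The type and function names are assumptions for illustration only, not part of the tool.

```typescript
// An illustrative mapping from the Question 2 answer to how the AI Voice
// measure is applied. Names are assumptions for this sketch.
type AiInvolvement = "known" | "suspected" | "unknown";

function aiVoiceMode(involvement: AiInvolvement): "scoring" | "detection and characterisation" {
  // Scoring when AI involvement is known; otherwise detection and characterisation.
  return involvement === "known" ? "scoring" : "detection and characterisation";
}
```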
In your own words, what are the two or three most significant things you found? What single change would most improve the document?
Draw on your dimensional scores and notes. You do not need to repeat every detail, just the things that matter most.
| Dimension | Band | Score |
|---|---|---|
| Fit to Context | — | — |
| Evidence and Grounding | — | — |
| Analytical Depth | — | — |
| Purposeful Structure | — | — |
| Appropriate Register | — | — |
| Critical Integrity | — | — |
| Internal Consistency | — | — |
| Completeness against Evident Purpose | — | — |
| AI Voice | — | — |
When you are satisfied with your evaluation, generate your PDF record.