Modify HTML to Multi-page HTML
Modify a Python script that outputs a single HTML report so that it generates a multi-page HTML report.
Raw Prompt
Instead of generating a single output HTML file, the managers want the output to be multiple pages: the overall score and P/F summary on the first page, then links for the five criteria (detection, position accuracy, etc.) leading to a dedicated page for each. Write a prompt for an AI agent that has access to this repo to make these changes.
Prompt for AI agent (multi-page report HTML)
You have access to the repo at c:\Git\afi-evaluation-tool (Windows). Implement multi-page HTML output for the evaluation report while preserving existing split JSON support and scoring consistency.
Objective
Replace the current single-file report.html dashboard with a multi-page HTML report:
- Index page (report.html):
  - Shows Total Score gauge and Pass/Fail requirements table (same logic/data as today)
  - Shows high-level report metadata (scenario, generated_at, versions)
  - Shows links/cards to each metric category page:
    - Detection
    - Position accuracy
    - Doppler
    - RCS
    - Density & homogeneity
- Category pages (one per metric category):
  - Dedicated page containing only that category's plots/sections
  - Example filenames (stable, deterministic): report_detection.html, report_position_accuracy.html, report_doppler.html, report_rcs.html, report_density_homogeneity.html
  - Each category page should link back to the index page.
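The index-to-category linking above can be sketched as follows. The category keys and labels here are assumptions for illustration, not names read from the repo; relative hrefs are used so the links keep working when the pages are opened straight from disk.

```python
# Hypothetical category registry; keys/labels are assumptions, not repo names.
CATEGORIES = {
    "detection": "Detection",
    "position_accuracy": "Position accuracy",
    "doppler": "Doppler",
    "rcs": "RCS",
    "density_homogeneity": "Density & homogeneity",
}


def index_links_html() -> str:
    """Render the index page's link list to the five category pages.

    Relative hrefs keep the report navigable without a server.
    """
    items = [
        f'<li><a href="report_{key}.html">{label}</a></li>'
        for key, label in CATEGORIES.items()
    ]
    return '<ul class="category-links">\n' + "\n".join(items) + "\n</ul>"
```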
Constraints / must-haves
- Split JSON compatibility must remain in tools/report_json_to_html.py:
  - Legacy: --input report.json
  - Split: --input-meta report_meta.json and --input-dir <report_folder>
  - Reconstruct the same in-memory report = {..., "metrics": {...}} structure and reuse plotting logic.
- Scoring consistency:
  - Must use the shared scoring module (src/report_scoring.py) for requirements + total score (do not duplicate scoring logic).
- Robustness:
  - If a metric category JSON is missing/empty, that category page should still render with a clear "no data" note, and the index page should still work.
  - Keep the existing behavior: if the Doppler scatter has no valid pairs, omit it.
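The "no data" fallback can be as small as a section renderer that swaps in a placeholder when a category produced no figures. This is an illustrative sketch; the tool's real section-rendering function likely differs:

```python
def category_section_html(title: str, figure_divs: list) -> str:
    """Render one category section; show a clear note when no data exists."""
    if not figure_divs:
        body = '<p class="no-data">No data available for this category.</p>'
    else:
        body = "\n".join(figure_divs)
    return f"<section><h2>{title}</h2>\n{body}\n</section>"
```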
- Windows-friendly paths:
  - Use pathlib.Path everywhere.
- Backwards behavior:
  - The report generation pipeline (src/report_generator.py) should still call the HTML tool as best-effort and should now produce report.html (the index) plus the 5 category HTML pages next to it.
Implementation guidance (preferred approach)
Refactor tools/report_json_to_html.py minimally:
- Split HTML generation into two layers:
  - generate_index_html(report, title, page_links, ...) -> str
  - generate_category_html(report, category_key, title, ...) -> str
- Reuse existing plotting code:
  - Extract a helper that builds figures for a single category, e.g. _build_figures_detection(report) -> list[(title, div)], _build_figures_position_accuracy(report) -> ..., etc.
  - Or modify _build_figures(report) to accept an optional only_categories=set([...]) and filter sections accordingly.
- Plotly JS handling:
  - Today the single HTML includes Plotly JS inline once.
  - For multi-page, simplest: each page can include Plotly JS inline (works but heavier).
  - Better: write one shared plotly.min.js file in the report folder and reference it from all pages (optional).
  - Pick one approach and keep it stable/deterministic.
- CLI changes:
  - Keep existing CLI flags working.
  - Add an optional flag: --multi-page (default True if managers want it), or default to multi-page but keep --single-page for backward compatibility.
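One way to wire the toggle with argparse, keeping the existing flags and defaulting to multi-page (flag names beyond those listed in this prompt are assumptions):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """CLI with legacy flags preserved and a multi-page/single-page toggle."""
    p = argparse.ArgumentParser(prog="report_json_to_html")
    p.add_argument("--input")
    p.add_argument("--input-meta")
    p.add_argument("--input-dir")
    p.add_argument("--output", default="report.html")
    # Multi-page is the new default; --single-page restores old behavior.
    mode = p.add_mutually_exclusive_group()
    mode.add_argument("--multi-page", dest="multi_page",
                      action="store_true", default=True)
    mode.add_argument("--single-page", dest="multi_page",
                      action="store_false")
    return p
```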
- Output naming:
  - --output should still be supported; it refers to the index HTML path.
  - Category pages should be written alongside the index output with predictable suffixes.
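Deriving the sibling filenames from the index path is a one-liner with pathlib, which also keeps the scheme deterministic for any --output value:

```python
from pathlib import Path


def category_output_path(index_output: Path, category_key: str) -> Path:
    """Place a category page next to the index with a stable suffix,
    e.g. reports/run1/report.html -> reports/run1/report_rcs.html."""
    return index_output.with_name(
        f"{index_output.stem}_{category_key}{index_output.suffix}"
    )
```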
- Update src/report_generator.py:
  - Ensure it still invokes tools/report_json_to_html.py and that the tool now writes multiple pages. (No need for the generator to know every filename.)
  - Keep best-effort behavior: evaluation shouldn't fail if HTML generation fails.
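The best-effort invocation can be sketched as a subprocess call that swallows failures and reports a boolean; the exact flags the generator passes are whatever it passes today:

```python
import subprocess
import sys
from pathlib import Path


def render_html_best_effort(report_json: Path) -> bool:
    """Invoke the HTML tool; never let a failure break the evaluation run."""
    cmd = [
        sys.executable,
        "tools/report_json_to_html.py",
        "--input",
        str(report_json),
    ]
    try:
        subprocess.run(cmd, check=True, capture_output=True, timeout=120)
        return True
    except (subprocess.SubprocessError, OSError):
        return False  # best-effort: log-and-continue in the real pipeline
```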
Acceptance criteria
- Running the tool in both modes works:
  - python tools/report_json_to_html.py --input reports/<run>/report.json
  - python tools/report_json_to_html.py --input-meta reports/<run>/report_meta.json
- Outputs written:
  - Index: report.html (score + requirements + links)
  - Category pages: one per metric category
- Links between pages work when opened from disk (no server required).
- The index page shows exactly the same score/requirement statuses as before (uses shared scoring).
- Ruff/lint stays clean.
Testing requirements
- Update/add tests in tests/test_report_outputs.py (or a new test file) to validate:
  - The HTML tool produces the index + 5 category pages
  - The index contains "Total Score"
  - Category pages contain <html and at least one expected section title when data exists
- Keep tests lightweight (no full evaluation run required).
Files likely to modify
- tools/report_json_to_html.py (main change)
- src/report_generator.py (invoke tool as before; no heavy changes)
- tests/test_report_outputs.py
- README.md (update usage examples and explain multi-page layout)
Deliver the changes as a clean implementation with minimal duplication and stable output filenames.