The problem with raw NGS reports
A solid-tumor NGS panel generates hundreds of pages of output: variant calls, VAF percentages, coverage metrics, copy-number data, fusion calls, and pages of negative findings. That output is designed for bioinformatics staff to interpret — not for the oncologist who has 15 minutes with a patient and needs to know which drug to prescribe.
The gap between “we sequenced the tumor” and “we gave the oncologist something actionable” is where most regional diagnostic labs lose business to Tempus and Foundation Medicine. NGS report summarization software closes that gap.
What UNMIRI's summarization engine actually produces
A structured, consult-ready 2-page document with four distinct components:
- Top 3 treatment recommendations, ranked by OncoKB evidence levels (1, 2, 3A, 3B, 4) with FDA approval status and dosing.
- Contraindications flagged with the variant-level reasoning — e.g., checkpoint inhibitor monotherapy flagged when EGFR + low PD-L1 are both present.
- One matched open clinical trial with eligibility criteria, nearest enrolling site, and a QR code linking to the full trial record.
- Full citation trail back to OncoKB entries, FDA drug labels, and landmark RCTs so the oncologist or your bioinformatician can audit any claim.
| What the oncologist receives | Raw NGS report | With UNMIRI |
|---|---|---|
| Pages to read | 300–500 | 2 |
| Time to clinical decision | 30–90 min manual review | <3 sec generation; ~5 min oncologist read |
| Evidence grading | Not prioritized | OncoKB evidence-level badges per recommendation |
| Contraindications | Implicit in biomarker section | Explicitly flagged with rationale |
| Trial matches | None (requires separate search) | Variant-eligibility matched |
| Citations | Occasional | Every claim traceable |
| Bioinformatician time per case | 2–4 hrs curation | ~10 min QA review |
How the engine works
UNMIRI is deliberately not a general-purpose LLM summarizer. Generic vector-RAG over clinical text conflates near-miss variants (BRAF V600E and V600K carry different approved therapies) and hallucinates citations. Our architecture separates reasoning from formatting:
- A knowledge graph — Neo4j, grounded in OncoKB 2026-Q1, ClinVar 2026-03, ClinicalTrials.gov, and openFDA drug labels — does the clinical reasoning by traversing explicit variant → drug → evidence-tier → contraindication relationships.
- Deterministic templates render the graph output into the final 2-page cheat sheet. LLMs are used only for extraction edge cases and long-tail fallback — never on the clinical path.
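The traversal idea can be pictured with a toy lookup. The schema below mirrors the concept only; it is not the production Neo4j model, and the drug entries are placeholders rather than OncoKB content.

```python
# Toy sketch of the variant -> drug -> evidence-tier traversal.
# Drug names and tiers are illustrative placeholders, not OncoKB data.
VARIANT_GRAPH = {
    "BRAF V600E": [("drug_A", "Level 1")],
    "BRAF V600K": [("drug_B", "Level 2")],
}

def recommend(variant: str) -> list[tuple[str, str]]:
    """Exact-match lookup on the full allele string, so near-miss variants
    like V600E and V600K can never be conflated the way fuzzy vector
    retrieval can."""
    return VARIANT_GRAPH.get(variant, [])
```

The design point is that graph keys are exact: an unrecognized allele returns nothing rather than the nearest-sounding neighbor's drug list.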
We cover the architecture in depth in our engineering post — Why Vector RAG Fails for Oncology — and What to Build Instead.
Ingestion and integration
The engine ingests VCF, FHIR R4 genomics bundles, and structured or scanned PDF reports from FoundationOne CDx, Caris MI Profile, Tempus xT, Illumina TruSight Oncology, and equivalent panels. A single POST /v1/reports call from your LIMS returns both a structured JSON response and, optionally, a print-ready PDF.
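As a sketch of what that LIMS integration might look like, the snippet below assembles a request body for POST /v1/reports. Every field name here is a hypothetical placeholder for illustration; the actual request schema is not shown on this page.

```python
import json

def build_report_request(source_path: str, input_format: str, panel: str) -> dict:
    """Assemble a request body for POST /v1/reports.

    All field names are hypothetical placeholders, not the documented
    API schema."""
    return {
        "input_format": input_format,   # "vcf", "fhir-r4", or "pdf"
        "panel": panel,                 # e.g. "FoundationOne CDx", "Tempus xT"
        "source": source_path,
        "outputs": ["json", "pdf"],     # structured JSON plus optional print-ready PDF
    }

body = build_report_request("case-001.vcf", "vcf", "Tempus xT")
print(json.dumps(body, indent=2))
```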
Who this is for
Mid-tier regional diagnostic labs processing 50–500 NGS panels per month. If you're running solid-tumor panels and your oncologist clients are asking for faster, more actionable turnaround, this is the primary capability. Run the economics on the For Labs page.
How UNMIRI actually does this
UNMIRI summarizes NGS reports by extracting structured variant data with AWS Textract and per-lab parsers, querying a knowledge graph built on OncoKB, ClinVar, ClinicalTrials.gov, and openFDA drug labels, and rendering the 2-page cheat sheet through deterministic templates. LLMs assist narrowly at the extraction boundary — never on the clinical path. More on the architecture.
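The deterministic rendering step can be pictured as plain string templating over graph output: substitution only, no generation. The field names and layout below are illustrative, not UNMIRI's actual template.

```python
from string import Template

# Fixed line template for one ranked recommendation (illustrative layout).
REC_LINE = Template("$rank. $drug (OncoKB $level, $fda_status)")

def render_recommendations(recs: list[dict]) -> str:
    """Render ranked recommendations through a fixed template. The same
    graph output always yields byte-identical text, which is the point:
    no LLM on the clinical path."""
    return "\n".join(
        REC_LINE.substitute(rank=i + 1, **rec) for i, rec in enumerate(recs)
    )
```

Because the template is static, every claim in the rendered sheet traces back to a graph node rather than to sampled model text.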