A finance director at a 200-person SaaS company told me a story last quarter that still stings. They had 83 vendor contracts tracked in a shared Google Sheet. Six of those contracts auto-renewed before anyone noticed — costing $47,000 in unplanned spend and zero negotiation leverage. One of them was a data enrichment tool that exactly three people used.
This is not unusual. According to Concord's contract management research[2], poor contract management is estimated to cause roughly 40% of contract value leakage — though the actual figure varies widely by organization size and contract complexity. For a company spending $1M–$3M annually on SaaS and infrastructure vendors, that leakage can translate to hundreds of thousands of dollars in missed savings, unclaimed credits, and unfavorable renewals.
The vendor contract expiry radar is a structured agent pattern that continuously monitors your contract master list, extracts critical dates and clauses, and runs three parallel checks — renewal countdown, SLA drift detection, and supplier concentration risk. It turns a static spreadsheet into a living early-warning system.
Why Your Contract Spreadsheet Is a Liability
Static tracking cannot handle the dynamic complexity of modern vendor portfolios.
Every growing company starts tracking vendor contracts the same way: someone creates a spreadsheet with columns for vendor name, annual cost, renewal date, and owner. It works until it doesn't — and it stops working around contract number 40.
The failure mode is not the format. It is the assumption that humans will reliably check a static document at the right time. Notice periods vary from 30 to 120 days. Auto-renewal clauses can trigger silently with price escalation baked in — contracts that renew automatically at a 20% increase are disturbingly common. And SLA credits have claim windows that expire, meaning the money you were owed for last month's outage evaporates if nobody files a ticket.
The real cost is not the sum of individual missed renewals. It is the negotiation leverage you lose. When you discover a contract renewed three days ago, you have zero leverage for the next 12 months. When you discover it 90 days before renewal, you have time to benchmark pricing, evaluate alternatives, and negotiate from a position of strength.
| Static Spreadsheet | Contract Expiry Radar |
|---|---|
| Manual updates that lag behind reality | Continuous monitoring from contract source data |
| Single reminder date — usually too late | Tiered alerts at 90, 60, and 30 days with notice deadlines |
| No visibility into auto-renewal triggers | Auto-renewal clauses flagged and escalated proactively |
| SLA credits go unclaimed consistently | SLA drift detected and credit claims queued monthly |
| Supplier concentration risk invisible | Concentration risk scored per category and function |
| Finance discovers overruns after the fact | Finance gets actionable briefs before renewal windows open |
The Three-Check Contract Expiry Radar Architecture
Renewal countdown, SLA drift, and concentration risk — running continuously against your vendor data.
The radar starts with a contract parser agent that reads your master list — whether that is a spreadsheet, a Notion database, or a folder of PDF contracts. The parser extracts structured fields: vendor name, contract start/end dates, notice period, auto-renewal clause (yes/no/conditional), SLA commitments, service credit terms, and annual contract value.
From there, three checks run in parallel every week:
Renewal Countdown scans for contracts entering the 90-day, 60-day, or 30-day window before expiry. For each, it calculates the notice deadline (expiry date minus notice period) and flags contracts where the notice window is closing.
SLA Drift Detector compares vendor-reported uptime or performance against contractual SLA commitments. When performance dips below the SLA threshold, it calculates the credit owed and queues a claim.
Concentration Risk Scorer aggregates vendor spend by category and function, flags categories where a single vendor accounts for more than 60% of spend, and identifies functions where a vendor departure would create operational disruption.
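Because the three checks are independent, the weekly run can execute them concurrently and merge the results into a single brief. A minimal sketch, with the check bodies stubbed (the function and type names here are illustrative, not from a real framework):

```typescript
// Sketch of the weekly radar orchestration: three independent checks
// run in parallel over the same parsed contract list.

interface ContractRecord {
  vendorName: string;
  endDate: string; // ISO 8601
  annualValue: number;
}

interface RadarBrief {
  renewals: string[];
  slaDrift: string[];
  concentration: string[];
}

// Stub: would apply the notice-deadline countdown logic.
async function runRenewalCountdown(contracts: ContractRecord[]): Promise<string[]> {
  return contracts.map((c) => `${c.vendorName}: renewal window before ${c.endDate}`);
}

// Stub: would compare monitoring data against SLA thresholds.
async function runSlaDrift(contracts: ContractRecord[]): Promise<string[]> {
  return [];
}

// Stub: would aggregate spend by functional category.
async function runConcentrationRisk(contracts: ContractRecord[]): Promise<string[]> {
  return [];
}

async function runRadar(contracts: ContractRecord[]): Promise<RadarBrief> {
  // The checks share inputs but not state, so Promise.all is safe here.
  const [renewals, slaDrift, concentration] = await Promise.all([
    runRenewalCountdown(contracts),
    runSlaDrift(contracts),
    runConcentrationRisk(contracts),
  ]);
  return { renewals, slaDrift, concentration };
}
```

In practice each stub would carry the logic described below, and the merged brief would feed the delivery step at the end of this article.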
Parsing Messy Contract Language with the Vendor Expiry Radar
Real contracts use ambiguous language. The parser needs to handle it.
The hardest part of building a contract expiry radar is not the countdown math — it is the parsing. Vendor contracts are written by lawyers who get paid by the word. Auto-renewal clauses hide inside paragraphs about "term and termination" with phrasing like "unless either party provides written notice of non-renewal no fewer than sixty (60) days prior to the expiration of the then-current term."
The parser prompt needs to handle this ambiguity. You feed it the raw contract text and ask it to extract five specific fields with structured output. Here is the approach that works consistently across messy PDF language:
`contract-parser-prompt.yaml`

```yaml
system: |
  You are a contract analysis agent. Extract the following fields
  from vendor contracts. When language is ambiguous, note the
  ambiguity and provide your best interpretation with confidence.

extraction_fields:
  - name: renewal_date
    description: "Contract end date or next renewal date"
    format: "YYYY-MM-DD"
    fallback: "Flag as UNKNOWN — escalate for human review"

  - name: notice_period_days
    description: "Days before renewal that notice must be given"
    format: integer
    common_patterns:
      - "no fewer than X days prior"
      - "at least X days written notice"
      - "X calendar days before expiration"
    fallback: "Default to 90 days with LOW confidence"

  - name: auto_renewal
    description: "Whether contract auto-renews"
    values: ["yes", "no", "conditional"]
    red_flags:
      - "Price escalation on renewal (e.g., CPI adjustment)"
      - "Renewal term longer than initial term"
      - "Evergreen clause with no termination path"

  - name: sla_commitments
    description: "Uptime guarantees and performance thresholds"
    format: "Array of {metric, threshold, credit_terms}"

  - name: termination_for_convenience
    description: "Whether early termination is allowed"
    values: ["yes_with_fee", "yes_no_fee", "no"]
```

Check 1: Renewal Countdown Engine
Tiered alerts at 90, 60, and 30 days — with the notice deadline as the real clock.
Most contract trackers alert on the renewal date itself. That is almost always too late. If your contract has a 60-day notice period and you get an alert on the renewal date, you missed your window two months ago.
The countdown engine flips this. It calculates the notice deadline — the last date you can act — and builds the alert timeline from there. A contract renewing on June 1 with a 60-day notice period has a hard deadline of April 2. The 90-day alert fires on January 2, the 60-day alert on February 1, and the 30-day alert on March 3.
At each tier, the alert includes different context. The 90-day alert says "this is coming — start benchmarking alternatives." The 60-day alert says "decision needed this month." The 30-day alert says "notice must be submitted within 30 days or the contract auto-renews."
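The countdown math above is simple date arithmetic once you anchor on the notice deadline. A minimal sketch, assuming ISO 8601 date strings and UTC throughout (the `buildTimeline` name is illustrative):

```typescript
// Sketch: compute the notice deadline and tiered alert dates for one contract.
// All arithmetic is done in UTC to avoid timezone drift around midnight.

type AlertTier = 90 | 60 | 30;

interface AlertTimeline {
  noticeDeadline: string;            // last date you can act
  alerts: Record<AlertTier, string>; // when each alert fires
}

function addDays(isoDate: string, days: number): string {
  const d = new Date(isoDate + "T00:00:00Z");
  d.setUTCDate(d.getUTCDate() + days); // JS Date rolls months/years over correctly
  return d.toISOString().slice(0, 10);
}

function buildTimeline(renewalDate: string, noticePeriodDays: number): AlertTimeline {
  // The real clock is the notice deadline, not the renewal date.
  const noticeDeadline = addDays(renewalDate, -noticePeriodDays);
  return {
    noticeDeadline,
    alerts: {
      90: addDays(noticeDeadline, -90),
      60: addDays(noticeDeadline, -60),
      30: addDays(noticeDeadline, -30),
    },
  };
}

// A contract renewing June 1, 2025 with a 60-day notice period:
const t = buildTimeline("2025-06-01", 60);
// t.noticeDeadline  → "2025-04-02"
// t.alerts[90]      → "2025-01-02"
// t.alerts[60]      → "2025-02-01"
// t.alerts[30]      → "2025-03-03"
```

This reproduces the June 1 example above: notice deadline April 2, with alerts on January 2, February 1, and March 3.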
| Alert Tier | Trigger | Action Required | Audience |
|---|---|---|---|
| 90-Day | 90 days before notice deadline | Begin vendor benchmarking. Request updated pricing from alternatives. | Procurement lead + budget owner |
| 60-Day | 60 days before notice deadline | Decision: renew, renegotiate, or terminate. Start negotiation if renewing. | Department head + finance |
| 30-Day | 30 days before notice deadline | Final call. Submit non-renewal notice or confirm renewal terms in writing. | VP/C-level + legal |
| OVERDUE | Notice deadline passed | Contract will auto-renew. Document as accepted and schedule for next cycle. | Finance (for budget adjustment) |
Check 2: SLA Drift Detection and Credit Recovery
Your vendors owe you money. You just have not asked for it yet.
Here is something most ops teams do not realize: SLA credits are a contractual right, not a favor. When your cloud provider promises 99.95% uptime and delivers 99.8%, you are owed a credit — typically 10-25% of the monthly fee for that service, though exact terms vary by vendor contract. According to outage impact survey data[4], a significant portion of organizations — roughly half in some surveys — never claim credits after serious outages.
The SLA drift detector runs monthly. It pulls uptime or performance data from your monitoring stack (Datadog, Grafana, CloudWatch — whatever you use), compares it against the SLA thresholds extracted from each contract, and generates a credit claim queue.
For a company spending $500K annually on cloud infrastructure, unclaimed SLA credits can add up meaningfully — some estimates suggest $15K-$30K per year is plausible, depending on vendor uptime performance and contract terms. That is money already owed to you by contractual obligation. The agent does not file the claims automatically — it produces a prioritized list with the vendor name, the SLA miss details, the credit amount owed, and the claim window deadline.
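The drift check itself reduces to: find the most severe credit tier the measured uptime falls under, then apply its percentage to the monthly fee. A minimal sketch, with illustrative tier numbers (real tiers come from each contract's service credit terms):

```typescript
// Sketch: compare measured uptime against contractual credit tiers and
// compute the credit owed. Tier values below are examples, not any
// specific vendor's terms.

interface SLATier {
  belowPct: number;  // credit applies when uptime falls below this
  creditPct: number; // % of the monthly fee owed as a credit
}

interface CreditClaim {
  vendor: string;
  measuredUptimePct: number;
  creditOwed: number;
}

function checkSlaDrift(
  vendor: string,
  monthlyFee: number,
  measuredUptimePct: number,
  tiers: SLATier[],
): CreditClaim | null {
  // Sort ascending by threshold so the first match is the most severe
  // tier the measured uptime actually falls under.
  const hit = [...tiers]
    .sort((a, b) => a.belowPct - b.belowPct)
    .find((t) => measuredUptimePct < t.belowPct);
  if (!hit) return null; // SLA met, nothing owed
  return {
    vendor,
    measuredUptimePct,
    creditOwed: monthlyFee * (hit.creditPct / 100),
  };
}

// 99.95% promised, 99.8% delivered, on a $10,000/month service:
const claim = checkSlaDrift("CloudCo", 10_000, 99.8, [
  { belowPct: 99.95, creditPct: 10 },
  { belowPct: 99.0, creditPct: 25 },
]);
// claim → { vendor: "CloudCo", measuredUptimePct: 99.8, creditOwed: 1000 }
```

Each non-null result becomes one entry in the monthly claim queue, alongside the claim window deadline extracted by the parser.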
Check 3: Supplier Concentration Risk Scoring
When one vendor owns 70% of a capability, you do not have a vendor — you have a dependency.
Supplier concentration risk is the danger you cannot see on a line-item budget. It shows up when you look at vendors by function rather than by name. You might have 30 vendors, but if one of them handles your authentication, your email infrastructure, and your logging pipeline, you have a single point of failure that touches three critical systems.
The concentration risk scorer categorizes each vendor by the functions they serve (authentication, payments, monitoring, communication, storage, etc.) and calculates concentration percentages. Any category where a single vendor exceeds 60% of spend or capability triggers a review flag.
It also runs a disruption scenario: if this vendor disappeared tomorrow, which systems break? How long until you could migrate? What is the cost of emergency procurement? These are uncomfortable questions, but answering them before a crisis is vastly cheaper than answering them during one.
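The spend-concentration half of the scorer can be sketched directly. One simplifying assumption here: a vendor's full annual value counts toward every category it serves, since per-category spend splits are rarely available in the master list:

```typescript
// Sketch: score spend concentration per functional category and flag any
// category where a single vendor exceeds the review threshold (60% by default).

interface VendorSpend {
  vendor: string;
  annualValue: number;
  functionalCategories: string[];
}

interface ConcentrationFlag {
  category: string;
  vendor: string;
  sharePct: number;
}

function scoreConcentration(
  vendors: VendorSpend[],
  thresholdPct = 60,
): ConcentrationFlag[] {
  const totals = new Map<string, number>();
  const perVendor = new Map<string, Map<string, number>>();

  // Aggregate total and per-vendor spend for each category.
  for (const v of vendors) {
    for (const cat of v.functionalCategories) {
      totals.set(cat, (totals.get(cat) ?? 0) + v.annualValue);
      const byVendor = perVendor.get(cat) ?? new Map<string, number>();
      byVendor.set(v.vendor, (byVendor.get(v.vendor) ?? 0) + v.annualValue);
      perVendor.set(cat, byVendor);
    }
  }

  // Flag any vendor whose share of a category exceeds the threshold.
  const flags: ConcentrationFlag[] = [];
  for (const [category, byVendor] of perVendor) {
    const total = totals.get(category)!;
    for (const [vendor, spend] of byVendor) {
      const sharePct = (spend / total) * 100;
      if (sharePct > thresholdPct) flags.push({ category, vendor, sharePct });
    }
  }
  return flags;
}
```

Operational criticality (the disruption scenario) needs a separate, human-maintained score per function; spend share alone understates vendors that are cheap but load-bearing.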
1. Centralize your contract data into a parseable format

```typescript
// Contract master list schema
// (SLACommitment mirrors the parser's sla_commitments field shape)
interface SLACommitment {
  metric: string;
  threshold: number;
  creditTerms: string;
}

interface VendorContract {
  vendorName: string;
  contractId: string;
  annualValue: number;
  startDate: string; // ISO 8601
  endDate: string;
  noticePeriodDays: number;
  autoRenewal: 'yes' | 'no' | 'conditional';
  priceEscalation?: string;
  slaCommitments: SLACommitment[];
  functionalCategories: string[];
  owner: string;
  confidenceScore: number; // 0-1, from parser
}
```

2. Run the contract parser against your existing documents

```bash
# Parse all contracts in the vendor-contracts directory
for contract in ./vendor-contracts/*.pdf; do
  bun run parse-contract "$contract" >> contracts.jsonl
done

# Validate parsed data and flag low-confidence extractions
bun run validate-contracts contracts.jsonl --min-confidence 0.7
```

3. Schedule the three checks to run weekly via cron

```yaml
# .github/workflows/contract-radar.yml
name: Contract Expiry Radar
on:
  schedule:
    - cron: '0 8 * * 1' # Every Monday at 8am UTC
jobs:
  radar:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: bun run radar:renewal-countdown
      - run: bun run radar:sla-drift
      - run: bun run radar:concentration-risk
      - run: bun run radar:compile-brief
```

4. Deliver the vendor risk brief to Slack or email

```typescript
// Compile and deliver the weekly brief
const brief = await compileBrief({
  renewals: renewalAlerts,
  slaDrift: slaFindings,
  concentration: concentrationFlags,
});

await deliverBrief(brief, {
  channel: '#vendor-management',
  recipients: ['procurement@company.com'],
  format: 'structured', // RED/AMBER/GREEN sections
});
```
Contract Expiry Radar — Go-Live Checklist
Inventory all active vendor contracts (including shadow IT)
Parse contracts and extract renewal dates, notice periods, auto-renewal clauses
Flag low-confidence extractions for human review
Set up monitoring integrations for SLA comparison (Datadog, CloudWatch, etc.)
Define functional categories for concentration risk scoring
Configure alert tiers and delivery channels (Slack, email)
Run initial full scan and review all CRITICAL findings
Schedule weekly cron job for continuous monitoring
Assign owners for each vendor renewal conversation
Set up quarterly review cadence for radar calibration
We recovered $23,000 in unclaimed SLA credits in the first month after deploying the radar. That alone paid for the entire quarter of tooling costs. But the real win was catching an auto-renewal on a $180K analytics contract with 45 days to spare — enough time to renegotiate and save 30% on the renewal.
How do I handle contracts stored as scanned PDFs without OCR?
Most modern contract parsing tools include OCR as a preprocessing step. If your PDFs are scanned images, run them through an OCR pipeline first (Tesseract or a cloud OCR service), then feed the extracted text to the parser. Flag any OCR'd contracts with a lower confidence score by default since OCR introduces extraction errors.
What if my vendor does not publish uptime data for SLA comparison?
Use your own monitoring data as the source of truth. If you run synthetic monitoring (Pingdom, Uptime Robot, or similar) against the vendor's service endpoints, those measurements are typically accepted for SLA credit claims. Document your monitoring methodology so you can reference it during disputes.
How should I handle multi-year contracts with staggered renewal terms?
Multi-year contracts often have annual SLA review windows and price adjustment dates even if the overall contract does not renew for 3-5 years. Parse these interim dates as separate alert events. A 3-year contract is not 'set and forget' — it likely has annual performance reviews, mid-term renegotiation clauses, and year-over-year price adjustments that all warrant tracking.
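Expanding a multi-year contract into interim alert events is straightforward once the term is parsed. A minimal sketch, under the assumption that reviews and price adjustments fall on each contract anniversary (real contracts may specify different interim dates):

```typescript
// Sketch: expand a multi-year contract into per-anniversary alert events,
// so annual review and price-adjustment windows get tracked like renewals.

interface AlertEvent {
  date: string; // ISO 8601
  label: string;
}

function interimEvents(startDate: string, termYears: number): AlertEvent[] {
  const events: AlertEvent[] = [];
  const start = new Date(startDate + "T00:00:00Z");
  // One event per anniversary, excluding the final year (that is the
  // actual renewal, handled by the countdown engine).
  for (let y = 1; y < termYears; y++) {
    const d = new Date(start);
    d.setUTCFullYear(d.getUTCFullYear() + y);
    events.push({
      date: d.toISOString().slice(0, 10),
      label: `Year ${y} anniversary: SLA review + price adjustment window`,
    });
  }
  return events;
}

// A 3-year contract starting 2024-07-01 yields interim events
// on 2025-07-01 and 2026-07-01.
```

Each interim event then flows through the same tiered alert pipeline as a renewal, just with a different label and audience.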
Is supplier concentration risk only about spend?
Spend is one axis, but operational dependency is often more important. A vendor might represent only 5% of your total spend but handle a function (like authentication or payment processing) where failure brings down your entire product. Score both spend concentration and operational criticality separately.
Contract Radar Operating Rules
Never auto-send termination notices
The radar informs — it does not act. Non-renewal notices require human sign-off because they carry legal and relationship consequences.
Flag all low-confidence extractions for human review before acting
Parser confidence below 0.7 means the contract language was ambiguous. A human must verify before the data enters the alert pipeline.
Update the master list within 48 hours of any new contract signing
The radar is only as current as its source data. Stale inputs produce dangerous blind spots.
Review concentration risk scores quarterly, not just when alerts fire
Concentration creeps gradually. A vendor that was 40% of a category six months ago might be 65% today after a new product adoption.
- [1] Top IT Vendor Management Challenges in 2025 and How to Solve Them (technologymatch.com)
- [2] Contract Management Software: Ineffective Practices and Hidden Costs — Concord (concord.app)
- [3] Quarterly Vendor Contract Renewal Forecasting Checklist — Sirion (sirion.ai)
- [4] Reclaim SLA — SLA Credit Recovery Resource (reclaimsla.com)
- [5] SLA Enforcement: Making SaaS Providers Accountable for Downtime (jchanglaw.com)