MOHIT Managing Outcomes, Human Intelligence & Talent
AI Capabilities · v1.0
Capabilities Brief

Six AI capabilities working quietly behind your operations.

MOHIT pairs lightweight statistical models with a frontier language model (Claude Sonnet 4.6) to remove busywork from Makers and Checkers — without ever taking the final decision out of human hands.

6 AI capabilities · 100% of decisions logged · Human in the loop, always · Claude Sonnet 4.6 (Vision)

The six capabilities, with live demonstrations

Each card shows the business outcome, a working demo you can interact with, and a short technical note for your engineering reviewers.
01

Anomaly Detection on KPA Actuals

When a Maker submits a value that's far outside the department's normal range, MOHIT raises a warning before it ever reaches the Checker. Critical anomalies require an explicit override.

Under the hood
Premium implementation: full technical details are part of the licensed deployment.

Demo (live): the current month's reported actual starts at 120, reading Normal against the anomaly threshold; drag the value to see the flag change.
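The scoring model itself is part of the licensed deployment, but the idea can be sketched as a z-score check of the reported actual against the department's recent history. Everything below (the thresholds, the function name) is an illustrative assumption, not the shipped code:

```python
from statistics import mean, stdev

# Hypothetical severity cut-offs; the deployed model's thresholds
# are not disclosed in this brief.
WARNING_Z = 2.0
CRITICAL_Z = 3.0

def anomaly_flag(history, reported):
    """Grade a reported actual against the department's recent history."""
    if len(history) < 2:
        return "normal"  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return "normal" if reported == mu else "critical"
    z = abs(reported - mu) / sigma
    if z >= CRITICAL_Z:
        return "critical"  # requires an explicit override
    if z >= WARNING_Z:
        return "warning"   # surfaced before it reaches the Checker
    return "normal"

print(anomaly_flag([118, 122, 119, 121, 120], reported=120))  # normal
print(anomaly_flag([118, 122, 119, 121, 120], reported=400))
```

The point of the sketch is the shape of the guarantee: the flag is advisory at "warning" severity and only becomes a hard gate (explicit override) at "critical".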
02

Smart Auto-Fill of Predicted Actuals

Instead of starting from a blank form, the Maker sees a recommended value drawn from their department's recent history — with a confidence band so they know how much to trust it.

Under the hood
Premium implementation: full technical details are part of the licensed deployment.

Demo (Maker form, live): from the last 6 months of history (142, 156, 168 shown), the predicted range is 138 – 172 at ≈ 80% confidence.
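The shipped predictor is not disclosed, but the demo's numbers are consistent with a plain mean ± z·σ band over recent history (over 142, 156, 168, a two-sided 80% band lands near 138 – 172). A minimal baseline under that assumption, with hypothetical names:

```python
from statistics import mean, stdev

Z_80 = 1.2816  # two-sided 80% band under a normal assumption

def autofill_suggestion(history):
    """Suggest a starting value plus an ~80% confidence band.

    Deliberately simple baseline (mean +/- z * sigma); the deployed
    model is part of the licensed product and may differ.
    """
    mu, sigma = mean(history), stdev(history)
    return round(mu), round(mu - Z_80 * sigma), round(mu + Z_80 * sigma)

suggested, lo, hi = autofill_suggestion([142, 156, 168])
print(suggested, lo, hi)  # 155 139 172
```

The band is the important part of the contract: the Maker sees not just a number but how much to trust it.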
03

AI-Generated Return Remarks

When a Checker rejects a submission, MOHIT drafts the review comment for them — clear, specific, and respectful in tone. The Checker can edit any word before it reaches the Maker.

Under the hood
Premium implementation: full technical details are part of the licensed deployment.

Demo (Checker workflow, live): choose a reason for return and the AI generates a draft remark, editable before send; a "Show fallback" toggle reveals the deterministic template used when the LLM is unavailable.
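The prompt and model call ship with the licensed deployment; what can be sketched is the contract around them: an editable draft, with a deterministic template taking over if the LLM fails (the same graceful-fallback guarantee described under governance). All names and template text here are hypothetical:

```python
# Hypothetical templates -- the shipped wording and reason codes differ.
FALLBACK_TEMPLATES = {
    "missing_proof": (
        "Thanks for the submission. The reported value for {kpa} could not "
        "be verified because no supporting proof was attached. Please attach "
        "the evidence document and resubmit."
    ),
    "value_mismatch": (
        "The reported value for {kpa} does not match the attached proof. "
        "Please double-check the figure against the evidence and resubmit."
    ),
}

def draft_return_remark(reason, kpa, llm_draft=None):
    """Return an editable draft remark; never block the Checker.

    `llm_draft` is a hypothetical callable wrapping the LLM. If it is
    missing or raises, the deterministic template takes over.
    """
    if llm_draft is not None:
        try:
            return llm_draft(reason=reason, kpa=kpa)
        except Exception:
            pass  # graceful fallback; the product keeps working
    return FALLBACK_TEMPLATES[reason].format(kpa=kpa)

print(draft_return_remark("missing_proof", kpa="Customers onboarded"))
```

Either path returns plain text the Checker can edit word by word before it reaches the Maker.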
04

Document AI · Evidence Verification

MOHIT reads the actual numbers out of uploaded proofs — PDFs and images — and silently compares them to what the Maker reported. Mismatches above 5% are flagged for the Checker.

Under the hood
Premium implementation: full technical details are part of the licensed deployment.

Demo (proof verification, live): Q3 Proof.pdf, page 1 reads "Total customers onboarded: 312" (Region: West · Quarter: Q3). Reported by Maker: 312. Extracted from proof: 312. Values match; no mismatch flagged.
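The extraction step (PDF or image to number) is deployment-specific, but the stated 5% comparison rule is simple to pin down. A sketch, with hypothetical names:

```python
def mismatch_flag(reported, extracted, tolerance=0.05):
    """Flag when the Maker's figure and the extracted figure diverge >5%.

    Only the comparison rule is sketched here; extraction itself is
    part of the licensed deployment.
    """
    if extracted == 0:
        return reported != 0
    return abs(reported - extracted) / abs(extracted) > tolerance

print(mismatch_flag(312, 312))  # False -- values match, nothing raised
print(mismatch_flag(280, 312))  # True  -- >5% apart, flagged for the Checker
```

Note the flag is raised for the Checker, not enforced against the Maker: the comparison informs the review, it does not block it.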
05

Predictive Deadline-Risk Nudges

Mid-cycle, MOHIT projects whether each department will hit its monthly target. If the projection falls below 90%, the lagging Makers get a polite in-app nudge — long before the deadline slips.

Under the hood
Premium implementation: full technical details are part of the licensed deployment.

Demo (mid-cycle projection, live): day 18 of 30 at 72% complete projects to 120% at month-end, reading On track. A "Critical" projection auto-sends a nudge to the lagging Makers.
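A simple linear run-rate reproduces the demo's arithmetic: 72% complete on day 18 of 30 projects to 72% / 18 × 30 = 120%. Whether the shipped projection is this simple is an assumption; the function names below are illustrative:

```python
def projected_completion(done_fraction, day, days_in_cycle):
    """Linear run-rate projection of month-end completion."""
    return done_fraction / day * days_in_cycle

def needs_nudge(done_fraction, day, days_in_cycle, floor=0.90):
    """Nudge the lagging Makers when the projection dips below 90%."""
    return projected_completion(done_fraction, day, days_in_cycle) < floor

# The demo's figures: day 18 of 30 at 72% complete -> 120% projected.
print(round(projected_completion(0.72, 18, 30), 2))  # 1.2
print(needs_nudge(0.72, 18, 30))                     # False -- on track
```

Because the check runs mid-cycle, a department pacing toward 67% on day 18 would already be nudged, long before the deadline slips.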
06

AI Performance Dashboard

Every AI feature is graded weekly: how often suggestions were accepted, how often Checkers overrode the model, and where the model was wrong. You always know whether the AI is earning its keep.

Under the hood
Premium implementation: full technical details are part of the licensed deployment.

This week (live, auto-refreshed):
- Anomaly override rate: 4.2% (↓ 0.8 pp w/w)
- Remark acceptance: 88% (↑ 3 pp w/w)
- Auto-fill kept: 71% (↑ 1 pp w/w)
- Doc extraction success: 94% (↑ 2 pp w/w)
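The weekly numbers reduce to simple aggregations over the logged suggestion and flag events. A hypothetical sketch of two of the metrics (the event shape here is an assumption; the real records live in the audit tables):

```python
def acceptance_rate(events):
    """Share of AI suggestions the human kept.

    `events` is a hypothetical list of {"accepted": bool} records.
    """
    if not events:
        return None
    return sum(e["accepted"] for e in events) / len(events)

def week_over_week_pp(this_week, last_week):
    """Week-over-week movement, in percentage points."""
    return round((this_week - last_week) * 100, 1)

events = [{"accepted": True}] * 88 + [{"accepted": False}] * 12
rate = acceptance_rate(events)
print(f"{rate:.0%}")                  # 88%
print(week_over_week_pp(rate, 0.85))  # 3.0 -- i.e. +3 pp w/w
```

Override rate and extraction success follow the same pattern over their own event types; the dashboard is just these aggregations, refreshed weekly.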

How a single submission flows through MOHIT

Every AI touchpoint is bracketed by a human decision. AI never commits — it informs.
Step 01 · Maker drafts: Auto-fill suggests a likely value with a confidence band.

Step 02 · Submit: Anomaly check runs inline; outliers are flagged.

Step 03 · Proof verified: Document AI extracts figures and compares against the report.

Step 04 · Checker reviews: If returning, AI drafts the remark; Checker edits and sends.

Step 05 · Cycle close: Deadline-risk nudges fire; weekly eval scores every feature.
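The five steps can be sketched as one orchestration skeleton. Every rule and name below is an illustrative stand-in, not the shipped pipeline; the point is the shape: AI computes advice at each step, while the commits (submit, approve or return) stay human:

```python
def submission_flow(history, maker_value, proof_value, checker_decides):
    # Step 01 -- Maker drafts: auto-fill offers a starting suggestion.
    suggestion = sum(history) / len(history)      # stand-in baseline
    value = maker_value if maker_value is not None else suggestion

    # Step 02 -- Submit: inline anomaly check, advisory only.
    spread = max(history) - min(history)
    anomalous = abs(value - suggestion) > spread  # stand-in rule

    # Step 03 -- Proof verified: extracted figure vs. reported figure.
    proof_mismatch = abs(value - proof_value) / proof_value > 0.05

    # Step 04 -- Checker reviews: the human makes the call, informed by flags.
    approved, remark = checker_decides(value, anomalous, proof_mismatch)

    # Step 05 -- Cycle close: nudges and the weekly eval run out-of-band.
    return {"value": value, "approved": approved, "remark": remark,
            "flags": {"anomaly": anomalous, "proof_mismatch": proof_mismatch}}

# A clean pass: value near history, proof matches, Checker approves.
result = submission_flow(
    history=[140, 150, 160],
    maker_value=155,
    proof_value=155,
    checker_decides=lambda value, anomalous, mismatch: (
        not anomalous and not mismatch, None),
)
print(result["approved"], result["flags"])
```

Note where `checker_decides` sits: the flags feed into it, but nothing before it commits anything on its own.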

Trust, governance and the small print

The non-negotiables that make AI safe to deploy inside performance management.

Human-in-the-loop

AI suggests; people decide. Makers and Checkers retain the final say on every value, remark and acceptance.

Full audit trail

Every suggestion, flag and override is persisted with model version and confidence in AISuggestion and AIFlag tables.

Graceful fallbacks

If the LLM is unreachable, deterministic templates and statistical baselines take over. The product never stops working.

Privacy-respecting

No personal data is sent to the LLM. Only the structured KPA context and a redacted style guide leave the perimeter.

Measured, not assumed

The built-in AI evaluation dashboard reports acceptance, override and accuracy weekly — so you always know what's earning its keep.

Switchable

Any capability can be disabled per-tenant via settings. Models, thresholds and prompts are config-driven, not hard-coded.
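The per-tenant switches might be modeled as a plain settings object. Field names, defaults, and the model identifier below are assumptions illustrating the "config-driven, not hard-coded" claim, not the shipped schema:

```python
from dataclasses import dataclass

# Illustrative per-tenant settings -- every capability is a switch,
# and thresholds/models are data, not code.
@dataclass
class TenantAIConfig:
    anomaly_detection: bool = True
    auto_fill: bool = True
    return_remarks: bool = True
    doc_verification: bool = True
    deadline_nudges: bool = True
    eval_dashboard: bool = True
    proof_mismatch_tolerance: float = 0.05  # threshold is config, not code
    nudge_projection_floor: float = 0.90
    remark_model: str = "claude-sonnet-4-6"  # hypothetical model id string

# A tenant that turns nudges off and tightens the mismatch threshold:
tenant = TenantAIConfig(deadline_nudges=False, proof_mismatch_tolerance=0.02)
print(tenant.deadline_nudges, tenant.proof_mismatch_tolerance)
```

Reading a capability gate then becomes a config lookup at the call site rather than a code change and redeploy.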