Anomaly Detection on KPA Actuals
When a Maker submits a value that's far outside the department's normal range, MOHIT raises a warning before it ever reaches the Checker. Critical anomalies require an explicit override.
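A minimal sketch of how such an inline check might work, using a z-score against recent departmental history. The 2.0 (warn) and 4.0 (critical) thresholds and the function name are illustrative assumptions, not MOHIT's actual implementation:

```python
from statistics import mean, stdev

def check_anomaly(submitted: float, history: list[float],
                  warn_z: float = 2.0, critical_z: float = 4.0) -> str:
    """Classify a submitted actual against recent history (illustrative)."""
    if len(history) < 3:
        return "ok"            # too little history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return "ok" if submitted == mu else "critical"
    z = abs(submitted - mu) / sigma
    if z >= critical_z:
        return "critical"      # would require an explicit override
    if z >= warn_z:
        return "warn"          # surfaced to the Maker before the Checker sees it
    return "ok"
```

A more robust variant could use median and MAD instead of mean and standard deviation, which resists distortion by past outliers.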
MOHIT pairs lightweight statistical models with a frontier language model (Claude Sonnet 4.6) to remove busywork from Makers and Checkers — without ever taking the final decision out of human hands.
Instead of starting from a blank form, the Maker sees a recommended value drawn from their department's recent history — with a confidence band so they know how much to trust it.
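One simple way to produce such a recommendation, sketched here under the assumption that the band is a roughly 95% interval (1.96 standard deviations) around the recent mean; the source does not specify how MOHIT's band is actually computed:

```python
from statistics import mean, stdev

def suggest_value(history: list[float]) -> tuple[float, float, float]:
    """Return (suggested, low, high) from recent actuals (illustrative)."""
    mu = mean(history)
    if len(history) < 2:
        return mu, mu, mu          # no spread to estimate a band from
    band = 1.96 * stdev(history)   # assumed ~95% confidence band
    return mu, mu - band, mu + band
```

A narrow band signals the department's numbers have been stable; a wide one tells the Maker to treat the suggestion as a rough starting point.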
When a Checker rejects a submission, MOHIT drafts the review comment for them — clear, specific, and respectful in tone. The Checker can edit any word before it reaches the Maker.
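The shape of such a draft can be sketched with a deterministic template; in the real system an LLM would produce richer wording, but a skeleton like this (all names and phrasing are illustrative assumptions) conveys the tone the feature aims for:

```python
def build_remark(kpa_name: str, reason: str, requested_action: str) -> str:
    """Draft a clear, respectful return remark (illustrative template)."""
    return (
        f"Thank you for the submission on '{kpa_name}'. "
        f"It is being returned because {reason}. "
        f"Please {requested_action} and resubmit. Happy to clarify if needed."
    )
```

The Checker always sees the draft in an editable field, so the template is a starting point, never the final word.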
MOHIT reads the actual numbers out of uploaded proofs — PDFs and images — and silently compares them to what the Maker reported. Mismatches above 5% are flagged for the Checker.
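The comparison step can be sketched as a relative-difference check against the 5% threshold stated above. Extraction itself (PDF/image parsing) is out of scope here, so the extracted figure is passed in directly; the function name is an assumption:

```python
def mismatch_flag(reported: float, extracted: float,
                  threshold: float = 0.05) -> bool:
    """Flag when the reported value deviates from the extracted one by >5%."""
    if extracted == 0:
        return reported != 0       # any nonzero report against a zero proof
    return abs(reported - extracted) / abs(extracted) > threshold
```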
Mid-cycle, MOHIT projects whether each department will hit its monthly target. If the projection falls below 90%, the lagging Makers get a polite in-app nudge — long before the deadline slips.
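A minimal version of such a projection is a linear run-rate extrapolation, sketched below with the 90% threshold from the description; the real model may weight recent days differently:

```python
def projected_attainment(actual_to_date: float, day: int,
                         days_in_month: int, target: float) -> float:
    """Extrapolate month-to-date actuals linearly and compare to target."""
    projected = actual_to_date * days_in_month / day
    return projected / target

def needs_nudge(actual_to_date: float, day: int,
                days_in_month: int, target: float) -> bool:
    """Nudge when the projection falls below 90% of target (illustrative)."""
    return projected_attainment(actual_to_date, day, days_in_month, target) < 0.90
```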
Every AI feature is graded weekly: how often suggestions were accepted, how often Checkers overrode the model, where the model was wrong. You always know whether the AI is earning its keep.
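The weekly grading reduces to simple rates over logged outcomes. A sketch, assuming each suggestion is tagged with one of "accepted", "edited", or "overridden" (the labels and field names are assumptions, not MOHIT's schema):

```python
from collections import Counter

def weekly_report(outcomes: list[str]) -> dict[str, float]:
    """Summarise logged suggestion outcomes into weekly rates."""
    counts = Counter(outcomes)
    total = len(outcomes) or 1     # avoid division by zero on an empty week
    return {
        "acceptance_rate": counts["accepted"] / total,
        "override_rate": counts["overridden"] / total,
    }
```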
Auto-fill suggests a likely value with a confidence band.
Anomaly check runs inline; outliers are flagged.
Document AI extracts figures and compares against the report.
If the submission is returned, AI drafts the remark; the Checker edits and sends.
Deadline-risk nudges fire; weekly eval scores every feature.
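The steps above can be sketched as one ordered pipeline that halts early when a stage blocks the submission. Stage names and the `blocked` convention are illustrative, not MOHIT's actual API:

```python
def run_pipeline(submission: dict, stages) -> dict:
    """Run each stage in order; stop if one blocks the submission."""
    for stage in stages:
        submission = stage(submission)
        if submission.get("blocked"):  # e.g. a critical anomaly awaiting override
            break
    return submission

def anomaly_stage(sub: dict) -> dict:
    # Stand-in for the inline anomaly check.
    sub["blocked"] = sub["value"] > sub.get("ceiling", float("inf"))
    return sub

def doc_compare_stage(sub: dict) -> dict:
    # Stand-in for the document-AI comparison.
    sub["doc_checked"] = True
    return sub
```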
AI suggests; people decide. Makers and Checkers retain the final say on every value, remark and acceptance.
Every suggestion, flag and override is persisted with model version and confidence in AISuggestion and AIFlag tables.
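A sketch of what one such audit record might look like. The table name AISuggestion comes from the description above; the individual field names are assumptions about the schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AISuggestion:
    """Illustrative audit record for one auto-fill suggestion."""
    kpa_id: str
    suggested_value: float
    confidence: float                   # 0.0 - 1.0
    model_version: str                  # persisted so results stay attributable
    accepted: Optional[bool] = None     # filled in once the Maker decides
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```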
If the LLM is unreachable, deterministic templates and statistical baselines take over. The product never stops working.
No personal data is sent to the LLM. Only the structured KPA context and a redacted style guide leave the perimeter.
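One common way to enforce this is an allow-list applied to the outbound context, so anything not explicitly structured KPA data is dropped. The specific field names below are assumptions for illustration:

```python
# Only these structured KPA fields may leave the perimeter (assumed names).
ALLOWED_FIELDS = {"kpa_name", "target", "actual", "period", "department_code"}

def redact_context(context: dict) -> dict:
    """Drop every field not on the allow-list before the LLM call."""
    return {k: v for k, v in context.items() if k in ALLOWED_FIELDS}
```

An allow-list fails closed: a newly added field is withheld by default until someone deliberately approves it, which is the safer direction for a privacy boundary.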
The built-in AI evaluation dashboard reports acceptance, override and accuracy weekly — so you always know what's earning its keep.
Any capability can be disabled per-tenant via settings. Models, thresholds and prompts are config-driven, not hard-coded.
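A sketch of what config-driven control per tenant could look like: defaults merged with tenant overrides, with feature flags and thresholds living in data rather than code. Keys and default values are illustrative:

```python
DEFAULTS = {
    "anomaly_check": True,
    "doc_extraction": True,
    "mismatch_threshold": 0.05,   # document-vs-report tolerance
    "nudge_threshold": 0.90,      # projected attainment below this triggers a nudge
}

def tenant_config(overrides: dict) -> dict:
    """Merge tenant-specific overrides onto the product defaults."""
    cfg = dict(DEFAULTS)
    cfg.update(overrides)
    return cfg

def feature_enabled(cfg: dict, feature: str) -> bool:
    return bool(cfg.get(feature, False))
```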