Reference letters. Clear. Fair. Compliant.
Fairness Engine · Beta

Bias is real. We make it visible.

People write differently about men and women, about older and younger employees, about Swiss nationals and employees with foreign passports. The Reference Fairness Engine detects these patterns — before they shape a career.

  • Five bias axes checked
  • Six industry benchmarks
  • No storage of reference texts

What we check

Five demographic axes along which reference language is often distorted

Gender

In references, women are more often described with soft adjectives (engaged, reliable, friendly) and men with competence adjectives (analytical, assertive, strategic), even for equal performance.

Age

Older employees often receive "reliable, experienced" as the main assessment instead of concrete evidence of impact. Younger ones receive "high potential" instead of credit for delivered performance.

Nationality

References for employees with a migration background systematically mention "language ability" or "adaptation", while the same mention is missing for Swiss colleagues.

Personality style

"Introverted" or "reserved" as code for lacking visibility — instead of describing observable behaviour concretely.

Part-time

Part-time employees are often devalued linguistically: their performance is unconsciously measured against full-time output, even when it is fully on par in proportional terms.

Four steps

How the check works

1. Paste the reference

You paste the reference text into the checker. Nothing is stored, and nothing is transferred to third parties beyond the analysis itself.

2. Choose industry

Six prepared industry profiles (retail, finance, pharma, industry, tech, hospitality) provide the comparison context for industry-typical writing patterns.

3. Optional: provide context

You can optionally provide hints about the demographics of the assessed person (anonymously, only for the analysis). This makes the bias check more precise.

4. Read the findings

You receive qualitative findings per demographic axis: what was observed, where in the text, and a concrete plain-language suggestion as an alternative. A sketch of what a request and its findings could look like follows below.
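
To make the four steps concrete, here is a minimal sketch of the shapes involved. All type and field names are illustrative assumptions, not the actual ZeugnisPilot interface.

  // Illustrative sketch: type and field names are assumptions,
  // not the actual ZeugnisPilot interface.
  type Industry =
    | "retail" | "finance" | "pharma"
    | "industry" | "tech" | "hospitality";

  type BiasAxis =
    | "gender" | "age" | "nationality"
    | "personalityStyle" | "employmentLevel";

  interface CheckRequest {
    referenceText: string;     // step 1: the pasted reference, never stored
    industry: Industry;        // step 2: comparison context
    demographics?: {           // step 3: optional, anonymous hints
      gender?: "female" | "male" | "other";
      ageBand?: "under30" | "30to50" | "over50";
    };
  }

  interface Finding {
    axis: BiasAxis;            // which demographic axis
    observation: string;       // what was observed
    textSpan: string;          // where in the text
    suggestion: string;        // plain-language alternative
  }

  // Step 4: the result is a list of qualitative findings, not scores.
  interface CheckResult {
    findings: Finding[];
  }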

Six prepared profiles

Industry benchmark models

Writing patterns are not the same in every industry. The Fairness Engine uses industry-typical expectations as comparison context, so that industry-typical particularities are not falsely flagged as bias and real distortions stand out more clearly. A sketch of how such a profile could be modelled follows the list below.

Retail

Retail, consumer goods — typically a high share of women and part-time staff.

Finance

Banks, insurance, asset management — typically a high share of men in front-office roles.

Pharma

Pharma, life sciences — academic, international, research/compliance focus.

Industry

Industry, mechanical engineering, construction — focus on manual work, safety and precision.

Tech

IT, software, engineering — young, international talent pool, focus on learning.

Hospitality

Hotels, restaurants — service, stress resistance, language diversity.
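
As a sketch of the idea (the engine's internal representation is not public, so these names are assumptions), an industry profile can be thought of as a set of expected patterns that the analysis treats as industry-typical rather than as bias signals:

  // Sketch only: a plausible shape for an industry profile; names are
  // assumptions, not the engine's internal representation.
  interface IndustryProfile {
    key: string;                // e.g. "retail", "hospitality", ...
    typicalWorkforce: string[]; // patterns expected in this industry
    expectedThemes: string[];   // themes that are not bias signals here
  }

  // Example: language diversity and stress resistance are industry-typical
  // in hospitality, so mentioning them is not a bias signal by itself.
  const hospitality: IndustryProfile = {
    key: "hospitality",
    typicalWorkforce: ["high language diversity", "shift and part-time work"],
    expectedThemes: ["service orientation", "stress resistance"],
  };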

Methodological honesty

What the engine does NOT do

  • No quantitative bias statements. We give no percentages, because a serious statistical statement would require real comparison corpora, which we do not have.
  • No replacement for human judgement. The engine provides hints, not prescriptions. You decide which findings to adopt.
  • No assessment of the person. We analyse only the language of the reference, not the assessed person.
  • No storage. Texts are passed once for analysis and then discarded — no re-analysis, no history, no export.

Fairness Engine FAQ

Frequently asked questions

Is the Fairness Engine a replacement for human assessment?
No. It is a tool that makes systematic linguistic patterns visible — the decision on what to change is yours. The engine delivers findings plus alternatives, not verdicts.
Are my reference texts stored?
No. The text is passed once to ZeugnisPilotAI for analysis (Swiss hosting, without storage at the model provider) and then discarded. There is no history, no export, no re-analysis.
On what data basis does the industry comparison work?
Heuristically: we use ZeugnisPilotAI with a structured prompt that retrieves industry-typical writing patterns from training knowledge and compares them with the submitted text. This is not a statistical model but a qualitative heuristic, robust for obvious cases but not suitable for quantitative statements.
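
As an illustration of this heuristic (the actual prompt is not public, so the wording below is an assumption), each check can be thought of as one structured prompt per analysis:

  // Assumed sketch: the real Fairness Engine prompt is not public.
  function buildFairnessPrompt(referenceText: string, industry: string): string {
    return [
      `You are reviewing a reference letter from the ${industry} industry.`,
      "Recall industry-typical writing patterns from your training knowledge",
      "and compare them with the text below.",
      "For each axis (gender, age, nationality, personality style,",
      "employment level), report qualitative findings only: what you",
      "observed, where in the text, and a plain-language alternative.",
      "Do not give percentages or scores.",
      "",
      referenceText,
    ].join("\n");
  }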
Can you also detect bias regarding other axes (religion, sexual orientation, disability)?
Currently the five standard axes are covered (gender, age, nationality, personality style, employment level). We are continuously evaluating extensions; the prompt already allows an additional "other" axis for findings outside the standard axes.
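
A minimal sketch of what that could look like (the names are illustrative, not the engine's schema):

  // Illustrative only: axis names as they might appear in a finding schema.
  type StandardAxis =
    | "gender"
    | "age"
    | "nationality"
    | "personalityStyle"
    | "employmentLevel";

  // "other" catches extraordinary findings outside the five standard axes,
  // e.g. wording that hints at religion, sexual orientation, or disability.
  type BiasAxis = StandardAxis | "other";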
What does the Fairness Engine cost?
During the beta phase: free for all logged-in users. Rate limit: 20 checks per 10 minutes per account. The final pricing model will be communicated as soon as the beta phase is complete.
Who developed the Fairness Engine?
rhyno solutions AG, Schaffhausen — the same Swiss company that also develops ZeugnisPilot. ISO/IEC 27001 certified, hosting in Switzerland. More in the Trust Center.

Check a reference — free during beta.

Log in with your ZeugnisPilot account. If you don't have one yet, registration takes 2 minutes and the Starter plan is free.