Bias is real. We make it visible.
People write differently about men and women, about older and younger employees, about Swiss employees and those with foreign passports. The Reference Fairness Engine detects these patterns before they shape a career.
- Five bias axes checked
- Six industry benchmarks
- No storage of reference texts
Five demographic axes along which assessments are often distorted
Gender
In references, women are more often assessed with soft adjectives (engaged, reliable, friendly), men with competence adjectives (analytical, assertive, strategic), even when performance is equal. A simplified sketch of this kind of wording check follows after the five axes.
Age
Older employees are often summed up as "reliable, experienced" instead of with concrete evidence of impact. Younger ones are labelled "high potential" instead of being credited with delivered performance.
Nationality
"Language skills" or "adaptation" are systematically mentioned for employees with a migration background, while the same points go unmentioned for Swiss colleagues.
Personality style
"Introverted" or "reserved" as code for lacking visibility — instead of describing observable behaviour concretely.
Part-time
Part-time employees are often devalued linguistically: their performance is unconsciously measured against full-time output, even when it is fully on par in proportional terms.
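To make this kind of wording check concrete, here is a minimal, deliberately naive sketch of a keyword heuristic for the gender axis. It is an illustration only, not the Fairness Engine's actual method; the word lists, the function, and the skew rule are all hypothetical.

```python
# Illustration only: a toy keyword heuristic, NOT the Fairness Engine's method.
# The word lists and the "skewed" rule are hypothetical examples.
SOFT_ADJECTIVES = {"engaged", "reliable", "friendly", "pleasant", "helpful"}
COMPETENCE_ADJECTIVES = {"analytical", "assertive", "strategic", "decisive"}

def adjective_balance(text: str) -> dict:
    """Collect soft vs. competence adjectives found in a reference text."""
    words = {w.strip(".,;:!?()").lower() for w in text.split()}
    soft = sorted(words & SOFT_ADJECTIVES)
    competence = sorted(words & COMPETENCE_ADJECTIVES)
    return {
        "soft": soft,
        "competence": competence,
        "skewed_soft": bool(soft) and not competence,
    }

print(adjective_balance(
    "She was always friendly, engaged and reliable in her daily work."
))
# -> {'soft': ['engaged', 'friendly', 'reliable'], 'competence': [], 'skewed_soft': True}
```

A real check would need linguistic context (negation, comparatives, who is being described), which is exactly why the engine reports qualitative findings rather than scores.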
How the check works
1. Insert reference
You copy the reference text into the checker. No storage, no transfer to third parties beyond the analysis itself.
2. Choose industry
Six prepared industry profiles (retail, finance, pharma, industry, tech, hospitality) provide the comparison context for industry-typical writing patterns.
3. Optional: provide context
You can optionally provide hints about the demographics of the assessed person (anonymous, used only for the analysis). This makes the bias check more precise.
4. Read finding
You receive qualitative findings per demographic axis: what was observed, where in the text, and a concrete plain-language alternative suggestion. The sketch after this list shows how the four steps could look in code.
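The sketch below is hypothetical: this page documents no public API, and the endpoint, field names, and response shape are all invented to illustrate the four steps.

```python
import json
import urllib.request

# Hypothetical sketch: no public API is documented on this page. The endpoint
# and all field names are invented to illustrate the four steps.
payload = {
    "text": "Ms. Example was always friendly and reliable ...",  # step 1: insert reference
    "industry": "finance",                                       # step 2: one of six profiles
    "context": {"gender": "f", "employment": "part-time"},       # step 3: optional, anonymous
}
request = urllib.request.Request(
    "https://api.zeugnispilot.example/fairness-check",           # hypothetical endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    findings = json.load(response)                               # step 4: read finding

for axis in findings["axes"]:                                    # qualitative, per axis
    print(axis["axis"], "|", axis["observation"], "->", axis["suggestion"])
```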
Industry benchmark models
Writing patterns are not the same in every industry. The Fairness Engine uses industry-typical expectations as the comparison context, so that typical industry phrasing is not falsely flagged as bias and real distortions stand out more clearly. A schematic sketch of such profiles follows after the six descriptions.
Retail
Retail, consumer goods — typically high share of women and part-time.
Finance
Banks, insurance, asset management — typically high share of men in front office.
Pharma
Pharma, life sciences — academic, international, research/compliance focus.
Industry
Industry, mechanical engineering, construction — manual work, safety and precision focus.
Tech
IT, software, engineering — young, international talent pool, focus on learning.
Hospitality
Hotels, restaurants — service, stress resistance, language diversity.
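One way to picture these six profiles is as a small configuration of industry-typical expectations. The sketch below is hypothetical and based only on the descriptions above; it is not the engine's actual data model.

```python
# Hypothetical sketch of the six comparison profiles -- not the actual data model.
# Keys and expectations are taken from the list above; the structure is invented.
INDUSTRY_PROFILES: dict[str, list[str]] = {
    "retail":      ["high share of women", "high share of part-time"],
    "finance":     ["high share of men in front office"],
    "pharma":      ["academic", "international", "research/compliance focus"],
    "industry":    ["manual work", "safety and precision focus"],
    "tech":        ["young, international talent pool", "focus on learning"],
    "hospitality": ["service", "stress resistance", "language diversity"],
}

def comparison_context(industry: str) -> list[str]:
    """Return the industry-typical expectations used as comparison context."""
    return INDUSTRY_PROFILES[industry]
```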
What the engine does NOT do
- No quantitative bias statements. We give no percentages, because a serious statistical claim would require real comparison corpora, and we do not have them.
- No replacement for human judgement. The engine provides hints, not prescriptions. You decide which findings to adopt.
- No assessment of the person. We analyse only the language of the reference, not the assessed person.
- No storage. Texts are passed once for analysis and then discarded — no re-analysis, no history, no export.
Frequently asked questions
Is the Fairness Engine a replacement for human assessment?
Are my reference texts stored?
On what data basis does the industry comparison work?
Can you also detect bias regarding other axes (religion, sexual orientation, disability)?
What does the Fairness Engine cost?
Who developed the Fairness Engine?
Check a reference — free during beta.
Log in with your ZeugnisPilot account. If you don't have one yet, registration takes two minutes and the Starter plan is free.