Sutra — User Journey Map
Talent Intelligence Layer · Recruiter + Hiring Manager perspectives
Version 0.2 · March 2026
Interactive prototype · Airbnb internal
Two talent pools lost every day
Most recruiting tools are transaction processors — they record hiring decisions but don't reason across them. Sutra sits on top of existing ATS infrastructure to recover two categories of value that currently expire silently.
🏢

Internal talent goes unnoticed

High performers on adjacent teams are never surfaced to hiring managers opening new roles. The HM defaults to external search — incurring cost, ramp time, and the risk of losing the internal employee entirely when they feel overlooked.

📂

Strong candidates expire prematurely

External candidates who didn't get an offer — due to timing, headcount, or level mismatch — are abandoned in ATS limbo with no re-engagement path. A candidate strong enough to enter a pool in Year 1 is often still relevant in Year 3.

The gap: Current tools (Greenhouse, Lever, Workday) are systems of record, not systems of reasoning. Sutra closes that gap — not by replacing the ATS, but by making it smarter about who you already know. Think of it as: "Greenhouse is your transaction processor. Sutra is your talent memory."
Two roles, two scopes
Sutra has two user types with deliberately different access levels. The asymmetry is intentional — churn risk is sensitive personnel data that should never reach a hiring manager's view.
👩‍💼

Recruiter

Full access · Owns the full pipeline
Goal Fill roles faster with less regrettable attrition
Pain today Manually building context across ATS, spreadsheets, and email
Signal needed Who's ready to advance, who's going cold, who might leave
Screen access
✓ Open Roles ✓ Role Breakout ✓ Candidate Packet ✓ At Risk ✓ Notifications ✓ Settings
👨‍💻

Hiring Manager

Scoped access · Owns their roles only
Goal Hire the right person for their team, with minimal friction
Pain today Slow feedback loops with recruiting, unclear pipeline status
Signal needed Who to interview next, what's waiting on them
Screen access
✓ Open Roles (scoped) ✓ Role Breakout ✓ Candidate Packet ✗ At Risk ✗ Churn scores ✗ Settings
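The access asymmetry above can be sketched as a simple capability map. This is a hypothetical illustration, not the product's actual API; screen identifiers and function names are assumptions. Churn visibility is modeled separately from screens because it is a field-level restriction: an HM can open a Candidate Packet but must never see churn data inside it.

```typescript
// Hypothetical sketch of the per-role screen access described above.
type Screen =
  | "openRoles"
  | "roleBreakout"
  | "candidatePacket"
  | "atRisk"
  | "notifications"
  | "settings";

type Role = "recruiter" | "hiringManager";

const SCREEN_ACCESS: Record<Role, Set<Screen>> = {
  recruiter: new Set<Screen>([
    "openRoles", "roleBreakout", "candidatePacket",
    "atRisk", "notifications", "settings",
  ]),
  // HMs get a scoped Open Roles list; no At Risk tab, no Settings.
  hiringManager: new Set<Screen>(["openRoles", "roleBreakout", "candidatePacket"]),
};

function canView(role: Role, screen: Screen): boolean {
  return SCREEN_ACCESS[role].has(screen);
}

function canSeeChurn(role: Role): boolean {
  // Churn risk is sensitive personnel data — recruiter-only by design.
  return role === "recruiter";
}
```

Enforcing the churn restriction at the data layer rather than the UI layer is what makes the scope guarantee hold even if a new screen is added later.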
From portfolio triage to candidate decision
The typical recruiter manages 10–15 open roles across multiple hiring managers. The journey below traces a single morning triage session that surfaces a role in need of attention.
👩‍💼
Jamie Reyes · Senior Recruiter, Tech
Managing 12 open roles · Airbnb HQ · 4 years recruiting
1
Morning triage

Jamie opens Sutra at the start of the day. Rather than digging through Greenhouse filters, they get an org-level snapshot: 25 of 47 open reqs are tracked in Sutra, 547 employees match at least one open role, 6 roles are stalled, and 312 employees are flagged as at-risk. The headline copy is deliberately editorial — it names the action, not just the number.

Doing

Scanning the dashboard for signals that require same-day action. Looking for stalled roles and roles with internal candidates.

Sutra provides

Consolidated org view across all reqs. Stalled roles and at-risk employees are surfaced immediately — triage starts at the problem, not a blank list.

Watch for

Does "25 of 47 in Sutra" create confusion about the other 22 roles? Incomplete coverage may undermine trust in the org numbers.

sutra · Open Roles
Open Roles — portfolio homepage
Open live ↗
2
Drill into a role

Jamie clicks into Product Manager — Trust & Safety (47 days open, Needs Attention). The role breakout loads with a pipeline funnel at the top and a prioritized action band below. An offer is already extended to Jordan Kim — the teal card anchors the band as the most contextually significant item. Three candidates need decisions today.

Doing

Reading the action band to understand what decisions are theirs vs. what's waiting on the hiring manager or candidate. Checking the offer status in the funnel.

Sutra provides

Action band surfaces exactly what needs a decision today — the extended offer, a feedback gap, and a candidate going cold — without the recruiter having to build this picture manually across tabs.

Watch for

Does the visual hierarchy (teal offer card vs. red urgency cards) clearly communicate priority order? The offer card is informational — the red cards require action.

sutra · PM — Trust & Safety
Role Breakout — PM, Trust & Safety
Open live ↗
3
Review the candidate pool

Scrolling past the action band, Jamie reviews all 7 candidates. They use the filter tabs to pivot between views — toggling "Sutra Recommended" to see the 7 above the 75% match threshold, and "Internal" to focus on the 3 employees already at Airbnb. Internal candidates are marked with an orange prefix so they're scannable at a glance.

Doing

Scanning match scores, churn risk chips, and stage labels. Using filter tabs to narrow focus. Noting which candidates are internal vs. external without opening each packet.

Sutra provides

"Sutra Recommended" makes AI filtering explicit and opt-in — the recruiter is never silently shown a curated view. Internal candidates are labeled inline so the recruiter sees mobility opportunity without switching screens.

Watch for

7 candidates for a role with 847 applicants — will the recruiter trust that threshold? The match score threshold (75%) is configurable in Settings, but discoverability matters.

sutra · PM — Trust & Safety · Internal (3)
Candidate pool — Internal filter active
Open live ↗
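The "Sutra Recommended" toggle in this step amounts to an explicit, opt-in filter over match scores. A minimal sketch, assuming illustrative names and shapes (the real candidate model is not specified here):

```typescript
// Hypothetical candidate shape for illustrating the filter tabs in Step 3.
interface Candidate {
  name: string;
  matchScore: number;   // 0–100, model-derived
  isInternal: boolean;  // drives the orange "internal" prefix in the list
}

// Threshold is configurable in Settings; 75 is the default in this journey.
function sutraRecommended(pool: Candidate[], threshold = 75): Candidate[] {
  return pool.filter((c) => c.matchScore >= threshold);
}

function internalOnly(pool: Candidate[]): Candidate[] {
  return pool.filter((c) => c.isInternal);
}
```

The key design choice survives the sketch: the recruiter always applies the filter deliberately; the full 847-applicant pool is never silently replaced by the curated seven.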
4
Deep-dive on a candidate

Jamie clicks into Mia Tanaka — an internal L3 PM at Airbnb with a 91% match score and 3 champions. The candidate packet shows a Sutra AI Summary, always-visible AI flags, a validated skills breakdown, and endorsements. Mia has High Churn risk and is simultaneously active in two role instances — both facts are disclosed explicitly.

Doing

Reading the AI summary, checking which skills are validated vs. self-reported, reviewing champion endorsements, and deciding whether to advance Mia to the next stage.

Sutra provides

Validated skills are always distinguished from inferred ones. AI flags are never suppressed. Multi-role consideration is disclosed. The recruiter can make a calibrated decision — not one based on algorithm output alone.

Watch for

The AI summary is dense prose. Under time pressure, recruiters may skip it and rely only on the match score — which inverts the intended validation hierarchy. Key signals should be skimmable in <10 seconds.

sutra · Mia Tanaka · Candidate Packet
Candidate Packet — Mia Tanaka
Open live ↗
5
Proactive retention check

Before wrapping up, Jamie visits the At Risk tab — recruiter-only, never visible to HMs. 312 employees are flagged org-wide; 16 match open roles right now. Mia Tanaka appears at the top — already in the PM pipeline but at high churn risk with 32 months without promotion. Jamie can now coordinate an internal move before Mia starts looking externally.

Doing

Scanning at-risk employees who match open roles. Identifying Mia — already in the pipeline — as a candidate where internal mobility is the right retention lever.

Sutra provides

At Risk surfaces churn signals before they become exits. Each card shows matched open roles with match scores, so the recruiter can act on retention and hiring simultaneously without switching tools.

Watch for

312 flagged employees is a large number to triage. The current list sorts by match but offers no churn severity ranking. Recruiters may need a "most urgent" sort to act on the right employees first.

sutra · At Risk · 312 flagged
At Risk — recruiter-only view
Open live ↗
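The "Watch for" note above flags the missing urgency ranking. One possible ordering, sketched here as a hypothetical alternative to the shipped match-only sort: churn severity first, then hiring leverage (best match against any open role) as the tiebreaker. Field names are assumptions.

```typescript
// Hypothetical at-risk row for illustrating an urgency sort.
interface AtRiskEmployee {
  name: string;
  churnScore: number;     // 0–100, higher = more likely to leave
  bestMatchScore: number; // highest match against any open role, 0–100
}

// Sort a copy: churn severity descending, then best match descending.
function byUrgency(list: AtRiskEmployee[]): AtRiskEmployee[] {
  return [...list].sort(
    (a, b) => b.churnScore - a.churnScore || b.bestMatchScore - a.bestMatchScore
  );
}
```

This is only one candidate answer to open question Q3; other signals (tenure gap, recent role changes) could feed the same comparator.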
6
Calibrating the thresholds

Every number surfaced in the journey above is configurable. The recruiter — or their team lead — visits Settings to tune the signal sensitivity to their organization's norms. These are organization-level settings, not per-recruiter, so changes affect the entire team's view. The defaults are reasonable starting points, but orgs with a lower hiring bar or a higher-attrition environment will need to adjust.

Doing

Reviewing the thresholds that drive all AI signals across the product. Adjusting match score or churn risk cutoffs after observing real pipeline results.

Sutra provides

Full transparency into every tunable parameter. No AI signal is a black box — recruiters can see exactly what score drives what surface, and change it.

Watch for

Settings are organization-level in v1 — one recruiter changing the match threshold affects everyone. Is that the right default? Teams with mixed hiring bars may need per-role overrides.

Where each threshold appears in this journey
75%
Match score threshold
→ Steps 2 & 3: drives "Sutra Recommended" filter · filters 847 applicants to 7 candidates above threshold
70%
Churn risk threshold
→ Steps 4 & 5: determines who appears in At Risk · drives "High Churn" chip on Mia Tanaka's candidate packet
18 mo
Tenure gap signal
→ Step 5: Mia's 32 months without promotion triggers the churn flag · threshold controls when the clock starts
Never
Talent pool expiration
→ Affects all steps: external candidates surfaced via "Endorsed" filter may have entered the pool years ago · never silently removed
sutra · Settings
Settings — configurable signal thresholds
Open live ↗
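The four thresholds mapped above could live in a single org-level configuration object. A sketch under stated assumptions: field names are invented for illustration, and the defaults are taken from this journey (75% match, 70% churn, 18-month tenure gap, no pool expiration).

```typescript
// Hypothetical org-level signal configuration (Step 6).
interface OrgSignalConfig {
  matchScoreThreshold: number;         // % — drives "Sutra Recommended"
  churnRiskThreshold: number;          // % — gates At Risk and the "High Churn" chip
  tenureGapMonths: number;             // months without promotion before flagging
  poolExpirationMonths: number | null; // null = talent pools never expire
}

const DEFAULTS: OrgSignalConfig = {
  matchScoreThreshold: 75,
  churnRiskThreshold: 70,
  tenureGapMonths: 18,
  poolExpirationMonths: null,
};

// Mia's case from Step 5: 32 months without promotion exceeds the 18-month gap.
function tenureGapFlag(monthsSincePromotion: number, cfg: OrgSignalConfig): boolean {
  return monthsSincePromotion >= cfg.tenureGapMonths;
}
```

Modeling "never expires" as an explicit `null` (rather than a large number) keeps the "Talent pools don't expire" commitment legible in the config itself.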
Scoped view, same intelligence
The HM sees only their own role instances. No At Risk tab. No churn scores. Otherwise the same action band and candidate experience — designed so HMs get the context they need without accessing sensitive personnel data that should stay with recruiting.
👨‍💻
Priya Sharma · Senior Director, Trust & Safety PM
Hiring for 1 open role · PM — Trust & Safety (L4)
1
Scoped pipeline view

Priya opens Sutra and lands directly on her role — no open roles list, no org-wide stats. She sees the same action band and candidate table as the recruiter, but churn risk chips are hidden and the At Risk tab is absent from navigation. The HM banner reminds her of her access scope.

Doing

Checking what's pending on her side — specifically which candidates are waiting on her for feedback or scheduling. Acting on the items in the action band.

Sutra provides

Clear "waiting on hiring manager" labels in the action band tell Priya exactly what she owns. She doesn't need to ask recruiting for status — the pipeline ownership is visible.

Watch for

HMs may not immediately understand they're in a scoped view. The absence of the At Risk tab and certain data fields needs to feel intentional, not broken. The HM banner helps — but should it be more prominent?

sutra · PM — Trust & Safety · HM view
Role Breakout — Hiring Manager view
Open live ↗
Design principles that must hold
These are product commitments, not preferences. Every design decision should be evaluated against them.
🔍

No hallucinations

Every AI output must be source-traceable. No inference is displayed without an auditable signal. Self-reported skills are always labeled as such.

🤝

Human in the loop

Sutra never takes autonomous action on a candidate. No auto-advance, no auto-reject, no auto-outreach. Every action requires a deliberate human click.

👁

Candidate transparency

Multi-role consideration is always disclosed on the candidate packet. Candidates see when they're being considered for more than one role.

✅

Validated over inferred

Skill validation status is always visible. Validated skills are visually distinguished from inferred ones. Decisions should weight validated signals heavily.

🔒

Churn is sensitive

Churn risk data is recruiter-only. It is never surfaced to hiring managers — doing so could create awkward dynamics or bias performance reviews.

♾️

Talent pools don't expire

Candidates in the pool remain indefinitely unless explicitly archived or opted out. The value of the talent graph compounds over time.

Open questions for this prototype
These are the tensions this journey surfaces that need resolution before v1 scoping.
Q1
Does Sutra solve a workflow problem or a data problem — and does the recruiter experience reflect the right one?
The action band is workflow-oriented (what do I do next?). The At Risk tab is data-oriented (who might leave?). Are we trying to do both in v1, or should one be primary?
Q2
Will recruiters trust the AI match score enough to act on "Sutra Recommended" candidates?
A 75% threshold filters 847 applicants down to 7. If the recruiter doesn't trust the score methodology, the filter becomes noise rather than signal. How do we build trust in the score early?
Q3
How does the recruiter manage 312 at-risk employees without a severity ranking?
The current At Risk list is paginated but not urgency-sorted. A recruiter can't act on 312 people — they need a "most urgent this week" view. What signals determine urgency beyond churn score?
Q4
What happens to the action band when the offer is declined?
The current prototype surfaces an offer card and advises keeping other candidates warm. If Jordan Kim declines, the band should reset — but how does that state change get communicated back to Sutra from the ATS?
Q5
Is the HM experience differentiated enough to justify a separate view toggle?
The HM view currently removes At Risk access and churn chips. Is that the right level of differentiation — or should the HM experience have its own information architecture rather than a scoped version of the recruiter's?
Q6
Where does candidate consent and transparency actually live?
The spec calls for candidates to see when they're being considered for multiple roles. This is shown in the recruiter packet but there's no candidate-facing view in v1. What's the interim disclosure mechanism?