Indeed • 2025 Capstone Project

AI Transparent Hiring

Indeed AI Hiring Tools for Employers

Role Calibration

Extracted job details

Job criteria review

Calibration summary

Smart Screen

Find top matches

Understand why they're a match

Compare tradeoff

Verify candidates

Company

Indeed (Sponsored)

My Role

Product Manager/Designer

Timeline

Jan–Jun 2025

Responsibilities

Product Strategy

Visual Design

Prototyping

Design Iterations

User Testing

Project Brief

Hiring teams on Indeed often mistrust AI-powered resume screening tools — not due to a lack of talent, but because the system misunderstands what employers are truly looking for. Recruiters can’t see how candidates are selected and have no effective way to guide the AI’s interpretation of their hiring needs.

Our team thus redesigned the top of the hiring funnel on Indeed to bridge this gap. We introduced two features: Role Calibration, which helps employers define and validate hiring intent early, and Smart Screen, which increases trust in AI matching by making decision criteria transparent and traceable. 

Together, these tools give hiring managers and recruiters more control and clarity, thus enabling faster, more confident decisions without relying on black-box automation.

Background

About Capstone

This two‑quarter capstone project was an industry collaboration with Indeed. Our brief—“Equitable and efficient hiring with AI support”—came with clear guardrails from the sponsor: focus on employer‑side workflows, keep GenAI at the core, avoid chatbot‑only interactions, and ensure every concept preserves employer trust and human‑in‑the‑loop oversight.

We met with Indeed’s product and UX mentors every other week; they helped us validate what was in‑scope, what was out‑of‑bounds, and steered us toward the highest‑impact opportunities while still giving us room to shape the problem, conduct independent research, and iterate on design.

AI-Assisted Hiring

Traditional hiring struggles to keep pace with shifting demands, while AI offers new opportunities: more diverse talent pools and faster screening. However, most systems operate as opaque black boxes. Hiring managers distrust rankings they can't audit, fear bias hidden in training data, and often revert to manual review. Our north-star question thus became:

How might AI empower teams to make faster, more‑equitable hiring decisions without sacrificing visibility or control?

Indeed's Gaps

Low candidate quality despite massive reach

Even with over 350 million monthly active users and 525 million job seeker profiles, Indeed found employers still struggle to find high-quality candidates. Indeed's own data shows ~60% of applicants match JD requirements, yet "poor candidate quality" remains the #1 NPS detractor, revealing a trust and transparency gap rather than a supply problem.

Low trust in AI tools slows hiring

Indeed’s current AI summaries and match recommendations are seen as generic and unverifiable, so employers fall back to manual resume scans, which adds time, erodes trust, and masks the value of Indeed’s data advantage.

[Diagram] Indeed's Matching Engine connects job seekers (350M+ monthly unique visitors; 525M+ global job seeker profiles) with employers (3.5M+ employers; 30M+ jobs). Source: Indeed internal data, 2023.

Stakeholders

Roles

Responsibility

Core Goals

Core Concerns

Hiring Managers

Final decision maker

Fill roles with high‑quality, contextually relevant talent

Need clarity on why a candidate is surfaced; want flexibility to override AI

Recruiters

Frontline resume screeners

Screen at speed & scale

Balance efficiency with fairness; avoid false negatives

Project Goals

01

Improve quality & equity

Leverage GenAI to surface stronger, more diverse talent without introducing new bias.

02

Rebuild employer trust

Make AI-assisted tools reliable and transparent so teams feel confident adopting them.

03

Accelerate decisions

Shorten the time from job posting to shortlist by streamlining review with clearer AI support.

Research

Research Methods

To solve this, we first needed to move beyond business metrics and understand the underlying human truths driving the low quality, low trust, and slow decisions. Our initial research aimed to answer two questions:

  • Why do employers perceive candidate quality as low?

  • Why do they fundamentally distrust the AI designed to help them?

Literature Reviews

Analyzed 20+ academic studies to identify established findings on the efficiency, ethical, and fairness concerns in AI hiring.

Research reveals a mismatch between AI ranking logic and employer evaluation, with fairness efforts reducing bias only partially and often compromising accuracy.

User Interviews

Conducted interviews with 8 recruiters and hiring managers across organization sizes, focusing on JD creation, candidate matching, and LLM usage.

Research showed that employers won't trust an AI they can't understand or control, demanding transparency and the ability to "calibrate" results above all else.

Found Object

Analyzed 1,000+ Reddit posts from r/IndeedJobs & r/recruiting to surface real-world pain points through thematic coding.

Research reveals that employers struggle to find qualified candidates despite high applicant volume. Many applicants don’t meet basic criteria, forcing quick judgments under time pressure.

Comparative Analysis

Analyzed 4 competitors (LinkedIn, ZipRecruiter, Eightfold.ai, and Tezi) to benchmark Indeed's position and identify opportunities.

Research highlighted that while Indeed's strength is its massive scale, competitors are finding success by focusing on niche automation and deep learning tools.

Synthesized Insights

HUMAN TRUTH 1

Employers need to know the why behind AI decisions

We found that employers are unwilling to delegate hiring tasks to a "black box" AI. Across our research, the need for transparency and control was the most critical factor for building trust. To feel confident, users must be able to see why a candidate was recommended and feel that the AI's logic aligns with their own requirements.

HUMAN TRUTH 2

Employers value a candidate's potential over a perfect match

Recruiters and hiring managers feel their expertise lies in identifying candidates with potential and transferable skills, not just those who are a 100% keyword match. They worry that AI over-relies on keyword matching and would filter out high-quality, non-traditional candidates.

HUMAN TRUTH 3

Application volume forces snap judgments

Hiring teams are inundated with hundreds of applications for each role, many from unqualified candidates. This sheer volume forces them to spend mere seconds on each resume, increasing the risk of missing top talent.

Defining Opportunities

Root Cause Analysis

We performed a root cause analysis for each opportunity, breaking the "lack of quality and speed" problem down across the hiring funnel: job creation, matching, evaluation, interviewing, and overall workflow review. We then categorized each cause as either a human issue (user behavior) or a system issue (platform limitation), and decided to dig into the human issues first to find opportunities for our initial ideation.

Solving for Human Factors

The analysis led to a powerful insight: while system limitations exist, many critical breakdowns were rooted in human factors. Our strategy was to focus on solving the human-centered problems first. This led to a targeted ideation session to generate solutions across the funnel.

Brainstorming workshop

Starting where it breaks

We chose to focus on the top of the funnel—where roles are defined and initial matches are surfaced—because that’s where most of the breakdowns start. Recruiters told us they often don’t trust AI results, not because there aren’t good candidates, but because the system never really understood what they were looking for to begin with.

If we could improve how hiring intent is captured and how early matches are explained and adjusted, we believed we could unblock the rest of the process. That meant rethinking not just who gets surfaced, but how and why.

Design Mission

How might we empower teams to make faster, more‑equitable hiring decisions while giving them clarity, control, and confidence in AI matching?

FEATURE 1

Role Calibration

Concept

Recruiters often struggle to articulate what they’re truly looking for in a hire. Job descriptions are vague, inconsistent, or overloaded with boilerplate — making it hard for AI tools to interpret their intent. We designed Role Calibration to guide employers through defining core criteria: baseline qualifications, soft preferences, and red flags. This creates a structured, consistent input for AI matching, instead of relying on loosely written job posts.
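To make this concrete, here is a minimal sketch of what that structured input could look like; the schema and field names are illustrative assumptions, not Indeed's actual data model.

# Hypothetical structure for calibrated hiring criteria (illustrative only).
from dataclasses import dataclass

@dataclass
class RoleCriteria:
    baseline: list[str]       # must-have qualifications
    preferences: list[str]    # soft signals that boost a match
    red_flags: list[str]      # disqualifiers that screen a candidate out

criteria = RoleCriteria(
    baseline=["3+ years of backend experience"],
    preferences=["has mentored junior engineers"],
    red_flags=["missing required certification"],
)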

Why it matters

By capturing hiring intent upfront, Role Calibration reduces the gap between employer expectations and AI matching output. Employers can validate how the system interprets their inputs in real time, adjust the match logic, and build alignment across hiring teams before the job is posted. This improves candidate quality from the start and builds trust in the AI screening process.

Extracted Job Details

Structure Input Before AI Matching  

Employers upload job materials — JDs, recruiter notes, meeting docs — and get a structured preview of extracted criteria. Each point is traceable back to the source, enabling quick edits before the AI matching begins.

Criteria Review

Calibration Summary

Preview and Adjust Hiring Priorities

Employers fine-tune how much each factor matters, like experience, skills, or team fit, to reflect their ideal candidate profile. A real-time preview helps them see the impact on candidate pool size and alignment before moving forward.
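As a rough sketch of the mechanic behind this preview: adjusting a factor's weight re-scores every candidate, and the pool size shown is simply the count above a match threshold. The weights, signals, and threshold below are invented for illustration.

# Hypothetical weighted scoring behind the live pool-size preview.
weights = {"experience": 0.5, "skills": 0.3, "team_fit": 0.2}

def match_score(signals: dict[str, float]) -> float:
    # signals maps each factor to a normalized 0-1 score for one candidate
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

def preview_pool_size(candidates: list[dict[str, float]], threshold: float = 0.7) -> int:
    # the number employers see update in real time as they adjust weights
    return sum(1 for c in candidates if match_score(c) >= threshold)

pool = [{"experience": 0.9, "skills": 0.8, "team_fit": 0.6},
        {"experience": 0.4, "skills": 0.5, "team_fit": 0.9}]
print(preview_pool_size(pool))  # -> 1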

User Validation

We tested the flow with recruiters and hiring managers across three rounds. An early version used a radar chart to preview candidate profiles — some found it intuitive, but most found it unclear and hard to generalize across roles. This was one of several insights that led us to redesign the flow into a simpler, step-by-step structure with editable summaries and live previews. In the end, users felt the revised experience was clearer, more practical, and better aligned with real hiring decisions.

FEATURE 2

Smart Screen

What it is

Smart Screen helps recruiters understand and evaluate why candidates are surfaced by the system. It builds on Role Calibration by showing which signals are used and how filters are applied. Users can track how candidates are screened out at each step, compare matched profiles, and add missing context like verified credentials to improve accuracy.

Why it matters

Recruiters often hesitate to trust AI screening due to a lack of visibility and control. Smart Screen addresses this by making the matching logic transparent and editable. It helps users move beyond black-box ranking by combining system-driven insights with human judgment, leading to faster, more confident decision-making.

Find Top Matches with AI

Define Matching Logic & Output

This step sets how the system screens and ranks candidates. It combines Role Calibration data with prompt-based preferences and bias guardrails to guide AI behavior. Recruiters can also define the output format and label candidates by match strength for review and action.
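A hedged sketch of how these three inputs might be bundled into a single matching configuration; every key name here is an assumption for illustration, not Indeed's API.

# Hypothetical matching configuration combining the three input types.
matching_config = {
    "calibration": {  # structured criteria carried over from Role Calibration
        "baseline": ["3+ years of backend experience"],
        "red_flags": ["missing required certification"],
    },
    "prompt_preferences": "prioritize candidates with fintech domain exposure",
    "bias_guardrails": ["ignore name, age, and photo", "flag adverse-impact skew"],
    "output_labels": ["strong match", "possible match", "weak match"],
}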

AI Thinking

After inputs are defined, Smart Screen shows how the system plans to use them: what signals it’s prioritizing, which filters it will apply, and in what order. This view helps recruiters understand the AI’s decision logic before matching begins.

Task Progress

Once the AI's thinking is finished, recruiters can track how many candidates are filtered out at each stage. If certain filters remove too many candidates, they can go back and adjust the matching logic to keep the process aligned.
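One way to picture this funnel view: each filter stage records how many candidates survive it, so a stage that cuts too deeply is easy to spot and revisit. The stages and data below are invented for illustration.

# Hypothetical filter funnel with per-stage survivor counts.
def run_funnel(candidates, stages):
    # stages: ordered list of (stage_name, predicate) pairs
    pool, progress = candidates, []
    for name, keep in stages:
        pool = [c for c in pool if keep(c)]
        progress.append((name, len(pool)))  # count shown to the recruiter
    return pool, progress

stages = [
    ("meets baseline qualifications", lambda c: c["yrs_experience"] >= 3),
    ("matches preferred skills", lambda c: "python" in c["skills"]),
]
applicants = [
    {"yrs_experience": 5, "skills": {"python", "sql"}},
    {"yrs_experience": 1, "skills": {"python"}},
]
shortlist, progress = run_funnel(applicants, stages)
print(progress)  # [('meets baseline qualifications', 1), ('matches preferred skills', 1)]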

Understand Why They’re a Match

Each candidate includes an AI-generated summary explaining why they were matched, based on the signals set by the employer. Recruiters can click into each point to trace it back to the original resume, making it easier to validate insights quickly.

Compare Matches

Compare multiple strategies

Recruiters can run multiple matching strategies side by side and compare the resulting candidate pools, making tradeoffs visible before committing to a shortlist.

Verify Candidates

Connect external sources

Recruiters can verify key resume signals, like job titles, companies, and skills, by pulling in public web data. This step helps confirm AI assumptions, flag inconsistencies, and strengthen confidence before moving candidates forward.
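A simplified sketch of the cross-check: compare a claim parsed from the resume against an external record and flag any mismatch. The fields compared and the exact-match rule are assumptions; a real system would need fuzzier matching.

# Hypothetical cross-check of resume claims against an external record.
def verify_claim(resume_claim: dict, external_record: dict) -> str:
    for field_name in ("title", "company"):
        if resume_claim.get(field_name, "").lower() != external_record.get(field_name, "").lower():
            return f"inconsistent: {field_name}"
    return "verified"

print(verify_claim({"title": "Senior Engineer", "company": "Acme"},
                   {"title": "Software Engineer", "company": "Acme"}))
# -> inconsistent: title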

User Validation

Through testing, we saw that giving users a clear view of how the system thinks made a big difference. Being able to see what filters were applied and change them helped recruiters feel more in control, even if they didn’t fully trust the AI at first. Small things like showing how many candidates were cut at each step helped make the process feel understandable and less like a black box.

FEATURE 3

Resume Citation

What it is

Resume Citation links AI-generated insights to exact parts of the candidate’s resume. Recruiters can see why someone was matched and review the supporting text directly. If a candidate is rejected, the user can select a reason, which feeds back to improve the AI.
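A minimal sketch of the underlying idea: each AI insight carries a pointer to the exact resume span that supports it, and each rejection carries a structured reason for the feedback loop. All names and offsets are illustrative.

# Hypothetical citation linking an AI claim to its supporting resume text.
resume_text = "... Experience: Led a team of 5 engineers shipping payments APIs ..."

insight = {
    "claim": "Led a team of 5 engineers",
    "source_span": {"start": 16, "end": 41},  # character offsets into resume_text
}

def cite(text: str, span: dict) -> str:
    # returns the exact excerpt the recruiter reviews to verify the claim
    return text[span["start"]:span["end"]]

print(cite(resume_text, insight["source_span"]))  # -> Led a team of 5 engineers

# Hypothetical structured rejection reason fed back to improve matching.
rejection_feedback = {"candidate_id": "c-102", "reason": "missing required certification"}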

Why it matters

It makes the AI’s logic easy to understand and verify. Recruiters know where each insight comes from and can give feedback when the match feels off. This helps improve trust and match quality over time.

Impacts & Learnings

The Impacts

We presented our findings and prototype to the Indeed UX team, alongside other industry experts from companies like Google and Amazon, and received strong recognition for the clarity, depth, and strategic relevance of our work.

Our "Role Calibration" concept was highlighted by Indeed's internal team as a thoughtful and scalable design pattern. Team members cited it as a valuable reference point for future explorations into trust, transparency, and employer-side AI tooling.

Strategically, the project aligned well with Indeed’s broader vision and prompted reconsideration of AI’s role in candidate matching beyond chat-based interfaces.

Self-reflection

This project wasn't just about solving a design problem; it was a learning journey that challenged how I think, collaborate, and make decisions. Working on an end-to-end, 0-to-1 feature taught me how to navigate ambiguity, balance tradeoffs, and design for AI, where system logic and human trust intersect.

Along the way, I learned a lot from my teammates, mentors, and users through feedback, pushback, and unexpected turns. Here are a few key takeaways that have stuck with me from this experience.

Designing for Clarity

Building trust in AI isn't just about accuracy; it's about how explainable and transparent the system feels. Making logic visible, editable, and traceable became just as vital as what the system actually did.

Defensible Tradeoffs

This project showed me that every design decision is a tradeoff. When constraints tightened, rooting choices in user feedback and root-cause analysis helped the team align faster and avoid circular debates.

“0 to 1” Design isn’t Linear

Creating something from scratch meant constantly going back, rethinking assumptions, and revisiting earlier decisions. It was less about polishing one idea, and more about navigating uncertainty with structure and intent.

Rethinking for AI

I had to move beyond static flows and consider dynamic systems, like how recruiter inputs shape outcomes, and how logic and ambiguity are interpreted. Designing for AI meant designing for both humans and machines.

Shout-outs

Huge thanks to the Indeed UX team, especially Meghna and Mark, for their candid feedback, product insights, and for pushing us to explore bold ideas with real-world impact.

Endless appreciation for Proud, Jessamine, and Hongdi for such a thoughtful, sharp, and collaborative team! Your curiosity and care shaped every part of this project.