Indeed • 2025 Capstone Project

AI Transparent Hiring

Indeed AI Hiring Tools for Employers

Role Calibration

Extracted job details

Job criteria review

Calibration summary

Smart Screen

P1

P2

Compare tradeoff

Verify candidates

Company

Indeed (Sponsored)

My Role

Product Manager/Designer

Timeline

Jan–Jun 2025

Responsibilities

Product Strategy, Visual Design, Prototyping, Design Iterations, User Testing

Project Brief

Hiring teams on Indeed often mistrust AI-powered resume screening tools — not due to a lack of talent, but because the system misunderstands what employers are truly looking for. Recruiters can’t see how candidates are selected and have no effective way to guide the AI’s interpretation of their hiring needs.

Our team thus redesigned the top of the hiring funnel on Indeed to bridge this gap. We introduced two features: Role Calibration, which helps employers define and validate hiring intent early, and Smart Screen, which increases trust in AI matching by making decision criteria transparent and traceable. 

Together, these tools give hiring managers and recruiters more control and clarity, thus enabling faster, more confident decisions without relying on black-box automation.


Context

Black Hole of Hiring

Employers Struggle to Find High-quality Candidates

Hiring on Indeed today often feels like a black hole. Employers invest significant time and effort, yet still struggle to find high-quality candidates, making “candidate quality” the top driver of negative NPS. This inefficiency not only slows down hiring but also erodes trust in the platform. For Indeed, the stakes are high: if employers can’t rely on the marketplace to deliver strong candidates, they’re less likely to adopt or invest in Indeed as their primary hiring tool.

#1

Low Quality as the Top Driver of Negative NPS

41%

of Employer Feedback Flagged Candidate Quality

4,000+

Employer Complaints About Low Quality

Job Seekers: 350M+ monthly unique visitors, 525M+ global job seeker profiles

Employers: 3.5M+ employers, 30M+ jobs

Connected through the Indeed Matching Engine.

Source: Indeed internal data, 2023

Opportunity & Gap

Opportunity with GenAI?

With the rise of AI in recent years, Indeed saw the opportunity to use GenAI to bridge the gap of low-quality matches, potentially boosting employer efficiency and unlocking new business value. Therefore, the company decided to reimagine the hiring experience with AI at its core.

Lack of Trust in AI

Indeed had already introduced small AI features like resume summaries and candidate recommendations, but adoption was low: fewer than 20% of active users engaged, and most feedback was negative. Research revealed a clear reason: employers don’t trust these tools. They often prefer manually reviewing resumes over AI summaries, viewing AI as “black boxes.” Without trust, employers simply won’t rely on AI.

Design Problem

Employers struggle with low-quality candidate matches and have no trust in AI hiring tools, making them reluctant to adopt AI in their workflow.

Project Goals

01

Improve Candidate Quality

Leverage GenAI to surface stronger, more diverse talent without introducing new bias.

02

Rebuild Employer Trust

Make AI-assisted tools reliable and transparent so teams feel confident adopting them.

Research

Research Question

To solve the problem, we first wanted to understand the underlying reasons that drive the low quality and low trust, so our initial research aimed to answer two questions:

  • How do employers define a “high-quality” candidate?

  • Why do they fundamentally distrust the AI designed to help them?

Literature Review

Analyzed 20+ academic studies to identify established findings on the efficiency, ethical, and fairness concerns in AI hiring.

Research revealed a mismatch between AI ranking logic and employer evaluation; fairness efforts reduced bias only partially and often compromised accuracy.

User Interviews

Conducted interviews with 8 recruiters & hiring managers across org sizes, focusing on JD creation, candidate matching, and LLM usage.

Research showed that employers won't trust an AI they can't understand or control, demanding transparency and the ability to "calibrate" results above all else.

Found Object

Analyzed 1,000+ Reddit posts from r/IndeedJobs & r/recruiting to surface real-world pain points through thematic coding.

Research revealed that employers struggle to find qualified candidates despite high applicant volume: many applicants don’t meet basic criteria, forcing quick judgments under time pressure.

Comparative Analysis

Analyzed 4 competitors (LinkedIn, ZipRecruiter, Eightfold.ai, and Tezi) to benchmark Indeed's position and identify opportunities.

Research highlighted that while Indeed's strength is its massive scale, competitors are finding success by focusing on niche automation and deep learning tools.

Synthesized Insights

01

Lack of Transparency

The need for transparency is a critical factor for building trust. To feel confident, users must be able to see why a candidate was recommended and feel the AI's logic aligns with their own requirements.

02

Lack of Control

The need for control is equally critical to employers' trust in AI. Currently, they have no way to adjust or correct results, leaving them forced to accept mismatches or disregard the system entirely.

03

Lack of Understanding

Employers often start with an ideal candidate profile, but their requirements evolve over time. AI struggles to capture nuances like transferable skills or flexible expectations, so employers believe it can't fully understand their intent.

Identify Opportunities

Break Down the Flow

Mistrust doesn’t live in the abstract; it shows up in specific ways throughout the hiring flow. To understand how these three insights affect candidate quality and where other factors come into play, we stepped back and looked at the entire employer journey on Indeed.

Indeed divides the journey into five major stages: Define, Find, Screen, Offer, and Onboard. Since our challenge centered on candidate quality at the top of the funnel, we zoomed into the first two stages.

Funnel Analysis

We broke the first two stages down into five critical sections: Job Creation, Matching, Evaluation, Verification, and the Overall Workflow, and analyzed each in depth. To guide our opportunity finding, we also labeled whether each cause stemmed from human issues or system issues.

OPPORTUNITY 1

Job Creation

Bridge the misalignment between employers’ needs and system interpretations.

Problems

Misalignment between how employers expressed their requirements in the JD and how the system interpreted them.

User Needs

Employers need to articulate their requirements clearly so they can trust the system to surface relevant candidates.

DESIGN QUESTION 01

How might we help employers articulate their needs better?


OPPORTUNITY 2

Matching and Evaluation

Align AI matching with employer intent, build transparency, and give control.

Problems

Employers lack visibility and control in matching, fear AI may overlook transferable skills, and find AI summaries too vague to trust.

User Needs

Employers need to narrow candidate pools with clear reasoning, see transferable skills, and get evaluations backed by concrete evidence.

DESIGN QUESTION 02

How might we give employers transparency and control over matches?


Design Mission

How might we empower teams to make faster, more equitable hiring decisions while giving them clarity, control, and confidence in AI matching?

FEATURE 1

Role Calibration

Concept

Recruiters often struggle to articulate what they’re truly looking for in a hire. Job descriptions are vague, inconsistent, or overloaded with boilerplate — making it hard for AI tools to interpret their intent. We designed Role Calibration to guide employers through defining core criteria: baseline qualifications, soft preferences, and red flags. This creates a structured, consistent input for AI matching, instead of relying on loosely written job posts.

Why It Matters

By capturing hiring intent upfront, Role Calibration reduces the gap between employer expectations and AI matching output. Employers can validate how the system interprets their inputs in real time, adjust the match logic, and build alignment across hiring teams before the job is posted. This improves candidate quality from the start and builds trust in the AI screening process.

Extracted Job Details

Structure Input Before AI Matching

Employers upload job materials — JDs, recruiter notes, meeting docs — and get a structured preview of extracted criteria. Each point is traceable back to the source, enabling quick edits before the AI matching begins.
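To make each extracted point traceable, the criteria could be modeled with an explicit source reference. The sketch below is a minimal illustration of that idea; all field and type names are hypothetical, not Indeed's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SourceRef:
    """Where an extracted criterion came from, for traceability."""
    document: str   # e.g. "jd.pdf" or "recruiter_notes.docx"
    excerpt: str    # the original text the criterion was extracted from

@dataclass
class Criterion:
    label: str               # e.g. "5+ years of Python"
    category: str            # "baseline" | "preference" | "red_flag"
    source: SourceRef
    confirmed: bool = False  # set True once the employer has reviewed it

def review_queue(criteria):
    """Surface unreviewed criteria first so employers edit them before matching runs."""
    return sorted(criteria, key=lambda c: c.confirmed)
```

Because every criterion carries its `SourceRef`, an edit screen can show the original excerpt next to the extracted label, which is what makes "traceable back to the source" cheap to support.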

Criteria Review

Calibration Summary

Preview and Adjust Hiring Priorities

Employers fine-tune how much each factor matters, like experience, skills, or team fit, to reflect their ideal candidate profile. A real-time preview helps them see the impact on candidate pool size and alignment before moving forward.
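Conceptually, the live preview can be driven by a weighted score over the calibrated factors: as a weight changes, the pool count updates. A simplified sketch, with made-up factor names and an assumed 0–1 per-factor score:

```python
def match_score(candidate, weights):
    """Normalized weighted sum of per-factor scores (each factor scored 0..1)."""
    total = sum(weights.values())
    return sum(w * candidate.get(factor, 0.0) for factor, w in weights.items()) / total

def pool_preview(candidates, weights, threshold=0.7):
    """How many candidates clear the bar under the current weighting."""
    return sum(1 for c in candidates if match_score(c, weights) >= threshold)
```

Recomputing `pool_preview` on every slider change is what lets employers see the impact of a weighting decision before committing to it.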

User Validation

We tested the flow with recruiters and hiring managers across three rounds. An early version used a radar chart to preview candidate profiles — some found it intuitive, but most found it unclear and hard to generalize across roles. This was one of several insights that led us to redesign the flow into a simpler, step-by-step structure with editable summaries and live previews. In the end, users felt the revised experience was clearer, more practical, and better aligned with real hiring decisions.

FEATURE 2

Smart Screen

What It Is

Smart Screen helps recruiters understand and evaluate why candidates are surfaced by the system. It builds on Role Calibration by showing which signals are used and how filters are applied. Users can track how candidates are screened out at each step, compare matched profiles, and add missing context like verified credentials to improve accuracy.

Why It Matters

Recruiters often hesitate to trust AI screening due to a lack of visibility and control. Smart Screen addresses this by making the matching logic transparent and editable. It helps users move beyond black-box ranking by combining system-driven insights with human judgment, leading to faster, more confident decision-making.

Find Top Matches with AI

Define Matching Logic & Output

This step sets how the system screens and ranks candidates. It combines Role Calibration data with prompt-based preferences and bias guardrails to guide AI behavior. Recruiters can also define the output format and have candidates labeled by match strength for review and action.
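One way to picture these combined inputs is as a single matching configuration passed to the screening run. The structure and key names below are illustrative guesses, not Indeed's real schema:

```python
# Illustrative Smart Screen configuration: Role Calibration data,
# prompt-based preferences, and bias guardrails feed one matching run.
matching_config = {
    "calibration": {                      # from Role Calibration
        "baseline": ["3+ years of backend experience"],
        "preferences": ["startup background"],
        "red_flags": ["unexplained multi-year gaps"],
    },
    "prompt_preferences": "Weigh transferable skills from adjacent roles.",
    "guardrails": {
        # signals excluded from ranking to avoid introducing bias
        "exclude_signals": ["name", "age", "photo", "address"],
    },
    # recruiter-defined output: candidates labeled by match strength
    "output_buckets": ["strong match", "possible match", "weak match"],
}
```

Keeping all three inputs in one explicit object is also what makes the later "AI Thinking" view possible: the system can show exactly which parts of the configuration it is acting on.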

AI Thinking

After inputs are defined, Smart Screen shows how the system plans to use them: what signals it’s prioritizing, which filters it will apply, and in what order. This view helps recruiters understand the AI’s decision logic before matching begins.

Task Progress

Once AI thinking is finished, recruiters can track how many candidates are filtered out at each stage. If certain filters remove too many candidates, they can go back and adjust the matching logic to keep the process aligned.
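The per-stage counts can come from running the filters as an ordered pipeline that records how many candidates survive each step. A minimal sketch, with hypothetical filter names:

```python
def run_funnel(candidates, stages):
    """Apply named filters in order, recording in/out counts per stage
    so recruiters can see exactly where candidates are screened out."""
    pool, report = list(candidates), []
    for name, keep in stages:
        survivors = [c for c in pool if keep(c)]
        report.append({"stage": name,
                       "in": len(pool),
                       "out": len(survivors),
                       "removed": len(pool) - len(survivors)})
        pool = survivors
    return pool, report
```

If one stage's `removed` count looks too aggressive, the corresponding filter can be relaxed and the funnel re-run, which is the "go back and adjust" loop described above.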

Understand Why They’re a Match

Each candidate includes an AI-generated summary explaining why they were matched, based on the signals set by the employer. Recruiters can click into each point to trace it back to the original resume, making it easier to validate insights quickly.

Compare Matches

Compare multiple strategies

Recruiters can run multiple matching strategies side by side and compare the resulting candidate sets, making the tradeoffs between different criteria weightings visible before committing to one approach.

Verify Candidates

Connect external sources

Recruiters can verify key resume signals, like job titles, companies, and skills, by pulling in public web data. This step helps confirm AI assumptions, flag inconsistencies, and strengthen confidence before moving candidates forward.

User Validation

Through testing, we saw that giving users a clear view of how the system thinks made a big difference. Being able to see what filters were applied and change them helped recruiters feel more in control, even if they didn’t fully trust the AI at first. Small things like showing how many candidates were cut at each step helped make the process feel understandable and less like a black box.

FEATURE 3

Resume Citation

What It Is

Resume Citation links AI-generated insights to exact parts of the candidate’s resume. Recruiters can see why someone was matched and review the supporting text directly. If a candidate is rejected, the user can select a reason, which feeds back to improve the AI.
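A sketch of how such a citation might be represented: each insight stores character offsets into the raw resume text so the UI can highlight the exact supporting passage, and rejections are logged as structured feedback. All names and fields here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Citation:
    """Links one AI-generated insight to the resume text that supports it."""
    insight: str   # e.g. "Team leadership"
    start: int     # character offsets into the raw resume text
    end: int

def cited_text(resume_text, citation):
    """The exact excerpt a citation points at, for inline highlighting."""
    return resume_text[citation.start:citation.end]

def record_rejection(feedback_log, candidate_id, reason):
    """Structured rejection reasons feed back to improve future matching."""
    feedback_log.append({"candidate": candidate_id, "reason": reason})
    return feedback_log
```

Storing offsets rather than copied text keeps each insight verifiable against the one source of truth, the resume itself.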

Why It Matters

It makes the AI’s logic easy to understand and verify. Recruiters know where each insight comes from and can give feedback when the match feels off. This helps improve trust and match quality over time.

Product Video

3 min • A quick recap of the design, from problem to solution.

Impacts & Learnings

The Impact

We presented our findings and prototype to the Indeed UX team, alongside other industry experts from companies like Google and Amazon, and received strong recognition for the clarity, depth, and strategic relevance of our work.

Our “Role Calibration” concept was highlighted as a thoughtful, scalable design pattern by Indeed’s internal team. Team members cited it as a valuable reference point for future explorations into trust, transparency, and employer-side AI tooling.

Strategically, the project aligned well with Indeed’s broader vision and prompted reconsideration of AI’s role in candidate matching beyond chat-based interfaces.

Self-reflection

This project wasn’t just about solving a design problem; it was a learning journey that challenged how I think, collaborate, and make decisions. Working on an end-to-end, 0-to-1 feature taught me how to navigate ambiguity, balance tradeoffs, and design for AI, where system logic and human trust intersect.

Along the way, I learned a lot from my teammates, mentors, and users through feedback, pushback, and unexpected turns. Here are a few key takeaways that have stuck with me from this experience.

Designing for Clarity

Building trust in AI isn’t just about accuracy; it’s about how explainable and transparent the system feels. Making logic visible, editable, and traceable became just as vital as what the system actually did.

Defensible Tradeoffs

This project showed me that every design decision is a tradeoff. When constraints tightened, rooting choices in user feedback and root-cause analysis helped the team align faster and avoid circular debates.

“0 to 1” Design isn’t Linear

Creating something from scratch meant constantly going back, rethinking assumptions, and revisiting earlier decisions. It was less about polishing one idea, and more about navigating uncertainty with structure and intent.

Rethinking for AI

I had to move beyond static flows and consider dynamic systems, like how recruiter inputs shape outcomes, and how logic and ambiguity are interpreted. Designing for AI meant designing for both humans and machines.

Shout-outs

Huge thanks to the Indeed UX team, especially Meghna and Mark, for their candid feedback, product insights, and for pushing us to explore bold ideas with real-world impact.

Endless appreciation for Proud, Jessamine, and Hongdi for such a thoughtful, sharp, and collaborative team! Your curiosity and care shaped every part of this project.