Direct Answer
Content validity is the extent to which an assessment measures what it is supposed to measure — specifically, whether the items in a test, questionnaire, or evaluation actually represent the knowledge, skills, and behaviors required for the job. A selection tool has content validity when the things it asks about are demonstrably relevant to the work the person will actually do.
Why It Matters
Consider a reference check that asks, "Is this person a good employee?" That question might seem relevant, but it does not map to any specific job requirement. It does not tell you whether the person can manage deadlines, communicate clearly, or collaborate under pressure. It lacks content validity because its content is not systematically connected to what the job actually demands.
Now consider a reference check that asks a former manager to rate the candidate on specific, job-relevant behaviors — like "How effectively did this person prioritize competing tasks?" or "How well did this person adapt their communication style to different audiences?" These items have content validity because they were derived from an analysis of what the job requires.
The Science Behind It
Content validity is established through job analysis — a systematic process where subject matter experts (SMEs) identify the knowledge, skills, abilities, and other characteristics (KSAOs) required for effective job performance. Assessment items are then designed to represent those KSAOs, and SMEs evaluate the match between item content and job content (Schmidt, 2012; Weekley et al., 2019).
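The source describes SMEs judging the match between item content and job content but does not name a specific statistic. One widely used way to quantify such judgments is Lawshe's content validity ratio (CVR); the sketch below is an illustration of that general approach, not a method attributed to the cited authors, and assumes each SME simply votes on whether an item is "essential" to the job:

```python
def content_validity_ratio(essential_votes: int, total_smes: int) -> float:
    """Lawshe's CVR: (n_e - N/2) / (N/2), where n_e is the number of
    SMEs rating the item 'essential' and N is the total number of SMEs.
    Ranges from -1 (no SME rates it essential) to +1 (all do)."""
    half = total_smes / 2
    return (essential_votes - half) / half

# Hypothetical panel: 9 of 10 SMEs rate "prioritizes competing tasks"
# as essential to the job
print(content_validity_ratio(9, 10))   # 0.8
# Only 3 of 10 rate a generic item like "is a good employee" essential
print(content_validity_ratio(3, 10))   # -0.4
```

Items with a CVR near +1 are retained as content-valid; items near zero or below are candidates for removal, mirroring the SME evaluation step described above.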
As Hickox and Roehling (2013) described it: "Content validation involves making informed judgments about the job relatedness of a predictor. Individuals with relevant expertise should conduct a job analysis to identify important tasks and the traits necessary to perform those tasks, and make judgments regarding the extent to which the predictor adequately assesses the identified trait."
Modern measurement theory treats content validity not as a separate "type" of validity but as one form of evidence within a unified validity framework. Weekley et al. (2019) demonstrated this directly by showing a strong relationship (r ≈ .50) between SME importance ratings of KSAOs (content validity evidence) and the actual criterion-related validity of tests measuring those same KSAOs. In other words, when experts judge that a skill is important for the job, tests of that skill tend to predict performance — supporting the connection between content and criterion evidence.
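The relationship reported above is an ordinary Pearson correlation between two sets of numbers: SME importance ratings and the observed validities of tests for the same KSAOs. A minimal sketch with entirely hypothetical data (the ratings and validity coefficients below are invented for illustration, not taken from Weekley et al.):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical: mean SME importance ratings (1-5 scale) for five KSAOs,
# and criterion-related validities of tests measuring those same KSAOs
importance = [4.6, 3.1, 4.2, 2.5, 3.8]
validity   = [0.35, 0.22, 0.28, 0.18, 0.25]

# A positive r here would illustrate the content-criterion link:
# KSAOs experts rate as more important tend to have more predictive tests
print(round(pearson_r(importance, validity), 2))
```

The design point is that content evidence (expert judgment) and criterion evidence (observed prediction) are computed from different sources, so a substantial correlation between them is meaningful corroboration rather than circularity.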
Importantly, the quality of content validity evidence depends on who is providing the expert judgment. Weekley et al. (2019) found that the most accurate job analysis ratings came from supervisors and from individuals who reported knowing the job extremely well — not simply from those with longer tenure or broader industry experience.
Common Misconceptions
A common error is treating "face validity" — whether an assessment looks relevant — as equivalent to content validity. A test can appear job-related without actually being so, and conversely, a well-designed assessment may not obviously resemble the job but may capture the underlying skills that drive performance. Content validity requires systematic evidence of the link between assessment content and job requirements, not just surface-level resemblance.
How This Connects to Better Hiring
Content validity is what separates a meaningful reference check from a meaningless one. When reference questions are derived from a job analysis of workplace behaviors — and mapped to defined performance dimensions — the resulting data has content validity. When questions are generic ("What are their strengths and weaknesses?"), the content has no demonstrable connection to what the job requires. Every assessment discussed in this glossary — from behavioral observation scales to standardized assessments — depends on content validity as the foundation for job relevance.