Direct Answer

Multi-source feedback — sometimes called 360-degree feedback — collects evaluations of a person from multiple perspectives: supervisors, peers, direct reports, and sometimes clients or other stakeholders. Rather than relying on a single person's opinion, it combines several viewpoints to build a more complete and accurate picture of someone's workplace behavior.

Why It Matters

Any single rater sees only part of the picture. A manager may know how someone handles upward communication but have little visibility into how they collaborate with peers. A colleague may see daily work habits that a supervisor never observes. By combining perspectives, multi-source feedback captures aspects of performance that no single source can assess alone.

This is especially relevant for reference checking. A reference from one former manager provides one perspective. References from multiple people — in different roles and different working relationships with the candidate — provide a richer, more reliable picture.

The Science Behind It

The value of multi-source ratings is well established. Connelly and Ones (2010) conducted a comprehensive meta-analysis showing that observer-rated personality — ratings made by people who know the person — predicts job performance substantially better than self-reports do. When ratings from multiple observers are combined, predictive accuracy increases further, because the independent errors of individual raters tend to average out.
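The reliability gain from pooling raters can be sketched with the classical Spearman-Brown prophecy formula, rel_k = k·r / (1 + (k − 1)·r), where r is the reliability of a single rater and k is the number of raters averaged. The single-rater value of .40 below is purely illustrative, not a figure from the meta-analysis:

```python
def spearman_brown(single_rater_reliability: float, k: int) -> float:
    """Reliability of the average of k parallel raters (Spearman-Brown)."""
    r = single_rater_reliability
    return k * r / (1 + (k - 1) * r)

# Illustrative single-rater reliability of .40
for k in (1, 2, 3, 5):
    print(k, round(spearman_brown(0.40, k), 2))
# prints: 1 0.4 / 2 0.57 / 3 0.67 / 5 0.77
```

The largest gains come from the first few additional raters, which is consistent with the practical advice that even two or three independent perspectives are far better than one.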

Hoffman et al. (2010) examined the role of rater source effects — systematic differences in how supervisors, peers, and subordinates rate the same person. They found that these source effects explain a meaningful proportion of variance in ratings, which means that different sources genuinely see different things, not just the same things with different levels of accuracy. This is a critical finding: it means adding perspectives does not just reduce noise, it adds information.

Oh and Berry (2009) extended this logic to selection contexts, finding that adding peer and subordinate ratings to supervisor ratings increased operational validity by 50–74%. In other words, the additional perspectives did not merely confirm what the supervisor already said — they contributed unique predictive value.

Hedricks et al. (2013) demonstrated this principle in reference checking specifically. Their web-based multi-source reference system — collecting structured ratings from multiple reference providers — achieved internal consistency reliabilities of α = .96–.98 and a criterion-related validity of r = .35 against supervisory performance ratings. This level of validity matches or exceeds that of many traditional selection methods.

Common Misconceptions

A common concern is that collecting feedback from multiple sources just averages out meaningful differences into a bland middle. The research suggests the opposite: aggregation reduces random error while preserving real signal. Each source contributes unique information, and combining them produces a more accurate composite than any individual source — much like how multiple witnesses to an event collectively reconstruct a more accurate account than any one witness alone.

How This Connects to Better Hiring

Multi-source feedback is the empirical rationale for collecting references from more than one person. The science is clear: each additional reference provider adds unique information that improves the accuracy and fairness of the assessment. This is why confidence in reference-based assessment increases with the number of independent sources — moving from adequate to strong as more perspectives are gathered.