Gaming the Rankings: How Some Colleges Manipulate Data
When Claremont McKenna College admitted in 2012 that it had submitted inflated SAT scores to U.S. News & World Report for six consecutive years, it handed the higher education world a useful case study in something that had been whispered about for decades. Data manipulation in college rankings is not a fringe phenomenon — it is a documented, recurring practice with identifiable mechanics, clear incentives, and real consequences for students who make decisions based on numbers someone quietly adjusted.
Definition and scope
Rankings manipulation refers to deliberate institutional actions that alter the data submitted to ranking organizations — or that reshape measurable institutional characteristics — specifically to improve a school's position in published rankings rather than to improve the underlying educational experience. The scope is broader than outright fraud: it includes legal but strategically distorted reporting, enrollment policy changes made primarily for metric optimization, and the selective use of survey instruments to inflate peer assessment scores.
U.S. News & World Report, which publishes the most-cited U.S. college rankings, draws on roughly 17 statistical measures weighted across categories including graduation rates, faculty resources, financial resources, and student selectivity (U.S. News, Methodology Overview). Because each of those categories is a target, manipulation can take as many forms as there are inputs.
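To see why a weighted composite invites targeting of its highest-weight inputs, consider a minimal sketch. The category names and weights below are hypothetical stand-ins chosen for illustration, not U.S. News's actual methodology.

```python
# Illustrative only: category names, weights, and scores are hypothetical,
# not the actual U.S. News methodology or any real institution's data.
def composite_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of metric values (each assumed pre-scaled to 0-100)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * metrics[k] for k in weights)

weights = {"graduation_rate": 0.35, "faculty_resources": 0.25,
           "financial_resources": 0.20, "selectivity": 0.20}
school = {"graduation_rate": 88.0, "faculty_resources": 72.0,
          "financial_resources": 65.0, "selectivity": 80.0}

print(round(composite_score(school, weights), 1))
```

The arithmetic makes the incentive visible: nudging a 0.35-weight input by one point moves the composite nearly twice as far as nudging a 0.20-weight input, so manipulation effort concentrates on the heaviest categories.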
The practice is documented across institution types: large research universities, small liberal arts colleges, and law schools. The U.S. News law school rankings faced particular scrutiny in 2022 and 2023, when schools including Columbia Law School and Georgetown University Law Center withdrew from participation amid broader questions about data quality; earlier, the University of Illinois College of Law had acknowledged misreporting admissions data in 2011.
Core mechanics
Manipulation typically operates through one of four structural pathways.
1. Direct data misreporting. The simplest form: submitting a number that is false. Claremont McKenna's inflated SAT medians fit here, as did the 2011 acknowledgment by Iona College (now Iona University) of misreported graduation rates, student-faculty ratios, and acceptance rates. These are detectable when the submitted figures diverge from data filed with the federal Integrated Postsecondary Education Data System (IPEDS), which is administered by the National Center for Education Statistics (NCES, IPEDS).
2. Metric-targeted enrollment policy. A school can lower its reported acceptance rate, the classic selectivity signal, by aggressively soliciting applications from students it never intends to admit: a larger applicant pool against the same class size makes the institution look more selective. A related tactic, sometimes called yield protection, waitlists or rejects overqualified applicants who are unlikely to enroll, keeping the yield rate high. Conversely, some schools push borderline-admitted students toward community college transfer pathways so that their credentials never enter the reported first-year class profile.
3. Reclassification of students and expenditures. Faculty counts can be padded by reclassifying staff. Part-time instructors become full-time equivalents in ways that misrepresent the student-to-faculty ratio. Similarly, expenditures-per-student figures — a proxy for educational investment — can be inflated by routing non-instructional spending through academic cost centers.
4. Peer assessment score cultivation. Roughly 20% of the U.S. News score comes from reputational surveys sent to presidents, provosts, and deans at peer institutions. Some schools run organized campaigns to improve these scores, including targeted mailings and strategic framing of institutional achievements designed specifically for survey respondents — a practice that sits in a gray zone between legitimate outreach and coordinated score inflation.
Incentives and drivers
The incentive structure is not subtle. A single rank position change — moving from 11th to 10th — can shift application volume, yield rates, and eventually donor behavior. A 2014 study published in Research in Higher Education (Springer) found that a one-position improvement in U.S. News rankings was associated with approximately a 1% increase in applications, with larger effects at the top of the distribution.
State appropriations, endowment performance, and faculty recruitment all carry informal connections to perceived prestige. For administrators whose performance reviews reference rankings outcomes, the incentive to manage inputs is structural rather than personal. The problem, in other words, is not that higher education is uniquely populated by dishonest people — it is that the measurement architecture creates rational incentives for behavior that degrades the measurement itself.
IPEDS cross-referencing provides the most reliable check, but U.S. News relies on self-reported data as a primary input (U.S. News, Data Submission). Audit mechanisms exist but are not universal, and inconsistencies often surface only when a whistleblower inside the institution comes forward — as occurred at Columbia University in 2022, when law professor Michael Thaddeus published a detailed analysis documenting what he described as implausible figures in Columbia's self-reported data.
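The kind of plausibility check described above can be sketched as a simple cross-reference: compare each self-reported figure against the corresponding federal filing and flag large divergences. The field names, figures, and tolerance below are invented for illustration; no real auditing standard is implied.

```python
# Hypothetical plausibility screen: flag metrics where a figure submitted to a
# ranking body diverges from the corresponding federal (IPEDS-style) filing by
# more than a relative tolerance. All names and numbers are illustrative.
def flag_discrepancies(submitted: dict[str, float],
                       federal: dict[str, float],
                       tolerance: float = 0.05) -> list[str]:
    """Return metric names whose relative divergence exceeds `tolerance`."""
    flags = []
    for metric, fed_value in federal.items():
        sub_value = submitted.get(metric)
        if sub_value is None or fed_value == 0:
            continue  # no basis for comparison
        if abs(sub_value - fed_value) / abs(fed_value) > tolerance:
            flags.append(metric)
    return flags

submitted = {"six_year_grad_rate": 0.91, "student_faculty_ratio": 9.0}
federal   = {"six_year_grad_rate": 0.84, "student_faculty_ratio": 11.5}

print(flag_discrepancies(submitted, federal))
```

The limitation the text identifies also shows up here: the screen only catches figures that conflict with another filing. A reclassification scheme that changes both numbers in the same direction sails through.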
The broader dynamics of college rankings — why they exert this level of institutional pressure — are explored in the college rankings overview.
Classification boundaries
Not every metric-improvement strategy constitutes manipulation. The line runs between actions that change the underlying reality being measured and actions that merely change how that reality is reported.
Legitimate optimization includes genuinely improving graduation support services to raise completion rates, hiring more full-time faculty to reduce class sizes, or increasing merit aid to improve yield among high-credential students. These change the underlying reality and the number simultaneously.
Borderline gaming includes practices like accepting more transfer students (who often aren't counted in first-year retention metrics) or structuring test-optional policies specifically to suppress the reporting of lower scores while maintaining the ability to use those scores in other admissions decisions.
Clear manipulation includes submitting figures known to be false, reclassifying expenditures to inflate per-student spending without actual resource changes, or selectively omitting students from cohort calculations to improve graduation rate metrics.
The American Council on Education (ACE) has noted that the absence of mandatory third-party auditing of submitted ranking data creates a structural gap — institutions operate largely on the honor system (ACE, Higher Education Policy).
Tradeoffs and tensions
Ranking organizations face a genuine dilemma: transparency about methodology creates targets. Publishing the exact weight of each variable is necessary for credibility — but it also functions as a road map for manipulation. U.S. News publishes its methodology in detail, which is both intellectually honest and operationally useful for anyone who wants to game the system.
The institutions themselves exist in a competitive equilibrium. If three schools in a peer group are gaming the student-faculty ratio and a fourth does not, the fourth school's ranking drops relative to its genuine quality. This creates pressure that is structurally similar to a prisoner's dilemma: the rational individual move degrades the collective outcome.
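That competitive dynamic can be sketched as a toy two-player game. The payoffs below are invented solely to exhibit the prisoner's-dilemma structure; they are not drawn from any real data.

```python
# Illustrative payoff matrix (ranking benefit in arbitrary units) for two peer
# schools that each choose to report honestly or to game a metric. The numbers
# are made up to show the prisoner's-dilemma structure.
PAYOFFS = {
    ("honest", "honest"): (3, 3),   # stable rankings, credible data
    ("game",   "honest"): (5, 1),   # the gamer climbs past the honest school
    ("honest", "game"):   (1, 5),
    ("game",   "game"):   (2, 2),   # both pay gaming costs; relative order unchanged
}

def best_response(opponent_move: str) -> str:
    """Pick the move maximizing a school's own payoff against a fixed opponent move."""
    return max(("honest", "game"), key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Gaming dominates regardless of what the peer does...
print(best_response("honest"), best_response("game"))  # game game
# ...even though mutual honesty (3, 3) beats mutual gaming (2, 2).
```

The design point is the ordering of the payoffs, not their magnitudes: as long as defection beats cooperation against either opponent move, individually rational schools converge on the collectively worse outcome.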
There is also a legitimate question about what rankings actually measure. The U.S. News formula has been criticized by organizations including the Education Trust for weighting inputs — like SAT scores and per-student spending — more heavily than outcomes like economic mobility or post-graduation earnings (The Education Trust). If the metric is poorly designed, gaming it may produce a perverse result: schools that game their way up the rankings may actually become less focused on student outcomes as they redirect resources toward metric-optimized activities.
Common misconceptions
Misconception: Only low-ranked schools manipulate data. The documented cases include Columbia (historically ranked in the top 5 for national universities), Claremont McKenna (a highly selective liberal arts college), and Georgetown Law (a top-14 law school). Prestige does not eliminate the incentive — it may increase it, because the reputational stakes at the top of the distribution are higher.
Misconception: Test-optional policies are inherently manipulative. Test-optional admissions represent a genuine policy position supported by research — including work by the National Center for Fair and Open Testing (FairTest) — on differential test-score performance across demographic groups. The manipulation version is a specific subset: using test-optional status to suppress low scores from ranking calculations while quietly factoring those scores into admissions decisions.
Misconception: Rankings organizations can easily detect fraud. U.S. News has stated that it conducts statistical audits and cross-references data against IPEDS. However, sophisticated reclassification strategies — particularly those involving expenditure routing or adjunct faculty counting — do not necessarily produce figures that conflict with other federal filings. Detection depends heavily on internal whistleblowers or investigative journalism.
Misconception: Disclosed irregularities result in major penalties. Schools that have acknowledged data errors have typically received a one-year rankings exclusion or a downward correction, not permanent disqualification. This is a structural issue: a mild, temporary penalty keeps the expected cost of manipulation low relative to the durable gains of a higher rank, which shapes the calculation for any institution weighing whether to manipulate.
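The deterrence point can be made concrete with a back-of-envelope expected-value sketch. Every number below is hypothetical; the point is the structure of the calculation, not the figures.

```python
# Back-of-envelope expected value of manipulating a metric, with entirely
# hypothetical numbers: the annual gain from a higher rank, the probability
# of detection, and the cost of the typical penalty (a temporary exclusion
# plus a reputational hit).
def manipulation_ev(annual_gain: float, p_detect: float, penalty_cost: float) -> float:
    """Expected net benefit: the gain accrues either way; the penalty applies only on detection."""
    return annual_gain - p_detect * penalty_cost

# With a modest detection probability and a mild penalty, the expected value
# stays positive (hypothetical figures):
print(manipulation_ev(annual_gain=1_000_000, p_detect=0.10, penalty_cost=3_000_000))
```

Only a much higher detection probability or a far harsher penalty flips the sign, which is exactly the structural gap the misconception obscures.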
How manipulation moves through a system
The following sequence describes how data manipulation typically progresses through an institution — not as a prescription, but as a documented pattern useful for identifying warning signs.
- Metric identification — Rankings methodology is studied to identify high-weight variables with the most reporting flexibility (e.g., student-faculty ratio, expenditures per student).
- Reporting gap analysis — Current submitted figures are compared against the theoretical maximum achievable through reclassification or selective counting.
- Reclassification decisions — Administrative, legal, or financial staff adjust how existing resources are categorized in reporting systems (e.g., reclassifying staff as instructional faculty, routing library or research spending through academic cost centers).
- Data submission — Adjusted figures are submitted to the ranking organization, often by an institutional research office that may or may not have visibility into decisions made upstream.
- Cross-check avoidance — Submitted figures are reviewed for plausibility against IPEDS filings to ensure no obvious discrepancy triggers a flag.
- Score improvement and reinforcement — A rankings improvement validates the approach internally, and the practice becomes embedded in annual reporting cycles.
- Detection event — A discrepancy surfaces through whistleblower disclosure, investigative analysis (as in Thaddeus's Columbia review), or cross-referencing by a journalist or competitor institution.
- Disclosure and correction — The institution acknowledges "reporting errors" and submits corrected figures, typically framing the issue as an internal process failure rather than intentional manipulation.
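One warning sign implied by this pattern is a reported figure that jumps faster than the institution's own history makes plausible. The sketch below is a hypothetical screen for such jumps; the threshold and figures are made up for illustration and do not reflect any actual auditing practice.

```python
# Hypothetical warning-sign screen: flag any year whose reported value jumps
# from the prior year by more than a relative threshold. The threshold and
# the sample series are illustrative, not an auditing standard.
def screen_series(history: list[float], max_jump: float = 0.10) -> list[int]:
    """Return 0-based positions in `history` where the relative change from
    the previous value exceeds `max_jump`."""
    flags = []
    for i in range(1, len(history)):
        prev = history[i - 1]
        if prev and abs(history[i] - prev) / abs(prev) > max_jump:
            flags.append(i)
    return flags

# A made-up expenditure-per-student series with one implausible jump:
spending = [41_000, 42_500, 43_100, 55_000, 56_200]
print(screen_series(spending))  # flags the 55,000 entry
```

A jump flagged this way is not proof of manipulation; it is exactly the kind of anomaly that, in the documented cases, prompted the closer look by a whistleblower or journalist.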
Reference table
| Manipulation Type | Mechanism | Detection Method | Notable Documented Case |
|---|---|---|---|
| SAT/ACT score inflation | Submitting medians above actual enrolled class scores | IPEDS cross-reference, internal audit | Claremont McKenna College (2012) |
| Graduation rate misreporting | Omitting non-traditional or transfer students from cohorts | IPEDS cohort tracking | Iona College, now Iona University (2011) |
| Student-faculty ratio padding | Reclassifying part-time or administrative staff as instructional | Faculty salary/contract audit | Multiple law schools (2022–2023) |
| Expenditure reclassification | Routing non-instructional spending through academic budgets | Financial statement reconciliation | Columbia University (2022, disclosed by Thaddeus analysis) |
| Acceptance rate distortion | Encouraging applications from unqualified pools to inflate application volume | Yield rate cross-analysis | Widely documented, no single flagship case |
| Peer assessment cultivation | Organized outreach campaigns targeting survey respondents | Survey response pattern analysis | Reported structurally; no single public case confirmed |
The pattern across all six types is consistent: the manipulation exploits the gap between what an institution actually is and what it reports itself to be — a gap that persists as long as self-reporting without mandatory third-party verification remains the standard.
References
- U.S. News & World Report — Best Colleges Ranking Methodology
- National Center for Education Statistics — IPEDS (Integrated Postsecondary Education Data System)
- American Council on Education (ACE) — Higher Education Policy
- The Education Trust — Higher Education Research and Policy
- National Center for Fair and Open Testing (FairTest)
- Research in Higher Education (Springer)