Most rankings are based on raw data, but the way those data are collected and weighted varies significantly between publishers. Sometimes publishers receive the data directly from schools, as is the case with U.S. News & World Report. Others rely on data drawn from university Web sites, research foundations or academic organizations. For example, Vanguard's college rankings, which focus on faculty quality, rely on data from the National Research Council.
When examining college rankings, it's important to look at what data the publication used and how it used them. Many publications draw on additional data sources or their own specialized surveys.
Data that are commonly used in rankings include:
- SAT and ACT scores of incoming students
- Students' high school GPAs
- Acceptance rate
- Alumni donations
- Student-to-faculty ratio
- Graduation rate
- Financial aid
- Retention rate (the percentage of students who return rather than transfer out or drop out)
- Average class size
- Quality of faculty, which may be measured by research grants, prizes awarded and publication frequency, among other factors
- Results from surveys completed by students or administrators
Ranking methods clearly differ between publications, and some use more unusual criteria to determine college standings. For example, Washington Monthly, a political magazine, says that its rankings -- echoing John F. Kennedy -- "ask what colleges are doing for the country" rather than "what colleges can do for you." This list focuses on how schools contribute to "social mobility," or raising people up from poverty, as well as how they promote "an ethic of service to country" and pursue "scientific and humanistic research" [source: Washington Monthly]. The Washington Monthly rankings also examine how taxpayer money, such as federal research grants, is used, and whether the magazine considers that money well spent.
Of course, the Internet now holds some influence over rankings, both in how they're calculated and how they're publicized. Some companies provide additional college and university information on subscription-only sites. Other organizations collect data from nontraditional sources like a school's number of Google hits and links to the university's Web site from the sites of other universities. This method of ranking is often called the G-Factor.
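The link-counting idea behind a G-Factor-style ranking can be sketched in a few lines. The approach here simply tallies how many other universities' sites link to each school; the site names and link data below are hypothetical, purely for illustration.

```python
# G-Factor-style sketch: rank schools by inbound links from other
# universities' Web sites. All link data here are hypothetical.
from collections import Counter

# (linking site, linked-to site) pairs -- hypothetical
links = [
    ("uni-a.edu", "uni-b.edu"),
    ("uni-a.edu", "uni-c.edu"),
    ("uni-b.edu", "uni-c.edu"),
    ("uni-d.edu", "uni-c.edu"),
    ("uni-c.edu", "uni-b.edu"),
]

# Count inbound links per school, then sort from most to fewest.
inbound = Counter(target for _, target in links)
ranking = sorted(inbound, key=inbound.get, reverse=True)
print(ranking)  # uni-c.edu ranks first with the most inbound links
```

Real G-Factor calculations restrict the count to links originating from other universities' domains, but the core measure is the same inbound-link tally.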
Any discussion of how rankings are compiled inevitably leads back to U.S. News & World Report. For reasons that will be discussed later in the article, its rankings attract a lot of controversy. Let's look at how they rank schools.
U.S. News assigns each school a numerical score and ranks schools accordingly, dividing them into separate categories. They are:
- National universities
- Liberal arts colleges
- Master's universities
- Comprehensive colleges
- Business programs
- Engineering programs
The magazine provides further rankings for some categories based on region or on whether a school awards doctoral degrees.
In calculating each score, U.S. News relies on data supplied by the schools it's ranking. Each piece of data is weighted differently in the overall score. The composition of a score given to a school by U.S. News is as follows:
- 5 percent: alumni donations
- 5 percent: graduation rate (for liberal arts and national universities)
- 10 percent: financial aid
- 15 percent: faculty resources (which is a collection of factors like average class size and student-to-teacher ratio)
- 15 percent: acceptance rate
- 20 percent or 25 percent (depending on the school type): student retention
- 25 percent: peer assessment, in which the top three officials at each school rate the performance of other schools
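The breakdown above is just a weighted sum, which can be sketched directly. The weights come from the article's list (using the 25 percent retention figure); the sub-scores and the example school are hypothetical, assuming each factor has already been normalized to a 0-100 scale.

```python
# Sketch of a weighted composite score in the style described above.
# Weights are from the article's breakdown; sub-scores are hypothetical.

WEIGHTS = {
    "alumni_donations": 0.05,
    "graduation_rate": 0.05,
    "financial_aid": 0.10,
    "faculty_resources": 0.15,
    "acceptance_rate": 0.15,
    "student_retention": 0.25,  # 0.20 for some school types, per the article
    "peer_assessment": 0.25,
}

def composite_score(subscores):
    """Weighted sum of a school's sub-scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[name] * subscores[name] for name in WEIGHTS)

# Hypothetical example school:
example = {
    "alumni_donations": 40.0,
    "graduation_rate": 85.0,
    "financial_aid": 70.0,
    "faculty_resources": 60.0,
    "acceptance_rate": 55.0,
    "student_retention": 90.0,
    "peer_assessment": 75.0,
}
print(round(composite_score(example), 2))  # prints 71.75
```

Because retention and peer assessment together account for half the weight, two schools with identical test scores and class sizes can land far apart in the final ranking.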
That last part, the peer assessment, is the trickiest bit -- and it's one of the big reasons that U.S. News & World Report is one of the chief targets of the campaign against college rankings. In the next section, we'll look at some common criticisms of college rankings.