Reputable Rankers: A Guide to Business School Rankings
When prospective students choose business schools to which they will apply, few criteria seem as critical as the ranking of a master of business administration (MBA) program. Many applicants seek out rankings as soon as they decide to attend business school, but with so many different rankings available, which ones make sense for applicants to consider?
A 2015 survey found that rankings were the top criterion prospective students considered when selecting MBA programs. The Association of International Graduate Admissions Consultants (AIGAC) survey reported that applicants awarded an average of 16 out of 100 points to rankings, substantially more than the next closest factors: career impact (11 points) and career placement statistics (nine points). Rankings were twice as important as a school’s alumni network, culture, or location, and four times as important as a host of other factors, including recommendations from friends and relatives.
The guide below presents an overview of the rankings most business school candidates examine to help select schools, along with recommendations on how to effectively use the lists.
Are MBA Program Rankings Credible?
Rankings as an applicant’s top criterion seems surprising given the recent criticisms of business school rankings as misleading. For example, faculty from 20 business schools collaborated on a 2017 research paper published in the Decision Sciences Journal criticizing rankings. The professors studied the methodologies behind the rankings and disputed the oversimplification of the reasons driving students to business schools.
While the research paper brought to light new issues around school rankings, the controversy itself is not new. Business school applicants need to understand the debate because the issues affect the usefulness of these rankings in selecting the schools to which they will apply. Most criticisms focus on two main charges:
- Current rankings weigh factors like graduates’ income disproportionately in ranking models, and
- Flawed methodologies result in inconsistent rankings and high vulnerability to abuses by schools that “game” the system.
Disproportionate Emphasis on Compensation
One of the leading experts on graduate management education argues that income garners a disproportionate emphasis in ranking models: “Rankings put far too much emphasis on job placement and compensation and far too little on the MBA experience and the business learning that goes on in a program,” the editor-in-chief of Poets & Quants John A. Byrne told MBA Crystal Ball.
Byrne knows what he’s talking about. Decades ago, he launched Bloomberg Businessweek’s business school rankings as the publication’s executive editor. Rankings from the U.S. News and World Report and Forbes are particularly vulnerable to this disproportionate emphasis charge, with the Forbes rankings focusing 100 percent on compensation-related factors.
Inconsistent Methodologies
Furthermore, the methodologies are not consistent. Unfortunately, no consensus exists on how best to measure the performance of business schools. Accordingly, the methods differ radically among media outlets, and in some cases, like the Bloomberg Businessweek rankings, these approaches can differ so markedly from year to year that comparisons over time are not valid.
By contrast, one prominent consistency does exist, though the public knows little about it. As Byrne points out to MBA Crystal Ball, this consistency involves a universal credibility weakness common to all rankings that survey the opinions of students and alumni:
If a publication surveys students or graduates, the schools ask their constituents to remember that their answers on these surveys impact the reputation of the brand they now put on their resume. So students and graduates don’t fill these surveys out with the kind of honesty and candor you would expect.
[…] It makes little difference if there are statisticians looking over the results for suspicious patterns in the data, as Businessweek has done for years, or occasional audits of school-provided data, as The Financial Times does.
If someone wants to game a ranking, it’s pretty easy to do so–and even easier to get away with it because the publications that crank out these lists take no responsibility for gaming and rarely call it out.
Some rankings lack credibility because they cannot compensate for the bias inherent in the opinions of current students and alumni who know that their response will determine their schools’ rankings. All of the rankings covered below except for the U.S. News and Forbes rankings—neither of which poll the opinions of students or alumni—suffer from this credibility gap.
Nevertheless, sometimes schools do get caught cheating. In one recent well-publicized case, Temple University fired its business school’s dean in June 2018 for knowingly submitting false data to the U.S. News and World Report. That false information made Temple the #1 online MBA program on the U.S. News and World Report’s list.
Which Business School Rankings Are the Most Important?
The AIGAC study asked this question when compiling information: “Which business school rankings did you refer to when researching and learning about business school programs?” The results include:
- U.S. News and World Report – 57%
- Businessweek – 57%
- Financial Times – 52%
- Poets & Quants – 41%
- The Economist – 38%
- Forbes – 35%
These are widely considered the six most influential rankings among prospective applicants. An examination of each follows.
U.S. News and World Report
The U.S. News and World Report, like The Economist, attempts to balance detailed quantitative data with subjective survey opinions. What is remarkable about the U.S. News ranking—and what many applicants do not understand—is that 40 percent of the ranking relies on reputation: 25 percent for the survey opinions of the faculty at competing business schools and 15 percent for corporate recruiters. In other words, the publication bases a substantial two-fifths of the ranking on a popularity contest. The full methodology follows.
- Program quality (40 percent), which consists of peer assessment score (25 percent) and recruiter assessment (15 percent)
- Placement success (35 percent), which includes the average MBA salary (14 percent) and employment rates (21 percent), measured at graduation (7 percent) and three months after graduation (14 percent)
- Student selectivity (25 percent), which covers average GMAT scores (16.25 percent), average undergraduate GPA (7.5 percent) and acceptance rate (1.25 percent)
Gareth Howells of the London Business School pointed out to Forbes that another way of interpreting the U.S. News rankings is that half of the rankings depend on career placement success. That is because the U.S. News weighs salary and employment statistics at 35 percent, along with that 15 percent for recruiter feedback.
Forbes
Forbes cares only about the money—and does not hesitate to admit that fact. The publication’s model focuses exclusively on the opportunity cost-adjusted earnings increase from receiving an MBA degree that we discuss in other BSchools.org guides, usually referenced by various terms of art like “salary boost,” “salary uplift,” or “salary bump.”
The methodology focuses exclusively on a five-year “MBA Gain,” defined as the comparison between the net cumulative amount of a professional’s earnings five years after obtaining an MBA and that of the professional’s five-year earnings in their pre-MBA career. That said, Forbes does apply a meticulous accounting approach to this calculation that also factors in non-salary income along with all kinds of costs like geographic cost-of-living adjustments. The model even includes the time value of money calculations that discount future income and costs back to the present day.
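To make the discounting idea concrete, here is a minimal sketch of how a Forbes-style "MBA Gain" calculation could work. All figures, the discount rate, and the function names are illustrative assumptions, not Forbes's actual inputs or methodology.

```python
# Hypothetical sketch of a Forbes-style "MBA Gain" calculation.
# All numbers and the 4 percent discount rate are invented for illustration.

def present_value(cash_flows, rate):
    """Discount a list of yearly cash flows back to year zero."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

def mba_gain(post_mba_earnings, baseline_earnings, total_cost, rate=0.04):
    """Discounted five-year earnings with the MBA, minus the discounted
    earnings of the projected pre-MBA career path, minus total cost."""
    return (present_value(post_mba_earnings, rate)
            - present_value(baseline_earnings, rate)
            - total_cost)

# Illustrative numbers only (USD):
post = [130_000, 140_000, 150_000, 160_000, 175_000]  # five post-MBA years
base = [80_000, 83_000, 86_000, 89_000, 92_000]       # projected pre-MBA path
print(round(mba_gain(post, base, total_cost=150_000)))
```

The key design point is the time value of money: a dollar earned in year five is worth less than a dollar today, so both earnings streams are discounted before being compared.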
Overall, however, the Forbes approach endures widespread criticism as too narrowly focused on only the financial benefits accruing from each business school’s MBA degree.
Bloomberg Businessweek
Not much consistency exists in Businessweek’s ranking systems. The publication appears to have restructured its methodology three times in the last five years, making year-over-year trend comparisons impossible.
The current model relies heavily (80 percent) on qualitative survey opinions from alumni, students, and recruiters, with only 20 percent allocated to school-provided quantitative values like placement rates and starting salaries. The full methodology includes:
- Employer survey (35 percent)
- Alumni survey (30 percent), which includes the increase in median compensation, job satisfaction, and MBA feedback (each 10 percent)
- Student survey (15 percent)
- Job placement rate (10 percent)
- Starting salary (10 percent)
Additionally, Howells emphasized to Forbes that Businessweek’s rankings tend to rely on the 35 percent-weighted reputation scores supplied by employers, instead of the reputation scores provided by deans and faculty. Businessweek also affords a combined weight of 45 percent to alumni and student opinions, which the U.S. News does not consider at all.
As with the Financial Times and Economist rankings, small sample sizes and flawed sample makeups recently produced some curious, well-publicized rankings anomalies. In one such case in 2016, Businessweek ranked Rice University ahead of the University of California at Berkeley, Northwestern University, Columbia University, and Yale University. Even though Byrne started Businessweek’s ranking years ago, he called the current ranking situation at Businessweek “a chaotic mess.”
Financial Times
More than half of the Financial Times ranking derives from salary and career measures, with salary alone accounting for 40 percent. An admirable characteristic of the ranking is its focus on the proportion of female faculty, students, and advisory board members. However, the remaining indicators target a curious hodgepodge that seems to bear little direct relevance to the evaluation of an MBA program. The full methodology encompasses:
- Salary (40 percent), which includes weighted salary and salary increase (each 20 percent)
- Job placement (10 percent), which covers career progress (3 percent), aims achieved (3 percent), placement success (2 percent), and employed at three months (2 percent)
- Value for money (3 percent)
- Alumni recommend (2 percent)
- Gender gap (5 percent), which includes female faculty (2 percent), female students (2 percent), and the number of women on advisory boards (1 percent)
- International representation (20 percent), which encompasses international faculty and students (each 4 percent), as well as international board members (2 percent), international mobility (6 percent), international course experience (2 percent), and extra languages required for graduation (2 percent)
- Higher degrees (10 percent), which is broken down between the number of faculty with doctorate degrees and doctoral students or graduates hired (each 5 percent)
- Financial Times research rank (10 percent), calculated according to the number of articles published in journals by faculty.
While this ranking includes far more indicators than the previous rankings, the FT model omits student quality markers, such as undergraduate grade point averages and GMAT scores, yet includes faculty research, doctorates awarded, and a 20 percent weighting for international representation. Byrne does not mince words with his opinion to MBA Crystal Ball:
The FT ranking is biased against U.S. schools so that the British newspaper can get more advertising money from European schools. It includes many metrics that have nothing to do with quality and everything to do with political correctness or simply having a way to make the schools and programs of lesser quality appear more competitive with the best U.S. programs.
The Economist
Many prospective students assume that because of the Economist’s outstanding journalism, they can expect similar high standards from the publication’s MBA program rankings. Unfortunately, many of the issues that plague the FT rankings apply to this other British publication as well. The full methodology follows:
- Career opportunities (35 percent), which is evenly divided among the diversity of recruiters’ industry sectors, placement success after graduation, and student assessments of career services (each 11.66 percent)
- Personal development and educational experience (35 percent), also evenly broken down into four factors (each 8.75 percent):
- Faculty quality, which includes faculty-to-student ratio, the percentage of students with doctorates, and students’ rating
- Student quality, which covers GMAT score, years of work experience, and prior average salary
- Student diversity, which comprises national diversity, gender diversity, and student rating of culture and classmates
- Educational experience, which encompasses students’ ratings of core and elective courses, overseas study, language courses, and facilities and services
- Salary increase (20 percent), which includes post-MBA salary and salary boost
- Networking potential (10 percent), which is divided between alumni-to-student ratio, the number of overseas alumni chapters, and students’ rating of the alumni network
Almost two-thirds of the Economist’s ranking relates to salary, career placement and support, and networking, though the remaining 35 percent accounts for a more relevant assortment of factors than FT’s. The Economist, however, has been criticized for rankings anomalies and wild fluctuations. One report wonders how in 2016 the University of Queensland could have ranked above both the University of Pennsylvania and Columbia University, or how one business school might have plummeted 33 positions within a single year. As with his opinion about the Financial Times, Byrne called the Economist ranking a joke and “an embarrassment to a great media brand that publishes one of the best magazines in the world.”
Poets & Quants
Because the five leading rankings seem so flawed in so many ways, a meta-analysis that combines them into a composite ranking could be useful. Poets & Quants introduced just that six years ago. The publication assesses the value of each of the above rankings and weights them accordingly:
- U.S. News and World Report (35 percent)
- Forbes (25 percent)
- Bloomberg Businessweek (15 percent)
- Financial Times (15 percent)
- The Economist (10 percent)
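The weighting above is just a weighted average, which can be sketched in a few lines. This is a simplified illustration of the idea, not Poets & Quants' actual method (which works from each publication's ranks rather than raw scores); the school's per-ranking scores are invented.

```python
# Simplified sketch of a composite weighted score.
# Weights mirror the Poets & Quants list above; the per-ranking
# scores for the fictional school are invented for illustration.

WEIGHTS = {
    "US News": 0.35,
    "Forbes": 0.25,
    "Businessweek": 0.15,
    "Financial Times": 0.15,
    "Economist": 0.10,
}

def composite_score(scores, weights=WEIGHTS):
    """Weighted average of a school's per-ranking scores (higher is better)."""
    return sum(weights[name] * score for name, score in scores.items())

# Fictional scores on a 0-100 scale:
school = {"US News": 92, "Forbes": 88, "Businessweek": 75,
          "Financial Times": 80, "Economist": 70}
print(composite_score(school))
```

Because the weights sum to 1.0, the composite stays on the same 0-100 scale as the inputs, and an outlier from any single publication moves the blended score by at most its weight.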
The primary advantage of a composite is efficiency and convenience. A composite list displays a particular school’s standing across all of the most influential rankings at once, so prospective students can easily compare schools.
The second advantage is that it reduces the effects of anomalies related to statistical sampling and inference. Anomalies like those cited in three of the rankings above often have little to do with educational quality, but readers with little understanding of statistical methods interpret those “outliers” to be more important than they probably are. The composite list can also mitigate anomalies due to bias, dishonesty, and even cheating.
Third, a consensus boosts confidence. Students need to evaluate a school’s performance in context, across multiple analyses. Schools ranked among the top ten by four of the five rankings—despite differing methodologies—inspire confidence that they truly deserve that status.
How to Use the Rankings
For most school-shopping prospective MBA applicants, the Poets & Quants list is a reasonable first step. This composite ranking can meet prospective candidates’ needs and overcome many of the deficiencies highlighted by this article.
Despite the drawbacks, those who wish to focus on the individual rankings might consider additional points. Specifically, because of the differences in the rankings’ emphasis, those who seek traditional highly paid MBA jobs in management consulting and investment banking might focus on the programs that the U.S. News & World Report or Forbes ranks well.
Those interested in the overall satisfaction of students and alumni might pay particular attention to Businessweek’s rankings, and those interested in international business would be better off exploring the Financial Times rankings.
As a final note, candidates should also consider that many schools brag on their websites about their great rankings. However, because so many rankings have proliferated, almost all schools will rank well on some lists. More useful than absolute numerical ranks are the diverging perspectives of the individual evaluators, weighed against the variables a prospective student values most.