College Ratings: We Want Less

Volume III, #23

In college, I had this sign on the door to my room, “borrowed” from the serving line in the dining hall: “If you want less, tell me.”

My roommates understood I wasn’t talking about portion size, but rather a shared tendency to overdo things. One of my roommates developed what he called “keg theory.” Keg theory went like this: If we’re going to have a few beers, we might as well get a case. And if we’re going to get a case, we might as well get a pony keg. And if we’re going to go through the trouble of getting a pony keg, we might as well get a keg. Needless to say, this slippery slope logic tended to have a negative impact on our health.

So did the pranks we would pull on each other. One roommate would crawl out on a steep roof just to lob pats of butter onto my skylight. The greasy smears never entirely disappeared. Another took retribution for my putting grains of rice in his bed by setting my “If you want less, tell me” sign on fire. (See charring, bottom left.) Looking back, it’s amazing we survived.

***

These days President Obama might agree with the proposition that overdoing things can be hazardous to health – or at least to healthcare reform. The government’s flawed implementation of the ambitious federal online exchange for the Affordable Care Act has been a significant setback for the important cause of expanding coverage. While President Obama talks at length about a “smarter, more effective government,” his administration has failed to execute on the public face of its most important and ambitious domestic priority.

So I can’t help thinking back to August 22 when, speaking at Henninger High School in Syracuse, NY, President Obama announced the federal government would enter the college ratings business:

“We’re going to use these ratings, we hope, by working with Congress to change how we allocate federal aid for colleges. We’ve got to stop subsidizing schools that are not getting good results, start rewarding schools that deliver for the students and deliver for America’s future. That’s our goal.”

This college ratings effort has a few things in common with the Obamacare Web site. First, it’s not uncontroversial. Second, it will require pulling data from other federal agencies and sources, such as the IRS and the Social Security Administration, which creates complexity. Third, it launches the federal government into a brand new sphere.

But unlike the health insurance Web site, which hadn’t existed previously, there is no shortage of private companies and organizations in the college ratings business. By my count, there are 15 in the U.S. and another 16 globally. They are all expert at extracting available data, grouping institutions into like categories, determining how much weight to give each variable, and creating a consumer product.

Unfortunately, ED doesn’t have much of a track record in these domains. In particular, determining which inputs and outputs are most relevant for different categories of institutions – research universities and community colleges will likely have different metrics – is an ambitious project unto itself. President Obama might have a better chance of convincing Congress to tie federal financial aid to ratings of one kind or another now, before ED releases ratings that are certain to be controversial. (Imagine if the Administration had developed the Web site first and then attempted to persuade Congress to pass Obamacare. We’d have NObamacare.)

So if he continues on the current course, Secretary Duncan may find himself in front of a Congressional committee next year wishing he’d hung the “If you want less, tell me” sign on his office door.

***

Nevertheless, there is an important role ED and the federal government should be playing in all this. There are two distinct elements of the ratings effort:
1. Ascertaining which input and output data are relevant, accurate and attainable for measuring the performance of different categories of higher education institutions
2. Establishing ratings for these categories based on this data

ED is conflating these two activities, urging critics not to let the perfect be the enemy of the good as far as data is concerned. “The data is always imperfect,” Secretary Duncan was recently quoted as saying. “We will use the best data we have.” In the spirit of the Lean Startup, Secretary Duncan promises that ED will update the ratings as better data become available.

Secretary Duncan has had to respond to questions about data because everyone in higher education knows we simply don’t have good data. For example, current graduation data count only first-time, full-time students. In response, ED has proposed a “datapalooza” in the early spring to look at better ways to package and provide access to existing federal data. But this doesn’t address the fundamental issue that part-time and transfer students aren’t included in graduation rates. In a separate announcement last year, ED indicated it intends to make this change. But there’s no timetable or process for doing so.

To be sure, ED is hamstrung by the fact that the 2008 reauthorization of the Higher Education Act forbade the creation of a federal unit record database. In response, the Administration has offered states funding to construct their own longitudinal databases. Tennessee, Illinois, Mississippi, Texas and Arkansas have made headway in this regard, although they’ll miss the many students who traverse state lines to complete college.

But rather than overzealously attempting to launch ratings next spring, ED should focus on getting better data for everyone, including the dozens of incumbent ratings systems. The Voluntary Institution Metrics Project, a Gates Foundation-backed initiative, has identified three key roadblocks that need to be overcome in the absence of a federal unit record database: first, the burden of tracking down students to see if they’ve completed at other institutions; second, tracking graduates’ income and employment status, which requires pulling information from unemployment insurance databases – something that’s only currently possible in a few states; and third, for student learning, there simply isn’t enough standardized testing done on college students to correlate assessment results to student performance.

This may seem like a tall order, but ED needn’t take sole responsibility. Amazingly, colleges and universities themselves have been left off ED’s ratings workplan. ED should apply its thinking and funding to encourage higher education institutions to train some of their research firepower in education, economics, statistics and applied mathematics on their own institutions, and come up with innovative solutions to these challenges.

If ED can do this rather than rolling out its own ratings according to a political calendar, and then allow existing ratings providers to access the new data, we’ll end up with exactly the useful, consumer-friendly and popular ratings that President Obama is after. At that point, the idea of linking federal financial aid to ratings will be wholly uncontroversial. Which would be a nice change for this Administration.

University Ventures (UV) is the premier investment firm focused exclusively on the global higher education sector. UV pursues a differentiated strategy of ‘innovation from within’. By partnering with top-tier universities and colleges, and then strategically directing private capital to develop programs of exceptional quality that address major economic and social needs, UV expects to set new standards for student outcomes and advance the development of the next generation of colleges and universities on a global scale.